Effects of Trade Space Choices and the Simulation Refinement Process


Methodology

Based on the literature review conducted on systems engineering, risk-based design, and MBSE, previous research can be leveraged to define an MBSE process with embedded trade space analysis that supports appropriate concept development, design, implementation, fielding, and support throughout the lifecycle of a system. This chapter identifies the basic MBSE process, discusses the specific challenges associated with the design of complex systems in a rapidly changing world that need to be addressed by an MBSE approach, and details the embedded trade space analysis approach, leveraging the research on the MBSE process.

GENERAL OVERVIEW

A Counter-UxS (C-UxS) system integrates capabilities to support, detect, classify, track, and defeat COTS unmanned system technologies. Since no single capability or standalone system will provide the required capability, the C-UxS system can be considered a system of systems with multiple complementary capabilities or sub-systems integrated together. The C-UxS system will be defined using the MBSE processes and the Innoslate tools discussed in earlier chapters to fulfil the C-UxS mission as defined by the DRM with the appropriate level of mission effectiveness. Using the MBSE process, this research will provide views of the system, as it is modeled, during the AoA phase of the project. This research will show how embedding discrete event simulation into the Innoslate model using the scripting tools increases the overall understanding of the available trade space. Analysis of the results will be provided in a summary table showing the cause and effect of different trades, as indicated by changes in the theoretical mission effectiveness reference metrics.

DETAILED CHALLENGES

There are numerous challenges associated with defining a C-UxS system reference architecture that can support the full lifecycle and multiple instantiations. This complex problem space makes it important to follow a defined and repeatable process while documenting capabilities, functionality, assumptions, and trade space decisions. With the high rate of change of COTS UxS threats and C-UxS capabilities, the process will have to be responsive, transparent, and flexible. If shortcuts to the process are made, then the C-UxS system will not support the capabilities required by the users when they are needed, and the design of the C-UxS architecture will not fully meet the supportability and extendibility requirements. More importantly, if the initial trades made early in the process are not well understood and captured inside the development model, then the system will not be able to evolve past the AoA baseline to support future threats, even if it could support them when initially fielded using the initial AoA trades.

The first challenge is that in the early phases of concept development the trade space can have many dimensions, such as the types and number of capabilities, so it is not obvious which combination provides the best mission effectiveness value. This challenge manifests itself when modeling a system and optimizing the composite architecture for maximum mission effectiveness: the exact value of each independent capability can be a range rather than a point value, and the number of possible combinations of capabilities, and of ways to integrate them, can be very high. This can easily become an intractable problem from an optimization perspective, so the system designers will have to make educated guesses and try them. In addition, the mission effectiveness of a system does not have to be optimal; it merely has to be equal to or above the required mission effectiveness value set by the end user. In some cases, a sub-optimal mission effectiveness value might be the best choice when cost and reliability are considered. For system designers, it is very hard to resist the temptation to go with the very best technology, so a clear understanding needs to be developed and maintained when making selections, including how those selections impact total system performance.
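
To illustrate this satisficing view of the trade space, the short sketch below enumerates hypothetical sensor-mix combinations and keeps every combination whose estimated effectiveness meets the user-set threshold within a cost constraint, rather than searching for a single optimum. The option lists, effectiveness surrogate, and cost figures are purely illustrative placeholders, not values from this research.

from itertools import product

# Hypothetical capability options; the names, counts, and functions below are
# illustrative placeholders, not values from this research.
radar_counts = [1, 2, 3]
eo_counts = [0, 2, 4]
rf_counts = [0, 2, 4]

REQUIRED_EFFECTIVENESS = 0.80   # set by the end user, not found by an optimizer
BUDGET = 500_000                # illustrative cost constraint

def estimated_effectiveness(radars, eo, rf):
    # Placeholder surrogate model; a real study would use simulation results.
    return 1 - (0.5 ** radars) * (0.7 ** eo) * (0.8 ** rf)

def estimated_cost(radars, eo, rf):
    return 120_000 * radars + 40_000 * eo + 30_000 * rf

# Keep every combination that satisfies the requirement and the budget,
# rather than searching for a single "optimal" point in the trade space.
feasible = [
    combo for combo in product(radar_counts, eo_counts, rf_counts)
    if estimated_effectiveness(*combo) >= REQUIRED_EFFECTIVENESS
    and estimated_cost(*combo) <= BUDGET
]
for combo in sorted(feasible, key=lambda c: estimated_cost(*c)):
    print(combo, f"cost=${estimated_cost(*combo):,}")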

The second challenge is combinational and dynamic complexity, which can make seeing the direct impact of a change almost impossible. Combinational and dynamic complexity are concepts defined by the operations research community, but they also apply to the engineering community, since the line between system definition and system analysis is being blurred. For this research, combinational complexity is defined as the point where multiple systems with unique capabilities are being integrated into a single system, but the unique aspects of each capability must still be accounted for in the design. Dynamic complexity is defined for this research as the condition where common changes to input can cause unique changes to output for each independent system that is part of the system of systems. In other words, changes to input can affect each system differently. This is a challenge because what may be an obvious optimization trade for one part of the system, or during one part of the process, might not be an optimal trade overall. For example, decisions made during the AoA phase might be misunderstood during the design phase, and a subsequent trade might inadvertently make the system less optimized instead of more optimized. Though not directly studied in MBSE, this has been assessed in other areas. For example, Vaneman (2017b) states, "the organization's ability to master these transient periods is fundamental to achieving steady state operations more efficiently, thus reducing losses due to sub-optimal performance." In terms of the DOD acquisition process, the DOD has been very successful at converging on an optimized solution during the AoA phase and during the design phase, but very often those two optimized solutions are different and the understanding of why they differ is lost, since each end result reflects trades based on a down-selection from many different possible combinations and the dynamic effect of those combinations.

The third challenge is that each site can be slightly different, so a one-size-fits-all solution might work for the site it was designed for but fail at another. The variation in external conditions at different sites can be obvious, like differences in terrain, or harder to visualize, like differences in regulations, radio frequency spectrum interference, and acoustic noise.

The fourth challenge is that COTS technology has a very fast refresh rate. In the C-UxS case, a new threat system might be introduced every six to 12 months, a period that could easily fall inside the typical development timeline, so a design solution that was valid during the AoA might no longer be valid when the system is fielded. Awareness of when a threat improvement significantly moves the needle is very important to ensuring the C-UxS system remains relevant.

Together, these challenges call for a robust modeling process, automated trade space analysis, and the flexibility both to run initial "what ifs" and to validate that capability and architecture design decisions made in the past are still valid in the present.

PROCESS TOOLS AND DEFINITIONS

Modeling Languages

For this research, the two primary modeling languages used are the Systems Modeling Language (SysML) and the Lifecycle Modeling Language (LML).

Systems Modeling Language

The Systems Modeling Language is best defined and described on the OMG Systems Modeling Language (OMG SysML) web page (Object Management Group 2017), "What is SysML," excerpted below:

SysML is a general-purpose graphical modeling language for specifying, analyzing, designing, and verifying complex systems that may include hardware, software, information, personnel, procedures, and facilities. In particular, the language provides graphical representations with a semantic foundation for modeling system requirements, behavior, structure, and parametrics, which is used to integrate with other engineering analysis models.

The behavior diagrams include the use case diagram, activity diagram, sequence diagram, and state machine diagram. A use-case diagram provides a high-level description of functionality that is achieved through interaction among systems or system parts. The activity diagram represents the flow of data and control between activities. A sequence diagram represents the interaction between collaborating parts of a system. The state machine diagram describes the state transitions and actions that a system or its parts perform in response to events.

SysML includes a graphical construct to represent text-based requirements and relate them to other model elements. The requirements diagram captures requirements hierarchies and requirements derivation, and the satisfy and verify relationships allow a modeler to relate a requirement to a model element that satisfies or verifies the requirement. The requirement diagram provides a bridge between the typical requirements management tools and the system models.

The parametric diagram represents constraints on system property values such as performance, reliability, and mass properties, and serves as a means to integrate the specification and design models with engineering analysis models.

SysML also includes an allocation relationship to represent various types of allocation, including allocation of functions to components, logical to physical components, and software to hardware.

The modeling method for this research will use many of the SysML extensions to UML, as shown in Figure 5, specifically the performance properties as defined in the parametric diagram. The parametric diagram, as shown in Figure 6, might lead one to believe that there is a single, easily defined and searchable physical representative block for each capability, but for this effort, and for most modeling efforts, the parametric data is defined in multiple ways, and at times in different ways, for each capability as part of the modeling and scripting effort.

Lifecycle Modeling Language

The Lifecycle Modeling Language is best defined and described by the Lifecycle Modeling Organization (2015) Lifecycle Modeling Language Specification 1.1:

The basis for the LML formulation is the classic entity, relationship, and attribute meta-meta model. This formulation modifies the classical approach slightly by including attributes on relationships, to provide the adverb, as well as the noun (entity), relationship (verb), and attribute (adjective) language elements. Since LML was designed to translate to object languages, such as UML/SysML, these language elements correspond to classes (entity), relations (relationship), and properties (attribute).

Extending the above reference from the Lifecycle Modeling Organization, Vaneman (2016, 5) states, "Once mapped, the LML visualization models can be associated to the corresponding LML entity, and by extension provides an ontology for SysML. Providing this ontology will prove important to practitioners as they will be able to better represent the complexities of a system." In other words, LML tries to simplify the modeling effort: where DODAF might have multiple names for very similar nodes, leading to confusion among modelers, LML has just one node to choose from.

Mapping of SysML to LML Diagrams

The good news is that the complexities and simplifications of UML, SysML, and LML are now primarily handled by the tools that support the modeling process. This research will primarily use the LML modeling language, but note that the tag "Use Case" is overloaded and has different meanings for different architecture practitioners. For reference, a mapping between the SysML diagrams and the LML entities is shown in Table 4. This mapping provides an understanding of the commonality among the SysML and LML visualization models (Vaneman 2016, 6).

Mapping of SysML Diagrams to LML Diagrams and Entities. Source: Vaneman (2016).

SysML Models | LML Models | LML Entities
Activity | Action Diagram | Action, Input/Output
Sequence | Sequence | Action, Asset
State Machine | State Machine | Characteristic (State), Action (Event)
Use Case | Asset Diagram | Asset, Connection
Block Definition | Class Diagram, Hierarchy Chart | Input/Output (Data Class), Action (Method), Characteristic (Property)
Internal Block | Asset Diagram | Asset, Connection
Package | Asset Diagram | Asset, Connection
Parametric | Hierarchy, Spider, Radar | Characteristic
Requirement | Hierarchy, Spider | Requirement and related entities

Vaneman goes on to state:

The Lifecycle Modeling Language defines a new approach to MBSE that simplifies the ontologies and logical constructs found previous MBSE methods and languages. Coupling SysML and LML provides an environment with an ontology that allows system concepts to be better represented by denoting underlying properties, relationships, and interrelationships. LML provides a means to improve how we model system functionality to ensure functions are embedded in the design at the proper points and captured as part of the functional and physical requirements needed for design and test. (Vaneman 2016, 6)

Therefore, while it is important to understand the basis of the modeling languages and the mapping between the UML extensions that are available for use, one does not have to fully understand all the details provided in the three specifications.

Simulation

The simulation engine is based on a combination of discrete event simulation, for known static decision points selected by the user, and Monte Carlo simulation, where system performance effects can be randomized. A real-time discrete event simulation is a mathematical and logical model of a designed system that has decision paths at precise points in simulation time. The discrete event simulator predicts key system/project metrics based on user input and random events. The Monte Carlo simulation definition that best fits this process is the randomizing of tasks that impact the results but are not specifically modeled. For this effort, the simulation capability will allow variation of the design choices that affect the key mission effectiveness metric, to indicate the best point in the solution space where mission effectiveness meets the required value.
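
The sketch below is a minimal illustration of this combination: decision points occur at discrete ranges in the engagement timeline, while the sensor performance and the defeat outcome are randomized Monte Carlo draws, and the whole engagement is repeated many times to estimate mission effectiveness. All ranges and probabilities are illustrative assumptions, not the values used in the Innoslate model.

import random

def engage_one_threat(failure_range_km=1.0, p_defeat_given_classified=0.8):
    # One pass through a simple engagement timeline: decision points occur at
    # discrete ranges, while the ranges and the defeat outcome are Monte Carlo
    # draws. All values are illustrative assumptions.
    detect_range_km = random.uniform(2.0, 8.0)                      # first sensor detection
    classify_range_km = detect_range_km * random.uniform(0.5, 0.9)  # later classification
    if classify_range_km <= failure_range_km:
        return False            # classified too late: threat reaches the failure zone
    # Discrete event: an engagement is triggered at the classification range;
    # success is a biased coin flip representing the defeat capability.
    return random.random() < p_defeat_given_classified

trials = 10_000
defeated = sum(engage_one_threat() for _ in range(trials))
print(f"Estimated mission effectiveness: {defeated / trials:.1%}")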

Tools:

Innoslate

This effort was modeled in the Spec Innovations Innoslate tool. Innoslate includes modern end-to-end design, modeling, and traceability capabilities in the industry-standard LML, SysML, and IDEF0. These models, such as the activity diagram, can easily be simulated with integrated discrete event and Monte Carlo simulators, coupled with additional handwritten scripts to better specify flow. The Innoslate tool allows for local execution of the model and the ability to export the results to a comma-separated value file for additional analysis.

The clean interface, simple relationships, and modern diagram visualizations make managing model entities easier than ever. Currently, there are over nine different diagrams to visualize behavioral models, including: executable Action, Sequence, N-squared, and IDEF0. Physical models have eight different diagrams, including: Asset, Class, Use Case, and Organization Chart. All of the diagrams support drag and drop, allowing for quick model design and construction. The diagrams conform to the LML, SysML, or IDEF0 standard. Innoslate includes both a discrete event simulator and a fully scalable discrete event and Monte Carlo simulator to execute system models and verify model correctness. These simulators can calculate a system's time, cost, and resource levels and produce easy-to-read graphical outputs (including Gantt charts, cost curves, and resource usage). The model maturity checker evaluates the model according to best-practice heuristics developed through research at the Naval Postgraduate School and Stevens Institute of Technology (Innoslate 2016).

As discussed above, the Innoslate tool provides an easy-to-use graphical interface that allows a well-defined model to be built based on the well-defined UML, SysML, and LML rules.

Microsoft Excel

Microsoft Excel is a common product in the Microsoft Office family that will be used for analysis of the comma-separated value files generated by the Innoslate simulation engine. Standard techniques of pivot tables, formulas, and graphing will be used to help analyze the results of the simulation in terms of quantifiable effects on mission effectiveness.
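
For readers who prefer a scripted equivalent of the spreadsheet workflow, the sketch below shows a pivot-table style summary of a hypothetical results export; the column names, sensor-mix labels, and file name are assumptions, and the actual analysis in this research is performed in Excel.

import pandas as pd

# Hypothetical rows mirroring a simulation export; real results would be read
# with pd.read_csv("innoslate_export.csv") (the file name is an assumption).
results = pd.DataFrame([
    {"run": 1, "sensor_mix": "radar-heavy", "defeated": 682, "threats": 1000, "cost": 505_737},
    {"run": 4, "sensor_mix": "balanced",    "defeated": 810, "threats": 1000, "cost": 489_623},
    {"run": 5, "sensor_mix": "balanced",    "defeated": 870, "threats": 1000, "cost": 485_500},
])
results["effectiveness"] = results["defeated"] / results["threats"]

# Pivot-table style summary: mean effectiveness and cost by sensor mix.
summary = results.pivot_table(index="sensor_mix", values=["effectiveness", "cost"], aggfunc="mean")
print(summary)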

PROCESS

Process flow

The following sections describe the methodology to support system definition and assessment of trade space decisions. This will address the formal definition of a system and the iterative method of assessing and refining the system definition. The generalized recommended process is shown in Figure 7. The methodology for this research will be broken up into four steps: system definition, sub-system definition, mission analysis, and assessment for the generic tailored MBSE process. As discussed previously, the focus area will be on mission analysis and assessment, but since the system must be defined before it can be analyzed and assessed, this research will also develop a simple modeling plan, a use case reference mission, and a high-level set of requirements. The research and the use cases will be based on a tailored version of the DODAF 2.0 products and views as defined in the All View-1 (AV-1) available in appendix X. Since system definition and decomposition is not the focus area, the discussion will be limited, but it is included in the model for completeness.

Generic Tailored MBSE Process

System Definition

The system definition phase will start with an all view (AV-1) for the overall model development plan, define system level requirements, identify key stakeholders, develop the initial set of design reference missions, create the initial capability view defining the system's capabilities, create the initial operational view defining the system functionality, and define the metrics and target values for determining mission effectiveness later in the process. A clear, concise system level definition is critical for the overall success of the effort, since the analysis of the effectiveness of the trade-space decisions will be based on the simulation results compared to the system level required performance metrics. An example of the format for a requirements and performance metrics table is shown in Table 1; here the system stakeholders can define the constraints for cost and mission effectiveness.

Example of Requirements and Performance Metrics

Capability | Weight (1-3) | Description of Metric | Metric Target Value
System Requirement 1 | 1 | Plain English description of metric | Value of required performance
System Req N | 3 | Plain English description of metric | Value of required performance
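
As a minimal sketch of how such a table could be encoded for automated pass/fail checking against simulation output, consider the following; the requirement identifiers, metric names, weights, and target values are placeholders rather than actual program requirements.

# Illustrative encoding of the requirements table above; identifiers, weights,
# metric names, and targets are placeholders for stakeholder-provided values.
requirements = [
    {"id": "SR-1", "weight": 1, "metric": "coverage_ratio", "target": 0.90, "direction": ">="},
    {"id": "SR-N", "weight": 3, "metric": "defeat_ratio",   "target": 0.80, "direction": ">="},
]

def is_met(req, measured):
    # Return True if a measured value meets the requirement's target.
    return measured >= req["target"] if req["direction"] == ">=" else measured <= req["target"]

measured = {"coverage_ratio": 0.93, "defeat_ratio": 0.78}      # example simulation output
for req in requirements:
    status = "met" if is_met(req, measured[req["metric"]]) else "NOT met"
    print(f'{req["id"]} (weight {req["weight"]}): {status}')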

Sub-system Definition

The sub-system definition phase will state the problem at the sub-system level by decomposing the capability view and the operational view to the leaf level and defining the appropriate action views for use case definition. At this point, the basic flow of the initial system will be created and defined in the action views and their decomposition. The action views will perform a capability or a sub-function of a capability. All capabilities shall be fulfilled by an action that provides the required functionality. Recording which capabilities (CVs) and functionality (OVs) are fulfilled by an action view provides the system traceability that can be used in the completeness and coverage metrics.

Mission Analysis

The mission analysis phase will develop rules for the action flow defined by the SV-4 action views, by creating input blocks to define inputs and input variability and by adding script engines to the action diagram logic blocks for the discrete event simulation. The discrete event simulation will be run at different phases of the process, with multiple operator-selected combinations of capabilities, and with results stored inside the tool and exported to a comma-separated value (CSV) file. Innoslate's discrete event simulation runs off the "advanced functional flow diagram." Innoslate allows for operator input and scripted instructions to describe low-level flow. Note that the case study in chapter 4 describes a three-part evaluation, but for a real program this would be a continuous evaluation whenever new trades are made as part of the development process. In the case study, the first part will be completed during concept demonstration, the second part as part of system design, and the third part as part of instantiation/installation. Each phase will refine the environmental constraints and the requirements. As an example, the metrics used include cost, coverage, and mission effectiveness; naturally, both the metric values and the threat environment change over time.

Assessment

Using both the MBSE tool and Excel spreadsheet calculations, the results will be calculated and compared to the mission effectiveness metric target values. This step includes importing the results into the spreadsheet, graphing the analysis summary to allow for comparisons, and documenting the results. Inside the spreadsheet, the analysis will utilize pivot tables to organize the raw data and summarize results. Based on the results, as compared to the mission effectiveness metrics, the process is repeated by re-entering the sub-system definition phase to update and/or refine the model to improve results. The process is continued throughout the lifecycle to account for system updates, changes in the environment, and the needs of the stakeholders.

Recursive Refinement

The power of the process is that one does not have to know everything upfront before one can do a basic assessment of what is known. The process is an iterative cycle in which what is currently known about the desired system for that iteration of the definition or design phase is entered into the model or simulation scripts. The system is simulated in a representative environment, performance is assessed, and then, based on the knowledge learned from the assessment, the system is recursively refined. Though the goal is to implement positive changes to the system based on the assessment, changes to the capabilities, functionalities, and environment can also significantly impact performance. This is why a single source of truth (SSoT) is so valuable: changes made by an engineer that are not meant to explicitly impact performance, but ultimately do, can be assessed by the process, caught, and fixed. Many times these implicit changes, thought to be benign to system performance, can have a significant impact.

Exit of Process

The actual exit of the MBSE lifecycle process does not really occur until the system is retired, but there are times when one might exit a phase of the lifecycle and hand the system over to another entity for care and feeding. If the process is followed correctly, all the knowledge is already captured inside the SSoT and nothing more needs to happen; if not, this knowledge needs to be added. Since the exit of one phase is most likely the entrance to another phase, the handover from the previous knowledge manager to the current knowledge manager is key. For this effort, the exit of the process will be the documentation of results for the use case to show how the above process fully supports the needs of the development process. The use case example in the next chapter will be broken up into three refinement iterations to show how the MBSE model is refined based on the feedback from the simulation and assessment efforts. A real effort would have many more refinement iterations over the lifecycle, but it is felt that three iterations will provide enough granularity to demonstrate the positive effect on the overall system development process.

The previous section identified a tailored MBSE process for including a continuous AoA process within the SSoT model. This process will be applied to a case study for a Navy C-UxS architecture to confirm that the process is applicable and effective. The use case will make one pass through the process shown in Figure 7 to demonstrate all the steps. The C-UxS, which requires flexibility and adaptability to support multiple unique sites, provides a challenging case study for the proposed process. The proposed MBSE process with embedded simulation will provide an iterative lifecycle framework that can be used from concept development through the critical design activities phase. The thesis of this research is that the MBSE process of defining the system provides valuable context and the simulation process of running "what ifs" provides knowledge, but if they are done together as a single process, they combine context and knowledge for a much greater understanding of the system.

A.                 Counter Unmanned Systems Process Use Case

The process identified in the previous chapter is applied to the C-UxS acquisition environment. The sections below go through each step of the process and employ the use case for a Navy architecture. For reference, step one provides broad context back to the DOD domain. Step two decomposes the broad context defined in step one down to details relevant to the C-UxS problem statement and suitable for defining the relevant action diagram. Step three defines the actions that are relevant to the system metrics in the C-UxS action diagram. Step four links the system definition phase (context) with the system analysis phase by adding simulation flow and simulation scripts to the model. Step five completes the process (knowledge) by assessing the simulation output based on the context of the model. For additional general context, appendix A contains the full model views; on request the model is available online via the Innoslate web application, and the full DRM is available for additional mission context.

1.                   System Definition

The system definition phase will use the all view template for the overall model development plan, define system level requirements, identify key stakeholders, develop the initial set of design reference missions, create the initial CV-2 defining the system's capabilities, create the initial OV-5N defining the system functionality, and define the metrics and target values for determining mission effectiveness later in the process. A clear, concise system level definition is critical for the overall success of the effort, since the analysis of the effectiveness of the trade-space decisions will be based on the simulation results compared to the system level required performance metrics. Note that NAVAIR uses the fit-for-purpose view referred to as the OV-5N to show the high level operational view and decomposition, since the initial operational view is closer to a system of systems than to a subcomponent of a system, which is the traditional definition of an OV-5.

a.                   Overall model development plan

(1)                Scope

This architecture to counter all types of unmanned systems provides a government-owned and defined (open) architecture that will support current and future Navy acquisition efforts. The architecture depicts and describes the capability requirements, operational activities, and system views. The architecture provides the appropriate functional decomposition, functional interaction, and functional interoperability as appropriate. To enable commonality, the architecture builds on existing capability and functional decompositions and definitions, along with linkages to existing data dictionaries and data definitions.

This architecture is intended for CONUS, OCONUS, and Maritime applications. The system is intended to initially counter Group 1–2 unmanned aerial systems (UAS); however, this architecture may be expanded upon to encompass unmanned ground, surface, and underwater systems as necessary.

(2)                Viewpoints and Models Developed

Table 3 lists the Department of Defense (DOD) Architecture Framework (DODAF) version 2.02 (August 2010) views required to satisfy the C-UAS purpose and intent. Please note that the OV-5N is a fit-for-purpose view developed by the Navy to show functionality at the system or system of systems level.

Table 3. Viewpoints and Models Developed
Applicable Viewpoints | Models | Titles
All | AV-1 | Overall Plan
Capability | CV-2 | System Capability Decomposition
Operational | OV-1 | Operational View
Operational | OV-5N | System Functionality Decomposition
System | SV-4 | System Action Flow

(3)                Assumptions

The architectures and associated data can be leveraged and reused by subsequent capability developers and program offices for the development of solutions for C-UxS.

The required capabilities are reasonably covered by the Joint Capability Areas (JCAs), which provide a common vernacular and context. The JCAs are prunable and extendable to allow for a best fit to the C-UxS architecture. The JCAs are used as the basis for the CV-2 decomposition.

The required functionality is reasonably covered by the Joint Common Systems Functional List (JCSFL), which provides a common vernacular and context. The JCSFL is prunable and extendable to allow for a best fit to the C-UxS architecture. The JCSFL is used as the basis for the OV-5N decomposition.

(4)                Constraints

Develop the Countering Unmanned Systems ICD architecture thoroughly and expeditiously to support the Counter-UAS Rapid Deployment Capability (RDC) and Rapid Prototype Engineering Development (RPED). The Speed to Fleet Initiative effort was limited to process development, process proof, and initial definition of the generic counter-UAS architecture.

(5)                Purpose and Perspective

The countering unmanned systems architecture depicts and describes the capability requirements, operational activities, and system functionality involved in countering unmanned systems across all domains and operational environments. Because these are an initial set of architectures, they provide a basis for identifying follow-on capability development efforts.

b.                   Define System Level Requirements

The C-UxS system will be designed to be flexible in nature, but with a government-owned core so vendor-specific solutions can be modified without requiring the vendor's involvement. The C-UxS system will initially be designed to protect land-based sites against group 1 small COTS UASs as the example C-UxS. The architecture will be flexible enough to allow for growth in support of maritime-based sites (ships) and COTS-based equipment in unmanned surface vessels, underwater vessels, and ground vehicles.

This use case will focus on the trade space analysis for detection: the sensor types, capabilities, numbers, and combinations. This will be demonstrated by the capabilities and functionality included in the C-UxS action diagram (SV-4).

The requirements and performance metrics table is shown in Table 4; here the system stakeholders can define the constraints for cost and mission effectiveness. In a real-world program, this requirements table would be much more detailed, but for this effort it is kept very simple to better show cause and effect in the process.

Table 4. Requirements and Performance Metrics
Required Capability | Weight (1-3) | Description of Metric | Metric Target Value
1.0 The C-UxS shall classify threats over an area covering 10 by 20 kilometers | 2 | Coverage of the sensors by type of covered area divided by the total area | > 90%
2.0 The C-UxS shall have a deployed cost of the sensors of less than $500,000.00 | 1 | Cost of the material and installed cost of the sensors divided by the allowed cost of the sensors | < 90%
3.0 The C-UxS shall defeat 80% of the threats | 3 | Number of defeated threats divided by the total threats | > 80%

c.                    Identify Key Stakeholders

The stakeholders for this use case are defined as the funding authority (OPNAV), the acquisition authority (PEO-U&W), the subject matter experts, the representative end users (user community), and the test and evaluation team. For this use case, the user community has provided an initial starting point for the size of the area that needs to be protected and the required level of protection to allow for continuation of normal operations. The funding authority has provided the deployed cost of the sensors. The subject matter experts will provide the types, realistic capabilities, and costs of the sensors for analysis. The test and evaluation team will verify that the metrics are relevant and collectable.

d.                   Initial Mission Definition

The C-UxS OV-1 (see Figure 8) describes, at a high level, the operational scenario addressed by this architecture. The desired outcome for this architectural effort is to enable the development and acquisition of a suite of capabilities to prevent and mitigate adversaries' use of UxS, capitalizing on both material and non-material approaches. Due to the diversity of this threat and the intentionally domain-agnostic approach to the architecture, capabilities resulting from this architecture should inform solutions in other domains and mission sets. This will result in a suite of modular (open) capabilities that can be rapidly integrated to keep pace with the rapid growth of UxS technologies.

To keep this use case generic and unclassified, a simple circular area with an approximate diameter of 16 kilometers was used as the representative area to be protected. The circle has five zones. The outer three zones are a restricted area where small UASs are not allowed and where the sensor coverage starts. The fourth zone is the act and engage zone, where any UAS inside this zone is considered a threat to be acted on with appropriate defeat methods. The fifth and final zone is the failure zone; anytime a threat enters this zone, it is considered compromised and all work must stop. The mission will be a success if the required percentage of threats are detected, classified, and defeated outside the failure zone, and a failure if the required percentage is not defeated before entering the failure zone. A simple diagram of the mission area is shown in Figure 9.
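
The coverage metric defined in Table 4 (covered area divided by total area) can be estimated for any candidate sensor layout by sampling points inside the protected circle, as in the sketch below; the sensor positions and ranges shown are hypothetical and do not represent the layout used in the case study.

import math
import random

AREA_RADIUS_KM = 8.0   # protected circle of roughly 16 km diameter, per the mission area above
# Hypothetical sensor placements and ranges; a real layout would come from the site survey.
sensors = [(0.0, 0.0, 6.0), (4.0, 4.0, 4.0), (-4.0, -4.0, 4.0)]    # (x_km, y_km, range_km)

def random_point_in_circle(radius):
    # Uniform sample inside the protected area.
    r = radius * math.sqrt(random.random())
    theta = random.uniform(0.0, 2.0 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

samples = 100_000
covered = 0
for _ in range(samples):
    x, y = random_point_in_circle(AREA_RADIUS_KM)
    if any(math.hypot(x - sx, y - sy) <= rng for sx, sy, rng in sensors):
        covered += 1
print(f"Sensor coverage of protected area: {covered / samples:.1%}")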

e.                   Create the Initial CV-2 Defining the Systems Capabilities

The CV-2 defines and decomposes the required capabilities at the appropriate level. As stated in the assumptions, the JCAs provide a common vernacular and context across the DOD and are applicable to the C-UxS. The JCAs are pruned and extended to allow for a best fit to the C-UxS architecture and are used as the basis for the CV-2 decomposition shown in the Figure 10 initial CV-2 snapshot. The entire CV-2 is too large to be shown in its entirety, though it is available in the associated model, but the snapshot demonstrates the context in relation to the area relevant to the research. For this phase of the process, level 0 of the CV-2 is the capability area heading, while level 1 and level 2 provide the additional context in terms of the DOD JCAs. The use of the JCAs and the additional context allows the stakeholders to verify that the initial context is correct. In this snapshot, the subject matter experts have selected Battlespace Awareness, Force Application, Logistics, and Command and Control as required high level capabilities. An additional required high level capability for the use case, but not shown in the snapshot, is the Interact (External) capability. The most relevant capabilities of Battlespace Awareness, Force Application, Command and Control, and Interact with the threat UxS are decomposed in the next step of the process.

Figure 10. Initial CV-2 snapshot

f.                     Create the Initial OV-5N Defining the System Functionality

The OV-5N defines and decomposes the required functionalities at the appropriate level. Like the JCAs, the JCSFL provides a common vernacular and context across the DOD and is applicable to the C-UxS. The JCSFL is pruned and extended to allow for a best fit to the C-UxS architecture and is used as the basis for the OV-5N decomposition shown in Figure 11. The entire OV-5N is too large to be shown in its entirety, though it is available in the associated model, but the snapshot demonstrates the context in relation to this research. For this view, level 0 is the OV-5N heading, while level 1 and level 2 provide the context in terms of the DOD JCSFL.

Figure 11. Initial OV-5N snapshot

In this snapshot, the subject matter experts have selected Sense, Manage Tracking and Tracks, Generate Situational Awareness, and Mission Level Analysis as required high level functionality. Additional required functionality in the use case, but not shown in the snapshot, is the Mission Execution functionality. The most relevant functionalities of Sense, Manage Tracking and Tracks, Generate Situational Awareness, Mission Level Analysis, and Mission Execution are decomposed in the next step of the process.

2.                   Sub-system Definition

The sub-system definition phase will state the problem at the sub-system level of the C-UxS by decomposing the CV-2 and the OV-5N to the leaf level and defining the appropriate SV-4A C-UxS Flow high level action views for use case definition. The C-UxS Flow shows the basic flow of the system and how the AoA simulation capability is integrated. The decomposition, simulation, and analysis of the C-UxS Flow action view are the focus of the use case research and will be expanded on below.

a.                   Decompose CV-2 to Leaf Node Capability

In this phase the CV-2 had to be decomposed to the leaf level so the action view components could be built with the relevant level of detail. This expanded the CV-2 from two levels to five, introducing the additional detail of specific types of sensors, such as the radar. The example leaf node shown in Figure 12 is the Radar leaf capability that is part of the Battlespace Awareness capability group. Additional required leaf node capabilities for this research are Electronic Emissions sensing, Electro-optic sensing, Fuse tracks, Operator interactions, and Defeat.

Figure 12. Decomposed CV-2 snapshot

b.                   Decompose OV-5N to Leaf Node Functionality

Similarly, the OV-5N had to be decomposed to the leaf level so the action view flow could be built with the relevant level of detail. This expanded the OV-5N from two levels to three. The example leaf nodes shown in Figure 13 are Search with Active Sensor, Search with Passive Sensor, Fuse Track Measurements, and Evaluate and Assess Engagement. Additional required leaf node functionality for this research includes Conduct Manual Engagement, Conduct Automatic Engagement, and Fly Threat Small UAS.

Figure 13. Decomposed OV-5N snapshot

3.                   Create and Define the SV-4A Action View

The action view is the first crossover point between the standard MBSE modeling process and the process presented in this paper. The action view allows the static capabilities to be combined in a meaningful way for the C-UxS problem and then dynamically simulated. For this effort, the goal is to balance cost, coverage, and mission effectiveness based on the top level requirements and metrics defined above. To do that, the action view defines the interactions between the sensors, fusion, operator, defeat, and the threats. The top level action flow is shown in Figure 14 and defines the interactions and flow of the functionality across the capabilities.

Figure 14. SV-4 Top Level Action Flow

Next, the flow of each top level action is decomposed to the sub-level flow. For example, Figure 15 shows the Radar decomposition. The decomposition allows the number of radars to be varied along with the capabilities of the radar to detect small UASs and, if they are detected, to classify the detections as threat small UASs.

Figure 15. SV-4 Subset Radar Action Flow Decomposition

After Radar is decomposed, Electronic Emission, Electro-optics, Fuse, and Operator are decomposed in the model, but not shown in the paper. Electronic Emission and Electro-optics decompose in a manner very similar to Radar. The Fuse capability takes the output of the sensors and uses rules to command action or to recommend action to the operator, as shown in Figure 16.

Figure 16. SV-4 Subset Fuse Action Flow Decomposition

The automatic defeat is based on output from Fuse, as shown in Figure 14, when a threat UAS is classified as a threat by all three sensor types. The success of the automatic defeat is based on a simple probability, where there is an 80% chance of defeat given a timely automatic response and clear fused data from heterogeneous sensors. If the sensor data is not unanimously seen as classified by Fuse, then an operator is brought into the loop, as shown at the top level in Figure 14 and decomposed in Figure 17. Figure 17 only shows a partial view due to its size, but based on rules it allows four choices by the operator.

Figure 17. SV-4 Subset Operator Action Flow Decomposition

The first operator choice is operator override, which is based on two sensor types classifying a threat with the third detecting it. The second operator choice is operator initiate, which is based on one sensor classifying a threat and at least one other sensor detecting it. The third is the operator continuing to monitor threats that are detected but not classified, and the fourth is no action.

4.                   Mission Analysis

The mission analysis phase will take place during the AoA phase (concept development), the prototyping phase, and the final design phase to show how metric achievement progresses. This first phase will demonstrate the concept development phase and the analysis for an AoA. The user will be able to assess and optimize different concepts and can refine those concepts as more detailed information becomes available. Based on the metrics in Table 4, the user can vary the sensor types, numbers of sensors, and capability of the sensors. A constraint on the case study is that each sensor type must cover the entire operations area. Cost is a function of (coverage * capability), such that the higher the coverage and capability, the higher the cost per sensor. A cost weighting value makes radars an order of magnitude more expensive than electronic emission sensors and electro-optic sensors. Rules will be developed to govern the action flow defined by the SV-4 action views, and these rules will remain constant throughout the case study for simplicity.
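
A minimal sketch of this cost relationship is shown below; the base cost, type weights, and deployment mix are illustrative assumptions used only to show the (coverage * capability) scaling and the order-of-magnitude radar weighting.

# Illustrative cost model: cost per sensor scales with (coverage * capability),
# and a type weight makes radar roughly an order of magnitude more expensive
# than the passive sensors. All numbers are placeholders.
TYPE_WEIGHT = {"radar": 10.0, "emission": 1.0, "eo": 1.0}
BASE_COST = 50_000

def sensor_cost(sensor_type, coverage, capability):
    # coverage and capability are normalized to 0..1 for this sketch
    return BASE_COST * TYPE_WEIGHT[sensor_type] * coverage * capability

deployment = [("radar", 0.9, 0.8),
              ("emission", 0.9, 0.6), ("emission", 0.9, 0.6),
              ("eo", 0.7, 0.5), ("eo", 0.7, 0.5)]
total = sum(sensor_cost(*s) for s in deployment)
print(f"Deployed sensor cost: ${total:,.0f}")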

The first step is to set up the simulation to allow for user input in key areas of assessment. The user input blocks are shown in Figure 18 and the user input script in Figure 19.

Figure 18. Discrete Event Simulation User Input Blocks
Figure 19. User Input Script

This allows the user to vary the environment, the sensor types, sensor numbers, and sensor capabilities to classify or detect a threat. Figure 18 shows input blocks for threats, radar parameters, emission values, and electro-optic (EO) values. Figure 19 shows an example of the threat and radar parameter script input. Figure 20 shows the input block during execution of the script.

Figure 20. Simulation User Input Block

Next, the threat parameters are created in the threat setup block shown in Figure 13 and defined by the script shown in Figure 21.

Figure 21. Threat Setup Script

The threat signatures are scaled between 0 and 100 so they can be compared, in the sensor block, to the capabilities of the sensors as entered by the user. The sensor detection is based on the threat signature (user input) and the distance factor of the threat to the sensor (random), compared to the sensor's capacity to detect and classify (user input). The threat will be visible to each sensor once. The script for radar detection is shown in Figure 22. The scripts for the other two sensor types, emission and EO, are done in a similar manner.

Figure 22. Radar Detection Script
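
The actual radar detection logic resides in the Innoslate script shown in Figure 22; the Python sketch below only illustrates the described comparison of a randomized, distance-degraded threat signature against user-entered detect and classify capabilities. The threshold values, the comparison direction, and the distance-factor range are assumptions.

import random

def sense(threat_signature, detect_threshold, classify_threshold):
    # Return 'C' (classified), 'D' (detected only), or 'N' (missed).
    # threat_signature is user input scaled 0-100; the thresholds stand in for
    # the user-entered sensor capability on the same scale. The comparison
    # direction and the distance-factor range are assumptions for this sketch.
    distance_factor = random.uniform(0.5, 1.0)     # farther threats look weaker (random draw)
    effective_signature = threat_signature * distance_factor
    if effective_signature >= classify_threshold:
        return "C"
    if effective_signature >= detect_threshold:
        return "D"
    return "N"

# Example: a moderately strong threat against a mid-capability radar.
print(sense(threat_signature=70, detect_threshold=30, classify_threshold=55))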

Next, the rules for the fuse capability shown above in Figure 16 are scripted as shown in Figure 23.

Figure 23. Fuse Tracks Script

The fuse script rolls up the output from the sensors into a single value that effectively recommends an action to the system, such as automatic defeat, as shown in Figure 14, or passes information on to the operator to manage the final process, as partially shown in Figure 17. In the shorthand notation used in the script cases, "CCC" means that all sensor types classified the threat, "CCD" means two of the three classified the detections as a threat and all detected the threat, "CDD" means that all three detected the threat with one classifying it as a threat, "DDX" means only two detected the threat, and "NNN" means the threat was never detected by any sensor. This is a very simple fusion rule set for this case study; the fusing capability can greatly influence the trade space if a more advanced version is used, but that is beyond the scope of this research.
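
The sketch below illustrates one way the shorthand codes and recommended actions described above could be implemented; it is not the Innoslate script from Figure 23, and the handling of intermediate cases not spelled out in the text (for example, a single detection with no classification) is an assumed conservative default.

def fuse(radar, emission, eo):
    # Combine per-sensor outcomes ('C', 'D', or 'N') into the shorthand codes
    # described above plus a recommended action. Cases the text does not spell
    # out default to operator monitoring; that default is an assumption.
    code = "".join(sorted([radar, emission, eo], key="CDN".index))   # e.g. 'CCD'
    if code == "CCC":
        return code, "automatic defeat"
    if code == "CCD":
        return code, "operator override"
    if code == "CDD":
        return code, "operator initiate"
    if code == "NNN":
        return code, "no action (never detected)"
    if code == "DDN":
        return "DDX", "operator monitor"
    return code, "operator monitor"

print(fuse("C", "C", "D"))    # ('CCD', 'operator override')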

The final step in the action flow is to activate the defeat mechanism, as shown in Figure 14 for automatic defeat or as shown in Figure 17 for the operator choice of overriding automatic defeat, initiating an attempted defeat, monitoring, or taking no action based on the Fuse_Output variable. Whether the UxS is defeated or not is based on a simple probability, biased such that the threat is more likely to be defeated when all sensors classify it and less likely to be defeated when only one sensor type can classify it.

The discrete event simulation runs will be based on the simulation input tables, which vary the type of sensors, sensor capabilities, and number of each sensor type for the simulation input values. Each simulation run will simulate 1000 threats, and the results are summed by the script and shown in the console window. Results are transferred by hand back to the spreadsheet for further analysis.
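
A minimal sketch of this per-run aggregation and console summary step is shown below; the stand-in defeat probability is only there to make the sketch self-contained, since in the actual workflow the per-threat outcomes come from the Innoslate simulation.

import random

def simulate_run(num_threats=1000, p_defeat=0.85):
    # Stand-in for one discrete event simulation run; in the actual workflow the
    # per-threat outcomes come from the Innoslate model, not from random draws.
    defeated = sum(random.random() < p_defeat for _ in range(num_threats))
    return defeated, num_threats

defeated, total = simulate_run()
# Mirrors the "print to console" step: this summary line is what gets carried
# over to the spreadsheet for further analysis.
print(f"Threats defeated: {defeated} of {total} ({defeated / total:.0%})")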

With the model and scripts set up, the assessments can now be completed.

5.                   Assessment

This case study will focus on the first recursive refinement phase of a multi-phase effort to demonstrate the effects of trade space choices and the simulation refinement process. The first phase demonstrates what the team knows during the AoA phase and the six simulation runs used to refine the design choices. Early work completed during the Rapid Development Capability (RDC) effort had already shown that a homogeneous solution will not work, so this effort starts where that work left off, with a heterogeneous sensor mix, and refines from there. A screen shot of the model coverage is shown in Figure 24 and the console output is shown in Figure 25.

Figure 24. Model Coverage in Simulation
Figure 25. Model Results in Console Window

a.                   Input

The AoA phase starts the process with the SMEs' best understanding of the simulation capabilities. The SMEs came up with six different system combinations and types to compare the capabilities of higher cost systems to lower cost systems, with coverage and cost being kept constant at the threshold value of 95-99% coverage, as shown in Table 2.

Table 2. Initial Mission Values

b.                   Results

For each of the runs, the input values were entered into the Innoslate discrete event simulation tool, the simulation was run, and the results were captured from the console window and recorded below in Table 3. The use of the "print to console" capability greatly simplified the "what if" process. For this use case, the first three runs did not achieve the desired metric values, so the focus was shifted to a more balanced approach. This shift in focus led to finding at least three viable concepts for the AoA that can be carried forward, as shown by runs 4-6, where run 5 showed the most promise.

Table 3. First Phase Mission Effectiveness Results
Run | Threats Defeated | Total Effectiveness | Sensor Cost
1 | 682 | 68% | $505,737
2 | 778 | 78% | $503,114
3 | 685 | 69% | $456,908
4 | 810 | 81% | $489,623
5 | 870 | 87% | $485,500
6 | 864 | 86% | $501,884

c.                    Analysis

For this use case, the cause and effect of changing the different sensor combinations is not readily apparent from a static viewpoint perspective. Though the model looks simple, each part adds significant complexity, with only the rule-based fusion engine being deterministic. In this use case, both the threats and the operator responses were dynamic and non-obvious. For example, early experimental runs that were thought to have optimal sensor combinations did not achieve results as good as expected. The later runs learned from these early failures to meet the required mission effectiveness and produced improved results. This showed that the knowledge and understanding gained through the trial and error process provided by the simulation, even for this relatively simple use case, is a very valuable part of system design. This use case only shows the first phase, but one can easily see that changes to the model, specifically refinement of the fusion engine, can greatly impact the mission effectiveness. For example, a simpler fusion technique might favor a single powerful sensor type, where a more complex fusion technique might allow for less capable sensors. In the analysis of this use case, sensor capability was directly related to system cost. For this effort, runs 5 and 6 showed the most promising results, which helped us understand the appropriate mix of capabilities for future systems.

6.                   Recursive Refinement

As mentioned above, some of the mission effectiveness was driven by the choices made in the fusion engine design and rule set. The next natural step in the refinement process might be to leave the sensor combination the same but to try different variations of the fusion engine rule set. Alternatively, the next step might be to add additional operator aids to help with the defeat process, or to further automate the process so the variability of the operator is completely removed.

7.                   Exit of Process

The actual exit of the MBSE lifecycle process, and of this use case, does not really occur until the system is retired, but following the process once through the cycle to build the model, action diagrams, and scripts allowed assessment of the iterative process. One can easily envision how it will enhance the design process and later phases of the lifecycle.

B.                  Analysis of the Process

Chapter 3 provided an MBSE process with simulation that can be used to assess the solution early in the process and throughout the design lifecycle. Chapter 4 provided a realistic, though truncated, use case for a counter unmanned system using the MBSE process with simulation. The mission effectiveness metric was assessed for each sensor combination at the end of each run, to see how the simulation output compared to the target values of the metrics. Based on the knowledge learned, the stakeholders and the designers can adjust the sensor parameters for the next run and the target values for the next phase. This allowed for gradual convergence on a viable solution for the stakeholders in the first phase. Though not shown by the use case, the process cycles can account for changes in the capabilities, environment, and threats, which all have an effect on the overall effectiveness of the solution. For example, the radar capabilities used in the first phase might prove to be too optimistic after initial testing is complete, so the system would not meet the mission effectiveness metric and might drive cost. Correspondingly, the opposite might happen: advancements in artificial intelligence/image processing might make optical sensors much more effective than first thought, greatly increasing mission effectiveness and reducing cost.

This use case is intended to demonstrate a process that can be leveraged for all system development efforts. The actual MBSE process with integrated simulation for a large development effort will be much more extensive, with all the additional capabilities, functionality, and action cases, including the interactions among multiple system effectiveness metric target values. Additionally, the ability to use the single source of truth with the built-in simulation at any time in the lifecycle, or for a site-specific instantiation of the C-UxS, allows for continuous knowledge of the system's mission effectiveness metric.

II.       Conclusions/Future Work

A.                 Summary of the Analysis

The use case analysis shows that neither the static view nor the initial dynamic analysis solution would have met the stakeholders' requirements without the knowledge gained by multiple runs through the solution space via simulation. The interactions between sensors, fusion, and operator seem simple when looked at statically, but the dynamic interactions are shown to be complex even for this simplistic use case. For example, the initial three concept simulation runs did not meet the mission effectiveness requirements, even though they all looked perfectly viable from a static perspective. Through the inner loop of the iterative process, one learned to apply a different focus, one which placed more emphasis on a balanced sensor selection approach, which in turn allowed the design to meet the requirement. Without the iterative dynamic analysis throughout the process, the stakeholders might have settled on a non-optimized solution that too strongly favored a radar solution over a more optimal, balanced solution. The key for this research is not the numerical values determined to be the feasible optimized solution, but the fact that iterative simulation added knowledge inside the engineering model. In turn, that knowledge allowed for system optimization inside the MBSE process.

B.                  Conclusions

Many great ideas, including Thomas Edison's early engineering of the light bulb, are built on a long series of trials and errors, engineering failures so to speak. This process of trial and error still exists today; the only difference is that the systems are much more complex, so the failures can be much more costly. In the book Black Box Thinking, Syed (2015) presents the idea that failure is a fact of life, but one can choose either to learn from failure or to pretend that the failure was out of one's personal control and not learn from it. Deming, in his breakthrough book Out of the Crisis (1982), also stressed that understanding failure was the key to future success and the foundation of process improvement. For aviation, learning from failure has always been part of the culture, but almost all of these failures happen after the system is designed and sometimes too late to be cost-effectively fixed. This effort moves the power of learning from failure into the early phases of the process, exactly when it is the best time for "the learn and fix cycle" to happen. The power of learning by failing is even greater than the power of early success, since failure causes one to wonder why. This process of understanding why something might have failed is when knowledge is truly gained. This thesis provided an MBSE process with integrated simulation that allows engineers the flexibility to try many options and fail early, so they can succeed in the long term. Multiple designs can be quickly built, including some that have a higher risk to reward, and then the performance of each can be evaluated relative to the others. The act of understanding why one solution is better or worse than another helps one build a greater understanding of the system. It also allows the designer to better understand how sensitive a solution might be to a single technology. The process defined above provides a framework to conduct the definition and design phases of system development using a defined and iterative process built on previous MBSE research on developing large complex systems. This research and the enhanced MBSE process will help in the development of future large complex systems during the system definition and design phases by providing validation of the system model through simulation and analysis, and it is available in all phases of the system lifecycle, as long as the model is maintained as part of the lifecycle process. A general MBSE process with integrated simulation and analysis was shown in Figure 7 and described in chapter 3. A representative counter unmanned system use case exercised the proposed process. This allowed validation of the process, and the use case provided a clear example of how the assessment provides a better overall product.

The discoveries of the related research, the enhancement to the current MBSE process, and the example use case addressed many of the primary research questions that were proposed. The objectives of the thesis effort were met with the development of the MBSE process with embedded simulation and the C-UxS use case. The beneficiaries of the research will be developers of large complex systems in dynamic environments, who can use the process as part of the fielding and maintenance effort. The research questions are discussed below, with corresponding details identified for each based on the research.

  1. How can MBSE be used to forecast and investigate mission effectiveness, caused by material and design limitations, to inform and influence the early stages of the system design process? This question is addressed in two parts: the first is the process shown in Figure 7 and described in chapter 3, and the second is the use case, which clearly shows the model development, the action flow, and the scripting used to interactively assess mission effectiveness. The MBSE process with embedded simulation provides a powerful framework for complex system development.
  2. Once the architectural model is complete with embedded mission effectiveness analysis, how can multiple runs of the simulation, varying the component level effectiveness probabilities of design choices, be used to determine overall system sensitivity? This question is addressed with the use case, specifically with the inputs of the six runs shown in Table 2 and the results shown in Table 3. In the use case, the design choices were the sensor mix and the capabilities selected for each sensor type. The variation showed where the use case was the most sensitive; in this case the unbalanced systems performed poorly compared to balanced systems.
  3. How can the system sensitivity results and analysis be used to optimize design and reliability requirements? This effort leveraged previous work by Perez (2014), where fault analysis was shown to be viable. The use case was based on capability and cost, but an additional dimension of fault/reliability could have been added based on past work and this research effort.

  4. Based on the architectural model with the simulation, how can one use sensitivity analysis techniques to adjust the project's path forward by having a continuous positive impact on the early stages of the development process? The question was not explicitly addressed by the use case, since that would have required multiple phases, but it was addressed by the iterative portion of the MBSE process shown in Figure 7. Specifically, the recursive step described in chapter 3, section 6 takes what is known, simulates it in a representative environment, assesses performance, and then, based on the knowledge learned from the assessment, recursively refines the system. The changes made clearly show, in a relative manner, whether a change had a positive or negative impact on system performance. The path forward becomes relatively clear: back out a bad change and try something else, or continue to refine a good change. At times when progress is stalled, one might have to make seemingly random changes just to see the pattern of positive and negative effects and determine a new course of action.

C.                  Recommendations for Future Work

As stated above, only the first phase of a development effort was completed in the C-UxS use case, for a single action view. The process should be expanded to follow a real program through multiple phases and with a broader use of action views to further refine the process. The third research area, using the process to relate reliability requirements, was not expanded on due to the limitations of the use case, but this is a very interesting area for further research. Additionally, this research used a single tool set, but there are multiple tool sets available that might provide additional insight; a broader investigation is required before recommending a single tool set.

