Creating Faultable Network Models of Complex Engineered Systems

Author(s):  
Brandon M. Haley ◽  
Andy Dong ◽  
Irem Y. Tumer

This paper presents a new methodology for modeling complex engineered systems using complex networks for failure analysis. Many existing network-based modeling approaches for complex engineered systems “abstract away” the functional details to focus on the topological configuration of the system and thus do not provide adequate insight into system behavior. To model failures more adequately, we present two types of network representations of a complex engineered system: a uni-partite architectural network and a weighted bi-partite behavioral network. Whereas the architectural network describes physical inter-connectivity, the behavioral network represents the interaction between functions and variables in mathematical models of the system and its constituent components. The levels of abstraction for nodes in the two network types afford the evaluation of failures involving morphology or behavior, respectively. The approach is demonstrated on a drivetrain model. Architectural and behavioral networks are compared with respect to the types of faults each can describe. We conclude with considerations that should be employed when modeling complex engineered systems as networks for the purpose of failure analysis.
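As a minimal sketch of the bi-partite idea (not the authors' implementation), a behavioral network can be stored as a mapping from equations to the weighted variables they involve, then projected onto variables so that two variables are linked when they appear in the same equation. The equation and variable names below are illustrative placeholders, not the paper's drivetrain model.

```python
# Weighted bi-partite behavioral network: equation -> {variable: weight}.
# Names are hypothetical stand-ins for a drivetrain-like system.
behavioral = {
    "torque_balance": {"T_in": 1.0, "T_out": 1.0, "loss": 0.2},
    "speed_ratio":    {"omega_in": 1.0, "omega_out": 1.0},
    "power_flow":     {"T_out": 1.0, "omega_out": 1.0},
}

def variable_projection(bipartite):
    """Project onto the variable partition: two variables are adjacent
    if some equation couples them."""
    proj = {}
    for eq, weighted_vars in bipartite.items():
        names = list(weighted_vars)
        for i, u in enumerate(names):
            for v in names[i + 1:]:
                proj.setdefault(u, set()).add(v)
                proj.setdefault(v, set()).add(u)
    return proj

proj = variable_projection(behavioral)
# T_out participates in two equations, so its projected neighborhood
# spans both the torque and speed variables.
```

A fault injected at a variable node can then be propagated along the projected edges, which is the kind of behavioral reachability the architectural (purely physical) network cannot express.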

Author(s):  
Sean C. Hunter ◽  
David C. Jensen ◽  
Irem Y. Tumer ◽  
Christopher Hoyle

For many complex engineered systems, a risk informed approach to design is critical to ensure both robust safety and system reliability. Early identification of failure paths in complex systems can greatly reduce the costs and risks absorbed by a project in future failure mitigation strategies. By exploring the functional effect of potential failures, designers can identify preferred architectures and technologies prior to acquiring specific knowledge of detailed physical system forms and behaviors. Early design-stage failure analysis is enabled by model-based design, with several research methodologies having been developed to support this design stage analysis through the use of computational models. The abstraction necessary for implementation at the design stage, however, leads to challenges in validating the analysis results presented by these models. This paper describes initial work on the comparison of models at varying levels of abstraction with results obtained on an experimental testbed in an effort to validate a function-based failure analysis method. Specifically, the potential functional losses of a simple rover vehicle are compared with experimental findings of similar failure scenarios. Expected results of the validation procedure suggest that a model’s validity and quality are a function of the depth to which functional details are described.


Author(s):  
Hannah S. Walsh ◽  
Andy Dong ◽  
Irem Y. Tumer

All methods associated with failure analysis attempt to identify critical design variables and parameters such that appropriate process controls can be implemented to detect problems before they occur. This paper introduces a new approach to the identification of critical design variables and parameters through the concept of bridging nodes. Using a network-based perspective in which design parameters and variables are modeled as nodes, results show that vulnerable parameters tend to be bridging nodes, which are nodes that connect two or more groups of nodes that are organized together in order to perform an intended function. This paper extends existing modeling capabilities based upon a behavioral network analysis (BNA) approach and presents empirical results identifying the relationship between bridging nodes and parameter vulnerability as determined by existing, network metric-based methods. These topological network robustness metrics were used to analyze a large number of engineering systems. Bridging nodes are associated with significantly larger changes in network degradation, as measured by these metrics, than non-bridging nodes when subject to attack (p < 0.001). The results indicate the structural role of vulnerable design parameters in a behavioral network.


Author(s):  
Hoda Mehrpouyan ◽  
Dimitra Giannakopoulou ◽  
Irem Y. Tumer ◽  
Chris Hoyle ◽  
Guillaume Brat

This paper presents a novel safety specification and verification approach based on compositional reasoning and model-checking algorithms. The behavioral specification of each component and subsystem is modeled to describe the overall structure of the design. Then, these specifications are analyzed to determine the least number of component redundancies required to tolerate faults and prevent catastrophic system failure. The framework utilizes the Labelled Transition Systems (LTS) formalism to model the behavior of components and subsystems. Furthermore, compositional analysis is used to reason about the components’ constraints (or assumptions) on their environments and the properties (or guarantees) of their output. This identification of local safety properties of components and subsystems leads to satisfaction of the desired safety requirements for the global system. A model of a quad-redundant Electro-Mechanical Actuator (EMA) is constructed and, in an iterative approach, its safety properties are analyzed. Experimental results confirm the feasibility of the proposed approach for verifying the safety issues associated with complex systems in the early stages of the design process.
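The flavor of the LTS approach can be sketched in a few lines: compose two components by synchronizing on shared actions, then check by breadth-first search whether an unsafe composed state is reachable. This is a toy two-channel illustration under assumed names (`actuator`, `monitor`, `fault1`, `fault2`), not the paper's quad-redundant EMA model or a real model checker.

```python
from collections import deque

def reachable_states(init, step):
    """BFS over an LTS; `step(state)` yields successor states."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        for n in step(s):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

# Two hypothetical components over a shared alphabet: an actuator that can
# lose channels, and a monitor that masks one fault by switching channels.
actuator = {("ok", "fault1"): "ch1_down", ("ch1_down", "fault2"): "all_down"}
monitor  = {("ch1", "fault1"): "ch2",     ("ch2", "fault2"): "no_backup"}

def step(pair):
    """Synchronous product: shared actions fire in both components at once."""
    s1, s2 = pair
    for act in ("fault1", "fault2"):
        if (s1, act) in actuator and (s2, act) in monitor:
            yield (actuator[(s1, act)], monitor[(s2, act)])

states = reachable_states(("ok", "ch1"), step)
unsafe = ("all_down", "no_backup") in states  # True: dual redundancy is not enough
```

In the compositional setting, one would instead verify each component against an assumption on its environment and discharge that assumption against the other component, avoiding construction of the full product.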


2016 ◽  
Vol 138 (12) ◽  
Author(s):  
Brandon M. Haley ◽  
Andy Dong ◽  
Irem Y. Tumer

It has been assumed, but not yet tested, that the topological disintegration of networks is relatable to degradations in complex engineered system behavior and that extant network metrics are capable of capturing these degradations. This paper tests three network metrics commonly used to quantify the topological robustness of networks for their ability to characterize the degree of failure in engineered systems: average shortest path length, network diameter, and a robustness coefficient. A behavioral network of a complex engineered system is subjected to “attack” to simulate potential failures to the system. Average shortest path length and the robustness coefficient showed topological disintegration patterns that differed between nominal and failed cases, regardless of failure implementation location. The network diameter metric is not sufficiently dependent on local cluster topology to show changes in topology with edge-removal failure strategies. The results show that topological metrics from the field of complex networks are applicable to complex engineered systems when they account for both local and global topological changes.
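Two of the three metrics can be computed with plain breadth-first search. The following sketch, on a toy six-node ring rather than the paper's behavioral network, shows average shortest path length and diameter before and after a one-node "attack":

```python
from collections import deque

def bfs_dists(adj, src):
    """Hop distances from `src` in an unweighted graph (adjacency dict)."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def path_metrics(adj):
    """(average shortest path length, diameter) over reachable node pairs."""
    lengths = [d for s in adj for d in bfs_dists(adj, s).values() if d > 0]
    return sum(lengths) / len(lengths), max(lengths)

def attack(adj, node):
    """Remove `node` and its incident edges, simulating a local failure."""
    return {u: [v for v in vs if v != node] for u, vs in adj.items() if u != node}

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
aspl0, diam0 = path_metrics(ring)             # 1.8, 3
aspl1, diam1 = path_metrics(attack(ring, 0))  # 2.0, 4 after the attack
```

Both metrics rise after the attack, the signature of topological disintegration; the diameter, being a single worst-case value, is the coarser of the two, which is consistent with the abstract's finding that it misses local changes.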


2018 ◽  
Vol 4 ◽  
Author(s):  
Hannah S. Walsh ◽  
Andy Dong ◽  
Irem Y. Tumer

Recent advances in early stage failure analysis approaches have introduced behavioral network analysis (BNA), which applies a network-based model of a complex engineered system to detect the system-level effect of ‘local’ failures of design variables and parameters. Previous work has shown that changes in microscale network metrics can signify system-level performance degradation. This article introduces a new insight into the influence of the community structure of the behavioral network on the failure tolerance of the system through the role of bridging nodes. Bridging nodes connect a community of nodes in a system to one or more nodes or communities outside of the community. In a study of forty systems, it is found that bridging nodes, under attack, are associated with significantly larger system-level behavioral degradation than non-bridging nodes. This finding indicates that the modularity of the behavioral network could be key to understanding the failure tolerance of the system and that parameters associated with bridging nodes between modules could play a vital role in system degradation.
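Given a community assignment, bridging nodes as defined above are straightforward to detect: a node bridges if any of its neighbors lies in a different community. The graph and communities below are toy placeholders, not one of the forty systems studied.

```python
# Two assumed communities joined by the single edge c--d.
adj = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],  # community 1
    "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],  # community 2
}
community = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2, "f": 2}

def bridging_nodes(adj, community):
    """Nodes with at least one neighbor outside their own community."""
    return {u for u, vs in adj.items()
            if any(community[v] != community[u] for v in vs)}

# Only c and d span the community boundary, so they are the bridging nodes
# whose removal would disconnect the two modules.
```

In the article's attack experiments, it is precisely such nodes whose removal degrades system-level behavior the most, which is why module boundaries are a natural place to look for vulnerable parameters.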


Author(s):  
John R. Devaney

Occasionally in history, an event occurs that has a profound influence on a technology. Such an event occurred when the scanning electron microscope became commercially available to industry in the mid-1960s. Semiconductors were being increasingly used in high-reliability space and military applications both because of their small volume and because of their inherent reliability. However, they did fail, both early in life and sometimes in middle or old age. Why they failed, and how to prevent failure or prolong “useful life,” was a worry that resulted in a blossoming of sophisticated failure analysis laboratories across the country. By 1966, the ability to build small-structure integrated circuits was forging well ahead of the techniques available to dissect and analyze failures in those same circuits. The arrival of the scanning electron microscope gave these analysts new insight into failure mechanisms.


Author(s):  
Anne E. Gattiker ◽  
Phil Nigh ◽  
Wojciech Maly

Abstract This article provides an analysis of a class of failures observed during the SEMATECH-sponsored Test Methods Experiment. The analysis focuses on the use of test-based failure analysis and IDDQ signature analysis to gain insight into the physical mechanisms underlying such subtle failures. In doing so, the analysis highlights techniques for understanding failure mechanisms using only tester data. In the experiment, multiple test methods were applied to a 0.45-micrometer effective-channel-length ASIC. Specifically, ICs that change test behavior from before to after burn-in are studied to understand the physical nature of the mechanism underlying their failure. Examples of the insights provided by the test-based analysis include identifying cases with multiple or complex defects, distinguishing whether the defect type is likely a short versus an open, and determining whether the defect is marginal. These insights can be helpful for successful failure analysis.


2009 ◽  
Vol 21 (5) ◽  
pp. 473-485 ◽  
Author(s):  
Everard van Kemenade ◽  
Teun W. Hardjono

Purpose – The purpose of this paper is to define what factors cause willingness and/or resistance among lecturers in universities towards external evaluation systems, especially accreditation.
Design/methodology/approach – A model has been designed to describe possible factors of willingness and/or resistance towards accreditation, based on Ajzen and Metselaar. A literature review has been undertaken on the effects of external evaluation such as ISO 9000, as well as accreditation systems such as the Accreditation Board for Engineering and Technology and the European Quality Improvement System. A questionnaire has been administered to a group of 63 lecturers from three departments at Fontys University in The Netherlands. The results of this preliminary survey have been presented to 1,500 academics in The Netherlands and Flanders to collect empirical data.
Findings – Resistance to accreditation can be found in the consequences of accreditation for the work of the lecturer (workload); negative emotions (stress and insecurity); the lack of knowledge and experience (help from specialists is needed); and lack of acceptance (a different paradigm).
Originality/value – The paper provides more insight into the difficulties that organizations, especially universities, have in committing their employees to external evaluation. It might be possible to generalize the findings to other professionals in other organizations. Little research in this field has been undertaken so far.


AI Magazine ◽  
2012 ◽  
Vol 33 (2) ◽  
pp. 55 ◽  
Author(s):  
Nisarg Vyas ◽  
Jonathan Farringdon ◽  
David Andre ◽  
John Ivo Stivoric

In this article we provide insight into the BodyMedia FIT armband system — a wearable multi-sensor technology that continuously monitors physiological events related to energy expenditure for weight management using machine learning and data modeling methods. Since becoming commercially available in 2001, more than half a million users have used the system to track their physiological parameters and to achieve their individual health goals including weight-loss. We describe several challenges that arise in applying machine learning techniques to the health care domain and present various solutions utilized in the armband system. We demonstrate how machine learning and multi-sensor data fusion techniques are critical to the system’s success.


Author(s):  
Frank H. Johnson ◽  
William E. DeWitt

Analytical tools, like fault tree analysis, have a proven track record in the aviation and nuclear industries. A positive tree is used to ensure that a complex engineered system operates correctly. A negative tree (or fault tree) is used to investigate failures of complex engineered systems. Boeing's use of fault tree analysis to investigate the Apollo launch pad fire in 1967 brought national attention to the technique. The 2002 edition of NFPA 921, Guide for Fire and Explosion Investigations, contains a new chapter entitled "Failure Analysis and Analytical Tools." That chapter addresses fault tree analysis with respect to fire and explosion investigation. This paper will review the fundamentals of fault tree analysis, list recent peer-reviewed papers about the forensic engineering use of fault tree analysis, present a relevant forensic engineering case study, and conclude with the results of a recent university study on the subject.
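The gate logic at the heart of fault tree analysis is compact enough to sketch: a tree of AND/OR gates over basic events, evaluated against the set of events that have occurred. The top event and basic events below are hypothetical, loosely echoing a fire-investigation scenario, and are not taken from NFPA 921 or the paper's case study.

```python
def tree_fails(node, events):
    """Evaluate a fault tree given as nested tuples ("AND"/"OR", child, ...)
    whose leaves are basic-event names; `events` is the set that occurred."""
    if isinstance(node, str):          # leaf: a basic event
        return node in events
    gate, *children = node
    combine = all if gate == "AND" else any
    return combine(tree_fails(c, events) for c in children)

# Hypothetical top event: a fire requires an ignition source AND fuel;
# ignition can come from a short circuit OR an open flame.
fire = ("AND", ("OR", "short_circuit", "open_flame"), "fuel_present")

tree_fails(fire, {"short_circuit", "fuel_present"})  # top event occurs
tree_fails(fire, {"short_circuit"})                  # no fuel, no fire
```

Quantitative fault tree analysis extends this by attaching probabilities to basic events and computing minimal cut sets, but the Boolean evaluation above is the underlying model.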

