ScOSA system software: the reliable and scalable middleware for a heterogeneous and distributed on-board computer architecture

Author(s):  
Andreas Lund ◽  
Zain Alabedin Haj Hammadeh ◽  
Patrick Kenny ◽  
Vishav Vishav ◽  
Andrii Kovalov ◽  
...  

Designing on-board computers (OBCs) for future space missions is determined by the trade-off between reliability and performance. Space applications with higher computational demands are not supported by currently available, state-of-the-art, space-qualified computing hardware, since their requirements exceed the capabilities of these components. Such space applications include Earth observation with high-resolution cameras, real-time on-orbit servicing, as well as autonomous spacecraft and rover missions on distant celestial bodies. An alternative to state-of-the-art space-qualified computing hardware is the use of commercial-off-the-shelf (COTS) components for the OBC. Not only are these components cheap and widely available, but they also achieve high performance. Unfortunately, they are also significantly more vulnerable to radiation-induced errors than space-qualified components. The ScOSA (Scalable On-board Computing for Space Avionics) Flight Experiment project aims to develop an OBC architecture which avoids this trade-off by combining space-qualified radiation-hardened components (the reliable computing nodes, RCNs) with COTS components (the high-performance nodes, HPNs) into a single distributed system. To abstract this heterogeneous architecture for application developers, we are developing a middleware for the aforementioned OBC architecture. Besides presenting a monolithic abstraction of the distributed system, the middleware shall also enhance the architecture by providing additional reliability and fault tolerance. In this paper, we present the individual components comprising the middleware, alongside the features it offers. Since the ScOSA Flight Experiment project is a successor of the OBC-NG and ScOSA projects, its middleware is also a further development of the existing middleware. Therefore, we present and discuss our contributions and plans for enhancing the middleware in the course of the current project. Finally, we present first results on the scalability of the middleware, obtained from software-in-the-loop experiments with scenarios of different sizes.
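As a rough illustration of what such a middleware abstraction might look like, the sketch below models task placement across reliable and high-performance nodes, with tasks migrated off a node that fails. All class and method names are hypothetical, not the actual ScOSA API.

```python
# Hypothetical sketch of heterogeneous task placement with failover.
# Node roles and the monitoring/voting details of the real ScOSA
# middleware are not specified here; every name is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    reliable: bool          # True for an RCN, False for a COTS HPN
    alive: bool = True
    tasks: list = field(default_factory=list)

class Middleware:
    def __init__(self, nodes):
        self.nodes = nodes

    def schedule(self, task, prefer_reliable=False):
        """Place a task on a living node, optionally preferring an RCN."""
        candidates = [n for n in self.nodes if n.alive]
        if prefer_reliable:
            candidates.sort(key=lambda n: not n.reliable)  # RCNs first
        target = min(candidates, key=lambda n: len(n.tasks))
        target.tasks.append(task)
        return target

    def handle_failure(self, failed):
        """Mark a node dead and migrate its tasks to the survivors."""
        failed.alive = False
        for task in failed.tasks:
            self.schedule(task)
        failed.tasks.clear()

mw = Middleware([Node("rcn0", True), Node("hpn0", False), Node("hpn1", False)])
mw.schedule("camera_processing")
mw.handle_failure(mw.nodes[1])   # e.g. an HPN hit by a radiation-induced error
```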

Aerospace ◽  
2021 ◽  
Vol 8 (6) ◽  
pp. 169
Author(s):  
Ahmed E. S. Nosseir ◽  
Angelo Cervone ◽  
Angelo Pasini

Green propellants are currently considered an enabling technology that is revolutionizing the development of high-performance space propulsion, especially for small-sized spacecraft. Modern space missions, whether in LEO or interplanetary, require relatively high-thrust and impulsive capabilities to provide better control over the spacecraft and to overcome growing challenges, particularly those related to overcrowded LEO orbits and the orbital maneuvering requirements of modern space applications. Green monopropellants are gaining momentum in the design and development of small and modular liquid propulsion systems, especially for CubeSats, due to their favorable thermophysical properties and relatively high performance when compared to gaseous propellants, and perhaps simpler management when compared to bipropellants. Accordingly, a novel high-thrust modular impulsive green monopropellant propulsion system with a micro electric pump feed cycle is proposed. MIMPS-G500mN is designed to deliver 0.5 N of thrust and offers a theoretical total impulse I_tot from 850 to 1350 N·s per 1U and >3000 N·s per 2U, depending on the burnt monopropellant, which makes it a candidate for various LEO satellites as well as future Moon missions. The green monopropellant ASCENT (formerly AF-M315E), as well as HAN- and ADN-based alternatives (i.e., HNP225 and LMP-103S), were proposed in the preliminary design and system analysis. The article will present state-of-the-art green monopropellants in the Energetic Ionic Liquid (EIL) class and a trade-off study of the proposed propellants. The system analysis and design of MIMPS-G500mN will be discussed in detail, and the article will conclude with a market survey of green monopropellant propulsion systems and commercial off-the-shelf thrusters for small satellites.
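As a quick sanity check on the quoted figures, total impulse divided by a constant thrust gives the cumulative burn time, t = I_tot / F. The short script below applies this to the numbers from the abstract; it is back-of-the-envelope arithmetic, not the authors' system analysis.

```python
# Cumulative burn time implied by the quoted total impulse at 0.5 N thrust.
THRUST_N = 0.5

for label, i_tot_ns in [("1U, lower bound", 850.0),
                        ("1U, upper bound", 1350.0),
                        ("2U, quoted minimum", 3000.0)]:
    burn_time_s = i_tot_ns / THRUST_N          # t = I_tot / F
    print(f"{label}: {i_tot_ns:.0f} N·s -> {burn_time_s / 60:.0f} min of thrust")
```

For the 1U case this works out to roughly 28 to 45 minutes of cumulative firing, and at least 100 minutes for the 2U case.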


2021 ◽  
Vol 9 ◽  
pp. 756-773
Author(s):  
Elias Stengel-Eskin ◽  
Kenton Murray ◽  
Sheng Zhang ◽  
Aaron Steven White ◽  
Benjamin Van Durme

While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other. This trade-off contradicts the large body of research focusing on the rich interactions at the syntax–semantics interface. We explore multiple model architectures that allow us to exploit the rich syntactic and semantic annotations contained in the Universal Decompositional Semantics (UDS) dataset, jointly parsing Universal Dependencies and UDS to obtain state-of-the-art results in both formalisms. We analyze the behavior of a joint model of syntax and semantics, finding patterns supported by linguistic theory at the syntax–semantics interface. We then investigate to what degree joint modeling generalizes to a multilingual setting, where we find similar trends across 8 languages.
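A joint model of this kind is typically built around a shared encoder feeding separate task heads, so that syntactic and semantic supervision shape the same representations. The PyTorch sketch below is a minimal, hypothetical illustration of that structure, not the authors' parser; all dimensions, layer choices, and head designs are placeholders.

```python
# Minimal sketch of a joint syntax-semantics model: one shared encoder,
# two task heads. The real UDS parser architecture differs.
import torch
import torch.nn as nn

class JointParser(nn.Module):
    def __init__(self, vocab_size, hidden=256, n_dep_labels=40, n_uds_attrs=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        # Syntax head: scores a dependency label for each token.
        self.syntax_head = nn.Linear(2 * hidden, n_dep_labels)
        # Semantics head: predicts real-valued UDS attributes per token.
        self.semantics_head = nn.Linear(2 * hidden, n_uds_attrs)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.syntax_head(states), self.semantics_head(states)

model = JointParser(vocab_size=10000)
dep_scores, uds_attrs = model(torch.randint(0, 10000, (2, 12)))
```

Training would sum a loss per head, which is where the trade-off the abstract mentions usually appears: weighting one objective too heavily degrades the other.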


2021 ◽  
Vol 14 (4) ◽  
pp. 1-32
Author(s):  
Sebastian Sabogal ◽  
Alan George ◽  
Gary Crum

Deep learning (DL) presents new opportunities for enabling spacecraft autonomy, onboard analysis, and intelligent applications for space missions. However, DL applications are computationally intensive and often infeasible to deploy on radiation-hardened (rad-hard) processors, which traditionally harness a fraction of the computational capability of their commercial-off-the-shelf counterparts. Commercial FPGAs and systems-on-chip present numerous architectural advantages and provide the computational capabilities to enable onboard DL applications; however, these devices are highly susceptible to radiation-induced single-event effects (SEEs) that can degrade the dependability of DL applications. In this article, we propose Reconfigurable ConvNet (RECON), a reconfigurable acceleration framework for dependable, high-performance semantic segmentation for space applications. In RECON, we propose both selective and adaptive approaches to enable efficient SEE mitigation. In our selective approach, control-flow parts are selectively protected by triple-modular redundancy to minimize SEE-induced hangs, and in our adaptive approach, partial reconfiguration is used to adapt the mitigation of dataflow parts in response to a dynamic radiation environment. Combined, both approaches enable RECON to maximize system performability subject to mission availability constraints. We perform fault injection and neutron irradiation to observe the susceptibility of RECON and use dependability modeling to evaluate RECON in various orbital case studies, demonstrating a 1.5–3.0× performability improvement in both performance and energy efficiency compared to static approaches.
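One way to read "performability" here is as throughput weighted by the time spent in each operating mode: heavier mitigation costs performance but keeps the accelerator available. The sketch below illustrates that computation with invented numbers; it is not RECON's actual dependability model.

```python
# Hedged sketch of a performability estimate: weight each mitigation
# mode's relative throughput by the fraction of mission time spent in it.
# All fractions and throughputs below are made up for illustration.
modes = {
    # mode: (fraction of time in mode, throughput relative to unhardened)
    "full_mitigation":    (0.20, 0.4),   # harsh environment, heavy TMR
    "partial_mitigation": (0.60, 0.8),   # adapted for a milder environment
    "unhardened":         (0.15, 1.0),
    "reconfiguring":      (0.05, 0.0),   # repair downtime
}

performability = sum(frac * perf for frac, perf in modes.values())
print(f"Expected performability: {performability:.2f}x the unhardened baseline")
```

An adaptive scheme like RECON's partial reconfiguration effectively shifts time from the heavy-mitigation row to the cheaper rows whenever the radiation environment allows, raising the weighted sum.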


2020 ◽  
Vol 38 (3-4) ◽  
pp. 1-30
Author(s):  
Rakesh Kumar ◽  
Boris Grot

The front-end bottleneck is a well-established problem in server workloads owing to their deep software stacks and large instruction footprints. Despite years of research into effective L1-I and BTB prefetching, state-of-the-art techniques force a trade-off between metadata storage cost and performance. Temporal Stream prefetchers deliver high performance but require a prohibitive amount of metadata to accommodate the temporal history. Meanwhile, BTB-directed prefetchers incur low cost by using the existing in-core branch prediction structures but fall short on performance due to BTB’s inability to capture the massive control flow working set of server applications. This work overcomes the fundamental limitation of BTB-directed prefetchers, which is capturing a large control flow working set within an affordable BTB storage budget. We re-envision the BTB organization to maximize its control flow coverage by observing that an application’s instruction footprint can be mapped as a combination of its unconditional branch working set and, for each unconditional branch, a spatial encoding of the cache blocks around the branch target. Effectively capturing a map of the application’s instruction footprint in the BTB enables highly effective BTB-directed prefetching that outperforms the state-of-the-art prefetchers by up to 10% for equivalent storage budget.
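The proposed BTB organization can be pictured as each unconditional-branch entry carrying a small bit vector over the cache blocks around the branch target, so a single entry covers many instruction blocks. The sketch below illustrates that spatial encoding; the region size, field widths, and replacement policy are assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch of a footprint-augmented BTB: entries are keyed by
# unconditional-branch PC and record which cache blocks near the target
# were touched, so a BTB hit can prefetch the whole spatial region.
BLOCK_SIZE = 64            # bytes per instruction cache block (assumed)
REGION_BLOCKS = 8          # blocks tracked after the target (assumed)

class FootprintBTB:
    def __init__(self):
        self.entries = {}  # branch PC -> (target, footprint bit vector)

    def record(self, branch_pc, target, touched_addr):
        """Set the footprint bit for a block fetched near the target."""
        delta = touched_addr // BLOCK_SIZE - target // BLOCK_SIZE
        if 0 <= delta < REGION_BLOCKS:
            _, footprint = self.entries.setdefault(branch_pc, (target, 0))
            self.entries[branch_pc] = (target, footprint | (1 << delta))

    def prefetch_targets(self, branch_pc):
        """On a BTB hit, expand the footprint into block addresses."""
        target, footprint = self.entries.get(branch_pc, (None, 0))
        if target is None:
            return []
        base = target // BLOCK_SIZE
        return [(base + i) * BLOCK_SIZE
                for i in range(REGION_BLOCKS) if footprint & (1 << i)]
```

The storage argument follows directly: one entry plus a few footprint bits replaces what would otherwise be several distinct BTB entries or a long temporal-history trace.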


2020 ◽  
Author(s):  
David Belo ◽  
Nuno Bento ◽  
Hugo Silva ◽  
Ana Fred ◽  
Hugo Gamboa

Background: Biometric Systems (BS) are based on a pattern recognition problem in which the individual traits of a person are coded and compared. The electrocardiogram (ECG) has emerged as a biometric, as it fulfills the requirements of a BS. Methods: Inspired by the high performance shown by Deep Neural Networks (DNNs), this work proposes two architectures to improve current results in both identification and authentication: a Temporal Convolutional Neural Network (TCNN) and a Recurrent Neural Network (RNN). The results of the two networks were submitted to a simple classifier, which exploits the prediction error of the former and the scores given by the latter. Results: The robustness and applicability of these architectures were tested on the Fantasia, MIT-BIH and CYBHi databases. The TCNN outperforms the RNN, achieving 100%, 96% and 90% accuracy, respectively, for identification, and 0.0%, 0.1% and 2.2% equal error rates for authentication. Conclusions: Compared to previous work, both architectures reached results beyond the state of the art. Though this experiment was a success, the inclusion of these techniques may provide a system that could reduce the validation acquisition time.
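The fusion step can be pictured as a two-feature classifier over the TCNN's prediction error and the RNN's score. The sketch below illustrates this with synthetic stand-in features and logistic regression; the feature distributions and the choice of classifier are assumptions, not the authors' exact setup.

```python
# Hedged sketch of score fusion for authentication: a simple classifier
# over (TCNN prediction error, RNN score). All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins: genuine attempts -> low prediction error, high score.
genuine = np.column_stack([rng.normal(0.1, 0.05, 200),
                           rng.normal(0.9, 0.10, 200)])
impostor = np.column_stack([rng.normal(0.6, 0.20, 200),
                            rng.normal(0.3, 0.20, 200)])
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = genuine

fusion = LogisticRegression().fit(X, y)
# P(genuine) for one attempt with error 0.12 and score 0.85:
print(fusion.predict_proba([[0.12, 0.85]])[0, 1])
```

Sweeping the decision threshold over such a fused score is also how an equal error rate like the reported 0.0-2.2% would be measured.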


2018 ◽  
Author(s):  
Rhea M Howard ◽  
Annie C. Spokes ◽  
Samuel A Mehr ◽  
Max Krasnow

Making decisions in a social context often requires weighing one's own wants against the needs and preferences of others. Adults are adept at incorporating multiple contextual features when deciding how to trade off their welfare against another's. For example, they are more willing to forgo a resource to benefit friends over strangers (a feature of the individual) or when the opportunity cost of giving up the resource is low (a feature of the situation). When does this capacity emerge in development? In Experiment 1 (N = 208), we assessed the decisions of 4- to 10-year-old children in a picture-based resource tradeoff task to test two questions: (1) When making repeated decisions to either benefit themselves or benefit another person, are children's choices internally consistent with a particular valuation of that individual? (2) Do children value friends more highly than strangers and enemies? We find that children demonstrate consistent person-specific welfare valuations and value friends more highly than strangers and enemies. In Experiment 2 (N = 200), we tested adults using the same pictorial method. The pattern of results successfully replicated, but adults' decisions were more consistent than children's, and they expressed more extreme valuations: relative to the children, they valued friends more and valued enemies less. We conclude that despite children's limited experience allocating resources and navigating complex social networks, they behave like adults in that they reference a stable person-specific valuation when deciding whether to benefit themselves or another, and that this valuation is modulated by the child's relationship with the target.
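Purely as an illustration of what an "internally consistent person-specific valuation" means, the sketch below estimates a valuation as the cost ratio at which a chooser switches from benefiting the other person to keeping the resource. This is a hypothetical stand-in, not the authors' analysis.

```python
# Illustrative sketch: infer a welfare valuation as the switch point in
# repeated self-vs-other tradeoff choices. Not the paper's method.
import numpy as np

def estimate_valuation(cost_ratios, gave):
    """cost_ratios: own cost per unit of the other's benefit, per trial.
    gave: 1 if the chooser benefited the other person, else 0.
    Returns the midpoint of the switch region, a simple stand-in for a
    person-specific welfare valuation."""
    cost_ratios = np.asarray(cost_ratios, dtype=float)
    gave = np.asarray(gave)
    max_given = cost_ratios[gave == 1].max(initial=0.0)
    min_kept = cost_ratios[gave == 0].min(initial=np.inf)
    return (max_given + min(min_kept, max_given + 1.0)) / 2

# A consistent chooser: gives at costs up to 0.6, keeps above that,
# so the estimated valuation falls between 0.6 and 0.8.
print(estimate_valuation([0.2, 0.4, 0.6, 0.8, 1.0], [1, 1, 1, 0, 0]))
```

Consistency in this framing means the choices are separable by a single threshold; a child whose giving did not track cost at all would produce no stable switch point.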


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH). We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
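The approach can be pictured as fitting cheap surrogate models of runtime and energy from a few measured runs, then predicting which configurations are non-dominated (Pareto-optimal). The sketch below illustrates that pipeline with invented data and a plain linear model; the authors' models, parameters, and data differ.

```python
# Hedged sketch: surrogate models from few runs, then a Pareto filter.
# All measurements and the linear model form are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Measured runs: (threads, cpu_frequency_GHz) -> runtime_s, energy_J
X = np.array([[4, 1.2], [4, 2.4], [8, 1.2], [8, 2.4], [16, 1.8], [16, 2.4]])
runtime = np.array([120, 70, 70, 42, 40, 33])
energy = np.array([900, 1100, 950, 1250, 1000, 1300])

rt_model = LinearRegression().fit(X, runtime)
en_model = LinearRegression().fit(X, energy)

# Predict every candidate configuration, then keep the non-dominated ones
# (no other point is at least as good on both objectives and better on one).
grid = np.array([[t, f] for t in (4, 8, 16) for f in (1.2, 1.8, 2.4)])
preds = np.column_stack([rt_model.predict(grid), en_model.predict(grid)])
pareto = [g for g, p in zip(grid, preds)
          if not any((q <= p).all() and (q < p).any() for q in preds)]
print(pareto)   # configurations predicted to lie on the trade-off front
```

A user then picks a point on the predicted front, for instance accepting a ~10% runtime loss for a large energy saving, without measuring the full configuration space.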


2014 ◽  
Vol 907 ◽  
pp. 139-149 ◽  
Author(s):  
Eckart Uhlmann ◽  
Florian Heitmüller

In gas turbines and turbo jet engines, high performance materials such as nickel-based alloys are widely used for blades and vanes. In the case of repair, finishing of complex turbine blades made of high performance materials is carried out predominantly by hand, making the repair process quite time consuming. The costs of presently available repair strategies, especially for integrated parts, are high due to individual process planning and the large number of manually performed work steps. Moreover, manually conducted repair carries severe risks of partial damage. As a result, economies of scale remain largely unexploited for repair tasks, although the number of components to be repaired is increasing significantly. In the future, consistent automation of the repair process chain should be achieved by developing adaptive, robot-assisted finishing strategies. The goal of this research is to exploit the automation potential of repair tasks by developing a technology that enables industrial robots to re-contour turbine blades via force-controlled belt grinding.
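The core of force-controlled grinding is a feedback loop that adjusts the tool engagement so the measured contact force tracks a setpoint despite varying blade geometry. The sketch below shows a toy incremental PI force loop against a linear contact-stiffness model; the gains, stiffness, setpoint, and control rate are all assumptions for illustration, not values from this research.

```python
# Toy sketch of a force-control loop for robot-assisted belt grinding:
# an incremental PI controller drives the feed position so the contact
# force tracks a setpoint. Plant and gains are invented for illustration.
TARGET_N = 20.0                  # desired normal grinding force (assumed)
KP, KI, DT = 0.05, 50.0, 0.001   # PI gains and 1 ms control period (assumed)
STIFFNESS = 2.0                  # toy contact model: N per mm of engagement

position, force, prev_error = 0.0, 0.0, 0.0
for _ in range(1000):
    error = TARGET_N - force
    # Incremental (velocity-form) PI update of the feed position.
    position += KP * (error - prev_error) + KI * error * DT
    prev_error = error
    force = STIFFNESS * max(position, 0.0)   # no pulling force off-contact

print(f"Settled force: {force:.1f} N")       # approaches the 20 N setpoint
```

In an adaptive strategy of the kind described, the setpoint and feed would additionally vary along the blade to account for the locally remaining material.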

