An Architecture for Incorporating Interactive Visualizations Into Scientific Simulations

Author(s):  
Ravishankar Mathur ◽  
Cesar A. Ocampo
Author(s):  
William B. Rouse

This book discusses the use of models and interactive visualizations to explore designs of systems and policies, and to determine whether such designs would be effective. Executives and senior managers are very interested in what “data analytics” can do for them and, quite recently, in the prospects for artificial intelligence and machine learning. They want to understand and then invest wisely. They are reasonably skeptical, having experienced overselling and under-delivery, and they ask about reasonable and realistic expectations. Their concern is with the futurity of decisions they are currently entertaining. They cannot fully address this concern empirically, so they need some way to make predictions. The problem is that one rarely can predict exactly what will happen, only what might happen. To overcome this limitation, executives can be provided with predictions of possible futures and the conditions under which each scenario is likely to emerge. Models can help them to understand these possible futures. Most executives find such candor refreshing, perhaps even liberating. Their job becomes one of imagining and designing a portfolio of possible futures, assisted by interactive computational models. Understanding and managing uncertainty is central to their job; indeed, doing this better than competitors is a hallmark of success. This book is intended to help them understand what fundamentally needs to be done, why it needs to be done, and how to do it. The hope is that readers will discuss this book and, in the process, develop a “shared mental model” of computational modeling, which will greatly enhance their chances of success.


Author(s):  
Charles Miller ◽  
Lucas Lecheler ◽  
Bradford Hosack ◽  
Aaron Doering ◽  
Simon Hooper

Information visualization involves the visual, and sometimes interactive, presentation and organization of complex data in a clear, compelling representation. Information visualization is an essential element of peoples’ daily lives, especially for those in data-driven professions such as online education. Although information visualization research and methods are prevalent in fields as diverse as healthcare, statistics, economics, information technology, computer science, and politics, few examples of successful information visualization design or integration exist in online learning. The authors provide a background of information visualization in education, explore a set of potential roles for information visualization in the future design and integration of online learning environments, provide examples of contemporary interactive visualizations in education, and discuss opportunities to move forward with design and research in this emerging area.


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

The increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions, accelerating the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression time was sped up by 2× compared to using a single compression method uniformly.
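The core idea, scoring each data region by an importance metric and then choosing a compressor per region, can be illustrated with a minimal sketch. Here variance is a stand-in importance metric and coarse quantization is the lossy path; the paper's actual metrics, compressors, and thresholds are not specified here, so all of these choices are assumptions:

```python
import struct
import zlib
from statistics import pvariance

def compress_blocks(data, block_size=64, threshold=0.5):
    """Split `data` into fixed-size blocks, score each by variance
    (a stand-in importance metric), and pick a compressor per block:
    high-importance blocks are compressed losslessly, low-importance
    blocks are quantized first (lossy) so they compress further."""
    out = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        importance = pvariance(block) if len(block) > 1 else 0.0
        if importance >= threshold:
            # Important region: keep full precision, lossless zlib.
            raw = struct.pack(f"{len(block)}d", *block)
            out.append(("lossless", zlib.compress(raw)))
        else:
            # Unimportant region: round aggressively, then compress.
            quantized = [round(x, 1) for x in block]
            raw = struct.pack(f"{len(quantized)}d", *quantized)
            out.append(("lossy", zlib.compress(raw)))
    return out
```

In an in-transit setting, each block's tag and payload would be shipped to the visualization nodes, which dispatch to the matching decompressor; a load balancer could additionally redistribute blocks by compressed size.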


2020 ◽  
Author(s):  
Stevenn Volant ◽  
Pierre Lechat ◽  
Perrine Woringer ◽  
Laurence Motreff ◽  
Christophe Malabat ◽  
...  

Abstract
Background: Comparing the composition of microbial communities among groups of interest (e.g., patients vs healthy individuals) is a central aspect of microbiome research. It typically involves sequencing, data processing, statistical analysis and graphical representation of the detected signatures. Such an analysis is normally obtained by using a set of different applications that require specific expertise for installation, data processing and, in some cases, programming skills. Results: Here, we present SHAMAN, an interactive web application we developed in order to facilitate the use of (i) a bioinformatic workflow for metataxonomic analysis, (ii) reliable statistical modelling, and (iii) a panel of interactive visualizations that is among the largest of the currently available options. SHAMAN is specifically designed for non-expert users, who may benefit from using an integrated version of the different analytic steps underlying a proper metagenomic analysis. The application is freely accessible at http://shaman.pasteur.fr/, and may also work as a standalone application with a Docker container (aghozlane/shaman), conda and R. The source code is written in R and is available at https://github.com/aghozlane/shaman. Using two datasets (a mock community sequencing and published 16S rRNA metagenomic data), we illustrate the strengths of SHAMAN in quickly performing a complete metataxonomic analysis. Conclusions: With SHAMAN, we aim to provide the scientific community with a platform that simplifies reproducible quantitative analysis of metagenomic data.


Author(s):  
Elena Makarova ◽  
Dmitriy Lagerev ◽  
Fedor Lozbinev ◽  
...  

This paper describes text data analysis in the course of managerial decision making. The process of collecting textual data for further analysis, as well as the use of visualization for human control over the correctness of data collection, is considered in depth. A modified algorithm for creating an "n-gram cloud" visualization is proposed, which can help make the visualization accessible to people with visual impairments. A method for visualizing n-gram vector representation models (word embeddings) is also proposed. On the basis of the conducted research, part of a software package was implemented that creates interactive visualizations in a browser and supports interaction with them.
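The counting-and-scaling step behind an n-gram cloud can be sketched briefly. This is only an illustration of the general technique, not the paper's modified algorithm (the accessibility-oriented modification concerns rendering and is not reproduced here); the function name and the linear size mapping are assumptions:

```python
from collections import Counter

def ngram_sizes(tokens, n=2, min_pt=10, max_pt=48):
    """Count n-grams in a token list and map each count to a font
    size (in points) for an n-gram cloud. Returns {ngram: size}."""
    # Sliding window of n consecutive tokens, counted as tuples.
    grams = Counter(zip(*(tokens[i:] for i in range(n))))
    if not grams:
        return {}
    lo, hi = min(grams.values()), max(grams.values())
    span = (hi - lo) or 1  # avoid division by zero for uniform counts
    return {
        " ".join(gram): min_pt + (count - lo) * (max_pt - min_pt) // span
        for gram, count in grams.items()
    }
```

A renderer would then draw each n-gram at its computed size; an accessibility-minded variant might additionally expose the raw counts as text for screen readers.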


2021 ◽  
Vol 4 ◽  
pp. 152-165
Author(s):  
Andrew Iliadis ◽  
Tony Liao ◽  
Isabel Pedersen ◽  
Jing Han

Machines produce and operate using complex systems of metadata that need to be catalogued, sorted, and processed. Many students lack the experience and knowledge needed to understand metadata as part of their data literacy skills. This paper describes an educational and interactive database activity designed for teaching undergraduate communication students about the creation, value, and logic of structured data. Through a set of virtual instructional videos and interactive visualizations, the paper describes how students can gain experience with structured data and apply that knowledge to successfully find, curate, and classify a digital archive of media artifacts. The pedagogical activity, teaching materials, and archives are facilitated through and housed in an online resource called Fabric of Digital Life (fabricofdigitallife.com). We end by discussing the activity’s relevance for the emerging field of human-machine communication.

