Strategies for large-scale structural problems on high-performance computers

1990 ◽  
Vol 18 (3b) ◽  
pp. 267-280
Author(s):  
Ahmed K. Noor ◽  
Jeanne M. Peters


1996 ◽
Vol 07 (03) ◽  
pp. 295-303 ◽  
Author(s):  
P. D. CODDINGTON

Large-scale Monte Carlo simulations require high-quality random number generators to ensure correct results. The contrapositive of this statement is also true — the quality of random number generators can be tested by using them in large-scale Monte Carlo simulations. We have tested many commonly-used random number generators with high precision Monte Carlo simulations of the 2-d Ising model using the Metropolis, Swendsen-Wang, and Wolff algorithms. This work is being extended to the testing of random number generators for parallel computers. The results of these tests are presented, along with recommendations for random number generators for high-performance computers, particularly for lattice Monte Carlo simulations.
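For concreteness, here is a minimal sketch (not Coddington's test code, which the abstract does not give) of the kind of test described: a Metropolis simulation of the 2-d Ising model whose measured observables can be compared against exactly known results, so that a defective generator betrays itself as a statistically significant deviation. The lattice size, coupling and generator choice are illustrative placeholders.

```python
import numpy as np

def ising_energy_per_spin(L=32, beta=0.4407, sweeps=2000, rng=None):
    """Metropolis simulation of the 2-d Ising model; beta is near the
    exact critical coupling ln(1 + sqrt(2))/2 ~ 0.4407, where RNG
    defects are most visible. Returns the mean energy per spin."""
    rng = rng or np.random.default_rng()   # the generator under test
    s = rng.choice([-1, 1], size=(L, L))
    energies = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * nb
            # Metropolis rule: accept with probability min(1, exp(-beta*dE)).
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]
        # Each bond is counted once via the right and down neighbours.
        energies.append(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))
    return np.mean(energies) / (L * L)
```

Running this with two independent generators, or comparing against the exactly known finite-lattice energy, and checking agreement within the statistical errors is the essence of the test; cluster algorithms such as Swendsen-Wang and Wolff stress the generator differently, which is why the paper uses all three.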


Author(s):  
Valentin Cristea ◽  
Ciprian Dobre ◽  
Corina Stratan ◽  
Florin Pop

The architectural shift presented in the previous chapters towards high-performance computers assembled from large numbers of commodity resources raises numerous design issues and assumptions pertaining to traceability, fault tolerance and scalability. Hence, one of the key challenges faced by high-performance distributed systems is scalable monitoring of system state. The aim of this chapter is to survey existing work and trends in distributed systems monitoring by introducing the concepts, requirements, techniques and models involved, together with related standardization activities. Monitoring can be defined as the process of dynamic collection, interpretation and presentation of information concerning the characteristics and status of resources of interest. It is needed for various purposes, such as debugging, testing, program visualization and animation. It may also be used for general management activities, which have a more permanent and continuous nature (performance management, configuration management, fault management, security management, etc.). In this case the behavior of the system is observed and monitoring information is gathered; this information is used to make management decisions and to perform the appropriate control actions on the system. Unlike monitoring, which is generally a passive process, control actively changes the behavior of the managed system, and it has to be considered and modeled separately. Monitoring thus proves to be an essential process for observing and improving the reliability and performance of large-scale distributed systems.
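As a toy illustration of the "dynamic collection" step defined above (a hypothetical sketch, not a system from the survey; the Monitor and Sample names are invented), a minimal pull-based monitor might look like this:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Sample:
    resource: str
    metric: str
    value: float
    timestamp: float = field(default_factory=time.time)

class Monitor:
    """Minimal pull-based monitor: periodically collects samples from
    registered probe callables and hands them to a consumer."""
    def __init__(self, interval=5.0):
        self.interval = interval
        self.probes = {}   # resource name -> zero-argument callable

    def register(self, resource, probe):
        self.probes[resource] = probe

    def run_once(self):
        # Collection: query every registered resource.
        return [Sample(name, "load", float(probe()))
                for name, probe in self.probes.items()]

    def run(self, consumer, cycles):
        # Interpretation/presentation is delegated to the consumer,
        # keeping monitoring passive (no control actions taken here).
        for _ in range(cycles):
            consumer(self.run_once())
            time.sleep(self.interval)
```

Control, by contrast, would act back on the system in response to such samples, which is why the chapter models it separately from monitoring.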


2002 ◽  
Vol 1 (4) ◽  
pp. 403-420 ◽  
Author(s):  
D. Stanescu ◽  
J. Xu ◽  
M.Y. Hussaini ◽  
F. Farassat

The purpose of this paper is to demonstrate the feasibility of computing the fan inlet noise field around a real twin-engine aircraft, including the radiation of the main spinning modes from the engine as well as their reflection and scattering by the fuselage and the wing. This first-cut large-scale computation is based on time-domain and frequency-domain approaches that employ spectral element methods for spatial discretization. The numerical algorithms are designed to exploit high-performance computers such as the IBM SP4. Although the simulations could not match the exact conditions of the only available experimental data set, they predict the trends of the measured noise field fairly well.
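The paper gives no code; as a rough sketch of the building block behind a spectral element discretization, the snippet below constructs the textbook Legendre-Gauss-Lobatto nodes and collocation differentiation matrix assembled on each element (a standard construction, not the authors' implementation):

```python
import numpy as np

def lgl_nodes(n):
    """Legendre-Gauss-Lobatto points on [-1, 1]: the endpoints plus
    the roots of P_n', where P_n is the degree-n Legendre polynomial."""
    coeffs = np.zeros(n + 1)
    coeffs[-1] = 1.0
    interior = np.polynomial.legendre.Legendre(coeffs).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

def lgl_diff_matrix(x):
    """Collocation derivative matrix D with (D u)_i ~ u'(x_i) for
    polynomials of degree <= n sampled at the LGL nodes x."""
    n = len(x) - 1
    coeffs = np.zeros(n + 1)
    coeffs[-1] = 1.0
    Pn = np.polynomial.legendre.Legendre(coeffs)(x)  # P_n at each node
    D = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i, j] = Pn[i] / (Pn[j] * (x[i] - x[j]))
    D[0, 0] = -n * (n + 1) / 4.0
    D[n, n] = n * (n + 1) / 4.0
    return D

x = lgl_nodes(12)
D = lgl_diff_matrix(x)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))  # spectrally small error
```

The spectrally small error in the final check is what makes such discretizations attractive for propagating acoustic waves accurately over the many wavelengths separating an engine inlet from the far field.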


2019 ◽  
Author(s):  
Cristiano Capone ◽  
Matteo di Volo ◽  
Alberto Romagnoni ◽  
Maurizio Mattia ◽  
Alain Destexhe

Growing interest has been shown in recent years in large-scale spiking simulations of cerebral neuronal networks, driven both by the availability of high-performance computers and by increasingly detailed experimental observations. In this context it is important to understand how population dynamics are generated by the designed parameters of the networks, which is the question addressed by mean-field theories. Although analytic solutions for the mean-field dynamics have already been proposed for current-based (CUBA) neurons, a complete analytic model for more realistic neural properties, such as conductance-based (COBA) networks of adaptive exponential (AdEx) neurons, has not yet been achieved. Here, we propose a novel principled approach to map a COBA model onto a CUBA one. This approach provides a state-dependent approximation capable of reliably predicting the firing-rate properties of an AdEx neuron with non-instantaneous COBA integration. We also applied our theory to population dynamics, predicting the dynamical properties of the network in very different regimes, such as asynchronous irregular (AI) and synchronous irregular (SI) activity (slow oscillations, SO). These results show that a state-dependent approximation can be successfully introduced to take into account the subtle effects of COBA integration, yielding a theory that correctly predicts the activity in regimes of alternating states such as slow oscillations.
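To make the COBA-to-CUBA idea concrete, here is a deliberately simplified sketch (a passive leaky membrane rather than the paper's full AdEx model, with placeholder parameters): the conductance-based drive g(t)(E - V) is replaced by the effective current g(t)(E - mu_V), the driving force being frozen at a state-dependent mean voltage mu_V.

```python
import numpy as np

def simulate_membrane(g_e, g_i, mode="coba", mu_v=-55e-3, dt=1e-4,
                      C=200e-12, g_l=10e-9, E_l=-65e-3,
                      E_e=0.0, E_i=-80e-3):
    """Integrate a leaky membrane driven by excitatory/inhibitory
    conductance traces, either with the full conductance-based (COBA)
    drive g*(E - V) or with the current-based (CUBA) approximation
    g*(E - mu_v) in which the driving force is frozen at mu_v."""
    v = np.empty(len(g_e))
    v[0] = E_l
    for t in range(1, len(g_e)):
        if mode == "coba":
            i_syn = g_e[t] * (E_e - v[t - 1]) + g_i[t] * (E_i - v[t - 1])
        else:  # "cuba": state-dependent effective current
            i_syn = g_e[t] * (E_e - mu_v) + g_i[t] * (E_i - mu_v)
        v[t] = v[t - 1] + dt * (g_l * (E_l - v[t - 1]) + i_syn) / C
    return v

# Placeholder noisy conductance traces standing in for synaptic input.
rng = np.random.default_rng(0)
g_e = np.abs(rng.normal(6e-9, 2e-9, 20000))
g_i = np.abs(rng.normal(10e-9, 3e-9, 20000))
v_coba = simulate_membrane(g_e, g_i, "coba")
v_cuba = simulate_membrane(g_e, g_i, "cuba", mu_v=v_coba.mean())
print(abs(v_coba.mean() - v_cuba.mean()))  # small when mu_v is well chosen
```

The point of a state-dependent approximation is precisely that mu_V is not a fixed constant but tracks the regime the population is in, which is what allows the mapping to remain accurate across AI and SI states.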


2020 ◽  
Vol 643 ◽  
pp. A42 ◽  
Author(s):  
Y. Akrami ◽  
K. J. Andersen ◽  
M. Ashdown ◽  
C. Baccigalupi ◽  
...  

We present the NPIPE processing pipeline, which produces calibrated frequency maps in temperature and polarization from data from the Planck Low Frequency Instrument (LFI) and High Frequency Instrument (HFI) using high-performance computers. NPIPE represents a natural evolution of previous Planck analysis efforts, and combines some of the most powerful features of the separate LFI and HFI analysis pipelines. For example, following the LFI 2018 processing procedure, NPIPE uses foreground polarization priors during the calibration stage in order to break scanning-induced degeneracies. Similarly, NPIPE employs the HFI 2018 time-domain processing methodology to correct for bandpass mismatch at all frequencies. In addition, NPIPE introduces several improvements, including, but not limited to: inclusion of the 8% of data collected during repointing manoeuvres; smoothing of the LFI reference load data streams; in-flight estimation of detector polarization parameters; and construction of maximally independent detector-set split maps. For component-separation purposes, important improvements include: maps that retain the CMB Solar dipole, allowing for high-precision relative calibration in higher-level analyses; well-defined single-detector maps, allowing for robust CO extraction; and HFI temperature maps between 217 and 857 GHz that are binned into 0′.9 pixels (Nside = 4096), ensuring that the full angular information in the data is represented in the maps even at the highest Planck resolutions. The net effect of these improvements is lower levels of noise and systematics in both frequency and component maps at essentially all angular scales, as well as notably improved internal consistency between the various frequency channels. Based on the NPIPE maps, we present the first estimate of the Solar dipole determined through component separation across all nine Planck frequencies. The amplitude is (3366.6 ± 2.7) μK, consistent with, albeit slightly higher than, earlier estimates. From the large-scale polarization data, we derive an updated estimate of the optical depth of reionization of τ = 0.051 ± 0.006, which appears robust with respect to data and sky cuts. There are 600 complete signal, noise and systematics simulations of the full-frequency and detector-set maps. As a Planck first, these simulations include full time-domain processing of the beam-convolved CMB anisotropies. The release of NPIPE maps and simulations is accompanied by a complete suite of raw and processed time-ordered data and the software, scripts, auxiliary data, and parameter files needed to improve further on the analysis and to run matching simulations.
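As a quick consistency check on the quoted pixelization: HEALPix divides the sphere into 12 Nside² equal-area pixels, so the Nside = 4096 maps indeed have pixels roughly 0′.9 across.

```python
import math

nside = 4096
npix = 12 * nside**2                      # HEALPix pixel count
pix_area_sr = 4.0 * math.pi / npix        # equal-area pixels
side_arcmin = math.degrees(math.sqrt(pix_area_sr)) * 60.0
print(f"{npix} pixels, ~{side_arcmin:.2f} arcmin across")  # ~0.86 arcmin
```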


2021 ◽  
Vol 11 (6) ◽  
Author(s):  
Agastya P. Bhati ◽  
Shunzhou Wan ◽  
Dario Alfè ◽  
Austin R. Clyde ◽  
Mathis Bode ◽  
...  

The race to meet the challenges of the global pandemic has served as a reminder that the existing drug discovery process is expensive, inefficient and slow. A major bottleneck is screening the vast number of potential small molecules to shortlist lead compounds for antiviral drug development. New opportunities to accelerate drug discovery lie at the interface between machine learning methods, in this case developed for linear accelerators, and physics-based methods. The two in silico methods each have their own advantages and limitations which, interestingly, complement each other. Here, we present an innovative infrastructural development that combines both approaches to accelerate drug discovery. The scale of the resulting workflow is such that it depends on supercomputing to achieve extremely high throughput. We have demonstrated the viability of this workflow for the study of inhibitors of four COVID-19 target proteins, and our ability to perform the required large-scale calculations to identify lead antiviral compounds through repurposing on a variety of supercomputers.
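The paper's infrastructure itself is not reproduced here; the sketch below is a generic, hypothetical version of such a hybrid loop (all names and the toy scoring functions are invented): a cheap ML surrogate shortlists candidates, the expensive physics-based method scores only the shortlist, and the new labels feed back to refine the surrogate.

```python
import heapq
import random

def hybrid_screen(library, surrogate_score, physics_score,
                  rounds=3, batch=100):
    """Generic hybrid screening loop: a cheap ML surrogate shortlists
    candidates, an expensive physics-based method scores the shortlist,
    and the labelled results are available to refit the surrogate."""
    labelled = {}                      # molecule -> physics-based score
    for _ in range(rounds):
        # 1. Cheap pass over the whole library with the current surrogate.
        ranked = heapq.nlargest(batch, library,
                                key=lambda m: surrogate_score(m, labelled))
        # 2. Expensive pass (e.g. binding free energy) on the shortlist only.
        for mol in ranked:
            if mol not in labelled:
                labelled[mol] = physics_score(mol)
        # 3. The new labels can refine the surrogate in the next round,
        #    via the `labelled` dict passed to it.
    return sorted(labelled.items(), key=lambda kv: -kv[1])

# Toy stand-ins: random "molecules", a noisy surrogate, a slow exact score.
random.seed(1)
library = [f"mol{i}" for i in range(10_000)]
true = {m: random.random() for m in library}
hits = hybrid_screen(library,
                     lambda m, seen: true[m] + random.gauss(0, 0.3),
                     lambda m: true[m])
print(hits[:3])
```

The design choice is throughput: the expensive scorer is only ever called on a small, surrogate-ranked fraction of the library, which is what makes supercomputer-scale campaigns over vast compound libraries tractable.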

