Topological Complexity of Algorithms

Author(s):  
A. Libgober
2019 ◽  
Vol 8 (4) ◽  
pp. 9461-9464

Current quantum computer simulation strategies are inefficient, and their realizations also fail to mitigate the impact of the exponential complexity of simulated quantum computations. In this paper we propose a quantum computer simulator model in the form of an integrated development environment, QuIDE (Quantum Integrated Development Environment), to support the development of algorithms for future quantum computers. The environment combines graphical construction of circuit diagrams with the flexibility of source code. An analysis of algorithm complexity demonstrates the simulator's performance, both for the algorithms used in simulation and for its behavior during deployment.
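
As a rough illustration of the exponential cost referred to above, the following minimal sketch (Python with NumPy, an illustration rather than QuIDE's actual implementation) simulates an n-qubit register as a full state vector; the apply_single_qubit_gate helper and the three-qubit example are assumptions for demonstration.

```python
# Minimal state-vector sketch: n qubits require a 2**n-element state vector,
# which is why naive simulation scales exponentially with register size.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    # Reshape the 2**n vector so the target qubit becomes its own axis.
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; move it back.
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                       # start in |000>
for q in range(n):                   # uniform superposition over 2**n states
    state = apply_single_qubit_gate(state, H, q, n)
print(np.round(state, 3))            # eight amplitudes, each 1/sqrt(8)
```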


2017 ◽  
Vol 247 ◽  
pp. 105-112 ◽  
Author(s):  
Vladislav V. Gurzhiy ◽  
Sergey V. Krivovichev ◽  
Ivan G. Tananaev

1965 ◽  
Vol 12 (3) ◽  
pp. 364-375 ◽  
Author(s):  
Franco Mileto ◽  
Gianfranco Putzolu

2012 ◽  
Vol 19 (2) ◽  
pp. 215-225 ◽  
Author(s):  
H. O. Ghaffari ◽  
R. P. Young

Abstract. In this study we developed a network approach to the correlation patterns of void spaces in rough fractures (crack type II). We characterized friction networks with several network measures and related the network properties to fracture permeability. The hubs revealed in the complex aperture networks confirm the importance of highly correlated groups in conducting the dominant features of the dynamical aperture field. We found a universal power law between node degree and motif frequency: for triangles it reads T(k) ∝ k^β with β ≈ 2 ± 0.3. Investigation of localization effects on eigenvectors shows a remarkable difference between parallel and perpendicular aperture patches. Furthermore, we estimated the rate of energy stored in asperities and found that the rate of radiated energy is higher in parallel friction networks than in the transverse directions. The final part of our research highlights the four-point subgraph distribution and its correlation with fluid flow. For shear rupture, we observed a similar trend in the subgraph distributions resulting from parallel and transverse aperture profiles (a superfamily phenomenon).
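
As a hedged illustration of the degree-motif scaling above, the following sketch estimates the exponent β on a synthetic graph; the Barabási-Albert stand-in network and the simple log-log fit are assumptions for demonstration, not the paper's aperture data.

```python
# Estimate the T(k) ~ k**beta scaling between node degree and triangle
# (3-motif) frequency on a sample graph.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(2000, 4, seed=1)  # stand-in for an aperture network

tri = nx.triangles(G)                           # triangles through each node
degrees = np.array([d for _, d in G.degree()])
triangles = np.array([tri[v] for v in G.nodes()])

# Fit log T(k) = beta * log k + c over nodes with at least one triangle.
mask = triangles > 0
beta, c = np.polyfit(np.log(degrees[mask]), np.log(triangles[mask]), 1)
print(f"estimated beta ~ {beta:.2f}")           # paper reports beta ~ 2 +/- 0.3
```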


Author(s):  
B. M. Shubik

The development of hydrocarbon deposits is accompanied, as a rule, by an increase in the level of seismicity and, in particular, by the occurrence of technogenic earthquakes and other deformation phenomena associated with changes in the geodynamic regime. To monitor deformation and geodynamic processes, a seismic monitoring service should be organized; a similar monitoring system is also required for the analysis of aftershock and volcanic activity. Monitoring technology should be based on reliable and fast methods for the automatic detection and localization of seismic events of various scales. Traditional approaches to detecting and localizing earthquake epicenters and hypocenters rely on the analysis of data recorded by one or more single seismic stations: seismic event coordinates are estimated by extracting the signal from noise and accurately measuring the arrival times of a number of specific phases of the seismic signal at each recording point. Existing computational techniques have inherited this traditional approach. However, automatic procedures built on the ideology of manual processing turn out to be extremely laborious and ineffective, owing to the complexity of algorithms adequate to the actions of an experienced geophysicist-interpreter. This article describes new approaches to the synthesis of automatic monitoring systems based on the principles of emission tomography, the use of spatial registration systems, energy analysis of wave fields, and methods for converting real waveforms into low-frequency model signals (so-called filter masks/templates). The monitoring system was successfully tested by detecting and locating the epicenters and hypocenters of 19 weak local earthquakes in Israel, as well as a quarry explosion.
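
A minimal sketch of the back-projection idea behind emission tomography, under assumed station geometry, a constant velocity, and synthetic Gaussian envelopes standing in for real low-frequency model signals; it illustrates the principle only, not the article's monitoring system.

```python
# Back-project smoothed station energies onto a grid of trial sources and
# pick the point where the time-shifted stack is largest.
import numpy as np

fs, v = 100.0, 3.0                    # sample rate (Hz), wave speed (km/s)
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])  # km
true_src = np.array([6.0, 3.0])

# Synthetic low-frequency "model signals": a smooth energy pulse arriving
# at each station after the source-to-station travel time.
t = np.arange(0, 20, 1 / fs)
dists = np.linalg.norm(stations - true_src, axis=1)
envelopes = np.exp(-0.5 * ((t[None, :] - (2.0 + dists[:, None] / v)) / 0.3) ** 2)
envelopes += 0.05 * np.abs(np.random.randn(*envelopes.shape))   # noise floor

# Grid search: for each trial point, undo the travel-time moveout and stack.
xs = ys = np.arange(0, 10.5, 0.5)
best, best_xy = -np.inf, None
for x in xs:
    for y in ys:
        tt = np.linalg.norm(stations - [x, y], axis=1) / v      # travel times (s)
        shifts = (tt * fs).astype(int)
        stack = sum(np.roll(env, -s) for env, s in zip(envelopes, shifts))
        if stack.max() > best:                                  # coherent energy peak
            best, best_xy = stack.max(), (x, y)
print("estimated epicenter:", best_xy)   # should land near (6.0, 3.0)
```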


1992 ◽  
Vol 24 (2) ◽  
pp. 289-304 ◽  
Author(s):  
P J Densham ◽  
G Rushton

Solution techniques for location-allocation problems usually are not part of microcomputer-based geoprocessing systems because of the large volumes of data to process and store and the complexity of the algorithms. In this paper, it is shown that processing costs for the most accurate heuristic location-allocation algorithm can be drastically reduced by exploiting the spatial structure of location-allocation problems. The strategies used, preprocessing interpoint distance data as both candidate and demand strings and using them to update an allocation table, allow the solution of large problems (3000 nodes) in a microcomputer-based, interactive decision-making environment. Moreover, these strategies yield solution times that increase approximately linearly with problem size. Tests on four network problems validate these claims.
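
A minimal sketch of how the distance strings and allocation table described above might fit together, assuming random planar coordinates and a closest/second-closest table; the sizes, distances, and helper below are illustrative assumptions, not the authors' implementation.

```python
# "Demand strings" hold, for each demand node, the candidate sites sorted by
# distance; an allocation table caches each node's two nearest open sites so
# a heuristic swap can be evaluated without rescanning the distance matrix.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.random((200, 2))         # demand node coordinates
candidates = rng.random((30, 2))      # candidate facility sites
dist = np.linalg.norm(demand[:, None, :] - candidates[None, :, :], axis=2)

# Demand strings: for each demand node, candidate indices sorted by distance.
demand_strings = np.argsort(dist, axis=1)
# Candidate strings: for each candidate, demand indices sorted by distance
# (consulted when evaluating which nodes a swapped-in site would capture).
candidate_strings = np.argsort(dist, axis=0).T

def allocation_table(open_set):
    """Closest and second-closest open facility for every demand node,
    found by scanning each demand string until two open sites are seen."""
    table = np.empty((len(demand), 2), dtype=int)
    for i, string in enumerate(demand_strings):
        table[i] = [c for c in string if c in open_set][:2]
    return table

open_set = set(range(5))              # start with facilities 0..4 open
table = allocation_table(open_set)
cost = dist[np.arange(len(demand)), table[:, 0]].sum()
print(f"total distance with sites {sorted(open_set)}: {cost:.2f}")
```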


2021 ◽  
Vol 4 ◽  
Author(s):  
Fan Zhang ◽  
Melissa Petersen ◽  
Leigh Johnson ◽  
James Hall ◽  
Sid E. O’Bryant

Driven by massive datasets that comprise biomarkers from both blood and magnetic resonance imaging (MRI), the need for advanced learning algorithms and accelerator architectures, such as GPUs and FPGAs, has increased. Machine learning (ML) methods have delivered remarkable predictions for the early diagnosis of Alzheimer's disease (AD). Although ML has improved the accuracy of AD prediction, it requires increasingly complex algorithms, for example for hyperparameter tuning, which in turn increases the computational cost. Thus, accelerating high-performance ML for AD is an important research challenge facing these fields. This work reports a multicore high-performance support vector machine (SVM) hyperparameter tuning workflow with 100-times-repeated 5-fold cross-validation for speeding up ML for AD. For demonstration and evaluation purposes, the high-performance hyperparameter tuning model was applied to public MRI data for AD, together with demographic factors such as age, sex, and education. Results showed that computational efficiency increased by 96%, which helps shed light on future diagnostic AD biomarker applications. The high-performance hyperparameter tuning model can also be applied to other ML algorithms such as random forests, logistic regression, xgboost, etc.
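
A minimal sketch of the core of such a workflow using scikit-learn, with a placeholder dataset and parameter grid rather than the paper's MRI features; n_jobs=-1 supplies the multicore parallelism, and RepeatedStratifiedKFold provides the repeated 5-fold cross-validation.

```python
# Multicore SVM grid search with 100-times-repeated 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=100, random_state=0)
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}

search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid=grid,
    cv=cv,
    n_jobs=-1,          # fan the 500 folds per setting out across all cores
    scoring="roc_auc",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```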

