Deep extreme feature extraction: New MVA method for searching particles in high energy physics

Filomat ◽  
2018 ◽  
Vol 32 (5) ◽  
pp. 1711-1725
Author(s):  
Chao Ma ◽  
Jinhui Xu ◽  
Tiancheng Hou ◽  
Bin Lan ◽  
Zhenhua Zhang

In this paper, we propose Deep Extreme Feature Extraction (DEFE), a new ensemble MVA method for searching the τ+τ− channel of Higgs bosons in high energy physics. DEFE can be viewed as a deep ensemble learning scheme that trains a strongly diverse set of neural feature learners without explicitly encouraging diversity or penalizing correlations. This is achieved by adopting an implicit neural controller (not involved in the feed-forward computation) that directly controls and distributes the gradient flows from the higher-level deep prediction network. This model-independent controller ensures that every local feature learned is used in the feature-to-output mapping stage, avoiding blind averaging of features. DEFE makes the ensemble "deep" in the sense that it allows deep post-processing of these features, learning to select and abstract the ensemble of neural feature learners. Based on the construction and approximation of the so-called extreme selection region, the DEFE model can be trained efficiently and extracts discriminative features from multiple angles and dimensions, thereby improving the selection region for searching new particles in HEP. With this model, a selection region rich in signal processes can be obtained by training on a miniature set of collision events. In comparison with a classic deep neural network, DEFE shows state-of-the-art performance: the error rate decreases by about 37%, the accuracy exceeds 90% for the first time, and the discovery significance reaches 6.0 standard deviations. The experimental data show that DEFE trains an ensemble of discriminative feature learners that boosts the overall performance of the final prediction.
Furthermore, among the high-level features there are still some important patterns that are unidentified by DNNs and are independent of the low-level features; DEFE is able to identify these significant patterns more efficiently.
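As a rough sketch of the ensemble idea only (not the authors' implementation; the array names and shapes below are invented), a set of independent feature learners whose outputs are concatenated for a downstream prediction network, rather than blindly averaged, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "collision events": 100 events with 8 low-level features each.
X = rng.normal(size=(100, 8))

# An ensemble of K independent feature learners; fixed random projections
# with a tanh nonlinearity stand in for trained neural feature learners.
K, width = 4, 5
learners = [rng.normal(size=(8, width)) for _ in range(K)]
features = [np.tanh(X @ W) for W in learners]

# Instead of averaging the K feature maps, concatenate them so a deep
# prediction network can select and abstract every local feature.
stacked = np.concatenate(features, axis=1)
print(stacked.shape)  # (100, K * width)
```

The key design point this illustrates is that no learned feature is discarded or diluted before the feature-to-output mapping stage.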

2016 ◽  
Vol 40 ◽  
pp. 1660116
Author(s):  
Wim de Boer

This paper is a contribution to the memorial session for Michel Borghini at the Spin 2014 conference in Beijing, honoring his pivotal role in the development of polarized targets in high energy physics. Borghini was the first to propose the correct mechanism for dynamic polarization in polarized targets using organic materials doped with free radicals. In these amorphous materials the spin levels are broadened by spin-spin interactions and g-factor anisotropy, which allows a high dynamic polarization of nuclei by cooling of the spin-spin interaction reservoir. In this contribution I summarize the experimental evidence for this mechanism. These pertinent experiments were done at CERN in the years 1971 - 1974, when I was a graduate student under the guidance of Michel Borghini. I finish by briefly describing how Borghini’s spin temperature theory is now applied in cancer therapy.


2016 ◽  
Vol 31 (33) ◽  
pp. 1644015 ◽  
Author(s):  
Yuan Zhang

After the Higgs discovery, it is believed that a circular [Formula: see text] collider could serve as a Higgs factory. The high energy physics community in China launched a study of a 50–100 km ring collider. A preliminary conceptual design report (Pre-CDR) was published in early 2015, based on a 54-km ring design. Some progress on beam–beam effect studies since the Pre-CDR is shown in this paper. We estimate the beamstrahlung lifetime using a pure strong–strong code as a comparison with the result obtained using a quasi-strong–strong method. The effect of parasitic crossings in the pretzel scheme is also estimated for the very first time. The feasibility of the main parameters for the partial double ring scheme is evaluated from the point of view of beam–beam interaction.


Author(s):  
Victor Christianto

In a recent paper published in the journal Advances in High Energy Physics (AHEP), Yang Zhao et al. derived Maxwell equations on Cantor sets from the local fractional vector calculus. It can be shown that Maxwell equations on Cantor sets in a fractal bounded domain give efficiency and accuracy for describing the fractal electric and magnetic fields. However, so far there is no derivation of equations for the electrodynamics of superconductors on Cantor sets. Therefore, in this paper I present for the first time a derivation of London-Proca-Hirsch equations on Cantor sets. The name London-Proca-Hirsch is proposed because the equations are based on modifying Proca’s equations and London-Hirsch’s theory of the electrodynamics of superconductors. Considering that Proca equations may be used to explain electromagnetic effects in superconductors, I suggest that the proposed London-Proca-Hirsch equations on Cantor sets can describe the electromagnetics of fractal superconductors. It is hoped that this paper may stimulate further investigations and experiments, in particular on fractal superconductors. It may also have some impact on fractal cosmology modeling.
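For orientation, the standard (non-fractal) London and Proca equations that such a modification starts from can be written, in SI units, as:

```latex
\begin{aligned}
\nabla \times \mathbf{J}_s &= -\frac{1}{\mu_0 \lambda_L^2}\,\mathbf{B},
\qquad
\frac{\partial \mathbf{J}_s}{\partial t} = \frac{1}{\mu_0 \lambda_L^2}\,\mathbf{E},
\\[4pt]
\partial_\mu F^{\mu\nu} + \frac{m_\gamma^2 c^2}{\hbar^2}\, A^\nu &= \mu_0 J^\nu,
\end{aligned}
```

where $\mathbf{J}_s$ is the supercurrent density, $\lambda_L$ the London penetration depth, and $m_\gamma$ the effective photon mass of the Proca theory. The paper's proposal replaces the ordinary derivatives here with local fractional ones on Cantor sets.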


2019 ◽  
Vol 214 ◽  
pp. 01003
Author(s):  
Sioni Summers ◽  
Andrew Rose

Track reconstruction at the CMS experiment uses the Combinatorial Kalman Filter. The algorithm's computation time scales exponentially with pileup, which will pose a problem for the High Level Trigger at the High Luminosity LHC. FPGAs, which are already used extensively in hardware triggers, are becoming more widely used for compute acceleration. With a combination of high performance, energy efficiency, and predictable, low latency, FPGA accelerators are an interesting technology for high energy physics. Here, progress towards porting the CMS track reconstruction to Maxeler Technologies’ Dataflow Engines is shown, programmed in their high-level language MaxJ. The performance is compared to CPUs, and further steps to optimise for the architecture are presented.
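As a minimal illustration of the filtering step at the heart of such trackers (a scalar toy only; the CMS Combinatorial Kalman Filter propagates full track-state vectors and covariances through detector layers, and the constants here are invented):

```python
# One-dimensional Kalman filter: predict, then update with a measurement z.
F, H = 1.0, 1.0    # state transition and measurement models (identity in 1D)
Q, R = 0.01, 0.25  # process and measurement noise variances

def kf_step(x, P, z):
    # Predict the state and its variance forward.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0  # prior: unknown state, large uncertainty
for z in [1.1, 0.9, 1.05]:
    x, P = kf_step(x, P, z)
print(x, P)
```

In the combinatorial variant, this update is tried against several candidate hits per layer, which is what drives the exponential scaling with pileup mentioned above.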


2019 ◽  
Vol 214 ◽  
pp. 06026
Author(s):  
Jim Pivarski ◽  
Jaydeep Nandi ◽  
David Lange ◽  
Peter Elmer

In the last stages of data analysis, physicists are often forced to choose between simplicity and execution speed. In High Energy Physics (HEP), high-level languages like Python are known for ease of use but also for very slow execution. However, Python is used in speed-critical data analysis in other fields of science and industry. In those fields, most operations are performed on Numpy arrays in an array programming style; this style can be adopted for HEP by introducing variable-sized, nested data structures. We describe how array programming may be extended for HEP use-cases and an implementation known as awkward-array. We also present integration with ROOT, Apache Arrow, and Parquet, as well as preliminary performance results.
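A small sketch of the variable-sized ("jagged") data layout this style relies on, using a flat content array plus per-event offsets (the variable names and values here are illustrative, not the awkward-array API):

```python
import numpy as np

# All muon transverse momenta from four events, flattened into one array,
# with offsets marking event boundaries: events hold [2, 0, 3, 1] muons.
pt_content = np.array([32.1, 11.4, 58.0, 24.7, 7.2, 41.3])
offsets = np.array([0, 2, 2, 5, 6])

# Leading-muon pT for each non-empty event, sliced out via the offsets.
starts, stops = offsets[:-1], offsets[1:]
leading = [pt_content[a:b].max() for a, b in zip(starts, stops) if b > a]
print(leading)  # [32.1, 58.0, 41.3]
```

Storing nested structure this way keeps the numeric data in contiguous Numpy buffers, which is what makes whole-array operations fast.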


2021 ◽  
Vol 251 ◽  
pp. 03051
Author(s):  
Ali Hariri ◽  
Darya Dyachkova ◽  
Sergei Gleyzer

Accurate and fast simulation of particle physics processes is crucial for the high-energy physics community. Simulating particle interactions with the detector is both time consuming and computationally expensive. With its proton-proton collision energy of 13 TeV, the Large Hadron Collider is uniquely positioned to detect and measure the rare phenomena that can shape our knowledge of new interactions. The High-Luminosity Large Hadron Collider (HL-LHC) upgrade will put a significant strain on the computing infrastructure and budget due to increased event rates and levels of pile-up. Simulation of high-energy physics collisions needs to become significantly faster without sacrificing physics accuracy. Machine learning approaches can offer faster solutions while maintaining a high level of fidelity. We introduce a graph generative model that provides effective reconstruction of LHC events at the level of calorimeter deposits and tracks, paving the way for full detector-level fast simulation.


2020 ◽  
Vol 245 ◽  
pp. 07051
Author(s):  
Marian Babik ◽  
Shawn McKee

High Energy Physics (HEP) experiments rely on networks as one of the critical parts of their infrastructure, both within the participating laboratories and sites and globally to interconnect the sites, data centres and experiment instrumentation. Network virtualisation and programmable networks are two key enablers that facilitate agile, fast and more economical network infrastructures as well as service development, deployment and provisioning. Adoption of these technologies by HEP sites and experiments will allow them to design more scalable and robust networks while decreasing the overall cost and improving the effectiveness of resource utilization. The primary challenge we currently face is ensuring that WLCG and its constituent collaborations will have the networking capabilities required to most effectively exploit LHC data for the lifetime of the LHC. In this paper we provide a high-level summary of the HEPiX NFV Working Group report that explored some of the novel network capabilities that could potentially be deployed in time for the HL-LHC.

