WHAT YOUR NEXT EXPERIMENT'S DATA WILL LOOK LIKE: EVENT STORES IN THE LARGE HADRON COLLIDER ERA

2005 ◽  
Vol 20 (16) ◽  
pp. 3871-3873 ◽  
Author(s):  
DAVID MALON

Each new generation of collider experiments confronts the challenge of delivering an event store with at least the performance and functionality of current-generation stores, in the presence of an order of magnitude more data and new computing paradigms (object orientation just a few years ago; grid and service-based computing today). The ATLAS experiment at the Large Hadron Collider, for example, will produce 1.6-megabyte events at 200 Hz, an annual raw data volume of 3.2 petabytes. With derived and simulated data, the total volume may approach 10 petabytes per year. Scale, however, is not the only challenge. In the Large Hadron Collider (LHC) experiments, the preponderance of computing power will come from outside the host laboratory. More significantly, no single site will host a complete copy of the event store: data will be distributed, not simply replicated for convenience, and many physics analyses will routinely require distributed (grid) computing. This paper uses the emerging ATLAS computing model to provide a glimpse of how next-generation event stores are taking shape, touching on key issues in navigation, distribution, scale, coherence, data models and representation, metadata infrastructure, and the role(s) of databases in event store management.
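
As a quick sanity check of the numbers above (a back-of-the-envelope sketch, assuming the conventional figure of roughly 1e7 seconds of live data-taking per LHC year, which the abstract does not state explicitly):

    # 1.6 MB/event at 200 Hz over ~1e7 live seconds per year
    event_size_bytes = 1.6e6   # 1.6 megabytes per event
    rate_hz = 200              # events per second
    live_seconds = 1e7         # assumed live data-taking time per year
    raw_volume = event_size_bytes * rate_hz * live_seconds
    print(f"{raw_volume / 1e15:.1f} PB of raw data per year")  # -> 3.2 PB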

This chapter describes the exciting developments in micropattern detectors in recent years, including GEM and MICROMEGAS detectors combined with micropixel readout, novel designs of GEM and GEM-like detectors sensitive to UV and visible light, and large-area (>1 m²) GEM and MICROMEGAS prototypes developed for the upgrades of the experiments at the Large Hadron Collider. A special focus is put on a new generation of spark-proof micropattern detectors that use resistive electrodes instead of traditional metallic ones. These detectors operate as ordinary micropattern detectors; however, in the case of occasional sparks, the discharge current is limited by the resistivity of the electrodes, so the energy of the discharge is reduced by several orders of magnitude. Various designs of such detectors have been developed and successfully tested, including resistive GEM, resistive MICROMEGAS, and resistive MSGC. Among this family of detectors, a special place belongs to resistive parallel-plate micropattern detectors, which simultaneously achieve excellent spatial (38 µm) and timing (77 ps) resolution. Finally, the potential of multilayer detector technology for further optimization of detector operation is discussed.
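
The several-orders-of-magnitude reduction in discharge energy can be made plausible with a toy estimate: with metallic electrodes a spark can discharge essentially the whole detector capacitance, while a resistive electrode confines the discharge to a small patch around the streamer. A minimal sketch, with all geometry and voltage values being illustrative assumptions rather than numbers from the chapter:

    # Energy stored in a parallel-plate gap: E = C * V^2 / 2, with C = eps0 * A / d
    EPS0 = 8.85e-12                    # vacuum permittivity, F/m
    gap = 100e-6                       # assumed amplification gap, m
    v = 500.0                          # assumed operating voltage, V

    def spark_energy(area_m2):
        c = EPS0 * area_m2 / gap       # capacitance discharged by the spark
        return 0.5 * c * v**2

    e_metallic = spark_energy(0.01)    # full 10 cm x 10 cm detector discharges
    e_resistive = spark_energy(1e-6)   # only ~1 mm^2 around the streamer
    print(f"metallic: {e_metallic:.1e} J, resistive: {e_resistive:.1e} J")
    print(f"reduction: {e_metallic / e_resistive:.0f}x")  # ~4 orders of magnitude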


2015 ◽  
Vol 13 (4) ◽  
pp. 511-521 ◽  
Author(s):  
M. Battistin ◽  
S. Berry ◽  
A. Bitadze ◽  
P. Bonneau ◽  
J. Botelho-Direito ◽  
...  

Abstract The silicon tracker of the ATLAS experiment at the CERN Large Hadron Collider will operate at around –15°C to minimize the effects of radiation damage. The present cooling system is based on a conventional evaporative circuit, removing around 60 kW of heat dissipated by the silicon sensors and their local electronics. The compressors in the present circuit have proved less reliable than originally hoped and will be replaced with a thermosiphon. The thermosiphon uses gravity to circulate the coolant without any mechanical components (compressors or pumps) in the primary coolant circuit. The fluorocarbon coolant will be condensed at a temperature and pressure lower than those in the on-detector evaporators, but at a higher altitude, taking advantage of the 92 m height difference between the underground experiment and the services located on the surface. An extensive campaign of tests, detailed in this paper, was performed using two small-scale thermosiphon systems. These tests confirmed the design specifications of the full-scale plant and demonstrated operation over the temperature range required for ATLAS. During the testing phase the system demonstrated unattended, stable long-term running over a period of several weeks. The commissioning of the full-scale thermosiphon is ongoing, with full operation planned for late 2015.
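
The 92 m height difference is what replaces the compressors: the liquid column itself supplies the driving pressure. A rough sketch of that hydrostatic head, assuming a liquid fluorocarbon density of about 1350 kg/m³ (typical of C3F8; the paper's exact fluid properties may differ):

    # Hydrostatic driving pressure of the thermosiphon: dP = rho * g * h
    rho = 1350.0   # assumed liquid fluorocarbon density, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    h = 92.0       # height of surface condenser above the detector, m
    dp = rho * g * h
    print(f"driving pressure: {dp / 1e5:.1f} bar")  # roughly 12 bar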


2020 ◽  
Vol 245 ◽  
pp. 03027
Author(s):  
David Cameron ◽  
Vincent Garonne ◽  
Paul Millar ◽  
Shaojun Sun ◽  
Wenjing Wu

ATLAS@Home is a volunteer computing project which enables members of the public to contribute computing power to run simulations of the ATLAS experiment at CERN’s Large Hadron Collider. The computing resources provided to ATLAS@Home increasingly come not only from traditional volunteers but also from data centres or office computers at institutes associated with ATLAS. The design of ATLAS@Home was built around not giving out sensitive credentials to volunteers, which means that a sandbox is needed to bridge data transfers between trusted and untrusted domains. As the scale of ATLAS@Home increases, this sandbox becomes a potential data management bottleneck. This paper explores solutions to this problem based on relaxing the constraint against sending credentials to trusted volunteers, allowing direct data transfer to grid storage and avoiding the intermediate sandbox. Fully trusted resources such as grid worker nodes can run with full access to grid storage, whereas semi-trusted resources such as student desktops can be provided with “macaroons”: time-limited access tokens which can only be used for specific files. The steps towards implementing these solutions and initial results with real ATLAS simulation tasks are discussed, along with the experience gained so far and the next steps in the project.
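
To illustrate the macaroon workflow, here is a minimal sketch of how a trusted service might obtain a time-limited, single-file token and hand it to a semi-trusted volunteer node. The endpoint, paths, and JSON fields are modeled on a dCache-style WebDAV door and are assumptions, not the project's actual code:

    import json
    import requests

    DOOR = "https://storage.example.org"          # hypothetical storage endpoint
    PATH = "/atlas/volunteer/output/sim123.root"  # hypothetical output file

    # A trusted service, holding real grid credentials, requests a macaroon
    # valid for one hour and only for uploading this one file.
    resp = requests.post(
        DOOR + PATH,
        cert=("hostcert.pem", "hostkey.pem"),     # credentials never leave here
        headers={"Content-Type": "application/macaroon-request"},
        data=json.dumps({"caveats": ["activity:UPLOAD"], "validity": "PT1H"}),
    )
    macaroon = resp.json()["macaroon"]

    # The semi-trusted volunteer node uses the macaroon as a bearer token:
    # it can write this one file, nothing else, and only until the token expires.
    with open("sim123.root", "rb") as f:
        requests.put(DOOR + PATH, data=f,
                     headers={"Authorization": "Bearer " + macaroon})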


2021 ◽  
Vol 251 ◽  
pp. 04019
Author(s):  
Andrei Kazarov ◽  
Adrian Chitan ◽  
Andrei Kazymov ◽  
Alina Corso-Radu ◽  
Igor Aleksandrov ◽  
...  

The ATLAS experiment at the Large Hadron Collider (LHC) operated very successfully from 2008 to 2018, in two periods known as Run 1 and Run 2. ATLAS achieved an overall data-taking efficiency of 94%, largely constrained by the irreducible dead-time introduced to accommodate the limitations of the detector read-out electronics. Of the 6% dead-time, only about 15% could be attributed to the central trigger and DAQ system, and of that, a negligible fraction was due to the Control and Configuration subsystem. Despite these achievements, and in order to further improve the already excellent efficiency of the whole DAQ system in the coming Run 3, a new campaign of software updates was launched for the second long LHC shutdown (LS2). This paper presents, using a few selected examples, how the work was approached and which new technologies were introduced into the ATLAS Control and Configuration software. Although these are specific to this system, many of the solutions can be adapted to other distributed DAQ systems.
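
The quoted percentages compose as follows (simple arithmetic on the numbers in the abstract):

    # 94% overall efficiency -> 6% dead-time; ~15% of that is central trigger/DAQ
    total_deadtime = 1.0 - 0.94
    tdaq_deadtime = 0.15 * total_deadtime
    print(f"central trigger/DAQ dead-time: {tdaq_deadtime:.1%} of running time")
    # -> 0.9%; the Control and Configuration share is a negligible part of this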


2021 ◽  
Vol 16 (12) ◽  
pp. C12028
Author(s):  
Md.A.A. Samy ◽  
A. Lapertosa ◽  
L. Vannoli ◽  
C. Gemme ◽  
G.-F. Dalla Betta

Abstract CERN is planning to upgrade its Large Hadron Collider to the High Luminosity phase (HL-LHC), pushing detector technologies to cope with unprecedentedly demanding performance requirements in terms of particle rate and radiation hardness. The ATLAS experiment decided to equip the innermost layer (L0) of its Inner Tracker (ITk) with small-pitch 3D pixels of two different geometries: 25 µm × 100 µm for the central barrel and 50 µm × 50 µm for the lateral rings. A new generation of 3D pixels featuring these small-pitch dimensions and reduced active thickness (∼150 µm) has been developed for this purpose within a collaboration of INFN and FBK since 2014. Recently, the R&D activities have focused on the characterization of modules based on sensors compatible with the RD53A readout chip, which were tested both in the laboratory and at beam lines. In this paper, we report on the characterization of modules irradiated with protons up to a fluence of 1 × 10¹⁶ n_eq/cm², including threshold tuning and noise measurements, and results from beam tests performed at DESY. Moreover, we discuss the electrical characteristics at wafer level and at module level before and after irradiation.
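
For readers unfamiliar with pixel-module characterization, threshold and noise are typically extracted from a threshold scan by fitting the per-pixel occupancy versus injected charge with an error-function "S-curve". A minimal illustrative sketch with made-up numbers (not the analysis code used in the paper):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def s_curve(q, threshold, noise):
        # Hit probability for injected charge q (in electrons)
        return 0.5 * (1.0 + erf((q - threshold) / (np.sqrt(2.0) * noise)))

    # Toy threshold scan for one pixel: hit fraction vs. injected charge
    q_inj = np.linspace(500.0, 2500.0, 21)
    occupancy = s_curve(q_inj, 1500.0, 120.0)             # assumed true values
    occupancy += np.random.normal(0.0, 0.02, q_inj.size)  # toy measurement noise

    (thr, enc), _ = curve_fit(s_curve, q_inj, occupancy, p0=(1000.0, 100.0))
    print(f"threshold = {thr:.0f} e-, noise (ENC) = {enc:.0f} e-")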


2013 ◽  
Vol 22 (07) ◽  
pp. 1330015
Author(s):  
DOMIZIA ORESTANO

This document presents a brief overview of some of the experimental techniques employed by the ATLAS experiment at the CERN Large Hadron Collider (LHC) in the search for the Higgs boson predicted by the standard model (SM) of particle physics. The data and the statistical analyses that allowed the observation of a new particle to be firmly established in July 2012, only a few days before this presentation at the Marcel Grossmann Meeting, are described. The additional studies needed to check the consistency between the newly discovered particle and the Higgs boson are also discussed.

