The design of a memory controller for DDR SDRAM

2021 ◽  
Author(s):  
Gary Anthony Thorpe

Memory system performance is an important factor in determining overall system performance. The design of key components of the memory system, such as the memory controller, becomes more important as memory performance becomes a limiting factor in high performance computing. This work focuses on the design of a unit which sends control signals to Double Data Rate Synchronous DRAM (DDR SDRAM). The design is based on established concepts such as access reordering. A novel, adaptive page policy based on a machine learning algorithm has been developed in this work and evaluated against traditional page policies. The work illustrates some of the design trade-offs in a memory controller and the performance of the designs when using real application address traces. The results show that access reordering improves the performance of DDR SDRAM compared to in-order scheduling (up to 50% improvement) and that scheduling multiple requests can result in latency hiding. The dynamic page policy approximates the best static page policy in most cases.
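
The abstract does not spell out the learning algorithm behind the adaptive page policy, but a minimal sketch of one common form of such a policy, a per-bank saturating counter that learns whether keeping a DRAM row open tends to pay off, might look like the following. The class and its parameters are illustrative assumptions, not the thesis's actual design.

```python
# Minimal sketch of an adaptive DRAM page policy (illustrative, not the
# thesis's exact algorithm): a per-bank 2-bit saturating counter learns
# whether keeping a row open tends to produce row-buffer hits.

class AdaptivePagePolicy:
    """Per-bank saturating counters: high values favour open-page behaviour."""

    def __init__(self, num_banks):
        self.counters = [2] * num_banks   # start weakly open-page
        self.open_row = [None] * num_banks

    def access(self, bank, row):
        hit = self.open_row[bank] == row
        # Reinforce: a row hit rewards open-page, a row conflict punishes it.
        if hit:
            self.counters[bank] = min(3, self.counters[bank] + 1)
        elif self.open_row[bank] is not None:
            self.counters[bank] = max(0, self.counters[bank] - 1)
        # Policy decision: keep the row open only if the counter says so.
        self.open_row[bank] = row if self.counters[bank] >= 2 else None
        return hit

policy = AdaptivePagePolicy(num_banks=8)
print(policy.access(0, 42), policy.access(0, 42))  # miss, then a row hit
```

Because the counter adapts per bank, such a policy can drift toward open-page behaviour for streaming access patterns and toward closed-page for random ones, which is consistent with the abstract's claim that the dynamic policy approximates the best static policy in most cases.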


Energies ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 2129 ◽  
Author(s):  
Alberto Cocaña-Fernández ◽  
Emilio San José Guiote ◽  
Luciano Sánchez ◽  
José Ranilla

High Performance Computing Clusters (HPCCs) are common platforms for solving both up-to-date challenges and high-dimensional problems faced by IT service providers. Nonetheless, the use of HPCCs carries a substantial and growing economic and environmental impact, owing to the large amount of energy they need to operate. In this paper, a two-stage holistic optimisation mechanism is proposed to manage HPCCs in an eco-efficient manner. The first stage logically optimises the resources of the HPCC through reactive and proactive strategies, while the second stage optimises hardware allocation by leveraging a genetic fuzzy system tailored to the underlying equipment. The model finds optimal trade-offs among quality of service, direct/indirect operating costs, and environmental impact through multiobjective evolutionary algorithms meeting the preferences of the administrator. Experiments were conducted using both actual workloads from the Scientific Modelling Cluster of the University of Oviedo and synthetically generated workloads, showing statistical evidence supporting the adoption of the new mechanism.
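
As a rough illustration of what a multiobjective search like this optimises over, the sketch below extracts the Pareto front from candidate cluster configurations scored on quality of service, cost, and environmental impact. The objective tuple and the sample values are assumptions for illustration, not the paper's model.

```python
# Minimal sketch of Pareto-front extraction over candidate cluster
# configurations, the core filtering step inside a multiobjective
# evolutionary algorithm. Each candidate is an assumed objective tuple
# (qos_penalty, cost, co2), all to be minimised.

def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates (the trade-off frontier)."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

configs = [(0.1, 120.0, 30.0), (0.3, 90.0, 25.0), (0.2, 130.0, 40.0)]
print(pareto_front(configs))  # the third config is dominated by the first
```

The administrator's preferences mentioned in the abstract would then be applied to pick one configuration from this frontier rather than baking a single weighting into the search.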


2021 ◽  
Vol 2069 (1) ◽  
pp. 012153
Author(s):  
Rania Labib

Architects often investigate the daylighting performance of hundreds of design solutions and configurations to ensure an energy-efficient solution for their designs. To shorten the time required for daylighting simulations, architects usually reduce the number of variables or parameters of the building and facade design. This practice usually results in the elimination of design variables that could contribute to an energy-optimized design configuration. Therefore, recent research has focused on incorporating machine learning algorithms that require the execution of only a relatively small subset of the simulations to predict the daylighting and energy performance of buildings. Although machine learning has been shown to be accurate, it remains time-consuming because a set of simulations must still be executed to produce training and validation data. Furthermore, to save time, designers often decide to use a small simulation subset, which leads to a poorly trained machine learning model that produces inaccurate results. Therefore, this study aims to introduce an automated framework that utilizes high performance computing (HPC) to execute the simulations necessary for the machine learning algorithm while saving time and effort. High performance computing facilitates the execution of thousands of tasks simultaneously for a time-efficient simulation process, therefore allowing designers to increase the size of the simulation subset. Pairing high performance computing with machine learning allows for accurate and nearly instantaneous building performance predictions.
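
A minimal sketch of the workflow described here, fanning simulations out in parallel and then training a surrogate model on the results, might look like the following. run_simulation is a stand-in for a real daylighting engine, and its parameters and the model choice are assumptions for illustration.

```python
# Minimal sketch of the HPC + machine-learning workflow: run many
# daylighting simulations in parallel, then train a surrogate model on
# the results so later predictions are nearly instantaneous.

import math
from concurrent.futures import ProcessPoolExecutor

from sklearn.ensemble import RandomForestRegressor  # surrogate model

def run_simulation(params):
    """Stand-in for one daylighting simulation; returns a performance metric."""
    window_ratio, shade_depth = params
    return math.sin(window_ratio * 3.14) * (1.0 - 0.5 * shade_depth)

if __name__ == "__main__":
    # On an HPC cluster these tasks would be distributed across nodes
    # (e.g. one simulation per scheduler task); locally we use processes.
    grid = [(w / 10, s / 10) for w in range(1, 10) for s in range(0, 10)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, grid))

    # The larger simulated subset becomes the training data for the surrogate.
    model = RandomForestRegressor(n_estimators=100).fit(grid, results)
    print(model.predict([(0.45, 0.25)]))  # near-instant prediction
```

Because the simulations are independent, the training-set size scales with the number of nodes available rather than with wall-clock time, which is exactly the leverage the abstract attributes to HPC.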


2015 ◽  
Vol 25 (03) ◽  
pp. 1541005
Author(s):  
Alexandra Vintila Filip ◽  
Ana-Maria Oprescu ◽  
Stefania Costache ◽  
Thilo Kielmann

High-Performance Computing (HPC) systems consume large amounts of energy. As the energy consumption predictions for HPC show increasing numbers, it is important to make users aware of the energy spent for the execution of their applications. Drawing from our experience with exposing cost and performance in public clouds, in this paper we present a generic mechanism to compute fast and accurate estimates of the trade-offs between the performance (expressed as makespan) and the energy consumption of applications running on HPC clusters. We validate our approach by implementing it in a prototype called E-BaTS and evaluating it with a wide variety of HPC bags-of-tasks. Our experiments show that E-BaTS produces conservative estimates with errors below 5%, while requiring at most 12% of the energy and time of an exhaustive search for providing configurations close to the optimal ones in terms of trade-offs between energy consumption and makespan.
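
The abstract does not give E-BaTS's estimation model, but a toy version of the trade-off enumeration such a tool performs could look like the sketch below. The linear speed and cubic power scaling are textbook DVFS approximations assumed here for illustration, not the paper's calibrated profiles.

```python
# Toy sketch of enumerating energy/makespan trade-offs for a bag-of-tasks
# across cluster configurations (node count x CPU frequency). The scaling
# models are illustrative assumptions, not E-BaTS's calibrated estimates.

def estimate(total_task_seconds, nodes, freq_ghz, base_freq=2.0, base_power_w=80.0):
    """Return (makespan_s, energy_j) under simple scaling assumptions."""
    speedup = freq_ghz / base_freq            # runtime scales ~ 1/frequency
    makespan = total_task_seconds / (nodes * speedup)
    power = base_power_w * (freq_ghz / base_freq) ** 3  # dynamic power ~ f^3
    return makespan, power * nodes * makespan

work = 10_000.0  # aggregate CPU-seconds in the bag of tasks
for nodes in (8, 16, 32):
    for freq in (1.2, 1.6, 2.0):
        m, e = estimate(work, nodes, freq)
        print(f"{nodes:>2} nodes @ {freq} GHz -> {m:7.1f} s, {e / 1000:7.1f} kJ")
```

Evaluating an analytic model like this over all configurations is nearly free, which is why such an estimator can stay far below the energy and time cost of an exhaustive search that actually runs the workload in every configuration.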


Computation ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 86
Author(s):  
Eduardo Patricio Estévez Ruiz ◽  
Giovanny Eduardo Caluña Chicaiza ◽  
Fabian Rodolfo Jiménez Patiño ◽  
Joaquín Cayetano López Lago ◽  
Saravana Prakash Thirumuruganandham

Optimizing HPC systems based on performance factors and bottlenecks is essential for designing an HPC infrastructure with the best characteristics and at a reasonable cost. Such insight can only be achieved through a detailed analysis of existing HPC systems and the execution of their workloads. The “Quinde I” is the only and most powerful supercomputer in Ecuador and is currently listed third in South America. It was built with IBM Power 8 servers. In this work, we measured its performance using different parameters from High-Performance Computing (HPC) to compare it with theoretical values and values obtained from tests on similar models. To measure its performance, we compiled and ran different benchmarks with the specific optimization flags for Power 8 to get the maximum performance with the current configuration of the hardware installed by the vendor. The inputs of the benchmarks were varied to analyze their impact on system performance. In addition, we compiled and compared the performance of two dense matrix multiplication algorithms, SRUMMA and DGEMM.
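
As a simple illustration of the kind of DGEMM measurement described, the sketch below times n x n double-precision matrix products and reports GFLOP/s. It relies on whatever BLAS the local NumPy build links against, not the flag-tuned Power 8 setup from the paper.

```python
# Minimal sketch of a DGEMM-style throughput measurement. NumPy delegates
# the product to the underlying BLAS library, so this benchmarks whatever
# optimized BLAS the local build was compiled against.

import time
import numpy as np

def dgemm_gflops(n, repeats=3):
    """Time C = A @ B for n x n doubles and report the best GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n ** 3 / best / 1e9  # ~2n^3 floating-point ops per product

for n in (1024, 2048, 4096):  # varying inputs, as in the study
    print(f"n={n}: {dgemm_gflops(n):.1f} GFLOP/s")
```

Sweeping the matrix size, as the abstract describes for the benchmark inputs, exposes how throughput changes as the working set moves through the cache hierarchy into main memory.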


2021 ◽  
Vol 13(62) (2) ◽  
pp. 705-714
Author(s):  
Arpad Kerestely

Efficient High Performance Computing for Machine Learning has become a necessity in the past few years. Data is growing exponentially in domains like healthcare, government, and economics, driven by the development of IoT, smartphones, and gadgets. This big volume of data needs storage space that no traditional computing system can offer, and it needs to be fed to Machine Learning algorithms so that useful information can be extracted from it. The larger the dataset fed to a Machine Learning algorithm, the more precise the results will be, but the time to compute those results will also increase. Hence the need for efficient High Performance Computing in aid of faster and better Machine Learning algorithms. This paper aims to unveil how one benefits from the other, what research has achieved so far, and where it is heading.

