working set
Recently Published Documents

TOTAL DOCUMENTS: 195 (five years: 30)
H-INDEX: 17 (five years: 1)

2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Author(s):  
Aditya Ukarande ◽  
Suryakant Patidar ◽  
Ram Rangan

The compute work rasterizer, or GigaThread Engine, of a modern NVIDIA GPU focuses on maximizing compute work occupancy across all streaming multiprocessors in a GPU while retaining design simplicity. In this article, we identify the operational aspects of the GigaThread Engine that help it meet those goals but also lead to less-than-ideal cache locality for texture accesses in 2D compute shaders, which are an important optimization target for gaming applications. We develop three software techniques, namely LargeCTAs, Swizzle, and Agents, to show that it is possible to effectively exploit the texture data working set overlap intrinsic to 2D compute shaders. We evaluate these techniques on gaming applications across two generations of NVIDIA GPUs, RTX 2080 and RTX 3080, and find that they are effective on both. The bandwidth savings from all our software techniques on RTX 2080 exceed the savings that baseline execution gains from the inter-generational cache capacity increase from RTX 2080 to RTX 3080. Our best-performing technique, Agents, records up to a 4.7% average full-frame speedup by reducing the bandwidth demand of targeted shaders at the L1-L2 and L2-DRAM interfaces by 23% and 32%, respectively, on the latest-generation RTX 3080. These results acutely highlight the sensitivity of cache locality to compute work rasterization order and the importance of locality-aware cooperative thread array scheduling for gaming applications.
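To make the locality argument concrete, here is a minimal Python sketch of the general idea behind a swizzled launch order: a linear CTA index is remapped into small 2D tiles so that consecutively scheduled CTAs touch neighboring texture regions. The function name and tile dimensions are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): remap a linear
# CTA index into tile-major 2D coordinates so that consecutively
# launched CTAs touch neighboring texture regions.

def swizzle_cta(linear_id: int, grid_w: int, tile_w: int = 4, tile_h: int = 4):
    """Map a linearly rasterized CTA id to (x, y) in tile-major order."""
    ctas_per_tile = tile_w * tile_h
    tiles_per_row = grid_w // tile_w          # assumes grid_w % tile_w == 0
    tile_id, within = divmod(linear_id, ctas_per_tile)
    tile_y, tile_x = divmod(tile_id, tiles_per_row)
    local_y, local_x = divmod(within, tile_w)
    return tile_x * tile_w + local_x, tile_y * tile_h + local_y

# Default raster order walks a full row of a wide grid before moving down;
# the swizzled order keeps 16 consecutive CTAs inside a 4x4 tile, so their
# texture working sets overlap in the L1/L2 caches.
print([swizzle_cta(i, grid_w=8) for i in range(8)])
```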


2022 ◽  
Vol 6 (1) ◽  
pp. 1-25
Author(s):  
Junjie Yan ◽  
Kevin Huang ◽  
Kyle Lindgren ◽  
Tamara Bonaci ◽  
Howard J. Chizeck

In this article, we present a novel approach for continuous operator authentication in teleoperated robotic processes based on Hidden Markov Models (HMMs). While HMMs were originally developed for and widely used in speech recognition, they have shown great performance in human motion and activity modeling. We draw an analogy between human language and teleoperated robotic processes (i.e., words are analogous to a teleoperator's gestures, and sentences are analogous to the entire teleoperated task or process) and implement HMMs to model the teleoperated task. To test the continuous authentication performance of the proposed method, we conducted two sets of analyses. First, we built a virtual reality (VR) experimental environment using a commodity VR headset (HTC Vive) and a haptic-feedback-enabled controller (Sensable PHANToM Omni) to simulate a real teleoperated task, and then conducted an experimental study with 10 subjects. Second, we performed simulated continuous operator authentication using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). The performance of the model was evaluated based on continuous (real-time) operator authentication accuracy as well as resistance to a simulated impersonation attack. The results suggest that the proposed method achieves 70% (VR experiment) and 81% (JIGSAWS dataset) continuous classification accuracy with a sample window as short as 1 second. It is also capable of detecting an impersonation attack in real time.
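As a rough illustration of this style of continuous authentication (not the authors' code), the sketch below fits one Gaussian HMM per operator and accepts a motion window only if the claimed operator's model scores it highest. It assumes the hmmlearn package; the state count and feature layout are placeholders.

```python
# Hedged sketch of HMM-based continuous authentication, assuming the
# hmmlearn package; window sizes, features, and n_components are
# illustrative, not the authors' settings.
import numpy as np
from hmmlearn import hmm

def train_operator_model(motion_windows, n_states=5):
    """Fit one Gaussian HMM per operator on concatenated motion features."""
    X = np.vstack(motion_windows)                  # (samples, features)
    lengths = [len(w) for w in motion_windows]     # per-sequence lengths
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

def authenticate(window, models, claimed_id):
    """Accept the window iff the claimed operator's HMM scores it best."""
    scores = {op: m.score(window) for op, m in models.items()}
    return max(scores, key=scores.get) == claimed_id
```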


2021 ◽  
Vol 7 ◽  
pp. e799
Author(s):  
Zhenlong Sun ◽  
Jing Yang ◽  
Xiaoye Li ◽  
Jianpei Zhang

Support vector machine (SVM) is a robust machine learning method and is widely used in classification. However, traditional SVM training methods may reveal personal privacy when the training data contains sensitive information. In the training process of SVMs, working set selection is a vital step for sequential minimal optimization-type decomposition methods. To avoid complex sensitivity analysis and the noise that high-dimensional data induces in existing privacy-preserving SVM classifiers, we propose a new differentially private working set selection algorithm (DPWSS), which utilizes the exponential mechanism to privately select working sets. We theoretically prove that the proposed algorithm satisfies differential privacy. Extended experiments show that DPWSS achieves classification performance nearly identical to that of the original non-private SVM under different parameters. The difference in optimized objective value between the two algorithms is generally less than two; meanwhile, DPWSS has higher execution efficiency than the original non-private SVM, as measured by iteration counts on different datasets. To the best of our knowledge, DPWSS is the first working set selection algorithm based on differential privacy.
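The exponential mechanism at the core of DPWSS can be sketched in a few lines of Python; the candidate working sets, scoring function, and sensitivity below are placeholders rather than the paper's actual scoring.

```python
# Minimal sketch of the exponential mechanism that DPWSS builds on;
# the score function and sensitivity are placeholders, not the paper's
# working-set scoring.
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(candidates, score, epsilon, sensitivity=1.0):
    """Privately pick one candidate with Pr ∝ exp(eps * score / (2 * sens))."""
    scores = np.array([score(c) for c in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Example: privately favor the index pair with the largest KKT violation.
pairs = [(0, 3), (1, 2), (2, 5)]
violation = {(0, 3): 0.9, (1, 2): 0.4, (2, 5): 0.7}
chosen = exponential_mechanism(pairs, lambda p: violation[p], epsilon=1.0)
```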


2021 ◽  
Author(s):  
Ivona Litsova ◽  

Working remotely has become common practice for many companies in the past year. This raises several questions, including those of work-life balance and worker effectiveness in the home-office environment, as well as how to train and develop employees in the new working set-up. Online programs are becoming increasingly popular among employers; they can take the form of webinars, virtual classrooms, conferences, etc. The focus of this article is to outline ways to measure soft skills after online trainings. I conducted a literature review covering books and journals on soft skills measurement in order to clarify the methodology for evaluating the results of online courses. The article provides additional findings from a survey conducted among regular employees of a technical company in the IT sector in Eastern Europe. Its outcomes confirmed that reactions are what organizations usually measure, typically through feedback forms during or after the course, which help improve future sessions. Learning objectives should be defined in advance and can be measured during the training by carefully observing the participation of the trainees. Results can be measured at a later stage, after careful consideration of the employee's productivity and analysis of his or her feedback, behavior, and changes in the working process after the course. Kirkpatrick's model serves as the starting point of the discussion, in the context of its four evaluation steps: reaction, learning, behavior, and results.


2021 ◽  
Author(s):  
Yuxuan Jing ◽  
Rami M. Younis

Abstract: Automatic differentiation (AD) software libraries augment arithmetic operations with their derivatives, thereby relieving the programmer of deriving, implementing, debugging, and maintaining derivative code. With this encapsulation, however, the responsibility for code optimization falls more heavily on the AD system itself (as opposed to the programmer and the compiler). Moreover, given that there are multiple contexts in reservoir simulation software for which derivatives are required (e.g., property package and discrete operator evaluations), the AD infrastructure must also be adaptable. An operator overloading AD design is proposed and tested to provide scalability and computational efficiency seamlessly across memory- and compute-bound applications. This is achieved by (1) using portable, standard programming language constructs (the C++17 and OpenMP 4.5 standards), (2) adopting a vectorized programming interface, (3) employing lazy evaluation via expression templates, and (4) supporting multiple memory alignment and layout policies. Empirical analysis is conducted on kernels spanning a range of arithmetic intensities and working set sizes. Cache-aware roofline analysis shows that the performance and scalability attained are reliably ideal. In terms of floating point operations executed per second, the performance of the AD system matches optimized hand-written code. Finally, the implementation is benchmarked using the Automatically Differentiable Expression Templates Library (ADETL).
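As a language-agnostic illustration of operator overloading AD (the paper's design relies on C++17 expression templates and vectorization, which this sketch deliberately omits), here is a minimal forward-mode dual-number class in Python:

```python
# Minimal operator-overloading forward-mode AD, illustrating the general
# technique only; it does not capture the paper's expression-template or
# memory-layout machinery.
import math

class Dual:
    """Value plus derivative, propagated through overloaded arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def exp(x: Dual) -> Dual:
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

# d/dx of x * exp(x) + 2x at x = 1.5, with no hand-derived derivative code:
x = Dual(1.5, 1.0)                 # seed dx/dx = 1
y = x * exp(x) + 2 * x
print(y.val, y.dot)                # derivative = exp(x) * (1 + x) + 2
```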


Author(s):  
Federica Coppola

Abstract: In Responsible Brains (MIT Press, 2018), Hirstein, Sifferd, and Fagan apply the language of cognitive neuroscience to dominant understandings of criminal responsibility in criminal law theory. The Authors make a compelling case that, under such dominant understandings, criminal responsibility eventually "translates" into a minimal working set of executive functions (MWS) that are primarily mediated by the frontal lobes of the brain. In so arguing, the Authors seem to unquestioningly accept the law's view of the "responsible person" as a mixture of cognitive capacities and mechanisms, thereby leaving aside other fundamental aspects of individuals' human agency. This commentary article offers a critique of the Authors' rationalist and individualist approach. The critique can be summarized in the following claim: we humans, as responsible beings, are more than our executive functions. This claim is articulated through four main points of discussion: (1) the role of emotions in moral judgments and behavior; (2) executive functions and normative criteria for legal insanity; (3) the impact of adverse situational factors on executive functions; and (4) the Authors' account of punishment and, especially, rehabilitation.


Author(s):  
Stella Bitchebe ◽  
Djob Mvondo ◽  
Laurent Réveillère ◽  
Noël de Palma ◽  
Alain Tchana
Keyword(s):  
Set Size ◽  

Author(s):  
Alan Kawarai Lefor ◽  
Kanako Harada ◽  
Aristotelis Dosis ◽  
Mamoru Mitsuishi

2021 ◽  
Vol 53 (6) ◽  
pp. 1-36
Author(s):  
Peter J. Denning

The working set model for program behavior was invented in 1965. It has stood the test of time in virtual memory management for over 50 years. It is considered the ideal for managing memory in operating systems and caches. Its superior performance was based on the principle of locality, which was discovered at the same time; locality is the observed tendency of programs to use distinct subsets of their pages over extended periods of time. This tutorial traces the development of working set theory from its origins to the present day. We will discuss the principle of locality and its experimental verification. We will show why working set memory management resists thrashing and generates near-optimal system throughput. We will present the powerful, linear-time algorithms for computing working set statistics and applying them to the design of memory systems. We will debunk several myths about locality and the performance of memory systems. We will conclude with a discussion of the application of the working set model in parallel systems, modern shared CPU caches, network edge caches, and inventory and logistics management.
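In the spirit of the linear-time algorithms the tutorial describes (though not necessarily the same formulation), the following Python sketch computes the working set size w(t, τ) after every reference in a single pass by tracking each page's most recent reference time; here "time" is discretized as the reference index.

```python
# One-pass sketch of working set statistics: w(t, tau) counts the
# distinct pages referenced in the last tau references. This follows
# Denning's definition with time discretized by reference index; it is
# an illustration, not necessarily the tutorial's exact algorithm.
def working_set_sizes(refs, tau):
    last_ref = {}                    # page -> most recent reference time
    leaving = [0] * len(refs)        # pages whose last reference is at time i
    size, sizes = 0, []
    for t, page in enumerate(refs):
        if t - tau >= 0:
            size -= leaving[t - tau]     # pages that aged out of the window
        prev = last_ref.get(page)
        if prev is None or prev <= t - tau:
            size += 1                    # page (re)enters the working set
        else:
            leaving[prev] -= 1           # still resident; move its exit time
        last_ref[page] = t
        leaving[t] += 1
        sizes.append(size)
    return sizes

print(working_set_sizes(list("ABAACB"), tau=3))   # -> [1, 2, 2, 2, 2, 3]
```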

