near threshold
Recently Published Documents


TOTAL DOCUMENTS

2871
(FIVE YEARS 236)

H-INDEX

85
(FIVE YEARS 5)

2022 ◽  
Vol 105 (1) ◽  
Author(s):  
Rebekah Hermsmeier ◽  
Adrien Devolder ◽  
Paul Brumer ◽  
Timur V. Tscherbul

Author(s):  
Sergio Blasón ◽  
Tiago Werner ◽  
Julius Kruse ◽  
Mauro Madia ◽  
Petr Miarka ◽  
...  

2021 ◽  
Author(s):  
Haiyang Jiang ◽  
Bingqian Xu ◽  
Peng Cao ◽  
Hao Cai

2021 ◽  
Author(s):  
Mehdi Safarpour

<div>Operating at reduced voltages promises substantial energy-efficiency improvements; however, the downside is a significant down-scaling of the clock frequency. This paper proposes vision chips as an excellent fit for low-voltage operation. Low-level sensory data processing in many Internet-of-Things (IoT) devices pursues energy efficiency by utilizing sleep modes or by slowing the clock to the minimum. To curb the share of stand-by power dissipation in those designs, near-threshold/sub-threshold operating points or ultra-low-leakage fabrication processes are employed. These limit the clocking rates significantly, reducing the computing throughput of individual processing cores. In this contribution we explore compensating for the performance loss of operating in the near-threshold region ($V_{dd}=$ 0.6 V) through massive parallelization. The benefits of near-threshold operation and massive parallelism are, respectively, optimum energy consumption per instruction and minimized memory round-trips. The Processing Elements (PEs) of the design are based on the Transport Triggered Architecture. The fine-grained programmable parallel solution allows fast and efficient computation of learnable low-level features (e.g., local binary descriptors and convolutions). Other operations, including max-pooling, have also been implemented. The programmable design achieves excellent energy efficiency for Local Binary Pattern computations. </div><div>Our results demonstrate that the inherent properties of the vision-chip processor and vision applications allow voltage and clock frequency to be scaled aggressively without compromising performance. </div>
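The Local Binary Pattern operator mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (which targets Transport-Triggered-Architecture processing elements); it is an assumed NumPy version of the basic 3x3 LBP, where each of the eight neighbours is thresholded against the centre pixel and the resulting bits are packed into an 8-bit code. The function name `lbp_3x3` is illustrative.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern for each interior pixel.

    Each of the 8 neighbours is compared against the centre pixel;
    a neighbour >= centre contributes a 1-bit, and the 8 bits are
    packed clockwise into a uint8 code.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        out |= (neighbour >= centre).astype(np.uint8) << np.uint8(bit)
    return out
```

Each output pixel depends only on its local 3x3 window, which is why the computation parallelizes across processing elements with minimal memory round-trips, as the abstract argues.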


2021 ◽  
Vol 119 (18) ◽  
pp. 182105
Author(s):  
Zhijian Guo ◽  
Xinmiao Zhu ◽  
Kaiyue Wang ◽  
Yufei Zhang ◽  
Yuming Tian ◽  
...  

2021 ◽  
Vol 104 (7) ◽  
Author(s):  
Rong Wang ◽  
Wei Kou ◽  
Chengdong Han ◽  
Jarah Evslin ◽  
Xurong Chen

2021 ◽  
pp. 2102384
Author(s):  
Rui‐Ting Gao ◽  
Shujie Liu ◽  
Xiaotian Guo ◽  
Rongao Zhang ◽  
Jinlu He ◽  
...  
