lookup table
Recently Published Documents

TOTAL DOCUMENTS: 778 (five years: 247)
H-INDEX: 33 (five years: 7)

Author(s):  
Ahmad Al-Jarrah ◽  
Amer Albsharat ◽  
Mohammad Al-Jarrah

<p>This paper proposes a new algorithm for text encryption that uses English words as the unit of encoding. The algorithm eliminates any feature that could be used to reveal the encrypted text by adopting variable code lengths for the English words, utilizing a variable-length encryption key, applying two-dimensional binary shuffling techniques at the bit level, and employing four binary logical operations with randomized shuffling inputs. The English words, sorted alphabetically, are divided into four lookup tables in which each word is assigned an index. The strength of the proposed algorithm derives from three major components. Firstly, each lookup table uses a different index size, and none of the index sizes is a multiple of a byte. Secondly, the shuffling operations are conducted on a two-dimensional binary matrix of variable length. Thirdly, the parameters of the shuffling operation are randomized based on a randomly selected encryption key of varying size; thus, the shuffling operations move adjacent bits apart in a randomized fashion. Consequently, the proposed algorithm removes any signature or statistical feature of the original message. Moreover, the algorithm reduces the size of the encrypted message as an added advantage, achieved by using the smallest possible index size for each lookup table.</p>
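The variable-length indexing idea can be sketched as follows (the word lists, four-way split, and bit widths here are illustrative placeholders, not the paper's actual tables):

```python
# Sketch of variable-length word indexing across four lookup tables.
# Word lists and index bit widths are invented for illustration; note
# that none of the widths is a multiple of 8, so encoded words do not
# fall on byte boundaries.

TABLES = {  # table id -> (index bit width, sorted word list)
    0: (3, ["a", "an", "and", "as", "at"]),
    1: (5, ["bat", "cat", "dog", "egg"]),
    2: (7, ["house", "mouse", "tree"]),
    3: (10, ["encryption", "lookup", "table"]),
}

def encode_word(word):
    """Return the word's bits: 2 table-select bits + a non-byte-aligned index."""
    for tid, (bits, words) in TABLES.items():
        if word in words:
            idx = words.index(word)
            return format(tid, "02b") + format(idx, f"0{bits}b")
    raise KeyError(word)

def encode(text):
    """Concatenate the per-word bit strings into one bit stream."""
    return "".join(encode_word(w) for w in text.lower().split())
```

Because successive words contribute different, non-byte-aligned bit counts, word boundaries in the resulting bit stream are not recoverable without the tables, which is the property the shuffling stages then build on.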


2022 ◽  
Vol 15 (1) ◽  
pp. 219-249
Author(s):  
Mahtab Majdzadeh ◽  
Craig A. Stroud ◽  
Christopher Sioris ◽  
Paul A. Makar ◽  
Ayodeji Akingunola ◽  
...  

Abstract. The photolysis module in Environment and Climate Change Canada's online chemical transport model GEM-MACH (GEM: Global Environmental Multi-scale; MACH: Modelling Air quality and Chemistry) was improved to make use of GEM-MACH's online size- and composition-resolved representation of atmospheric aerosols and relative humidity, in order to account for aerosol attenuation of radiation in the photolysis calculation. We coupled the GEM-MACH aerosol module and the MESSy-JVAL (Modular Earth Submodel System) photolysis module through the use of the online modeled aerosol data and a new Mie lookup table for the model-generated extinction efficiency and absorption and scattering cross sections of each aerosol type. The new algorithm applies a lensing correction factor to the black carbon absorption efficiency (core-shell parameterization) and calculates the scattering and absorption optical depth and asymmetry factor of black carbon, sea salt, dust and other internally mixed components. We carried out a series of simulations with the improved version of MESSy-JVAL and wildfire emission inputs from the Canadian Forest Fire Emissions Prediction System (CFFEPS) for two months, compared the model aerosol optical depth (AOD) output to that of the previous version of MESSy-JVAL, satellite data, ground-based measurements and reanalysis products, and evaluated the effects of the AOD calculations and the interactive aerosol feedback on the performance of the GEM-MACH model. The comparison of the improved version of MESSy-JVAL with the previous version showed significant improvements in model performance from the implementation of the new photolysis module and the adoption of the online interactive aerosol concentrations in GEM-MACH. Incorporating these changes into the model increased the correlation coefficient between the GEM-MACH 1-month hourly AOD output and AERONET (Aerosol Robotic Network) measurements across all North American sites from 0.17 to 0.37.
Comparisons of the updated model AOD with AERONET measurements for selected Canadian urban and industrial sites showed better correlation coefficients for urban AERONET sites and for stations located further south in the domain for both simulation periods (June and January 2018). The predicted monthly averaged AOD using the improved photolysis module followed the spatial patterns of the MERRA-2 reanalysis (Modern-Era Retrospective analysis for Research and Applications, version 2), with an overall underprediction of AOD over the common domain in both seasons. Our study also suggests that the domain-wide impacts of direct- and indirect-effect aerosol feedbacks on the photolysis rates, acting through meteorological changes, are considerably greater (3 to 4 times) than the direct aerosol optical effect on the photolysis rate calculations.
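The role of such a Mie lookup table can be sketched as a simple keyed retrieval (the aerosol types, relative-humidity bins and optical values below are invented placeholders, not GEM-MACH/MESSy-JVAL data):

```python
# Sketch of a Mie lookup table keyed by aerosol type and relative-humidity
# bin, returning precomputed optical properties. All numerical values are
# illustrative placeholders, not GEM-MACH/MESSy-JVAL data.

MIE_LUT = {  # (aerosol type, RH bin lower edge) -> (extinction eff., asymmetry g)
    ("sea_salt", 0.50): (1.8, 0.75),
    ("sea_salt", 0.80): (2.4, 0.78),
    ("dust", 0.50): (2.1, 0.70),
    ("dust", 0.80): (2.2, 0.71),
}

def optical_properties(aerosol, rh):
    """Pick the highest tabulated RH bin not exceeding rh (clamped below)."""
    bins = sorted(b for (a, b) in MIE_LUT if a == aerosol)
    chosen = max((b for b in bins if b <= rh), default=bins[0])
    return MIE_LUT[(aerosol, chosen)]
```

Precomputing these values once and looking them up per grid cell avoids repeating the expensive Mie calculation inside the photolysis step.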


2022 ◽  
pp. 1-15
Author(s):  
Kévin Fourteau ◽  
Pascal Hagenmuller ◽  
Jacques Roulle ◽  
Florent Domine

Abstract Heated needle probes provide the most convenient method to measure snow thermal conductivity. Recent studies have suggested that this method underestimates snow thermal conductivity; however, the reasons for this discrepancy have not been elucidated. We show that it originates from the fact that, while the theory behind the method assumes the measurements reach a logarithmic regime, this regime is not reached within the standard measurement procedure. Using the needle probe without this logarithmic regime leads to thermal conductivity underestimations of tens of percent. Moreover, we show that the poor thermal contact between the probe and the snow, due to insertion damage, results in a further underestimation. We therefore encourage the use of fixed needle probes, set up before the snow season and buried under snowfall, rather than hand-inserted probes. Finally, we propose a method to correct measurements performed with such fixed needle probes buried in snow. The correction is based on a lookup table, derived specifically for the Hukseflux TP02 needle probe, a model frequently used in snow studies. Comparison between corrected measurements and independent estimates of snow thermal conductivity obtained from numerical simulations shows an overall improvement of the needle-probe values after the correction is applied.
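Applying a correction of this kind amounts to interpolating a calibration table; a minimal sketch with invented (measured, corrected) pairs, not the actual TP02 table from the paper:

```python
# Sketch of a lookup-table correction for needle-probe conductivity readings.
# The (measured, corrected) pairs are made up for illustration; the paper
# derives the actual table for the Hukseflux TP02 probe.

from bisect import bisect_left

MEASURED = [0.02, 0.05, 0.10, 0.20, 0.40]     # W m-1 K-1, raw probe reading
CORRECTED = [0.025, 0.065, 0.13, 0.26, 0.50]  # W m-1 K-1, corrected value

def correct(k_measured):
    """Linearly interpolate the correction table (clamped at both ends)."""
    if k_measured <= MEASURED[0]:
        return CORRECTED[0]
    if k_measured >= MEASURED[-1]:
        return CORRECTED[-1]
    i = bisect_left(MEASURED, k_measured)     # first grid point >= reading
    x0, x1 = MEASURED[i - 1], MEASURED[i]
    y0, y1 = CORRECTED[i - 1], CORRECTED[i]
    return y0 + (y1 - y0) * (k_measured - x0) / (x1 - x0)
```

A table lookup with interpolation keeps the field workflow unchanged: the raw reading is taken as usual and corrected in post-processing.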


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 191
Author(s):  
Daniel R. Prado ◽  
Jesús A. López-Fernández ◽  
Manuel Arrebola

In this work, a simple, efficient and accurate database, in the form of a lookup table for use in reflectarray design and direct layout optimization, is presented. The database uses N-linear interpolation internally to estimate the reflection coefficients at coordinates that are not stored within it. The speed and accuracy of this approach were measured against the full-wave technique based on local periodicity that was used to populate the database. It was also compared, under the same conditions, with a machine learning technique, namely support vector machines applied to regression, to elucidate the advantages and disadvantages of each technique. Results from the layout design, analysis and crosspolar optimization of a very large reflectarray for space applications show that, despite using simple N-linear interpolation, the database offers sufficient accuracy while considerably accelerating the overall design process, provided it is suitably populated.
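In the two-dimensional case, the N-linear interpolation used to query such a database reduces to bilinear interpolation; a minimal sketch with placeholder grid data (the real table stores complex reflection coefficients on an angle/frequency grid):

```python
# Sketch of bilinear interpolation (the 2-D case of N-linear interpolation)
# over a lookup table sampled on a rectangular grid. Grid coordinates and
# stored values are illustrative placeholders.

def bilinear(grid_x, grid_y, values, x, y):
    """Interpolate values[i][j] sampled at (grid_x[i], grid_y[j]);
    assumes (x, y) lies inside the grid."""
    # locate the cell containing (x, y)
    i = max(k for k in range(len(grid_x) - 1) if grid_x[k] <= x)
    j = max(k for k in range(len(grid_y) - 1) if grid_y[k] <= y)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    # blend the four surrounding samples
    return (values[i][j] * (1 - tx) * (1 - ty)
            + values[i + 1][j] * tx * (1 - ty)
            + values[i][j + 1] * (1 - tx) * ty
            + values[i + 1][j + 1] * tx * ty)
```

Each query costs only a cell search and four multiply-adds per dimension pair, which is why a well-populated table can stand in for repeated full-wave evaluations during optimization.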


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 420
Author(s):  
Iñigo Cortés ◽  
Johannes Rossouw van der Merwe ◽  
Elena Simona Lohan ◽  
Jari Nurmi ◽  
Wolfgang Felber

This paper evaluates the performance of robust adaptive tracking techniques based on the direct-state Kalman filter (DSKF) used in modern digital global navigation satellite system (GNSS) receivers. Under the assumption of a well-known Gaussian distributed model of the states and the measurements, the DSKF adapts its coefficients optimally to achieve the minimum mean square error (MMSE). In time-varying scenarios, the measurement distribution changes over time due to noise, signal dynamics, multipath, and non-line-of-sight effects. Such scenarios make it difficult to find suitable measurement- and process-noise models, leading to a sub-optimal DSKF solution. The loop-bandwidth control algorithm (LBCA) can adapt the DSKF to the time-varying scenario and improve its performance significantly. This study introduces two methods to adapt the DSKF using the LBCA: the LBCA-based DSKF and the LBCA-based lookup table (LUT)-DSKF. The former adapts the steady-state process noise variance based on the LBCA's loop-bandwidth update, whereas the latter directly relates the loop bandwidth to the steady-state Kalman gains. The presented techniques are compared with the well-known state-of-the-art carrier-to-noise density ratio (C/N0)-based DSKF. These adaptive tracking techniques are implemented in a GNSS hardware receiver with an open software interface. For each implementation, the receiver's tracking performance and the system performance are evaluated in simulated scenarios with different dynamics and noise cases. Results confirm that the LBCA can be successfully applied to adapt the DSKF. The LBCA-based LUT-DSKF exhibits superior static and dynamic system performance compared to the other adaptive tracking techniques using the DSKF, while achieving the lowest complexity.
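The LUT-DSKF idea of mapping the LBCA's loop-bandwidth update directly to steady-state Kalman gains can be sketched as follows (the bandwidth grid and gain values are invented for illustration, not the paper's tuned table):

```python
# Sketch of a lookup table from loop bandwidth to precomputed steady-state
# Kalman gains, as in the LUT-DSKF concept. The bandwidth grid and gain
# values are illustrative placeholders.

GAIN_LUT = {  # loop bandwidth [Hz] -> (K1, K2) steady-state gains
    2.0: (0.05, 0.001),
    5.0: (0.12, 0.006),
    10.0: (0.25, 0.025),
    20.0: (0.45, 0.090),
}

def gains_for_bandwidth(bw_hz):
    """Return the gain pair tabulated for the closest loop bandwidth."""
    nearest = min(GAIN_LUT, key=lambda b: abs(b - bw_hz))
    return GAIN_LUT[nearest]
```

Replacing the per-update gain computation with a table lookup is what gives the LUT-DSKF its low complexity: the LBCA only has to output a bandwidth, and the filter coefficients follow immediately.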


2021 ◽  
Author(s):  
James Barry ◽  
Anna Herman-Czezuch ◽  
Nicola Kimiaie ◽  
Stefanie Meilinger ◽  
Christopher Schirrmeister ◽  
...  

<p>The rapid increase in solar photovoltaic (PV) installations worldwide has resulted in the electricity grid becoming increasingly dependent on atmospheric conditions, thus requiring more accurate forecasts of incoming solar irradiance. In this context, measured data from PV systems are a valuable source of information about the optical properties of the atmosphere, in particular the cloud optical depth (COD). This work reports first results from an inversion algorithm developed to infer global, direct and diffuse irradiance as well as atmospheric optical properties from PV power measurements, with the goal of assimilating this information into numerical weather prediction (NWP) models.</p>
<p>High-resolution measurements from both PV systems and pyranometers were collected as part of the BMWi-funded MetPVNet project in the Allgäu region during autumn 2018 and summer 2019. These data were then used together with a PV model and both the DISORT and MYSTIC radiative transfer schemes within libRadtran (Emde et al., 2016; Mayer and Kylling, 2005) to infer cloud optical depth as well as direct, diffuse and global irradiance under highly variable atmospheric conditions. Hourly averages of each retrieved quantity were compared with the corresponding predictions of the COSMO weather model as well as data from satellite retrievals, and periods with differing degrees of variability and different cloud types were analysed. The DISORT-based algorithm is able to accurately retrieve COD and the direct and diffuse irradiance components as long as the cloud fraction is high enough, whereas under broken-cloud conditions the presence of 3D effects can lead to large errors. In that case the global horizontal irradiance is derived from tilted irradiance measurements and/or PV data using a lookup table based on the MYSTIC 3D Monte Carlo radiative transfer solver (Mayer, 2009). This work will provide the basis for future investigations using a larger number of PV systems to evaluate the improvements to irradiance and power forecasts that could be achieved by the assimilation of inferred irradiance into an NWP model.</p>
<p><strong>References</strong></p>
<p>Emde, C., Buras-Schnell, R., Kylling, A., Mayer, B., Gasteiger, J., Hamann, U., Kylling, J., Richter, B., Pause, C., Dowling, T. and Bugliaro, L.: The libRadtran software package for radiative transfer calculations (version 2.0.1), Geosci. Model Dev., 9(5), 1647–1672, doi:10.5194/gmd-9-1647-2016, 2016.</p>
<p>Mayer, B.: Radiative transfer in the cloudy atmosphere, EPJ Web Conf., 1, 75–99, doi:10.1140/epjconf/e2009-00912-1, 2009.</p>
<p>Mayer, B. and Kylling, A.: Technical note: The libRadtran software package for radiative transfer calculations – description and examples of use, Atmos. Chem. Phys., 5(7), 1855–1877, doi:10.5194/acp-5-1855-2005, 2005.</p>
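The tilted-to-horizontal conversion via a lookup table can be sketched as a simple binned retrieval (the elevation bins and conversion ratios below are invented placeholders; the study derives its table from MYSTIC 3-D radiative-transfer runs):

```python
# Sketch of a lookup table mapping tilted-plane irradiance to global
# horizontal irradiance (GHI), keyed by solar elevation. Ratios are
# illustrative placeholders, not values from MYSTIC simulations.

RATIO_LUT = {  # solar elevation [deg] -> GHI / tilted-plane irradiance
    10: 0.55, 20: 0.70, 30: 0.82, 40: 0.91, 50: 0.97, 60: 1.00,
}

def ghi_from_tilted(tilted_wm2, elevation_deg):
    """Nearest-bin lookup of the conversion ratio."""
    nearest = min(RATIO_LUT, key=lambda e: abs(e - elevation_deg))
    return tilted_wm2 * RATIO_LUT[nearest]
```

A precomputed table of this kind lets the retrieval avoid running the Monte Carlo solver for every measurement.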


2021 ◽  
Author(s):  
Matteo Leandro ◽  
Nada Elloumi ◽  
Alberto Tessarolo ◽  
Jonas Kristiansen Nøland

<div>One of the attractive benefits of slotless machines is their low losses at high speeds, which could be emphasized by a careful stator core loss assessment, potentially available already at the pre-design stage. Unfortunately, mainstream iron loss estimation methods are typically implemented in the finite element analysis (FEA) environment with a constant-coefficient loss model, leading to weak extrapolation with large errors. In this paper, an analytical method for iron loss prediction in the stator core of slotless PM machines is derived. It is based on the extension of the 2-D field solution over the entire machine geometry. The analytical solution is then combined with variable- or constant-coefficient loss models (i.e., VARCO or CCM), which can be computed efficiently by vectorized post-processing; VARCO loss models are shown to be preferable in general. Moreover, the paper proposes a lookup-table-based (LUT) solution as an alternative approach. The main contribution lies in the numerical link between the analytical field solution and the iron loss estimate, supported by a code implementation of the proposed methodology. First, the models are compared against a sufficiently dense dataset available from the lamination manufacturer for validation purposes. Then, all the methods are compared for the slotless machine case. Finally, the models are applied to a real case study and validated experimentally.</div>
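A variable-coefficient loss evaluation vectorized over harmonics might look like the following sketch (a generic Bertotti-type split with invented coefficient laws, not the paper's fitted VARCO model):

```python
# Sketch of a variable-coefficient (VARCO-style) iron loss evaluation,
# vectorized over harmonics with NumPy. The coefficient values and their
# dependence on flux density are invented for illustration.

import numpy as np

def iron_loss_density(f, B):
    """Specific loss [W/kg] for arrays of harmonic frequencies f [Hz]
    and flux-density amplitudes B [T]."""
    f, B = np.asarray(f, float), np.asarray(B, float)
    k_h = 0.02 * (1.0 + 0.1 * B)       # hysteresis coeff., varies with B
    k_e = 5e-5                         # classical eddy-current coeff.
    k_a = 1e-4 * (1.0 - 0.05 * B)      # excess-loss coeff., varies with B
    return (k_h * f * B**2             # hysteresis term
            + k_e * (f * B)**2         # classical eddy-current term
            + k_a * (f * B)**1.5)      # excess-loss term

# total loss density over a spectrum of harmonics
total = iron_loss_density([50.0, 150.0, 250.0], [1.2, 0.3, 0.1]).sum()
```

Evaluating all harmonics in one vectorized call is the "vectorized post-processing" step: the analytical field solution supplies the per-harmonic (f, B) pairs, and the loss model is applied to the whole spectrum at once.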



Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2385
Author(s):  
Xue Sun ◽  
Chao-Chin Wu ◽  
Yan-Fang Liu

In the field of computational biology, sequence alignment is a very important methodology. BLAST, provided by the National Center for Biotechnology Information (NCBI) in the USA, is a very common tool for performing sequence alignment in bioinformatics. The BLAST server receives tens of thousands of queries every day on average. Among the procedures of BLAST, the hit detection process, whose core architecture is a lookup table, is the most time-consuming. In earlier work, a lightweight BLASTP on a CUDA GPU with a hybrid query-index table was proposed for query sequences shorter than 512 residues, which effectively improved query efficiency. According to the reported protein sequence length distribution, about 90% of sequences are no longer than 1024 residues. In this paper, we propose an improved lightweight BLASTP to speed up hit detection for longer query sequences. The maximum query length is enlarged from 512 to 1024, so one more bit is required to encode each sequence position. To meet this requirement, an extended hybrid query-index table (EHQIT) is proposed that accommodates three sequence positions in a four-byte table entry, making a single memory access sufficient to retrieve all the position information whenever the number of hits is three or fewer. Moreover, if there are more than three hits for a possible word, all the position information is stored in contiguous table entries, which eliminates branch divergence and reduces the memory space needed for pointers to an overflow buffer. A square symmetric scoring matrix, Blosum62, is used to score matches between characters in a sequence alignment. The experimental results show that for queries shorter than 512 residues, our improved lightweight BLASTP outperforms the original lightweight BLASTP with an average speedup of 1.2; when the number of hit overflows increases, the speedup can be as high as two.
For queries shorter than 1024 residues, our improved lightweight BLASTP provides speedups ranging from 1.56 to 3.08 over CUDA-BLAST. In short, the improved lightweight BLASTP can replace the original, as it supports longer query sequences and provides better performance.
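The packing of up to three 10-bit positions into one four-byte entry can be sketched as follows (the exact EHQIT field layout is not specified here, so the 2-bit count plus three 10-bit fields is an illustrative guess):

```python
# Sketch of packing up to three 10-bit sequence positions (for queries up
# to 1024 residues) into one 32-bit table entry, in the spirit of the
# EHQIT. The field layout (2 count bits + 3 x 10 position bits) is an
# illustrative guess, not the paper's exact format.

def pack(positions):
    """Pack up to three positions (each < 1024) into a 32-bit integer."""
    assert len(positions) <= 3
    entry = len(positions)               # low 2 bits: number of hits
    for slot, pos in enumerate(positions):
        assert 0 <= pos < 1024           # 10 bits per position
        entry |= pos << (2 + 10 * slot)
    return entry

def unpack(entry):
    """Recover the list of positions from a packed entry."""
    count = entry & 0b11
    return [(entry >> (2 + 10 * s)) & 0x3FF for s in range(count)]
```

With this layout one 32-bit load yields all hit positions for the common case of at most three hits, which is what removes the extra memory accesses from the hit-detection inner loop.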

