straightforward application

Recently Published Documents

TOTAL DOCUMENTS: 126 (FIVE YEARS: 13)
H-INDEX: 14 (FIVE YEARS: 0)

The Holocene ◽  
2021 ◽  
pp. 095968362110604
Author(s):  
Carolina Senn ◽  
Willy Tinner ◽  
Vivian A Felde ◽  
Erika Gobet ◽  
Jacqueline FN van Leeuwen ◽  
...  

Past vegetation and biodiversity dynamics, reconstructed using palaeoecological methods, can contribute to assessing the magnitude of the current biodiversity crisis and to anticipating future risks and challenges. Among the different palaeoecological techniques, pollen analysis is probably the most widely used to reconstruct vegetation and plant-diversity changes through time. To be sound, such reconstructions demand robust and comprehensive calibration studies addressing the pollen representation of extant vegetation. However, calibration studies are rare in the Mediterranean biodiversity hotspot, particularly regarding plant diversity. Here, we contribute to filling this gap by investigating the modern pollen signature of Mediterranean vegetation across a large environmental gradient in northern Greece. At each sampling site (n = 61), we quantitatively compared the composition and diversity of plant assemblages (vegetation surveys) and pollen assemblages (moss/topsoil samples) using numerical techniques. Further, we compared these terrestrial pollen assemblages with those from lake sediment surface samples of the same region. We found an overall good match between plant and pollen assemblages, with maquis and mixed deciduous forest displaying particularly distinct pollen signatures. In contrast, the high regional importance of pines and oaks and their large pollen production blurred the pollen representation of other forested vegetation types as well as of shrublands and grasslands. Plant and pollen richness and evenness showed similar declining trends with increasing altitude, but plant and pollen evenness matched better than richness did. A more detailed, vegetation-specific view of the data suggests that pine pollen seriously affected pollen richness and evenness in most of the pine-dominated stands.
Lastly, our results suggest a rather straightforward application of vegetation-pollen relationships from moss/topsoil samples to interpret pollen assemblages from lakes in Mediterranean settings.
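The richness and evenness comparison above rests on standard diversity measures. A minimal sketch (the taxon counts are hypothetical, not from the study) of how species richness and Pielou's evenness could be computed for a single pollen assemblage:

```python
import math

def richness_and_evenness(counts):
    """Species richness and Pielou's evenness for one assemblage.

    counts: taxon -> abundance (e.g. pollen grains counted, or plant cover).
    """
    abundances = [c for c in counts.values() if c > 0]
    s = len(abundances)                      # richness: number of taxa present
    total = sum(abundances)
    # Shannon diversity H' = -sum(p_i * ln p_i)
    h = -sum((c / total) * math.log(c / total) for c in abundances)
    # Pielou's evenness J' = H' / ln(S); undefined for a single taxon
    j = h / math.log(s) if s > 1 else 0.0
    return s, j

# Hypothetical counts for one moss/topsoil sample in a pine-dominated stand
sample = {"Pinus": 150, "Quercus": 40, "Poaceae": 8, "Erica": 2}
s, j = richness_and_evenness(sample)
```

The strongly pine-dominated counts yield a low evenness (J' ≈ 0.51), illustrating numerically how heavy pine pollen production can depress assemblage evenness, as reported for the pine-dominated stands.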



2021 ◽  
Vol 68 (5) ◽  
pp. 1-39
Author(s):  
Bernhard Haeupler ◽  
Amirbehshad Shahrasbi

We introduce synchronization strings, which provide a novel way to efficiently deal with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to cope with than the more commonly considered Hamming-type errors, i.e., symbol substitutions and erasures. For every ε > 0, synchronization strings allow us to index a sequence with an ε^{-O(1)}-size alphabet such that one can efficiently transform k synchronization errors into (1 + ε)k Hamming-type errors. This powerful new technique has many applications. In this article, we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both Hamming-type errors and synchronization errors have been intensely studied, the latter have largely resisted progress. As Mitzenmacher puts it in his 2009 survey [30]: “Channels with synchronization errors...are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors...our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regimes, these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-strings-based indexing method gives a simple black-box construction that transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size.
This instantly transfers much of the highly developed understanding of regular ECCs into the realm of insdel codes. Most notably, for the complete noise spectrum, we obtain efficient “near-MDS” insdel codes, which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and ε > 0, we give a family of insdel codes achieving a rate of 1 - δ - ε over a constant-size alphabet that efficiently corrects a δ fraction of insertions or deletions.
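The indexing idea can be conveyed with a toy sketch. This is hypothetical and much weaker than real synchronization strings, which achieve the same effect with an alphabet of size only ε^{-O(1)} rather than the sequence length: attach an index to each symbol, and let the decoder turn insertions and deletions into erasures at fixed positions, i.e., Hamming-type errors.

```python
def index_encode(msg):
    # Pair each payload symbol with its position; the index plays the role
    # that the synchronization string plays in the real construction.
    return [(i, ch) for i, ch in enumerate(msg)]

def index_decode(received, n):
    # Rebuild an n-slot string: a slot is filled only if exactly one
    # received pair claims that index; otherwise it becomes an erasure '?'.
    slots = [[] for _ in range(n)]
    for idx, ch in received:
        if 0 <= idx < n:
            slots[idx].append(ch)
    return "".join(s[0] if len(s) == 1 else "?" for s in slots)

msg = "HELLO"
coded = index_encode(msg)
# Simulate two synchronization errors: delete the pair at position 2
# and insert a stray pair. Both end up as positional erasures.
corrupted = coded[:2] + coded[3:] + [(1, "X")]
decoded = index_decode(corrupted, len(msg))
```

The decoder output differs from the original only in erasures at known positions, which any standard ECC for Hamming-type errors can then correct.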



2021 ◽  
Vol 11 (21) ◽  
pp. 9892
Author(s):  
Ricard Borges ◽  
Francesc Sebé

Digital cash is a form of money that is stored digitally. Its main advantage over traditional credit or debit cards is the possibility of carrying out anonymous transactions. Diverse digital cash paradigms have been proposed during the last decades, providing different approaches to preventing double-spending fraud, or features like divisibility or transferability. This paper presents a new digital cash paradigm that includes so-called no-valued e-coins, which are e-coins that can be generated free of charge by customers. A vendor receiving a payment cannot distinguish whether the received e-coin is valued or not, but the customer will receive the requested digital item only in the former case. A straightforward application of bogus transactions involving no-valued e-coins is the masking of consumption patterns. This new paradigm has also proven its validity in the scope of privacy-preserving pay-by-phone parking systems, and we believe it can become a very versatile building block in the design of privacy-preserving protocols in other areas of research. This paper provides a formal description of the new paradigm, including the features required of each of its components, together with a formal analysis of its security.
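The property that a vendor cannot tell a valued e-coin from a no-valued one can be illustrated with a toy hiding commitment. This sketch is purely hypothetical and not the paper's construction, which relies on proper cryptographic machinery; it only shows the indistinguishability idea.

```python
import hashlib
import secrets

def mint_coin(valued: bool):
    # A coin commits to its value bit; the random nonce keeps the
    # commitment hiding, so the commitment alone reveals nothing.
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(bytes([valued]) + nonce).hexdigest()
    return {"commitment": commitment, "valued": valued, "nonce": nonce}

def vendor_view(coin):
    # The vendor only ever sees the commitment: valued and no-valued
    # coins look identical at payment time.
    return coin["commitment"]

def open_coin(coin):
    # At settlement, the commitment is opened; the digital item is
    # released only if the coin turns out to be valued.
    expected = hashlib.sha256(bytes([coin["valued"]]) + coin["nonce"]).hexdigest()
    return coin["valued"] if expected == coin["commitment"] else None
```

A customer can thus freely mix no-valued coins into bogus transactions to mask consumption patterns, since every payment looks the same to the vendor.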



Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5840
Author(s):  
Andreas Schwabauer ◽  
Marco Mancini ◽  
Yunus Poyraz ◽  
Roman Weber

The subject of this work is the mathematical modelling of a counter-current moving-bed gasifier fuelled by wood pellets. Two versions of the model have been developed: a one-dimensional (1D) version, which solves a set of ordinary differential equations along the gasifier height, and a three-dimensional (3D) version, in which the balance equations are solved using computational fluid dynamics. Unique procedures have been developed to provide unconditionally stable solutions and to remove the difficulties that occur when conventional numerical methods are used to model counter-current reactors. The procedures reduce the uncertainties introduced by other mathematical approaches, and they open up the possibility of straightforward application to more complex software, including commercial CFD packages. Previous models of Hobbs et al., Di Blasi and Mandl et al. used a correction factor to tune calculated temperatures to measured values. In this work, the factor is not required. Using the 1D model, the Mandl et al. 16.6 kW gasifier was scaled to 9.5 MW input; the 89% cold-gas efficiency observed at 16.6 kW input decreases only slightly to 84% at the 9.5 MW scale.
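Why "unconditionally stable" matters can be seen on the stiff linear test equation y' = λy, a generic illustration not taken from the paper: backward (implicit) Euler stays stable for any step size, whereas forward (explicit) Euler blows up once the step exceeds its stability limit, which is the kind of failure the authors' procedures are designed to avoid.

```python
def explicit_euler(lam, y0, h, steps):
    # Forward Euler: y_{n+1} = (1 + h*lam) * y_n.
    # Stable only if |1 + h*lam| <= 1.
    y = y0
    for _ in range(steps):
        y = y + h * lam * y
    return y

def implicit_euler(lam, y0, h, steps):
    # Backward Euler: y_{n+1} = y_n / (1 - h*lam).
    # Stable for any h > 0 when lam < 0: unconditionally stable.
    y = y0
    for _ in range(steps):
        y = y / (1 - h * lam)
    return y

lam, y0, h, steps = -50.0, 1.0, 0.1, 100   # stiff decay, deliberately large step
blow_up = explicit_euler(lam, y0, h, steps)  # amplification factor -4 per step
stable = implicit_euler(lam, y0, h, steps)   # decay factor 1/6 per step
```

The exact solution decays to zero; only the implicit scheme reproduces that behaviour at this step size.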



2021 ◽  
Author(s):  
Wenxiu Wang ◽  
Hamidreza Heydarian ◽  
Teun A.P.M. Huijben ◽  
Sjoerd Stallinga ◽  
Bernd Rieger

We present a fast particle fusion method for particles imaged with single-molecule localization microscopy. The state-of-the-art approach based on all-to-all registration has proven to work well, but its computational cost scales unfavourably with the number of particles N, namely as N². Our method overcomes this problem and achieves a linear scaling of computational cost with N by making use of the Joint Registration of Multiple Point Clouds (JRMPC) method. Straightforward application of JRMPC fails, as mostly locally optimal solutions are found. These usually contain several overlapping clusters that each consist of well-aligned particles but that have different poses. We solve this issue by repeated runs of JRMPC for different initial conditions, followed by a classification step to identify the clusters and a connection step to link the clusters obtained for different initializations. In this way a single well-aligned structure is obtained containing the majority of the particles. We achieve reconstructions of experimental DNA-origami datasets consisting of close to 400 particles within only 10 min on a CPU, with an image resolution of 3.2 nm. In addition, we show artifact-free reconstructions of symmetric structures without making any use of the symmetry. We also demonstrate that the method works well for poor data with a low density of labelling and for 3D data.
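The connection step can be sketched as a union-find pass over the cluster assignments of the individual runs: whenever some run places two particles in one cluster, those particles belong to the same consensus group. The assignments below are hypothetical, and the actual method operates on registered point clouds rather than bare labels; this only illustrates the linking logic.

```python
def link_clusters(runs, n_particles):
    """Link clusters found in repeated runs into consensus groups.

    runs: list of dicts mapping particle index -> cluster label in that run.
    """
    parent = list(range(n_particles))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for assignment in runs:
        by_label = {}
        for particle, label in assignment.items():
            by_label.setdefault(label, []).append(particle)
        for members in by_label.values():
            for m in members[1:]:
                union(members[0], m)

    groups = {}
    for p in range(n_particles):
        groups.setdefault(find(p), []).append(p)
    return list(groups.values())

# Run 1 finds two poses {0,1} and {2,3}; run 2 links them via particles 1 and 2,
# so all four particles end up in one consensus group.
runs = [{0: "a", 1: "a", 2: "b", 3: "b"}, {1: "c", 2: "c"}]
groups = link_clusters(runs, 4)
```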



2021 ◽  
Author(s):  
Pavlin G. Poličar ◽  
Martin Stražar ◽  
Blaž Zupan

Dimensionality reduction techniques, such as t-SNE, can construct informative visualizations of high-dimensional data. When jointly visualizing multiple data sets, a straightforward application of these methods often fails; instead of revealing underlying classes, the resulting visualizations expose dataset-specific clusters. To circumvent these batch effects, we propose an embedding procedure that uses a t-SNE visualization constructed on a reference data set as a scaffold for embedding new data points. Each data instance from a new, unseen, secondary data set is embedded independently and does not change the reference embedding. This prevents any interactions between instances in the secondary data and implicitly mitigates batch effects. We demonstrate the utility of this approach by analyzing six recently published single-cell gene expression data sets with up to tens of thousands of cells and thousands of genes. The batch effects in our studies are particularly strong, as the data come from different institutions using different experimental protocols. The visualizations constructed by our proposed approach are clear of batch effects, and the cells from secondary data sets correctly co-cluster with cells of the same type from the primary data. We also show that the predictive power of our simple, visual classification approach in t-SNE space matches the accuracy of specialized machine learning techniques that consider the entire compendium of features that profile single cells.
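The key property, that each secondary instance is placed independently while the reference embedding stays fixed, can be sketched as follows. This is a deliberate simplification: the actual procedure optimizes the t-SNE objective for the new point, whereas this toy merely interpolates over nearest reference neighbours in the high-dimensional space.

```python
import math

def embed_into_reference(new_point, ref_data, ref_embedding, k=3):
    """Place one unseen point into a fixed reference embedding.

    Simplified stand-in: position the point at the distance-weighted mean
    of the embedding coordinates of its k nearest reference neighbours.
    The reference embedding itself is never modified, so embedding one
    secondary point cannot interact with any other.
    """
    dists = sorted(
        (math.dist(new_point, x), y)
        for x, y in zip(ref_data, ref_embedding)
    )
    nearest = dists[:k]
    weights = [1.0 / (d + 1e-12) for d, _ in nearest]
    total = sum(weights)
    dim = len(nearest[0][1])
    return tuple(
        sum(w * y[i] for w, (_, y) in zip(weights, nearest)) / total
        for i in range(dim)
    )
```

A secondary point identical to a reference point lands on that point's embedding coordinates, so cells of a known type co-locate with the matching reference cluster.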



2021 ◽  
Vol 31 (3) ◽  
pp. 1-34
Author(s):  
Yuliya Butkova ◽  
Arnd Hartmanns ◽  
Holger Hermanns

Markov automata are a compositional modelling formalism with continuous stochastic time, discrete probabilities, and nondeterministic choices. In this article, we present extensions to Modest, an expressive high-level language with roots in process algebra, that allow large Markov automata models to be specified in a succinct, modular way. We illustrate the advantages of Modest over alternative languages. Model checking Markov automata models requires dedicated algorithms for time-bounded and long-run average reward properties. We describe and evaluate the state-of-the-art algorithms implemented in the mcsta model checker of the Modest Toolset. We find that mcsta improves the performance and scalability of Markov automata model checking compared to earlier and alternative tools. We explain a partial-exploration approach based on the BRTDP method designed to mitigate the state space explosion problem of model checking, and experimentally evaluate its effectiveness. This problem can be avoided entirely by purely simulation-based techniques, but the nondeterminism in Markov automata hinders their straightforward application. We explain how lightweight scheduler sampling can make simulation possible, and provide a detailed evaluation of its usefulness on several benchmarks using the Modest Toolset's modes simulator.
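The core idea of lightweight scheduler sampling is that a memoryless scheduler can be identified by a single integer: its choice in every state is derived by hashing the identifier together with the state, so resolving nondeterminism needs no per-state table. A minimal sketch, with a hypothetical two-action state space; the modes simulator's actual implementation differs in detail:

```python
import hashlib

def lss_action(scheduler_id, state, actions):
    # Derive the scheduler's choice deterministically from (id, state):
    # the same scheduler id always picks the same action in a state,
    # so simulating under one id means simulating one fixed scheduler.
    digest = hashlib.sha256(f"{scheduler_id}|{state}".encode()).digest()
    return actions[int.from_bytes(digest[:4], "big") % len(actions)]

# Hypothetical model fragment: a state with a nondeterministic choice.
actions = ["a", "b"]
choice1 = lss_action(scheduler_id=7, state="s0", actions=actions)
choice2 = lss_action(scheduler_id=7, state="s0", actions=actions)  # same choice
```

Sampling many scheduler identifiers and running simulations under each yields bounds on the property value over the sampled schedulers.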



Author(s):  
Wei Cheng ◽  
Sylvain Guilley ◽  
Claude Carlet ◽  
Jean-Luc Danger ◽  
Sihem Mesnager

This paper presents a unified approach to quantifying the information leakages in the most general code-based masking schemes. Specifically, by utilizing a uniform representation, we first highlight that the side-channel resistance of all code-based masking schemes can be quantified by an all-in-one framework consisting of two easy-to-compute parameters (the dual distance and the number of conditioned codewords) from a coding-theoretic perspective. In particular, we use the signal-to-noise ratio (SNR) and mutual information (MI) as two complementary metrics, where a closed-form expression of the SNR and an approximation of the MI are proposed by connecting both metrics to the two coding-theoretic parameters. Secondly, considering the connection between Reed-Solomon codes and Shamir's Secret Sharing (SSS), SSS-based masking is viewed as a particular case of generalized code-based masking. Hence, as a straightforward application, we evaluate the impact of public points on the side-channel security of SSS-based masking schemes, namely polynomial masking, and enhance SSS-based masking by choosing optimal public points for it. Interestingly, we show that, at a given security order, more shares in SSS-based masking leak more information about the secrets in an information-theoretic sense. Finally, our approach provides a systematic method for optimizing the side-channel resistance of every code-based masking scheme. More precisely, this approach enables us to select optimal linear codes (parameters) for generalized code-based masking by choosing appropriate codes according to the two coding-theoretic parameters. Summing up, we provide a best-practice guideline for the application of code-based masking to protect cryptographic implementations.
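One of the two coding-theoretic parameters, the dual distance, is the minimum Hamming weight of a nonzero codeword of the dual code. A brute-force computation for a tiny binary code, purely for illustration (real parameter selection works over larger fields and codes):

```python
from itertools import product

def dual_distance(G, n):
    """Dual distance of a binary linear code with generator matrix G.

    Enumerates all nonzero length-n binary vectors orthogonal (mod 2)
    to every row of G, i.e. the nonzero dual codewords, and returns
    the minimum Hamming weight found (None if the dual is trivial).
    Exponential in n: illustration only.
    """
    best = None
    for v in product((0, 1), repeat=n):
        if not any(v):
            continue
        if all(sum(g * x for g, x in zip(row, v)) % 2 == 0 for row in G):
            w = sum(v)
            best = w if best is None or w < best else best
    return best

# [4,1] repetition code; its dual is the even-weight code, min weight 2.
G = [[1, 1, 1, 1]]
d_perp = dual_distance(G, 4)
```

In the paper's framework, codes with larger dual distance provide better side-channel resistance, which is what makes this parameter a selection criterion.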



Author(s):  
Sharanya Sankar ◽  
Kate O’Neill ◽  
Maurice Bagot D’Arc ◽  
Florian Rebeca ◽  
Marie Buffier ◽  
...  

RADA16 is a synthetic peptide that exists as a viscous solution in an acidic formulation. In an acidic aqueous environment, the peptides spontaneously self-assemble into β-sheet nanofibers. Upon exposure to, and buffering by, biological fluids at physiological pH, such as blood, interstitial fluid and lymph, the nanofibers physically crosslink within seconds into a stable, interwoven, transparent 3-D hydrogel matrix. The RADA16 nanofiber hydrogel closely resembles the three-dimensional architecture of native extracellular matrices. These properties make RADA16 formulations ideal topical hemostatic agents for controlling bleeding during surgery and for preventing post-operative rebleeding. A commercial RADA16 formulation is currently used for hemostasis in cardiovascular, gastrointestinal, and otorhinolaryngological surgical procedures, and studies are underway to investigate its use in wound healing and adhesion reduction. Straightforward application of viscous RADA16 into areas that are not easily accessible circumvents technical challenges at difficult-to-reach bleeding sites. The transparent hydrogel allows clear visualization of the surgical field and facilitates suture line assessment and revision. The shear-thinning and thixotropic properties of RADA16 allow its easy application through a narrow nozzle such as an endoscopic catheter. RADA16 hydrogels can fill tissue voids and do not swell, so they can be safely used in close proximity to pressure-sensitive tissues and in enclosed, non-expandable regions. Being fully synthetic, the peptide avoids the potential microbiological contamination and immune responses that may occur with animal-, plant-, or mineral-derived topical hemostats.
In vitro experiments, animal studies, and recent clinical experiences suggest that RADA16 nanofibrous hydrogels can act as surrogate extracellular matrices that support cellular behavior and interactions essential for wound healing and for tissue regenerative applications. In the future, the unique nature of RADA16 may also allow us to use it as a depot for precisely regulated drug and biopharmaceutical delivery.



2021 ◽  
Vol 57 (6) ◽  
Author(s):  
Jens O. Andersen

Magnetic catalysis is the enhancement of a condensate due to the presence of an external magnetic field. Magnetic catalysis at T = 0 is a robust phenomenon in low-energy theories and models of QCD as well as in lattice simulations. We review the underlying physics of magnetic catalysis from both perspectives. The quark-meson model is used as a specific example of a model that exhibits magnetic catalysis. Regularization and renormalization are discussed, and we pay particular attention to a consistent and correct determination of the parameters of the Lagrangian using the on-shell renormalization scheme. A straightforward application of the quark-meson model and the NJL model leads to the prediction that the chiral transition temperature T_χ increases as a function of the magnetic field B. This is in disagreement with lattice results, which show that T_χ is a decreasing function of B, independent of the pion mass. The behavior can be understood in terms of the so-called valence and sea contributions to the quark condensate and the competition between them. We critically examine these ideas as well as recent attempts to improve low-energy models using lattice input.


