daunting task
Recently Published Documents

TOTAL DOCUMENTS: 734 (FIVE YEARS: 321)
H-INDEX: 26 (FIVE YEARS: 6)

2022 ◽  
Vol 10 (1) ◽  
pp. 92
Author(s):  
Lenaïg G. Hemery ◽  
Kailan F. Mackereth ◽  
Levy G. Tugade

Marine energy devices are installed in highly dynamic environments and have the potential to affect the benthic and pelagic habitats around them. Regulatory bodies often require baseline characterization and/or post-installation monitoring to determine whether changes in these habitats are being observed. However, a great diversity of technologies is available for surveying and sampling marine habitats, and selecting the most suitable instrument to identify and measure changes in habitats at marine energy sites can become a daunting task. We conducted a thorough review of journal articles, survey reports, and grey literature to extract information about the technologies used, the data collection and processing methods, and the performance and effectiveness of these instruments. We examined documents related to marine energy development, offshore wind farms, oil and gas offshore sites, and other marine industries around the world over the last 20 years. A total of 120 different technologies were identified across six main habitat categories: seafloor, sediment, infauna, epifauna, pelagic, and biofouling. The technologies were organized into 12 broad technology classes: acoustic, corer, dredge, grab, hook and line, net and trawl, plate, remote sensing, scrape samples, trap, visual, and others. Visual was the most common and the most diverse technology class, with applications across all six habitat categories. Technologies and sampling methods that are designed for working efficiently in energetic environments have greater success at marine energy sites. In addition, sampling designs and statistical analyses should be carefully thought through to identify differences in faunal assemblages and spatiotemporal changes in habitats.
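
As a purely illustrative aside, the review's two-level taxonomy lends itself to a simple lookup structure. The sketch below is a hypothetical Python mapping from habitat category to technology classes; apart from the visual class, which the review reports across all six categories, the individual assignments are placeholders, not the review's actual findings.

```python
# Hypothetical sketch: the review's taxonomy as a mapping from habitat
# category to applicable technology classes. Only "visual" appearing in
# every category is taken from the abstract; other assignments are
# illustrative placeholders.
HABITAT_TECHNOLOGIES = {
    "seafloor":   {"acoustic", "remote sensing", "visual"},
    "sediment":   {"corer", "grab", "visual"},
    "infauna":    {"corer", "grab", "dredge", "visual"},
    "epifauna":   {"dredge", "net and trawl", "scrape samples", "visual"},
    "pelagic":    {"acoustic", "net and trawl", "hook and line", "visual"},
    "biofouling": {"plate", "scrape samples", "visual"},
}

def candidate_classes(habitat):
    """Return the technology classes recorded for a habitat category."""
    return HABITAT_TECHNOLOGIES.get(habitat, set())

print(candidate_classes("pelagic"))
```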


2022 ◽  
Author(s):  
Priyadarshini Rai ◽  
Atishay Jain ◽  
Neha Jha ◽  
Divya Sharma ◽  
Shivani Kumar ◽  
...  

Dysregulation of a gene's function, whether due to mutations or impairments in regulatory networks, often triggers pathological states in the affected tissue. Comprehensively mapping these gene–pathology relationships is an ever-daunting task, primarily due to genetic pleiotropy and a lack of suitable computational approaches. With the advent of high-throughput genomics platforms and community-scale initiatives such as the Human Cell Landscape (HCL) project [1], researchers have been able to create gene expression portraits of healthy tissues resolved at the level of single cells. However, a similar wealth of knowledge is not yet at our fingertips when it comes to diseases, because the genetic manifestation of a disease is often quite heterogeneous and is confounded by several clinical and demographic covariates. To circumvent this, we mined ≈18 million PubMed abstracts published through May 2019 and selected ≈6.1 million of them that describe the pathological role of genes in different diseases. We then employed a word embedding technique from the domain of Natural Language Processing (NLP) to learn vector representations of entities such as genes, diseases, and tissues, such that their relationships are preserved in the vector space. Notably, the resulting method, Pathomap, by virtue of its underpinning theory, also learns transitive relationships. Pathomap's vector representations indicated a possible association of DNMT3A and BCOR with CYLD cutaneous syndrome (CCS); the first manuscript reporting this finding was not part of our training data.
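
As a hedged illustration of the embedding idea (not the authors' actual Pathomap pipeline), the sketch below trains a word2vec-style model with gensim on a toy corpus of tokenized abstracts, so that co-mentioned entities end up near each other in vector space; the corpus and entity tokens are invented for demonstration.

```python
# Minimal sketch of the general idea: train word embeddings on tokenized
# abstracts so that co-mentioned entities (genes, diseases, tissues) land
# close together in vector space. Word2Vec stands in for the paper's
# embedding method, which may differ.
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a tokenized abstract with entity
# mentions normalized to single tokens (illustrative data only).
corpus = [
    ["DNMT3A", "mutation", "reported", "in", "CYLD_cutaneous_syndrome"],
    ["BCOR", "associated", "with", "CYLD_cutaneous_syndrome"],
    ["DNMT3A", "regulates", "methylation", "in", "skin", "tissue"],
]

model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=50)

# Nearby vectors suggest a putative gene-disease association.
print(model.wv.most_similar("DNMT3A", topn=3))
```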


2022 ◽  
Author(s):  
Tahereh Salehi ◽  
Mariam Zomorodi ◽  
Paweł Pławiak ◽  
Mina Abbaszade ◽  
Vahid Salari

Quantum computing is an advanced field that refers to computation based on the principles of quantum mechanics, and it makes certain kinds of problems easier to solve than on classical computers. This advantage can be applied to many existing problems in different fields with remarkable effect. One important field in which quantum computing has shown great results is machine learning, and many quantum algorithms have been proposed to perform different machine learning approaches. In some special cases, their execution time is reduced exponentially compared to classical counterparts. At the same time, as data volume and computation time grow, shielding these systems from unwanted interactions with the environment becomes a daunting task, and since these algorithms target machine learning problems, which usually involve big data, their implementation is very costly in terms of quantum resources. In this paper, we propose an approach to reduce the cost of quantum circuits, and of quantum machine learning circuits in particular. To reduce the number of resources used, the approach combines several optimization algorithms and is applied to quantum machine learning algorithms for big data. The optimized circuits run quantum machine learning algorithms in less time than the original ones while preserving the original functionality. Our approach reduces the number of quantum gates by 10.7% and 14.9% in two different circuits, and the number of time steps by three and 15 units, respectively. These figures correspond to one iteration of a given sub-circuit U in the main circuit; for cases where this sub-circuit is repeated more often, the optimization rate increases. Therefore, by applying the proposed method to circuits for big data, both cost and performance are improved.
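
The abstract's central metrics are gate count and time steps before versus after optimization. As a minimal illustration of measuring such reductions (using Qiskit's stock transpiler, not the authors' optimization algorithms), consider:

```python
# Illustrative only: compare gate count and depth of a circuit before and
# after optimization, here via Qiskit's transpiler rather than the
# paper's own optimization approach.
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.cx(0, 1)      # redundant pair: CX followed by CX cancels to identity
qc.cx(0, 1)
qc.h(0)
qc.h(0)          # redundant pair: H followed by H cancels to identity

optimized = transpile(qc, optimization_level=3)

print("gates before:", sum(qc.count_ops().values()), "depth:", qc.depth())
print("gates after: ", sum(optimized.count_ops().values()),
      "depth:", optimized.depth())
```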


2022 ◽  
pp. 57-78

This chapter examines the notions of stigma, bias, and the myth of poverty reduction, focusing specifically on rural poor populations in nations that fell behind in implementing the global poverty reduction targets, the majority of them in Sub-Saharan Africa. The task is to examine various characterizations of myth and stigma in historical discourse and to explain the processes and mechanisms by which myth and stigma mediate various tensions within that discourse. First, the chapter describes characterizations of stigma and misconceptions of poverty; second, it explains the barriers to, and daunting task of, poverty reduction; and third, it shows how negative perceptions of poverty ultimately complicate the implementation of the poverty reduction agenda.


2022 ◽  
pp. 310-326
Author(s):  
Adebowale Jeremy Adetayo

The current competitive environment is significantly reshaping libraries' learning processes: an information explosion provides raw material that can be transformed into knowledge. Business intelligence tools have long exploited this opportunity in other sectors, but integrating them into libraries remains a daunting task. This chapter applies the concept of absorptive capacity to smart libraries from the perspective of Schöpel's multidimensional model. Literature was thoroughly reviewed from credible sources such as ISI Web of Knowledge and Scopus. The contribution to the literature is a path to smart library development through absorptive capacity: a library intelligence model that explains how the absorptive capacity process leads to smart services, people, places, and governance. The chapter thus presents a unique integration of two concepts, absorptive capacity and the smart library, which allows the development of better library practices by deriving benefits from these investments and facilitating intelligence creation inside libraries.


2022 ◽  
Author(s):  
Daria Kleeva ◽  
Gurgen Soghoyan ◽  
Ilia Komoltsev ◽  
Mikhail Sinkin ◽  
Alexei Ossadtchi

Epilepsy is a widespread neurological disease whose treatment often requires resection of the pathological cortical tissue. Analysis of interictal spikes observed in non-invasively collected EEG or MEG data offers an attractive way to localize epileptogenic cortical structures for surgery planning. Interictal spike detection in lengthy multichannel data is a daunting task that is still often performed manually, which frequently limits the analysis to a small portion of the data and carries the risk of missing the potentially epileptogenic region. While a plethora of automatic spike detection techniques have been developed, each with its own assumptions and limitations, none of them is ideal, and the best results are achieved when the outputs of several automatic spike detectors are combined; this is especially true in low signal-to-noise ratio conditions. To this end, we propose a novel biomimetic approach for automatic spike detection based on constrained mixed-spline machinery that we dub fast parametric curve matching (FPCM). Using a peak-wave shape parametrization, a constrained parametric morphological model is constructed and convolved with the observed multichannel data to efficiently determine the mixed-spline parameters corresponding to each time point in the dataset. Logical predicates that map directly to verbalized, textbook-like descriptions of the expected interictal event morphology then allow us to accomplish the spike detection task. Simulations mimicking a typical low-SNR scenario show the robustness and high ROC AUC values of the FPCM method compared to spike detection performed with more conventional approaches such as wavelet decomposition, template matching, or simple amplitude thresholding. Applied to real MEG and EEG data from human patients and to rat ECoG data, the FPCM technique demonstrates reliable detection of interictal events and localization of epileptogenic zones concordant with the independent conclusions of the epileptologist. Since FPCM is computationally light, tolerant to high-amplitude artifacts, and flexible enough to accommodate verbalized descriptions of arbitrary target morphology, it may complement the existing arsenal of tools for the analysis of noisy interictal datasets.
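
As a greatly simplified stand-in for the detection idea (plain template matching rather than FPCM's constrained mixed-spline fitting), the sketch below cross-correlates a signal with a peak-wave-shaped kernel and applies an amplitude predicate; the template shape and threshold are illustrative assumptions.

```python
# Simplified stand-in for the detection pipeline: cross-correlate the
# signal with a spike-like "peak-wave" template, then keep correlation
# maxima that pass an amplitude predicate. This is plain template
# matching, not FPCM's constrained mixed-spline fitting.
import numpy as np
from scipy.signal import find_peaks

def detect_spikes(signal, template, threshold):
    # Normalize the template, then cross-correlate (convolve with the
    # reversed template) so matches show up as large positive peaks.
    t = (template - template.mean()) / (template.std() + 1e-12)
    score = np.convolve(signal, t[::-1], mode="same")
    # Logical predicate: a detection is a correlation peak above threshold.
    peaks, _ = find_peaks(score, height=threshold)
    return peaks

# Toy example: a sharp peak followed by a slow wave (illustrative shape).
template = np.concatenate([np.hanning(20), -0.4 * np.hanning(60)])
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.3, 2000)
signal[500:580] += 3 * template        # embed one synthetic "spike"
print(detect_spikes(signal, template, threshold=15.0))
```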


2021 ◽  
pp. 575-580
Author(s):  
Yuliia Tatarinova ◽  
Olha Sinelnikova

Prioritizing bug fixes becomes a daunting task as the number of vulnerability disclosure programs grows. When making a decision, not only the Common Vulnerability Scoring System (CVSS) score but also the probability of exploitation and the trend around a particular security issue should be taken into account. This paper discusses sources and approaches for measuring the degree of public interest in a specific vulnerability at a particular point in time. The research presents a new metric and an estimation model based on vulnerability assessment. We compared several techniques to determine the most suitable approach and the most relevant sources for improving vulnerability management and prioritization. We chose the Google Trends analytics tool to gather trend data, distinguish the main features, and build a dataset. The result of this study is a regression equation that helps prioritize vulnerabilities efficiently by considering the public interest in a particular security issue. The proposed method estimates the popularity of Common Vulnerabilities and Exposures (CVE) entries using public resources.
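
As a hedged sketch of the final step (fitting a regression that maps trend-derived features to a priority score), the example below uses scikit-learn on invented data; the feature set and all values are assumptions, not the paper's actual features or dataset.

```python
# Hypothetical sketch: fit a linear regression mapping trend-derived
# features to a priority score. The features (mean trend volume over
# 30 days, trend slope, CVSS base score) and the training data are
# illustrative assumptions, not the paper's actual dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [mean_trend_30d, trend_slope, cvss_base]; toy values.
X = np.array([
    [80.0,  5.2, 9.8],   # e.g. a widely discussed critical CVE
    [12.0,  0.3, 7.5],
    [ 3.0, -0.1, 4.3],
    [45.0,  2.0, 8.8],
])
y = np.array([9.5, 6.0, 2.5, 8.0])  # analyst-assigned priority (toy)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Score a new vulnerability from its (hypothetical) trend features.
print("priority:", model.predict([[60.0, 3.1, 9.1]])[0])
```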


2021 ◽  
Vol 6 (4) ◽  
Author(s):  
Christopher U. Onova ◽  
Temidayo O. Omotehinwa

Combatting email spam has remained a very daunting task. Despite accuracy above 99% in most non-image-based spam email detection, studies on image-based spam rarely attain such accuracy, as new spamming techniques that defeat existing filters emerge from time to time. The number of spam emails sent daily has remained a key factor in the continued use of spam. In this paper, a simple convolutional neural network model, 123DNet, was developed and trained on 28,929 images drawn from two public datasets and a personally generated dataset. The model was optimized down to a minimal set of layers: one input layer, two convolutional layers as the hidden layers, and three neural network layers. The model was tested with a total of 4,339 images from the three dataset samples and then with a separate set of 1,200 images to evaluate performance on never-before-seen images. A classification performance analysis was carried out using the confusion matrix, and performance metrics including accuracy, precision, true negative accuracy, sensitivity, specificity, and F1 measure were computed to ascertain the model's performance. The model returned an F1 score of 97% on a public dataset's test sample and 88% on never-before-seen test samples, outperforming some pre-existing models while performing significantly well on the newly generated image test samples. It is recommended that a model that performs so well on new, never-before-seen spam images be integrated into spam filtering systems.

Keywords: Convolutional Neural Network, Deep Learning, Image-based Spam Detection
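
A hedged Keras sketch of a network with the layer counts described for 123DNet (one input, two convolutional, three dense layers) might look as follows; the input shape, filter counts, kernel sizes, and pooling layers are assumptions, as the abstract does not give hyperparameters.

```python
# Hedged sketch of a CNN with the layer counts described for 123DNet
# (1 input, 2 convolutional, 3 dense layers). Input shape, filter
# counts, kernel sizes, and pooling are assumptions; the paper's actual
# hyperparameters are not given in the abstract.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # assumed image size
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # spam vs. ham
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```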

