Deep learning enabled inverse design in nanophotonics

Nanophotonics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 1041-1057 ◽  
Author(s):  
Sunae So ◽  
Trevon Badloe ◽  
Jaebum Noh ◽  
Jorge Bravo-Abad ◽  
Junsuk Rho

Abstract Deep learning has become the dominant approach in artificial intelligence for solving complex data-driven problems. Originally applied almost exclusively in computer-science areas such as image analysis and natural language processing, deep learning has rapidly entered a wide variety of scientific fields including physics, chemistry and materials science. Very recently, deep neural networks have been introduced in the field of nanophotonics as a powerful way of obtaining the nonlinear mapping between the topology and composition of arbitrary nanophotonic structures and their associated functional properties. In this paper, we discuss the recent progress in the application of deep learning to the inverse design of nanophotonic devices, focusing mainly on the three existing learning paradigms of supervised, unsupervised, and reinforcement learning. Deep learning forward modelling, i.e. how artificial intelligence learns to solve Maxwell’s equations, is also discussed, along with an outlook on this rapidly evolving research area.
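As a hedged illustration of the forward-modelling idea mentioned in this abstract, the sketch below (not taken from the paper; the parameter count, layer sizes, and spectrum resolution are all assumptions) shows how a simple multilayer perceptron could learn the mapping from a handful of structural parameters to a transmission spectrum, acting as a fast surrogate for a Maxwell solver once trained on simulated data.

```python
# Minimal sketch of a deep-learning forward model for nanophotonics:
# an MLP maps structural parameters (e.g. layer thicknesses) to a
# transmission spectrum. All dimensions and names are illustrative.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    def __init__(self, n_params=5, n_wavelengths=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_wavelengths), nn.Sigmoid(),  # transmission in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = ForwardModel()
geometry = torch.rand(32, 5)             # batch of random design parameters
spectra = model(geometry)                # predicted spectra, shape (32, 100)
loss = nn.MSELoss()(spectra, torch.rand(32, 100))  # vs. simulated ground truth
loss.backward()
```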

2021 ◽  
Author(s):  
Revathi B. S. ◽  
A. Meena Kowshalya

Abstract Image Captioning is the process of generating textual descriptions of an image. These descriptions need to be syntactically and semantically correct. Image Captioning has potential advantages in many applications, such as image indexing, assistive devices for visually impaired persons, social media, and several other natural language processing applications. It is a popular research area with numerous open problems in dataset preparation, language model generation, model development, and evaluation. This paper extensively surveys the literature, from the advent of Artificial Intelligence, through the Machine Learning pathway and the photography era, to the early and current Deep Learning methodologies for Image Captioning. This survey should help novice researchers understand the roadmap to current techniques.
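For orientation, a minimal sketch of the encoder-decoder pattern that underlies most current captioning models is given below; it is illustrative only (the feature dimension, vocabulary size, and token shapes are assumptions, and a real system would pair this decoder with a pretrained CNN encoder).

```python
# Minimal captioning decoder sketch: precomputed image features
# condition an LSTM that emits a word sequence.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)  # image feature -> initial state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.out(hidden)  # logits over the vocabulary at each step

decoder = CaptionDecoder()
feats = torch.rand(4, 2048)                # e.g. pooled CNN features
caps = torch.randint(0, 10000, (4, 12))    # token ids of reference captions
logits = decoder(feats, caps)              # shape (4, 12, 10000)
```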


Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition fields, presented with the goal of drawing Machine Learning nearer to one of its original objectives: Artificial Intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model high-level abstractions in data and to learn hierarchical representations for classification. In recent years it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of deep learning techniques for remote sensing image classification.
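A minimal sketch of the kind of supervised CNN classifier surveyed here is shown below; the band count, patch size, and class count are assumptions chosen for illustration, not taken from any surveyed system.

```python
# Small supervised CNN for patch-wise remote sensing classification,
# assuming 4-band (e.g. RGB + near-infrared) 64x64 patches and 10 classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 10),   # 64x64 input halved twice -> 16x16 maps
)

patches = torch.rand(8, 4, 64, 64)           # batch of multispectral patches
logits = model(patches)                       # shape (8, 10)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
```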


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose origins some trace to Kurt Gödel's unprovable computational statements of 1931, is now called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and made effective by big data capturing the present and the past, while still inevitably embedding human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories, and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in research involving health-medical information analysis based on artificial intelligence, especially deep learning techniques, has recently been increasing. Most research in this field has focused on finding new knowledge for predicting and diagnosing disease by revealing relations between diseases and various informational features of the data. These features are extracted by analyzing clinical pathology data, such as EHRs (electronic health records), and academic literature using data analysis, natural language processing, and related techniques. However, more research and interest are still needed in applying the latest advanced artificial intelligence-based data analysis techniques to bio-signal data, i.e., continuous physiological records such as EEG (electroencephalography) and ECG (electrocardiogram). Unlike other types of data, applying deep learning to bio-signal data, which takes the form of real-number time series, raises many issues that need to be resolved in preprocessing, learning, and analysis, including feature selection being left to the user, learning components that act as black boxes, difficulty in recognizing and identifying effective features, and high computational complexity. In this paper, to address these issues, we provide an encoding-based Wave2vec time series classifier model, which combines signal processing with deep learning-based natural language processing techniques. To demonstrate its advantages, we present the results of three experiments conducted with the University of California Irvine EEG data, a real-world benchmark bio-signal dataset. The bio-signals, which are real-number time series in the form of waves, are first encoded into a sequence of symbols, or into a sequence of wavelet patterns that are then converted into symbols; the proposed model vectorizes the symbols by learning the sequence using deep learning-based natural language processing. A model for each class can then be constructed by learning from the vectorized wavelet patterns and the training data, and the resulting models can be used for the prediction and diagnosis of diseases by classifying new data. The proposed method enhances data readability and the intuitiveness of feature selection and learning by converting the real-number time series into sequences of symbols, and it facilitates the intuitive recognition and identification of influential patterns. Furthermore, by drastically reducing computational complexity through encoding-based data simplification without degrading analysis performance, it enables the real-time, large-capacity data analysis that is essential to developing real-time diagnosis systems.
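The following sketch approximates the encoding idea described above, not the paper's exact Wave2vec pipeline: the bin count, word length, and embedding settings are assumptions. A real-valued signal is discretized into symbols (SAX-style quantile binning), consecutive symbols are grouped into pattern "words", and a word2vec-style model learns vector representations of those words.

```python
# Approximate sketch: real-number time series -> symbol sequence ->
# word2vec-style embeddings of recurring patterns.
import numpy as np
from gensim.models import Word2Vec

def symbolize(signal, n_bins=8):
    """Map each sample to a letter by quantile binning (SAX-like)."""
    edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    return [chr(ord('a') + b) for b in np.digitize(signal, edges)]

signal = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.randn(1000)
symbols = symbolize(signal)
# Group consecutive symbols into fixed-length "words" (window = 4 samples).
words = [''.join(symbols[i:i + 4]) for i in range(0, len(symbols) - 3, 4)]
# Learn vector representations of the pattern "words", as in NLP.
model = Word2Vec([words], vector_size=32, window=5, min_count=1, epochs=20)
vector = model.wv[words[0]]   # embedding of the first pattern, shape (32,)
```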


2018 ◽  
Vol 103 (2) ◽  
pp. 167-175 ◽  
Author(s):  
Daniel Shu Wei Ting ◽  
Louis R Pasquale ◽  
Lily Peng ◽  
John Peter Campbell ◽  
Aaron Y Lee ◽  
...  

Artificial intelligence (AI) based on deep learning (DL) has sparked tremendous global interest in recent years. DL has been widely adopted in image recognition, speech recognition and natural language processing, but is only beginning to impact on healthcare. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography and visual fields, achieving robust classification performance in the detection of diabetic retinopathy and retinopathy of prematurity, the glaucoma-like disc, macular oedema and age-related macular degeneration. DL in ocular imaging may be used in conjunction with telemedicine as a possible solution to screen, diagnose and monitor major eye diseases for patients in primary care and community settings. Nonetheless, there are also potential challenges with DL application in ophthalmology, including clinical and technical challenges, explainability of the algorithm results, medicolegal issues, and physician and patient acceptance of the AI ‘black-box’ algorithms. DL could potentially revolutionise how ophthalmology is practised in the future. This review provides a summary of the state-of-the-art DL systems described for ophthalmic applications, potential challenges in clinical deployment and the path forward.
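Purely as an illustration of the transfer-learning recipe common to systems like those reviewed here (none of which is reproduced), the sketch below adapts a ResNet backbone for a hypothetical two-class referable-disease grading task on fundus photographs; the class count and input pipeline are assumptions.

```python
# Transfer-learning sketch: repurpose a ResNet for fundus photograph grading.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)   # use weights="IMAGENET1K_V1" in practice
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. referable vs. non-referable

images = torch.rand(4, 3, 224, 224)        # batch of preprocessed fundus photographs
logits = backbone(images)                   # shape (4, 2)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 1, 0]))
```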


News is a routine part of everyone's life, helping to enhance our knowledge of what happens around the world. Fake news is fictional information made up with the intention to deceive, so the knowledge acquired from it becomes useless. As fake news spreads extensively, it has a negative impact on society, and fake news detection has therefore become an emerging research area. This paper presents a solution to fake news detection using deep learning and Natural Language Processing. The dataset is trained using a deep neural network; it must be well formatted before being given to the network, which is made possible using Natural Language Processing techniques, and the system then predicts whether a news item is fake or not.
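A minimal sketch of the described pipeline follows; the preprocessing choices, network size, and toy examples are assumptions, not the paper's exact configuration. Text is converted to TF-IDF features, and a small feed-forward network is trained to separate fake from real.

```python
# NLP preprocessing (TF-IDF) feeding a small neural classifier.
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["scientists confirm water is wet", "celebrity endorses miracle cure"]
labels = torch.tensor([0, 1])               # 0 = real, 1 = fake (toy data)

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
features = torch.tensor(vectorizer.fit_transform(texts).toarray(),
                        dtype=torch.float32)

classifier = nn.Sequential(
    nn.Linear(features.shape[1], 64), nn.ReLU(),
    nn.Linear(64, 2),                        # real / fake logits
)
loss = nn.CrossEntropyLoss()(classifier(features), labels)
loss.backward()
```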


2020 ◽  
Vol 9 (1) ◽  
pp. 2663-2667

In this century, Artificial Intelligence (AI) has gained a lot of popularity because of the performance of AI models with good accuracy scores. Natural Language Processing (NLP), a major subfield of AI, deals with the analysis and processing of huge amounts of natural language data. Text Summarization is one of the major applications of NLP: when we have large news articles or reviews and need their gist within a short period of time, summarization is useful. Text Summarization also finds a unique place in many applications such as patent research, help desks, and customer support. There are numerous ways to build a Text Summarization model, but this paper focuses on building one using the seq2seq architecture and the TensorFlow API.
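In the spirit of the approach described above, here is a minimal Keras encoder-decoder (seq2seq) summarizer built with the TensorFlow API; the vocabulary size, embedding dimension, and unit counts are assumptions, and a real model would add attention and a training corpus.

```python
# Minimal seq2seq summarizer: an encoder LSTM reads article tokens,
# and a decoder LSTM generates summary tokens from the encoder state.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim, units = 10000, 128, 256

# Encoder: reads the article and produces a context state.
enc_inputs = tf.keras.Input(shape=(None,))
enc_emb = layers.Embedding(vocab_size, embed_dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generates summary tokens conditioned on the encoder state.
dec_inputs = tf.keras.Input(shape=(None,))
dec_emb = layers.Embedding(vocab_size, embed_dim)(dec_inputs)
dec_out, _, _ = layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = layers.Dense(vocab_size)(dec_out)

model = tf.keras.Model([enc_inputs, dec_inputs], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```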


2021 ◽  
Vol 17 (14) ◽  
pp. 103-118
Author(s):  
Mohammed Enamul Hoque ◽  
Kuryati Kipli

Image recognition and understanding is considered a remarkable subfield of Artificial Intelligence (AI). In practice, retinal image data have high dimensionality, leading to enormous data sizes. As morphological retinal image datasets can be analyzed in an expansive and non-invasive way, AI, and more precisely Deep Learning (DL) methods, are facilitating the development of intelligent retinal image analysis tools. The Convolutional Neural Network (CNN), a recently developed DL technique, has shown remarkable efficiency in identifying, localizing, and quantifying the complex, hierarchical image features that are indicative of severe cardiovascular diseases. Different deep-layered CNN architectures, such as LeNet, AlexNet, and ResNet, have been developed by exploiting CNN morphology. This wide variety of CNN structures can iteratively learn the complex structure of different datasets through supervised or unsupervised learning, and can independently perform exquisite feature-recognition analysis to diagnose threatening cardiovascular diseases. In modern ophthalmic practice, DL-based automated methods are used in retinopathy screening, grading, and in identifying and quantifying pathological features to guide further therapeutic approaches, offering wide potential to reduce the complexity of ophthalmic workflows. In this review, recent advances in DL technologies for retinal image segmentation and feature extraction are extensively discussed. To accomplish this study, pertinent materials were extracted from publicly available databases and online sources using relevant keywords, including retinal imaging, artificial intelligence, deep learning, and retinal database. The reference lists of the selected articles were further investigated for associated publications.
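As a hedged illustration of the segmentation theme of this review (not any reviewed system), the sketch below uses a tiny fully convolutional network for pixel-wise retinal vessel segmentation; depth and channel counts are assumptions, and practical systems use deeper U-Net-style architectures.

```python
# Tiny fully convolutional network producing a per-pixel vessel logit.
import torch
import torch.nn as nn

segmenter = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1),    # per-pixel vessel logit
)

fundus = torch.rand(2, 3, 128, 128)             # batch of fundus image crops
vessel_logits = segmenter(fundus)               # shape (2, 1, 128, 128)
mask = torch.randint(0, 2, (2, 1, 128, 128)).float()
loss = nn.BCEWithLogitsLoss()(vessel_logits, mask)
```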


2021 ◽  
Vol 11 (24) ◽  
pp. 12116
Author(s):  
Shanza Abbas ◽  
Muhammad Umair Khan ◽  
Scott Uk-Jin Lee ◽  
Asad Abbas

Natural language interfaces to databases (NLIDB) have been a research topic for a decade. Significant data collections are available in the form of databases, and a system that can translate a natural language query into a structured one can make a huge difference in utilizing them for research purposes. Efforts toward such systems have been made with pipelining methods for more than a decade, in which natural language processing techniques are integrated with data science methods. With significant advancements in machine learning and natural language processing, NLIDB with deep learning has emerged as a new research trend in this area, and deep learning has shown potential for rapid growth and improvement in text-to-SQL tasks. In deep learning NLIDB, closing the semantic gap in predicting users’ intended columns has arisen as one of the critical and fundamental problems in this research field. Contributions toward this issue have consisted of preprocessing feature inputs and encoding schema elements before they reach the targeted model, where they have more impact. Notwithstanding the significant work contributed toward this problem, it remains one of the critical issues in developing NLIDB. Working toward closing the semantic gap between user intention and predicted columns, we present an approach for deep learning text-to-SQL tasks that includes the occurrence scores of previously used columns as an additional input feature. Overall exact-match accuracy can also be improved by emphasizing column prediction accuracy, on which it significantly depends. For this purpose, we extract query fragments from previous queries and compute the columns’ occurrence and co-occurrence scores. These scores are processed as input features for an encoder–decoder-based text-to-SQL model, contributing, as a factor, the probability that columns and tables have already been used together in the query history. We experimented with our approach on Spider, the currently popular text-to-SQL dataset. Spider is a complex dataset containing multiple databases, with query–question pairs along with schema information. We compared our exact-match accuracy with a base model using its test and training data splits; our approach outperformed the base model’s accuracy, and accuracy was further boosted in experiments with the pretrained language model BERT.
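A sketch of the occurrence-score idea is given below; the exact scoring used in the paper may differ, so this counting-and-normalizing scheme should be read as an assumption. Column usage is mined from previous queries' fragments and normalized into scores that could be appended to the encoder input.

```python
# Mine column occurrence and co-occurrence scores from query history.
from collections import Counter
from itertools import combinations

history = [
    ["student.name", "student.age"],
    ["student.name", "enrollment.course_id"],
    ["student.name"],
]  # columns extracted from previous queries' SELECT/WHERE fragments

occ = Counter(c for cols in history for c in cols)
cooc = Counter(frozenset(p) for cols in history
               for p in combinations(sorted(cols), 2))

total = len(history)
occurrence_score = {c: n / total for c, n in occ.items()}
cooccurrence_score = {tuple(sorted(p)): n / total for p, n in cooc.items()}

print(occurrence_score["student.name"])                      # 1.0
print(cooccurrence_score[("student.age", "student.name")])   # ~0.33
```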

