Application of Artificial Intelligence in Food Industry—a Guideline

Author(s):  
Nidhi Rajesh Mavani ◽  
Jarinah Mohd Ali ◽  
Suhaili Othman ◽  
M. A. Hussain ◽  
Haslaniza Hashim ◽  
...  

Abstract: Artificial intelligence (AI) has become an established technology in the food industry over the past few decades, driven by rising food demand in line with the growing world population. The capability of such intelligent systems in tasks such as food quality determination, control tools, food classification, and prediction has intensified their demand in the food industry. Therefore, this paper reviews these diverse applications, comparing their advantages, limitations, and formulations as a guideline for selecting the most appropriate methods for future AI- and food industry–related developments. Furthermore, the integration of these systems with devices such as the electronic nose, electronic tongue, computer vision systems, and near-infrared (NIR) spectroscopy is also emphasized, all of which will benefit both industry players and consumers.

2019 ◽  
Vol 96 ◽  
pp. 303-310 ◽  
Author(s):  
Bruna Caroline Geronimo ◽  
Saulo Martiello Mastelini ◽  
Rafael Humberto Carvalho ◽  
Sylvio Barbon Júnior ◽  
Douglas Fernandes Barbin ◽  
...  

2019 ◽  
Vol 8 (1) ◽  
pp. 1070-1083
Author(s):  
Roberto Fernandes Ivo ◽  
Douglas de Araújo Rodrigues ◽  
José Ciro dos Santos ◽  
Francisco Nélio Costa Freitas ◽  
Luis Flaávio Gaspar Herculano ◽  
...  

Computers ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 6 ◽  
Author(s):  
Sajad Sabzi ◽  
Razieh Pourdarbani ◽  
Juan Ignacio Arribas

A computer vision system is proposed for automatic recognition and classification of five varieties of plant leaves under controlled laboratory imaging conditions: 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (Mt. Atlas mastic tree), and 5–Prunus armeniaca (apricot). A total of 516 tree leaf images were taken, and 285 features were computed from each object, including shape features, color features, texture features based on the gray-level co-occurrence matrix, histogram-based texture descriptors, and moment invariants. Seven discriminant features were selected and used as input to three classifiers: a hybrid artificial neural network–ant bee colony (ANN–ABC), a hybrid artificial neural network–biogeography-based optimization (ANN–BBO), and Fisher linear discriminant analysis (LDA). Mean correct classification rates (CCR) were 94.04%, 89.23%, and 93.99% for the hybrid ANN–ABC, hybrid ANN–BBO, and LDA classifiers, respectively. The best classifier's mean area under the curve (AUC), mean sensitivity, and mean specificity were computed for the five tree varieties under study: 1–Cydonia oblonga (quince) 0.991 (ANN–ABC), 95.89% (ANN–ABC), 95.91% (ANN–ABC); 2–Eucalyptus camaldulensis dehn (river red gum) 1.00 (LDA), 100% (LDA), 100% (LDA); 3–Malus pumila (apple) 0.996 (LDA), 96.63% (LDA), 94.99% (LDA); 4–Pistacia atlantica (Mt. Atlas mastic tree) 0.979 (LDA), 91.71% (LDA), 82.57% (LDA); and 5–Prunus armeniaca (apricot) 0.994 (LDA), 88.67% (LDA), 94.65% (LDA).
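The LDA branch of such a pipeline can be sketched as follows. This is a minimal from-scratch Fisher LDA on synthetic feature vectors, not the authors' code; the feature dimensionality, class count, and nearest-class-mean decision rule are illustrative assumptions.

```python
import numpy as np

def fit_lda(X, y, n_components=2):
    """Fisher LDA: find projections maximizing between- vs within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    # eigenvectors of Sw^-1 Sb give the discriminant directions
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    W = evecs.real[:, order[:n_components]]
    # class means in the projected space, for a nearest-mean decision rule
    means = {c: (X[y == c] @ W).mean(axis=0) for c in classes}
    return W, means

def predict_lda(X, W, means):
    """Assign each sample to the class with the nearest projected mean."""
    Z = X @ W
    classes = list(means)
    dists = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```

In practice the seven selected leaf features would replace the synthetic vectors, and a held-out test split would be used to report CCR, sensitivity, and specificity per class.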


Photonics ◽  
2019 ◽  
Vol 6 (3) ◽  
pp. 90 ◽  
Author(s):  
Bosworth ◽  
Russell ◽  
Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for real-time classification of brain states, which can provide programmers with useful information to develop interfaces that are more intuitive and beneficial for the user than is currently possible with today's human-input devices (e.g., mouse and keyboard).
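The abstract gives no implementation details, but a real-time, implicit BCI of the kind described typically buffers incoming fNIRS samples and classifies each window as it fills. The sketch below is an illustrative assumption, not the lab's actual method: the window length, threshold, and "high"/"low" workload labels are placeholders for a trained classifier.

```python
import numpy as np
from collections import deque

class SlidingWindowClassifier:
    """Toy real-time pipeline: buffer fNIRS samples, extract a window
    feature, and label cognitive workload by threshold. The window
    length and threshold here are illustrative, not calibrated values."""

    def __init__(self, window=30, threshold=0.5):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def push(self, sample):
        """Feed one oxygenated-hemoglobin change sample (arbitrary units);
        returns a label once the window is full, else None."""
        self.buf.append(sample)
        if len(self.buf) < self.buf.maxlen:
            return None  # not enough data yet
        feature = np.mean(self.buf)  # mean activation over the window
        return "high" if feature > self.threshold else "low"
```

A real system would replace the threshold with a classifier trained per user and per task, since fNIRS baselines vary between individuals and sessions.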


Meat Science ◽  
2018 ◽  
Vol 140 ◽  
pp. 72-77 ◽  
Author(s):  
Xin Sun ◽  
Jennifer Young ◽  
Jeng-Hung Liu ◽  
David Newman

2020 ◽  
Vol 45 (3) ◽  
pp. 379 ◽  
Author(s):  
Vathsala Patil ◽  
BM Zeeshan Hameed ◽  
DasharathrajK Shetty ◽  
Nithesh Naik ◽  
Nikhil Nagaraj ◽  
...  

It is well known that bad weather, e.g., haze, rain, or snow, severely affects the quality of captured images or videos. Raindrops adhering to a glass window or camera lens can also severely reduce the visibility of the background scene and degrade image quality, which in turn degrades the performance of many image processing and computer vision algorithms. These algorithms are used in applications such as object detection, tracking, recognition, surveillance, and navigation. Rain removal from a video or a single image has been an active research topic over the past decade. Today, it continues to draw attention in outdoor vision systems (e.g., surveillance), where the ultimate goal is to produce a clear and clean image or video. The most critical task is to separate the rain component from the rest of the scene. For that purpose, we propose an efficient algorithm to remove rain from a color image.
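The abstract does not specify the algorithm, but one common family of single-image approaches decomposes each color channel into a low-frequency base layer and a high-frequency detail layer, then attenuates the detail layer where rain streaks concentrate. The sketch below follows that idea in pure NumPy; the filter size and attenuation factor are illustrative assumptions, not the authors' method.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple edge-padded box filter; serves as the low-pass base layer."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def attenuate_rain(channel, k=5, alpha=0.3):
    """Decompose one channel into base + detail; rain streaks live mostly
    in the high-frequency detail layer, so scale that layer down by alpha."""
    base = box_blur(channel.astype(float), k)
    detail = channel - base
    return np.clip(base + alpha * detail, 0, 255)
```

For a color image the function would be applied per channel; published methods refine this with streak-direction priors or learned dictionaries so that genuine scene texture is not blurred along with the rain.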

