vision algorithm
Recently Published Documents


TOTAL DOCUMENTS: 253 (five years: 97)
H-INDEX: 16 (five years: 5)

Author(s): Dili Shen, Shengfei Zhang, Wuyi Ming, Wenbin He, Guojun Zhang, et al.

Chemosensors, 2022, Vol 10 (1), pp. 25
Author(s): Patrícia S. Peixoto, Pedro H. Carvalho, Ana Machado, Luisa Barreiros, Adriano A. Bordalo, et al.

Antibiotic resistance is a major health concern of the 21st century. The misuse of antibiotics over the years has led to their increasing presence in the environment, particularly in water resources, which can exacerbate the transmission of resistance genes and facilitate the emergence of resistant microorganisms. The objective of the present work is to develop a chemosensor for the screening of sulfonamides in environmental waters, targeting sulfamethoxazole as the model analyte. The methodology is based on the retention of sulfamethoxazole in disks containing polystyrene divinylbenzene sulfonated sorbent particles and its reaction with p-dimethylaminocinnamaldehyde, followed by colorimetric detection using a computer-vision algorithm. Several color spaces (RGB, HSV and CIELAB) were evaluated, with the coordinate a*, from the CIELAB color space, providing the highest sensitivity. Moreover, to avoid possible errors due to variations in illumination, a color palette is included in the picture of the analytical disk, and a correction using the a* value from one of the color patches is proposed. The methodology presented recoveries of 82–101% at 0.1 µg and 0.5 µg of sulfamethoxazole (25 mL), with a detection limit of 0.08 µg and a quantification limit of 0.26 µg. As a proof of concept, in-field analysis was successfully implemented.


2022, Vol 2161 (1), pp. 012059
Author(s): Rohan Nigam, Meghana Rao, Nihal Rian Dias, Arjun Hariharan, Amit Choraria, et al.

Abstract Agriculture is the primary source of livelihood for a large section of society in India, and the ever-increasing demand for high-quality, high-quantity yield calls for highly efficient and effective farming methods. Grow-IoT is a smart analytics app for comprehensive plant health analysis and a remote farm monitoring platform that keeps the farmer aware of all the critical factors affecting farm status. Cameras installed on the field capture images of the plants to determine plant health from phenotypic characteristics. Visual feedback is provided by a computer vision algorithm that uses image segmentation to classify plant health into three distinct categories. Sensors installed on the field relay crucial information to the cloud for real-time, optimized farm status management. All the relayed data can then be viewed in the user-friendly Grow-IoT app to remotely monitor integral aspects of the farm and take the required actions under critical conditions. The mobile platform, combined with computer vision for plant health analysis and smart sensor modules, thus gives the farmer a technical perspective on the farm. The simple design of the application keeps the user's cognitive load low. Overall, the smart module is a significant technical step toward efficient production across all seasons of the year.
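The paper does not publish its segmentation model; as a hedged illustration of the idea (segmenting plant pixels and mapping the result to three health categories), here is a toy colour-based version. The green-dominance rule and the two thresholds are invented for the sketch; a real system would use a trained segmentation network on phenotypic features.

```python
import numpy as np

def classify_plant_health(rgb, healthy=0.60, stressed=0.30):
    """Classify plant health from the fraction of 'green' pixels.

    rgb: H x W x 3 float array in [0, 1]. A pixel counts as vegetation
    when its G channel dominates both R and B. The two cut-offs mapping
    the green fraction to three categories are purely illustrative.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green_mask = (g > r) & (g > b) & (g > 0.2)   # crude vegetation segmentation
    green_fraction = green_mask.mean()
    if green_fraction >= healthy:
        label = "healthy"
    elif green_fraction >= stressed:
        label = "moderate"
    else:
        label = "unhealthy"
    return label, green_fraction
```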


2021, Vol 2021, pp. 1-8
Author(s): Kangying Wang, Minghui Wang

Rain causes occlusion and blurring of the background and target objects, degrading the visual quality of an image and any subsequent image analysis. To address the insufficient rain removal of current deraining algorithms and to improve the accuracy of computer vision algorithms applied after rain removal, this paper proposes a multistage framework for removing rain streaks from single images, based on progressive restoration combined with a recurrent neural network and feature complementarity techniques. First, an encoder-decoder subnetwork is adopted to learn multiscale information and extract richer rain features. Second, the original-resolution image restored by the decoder is used to preserve refined image details. Finally, the recurrent neural network uses the effective information of the previous stage to guide the rain removal of the next stage. Experimental results show that the multistage feature complementarity network performs well on both synthetic and real-world rainy data sets: compared with popular single-image deraining methods, it removes rain more completely, preserves more background details, and achieves better visual effects.
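The control flow of progressive, stage-to-stage restoration can be sketched without the learned components. In this toy version a fixed box filter stands in for the encoder-decoder subnetwork, each stage estimates a high-frequency residual (standing in for rain streaks) and suppresses part of it, and the running estimate is carried into the next stage in place of the recurrent guidance. This is a structural sketch only, not the paper's network.

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box filter with edge padding: a fixed-weight stand-in for
    the learned encoder-decoder that extracts the coarse background."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def multistage_derain(rainy, stages=3):
    """Progressive restoration: each stage estimates a residual
    (high-frequency 'rain') and partially removes it, and the updated
    estimate is passed forward to guide the next stage."""
    estimate = rainy.copy()
    for _ in range(stages):
        smoothed = box_blur(estimate)          # coarse background estimate
        residual = estimate - smoothed         # high-frequency component
        estimate = estimate - 0.5 * residual   # progressively suppress it
    return estimate
```

Each pass blends the current estimate toward its smoothed version, so high-frequency content shrinks stage by stage; in the actual method, trained convolutions decide *which* high-frequency content is rain, which is what preserves background detail.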


Author(s): Florin-Bogdan MARIN, Mihaela MARIN

The objective of this experimental research is to identify solutions for detecting drones using a computer vision algorithm. Nowadays, the danger posed by drones operating near airports and other important sites is of utmost concern. The proposed technique detects drones in pictures with a good rate of detection, using information about the movement patterns of drones.
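The abstract only says that movement information is used; a common starting point for movement-based detection is frame differencing between consecutive video frames. The sketch below is a generic illustration of that idea (threshold and function name are assumptions), not the authors' algorithm.

```python
import numpy as np

def detect_moving_object(prev_frame, frame, thresh=0.1):
    """Frame differencing: flag pixels that changed between two frames
    and return the centroid (row, col) of the changed region, or None
    if nothing moved. A movement-pattern tracker would then follow
    this centroid over time to distinguish drones from other motion."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()
```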


2021, Vol 2142 (1), pp. 012022
Author(s): K A Timakov

Abstract In the last few years, machine learning and machine vision technologies have gained more and more popularity; this industry occupies one of the leading positions in the field of information technology. The paper is devoted to the development of a machine vision algorithm, based on new generations of FPGAs, for recognizing handwritten Cyrillic characters in images and, in particular, in video streams. The article addresses the use of an FPGA as an image segmentation accelerator, organizing work with the video stream, choosing the most suitable FPGA platform, creating training samples of handwritten characters, and working with the convolutional neural network AlexNet.


2021, Vol 2107 (1), pp. 012037
Author(s): K S Tan, M N Ayob, H B Hassrizal, A H Ismail, M S Muhamad Azmi, et al.

Abstract A vision-aided pick-and-place cartesian robot combines a machine vision system with a robotic system; the two communicate continuously to perform object sorting. In this project, a machine vision algorithm for object sorting is proposed to solve sorting failures caused by imperfect image edges and differing colours. An image is acquired by a camera and then calibrated. Pre-processing is performed with the following methods: HSI colour space transformation, Gaussian filtering, Otsu's method for image binarization, and Canny edge detection. LabVIEW edge-based geometric matching is selected for template matching. After the vision application analyses the image, an electrical signal is sent to the robotic arm for object sorting if the acquired image matches the template image. The proposed machine vision algorithm yields accurate template matching scores from 800 to 1000 under different disturbances and conditions, and it exposes more customizable parameters for each method while improving the accuracy of template matching.
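Of the pre-processing steps listed, Otsu's binarization is the one with a compact closed form: pick the grey-level threshold that maximises the between-class variance of the histogram. The authors used LabVIEW; this numpy version is an independent sketch of the same standard algorithm.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: return the threshold (0..255) that maximises the
    between-class variance of a greyscale image's histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 (background) weight
    mu = np.cumsum(prob * np.arange(256))      # class-0 cumulative mean mass
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0             # undefined where a class is empty
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form the foreground mask that the later Canny and template-matching stages would consume.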


PLoS ONE, 2021, Vol 16 (10), pp. e0258672
Author(s): Gabriel Carreira Lencioni, Rafael Vieira de Sousa, Edson José de Souza Sardinha, Rodrigo Romero Corrêa, Adroaldo José Zanella

The aim of this study was to develop and evaluate a machine vision algorithm to assess the pain level in horses, using an automatic computational classifier based on the Horse Grimace Scale (HGS) and trained by machine learning methods. Use of the Horse Grimace Scale depends on a human observer, who is usually not available to evaluate the animal for long periods and must also be well trained to apply the evaluation system correctly. In addition, even with adequate training, the presence of an unknown person near an animal in pain can cause behavioral changes, making the evaluation more complex. As a possible solution, an automatic video-imaging system could monitor pain responses in horses more accurately and in real time, allowing earlier diagnosis and more efficient treatment of the affected animals. This study is based on the assessment of facial expressions of 7 horses that underwent castration, collected through a video system positioned on top of the feeder station, capturing images at 4 distinct timepoints daily for two days before and four days after surgical castration. A labeling process was applied to build a pain facial image database, and machine learning methods were used to train the computational pain classifier. The machine vision algorithm was developed by training a Convolutional Neural Network (CNN) that achieved an overall accuracy of 75.8% when classifying pain on three levels: not present, moderately present, and obviously present. When classifying between two categories (pain not present and pain present), the overall accuracy reached 88.3%. Although some improvements are needed before the system can be used in a daily routine, the model appears promising and capable of automatically measuring pain from facial expressions in images of horses collected from video.
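The two accuracy figures (75.8% on three levels, 88.3% on two) follow the usual pattern that merging the two "pain present" levels removes the confusions between them. The sketch below shows that bookkeeping on a confusion matrix; the matrix values are invented for illustration and are not the study's data.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = trace / total of a confusion matrix
    (rows: true class, columns: predicted class)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def collapse_to_binary(cm):
    """Merge 'moderately present' and 'obviously present' into one
    'pain present' class, keeping 'not present' separate."""
    cm = np.asarray(cm, dtype=float)
    out = np.zeros((2, 2))
    groups = [0, 1, 1]  # class 0 -> absent; classes 1, 2 -> present
    for i in range(3):
        for j in range(3):
            out[groups[i], groups[j]] += cm[i, j]
    return out
```

Any mass on the off-diagonal cells between the two merged classes moves onto the binary diagonal, so binary accuracy can only be greater than or equal to the three-class accuracy, mirroring the 88.3% versus 75.8% reported.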

