Ecologically Valid Multimodal Data Collection

Author(s):
Hagen Fritz
Zoltan Nagy

Author(s):
Laurence Devillers
Sophie Rosset
Guillaume Dubuisson Duplessis
Mohamed A. Sehili
Lucile Bechade
et al.

2020
Author(s):
Somtirtha Roy
Tijana Radivojevic
Mark Forrer
Jose Manuel Marti
Vamshi Jonnalagadda
et al.

Biology has changed radically in the past two decades, growing from a purely descriptive science into a design science as well. The availability of tools that enable the precise modification of cells, together with the ability to collect large amounts of multimodal data, opens the possibility of sophisticated bioengineering to produce fuels, specialty and commodity chemicals, materials, and other renewable bioproducts. However, despite new tools and exponentially increasing data volumes, synthetic biology cannot yet fulfill its true potential because of our inability to predict the behavior of biological systems. Here, we present a set of tools that, combined, provide the ability to store, visualize, and leverage these data to predict the outcome of bioengineering efforts.
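As a concrete, purely hypothetical illustration of leveraging stored multimodal data to predict a bioengineering outcome, the sketch below fits a random forest that maps omics features of engineered strains to a measured product titer; the file name, feature layout, and target column are invented for illustration and are not the authors' tools or data.

```python
# Hypothetical sketch, not the authors' tools: predict a bioengineering outcome
# (product titer) from stored multimodal strain measurements.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Assumed layout: one row per engineered strain, omics features plus a measured titer.
data = pd.read_csv("strain_measurements.csv")
X = data.drop(columns=["strain_id", "titer_g_per_L"])
y = data["titer_g_per_L"]

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

# Once validated, such a model can rank candidate designs before they are built,
# which is one sense in which data can "predict the outcome of bioengineering efforts".
model.fit(X, y)
```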


2021
Vol 13 (16)
pp. 3191
Author(s):
Haitham Ezzy
Motti Charter
Antonello Bonfante
Anna Brook

Small mammals, and particularly rodents, are common inhabitants of farmlands, where they play key roles in the ecosystem; when overabundant, however, they can be major pests that reduce crop production and farmers’ incomes, with tangible effects on the achievement of United Nations Sustainable Development Goal 2 (SDG2, Zero Hunger). Farmers currently lack a standardized, accurate method for detecting the presence, abundance, and locations of rodents in their fields, and hence lack environmentally efficient rodent-control methods that promote sustainable agriculture and reduce the environmental impacts of cultivation. New developments in unmanned aerial system (UAS) platforms and sensor technology enable cost-effective, simultaneous multimodal data collection at very high spatial resolutions in environmental and agricultural contexts. Object detection from remote-sensing images has been an active research topic over the last decade, and with recent increases in computational resources and data availability, deep learning-based object detection methods are beginning to play an important role in advancing commercial and scientific remote-sensing applications. However, the performance of current detectors on UAS-based datasets, including multimodal spatial and physical datasets, remains limited for small objects; in particular, quickly detecting small objects in a large observed scene (at field scale) is still an open question. In this paper, we compare the efficiency of one- and two-stage detector models applied to a single UAS-based image and to a processed UAS-based orthophoto product (generated with the Pix4D mapper photogrammetric program) for detecting rodent burrows, in agricultural and environmental applications that support farmer activities toward SDG2. Our results indicate that using multimodal data from low-cost UASs within a self-training YOLOv3 model can provide relatively accurate and robust detection of small objects (mAP of 0.86 and an F1-score of 93.39%) and can deliver valuable, spatially precise insights for field management, reducing the environmental costs of crop production in the direction of precision agriculture.
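To make the field-scale small-object detection step concrete, here is a minimal Python sketch (not the authors' code) that tiles a large orthophoto and runs a pretrained YOLOv3 network on each tile with OpenCV's DNN module; the model files, burrow class, tile size, and thresholds are assumptions for illustration.

```python
# Illustrative sketch: tile a large UAS orthophoto and run a YOLOv3 detector on
# each tile with OpenCV's DNN module. Model files and thresholds are placeholders.
import cv2
import numpy as np

CFG, WEIGHTS = "yolov3-burrows.cfg", "yolov3-burrows.weights"  # assumed custom model files
TILE, CONF_THR, NMS_THR = 832, 0.25, 0.45

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
layer_names = net.getUnconnectedOutLayersNames()

def detect_tile(tile_bgr):
    """Run YOLOv3 on one tile and return (box, score) pairs in tile coordinates."""
    blob = cv2.dnn.blobFromImage(tile_bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)
    h, w = tile_bgr.shape[:2]
    boxes, scores = [], []
    for out in outputs:
        for det in out:
            conf = float(det[5:].max())
            if conf < CONF_THR:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)
    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THR, NMS_THR)
    return [(boxes[i], scores[i]) for i in np.array(keep).flatten()]

ortho = cv2.imread("orthophoto.tif")  # assumed orthophoto export
detections = []
for y in range(0, ortho.shape[0], TILE):
    for x in range(0, ortho.shape[1], TILE):
        tile = ortho[y:y + TILE, x:x + TILE]
        if tile.size == 0:
            continue
        for (bx, by, bw, bh), s in detect_tile(tile):
            detections.append((x + bx, y + by, bw, bh, s))  # map back to orthophoto coords
print(f"{len(detections)} candidate burrow detections")
```

Tiling keeps each burrow at a workable pixel size for the detector, which is one common way to address the small-object problem described above.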


Sensors
2022
Vol 22 (2)
pp. 568
Author(s):
Bertrand Schneider
Javaria Hassan
Gahyun Sung

While the majority of social scientists still rely on traditional research instruments (e.g., surveys, self-reports, qualitative observations), multimodal sensing is an emerging methodology for capturing human behaviors. Sensing technology has the potential to complement and enrich traditional measures by providing high-frequency data on people’s behavior, cognition, and affect. However, there is currently no easy-to-use toolkit for recording multimodal data streams; existing methodologies rely on physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit. Implemented as a website, the toolkit provides easy access to multimodal data collection algorithms for a variety of modalities: users’ attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), gestures (from hand motion), emotions (from facial expressions and speech), and outputs of lower-level computer vision algorithms (e.g., fiducial/color tracking). The toolkit runs from any browser and requires neither dedicated hardware nor programming experience. We compare it with traditional methods and describe a case study in which aspiring educational researchers used the EZ-MMLA toolkit in a classroom context. We conclude by discussing future work and other applications of the toolkit, as well as potential limitations and implications.
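For contrast with the browser-based approach, the sketch below shows the kind of custom sensor-access script that a toolkit like EZ-MMLA is meant to make unnecessary: it captures webcam frames and logs body-posture (skeletal) landmarks to a CSV file using MediaPipe. The library choice, file name, and column layout are assumptions for illustration and are not part of the EZ-MMLA toolkit.

```python
# Illustrative sketch (not the EZ-MMLA toolkit): custom sensor-access code that
# captures webcam frames and logs skeletal landmarks with MediaPipe Pose.
import csv
import time
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture(0)  # default webcam

with open("pose_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "landmark_id", "x", "y", "z", "visibility"])
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                t = time.time()
                for i, lm in enumerate(results.pose_landmarks.landmark):
                    writer.writerow([t, i, lm.x, lm.y, lm.z, lm.visibility])
    finally:
        cap.release()
        pose.close()
```

Even this short pipeline assumes a configured Python environment, installed libraries, and camera-access code, which illustrates why a browser-based toolkit lowers the barrier for researchers without programming experience.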


2002
Vol 34 (2)
pp. 227-230
Author(s):
Peter J. Molfese
Terri L. Bonebright
Theresa M. Herman
Catherine A. Roe
