Image-Based Insect Counting Embedded in E-Traps That Learn without Manual Image Annotation and Self-Dispose Captured Insects / I. Saradopoulos, I. Potamitis, A.I. Konstantaras, P. Eliopoulos, S. Ntalampiras, I. Rigakis. - In: INFORMATION. - ISSN 2078-2489. - 14:5(2023), pp. 267.1-267.21. [10.3390/info14050267]

Image-Based Insect Counting Embedded in E-Traps That Learn without Manual Image Annotation and Self-Dispose Captured Insects

S. Ntalampiras (penultimate author)
2023

Abstract

This study describes the development of an image-based insect trap diverging from the plug-in camera insect trap paradigm in that (a) it does not require manual annotation of images to learn how to count targeted pests, and (b) it self-disposes of the captured insects and is therefore suitable for long-term deployment. The device consists of an imaging sensor integrated with Raspberry Pi microcontroller units with embedded deep learning algorithms that count agricultural pests inside a pheromone-based funnel trap. The device also receives commands from the server, which configures its operation, while an embedded servomotor can automatically rotate the detached bottom of the bucket to dispose of dehydrated insects as they begin to pile up. It thereby completely overcomes a major limitation of camera-based insect traps: the inevitable overlap and occlusion caused by the decay and layering of insects during long-term operation, thus extending the autonomous operational capability. We study cases that are underrepresented in the literature, such as counting in situations of congestion and significant debris, using crowd counting algorithms encountered in human surveillance. Finally, we perform a comparative analysis of the results from different deep learning approaches (YOLOv7/8, crowd counting, deep learning regression). Interestingly, there is no single, clear-cut counting approach that is optimal in all situations involving small and large insects with overlap. Weighing the pros and cons, we suggest that YOLOv7/8 provides the best embedded solution in general. We open-source the code and a large database of Lepidopteran plant pests.
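To make the counting-and-disposal loop concrete, the following minimal Python sketch shows how such a device might count insects in a single trap snapshot with a YOLOv8 detector (via the ultralytics package) and trigger the servo-driven bucket bottom once the count passes a pile-up threshold. The weights file, GPIO pin, confidence value, disposal threshold, and servo duty cycles below are illustrative assumptions, not the paper's configuration; consult the authors' open-sourced code for the deployed implementation.

import time

from ultralytics import YOLO  # pip install ultralytics
import RPi.GPIO as GPIO       # available on Raspberry Pi OS

SERVO_PIN = 18          # hypothetical BCM pin driving the bucket servo
DISPOSE_THRESHOLD = 40  # hypothetical pile-up limit before self-disposal

model = YOLO("insect_trap_yolov8n.pt")  # placeholder for trained weights

def count_insects(image_path):
    """Run detection on one trap snapshot and return the insect count."""
    results = model(image_path, conf=0.25)  # one Results object per image
    return len(results[0].boxes)            # each box is one detected insect

def dispose_insects():
    """Rotate the detached bucket bottom open and closed with a hobby servo."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SERVO_PIN, GPIO.OUT)
    pwm = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz is standard for hobby servos
    pwm.start(2.5)                 # ~0 degrees: bucket bottom closed
    time.sleep(1)
    pwm.ChangeDutyCycle(12.5)      # ~180 degrees: open, insects fall out
    time.sleep(2)
    pwm.ChangeDutyCycle(2.5)       # back to closed
    time.sleep(1)
    pwm.stop()
    GPIO.cleanup()

if __name__ == "__main__":
    count = count_insects("trap_snapshot.jpg")
    print(f"Insects counted: {count}")
    if count >= DISPOSE_THRESHOLD:
        dispose_insects()

The same snapshot could instead be fed to a crowd counting or regression model with the threshold logic left unchanged, which is why the three approaches compared in the paper can run on identical hardware.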
Keywords: edge computing; e-traps; insect monitoring
Settore INF/01 - Informatica (Computer Science)
Article (author)
Files in this record:

information-14-00267.pdf (open access)
  • Description: Article
  • Type: Publisher's version/PDF
  • Size: 2.2 MB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/967810
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: 0