Search results

Number of results: 3

Abstract

Workpiece surface roughness measurement based on traditional machine vision faces several problems, such as complex index design, poor robustness to lighting conditions, and slow detection speed, which make it unsuitable for industrial production. To address these problems, this paper proposes an improved YOLOv5 method for milling surface roughness detection. The method extracts image features automatically, is more robust to lighting conditions, and detects faster. Introducing Coordinate Attention (CA) effectively improves the detection accuracy of the model for workpieces located at different positions. The experimental results demonstrate that the improved model achieves accurate surface roughness detection for moving workpieces under light intensities ranging from 592 to 1060 lux. The average precision of the model on the test set reaches 97.3%, and the detection speed reaches 36 frames per second.
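
For readers unfamiliar with the attention mechanism named in the abstract, a minimal PyTorch sketch of a Coordinate Attention (CA) block, as introduced by Hou et al. (2021), is given below. The reduction ratio and Hardswish activation are assumptions; the abstract gives no implementation details, and where the authors insert the block into YOLOv5 is not stated.

import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of a Coordinate Attention block (assumed reduction ratio and activation)."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                       # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w   # position-aware channel reweighting

The block encodes spatial position along the height and width axes separately, which is what allows the attention weights to depend on where the workpiece appears in the image.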

Authors and Affiliations

Xiao Lv 1, Huaian Yi 1, Runji Fang 1, Shuhua Ai 1, Enhui Lu 2

  1. School of Mechanical and Control Engineering, Guilin University of Technology, Guilin, 541006, People’s Republic of China
  2. School of Mechanical Engineering, Yangzhou University, Yangzhou, 225009, People’s Republic of China

Abstract

The development of surveillance video vehicle detection technology in modern intelligent transportation systems is closely tied to the operation and safety of highways and urban road networks. However, current object detection networks have complex structures and require large numbers of parameters and calculations. This paper therefore proposes a lightweight network based on YOLOv5 that can be deployed on video surveillance equipment with limited computing power while ensuring real-time, accurate vehicle detection. A modified MobileNetV2 is used as the backbone feature extraction network of YOLOv5, and depthwise separable convolution (DSC) replaces the standard convolution in the bottleneck layer structure. The lightweight YOLOv5 is evaluated on the UA-DETRAC and BDD100k datasets. Experimental results show that this method reduces the number of parameters by 95% compared with the original YOLOv5s and achieves a good tradeoff between precision and speed.
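
A minimal PyTorch sketch of the depthwise separable convolution (DSC) described above is given below. The 3x3 kernel, BatchNorm, and SiLU activation are assumptions rather than details taken from the paper; the sketch only illustrates why the substitution saves parameters.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Sketch of a DSC block: per-channel spatial filtering, then 1x1 channel mixing."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        pad = kernel_size // 2
        # Depthwise: one filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride, pad,
                                   groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

For a 256-to-256-channel 3x3 layer, a standard convolution needs 256 * 256 * 3 * 3 ≈ 590k weights, while the DSC needs 256 * 3 * 3 + 256 * 256 ≈ 68k, which is the kind of reduction that makes the 95% overall parameter saving plausible.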

Authors and Affiliations

Yurui Wang 1, Guoping Yang 1, Jingbo Guo 1

  1. Shanghai University of Engineering Science, School of Mechanical and Automotive Engineering, Shanghai, China

Abstract

The article presents research on animal detection in thermal images using the YOLOv5 architecture. The goal of the study was to obtain a model that performs well at detecting animals in this type of image and to examine how changes in hyperparameters affect the learning curves and final results. In practice, this meant testing different values of learning rate and momentum, as well as different optimizer types, and observing their effect on the model’s learning performance. Two methods of tuning hyperparameters were used in the study: grid search and evolutionary algorithms. The model was trained and tested on an in-house dataset containing images of deer and wild boars. The trained model achieved a highest mean average precision (mAP) of 83%. These results are promising and indicate that the YOLO model can be used for automatic animal detection in applications such as wildlife monitoring, environmental protection, or security systems.
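
A minimal sketch of the grid-search procedure mentioned above is shown below. The value grids and the train_and_eval helper are hypothetical placeholders; the abstract does not list the actual search space or the authors' training code.

from itertools import product

# Hypothetical search grids; the abstract does not give the values actually tested.
learning_rates = [0.001, 0.005, 0.01]
momenta = [0.85, 0.90, 0.95]
optimizers = ["SGD", "Adam"]

def train_and_eval(lr: float, momentum: float, optimizer: str) -> float:
    """Placeholder for a full YOLOv5 training run; should return validation mAP."""
    return 0.0  # replace with actual training and evaluation

best = None
for lr, momentum, optimizer in product(learning_rates, momenta, optimizers):
    score = train_and_eval(lr, momentum, optimizer)   # mAP on the validation split
    if best is None or score > best[0]:
        best = (score, lr, momentum, optimizer)

print("best mAP %.3f with lr=%g, momentum=%g, optimizer=%s" % best)

Grid search exhaustively evaluates every combination, which is tractable for the three hyperparameters named in the abstract; the evolutionary approach instead mutates a promising configuration over successive training runs.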

Authors and Affiliations

Łukasz Popek 1,3, Rafał Perz 2,3, Grzegorz Galiński 1, Artur Abratański 2,3

  1. Warsaw University of Technology, Faculty of Electronics and Information Technology
  2. Warsaw University of Technology, Faculty of Power and Aeronautical Engineering
  3. Rafał Perz Research Network (Sieć badawcza Rafał Perz), Poland
