Fire Detection Algorithm Based on Improved YOLOv7-tiny
Wang Yuanzhi, Wu Di
Abstract:
To address the shortcomings of existing object detection algorithms in fire scenarios, namely missed detection of small flame targets, false detection of smoke, and low detection accuracy, this paper proposes a fire detection algorithm based on an improved YOLOv7-tiny. First, the gather-and-distribute mechanism (GD) is introduced into the neck network to improve the feature fusion module, enhancing multi-scale feature fusion. Second, the Wise Intersection over Union loss (WIoU Loss) is adopted as the bounding-box regression loss to mitigate the imbalance between positive and negative samples, thereby strengthening the model's ability to detect small targets. Finally, the coordinate attention (CA) mechanism is embedded in the backbone network so that the network can extract richer attention information from the feature maps, helping the model localize flame and smoke features more precisely. Experimental results show that, compared with the original YOLOv7-tiny, the improved algorithm raises precision by 3.6%, recall by 2.0%, and mean average precision by 2.7%, supporting better fire detection in complex scenes and for small flame targets.
Keywords: YOLOv7-tiny algorithm; Wise-IoU loss; attention mechanism; fire detection
Foundation: National Key Research and Development Program of China (SQ2020YFF0402315)
Authors: Wang Yuanzhi, Wu Di
DOI: 10.13757/j.cnki.cn34-1328/n.2025.03.010
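As a worked illustration of the WIoU bounding-box regression loss mentioned in the abstract, below is a minimal sketch of WIoU v1 as formulated by Tong et al.: the plain IoU loss is scaled by a distance-based attention factor computed from the centers of the two boxes and the smallest box enclosing both. The function names and the (x1, y1, x2, y2) box convention are illustrative assumptions, not the authors' code, and the sketch omits the batched, gradient-detached form used in training.

```python
import math

def iou(box1, box2):
    # Boxes as (x1, y1, x2, y2); returns intersection over union.
    xa, ya = max(box1[0], box2[0]), max(box1[1], box2[1])
    xb, yb = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (a1 + a2 - inter)

def wiou_v1(pred, target, eps=1e-9):
    # WIoU v1: L = R_WIoU * L_IoU, where R_WIoU amplifies the loss for
    # boxes whose centers are far apart relative to the enclosing box.
    l_iou = 1.0 - iou(pred, target)
    cxp, cyp = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cxt, cyt = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    # Width/height of the smallest enclosing box (detached from the
    # gradient in the original paper; plain values suffice here).
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r = math.exp(((cxp - cxt) ** 2 + (cyp - cyt) ** 2) / (wg ** 2 + hg ** 2 + eps))
    return r * l_iou
```

For a perfectly matched prediction the loss is 0; for disjoint boxes the attention factor pushes the loss above the plain IoU loss of 1, which is what gives small, poorly localized targets a stronger regression signal.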