Note: Our project consists of two sub-projects: (1) Benchmarking Multi-Scene Fire and Smoke Detection and (2) Fire and Smoke Detection with Burning Intensity Representation. Our FSD code will be released after Dec. 30.


1. Benchmarking Multi-Scene Fire and Smoke Detection

¹School of Software Technology, Zhejiang University, ²University of Trento,
³Hefei University of Technology, ⁴Suzhou City University
Corresponding Author

Accepted to PRCV 2024

Note

Dear Visitor,
  Hello! If you would like to use our Unified FSD Datasets, please click on the Google Drive link and provide your institution (e.g., school, company, or "None" if you have no affiliation) along with a brief description of your intended use of the datasets. After that, please request access to the shared folder. Note: Applications without an institution and a description of the intended use will be rejected.
  Additionally, our code, including the data processing and evaluation methods, is available on our GitHub. Thank you!

Best regards,

Xiaoyi Han

Abstract

The current irregularities in existing public Fire and Smoke Detection (FSD) datasets have become a bottleneck in the advancement of FSD technology. Upon in-depth analysis, we identify the core issue as the lack of standardized dataset construction, uniform evaluation systems, and clear performance benchmarks. To address this issue and drive innovation in FSD technology, we systematically gather diverse resources from public sources to create a more comprehensive and refined FSD benchmark. Additionally, recognizing the inadequate coverage of existing dataset scenes, we strategically expand scenes, relabel, and standardize existing public FSD datasets to ensure accuracy and consistency. We aim to establish a standardized, realistic, unified, and efficient FSD research platform that mirrors real-life scenes closely. Through our efforts, we aim to provide robust support for the breakthrough and development of FSD technology.

MS-FSDB

Overview of Our MS-FSDB and Other FSD Datasets

First, we propose a new Multi-Scene Fire and Smoke Detection Benchmark (MS-FSDB) comprising 12,586 images that depict 2,731 scenes, as illustrated in the images above. Most images in the benchmark exceed 600 pixels in height or width. Unlike previous public Fire and Smoke Detection (FSD) datasets, our benchmark covers not only flame detection but also smoke detection, and it captures complex scenes featuring occlusion, multiple targets, and varied viewpoints.

To access MS-FSDB, please click here.
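
As a quick sanity check after downloading, the image-size property mentioned above can be verified with a few lines of Python. The directory layout and file pattern below are assumptions for illustration only, not the benchmark's actual structure.

    from pathlib import Path
    from PIL import Image

    # Hypothetical location of the extracted benchmark; adjust to your local path.
    ROOT = Path("MS-FSDB/images")

    total = large = 0
    for img_path in ROOT.rglob("*.jpg"):
        with Image.open(img_path) as img:
            w, h = img.size          # PIL reports (width, height)
        total += 1
        if max(w, h) > 600:          # the property claimed for most images
            large += 1

    print(f"{large}/{total} images exceed 600 px in height or width")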

We then compare our benchmark with the most prevalent and easily accessible FSD datasets, applying to them the same secondary processing as to our own data, including labeling and image selection, to facilitate research in the field of fire detection. These datasets are Fire-Smoke-Dataset [1], Furg-Fire-Dataset [2], VisiFire [3], FIRESENSE [4], and BoWFire [5].
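
The exact labeling and selection pipeline is part of our released code; purely as an illustration, the sketch below shows one common way to merge per-image bounding boxes into a single COCO-style annotation file. The record layout, category names, and ids here are assumptions, not the format actually used by the benchmark.

    import json
    from pathlib import Path

    # Hypothetical unified label map; the released benchmark may use other names/ids.
    CATEGORIES = [{"id": 1, "name": "fire"}, {"id": 2, "name": "smoke"}]

    def to_coco(records, out_file="annotations.json"):
        # records: [{"file": "img_0001.jpg", "width": 800, "height": 600,
        #            "boxes": [("fire", x, y, w, h), ...]}, ...]
        name_to_id = {c["name"]: c["id"] for c in CATEGORIES}
        images, annotations = [], []
        for img_id, rec in enumerate(records, start=1):
            images.append({"id": img_id, "file_name": rec["file"],
                           "width": rec["width"], "height": rec["height"]})
            for label, x, y, w, h in rec["boxes"]:
                annotations.append({"id": len(annotations) + 1, "image_id": img_id,
                                    "category_id": name_to_id[label],
                                    "bbox": [x, y, w, h], "area": w * h,
                                    "iscrowd": 0})
        Path(out_file).write_text(json.dumps({"images": images,
                                              "annotations": annotations,
                                              "categories": CATEGORIES}))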

To access Public FSD Datasets, please click here.

To access Fire-Smoke-Dataset, please click here.
[1] DeepQuestAI, Fire-Smoke-Dataset (2021). URL: https://github.com/DeepQuestAI/Fire-Smoke-Dataset.

To access Furg-Fire-Dataset, please click here.

[2] V. Hüttner, C. R. Steffens, S. S. da Costa Botelho, First response fire combat: Deep learning based visible fire detection, in: 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), IEEE, 2017, pp. 1–6. doi:10.1109/SBR-LARS-R.2017.8215312.

To access VisiFire, please click here.

[3] B. U. Toreyin, A. E. Cetin, Online detection of fire in video, in: 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–5. doi:10.1109/CVPR.2007.383442.

To access FIRESENSE, please click here.

[4] K. Dimitropoulos, P. Barmpoutis, N. Grammalidis, Spatio-temporal flame modeling and dynamic texture analysis for automatic video-based fire detection, IEEE Transactions on Circuits and Systems for Video Technology 25 (2) (2015) 339–351. doi:10.1109/TCSVT.2014.2339592.

To access BoWFire, please click here.

[5] D. Y. T. Chino, L. P. S. Avalhais, J. F. Rodrigues, A. J. M. Traina, BoWFire: Detection of fire in still images by integrating pixel color and texture analysis, in: 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, 2015, pp. 95–102. doi:10.1109/SIBGRAPI.2015.19.

Benchmark Presentation and Experimental Results

BibTeX

@InProceedings{hanprcvfsd,
  author    = "Han, Xiaoyi and Pu, Nan and Feng, Zunlei and Bei, Yijun and Zhang, Qifei and Cheng, Lechao and Xue, Liang",
  editor    = "Lin, Zhouchen and Cheng, Ming-Ming and He, Ran and Ubul, Kurban and Silamu, Wushouer and Zha, Hongbin and Zhou, Jie and Liu, Cheng-Lin",
  title     = "Benchmarking Multi-Scene Fire and Smoke Detection",
  booktitle = "Pattern Recognition and Computer Vision",
  year      = "2025",
  publisher = "Springer Nature Singapore",
  address   = "Singapore",
  pages     = "203--218",
  abstract  = "The current irregularities in existing public Fire and Smoke Detection (FSD) datasets have become a bottleneck in the advancement of FSD technology. Upon in-depth analysis, we identify the core issue as the lack of standardized dataset construction, uniform evaluation systems, and clear performance benchmarks. To address this issue and drive innovation in FSD technology, we systematically gather diverse resources from public sources to create a more comprehensive and refined FSD benchmark. Additionally, recognizing the inadequate coverage of existing dataset scenes, we strategically expand scenes, relabel, and standardize existing public FSD datasets to ensure accuracy and consistency. We aim to establish a standardized, realistic, unified, and efficient FSD research platform that mirrors real-life scenes closely. Through our efforts, we aim to provide robust support for the breakthrough and development of FSD technology. The project is available at https://xiaoyihan6.github.io/FSD/.",
  isbn      = "978-981-97-8795-1"
}

********************************************************************************

2. Fire and Smoke Detection with Burning Intensity Representation

¹School of Software Technology, Zhejiang University, ²China Mobile (Suzhou) Software Technology Co., Ltd.,
³University of Trento, ⁴Hefei University of Technology
Corresponding Author

Accepted to ACM MM Asia 2024

Abstract

An effective Fire and Smoke Detection (FSD) and analysis system is of paramount importance due to the profound destructive potential of fire disasters. However, many existing FSD methods directly employ generic object detection techniques without accounting for the transparency of fire and smoke, which inevitably leads to imprecise localization of fire and smoke regions and consequently diminishes detection performance. To address this issue, a new Attentive Transparency Detection Head (ATDH) is proposed to improve the accuracy of transparent target detection while retaining the robust feature extraction and fusion capabilities of conventional detection algorithms. In addition, Burning Intensity (BI) is introduced as a pivotal feature for fire-related downstream risk assessment in traditional FSD methodologies. Extensive experiments on multiple FSD datasets showcase the effectiveness and versatility of the proposed FSD model.
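
The ATDH architecture and the Burning Intensity formulation are detailed in the paper; the sketch below is only a generic, PyTorch-style illustration of how a spatial attention branch can be attached to a detection head, under assumed channel and class counts. It is not the paper's actual ATDH implementation.

    import torch
    import torch.nn as nn

    class AttentiveHead(nn.Module):
        """Generic attention-augmented detection head (illustration only)."""
        def __init__(self, in_channels=256, num_classes=2):  # 2 classes assumed: fire, smoke
            super().__init__()
            # Spatial attention map used to re-weight features, e.g. to emphasize
            # semi-transparent fire/smoke regions before classification and regression.
            self.attn = nn.Sequential(nn.Conv2d(in_channels, 1, kernel_size=1),
                                      nn.Sigmoid())
            self.cls = nn.Conv2d(in_channels, num_classes, kernel_size=3, padding=1)
            self.reg = nn.Conv2d(in_channels, 4, kernel_size=3, padding=1)

        def forward(self, feat):
            weights = self.attn(feat)      # (N, 1, H, W) attention map
            feat = feat * weights          # broadcast over channels
            return self.cls(feat), self.reg(feat)

    # Example: apply the head to a dummy FPN-level feature map.
    head = AttentiveHead()
    cls_logits, box_deltas = head(torch.randn(1, 256, 32, 32))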

Method and Experimental Results

BibTeX

@inproceedings{han2024mmasia,
  title     = {Fire and Smoke Detection with Burning Intensity Representation},
  author    = {Xiaoyi Han and Yanfei Wu and Nan Pu and Zunlei Feng and Qifei Zhang and Yijun Bei and Lechao Cheng},
  booktitle = {The 6th ACM Multimedia Asia Conference},
  year      = {2024}
}