Paper Title
Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines
Paper Authors
Paper Abstract
Smart meters enable remote and automatic electricity, water, and gas consumption readings and are being widely deployed in developed countries. Nonetheless, there is still a huge number of non-smart meters in operation. Image-based Automatic Meter Reading (AMR) focuses on dealing with this type of meter reading. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading and to perform experiments on unconstrained images. We achieved a 100.0% F1-score in the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNeXt-101).
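The abstract outlines a two-stage pipeline (dial detection followed by per-dial recognition) without implementation details. Below is a minimal illustrative sketch of such a pipeline, not the authors' code: the torchvision Faster R-CNN backbone, the single "dial" class, the confidence threshold, and the left-to-right ordering heuristic are all assumptions made for the example.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_dial_detector(num_classes=2):
    # Randomly initialized Faster R-CNN; num_classes = background + "dial" (assumed).
    # Load trained weights before real use.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def read_meter(image, detector, recognizer, score_thresh=0.5):
    # image: float tensor [3, H, W] in [0, 1].
    # recognizer: any callable mapping a dial crop to a digit (placeholder).
    detector.eval()
    with torch.no_grad():
        out = detector([image])[0]
    keep = out["scores"] > score_thresh
    boxes = out["boxes"][keep]
    # Sort detected dials by horizontal position so digits follow display order.
    boxes = boxes[boxes[:, 0].argsort()]
    digits = []
    for x1, y1, x2, y2 in boxes.round().int().tolist():
        crop = image[:, y1:y2, x1:x2]
        digits.append(recognizer(crop))
    return digits

The recognizer here is only a stand-in for the per-dial classification stage; per the abstract, the best reported recognition results use a Faster R-CNN model with a ResNeXt-101 backbone.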