VGGNet and Attention Mechanism-Based Image Quality Assessment Algorithm in Symmetry Edge Intelligence Systems
| Published in: | Symmetry (Basel) Vol. 17; no. 3; p. 331 |
|---|---|
| Main Authors: | , , , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Basel: MDPI AG, 01.03.2025 |
| Subjects: | |
| ISSN: | 2073-8994 |
| Summary: | With the rapid development of Internet of Things (IoT) technology, the number of devices connected to the network is growing explosively, and improving the performance of edge devices has become an important challenge. Research on quality evaluation algorithms for brain tumor images remains scarce within symmetry edge intelligence systems. Additionally, the data volume of brain tumor datasets is frequently inadequate for training neural network models. Most existing no-reference image quality assessment methods rely on natural scene statistics or build a single-network model without considering visual perception characteristics, so their evaluation results differ significantly from subjective perception. To address these issues, we propose the AM-VGG-IQA (Attention Module Visual Geometry Group Image Quality Assessment) algorithm and extend the brain tumor MRI dataset. AM-VGG-IQA integrates visual saliency features with attention mechanism modules: the saliency features bring the model's evaluation results closer to human perception, while the attention module reduces the number of network parameters and speeds up training. On the brain tumor MRI dataset, our model achieves 85% accuracy, enabling it to effectively evaluate brain tumor images in edge intelligence systems. We also carry out cross-dataset experiments; notably, the performance of AM-VGG-IQA remains relatively stable under varying training and testing ratios, demonstrating its robustness for edge applications. (An illustrative architectural sketch follows this record.) |
|---|---|
| DOI: | 10.3390/sym17030331 |
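
The abstract describes the overall shape of AM-VGG-IQA (a VGG backbone, an attention module, visual saliency features, and a scalar quality score) but not its exact layers. Below is a minimal PyTorch sketch of one plausible reading of that description; the `ChannelAttention` block, the choice to fuse the saliency map by weighting the input image, and all layer sizes are illustrative assumptions, not the paper's reported architecture.

```python
# Illustrative sketch of a VGG-based no-reference IQA model with a
# channel-attention block, loosely following the abstract's description of
# AM-VGG-IQA. Layer choices, module names, and the saliency-fusion strategy
# are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class ChannelAttention(nn.Module):
    """SE-style channel attention: reweights feature channels (assumed form)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class AMVGGIQA(nn.Module):
    """VGG-16 feature extractor + attention block + quality-score regressor.

    The saliency map (one channel, same spatial size as the input) weights the
    input image before feature extraction -- one plausible way to inject
    visual-saliency information; the paper may fuse it differently.
    """

    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=None)  # load pretrained weights in practice
        self.features = vgg.features      # conv backbone, outputs 512 channels
        self.attention = ChannelAttention(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regressor = nn.Sequential(
            nn.Linear(512, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),            # scalar quality score
        )

    def forward(self, image: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        x = image * saliency              # emphasize salient regions
        x = self.features(x)
        x = self.attention(x)
        x = self.pool(x).flatten(1)
        return self.regressor(x)


if __name__ == "__main__":
    model = AMVGGIQA()
    img = torch.rand(2, 3, 224, 224)      # batch of 2 RGB slices
    sal = torch.rand(2, 1, 224, 224)      # matching saliency maps
    print(model(img, sal).shape)          # torch.Size([2, 1])
```

Note that this sketch, as written, only adds parameters on top of the full VGG-16 backbone; to realize the parameter reduction and faster training the abstract attributes to the attention module, the backbone would need to be pruned or partially replaced by the attention block.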