PixISegNet: pixel-level iris segmentation network using convolutional encoder–decoder with stacked hourglass bottleneck
| Published in: | IET Biometrics, Volume 9, Issue 1, pp. 11-24 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Stevenage: The Institution of Engineering and Technology, 01.01.2020; John Wiley & Sons, Inc |
| ISSN: | 2047-4938, 2047-4946 |
| Summary: | In this paper, we present a new iris ROI segmentation algorithm using a deep convolutional neural network (NN) to achieve state-of-the-art segmentation performance on well-known iris image data sets. The authors' model surpasses the performance of the state-of-the-art Iris DenseNet framework by applying several strategies, including multi-scale/multi-orientation training, model training from scratch, and proper hyper-parameterisation of crucial parameters. The proposed PixISegNet consists of an autoencoder that primarily uses long and short skip connections and a stacked hourglass network between the encoder and decoder. The stacked hourglass network repeatedly scales features down and up, which helps in extracting features at multiple scales and in segmenting the iris robustly even in occluded environments. Furthermore, cross-entropy loss and content loss optimise the proposed model. The content loss considers high-level features and thus operates at a different scale of abstraction, which complements the cross-entropy loss, which considers pixel-to-pixel classification loss. Additionally, they have checked the robustness of the proposed network by rotating images by certain degrees, changing the aspect ratio, blurring, and changing the contrast. Experimental results on the various iris characteristics demonstrate the superiority of the proposed method over the state-of-the-art iris segmentation methods considered in this study. In order to demonstrate network generalisation, they deploy a very stringent TOTA (i.e. train-once-test-all) strategy. Their proposed method achieves $E_1$ scores of 0.00672, 0.00916 and 0.00117 on the UBIRIS-V2, IIT-D and CASIA V3.0 Interval data sets, respectively. Moreover, such a deep convolutional NN for segmentation, when included in an end-to-end iris recognition system with a Siamese-based matching network, will augment the performance of the Siamese network. (Illustrative sketches of the hourglass bottleneck, the combined loss and the $E_1$ metric follow the record details below.) |
|---|---|
| DOI: | 10.1049/iet-bmt.2019.0025 |
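The abstract describes an encoder-decoder whose bottleneck is a stacked hourglass module, tied to the encoder and decoder through long and short skip connections. The PyTorch sketch below only illustrates that general idea; the depth, channel widths, use of a single hourglass module and all layer choices are assumptions made for illustration, not the published PixISegNet configuration.

```python
# Minimal encoder-decoder with an hourglass bottleneck and skip connections.
# All sizes and the single-hourglass bottleneck are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class Hourglass(nn.Module):
    """Recursive hourglass: repeated down/up-scaling with a skip branch per scale."""
    def __init__(self, depth, channels):
        super().__init__()
        self.skip = conv_block(channels, channels)            # short skip at this scale
        self.down = conv_block(channels, channels)
        self.inner = Hourglass(depth - 1, channels) if depth > 1 else conv_block(channels, channels)
        self.up = conv_block(channels, channels)

    def forward(self, x):
        skip = self.skip(x)
        y = self.down(F.max_pool2d(x, 2))                     # scale down
        y = self.up(self.inner(y))                            # process and refine
        y = F.interpolate(y, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        return y + skip                                       # merge scales

class IrisSegNet(nn.Module):
    def __init__(self, in_ch=3, base=32, hg_depth=3):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = Hourglass(hg_depth, base * 2)
        self.dec2 = conv_block(base * 2, base)
        self.dec1 = conv_block(base, base)
        self.head = nn.Conv2d(base, 1, 1)                     # per-pixel iris/non-iris logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(e2)
        d2 = self.dec2(F.interpolate(b + e2, scale_factor=2, mode="bilinear", align_corners=False))
        d1 = self.dec1(d2 + e1)                               # long skip from encoder to decoder
        return self.head(d1)

if __name__ == "__main__":
    net = IrisSegNet()
    logits = net(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 1, 128, 128])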
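The model is reportedly optimised with a pixel-wise cross-entropy loss plus a content loss computed on high-level features. A minimal sketch of such a combined objective follows; the use of early VGG16 features as the content extractor, the layer cut-off and the equal weighting are assumptions, not the paper's recipe.

```python
# Sketch of cross-entropy + content loss; feature extractor and weighting are
# illustrative assumptions. Input normalisation for VGG is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SegmentationLoss(nn.Module):
    def __init__(self, content_weight=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()                      # pixel-wise classification loss
        # weights=None keeps the sketch self-contained; in practice pretrained
        # ImageNet weights would normally be used for a content/perceptual loss.
        vgg = torchvision.models.vgg16(weights=None).features[:9]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.features = vgg.eval()
        self.content_weight = content_weight

    def forward(self, logits, target):
        # logits, target: (N, 1, H, W); target is a binary mask
        pixel_loss = self.bce(logits, target.float())
        pred = torch.sigmoid(logits).repeat(1, 3, 1, 1)        # VGG expects 3 channels
        gt = target.float().repeat(1, 3, 1, 1)
        content_loss = F.mse_loss(self.features(pred), self.features(gt))
        return pixel_loss + self.content_weight * content_loss
```

Because the content term compares frozen high-level feature maps of the predicted and ground-truth masks, it penalises structural differences at a coarser level of abstraction than the per-pixel cross-entropy term.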
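The quoted $E_1$ scores are segmentation error rates. Assuming the common NICE.I-style definition (the proportion of pixels on which the predicted and ground-truth masks disagree, averaged over the test images), a small helper could look like this:

```python
# Hypothetical helper: E1 as the mean fraction of disagreeing pixels per image,
# assuming binary masks of identical shape. Not taken from the paper.
import numpy as np

def e1_score(pred_masks, gt_masks):
    """pred_masks, gt_masks: iterables of equal-shaped binary NumPy arrays."""
    per_image = [np.mean(p.astype(bool) ^ g.astype(bool))
                 for p, g in zip(pred_masks, gt_masks)]
    return float(np.mean(per_image))

# Example: two 4x4 masks differing in one pixel -> E1 = 1/16 = 0.0625
pred = np.zeros((4, 4), dtype=bool); pred[0, 0] = True
gt = np.zeros((4, 4), dtype=bool)
print(e1_score([pred], [gt]))  # 0.0625
```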