Spatially variant biases considered self-supervised depth estimation based on laparoscopic videos
| Published in: | Computer Methods in Biomechanics and Biomedical Engineering. Volume 10, Issue 3, pp. 274-282 |
|---|---|
| Main authors: | |
| Medium: | Journal Article |
| Language: | English |
| Publication details: | Taylor & Francis, 04.05.2022 |
| Subject: | |
| ISSN: | 2168-1163, 2168-1171 |
| Abstract: | Depth estimation is an essential tool in obtaining depth information for robotic surgery and augmented reality technology in the current laparoscopic surgery robot system. Since there is a lack of ground truth for depth values and laparoscope motions during operation, depth estimation networks have difficulty predicting depth maps from laparoscopic images under a supervised strategy. It is challenging to generate correct depth maps for the different environments seen in abdominal images. To tackle these problems, we propose a novel monocular self-supervised depth estimation network with a sparse nest architecture. We design a non-local block to capture broader and deeper context features that further enhance the scene-variant generalisation capacity of the network across datasets. Moreover, we introduce an improved multi-mask feature in the loss function to tackle the classical occlusion problem, based on the time-series information from stereo videos. We also use heteroscedastic aleatoric uncertainty to reduce the effect of noisy data on depth estimation. We compared our proposed method with existing methods on different scenes across the datasets. The experimental results show that the proposed model outperformed the state-of-the-art models qualitatively and quantitatively. |
|---|---|
| ISSN: | 2168-1163, 2168-1171 |
| DOI: | 10.1080/21681163.2021.2015723 |
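The abstract notes that heteroscedastic aleatoric uncertainty is used to reduce the effect of noisy data. The sketch below illustrates one common way such an uncertainty-weighted photometric term is written in PyTorch, following the widely used Kendall-and-Gal-style formulation; the function name, tensor shapes, and the choice of an L1 residual are illustrative assumptions, not details taken from the paper.

```python
import torch

def uncertainty_weighted_photometric_loss(pred, target, log_var):
    """Photometric residual attenuated by a predicted per-pixel log-variance.

    Noisy pixels receive less weight through exp(-log_var), while the additive
    log_var term penalises inflating the uncertainty everywhere. This is a
    generic illustration of heteroscedastic aleatoric uncertainty weighting,
    not the paper's exact loss.
    """
    residual = torch.abs(pred - target)               # L1 photometric error
    loss = residual * torch.exp(-log_var) + log_var   # attenuate + regularise
    return loss.mean()

# Hypothetical usage: a source frame warped into the target view via the
# predicted depth and camera motion, plus a predicted log-variance map.
warped = torch.rand(1, 3, 192, 320)
target = torch.rand(1, 3, 192, 320)
log_var = torch.zeros(1, 1, 192, 320, requires_grad=True)  # broadcast over channels
print(uncertainty_weighted_photometric_loss(warped, target, log_var))
```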