Deconstruct to reconstruct: an automated pipeline for parsing complex CT assemblies

Bibliographic Details
Published in: Machine Vision and Applications, Vol. 37, No. 1, p. 8
Main Authors: Lippmann, Peter, Remme, Roman, Hamprecht, Fred A.
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.01.2026
Subjects:
ISSN:0932-8092, 1432-1769
Online Access: Get full text
Abstract: Many technical products are assemblies formed from smaller, versatile building blocks. Deconstructing such assemblies is an industrially important problem and an inspiring challenge for machine learning approaches. For the first time, we present an effective and fully automated pipeline for parsing large-scale, complex 3D assemblies from computed tomography (CT) scans into their individual parts. We have generated and make available a high-quality dataset of simulated, physically accurate CT scans with ground truth annotations. It consists of seven high-resolution CT scans ($$\sim \! 2000^3$$ voxels) of different technical assemblies with up to 3600 parts, each annotated with instance and semantic labels. The parts strongly vary in size and sometimes differ in fine details only. Our pipeline successfully handles the high-resolution volumetric inputs (3–30 GB) and produces detailed reconstructions of complex assemblies. The pipeline combines a 3D deep boundary detection network trained only on simulated CT scans with efficient graph partitioning to segment the 3D scans. The predicted instance segments are matched and aligned with a known part catalog to form a set of candidate part poses. The subset of these proposals that jointly best reconstructs the assembly is found by solving an instance of the maximum weighted independent set problem. We demonstrate that our approach generalizes to different CT scan setups and yields promising results even on real CT scans. Our pipeline is applicable to models that include parts not seen during training, making our approach adaptable to real-world scenarios.
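The final step the abstract describes — picking the subset of candidate part poses that jointly best reconstructs the assembly — is an instance of the maximum weighted independent set problem. The sketch below is a toy illustration only, not the paper's solver: the function name, the weights, and the conflict graph are invented, and the exhaustive search is only feasible for tiny instances.

```python
from itertools import combinations


def max_weight_independent_set(weights, conflicts):
    """Exhaustively find the max-weight subset of candidate part poses
    such that no two selected poses conflict (e.g. overlap in space).

    weights:   dict mapping pose id -> score (how well it fits the scan)
    conflicts: set of frozensets {u, v} of mutually exclusive poses
    """
    nodes = list(weights)
    best, best_w = frozenset(), 0.0
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            # Skip subsets containing any conflicting pair of poses.
            if any(frozenset(p) in conflicts for p in combinations(subset, 2)):
                continue
            w = sum(weights[n] for n in subset)
            if w > best_w:
                best, best_w = frozenset(subset), w
    return best, best_w
```

Here each node stands for one candidate pose, an edge marks two poses that cannot coexist, and the weight rewards how well a pose explains the scan; the real pipeline would need a heuristic or exact combinatorial solver to scale to thousands of candidates.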
ArticleNumber 8
Author Lippmann, Peter
Remme, Roman
Hamprecht, Fred A.
Author affiliations: all three authors are with the IWR, Heidelberg University (contact: peter.lippmann@iwr.uni-heidelberg.de).
Copyright The Author(s) 2025
The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1007/s00138-025-01717-5
Discipline Applied Sciences
Engineering
Computer Science
Funding: Ruprecht-Karls-Universität Heidelberg (1026)
Keywords: Complex assemblies; 3D object recognition; Assembly reconstruction; Computed tomography; 3D segmentation
OpenAccessLink https://link.springer.com/10.1007/s00138-025-01717-5
Subject terms: Algorithms; Annotations; Assemblies; Automation; Aviation; Communications Engineering; Computed tomography; Computer Science; Datasets; High resolution; Image Processing and Computer Vision; Localization; Machine learning; Medical imaging; Networks; Pattern Recognition; Quality control; Segments
URI https://link.springer.com/article/10.1007/s00138-025-01717-5
https://www.proquest.com/docview/3274111807