Learning of 3D Graph Convolution Networks for Point Cloud Analysis

Detailed bibliography
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 44, Issue 8, pp. 4212-4224
Main authors: Lin, Zhi-Hao; Huang, Sheng-Yu; Wang, Yu-Chiang Frank
Format: Journal Article
Language: English
Publication details: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2022
ISSN: 0162-8828; EISSN: 1939-3539, 2160-9292
Abstract Point clouds are among the most popular geometry representations in 3D vision. However, unlike 2D images with pixel-wise layouts, such representations contain unordered data points, which makes processing and understanding the associated semantic information quite challenging. Although a number of previous works attempt to analyze point clouds and achieve promising performance, their performance degrades significantly when data variations such as shift and scale changes are present. In this paper, we propose 3D graph convolution networks (3D-GCN), which uniquely learn 3D kernels with graph max-pooling mechanisms for extracting geometric features from point cloud data across different scales. We show that, with the proposed 3D-GCN, satisfactory shift and scale invariance can be jointly achieved. We further show that 3D-GCN can be applied to point cloud classification and segmentation tasks, with ablation studies and visualizations verifying the design of 3D-GCN.
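The abstract's key mechanism is a graph max-pooling aggregation over local neighborhoods of the point cloud. As a rough illustration only, and not the authors' implementation, the Python sketch below builds a k-nearest-neighbor graph from raw 3D coordinates and pools per-point features by an element-wise maximum over each neighborhood; the feature width, the value of k, and the helper names knn_graph and graph_max_pool are assumptions made for this example.

```python
# Minimal sketch (assumed, illustrative only): k-NN graph construction and
# graph max-pooling over point-cloud neighborhoods. Not the 3D-GCN code.
import numpy as np

def knn_graph(points, k):
    """Return indices of the k nearest neighbors of every point, shape (N, k)."""
    diff = points[:, None, :] - points[None, :, :]        # (N, N, 3) pairwise offsets
    dist2 = np.einsum('ijk,ijk->ij', diff, diff)          # (N, N) squared distances
    np.fill_diagonal(dist2, np.inf)                       # exclude each point itself
    return np.argsort(dist2, axis=1)[:, :k]               # (N, k) neighbor indices

def graph_max_pool(features, neighbor_idx):
    """Aggregate each point's neighborhood by an element-wise maximum.

    features:     (N, C) per-point feature vectors
    neighbor_idx: (N, k) neighbor indices from knn_graph
    returns:      (N, C) pooled features
    """
    neighbor_feats = features[neighbor_idx]               # (N, k, C)
    # Include the center point before taking the max over the neighborhood.
    all_feats = np.concatenate([features[:, None, :], neighbor_feats], axis=1)
    return all_feats.max(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.standard_normal((1024, 3))                  # toy point cloud
    feats = rng.standard_normal((1024, 32))               # toy per-point features
    idx = knn_graph(pts, k=16)
    pooled = graph_max_pool(feats, idx)
    print(pooled.shape)                                   # (1024, 32)
```

Because the element-wise maximum ignores neighbor ordering and the graph is built purely from coordinate differences, this aggregation is invariant to point permutation and to global translation, which is consistent with the shift-invariance property the abstract claims for 3D-GCN. Scale invariance in the paper is attributed to the learned kernels operating across different scales, which this toy sketch does not attempt to reproduce.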
Authors:
– Lin, Zhi-Hao (ORCID: 0000-0002-4831-5488; email: r08942062@ntu.edu.tw; Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan)
– Huang, Sheng-Yu (ORCID: 0000-0002-3149-9620; email: r08942095@ntu.edu.tw; Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan)
– Wang, Yu-Chiang Frank (ORCID: 0000-0002-2333-157X; email: ycwang@ntu.edu.tw; Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan)
CODEN ITPIDJ
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TPAMI.2021.3059758
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 4224
ExternalDocumentID 33591911
10_1109_TPAMI_2021_3059758
9355025
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Ministry of Science and Technology of Taiwan
  grantid: MOST 109-2634-F-002-037
ISICitedReferencesCount 58
ISSN 0162-8828
1939-3539
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID 0000-0002-2333-157X
0000-0002-3149-9620
0000-0002-4831-5488
PMID 33591911
PQID 2682919425
PQPubID 85458
PageCount 13
PublicationCentury 2000
PublicationDate 2022-08-01
PublicationDecade 2020
PublicationPlace United States (New York)
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 4212
SubjectTerms 3D classification
3D segmentation
3D vision
Ablation
Convolution
Data points
deformable kernels
Feature extraction
graph convolution networks
Image segmentation
Kernel
point clouds
Representations
Scale invariance
Shape
Task analysis
Three dimensional models
Three-dimensional displays
Two dimensional displays
Title Learning of 3D Graph Convolution Networks for Point Cloud Analysis
URI https://ieeexplore.ieee.org/document/9355025
https://www.ncbi.nlm.nih.gov/pubmed/33591911
https://www.proquest.com/docview/2682919425
https://www.proquest.com/docview/2490604265
Volume 44
WOSCitedRecordID wos000820521600002