Joint channel–spatial entropy modeling for efficient visual coding

Detailed bibliography
Published in: Neural Computing & Applications, Volume 37, Issue 21, pp. 17111-17128
Main authors: Li, Yuan; Jiang, Xiaotong; Sun, Zitang
Format: Journal Article
Language: English
Published: London: Springer London, 1 July 2025 (Springer Nature B.V.)
ISSN: 0941-0643 (print), 1433-3058 (electronic)
Online access: Get full text
Abstract: Deep learning-based methods have recently achieved impressive performance in lossy image compression, surpassing traditional codecs in rate-distortion efficiency. However, current learned compressors still struggle to fully exploit cross-channel redundancies and long-range spatial dependencies in their latent representations, and many rely on sequential context models that slow down decoding. To address these issues, we propose a novel compression framework that performs joint channel–spatial context modeling for improved entropy coding. Our approach introduces a Multi-Dimensional Conditional Context (MDCC) architecture, which integrates a new non-serial channel-wise context model with spatial context conditioning to capture inter-channel correlations and local dependencies simultaneously. In addition, we design a Residual Local–Global Enhancement module that combines ConvNeXt convolutional blocks with Swin Transformer-based blocks to capture fine-grained textures and global image structure in the latent representation. By augmenting the standard hyperprior with these rich contextual cues, the proposed method estimates latent distributions more accurately, leading to superior compression performance. Experiments on the Kodak and CLIC image datasets demonstrate that the proposed approach achieves up to a 17% bit-rate reduction over the latest VVC (H.266) standard at comparable quality. Furthermore, our model eliminates the autoregressive decoding bottleneck, enabling nearly 10× faster decoding than previous state-of-the-art learned compression models. These results establish the effectiveness of joint channel–spatial context modeling and highlight the potential of the proposed MDCC framework for practical, high-performance neural image compression.
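For orientation, the sketch below shows one generic way an entropy model can condition on both channel-group and spatial (checkerboard) context when predicting Gaussian parameters for a quantized latent. It is a minimal PyTorch illustration only: the class name JointChannelSpatialContext, the group count, the checkerboard pattern, and all layer widths are assumptions made for this example, and it does not reproduce the paper's MDCC architecture, its non-serial channel scheme, or the Residual Local–Global Enhancement module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointChannelSpatialContext(nn.Module):
    """Toy joint channel-spatial context model (illustrative; not the paper's MDCC)."""

    def __init__(self, latent_ch=192, hyper_ch=192, groups=4):
        super().__init__()
        assert latent_ch % groups == 0
        self.groups = groups
        self.gc = latent_ch // groups  # channels per group
        # One parameter head per channel group. Each head sees the hyperprior
        # features plus a masked copy of the latent, and predicts a Gaussian
        # mean and scale for every element of its own group.
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(hyper_ch + latent_ch, 256, kernel_size=1),
                nn.GELU(),
                nn.Conv2d(256, 256, kernel_size=3, padding=1),
                nn.GELU(),
                nn.Conv2d(256, 2 * self.gc, kernel_size=1),
            )
            for _ in range(groups)
        ])

    @staticmethod
    def checkerboard_mask(h, w, device):
        # "Anchor" half of a checkerboard: these positions are treated as already
        # visible context; the complementary half is predicted from them.
        m = torch.zeros(1, 1, h, w, device=device)
        m[..., 0::2, 0::2] = 1.0
        m[..., 1::2, 1::2] = 1.0
        return m

    def forward(self, y_hat, hyper):
        # y_hat: quantized latent (B, C, H, W); hyper: hyperprior features (B, hyper_ch, H, W).
        b, c, h, w = y_hat.shape
        mask = self.checkerboard_mask(h, w, y_hat.device)
        means, scales = [], []
        for g, head in enumerate(self.heads):
            ctx = y_hat.clone()
            # Hide the non-anchor half of the current group so its parameters are
            # predicted only from cross-channel context and the anchor positions.
            # In a real codec, visibility of the other groups would follow the
            # chosen decoding schedule; here they are left visible purely for
            # illustration.
            sl = slice(g * self.gc, (g + 1) * self.gc)
            ctx[:, sl] = ctx[:, sl] * mask
            params = head(torch.cat([hyper, ctx], dim=1))
            mu, raw_scale = params.chunk(2, dim=1)
            means.append(mu)
            scales.append(F.softplus(raw_scale) + 1e-6)  # keep scales positive
        # Concatenate back to full latent shape; feed to a Gaussian entropy coder.
        return torch.cat(means, dim=1), torch.cat(scales, dim=1)


if __name__ == "__main__":
    model = JointChannelSpatialContext()
    y_hat = torch.randn(1, 192, 32, 48)   # stand-in quantized latent
    hyper = torch.randn(1, 192, 32, 48)   # stand-in hyperprior features
    mu, sigma = model(y_hat, hyper)
    print(mu.shape, sigma.shape)          # both torch.Size([1, 192, 32, 48])
```

The general property such designs exploit is that every position's parameters depend only on the hyperprior and on already-visible context rather than on a raster-scan autoregression, so whole halves of the checkerboard can be decoded in parallel; this is the kind of structure that avoids the sequential decoding bottleneck the abstract refers to, though the paper's actual non-serial channel scheme may be organized differently.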
Authors and affiliations:
Li, Yuan (Graduate School of IPS, Waseda University)
Jiang, Xiaotong (Graduate School of IPS, Waseda University)
Sun, Zitang (Graduate School of Informatics, Kyoto University; ORCID: 0000-0003-2267-421X; email: sun.zitang.c09@kyoto-u.jp)
Copyright: The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI: 10.1007/s00521-025-11138-0
GroupedDBID -Y2
-~C
.4S
.86
.DC
.VR
06D
0R~
0VY
123
1N0
1SB
2.D
203
28-
29N
2J2
2JN
2JY
2KG
2LR
2P1
2VQ
2~H
30V
4.4
406
408
409
40D
40E
53G
5QI
5VS
67Z
6NX
8FE
8FG
8TC
8UJ
95-
95.
95~
96X
AAAVM
AABHQ
AACDK
AAHNG
AAIAL
AAJBT
AAJKR
AANZL
AAOBN
AAPKM
AARHV
AARTL
AASML
AATNV
AATVU
AAUYE
AAWCG
AAYIU
AAYQN
AAYTO
AAYZH
ABAKF
ABBBX
ABBRH
ABBXA
ABDBE
ABDBF
ABDZT
ABECU
ABFSG
ABFTD
ABFTV
ABHLI
ABHQN
ABJNI
ABJOX
ABKCH
ABKTR
ABLJU
ABMNI
ABMQK
ABNWP
ABQBU
ABQSL
ABRTQ
ABSXP
ABTEG
ABTHY
ABTKH
ABTMW
ABULA
ABWNU
ABXPI
ACAOD
ACBXY
ACDTI
ACGFS
ACHSB
ACHXU
ACKNC
ACMDZ
ACMLO
ACOKC
ACOMO
ACPIV
ACSNA
ACSTC
ACUHS
ACZOJ
ADHHG
ADHIR
ADHKG
ADIMF
ADKFA
ADKNI
ADKPE
ADMLS
ADRFC
ADTPH
ADURQ
ADYFF
ADZKW
AEBTG
AEFIE
AEFQL
AEGAL
AEGNC
AEJHL
AEJRE
AEKMD
AEMSY
AENEX
AEOHA
AEPYU
AESKC
AETLH
AEVLU
AEXYK
AEZWR
AFBBN
AFDZB
AFEXP
AFGCZ
AFHIU
AFKRA
AFLOW
AFOHR
AFQWF
AFWTZ
AFZKB
AGAYW
AGDGC
AGGDS
AGJBK
AGMZJ
AGQEE
AGQMX
AGQPQ
AGRTI
AGWIL
AGWZB
AGYKE
AHAVH
AHBYD
AHKAY
AHPBZ
AHSBF
AHWEU
AHYZX
AIAKS
AIGIU
AIIXL
AILAN
AITGF
AIXLP
AJBLW
AJRNO
AJZVZ
ALMA_UNASSIGNED_HOLDINGS
ALWAN
AMKLP
AMXSW
AMYLF
AMYQR
AOCGG
ARAPS
ARCSS
ARMRJ
ASPBG
ATHPR
AVWKF
AXYYD
AYFIA
AYJHY
AZFZN
B-.
B0M
BA0
BBWZM
BDATZ
BENPR
BGLVJ
BGNMA
BSONS
CAG
CCPQU
COF
CS3
CSCUP
DDRTE
DL5
DNIVK
DPUIP
DU5
EAD
EAP
EBLON
EBS
ECS
EDO
EIOEI
EJD
EMI
EMK
EPL
ESBYG
EST
ESX
F5P
FEDTE
FERAY
FFXSO
FIGPU
FINBP
FNLPD
FRRFC
FSGXE
FWDCC
GGCAI
GGRSB
GJIRD
GNWQR
GQ7
GQ8
GXS
H13
HCIFZ
HF~
HG5
HG6
HMJXF
HQYDN
HRMNR
HVGLF
HZ~
I-F
I09
IHE
IJ-
IKXTQ
ITM
IWAJR
IXC
IZIGR
IZQ
I~X
I~Z
J-C
J0Z
JBSCW
JCJTX
JZLTJ
KDC
KOV
KOW
LAS
LLZTM
M4Y
MA-
N2Q
N9A
NB0
NDZJH
NPVJJ
NQJWS
NU0
O9-
O93
O9G
O9I
O9J
OAM
P19
P2P
P62
P9O
PF0
PHGZM
PHGZT
PQGLB
PT4
PT5
QOK
QOS
R4E
R89
R9I
RHV
RIG
RNI
RNS
ROL
RPX
RSV
RZK
S16
S1Z
S26
S27
S28
S3B
SAP
SCJ
SCLPG
SCO
SDH
SDM
SHX
SISQX
SJYHP
SNE
SNPRN
SNX
SOHCF
SOJ
SPISZ
SRMVM
SSLCW
STPWE
SZN
T13
T16
TSG
TSK
TSV
TUC
TUS
U2A
UG4
UOJIU
UTJUX
UZXMN
VC2
VFIZW
W23
W48
WK8
YLTOR
Z45
ZMTXR
~8M
~EX
AAYXX
AFFHD
CITATION
ID FETCH-LOGICAL-c1870-44abe76cb152318e25c72b2c6272db66447ac3ba67c83b95c56e2068d73808b73
IEDL.DBID RSV
Peer reviewed: Yes
Keywords: Self-attention mechanism; Lossy image compression; Joint channel-spatial context; Adaptive neighbored information
Subjects: Artificial Intelligence; Codec; Coding; Compressors; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Context; Data Mining and Knowledge Discovery; Decoding; Entropy; Image compression; Image Processing and Computer Vision; Lagrange multiplier; Modelling; Normal distribution; Original Article; Probability and Statistics in Computer Science; Representations; Semantics; Spatial dependencies; Wavelet transforms
Full text links: https://link.springer.com/article/10.1007/s00521-025-11138-0 ; https://www.proquest.com/docview/3231978567