Wireless Deep Video Semantic Transmission

Detailed Bibliography
Published in: IEEE Journal on Selected Areas in Communications, Volume 41, Issue 1, pp. 214-229
Main authors: Wang, Sixian; Dai, Jincheng; Liang, Zijian; Niu, Kai; Si, Zhongwei; Dong, Chao; Qin, Xiaoqi; Zhang, Ping
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), January 2023
Topics: Channels; Coding; Computer architecture; Design optimization; Domains; Encoding; Entropy; Feature extraction; Frames (data processing); joint source-channel coding; Machine vision; nonlinear transform; Performance measurement; rate-distortion; Semantic communications; Semantics; Task analysis; Transforms; Transmission rate (communications); Video communication; Video transmission; Vision systems; Wireless communication; Wireless sensor networks
ISSN: 0733-8716 (print); EISSN: 1558-0008 (electronic); CODEN: ISACEM
DOI: 10.1109/JSAC.2022.3221977
Online access: https://ieeexplore.ieee.org/document/9953110
Abstract: In this paper, we design a new class of high-efficiency deep joint source-channel coding methods to achieve end-to-end video transmission over wireless channels. The proposed methods exploit a nonlinear transform and conditional coding architecture to adaptively extract semantic features across video frames, and transmit semantic feature domain representations over wireless channels via deep joint source-channel coding. Our framework is collected under the name deep video semantic transmission (DVST). In particular, benefiting from the strong temporal prior provided by the feature domain context, the learned nonlinear transform function becomes temporally adaptive, resulting in a richer and more accurate entropy model guiding the transmission of the current frame. Accordingly, a novel rate-adaptive transmission mechanism is developed to customize deep joint source-channel coding for video sources. It learns to allocate the limited channel bandwidth within and among video frames to maximize the overall transmission performance. The whole DVST design is formulated as an optimization problem whose goal is to minimize the end-to-end transmission rate-distortion performance under perceptual quality metrics or machine vision task performance metrics. Across standard video source test sequences and various communication scenarios, experiments show that our DVST can generally surpass traditional wireless video coded transmission schemes. The proposed DVST framework can well support future semantic communications due to its video content-aware and machine vision task integration abilities.
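
The end-to-end objective described in the abstract can be sketched as a rate-distortion Lagrangian over a group of frames (the notation below is a generic illustration assumed for clarity, not the paper's own formulation):

$$\min_{\boldsymbol{\theta}}\ \mathbb{E}\Big[\sum_{t=1}^{T}\big(\lambda\, k_t + d(x_t, \hat{x}_t)\big)\Big],$$

where $k_t$ denotes the channel bandwidth cost allocated to frame $t$ (adapted within and among frames), $\hat{x}_t$ is the frame reconstructed after deep joint source-channel decoding at the receiver, $d(\cdot,\cdot)$ is a perceptual-quality or machine-vision task loss, $\boldsymbol{\theta}$ collects the learnable transform and coding parameters, and $\lambda$ sets the trade-off between bandwidth cost and distortion.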

Authors and affiliations:
1. Sixian Wang (ORCID 0000-0002-0621-1285), Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
2. Jincheng Dai (ORCID 0000-0002-0310-568X; daijincheng@bupt.edu.cn), Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
3. Zijian Liang, Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
4. Kai Niu (ORCID 0000-0002-8076-1867), Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
5. Zhongwei Si (ORCID 0000-0002-8286-2872), Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
6. Chao Dong (ORCID 0000-0002-4922-7762), Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
7. Xiaoqi Qin (ORCID 0000-0002-5788-0657), State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
8. Ping Zhang (ORCID 0000-0002-0269-104X), State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China

Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
Funding: National Natural Science Foundation of China (grants 92067202, 62001049, 62071058, 61971062); Beijing Natural Science Foundation (grant 4222012)