Cinematographic Camera Diffusion Model
Saved in:
| Published in: | Computer Graphics Forum, Vol. 43, no. 2, pp. 1 - n/a |
|---|---|
| Main Authors: | Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan |
| Format: | Journal Article |
| Language: | English |
| Published: | Oxford: Blackwell Publishing Ltd / Wiley, 01.05.2024 |
| Subjects: | Artificial Intelligence; Artists; Cameras; Cinematography; Computational Geometry; Computer Science; Virtual environments |
| ISSN: | 0167-7055, 1467-8659 |
| Abstract | Designing effective camera trajectories in virtual 3D environments is a challenging task even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters. Dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization‐based solving, encoding of empirical rules, learning from real examples, …), the results either lack variety or ease of control.
In this paper, we propose a cinematographic camera diffusion model using a transformer‐based architecture to handle temporality, and exploit the stochasticity of diffusion models to generate diverse, high‐quality trajectories conditioned by high‐level textual descriptions. We extend the work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, so as to augment the degree of control available to designers. We demonstrate the strengths of this text‐to‐camera motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control. |
|---|---|
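The abstract describes a denoising-diffusion pipeline that generates camera trajectories from pure noise, conditioned on a textual description. As a rough illustration of the reverse (denoising) process such models build on (standard DDPM sampling), consider the toy loop below; the `ddpm_sample` and `denoise_fn` names, the linear beta schedule, and the frame/degree-of-freedom shapes are all assumptions standing in for the paper's transformer model, not its actual implementation.

```python
import numpy as np

def ddpm_sample(denoise_fn, text_emb, n_frames=60, dof=5, T=50, seed=0):
    """Schematic DDPM sampling loop for a camera trajectory.

    denoise_fn(x_t, t, text_emb) -> predicted noise; it stands in for a
    trained transformer conditioned on a text embedding. Shapes and the
    linear beta schedule are illustrative only.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)        # noise schedule beta_t
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)           # cumulative products alpha-bar_t

    x = rng.standard_normal((n_frames, dof))  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = denoise_fn(x, t, text_emb)
        # posterior mean of x_{t-1} given the predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

With a real trained denoiser, `text_emb` would be an encoding of the high-level textual description the paper conditions on; for smoke-testing, any placeholder denoiser works.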
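The abstract also mentions blending naturally between motions via latent interpolation and integrating keyframing constraints. A generic way such ideas appear in diffusion pipelines (not necessarily the authors' exact formulation) is spherical interpolation between initial latents and imputation of user-fixed frames into the generated trajectory; the sketch below illustrates both, and the helper names `slerp` and `impose_keyframes` are hypothetical.

```python
import numpy as np

def slerp(z0, z1, w):
    """Spherical interpolation between two Gaussian latents (illustrative).

    Blending initial noise latents, rather than trajectories directly, lets a
    diffusion model produce an in-between motion that stays on-distribution.
    """
    z0f, z1f = z0.ravel(), z1.ravel()
    cos_omega = np.dot(z0f, z1f) / (np.linalg.norm(z0f) * np.linalg.norm(z1f))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < 1e-6:                       # nearly parallel: fall back to lerp
        return (1.0 - w) * z0 + w * z1
    return (np.sin((1.0 - w) * omega) * z0 + np.sin(w * omega) * z1) / np.sin(omega)

def impose_keyframes(x, keyframes):
    """Overwrite selected frames of a generated trajectory with user keyframes.

    keyframes maps frame index -> pose vector; imputation-style conditioning
    typically reapplies such constraints during denoising as well.
    """
    out = x.copy()
    for idx, pose in keyframes.items():
        out[idx] = pose
    return out
```

In practice the keyframe constraint would be reapplied inside the sampling loop at each denoising step, so the surrounding frames adapt smoothly to the fixed ones; applying it once at the end, as here, only shows the data flow.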
| Author | Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan |
| Author_xml | 1. Jiang, Hongda (ORCID 0000-0002-0296-4431), National Key Laboratory of General AI; 2. Wang, Xi (ORCID 0000-0001-6586-1926), Team VISTA, LIX, École Polytechnique; 3. Christie, Marc, University Rennes, Inria, CNRS, IRISA; 4. Liu, Libin (ORCID 0000-0003-2280-6817), National Key Laboratory of General AI; 5. Chen, Baoquan (baoquan@pku.edu.cn), National Key Laboratory of General AI |
| BackLink | https://hal.science/hal-04826479 (view record in HAL) |
| CitedBy_id | 10.1109/TMM.2025.3542956 |
| ContentType | Journal Article |
| Copyright | 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.; 2024 The Eurographics Association and John Wiley & Sons Ltd.; License: Attribution |
| DOI | 10.1111/cgf.15055 |
| DatabaseName | CrossRef Computer and Information Systems Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional Hyper Article en Ligne (HAL) Hyper Article en Ligne (HAL) (Open Access) |
| DatabaseTitle | CrossRef Computer and Information Systems Abstracts Technology Research Database Computer and Information Systems Abstracts – Academic Advanced Technologies Database with Aerospace ProQuest Computer Science Collection Computer and Information Systems Abstracts Professional |
| Discipline | Engineering Computer Science |
| EISSN | 1467-8659 |
| EndPage | n/a |
| ExternalDocumentID | oai:HAL:hal-04826479v1 10_1111_cgf_15055 CGF15055 |
| Genre | article |
| GrantInformation_xml | National Key R&D Program of China (2022ZD0160803) |
| ISICitedReferencesCount | 2 |
| ISSN | 0167-7055 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 2 |
| Keywords | Cinematography; Animation; Generative AI; Camera control |
| Language | English |
| License | Attribution: http://creativecommons.org/licenses/by |
| ORCID | 0000-0001-6586-1926 0000-0003-2280-6817 0000-0002-0296-4431 0000-0001-6080-8026 |
| OpenAccessLink | https://hal.science/hal-04826479 |
| PQID | 3057294882 |
| PQPubID | 30877 |
| PageCount | 14 |
| PublicationCentury | 2000 |
| PublicationDate | May 2024 |
| PublicationDateYYYYMMDD | 2024-05-01 |
| PublicationDecade | 2020 |
| PublicationPlace | Oxford |
| PublicationTitle | Computer graphics forum |
| PublicationYear | 2024 |
| Publisher | Blackwell Publishing Ltd Wiley |
| SourceID | hal proquest crossref wiley |
| SourceType | Open Access Repository Aggregation Database Enrichment Source Index Database Publisher |
| StartPage | 1 |
| SubjectTerms | Artificial Intelligence; Artists; Cameras; CCS Concepts; Cinematography; Computational Geometry; Computer Science; Computing methodologies → Procedural animation; Virtual environments |
| Title | Cinematographic Camera Diffusion Model |
| URI | https://onlinelibrary.wiley.com/doi/abs/10.1111%2Fcgf.15055 https://www.proquest.com/docview/3057294882 https://hal.science/hal-04826479 |
| Volume | 43 |