AADS: Augmented autonomous driving simulation using data-driven algorithms
| Published in: | Science Robotics, Vol. 4, no. 28 |
|---|---|
| Main Authors: | Li, W; Pan, C W; Zhang, R; Ren, J P; Ma, Y X; Fang, J; Yan, F L; Geng, Q C; Huang, X Y; Gong, H J; Xu, W W; Wang, G P; Manocha, D; Yang, R G |
| Format: | Journal Article |
| Language: | English |
| Published: | United States, 27.03.2019 |
| ISSN: | 2470-9476 |
| Abstract | Simulation systems have become essential to the development and validation of autonomous driving (AD) technologies. The prevailing state-of-the-art approach for simulation uses game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, CG images still lack the richness and authenticity of real-world images, and using CG images for training leads to degraded performance. Here, we present our augmented autonomous driving simulation (AADS). Our formulation augmented real-world pictures with a simulated traffic flow to create photorealistic simulation images and renderings. More specifically, we used LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generated plausible traffic flows for cars and pedestrians and composed them into the background. The composite images could be resynthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photorealistic, fully annotated, and ready for training and testing of AD systems from perception to planning. We explain our system design and validate our algorithms with a number of AD tasks from detection to segmentation and predictions. Compared with traditional approaches, our method offers scalability and realism. Scalability is particularly important for AD simulations, and we believe that real-world complexity and diversity cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility of a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation. |
|---|---|
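The abstract describes a two-part loop: advance simulated traffic agents along plausible trajectories, then composite them into a real scanned background to produce an annotated frame. A minimal Python sketch of that loop follows; it is an illustration of the idea only, not the authors' code, and every class and function name here is hypothetical.

```python
# Illustrative sketch of an AADS-style augmentation loop: simulate traffic
# agents, then composite them into a real scanned background image and emit
# an annotated frame. All names are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Agent:
    kind: str   # "car" or "pedestrian"
    x: float    # position along the street (m)
    v: float    # speed (m/s)

def step_traffic(agents, dt=0.1):
    """Advance each simulated agent along its trajectory by one time step."""
    return [Agent(a.kind, a.x + a.v * dt, a.v) for a in agents]

def composite(background, agents):
    """Stand-in for rendering agents over the real background.

    In the paper this step uses the scanned RGB/LiDAR scene and a chosen
    sensor model; here we only record which agents appear in the frame,
    which is the "fully annotated" part of the output.
    """
    annotations = [{"kind": a.kind, "x": round(a.x, 2)} for a in agents]
    return {"image": background, "annotations": annotations}

agents = [Agent("car", 0.0, 10.0), Agent("pedestrian", 5.0, 1.5)]
frames = []
for _ in range(3):  # simulate a short clip of three frames
    agents = step_traffic(agents)
    frames.append(composite("scan_001.png", agents))

print(frames[-1]["annotations"])
```

The value of this structure, as the abstract argues, is that the background comes from the real world while the agents stay freely controllable, so every composited frame carries ground-truth annotations for free.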
| Author | Pan, C W; Zhang, R; Ma, Y X; Ren, J P; Yan, F L; Yang, R G; Geng, Q C; Gong, H J; Fang, J; Li, W; Manocha, D; Xu, W W; Huang, X Y; Wang, G P |
| Author_xml | 1. Li, W (ORCID 0000-0002-0059-3745), Nanjing University of Aeronautics and Astronautics, Nanjing, China; 2. Pan, C W (ORCID 0000-0003-0497-7903), Deepwise AI Lab, Beijing, China; 3. Zhang, R (ORCID 0000-0002-4614-7644), Zhejiang University, Hangzhou, China; 4. Ren, J P (ORCID 0000-0002-7658-6912), Zhejiang University, Hangzhou, China; 5. Ma, Y X (ORCID 0000-0001-7237-988X), University of Hong Kong, Hong Kong, China; 6. Fang, J (ORCID 0000-0002-7947-6807), National Engineering Laboratory of Deep Learning Technology and Application, Beijing, China; 7. Yan, F L (ORCID 0000-0003-4418-3809), National Engineering Laboratory of Deep Learning Technology and Application, Beijing, China; 8. Geng, Q C (ORCID 0000-0002-0046-5794), Beihang University, Beijing, China; 9. Huang, X Y (ORCID 0000-0002-5786-3101), National Engineering Laboratory of Deep Learning Technology and Application, Beijing, China; 10. Gong, H J, Nanjing University of Aeronautics and Astronautics, Nanjing, China; 11. Xu, W W, Zhejiang University, Hangzhou, China; 12. Wang, G P (ORCID 0000-0001-7819-0076), Beijing Engineering Technology Research Center of Virtual Simulation and Visualization, Peking University, Beijing, China; 13. Manocha, D (ORCID 0000-0001-7047-9801), University of Maryland, College Park, MD, USA; 14. Yang, R G (ORCID 0000-0001-5296-6307), National Engineering Laboratory of Deep Learning Technology and Application, Beijing, China. Corresponding authors: Li, W; Manocha, D; Yang, R G (liwei87@baidu.com, yangruigang@baidu.com, dm@cs.umd.edu) |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/33137750 (View this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| Copyright | Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. |
| DOI | 10.1126/scirobotics.aaw0863 |
| DatabaseName | PubMed MEDLINE - Academic |
| Discipline | Engineering |
| EISSN | 2470-9476 |
| ExternalDocumentID | 33137750 |
| Genre | Research Support, Non-U.S. Gov't Journal Article |
| ISICitedReferencesCount | 145 |
| ISSN | 2470-9476 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 28 |
| Language | English |
| License | Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. |
| ORCID | 0000-0002-0059-3745 0000-0001-7237-988X 0000-0002-7947-6807 0000-0001-7819-0076 0000-0003-0497-7903 0000-0002-7658-6912 0000-0002-4614-7644 0000-0003-4418-3809 0000-0002-0046-5794 0000-0002-5786-3101 0000-0001-5296-6307 0000-0001-7047-9801 |
| PMID | 33137750 |
| PublicationCentury | 2000 |
| PublicationDate | 2019-03-27 |
| PublicationDecade | 2010 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States |
| PublicationTitle | Science robotics |
| PublicationTitleAlternate | Sci Robot |
| PublicationYear | 2019 |
| Title | AADS: Augmented autonomous driving simulation using data-driven algorithms |
| URI | https://www.ncbi.nlm.nih.gov/pubmed/33137750 https://www.proquest.com/docview/2457280879 |
| Volume | 4 |