Search Results - Data Sets for Robot Learning

  1. Interactive Language: Talking to Robots in Real Time by Lynch, Corey; Wahid, Ayzaan; Tompson, Jonathan; Ding, Tianli; Betker, James; Baruch, Robert; Armstrong, Travis; Florence, Pete

    ISSN: 2377-3766
    Published: IEEE 2024
    Published in IEEE Robotics and Automation Letters (2024)
    “… We present a framework for building interactive, real-time, natural language-instructable robots in the real world, and we open source related assets …”
    Full text
    Journal Article
  2. CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks by Mees, Oier; Hermann, Lukas; Rosete-Beas, Erick; Burgard, Wolfram

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.07.2022
    Published in IEEE Robotics and Automation Letters (01.07.2022)
    “… General-purpose robots coexisting with humans in their environment must learn to relate human language to their perceptions and actions to be useful in a range of daily tasks …”
    Full text
    Journal Article
  3. DSEC: A Stereo Event Camera Dataset for Driving Scenarios by Gehrig, Mathias; Aarents, Willem; Gehrig, Daniel; Scaramuzza, Davide

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.07.2021
    Published in IEEE Robotics and Automation Letters (01.07.2021)
    “… To address these challenges, we propose, DSEC, a new dataset that contains such demanding illumination conditions and provides a rich set of sensory data …”
    Full text
    Journal Article
  4. OverlapTransformer: An Efficient and Yaw-Angle-Invariant Transformer Network for LiDAR-Based Place Recognition by Ma, Junyi; Zhang, Jun; Xu, Jintao; Ai, Rui; Gu, Weihao; Chen, Xieyuanli

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.07.2022
    Published in IEEE Robotics and Automation Letters (01.07.2022)
    “… Place recognition is an important capability for autonomously navigating vehicles operating in complex environments and under changing conditions. It is a key …”
    Full text
    Journal Article
  5. HAPTR2: Improved Haptic Transformer for legged robots’ terrain classification by Bednarek, Michał; Nowicki, Michał R.; Walas, Krzysztof

    ISSN: 0921-8890, 1872-793X
    Published: Elsevier B.V. 01.12.2022
    Published in Robotics and Autonomous Systems (01.12.2022)
    “… The haptic terrain classification is an essential component of a mobile walking robot control system, ensuring proper gait adaptation to the changing environmental conditions …”
    Full text
    Journal Article
  6. Generation of GelSight Tactile Images for Sim2Real Learning by Gomes, Daniel Fernandes; Paoletti, Paolo; Luo, Shan

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.04.2021
    Published in IEEE Robotics and Automation Letters (01.04.2021)
    “… Most current works in Sim2Real learning for robotic manipulation tasks leverage camera vision that may be significantly occluded by robot hands during the manipulation …”
    Full text
    Journal Article
  7. VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM by Gao, Ling; Liang, Yuxuan; Yang, Jiaqi; Wu, Shaoxun; Wang, Chenyu; Chen, Jiaben; Kneip, Laurent

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.07.2022
    Published in IEEE Robotics and Automation Letters (01.07.2022)
    “… Our contribution is the first complete set of benchmark datasets captured with a multi-sensor setup containing an event-based stereo camera, a regular stereo camera, multiple depth sensors …”
    Full text
    Journal Article
  8. MotionBenchMaker: A Tool to Generate and Benchmark Motion Planning Datasets by Chamzas, Constantinos; Quintero-Pena, Carlos; Kingston, Zachary; Orthey, Andreas; Rakita, Daniel; Gleicher, Michael; Toussaint, Marc; Kavraki, Lydia E.

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.04.2022
    Published in IEEE Robotics and Automation Letters (01.04.2022)
    “… other state-of-the-art planners. We present MotionBenchMaker, an open-source tool to generate benchmarking datasets for realistic robot manipulation problems …”
    Full text
    Journal Article
  9. LEMMA: Learning Language-Conditioned Multi-Robot Manipulation by Gong, Ran; Gao, Xiaofeng; Gao, Qiaozi; Shakiah, Suhaila; Thattai, Govind; Sukhatme, Gaurav S.

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.10.2023
    Published in IEEE Robotics and Automation Letters (01.10.2023)
    “… Complex manipulation tasks often require robots with complementary capabilities to collaborate …”
    Full text
    Journal Article
  10. SACSoN: Scalable Autonomous Control for Social Navigation by Hirose, Noriaki; Shah, Dhruv; Sridhar, Ajay; Levine, Sergey

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.01.2024
    Published in IEEE Robotics and Automation Letters (01.01.2024)
    “… By observing and understanding human interactions from past experiences, learning can enable effective social navigation behaviors directly from data …”
    Full text
    Journal Article
  11. ViViD++: Vision for Visibility Dataset by Lee, Alex Junho; Cho, Younggun; Shin, Young-sik; Kim, Ayoung; Myung, Hyun

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.07.2022
    Published in IEEE Robotics and Automation Letters (01.07.2022)
    “… In this letter, we present a dataset capturing diverse visual data formats that target varying luminance conditions …”
    Full text
    Journal Article
  12. Informed Federated Learning to Train a Robotic Arm Inverse Dynamic Model by Jimenez-Perera, Gabriel; Valencia-Vidal, Brayan; Luque, Niceto R.; Ros, Eduardo; Barranco, Francisco

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.10.2025
    Published in IEEE Robotics and Automation Letters (01.10.2025)
    “… Specifically, machine-learning-based inverse dynamic models show promising results for nonrigid robot identification, but the data used to train them are often kept private due to intellectual property protections …”
    Full text
    Journal Article
  13. MotIF: Motion Instruction Fine-Tuning by Hwang, Minyoung; Hejna, Joey; Sadigh, Dorsa; Bisk, Yonatan

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.03.2025
    Published in IEEE Robotics and Automation Letters (01.03.2025)
    “… , if an apple is picked up - many tasks require observing the full motion of the robot to correctly determine success …”
    Full text
    Journal Article
  14. CEAR: Comprehensive Event Camera Dataset for Rapid Perception of Agile Quadruped Robots by Zhu, Shifan; Xiong, Zixun; Kim, Donghyun

    ISSN: 2377-3766
    Published: IEEE 01.10.2024
    Published in IEEE Robotics and Automation Letters (01.10.2024)
    “… To bridge this gap, we introduce CEAR, a dataset comprising data from an event camera, an RGB-D camera, an IMU, a LiDAR, and joint encoders, all mounted on a dynamic quadruped, Mini Cheetah robot …”
    Full text
    Journal Article
  15. Multimodal Detection and Classification of Robot Manipulation Failures by Inceoglu, Arda; Aksoy, Eren Erdal; Sariel, Sanem

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.02.2024
    Published in IEEE Robotics and Automation Letters (01.02.2024)
    “… An autonomous service robot should be able to interact with its environment safely and robustly without requiring human assistance …”
    Full text
    Journal Article
  16. MoSS: Monocular Shape Sensing for Continuum Robots by Shentu, Chengnan; Li, Enxu; Chen, Chaojun; Dewi, Puspita T.; Lindell, David B.; Burgner-Kahrs, Jessica

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.02.2024
    Published in IEEE Robotics and Automation Letters (01.02.2024)
    “… Continuum robots are promising candidates for interactive tasks in medical and industrial applications due to their unique shape, compliance, and miniaturization capability …”
    Full text
    Journal Article
  17. Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items by Downs, Laura; Francis, Anthony; Koenig, Nate; Kinman, Brandon; Hickman, Ryan; Reymann, Krista; McHugh, Thomas B.; Vanhoucke, Vincent

    Published: IEEE 01.01.2022
    “… Interactive 3D simulations have enabled breakthroughs in robotics and computer vision, but simulating the broad diversity of environments needed for deep learning requires large corpora of photo …”
    Full text
    Conference Proceedings
  18. Benchmarking the Sim-to-Real Gap in Cloth Manipulation by Blanco-Mulero, David; Barbany, Oriol; Alcan, Gokhan; Colome, Adria; Torras, Carme; Kyrki, Ville

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.03.2024
    Published in IEEE Robotics and Automation Letters (01.03.2024)
    “… Realistic physics engines play a crucial role for learning to manipulate deformable objects such as garments in simulation …”
    Full text
    Journal Article
  19. HHI-Assist: A Dataset and Benchmark of Human-Human Interaction in Physical Assistance Scenario by Saadatnejad, Saeed; Hosseininejad, Reyhaneh; Barreiros, Jose; Tsui, Katherine M.; Alahi, Alexandre

    ISSN: 2377-3766
    Published: Piscataway IEEE 01.09.2025
    Published in IEEE Robotics and Automation Letters (01.09.2025)
    “… The increasing labor shortage and aging population underline the need for assistive robots to support human care recipients …”
    Full text
    Journal Article
  20. Action-Inclusive Multi-Future Prediction Using a Generative Model in Human-Related Scenes for Mobile Robots by Xu, Chenfei; Ahmad, Huthaifa; Okadome, Yuya; Ishiguro, Hiroshi; Nakamura, Yutaka

    ISSN: 2169-3536
    Published: Piscataway IEEE 2025
    Published in IEEE Access (2025)
    “… While traditional prediction-based approaches primarily estimate partial features for robot decision making, such as position and velocity, recent world models enable direct prediction of future sensory data …”
    Full text
    Journal Article