2019 IEEE International Conference on Visual Communications and Image Processing (VCIP)

December 1-4, 2019 • Sydney, AUSTRALIA

Tutorial - Autonomous Underground Exploration
   in the context of the DARPA SubT challenge

Time:        10:30-14:30 Sunday, 1 December
Location:  Thomas Room, Aerial UTS Function Centre

Historically, DARPA robotics challenges have served as a watershed for spawning new technologies and pushing the boundaries of innovation, as evidenced by the DARPA Grand Challenge and the Urban Challenge initiating the self-driving car industry. The current SubT Challenge is aimed at accelerating the technology development required for exploration of large-scale, GPS-denied, comms-degraded subterranean environments. Access to these environments remains difficult, despite their relevance across a range of industries and applications. Such environments can vary drastically across subdomains such as tunnel systems, urban and municipal underground infrastructure, and natural cave networks. Furthermore, in time-sensitive scenarios such as disaster response, first responders face a range of increased technical challenges, including difficult and dynamic terrain, degraded environmental conditions, severe communication constraints, and expansive areas of operation. These environments often pose too great a risk to deploy personnel. As such, robotics offers a compelling answer to this broad set of challenges, but time-sensitive missions will require systems-level approaches built around teams of cooperating platforms and advances across a range of technologies, including autonomy, perception, networking, and mobility. The topics covered in this tutorial address how vision-based technologies are being used to tackle the autonomy and perception challenges.

Since the current DARPA SubT Challenge requires multiple vision-based technologies to operate in harsh real-world environments, this tutorial is expected to provide insights for both researchers and engineers interested in deploying their solutions in real-world situations.

Content

  • Lecture 1 - Overview of the DARPA SubT challenge and the motivation for vision-based solutions
    The DARPA SubT challenge is set up to push the boundaries of innovation and enable large-scale multi-agent exploration of unknown GPS-denied, comms-degraded environments. This lecture will set the scene and motivate the need for various vision-based technologies to solve this challenge.
  • Lecture 2 - Deep learning-based artefact detection
    The performance of each team is assessed on detecting and reporting back the locations of a number of predefined artefacts that would be encountered in a typical search and rescue operation. These artefacts include backpacks, power tools, fire extinguishers, mannequins with heat signatures, etc. This lecture will explain how vision is used to detect these artefacts with a trained Convolutional Neural Network (CNN) based approach (a minimal detection sketch follows the lecture list).
  • Lecture 3 - Vision-based point cloud colourisation
    Using cameras to colourise lidar point clouds allows a more realistic representation of the environment (a projection sketch follows the lecture list).
  • Lecture 4 - Multimodal perception
    Use of multimodal perception (vision + hyperspectral) provides more robust detection of artefacts.
  • Lecture 5 - Vision-based place recognition for SLAM
    Use of vision for place recognition in SLAM.
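
As a rough illustration of the artefact detection discussed in Lecture 2, the Python sketch below runs a pre-trained detection CNN over a single camera frame. It is a minimal stand-in, not the competition pipeline: the SubT artefact classes would require fine-tuning on labelled data, and the model choice (a COCO-pretrained Faster R-CNN from torchvision) and the 0.5 score threshold are illustrative assumptions.

    # Minimal sketch: artefact detection with a pre-trained detection CNN.
    # The model choice and 0.5 score threshold are illustrative assumptions,
    # not the tutorial's actual pipeline.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # COCO-pretrained Faster R-CNN as a stand-in; SubT artefacts (backpacks,
    # fire extinguishers, ...) would need fine-tuning on labelled data.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    def detect_artefacts(image_path, score_threshold=0.5):
        """Return (label_id, score, [x1, y1, x2, y2]) for confident detections."""
        image = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
        detections = []
        for label, score, box in zip(prediction["labels"],
                                     prediction["scores"],
                                     prediction["boxes"]):
            if float(score) >= score_threshold:
                detections.append((int(label), float(score), box.tolist()))
        return detections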

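The colourisation in Lecture 3 amounts to projecting each lidar point into a calibrated camera image and sampling the pixel colour at the projected location. The sketch below assumes a pinhole camera with known intrinsics K and a lidar-to-camera transform (R, t); it is a minimal illustration (no occlusion handling or lens distortion), not CSIRO's implementation.

    # Minimal sketch: colourising a lidar point cloud from a calibrated camera.
    # K, R and t are assumed known from calibration; occlusion and lens
    # distortion are ignored for brevity.
    import numpy as np

    def colourise_points(points_lidar, image, K, R, t):
        """Return [x, y, z, r, g, b] rows for lidar points visible in the image.

        points_lidar: (N, 3) points in the lidar frame.
        image:        (H, W, 3) RGB image.
        K:            (3, 3) camera intrinsic matrix.
        R, t:         (3, 3) rotation and (3,) translation, lidar -> camera.
        """
        # Transform points into the camera frame and keep those in front of it.
        points_cam = points_lidar @ R.T + t
        in_front = points_cam[:, 2] > 0.0
        points_cam, kept = points_cam[in_front], points_lidar[in_front]

        # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
        uvw = points_cam @ K.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)

        # Keep projections that land inside the image and sample their colour.
        h, w = image.shape[:2]
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colours = image[v[valid], u[valid]].astype(float)
        return np.hstack([kept[valid], colours])
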
Organizers & Presenters

Nicolas Hudson

Senior Principal Research Scientist

CSIRO

Nicolas.hudson@csiro.au

Nicolas is a Senior Principal Research Scientist and the Technical Leader for the Robotics and Autonomous Systems Group at CSIRO Data61. Before joining CSIRO, Nicolas led the [Google] X Robotics perception team, with a focus on applying machine learning to mobile manipulators. During his time at Google, he also worked for Boston Dynamics on whole-body humanoid manipulation. Prior to Google, Nicolas was at NASA’s Jet Propulsion Laboratory (JPL), where he led or contributed to several US Department of Defense projects in mobile manipulation, including JPL’s winning DARPA ARM team, the DARPA Robotics Challenge, and technology development tasks for Mars Sample Return. This work culminated in Nicolas being awarded NASA’s Early Career Achievement Medal for contributions to robotic manipulation autonomy.

Mark Cox

Senior Experimental Scientist

CSIRO

Mark.cox@csiro.au

Mark Cox is a Senior Experimental Scientist in the Robotics and Autonomous Systems Group at CSIRO Data61. His interests in computer vision and machine learning have allowed him to work on a wide range of projects spanning non-rigid face tracking, unsupervised registration of images, and wearable technologies.

Lars Petersson

Principal Research Scientist

CSIRO

Lars.petersson@csiro.au

Lars Petersson is a Principal Research Scientist within the Smart Vision Systems Group, Data61, CSIRO, Australia, where he leads a team specialising in resource-constrained computer vision. Previously, he was a Principal Researcher and Research Leader in NICTA’s computer vision research group where, from 2003 until 2016, he led projects such as Smart Cars, AutoMap, and Distributed Large Scale Vision. Before joining NICTA, he did one year of postdoctoral research at the Australian National University working with Dr Alexander Zelinsky. He received his PhD in March 2002 from KTH, Stockholm, Sweden, where he also received his Master’s degree in Engineering Physics.

Inkyu Sa

Research Scientist

CSIRO

inkyu.sa@csiro.au

Contact

Navinda Kottege (navinda.kottege@csiro.au)


http://www.vcip2019.org