2019 IEEE International Conference on Visual Communications and Image Processing (VCIP)

December 1-4, 2019 • Sydney, AUSTRALIA

Tutorial - Autonomous Underground Exploration in the context of the DARPA SubT challenge

Historically, DARPA robotics challenges have served as catalysts, spawning new technologies and pushing the boundaries of innovation; the DARPA Grand Challenge and the Urban Challenge, for example, initiated the self-driving car industry. The current SubT Challenge aims to accelerate the technology development required for exploration of large-scale GPS-denied, comms-degraded subterranean environments. Access to these environments remains difficult, despite their relevance across a range of industries and applications. Such environments can vary drastically across subdomains such as tunnel systems, urban and municipal underground infrastructure, and natural cave networks. Furthermore, in time-sensitive scenarios such as disaster response, first responders face a range of heightened technical challenges, including difficult and dynamic terrain, degraded environmental conditions, severe communication constraints, and expansive areas of operation. These environments often pose too great a risk to deploy personnel. As such, robotics offers a compelling answer to this broad set of challenges, but time-sensitive missions will require systems-level approaches built around teams of cooperating platforms and advances across a range of technologies, including autonomy, perception, networking, and mobility. This tutorial will address how vision-based technologies are being used to tackle the autonomy and perception challenges.

Since the current DARPA SubT challenge requires multiple vision-based technologies to operate in harsh real-world environments, our tutorial should provide useful insights for both researchers and engineers interested in deploying their solutions in real-world situations.

Content

  • Lecture 1 - Overview of the DARPA SubT challenge and the motivation for vision-based solutions (by Nicolas Hudson)
    The DARPA SubT challenge is set up to push the boundaries of innovation and enable large-scale multi-agent exploration of unknown GPS-denied, comms-degraded environments. This lecture will set the scene and motivate the need for the vision-based technologies required to solve this challenge.
  • Lecture 2 - Deep learning based artefact detection (by Lars Petersson)
    The performance of each team is assessed on detecting and reporting the locations of a number of predefined artefacts that would be encountered in a typical search and rescue operation, including backpacks, power tools, fire extinguishers, and mannequins with heat signatures. This lecture will explain how vision is used to detect these artefacts with a trained Convolutional Neural Network (CNN) based approach; a minimal detection sketch appears after this list.
  • Lecture 3 - Vision-based Point Cloud colourisation (by Mark Cox)
    Using cameras to colourise lidar point clouds allows a more realistic representation of the environment; a minimal projection sketch appears after this list.
  • Lecture 4 - Multimodal perception (by Peyman Moghadam)
    Combining modalities (vision + hyperspectral) provides more robust detection of artefacts than either modality alone.
  • Lecture 5 - Vision-based place recognition for SLAM (by Paulo Borges)
    Use of vision to recognise previously visited places, enabling loop closure in SLAM; a minimal matching sketch appears after this list.
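
Below is a minimal sketch of the CNN-based artefact detection discussed in Lecture 2; it is illustrative rather than the presenters' actual pipeline. It assumes a COCO-pretrained Faster R-CNN from torchvision, and the artefact class list (and the fine-tuning that would make the detector output it) is a labelled assumption.

    # Minimal sketch of CNN-based artefact detection (illustrative only).
    import torch
    import torchvision
    from torchvision.transforms import functional as F
    from PIL import Image

    # Hypothetical artefact label set; the real set is defined by DARPA.
    ARTEFACT_CLASSES = ["background", "backpack", "power_tool",
                        "fire_extinguisher", "mannequin"]

    # COCO-pretrained detector; a deployed system would fine-tune the
    # classification head on labelled artefact images so that predicted
    # labels index ARTEFACT_CLASSES rather than COCO categories.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    def detect_artefacts(image_path, score_threshold=0.7):
        """Return (label_id, score, [x1, y1, x2, y2]) for confident detections."""
        image = Image.open(image_path).convert("RGB")
        tensor = F.to_tensor(image)          # HWC uint8 -> CHW float in [0, 1]
        with torch.no_grad():
            output = model([tensor])[0]      # dict with boxes, labels, scores
        return [(int(l), float(s), b.tolist())
                for b, l, s in zip(output["boxes"], output["labels"], output["scores"])
                if s >= score_threshold]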
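
For Lecture 3, the following sketches the underlying geometry: lidar points are projected into a camera image through a pinhole model and take the colour of the pixel they land on. It assumes the points have already been transformed into the camera frame using the lidar-camera extrinsics, and it ignores occlusion (points hidden behind surfaces can pick up incorrect colours).

    # Minimal sketch of vision-based point cloud colourisation.
    import numpy as np

    def colourise(points_cam, image, K):
        """points_cam: (N, 3) xyz in the camera frame (z forward, metres).
        image: (H, W, 3) uint8 RGB frame. K: (3, 3) camera intrinsic matrix.
        Returns (M, 6) rows of [x, y, z, r, g, b] for points seen by the camera."""
        pts = points_cam[points_cam[:, 2] > 0.0]  # keep points in front of camera

        # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
        uvw = (K @ pts.T).T
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]

        # Keep projections that land inside the image bounds.
        h, w = image.shape[:2]
        cols = np.round(u).astype(int)
        rows = np.round(v).astype(int)
        valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)

        rgb = image[rows[valid], cols[valid]].astype(np.float64)
        return np.hstack([pts[valid], rgb])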
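
For Lecture 5, a minimal sketch of descriptor-based place recognition, assuming each image has already been reduced to a global descriptor (for example a bag-of-words or pooled CNN feature vector; both are assumptions here): the query descriptor is compared against the map database by cosine similarity, and a sufficiently strong match is proposed as a loop-closure candidate.

    # Minimal sketch of descriptor-based visual place recognition.
    import numpy as np

    def best_match(query_desc, database_descs, min_similarity=0.85):
        """query_desc: (D,) descriptor of the current image.
        database_descs: (N, D) descriptors of previously visited places.
        Returns the index of the best-matching place, or None if too weak."""
        q = query_desc / np.linalg.norm(query_desc)
        db = database_descs / np.linalg.norm(database_descs, axis=1, keepdims=True)
        sims = db @ q                     # cosine similarity to every place
        best = int(np.argmax(sims))
        return best if sims[best] >= min_similarity else None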

Organizers & Presenters

Nicolas Hudson

Senior Principal Research Scientist

CSIRO

Nicolas.hudson@csiro.au

Nicolas is a Senior Principal Research Scientist and the Technical Leader for the Robotics and Autonomous Systems Group at CSIRO Data61. Before joining CSIRO, Nicolas led the [Google] X Robotics perception team, with a focus on applying machine learning to mobile manipulators. During his time at Google, he also worked for Boston Dynamics on whole-body humanoid manipulation. Prior to Google, Nicolas was at NASA’s Jet Propulsion Laboratory (JPL), where he led or contributed to several US Department of Defense projects in mobile manipulation, including JPL’s winning DARPA ARM team, the DARPA Robotics Challenge, and technology development tasks for Mars Sample Return. This work culminated in Nicolas being awarded NASA’s Early Career Achievement Medal for contributions to robotic manipulation autonomy.

Lars Petersson

Principal Research Scientist

CSIRO

Lars.petersson@csiro.au

Lars Petersson is a Principal Research Scientist within the Smart Vision Systems Group, Data61, CSIRO, Australia, where he leads a team specialising in resource-constrained computer vision. Previously, he was a Principal Researcher and Research Leader in NICTA’s computer vision research group, where from 2003 until 2016 he led projects such as Smart Cars, AutoMap, and Distributed Large Scale Vision. Before joining NICTA, he spent one year as a postdoctoral researcher at the Australian National University working with Dr Alexander Zelinsky. He received his PhD in March 2002 from KTH, Stockholm, Sweden, where he also received his Master’s degree in Engineering Physics.

Paulo Borges

Principal Research Scientist

CSIRO

Paulo.borges@csiro.au

Paulo is a Principal Research Scientist, Project Manager, and Leader of the Robotics Perception Team in the Robotics and Autonomous Systems Group at CSIRO Data61. His current research focuses on sensor fusion, visual-lidar robot tracking and localisation, and autonomous vehicles. The topic of his Ph.D. (Queen Mary, University of London, 2007) was digital image/video processing, with a strong focus on statistical signal processing methods. Paulo is also interested in general field robotics. He has been part of the CSIRO team since 2009, during which period he also held a visiting scientist appointment at ETH Zurich, Switzerland, in 2012-13.

Peyman Moghadam

Senior Research Scientist

CSIRO

Peyman.moghadam@csiro.au

Peyman is a Technical Entrepreneur, Scientist, and Project Leader in the Robotics and Autonomous Systems Group at CSIRO Data61. Before joining CSIRO, he worked at a number of leading organizations, including Deutsche Telekom Laboratories (Germany) and the Singapore-MIT Alliance for Research and Technology (Singapore). Dr. Moghadam is also an Adjunct Associate Professor at the Queensland University of Technology (QUT) and an Adjunct Fellow at the University of Queensland (UQ), Australia. In his recent role as the AgTech Cluster Leader in the Robotics and Autonomous Systems Group at CSIRO Data61, he leads the transition of innovative technologies onto farms. He led CSIRO’s HeatWave product, a handheld technology for 3D thermal imaging that won the 2014 Australian National iAward for Research and Development. His current research interests include 3D multi-modal perception (3D++), robotics, computer vision, machine learning, and 3D thermal/hyperspectral imaging.

Mark Cox

Senior Experimental Scientist

CSIRO

Mark.cox@csiro.au

Mark Cox is a Senior Experimental Scientist in the Robotics and Autonomous Systems Group at CSIRO Data61. His interests in computer vision and machine learning have allowed him to work on a wide range of projects spanning non-rigid face tracking, unsupervised registration of images, and wearable technologies.

Contact

Navinda Kottege (navinda.kottege@csiro.au)

http://www.vcip2019.org