TerraSwarm participated in the DARPA Wait, What? technology forum that was held in St. Louis on September 9-11, 2015.

The Wait, What? demo illustrated TerraSwarm infrastructure innovations in robot swarms and connected sensors. The demonstration featured cooperating robots solving more than one problem and re-tasking on the fly to move from one problem to the next.

At the DARPA Wait, What? forum, the TerraSwarm team, drawn from five universities, integrated five best-in-class technologies to showcase the types of applications enabled by the TerraSwarm research center. The demo centered on a robot delivery service in which a team of Scarab robots (designed, built, and programmed at the University of Pennsylvania) delivered snacks to onlookers at the touch of a button. The user interface was a smartphone app running on unmodified smartphones. After an attendee made a selection, the system dispatched a robot carrying the snack directly to that attendee, even as the attendee walked around. While the robots' main goal was the delivery application, a context-aware machine learning application ran in the background and could interrupt the robots in response to an event, demonstrating how they could be repurposed in real time. Finally, one of the robots simultaneously performed a surveillance task by carrying a video camera. Its video stream was fed to a video summarizer, which extracted the most interesting and novel clips in real time. Instead of watching an hour-long video, examining what the robot had seen took only a minute-long summary of several interesting clips.

This composition of various technologies highlights what can be done when independent systems work together. Applications in smart manufacturing, warehouse management, in-the-field delivery, disaster relief, and monitoring and security can all benefit from these services interoperating.

The demo centered on our smartphone app, which presented visitors with the snack options the robots could deliver and also ran an indoor localization service known as ALPS (developed by a team at Carnegie Mellon University) in the background. ALPS localizes off-the-shelf smartphones using fixed anchor nodes, positioned around the demo space, that periodically transmit ultrasonic chirps. Although the chirps are above the range of human hearing, they are still audible to the microphone circuitry in phones; from the chirps' arrival times and the speed of sound, ALPS can calculate the location of the phone.
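
To make the localization idea concrete, the sketch below shows basic time-of-flight trilateration: chirp travel times become ranges via the speed of sound, and a least-squares solve recovers the phone's position. This is only an illustrative approximation of what ALPS does; the anchor layout, the timing values, and the assumption of synchronized, cleanly detected chirps are all hypothetical, and the real system must also cope with noise and multipath.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def localize(anchors, arrival_times, transmit_times):
    """Estimate a 2-D phone position from chirp times of flight.

    anchors:        (N, 2) known anchor positions in meters
    arrival_times:  (N,) times each chirp reached the phone, in seconds
    transmit_times: (N,) times each anchor emitted its chirp, in seconds
    """
    # Time of flight -> range from the phone to each anchor.
    d = SPEED_OF_SOUND * (np.asarray(arrival_times) - np.asarray(transmit_times))

    # Subtracting the first anchor's range equation from the others
    # linearizes ||p - a_i||^2 = d_i^2 into the linear system A p = b.
    a0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position  # least-squares (x, y) estimate

anchors = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
tx = np.zeros(4)  # assume synchronized, known transmit times
rx = np.array([0.0082, 0.0105, 0.0124, 0.0105])  # made-up, phone near (2, 2)
print(localize(anchors, rx, tx))
```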

Figure 1. Students using the app to request a delivery. (source: Pat Pannuto)

The attendee's snack request and current location are fed to the application coordinator, an accessors-based application running on the Ptolemy software platform (both developed by a team at UC Berkeley). Ptolemy is a visual, actor-oriented programming environment in which applications are composed by connecting the inputs and outputs of system blocks. Accessors enable those blocks to represent external, real-world devices, such as the robots or the phones running ALPS. Ptolemy provides a central point to both describe and implement the application, as well as to manage the data flowing through the system.
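
The following toy sketch mimics that actor-oriented composition style in plain Python. It is not the actual Ptolemy or Accessors API (real accessors are written in JavaScript and hosted by Ptolemy); the `SnackApp` and `RobotAccessor` names are invented purely to illustrate how outputs are wired to inputs.

```python
class Block:
    """A block exposes named output ports that downstream blocks subscribe to."""
    def __init__(self):
        self._listeners = {}

    def connect(self, port, handler):
        self._listeners.setdefault(port, []).append(handler)

    def emit(self, port, value):
        for handler in self._listeners.get(port, []):
            handler(value)

class SnackApp(Block):
    def request(self, attendee, snack, location):
        # Pair the snack order with the attendee's latest ALPS location fix.
        self.emit("order", {"attendee": attendee, "snack": snack,
                            "waypoint": location})

class RobotAccessor(Block):
    """Stands in for an accessor that proxies a physical Scarab robot."""
    def __init__(self, name):
        super().__init__()
        self.name = name

    def deliver(self, order):
        print(f"{self.name}: navigating to {order['waypoint']} "
              f"with {order['snack']}")

# Compose the application by wiring outputs to inputs.
app, robot = SnackApp(), RobotAccessor("scarab-1")
app.connect("order", robot.deliver)
app.request("attendee-42", "candy", (2.0, 2.5))
```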

Figure 2. A Scarab robot carrying candy and the camera used for video summarization. (source: Ilge Akkaya)

The robots operate semi-autonomously, using the ALPS position updates as waypoints for their deliveries. With a known map of the space, each robot receives its next waypoint, calculates a path to that goal, and navigates the path independently. If an obstacle appears, whether another robot, a person walking by, or a change in the environment, the robot detects the change and navigates around the obstruction.
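
As a rough illustration of this plan-and-replan behavior, here is a minimal grid-based A* planner. The Scarabs' actual navigation stack is considerably more capable (continuous space, real sensing, moving obstacles); the occupancy grid and coordinates below are hypothetical.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no path: wait, or request a new waypoint

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # e.g. a person standing in the aisle
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
# If a new obstacle is detected mid-route, mark its cell occupied and
# call astar() again from the robot's current position.
```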

Figure 3. The map showing the robots' directions of travel. (source: Ilge Akkaya)

Figure 4. Scarab robots in motion, making deliveries. (source: Ilge Akkaya)

While the robots make deliveries, a microphone passively listens in the background, feeding a machine learning model trained to detect applause (set up by the UC Berkeley team). The relevant features of the audio stream are extracted locally and then processed by the GMTK machine learning toolkit (developed at the University of Washington) to determine whether there is applause in the space. When applause is detected, the robots interrupt their deliveries, spin in place, and then continue serving snacks. This reconfiguration stands in for a more serious response, such as the robots re-tasking themselves when a disaster is detected.
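
The snippet below is a deliberately crude stand-in for that pipeline: it extracts two simple features per audio window (energy and spectral flatness) and applies hand-set thresholds. The demo's real detector fed learned features to a trained GMTK model, whose API is not reproduced here; the thresholds and the applause heuristic are assumptions for illustration only.

```python
import numpy as np

def applause_features(frame, eps=1e-10):
    """frame: 1-D array of audio samples for one analysis window."""
    energy = float(np.mean(frame**2))
    mag = np.abs(np.fft.rfft(frame)) + eps
    # Spectral flatness: geometric mean over arithmetic mean of the spectrum;
    # close to 1 for noise-like sounds such as many hands clapping.
    flatness = float(np.exp(np.mean(np.log(mag))) / np.mean(mag))
    return energy, flatness

def detect_applause(frames, energy_thresh=0.01, flatness_thresh=0.5):
    # Hypothetical thresholds; the demo learned a model rather than hand-tuning.
    feats = [applause_features(f) for f in frames]
    loud_and_noisy = sum(e > energy_thresh and s > flatness_thresh
                         for e, s in feats)
    return loud_and_noisy > len(frames) // 2  # majority of recent windows

rng = np.random.default_rng(0)
frames = [0.3 * rng.standard_normal(1024) for _ in range(10)]  # noise-like audio
print(detect_applause(frames))  # True -> interrupt deliveries, spin in place
```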

The robot filming its environment streams the video feed to a video summarization tool (developed by a team at the University of Washington). The summarizer analyzes the stream in real time for the interesting and novel clips that best describe the video as a whole, and aggregates those clips into a summary. The information in the video is preserved, but the length is significantly reduced, making an otherwise intractable problem, monitoring the video feeds from a swarm of robots, feasible.
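
A toy version of the clip-selection idea appears below: each incoming frame is scored by how much it differs from frames already kept, and only sufficiently novel frames survive. The actual summarizer uses far richer features and selection criteria; the novelty threshold, clip budget, and frame sizes here are invented.

```python
import numpy as np

def summarize(frames, budget=5, novelty_thresh=10.0):
    """frames: list of 2-D grayscale arrays; returns indices of kept frames."""
    kept = [0]  # always keep the opening frame
    for i in range(1, len(frames)):
        # Novelty = smallest mean absolute pixel difference to any kept frame.
        novelty = min(float(np.mean(np.abs(frames[i] - frames[k])))
                      for k in kept)
        if novelty > novelty_thresh:  # threshold on 0-255 pixel values
            kept.append(i)
        if len(kept) == budget:       # stay within the short-summary budget
            break
    return kept

rng = np.random.default_rng(1)
stream = [rng.integers(0, 256, (48, 64)).astype(float) for _ in range(100)]
print(summarize(stream))  # indices of the frames chosen for the summary
```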

These systems represent the cutting-edge technologies and research under way inside the TerraSwarm research center. To demonstrate how they can be connected to create otherwise impossible applications, the team at the University of Michigan focused on the systems engineering: architecting the application, bringing the components together, defining the interfaces between them, and ensuring the demo worked as expected. By successfully incorporating state-of-the-art research projects from multiple universities, the Wait, What? demo showcased the true potential of multi-institution research collaborations like TerraSwarm.

In addition, Professor Alberto Sangiovanni-Vincentelli gave the following presentation:
"Design Technology for the Trillion-Device Future," September 10, 2015.

Rebecca Boyle, "Wait, What? The Most Amazing Ideas From DARPA's Tech Conference," September 15, 2015, popsci.com.
