Elijah and Gabriel: Videos and Press
Below are YouTube videos of demos, along with brief context and an
explanation of their significance. Each demo illustrates a different Gabriel
application or cloudlet-based concept. Also included
are mentions of Gabriel and Elijah in the press over time.
Everything is listed in reverse chronological order, from most recent to oldest.
- OpenTPOD: Create DNN object detectors without any programming (February 2020)
This video shows an open-source tool that we have built to simplify
the creation of deep neural network object detectors.
You simply use your smartphone to capture a
short video of the target object, and then draw bounding
boxes in a few frames to create a training data set.
TensorFlow then trains the object detector from this training
set. Created by Junjue Wang as part of his PhD thesis.
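For a concrete sense of what the tool produces, here is a small
hypothetical sketch: a few extracted frames with user-drawn bounding
boxes, flattened into a CSV that a TensorFlow pipeline could ingest.
The class names and CSV layout are illustrative assumptions, not
OpenTPOD's actual schema.
```python
# Hypothetical sketch of the kind of training set OpenTPOD assembles:
# a few extracted video frames, each with user-drawn bounding boxes,
# flattened into a CSV that a TensorFlow training pipeline could
# ingest. Field names and layout are illustrative, not the real schema.
import csv
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str   # object class, e.g. "mug"
    xmin: int
    ymin: int
    xmax: int
    ymax: int

@dataclass
class LabeledFrame:
    frame_path: str        # frame extracted from the smartphone video
    boxes: list            # user-drawn BoundingBox instances

def write_training_csv(frames, out_path):
    # One CSV row per bounding box, referencing its source frame.
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "label", "xmin", "ymin", "xmax", "ymax"])
        for frame in frames:
            for b in frame.boxes:
                writer.writerow([frame.frame_path, b.label,
                                 b.xmin, b.ymin, b.xmax, b.ymax])

frames = [LabeledFrame("frame_0001.jpg",
                       [BoundingBox("mug", 40, 60, 200, 220)])]
write_training_csv(frames, "train.csv")
```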
- OpenWorkFlow: state machine editor for Gabriel applications (wearable cognitive assistance) (February 2020)
A Gabriel application is driven by a state machine in which transitions
are triggered by sensed inputs (e.g., object detection).
OpenWorkFlow is an open source tool that enables the creation and
editing of these state machines. It is designed to work
with the object detectors created by OpenTPOD (see video above).
Created by Junjue Wang as part of his PhD thesis.
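As a rough illustration of this model, the sketch below expresses a
task as a table of (state, detected object) transitions with per-state
guidance. The class and the two-step task are invented for
illustration; they are not OpenWorkFlow's actual code or file format.
```python
# A minimal sketch of the model described above (invented task and
# class names, not OpenWorkFlow's actual code or file format): a task
# is a table of (state, detected object) -> next-state transitions,
# with a guidance message attached to each state.
class TaskStateMachine:
    def __init__(self, transitions, start, instructions):
        self.transitions = transitions    # (state, object) -> next state
        self.instructions = instructions  # state -> guidance for the user
        self.state = start

    def on_detection(self, detected_object):
        # Advance only if the sensed input matches a defined transition.
        next_state = self.transitions.get((self.state, detected_object))
        if next_state is not None:
            self.state = next_state
        return self.instructions[self.state]

# Hypothetical two-step assembly task.
sm = TaskStateMachine(
    transitions={("start", "base_plate"): "base_done",
                 ("base_done", "top_cover"): "finished"},
    start="start",
    instructions={"start": "Place the base plate on the table.",
                  "base_done": "Now attach the top cover.",
                  "finished": "All done!"})
print(sm.on_detection("base_plate"))   # -> "Now attach the top cover."
```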
- Podcast Interview of Satya on Edge Computing (December 2019)
- Carnegie Mellon Researchers Tap Edge Computing to Resolve Real-World Challenges (April 2019)
A nice article from StateTech describing LiveMap and its use of edge computing.
- LiveMap: crowd-sourced knowledge of road conditions without driver distraction (December 2018)
Waze is an example of a widely used system that uses crowd-sourced
human reporting to share knowledge about road conditions. In this
video, we present a demo of LiveMap on the streets of
Pittsburgh. LiveMap is a research system built at
Carnegie Mellon University that is conceptually similar to Waze, but
avoids the driver distraction that is inherent in human reporting from
a single-occupant vehicle. LiveMap uses edge computing to perform
video analytics close to the point of data capture. Computer
vision algorithms running on an in-vehicle computer (called a
"vehicular cloudlet") continuously analyze video streams from one or
more cameras mounted on that vehicle. Observations
from these algorithms are transmitted over a 4G LTE wireless
network to a central collection point (called a "zone cloudlet"),
where the reports from many vehicles are synthesized and
disseminated. Further details of LiveMap can be found in the
following paper:
"Towards a Distraction-free Waze"
Kevin Christensen, Christoph Mertz, Padmanabhan Pillai, Martial Hebert, Mahadev Satyanarayanan
Proceedings of HotMobile 2019, Santa Cruz, CA, February 2019
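The sketch below illustrates this data flow under an assumed
JSON-over-HTTP report format; the message fields and endpoint are
placeholders, not LiveMap's actual protocol.
```python
# A rough sketch of the data flow described above: the vehicular
# cloudlet turns detections into compact observation reports and ships
# them to the zone cloudlet; raw video never leaves the vehicle. The
# JSON fields and HTTP endpoint are assumptions, not LiveMap's protocol.
import json
import time
import urllib.request

def make_report(vehicle_id, lat, lon, hazard, confidence):
    # One crowd-sourced observation, e.g. a pothole seen by the camera.
    return {"vehicle": vehicle_id, "lat": lat, "lon": lon,
            "hazard": hazard, "confidence": confidence,
            "timestamp": time.time()}

def send_report(report, zone_cloudlet_url):
    # POST the small JSON report over the cellular link.
    data = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(zone_cloudlet_url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

report = make_report("cmu-van-01", 40.4433, -79.9436, "pothole", 0.87)
# send_report(report, "http://zone-cloudlet.example.org/reports")
```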
- OpenRTiST Paints Real World in Real Time (November 2018)
Carnegie Mellon University's OpenRTiST allows users to see the world around
them in real time "through the eyes of an artist." The video feed from
the camera of a mobile device is transmitted to a cloudlet, transformed
there by a deep neural network trained offline to learn the artistic
features of a famous painting, and returned to the user's device as a
video feed. The entire round trip is fast enough to preserve the
illusion that the artist is continuously repainting the user's world as
displayed on the device. Created by Shilpa George (Ph.D. student) and
Tom Eiszler (senior project scientist).
For more details see: https://www.cmu.edu/news/stories/archives/2018/november/edge-computing-partnership.html
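A minimal sketch of this round trip appears below; the function names
and the latency budget are placeholders, not the real OpenRTiST client.
```python
# A minimal sketch of the round trip described above. Everything here
# is a placeholder: stylize_on_cloudlet() stands in for the network hop
# plus the style-transfer DNN, and the 50 ms budget is an assumed
# figure, not a measured OpenRTiST number.
import time

def stylize_on_cloudlet(frame_bytes):
    # Placeholder: ship the frame to the cloudlet, run the DNN trained
    # offline on a famous painting, and return the stylized frame.
    return frame_bytes

def run(capture_frame, display_frame, num_frames=100, budget_ms=50):
    # The "live repainting" illusion holds only if each round trip
    # stays within a tight per-frame latency budget.
    for _ in range(num_frames):
        frame = capture_frame()
        start = time.monotonic()
        stylized = stylize_on_cloudlet(frame)
        rtt_ms = (time.monotonic() - start) * 1000
        if rtt_ms > budget_ms:
            print(f"frame took {rtt_ms:.1f} ms, over budget")
        display_frame(stylized)
```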
- Edge Computing for Real-Time Vision-based Drone Registration on a Revit Model (July 2018)
This video shows the view from a drone that is flying outside the
Tepper Quad at Carnegie Mellon University and continuously streaming
video to a cloudlet on the ground. SIFT-based pattern
matching on the cloudlet identifies what part of the building is
currently in view. A bounding box of this view is
superimposed on a 3D model of the building that has been created from
its Revit engineering drawing. As the drone moves, the bounding
box continuously tracks its motion. This proof of concept can be
viewed as a form of "inverted augmented reality" in which the physical
world (seen by the drone) is brought into the virtual world
(represented by the 3D model).
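For the curious, here is a sketch of the SIFT-based matching step using
OpenCV (4.4 or later); the file names are placeholders, and the
deployed pipeline may differ in detail.
```python
# A sketch of the registration step with OpenCV (version 4.4 or later
# for SIFT). File names are placeholders; the actual pipeline on the
# cloudlet may differ in its matching and model-projection details.
import cv2
import numpy as np

drone_frame = cv2.imread("drone_frame.jpg", cv2.IMREAD_GRAYSCALE)
model_view = cv2.imread("model_render.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(drone_frame, None)
kp2, des2 = sift.detectAndCompute(model_view, None)

# Ratio-test matching, then a RANSAC homography from the drone frame
# to the rendered view of the Revit-derived 3D model.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Project the frame's corners onto the model view: this is the
    # bounding box that tracks the drone's motion in the demo.
    h, w = drone_frame.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    box = cv2.perspectiveTransform(corners, H)
```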
- Disk tray assembly (Computex 2018, Taiwan, June 5-9 2018)
In collaboration with the company inwinSTACK,
we created a Gabriel application for training a new worker in disk tray
assembly for a desktop computer. This demo was shown live at the
Computex 2018 show in Taiwan in June 2018. The application
was created by Junjue Wang of CMU, and demoed at Computex by inwinSTACK
employees. The small size of some of the components
(especially the pin) and the precise nature of the assembly were
difficult challenges to overcome in creating this application.
The wearable device used in this application is an ODG R-7.
- OpenDev blog entries and tweets about Elijah and Gabriel (September 2017)
At the OpenDev event on Edge Computing organized by the OpenStack
Foundation, Satya presented a keynote talk and video demos of edge
computing and wearable cognitive assistance. He also gave a
longer talk at the end of the first day on edge computing
infrastructure. Here are some blog entries and tweets that
referenced the presentations and demos, giving an idea of how the world
views this work.
- IKEA Stool Assembly: Wearable Cognitive Assistant (August 2017)
This Gabriel application was created by Mihir Bala, a
talented freshman CS student from the University of Michigan, as an NSF
Research Experience for Undergraduates project under the mentorship of
Zhuo Chen. In addition to being another example of a
Gabriel application, it offers the first evidence that creating such an
application does not require a PhD-level person. Our eventual
goal is to make the creation of such applications much easier than it
is today. We still have a long way to go!
- RibLoc System for Surgical Repair of Ribs: Wearable Cognitive Assistant (January 2017)
Unsolicited by us, this video was made by the startup company
VIZR Tech (http://vizrtech.com) to illustrate the potential of wearable
cognitive assistance in medical training. The video provides
background to explain relevant concepts to the company's target
audience. The company already uses Google Glass (with the camera
blocked) in medical training, to show videos of complex medical
procedures to trainees. We created this new Gabriel application
to give a tutorial on the RibLoc system for surgical repair of ribs,
which is made by AcuteInnovations, Inc.
(http://acuteinnovations.com). Today, this training is
given to a doctor by an AcuteInnovations technician traveling to the
doctor's site. The Gabriel application illustrates
how this training could be delivered more efficiently. In
addition, the application is available to the doctor to refresh
training at any time. The principals of VIZR Tech appear in
this video and share their thoughts about why this is a game-changing
innovation. From a technical point of view, the computer vision
in this application is particularly difficult because the parts are
small, differ in subtle ways (e.g., the color of a screw), and are
easily confused under different lighting conditions. The object detectors
are all implemented using deep neural networks.
- IKEA Table Lamp Kit: Wearable Cognitive Assistant (January 2017)
In our talks on Gabriel, we have often mentioned assembly of
IKEA kits as an example of how step-by-step guidance and prompt
detection of errors could be valuable. This video shows a
Gabriel application to assemble a genuine IKEA kit (a table lamp)
purchased off the shelf at IKEA. An interesting first is
the use of short video segments (rather than still images) in the
Google Glass display to guide the user. The use of videos in this way,
combined with the active, context-sensitive real-time guidance from the
Gabriel application, is (as far as we know) a first.
- Making a Sandwich: Google Glass and Microsoft HoloLens Versions of a Wearable Cognitive Assistant (January 2017)
This demo shows two things. First, it shows how
Gabriel can use much more sophisticated computer vision (based on
convolutional neural networks) than the simpler algorithms used in
demos such as Lego and Ping-Pong.
Second, it shows how different kinds of wearable devices (Google Glass
and Microsoft HoloLens) can be used for the same application using the
same Gabriel back-end.
- 7 course projects (many based on cloudlets and Gabriel) (December 6, 2016)
The Fall 2016 offering of the 15-821/18-843 "Mobile and
Pervasive Computing" course included many 3-person student projects
based on cloudlets and wearable cognitive assistance. Examples include
wearable cognitive assistance for use of an AED device, a
cloudlet-based privacy mediator for audio data, etc. This
web page contains brief descriptions of the projects, and videos of the
student projects captured on the final day of class. The PDFs of
the posters used by the students to explain their projects are also
included.
- RTFace: Denaturing Live Video on Cloudlets (November 2016)
This demo shows how cloudlets can improve the scalability of
video analytics and how they can be used to enforce privacy policies
based on face recognition. The demo also illustrates the use of
the OpenFace face recognition system that we have created.
RTFace combines OpenFace with face tracking across frames to achieve
the necessary frame rate for live video.
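A simplified sketch of this recognize-then-track pattern is shown
below; the helper functions are stubs standing in for OpenFace and a
real tracker, and the recognition interval and privacy policy are
invented.
```python
# A simplified sketch of the recognize-then-track pattern described
# above. The recognize/track/blur helpers are stubs standing in for
# OpenFace and a real tracker; the interval and policy are invented.
RECOGNIZE_EVERY = 10      # full recognition only every N frames
ALLOWED = {"satya"}       # privacy policy: identities that may appear

def denature_stream(frames, recognize, track, blur):
    faces = []            # list of (identity, bounding_box) pairs
    for i, frame in enumerate(frames):
        if i % RECOGNIZE_EVERY == 0:
            faces = recognize(frame)      # slow but accurate
        else:
            faces = track(frame, faces)   # fast, sustains frame rate
        for identity, box in faces:
            if identity not in ALLOWED:
                frame = blur(frame, box)  # denature disallowed faces
        yield frame
```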
- Gabriel on CBS 60 Minutes (October 9, 2016)
Wearable cognitive assistance can be viewed as
"Augmented Reality Meets Artificial Intelligence". This
90-second excerpt from the October 9, 2016 CBS 60 Minutes special
edition on Artificial Intelligence highlights the table-tennis wearable
cognitive assistant on Google Glass.
- FaceSwap: Cloud versus Cloudlet Comparison of User Experience (June 2016)
This demo shows the difference between using a cloud and a
cloudlet for an application where the impact of latency is easily
perceivable by users. We have created an Android application
called "FaceSwap" that is available in the Google Play
Store. A back-end VM image for an Amazon cloud site is also
available. The VM image can also be run on a cloudlet.
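One crude way to see the difference yourself is to probe the round-trip
time to each back end, as in the sketch below; the host names, port,
and echo protocol are assumptions for illustration, not part of
FaceSwap.
```python
# A crude way to observe the gap yourself: probe the round-trip time to
# each back end. The host names and port are placeholders, and this
# TCP echo probe is an assumption for illustration, not part of FaceSwap.
import socket
import time

def round_trip_ms(host, port, payload=b"ping", timeout=2.0):
    # TCP connect plus one send/receive as a rough latency estimate.
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        s.recv(64)
    return (time.monotonic() - start) * 1000

# for name, host in [("cloud", "ec2.example.com"),
#                    ("cloudlet", "cloudlet.local")]:
#     print(name, round_trip_ms(host, 9098), "ms")
```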
- TPOD System for Creating Deep Neural Net Object Detectors for Cloudlets (May 2016) [OBSOLETE: see OpenTPOD video of February 2020]
Creating object detectors for wearable cognitive assistance
is difficult. TPOD is a web-based system that we have created to
simplify the creation of training data sets for object detectors based
on deep convolutional neural networks. This demo shows an
early version of TPOD.
- National Public Radio (WESA) segment on Gabriel (February 9, 2016)
This short (4-minute) NPR radio piece and associated web page on wearable cognitive assistance was broadcast in Spring 2016.
- Drawing Assistant with Google Glass (December 2015)
Can a legacy application for training be modified to use
Gabriel? This demo shows how a Drawing Assistant created by
researchers at INRIA in France has been modified to use a wearable
device (Google Glass). In its original form, a user would receive
instruction to improve his drawing skills on a desktop display, and
provide input using a pen-based tablet. This demo shows how the
system has been modified to retain the application logic for
instruction, but use any writable surface (e.g., paper, whiteboard,
etc.) for input. Computer vision on the video stream from Google
Glass is used to generate input and display streams that are
indistinguishable from the original.
- PingPong Assistant with Google Glass (December 2015)
This conceptually simple demo has proved to be especially
popular because it brings out the importance of low
latency. A person wearing Google Glass plays ping-pong with
a human opponent. The video stream from the Glass
device is streamed to a cloudlet and analyzed frame by frame to detect
the ball and the opponent, compare their positions with those in the
previous frame, and then infer their trajectories. Based on this,
the application guides the user to hit to the left or right in order to
increase the chances of beating the opponent. To avoid annoying the
user, the application gives advice sparingly, and only when it is
confident of that advice.
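The guidance logic might look roughly like the sketch below; the
confidence threshold and rate limit are invented values, not the demo's
actual parameters.
```python
# A toy sketch of the guidance logic described above; the thresholds
# and rate limit are invented values, not the demo's actual parameters.
import time

class PingPongAdvisor:
    MIN_GAP_S = 3.0          # don't prompt the user too frequently
    MIN_CONFIDENCE = 0.8     # only speak when detection is trustworthy

    def __init__(self):
        self.last_advice_time = 0.0
        self.prev_opponent_x = None

    def update(self, opponent_x, confidence):
        # opponent_x: normalized 0..1 horizontal position per frame.
        advice = None
        now = time.monotonic()
        if (self.prev_opponent_x is not None
                and confidence >= self.MIN_CONFIDENCE
                and now - self.last_advice_time >= self.MIN_GAP_S):
            # Aim away from where the opponent is heading.
            moving_right = opponent_x > self.prev_opponent_x
            advice = "hit left" if moving_right else "hit right"
            self.last_advice_time = now
        self.prev_opponent_x = opponent_x
        return advice
```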
- "New AI Platform 'Gabriel' Will Whisper Instructions Into Your Ear" (December 3, 2015)
Article in Tech Times.
- 10 course projects (many based on cloudlets and Gabriel) (December 1, 2015)
The Fall 2015 offering of the 15-821/18-843 "Mobile and Pervasive
Computing" course included many 2-person student projects based on
cloudlets and wearable cognitive assistance. Examples include wearable
cognitive assistance for gym exercises, using cloudlets for Google
Street View hyper-lapse viewing, real-time cloudlet-based
super-resolution imaging, etc. This web page contains brief
descriptions of the projects, and videos of the student projects
captured on demo day. The PDFs of the posters used by
the students to explain their projects are also included.
- Article in HNGN (December 1, 2015)
- "‘Gabriel’ Is A New Artificial Intelligence Named After The Messenger Angel" (December 1, 2015)
Article in Popular Science
- "New AI 'Gabriel' wants to whisper instructions in your ear" (December 1, 2015)
Article in Engadget.
- Task Assistance Demo with Lego Assembly on Google Glass (September 2015)
This is the world's very first wearable cognitive assistance
application! We chose a deliberately simplified task (assembling
2D Lego) since it was our first attempt. The demo seems easy, but the
code to implement it reliably was challenging (especially with flexible
user actions and under different lighting conditions).
- Impact of high offload latency on mobile user experience (June 2012)
These YouTube videos show the effect of end-to-end latency on an Android
front-end application with a compute-intensive back-end that is
offloaded to an Amazon EC2 cloud or a cloudlet.

Last updated by Satya (May 4, 2020)