curriculum vitæ

research statement

I am a computer scientist and roboticist who is passionate about making software and products that are easy to use and self-improving. With over ten years of concentrated academic research and real-world experience bringing consumer robots to market, I have become an expert in building and programming intelligent electromechanical systems. I thrive in environments that emphasize learning and collaboration, in which I can fully devote my skills to products and causes I care about.

education

Massachusetts Institute of Technology

September 2010 - August 2012
M.S., Media Arts and Sciences, Personal Robots Group @ MIT Media Lab

University of Texas at Austin

August 2005 - May 2010
M.S., Computer Science, concentration in Artificial Intelligence, minor in Cognitive Science
B.S., Turing Scholars Honors Computer Science

experience

r-bots, LLC

San Francisco, CA
July 2016 - Present

President, Independent Consultant
  • Rapid prototyping of hardware and software (Python, C, iOS, Android)

  • Clients include Project 100, Momentum Machines, Formlabs, FamBots

AltSchool, PBC

San Francisco, CA
October 2015 - July 2016

Lead Hardware Engineer
  • Created and maintained hardware devices to help educators and students in K-8 classrooms

  • Led the hardware team to build prototypes of cameras, microphones, smart tables, wearables

3D Robotics, Inc.

Berkeley, CA
September 2014 - October 2015

Roboticist
  • Scene Awareness Lead, designing and implementing computer vision tracking algorithms

  • Implemented a hardware-accelerated iOS video pipeline (H.264 over RTP) and an OTA firmware update system

  • Embedded systems integration, development of test software for use in manufacturing and production

Romotive, Inc.

San Francisco, CA
October 2012 - March 2014

Roboticist
  • Technical lead on machine learning and human-robot interaction

  • Designed and implemented a computer vision framework for iOS harnessing OpenCV and GPU filters

  • Led the design, implementation, and documentation of a robotics SDK for iOS developers

  • Implemented realtime facial detection (GPU-based Viola-Jones), facial recognition (local binary patterns), a persistent memory system, a scripted interaction environment, and audio pitch recognition and synthesis

Formlabs, Inc.

Cambridge, MA
June 2012 - September 2012

Software Engineer
  • Designed and implemented the UI and UX for a low-cost high-resolution 3D printer (the Form 1)

  • In-depth work on models of complex 3D geometries in C++ using OpenGL ES 2.0

MIT Media Lab

Cambridge, MA
September 2010 - August 2012

Research Assistant
  • Focus on cloud-based robot architectures, affective robotics, and applied machine learning

  • Built and programmed DragonBot, an expressive and inexpensive robot platform powered by an Android phone. DragonBot was used to secure a $10M NSF grant for socially assistive robots.

  • Helped build Playtime Computing, an interactive and immersive robotic play-space for children

  • Rapid design and prototyping of electromechanical systems

University of Texas at Austin

Austin, TX
August 2008 - July 2010

Graduate Research Assistant
  • Focus on motion acquisition for robots through human training, using motion capture to directly "puppet" humanoid and quadruped robots

  • Emphasis on machine learning by combining reinforcement learning with learning from demonstration

  • Worked under Peter Stone in the Learning Agents Research Group

  • Member of the Austin Villa RoboCup Team, Standard Platform League (using Aldebaran Nao humanoids)

  • Member, Reinforcement Learning Reading Group and Agents that Learn from Humans Reading Group

TRACLabs, Inc.

Houston, TX
May 2009 - January 2010

Intern/Programmer
  • Designed and built TRACBot, an autonomous mobile robot to showcase planning algorithms

  • Design and partial implementation of software architecture using Player/Stage/Gazebo/ROS

  • Sensor framework included 3D time-of-flight cameras, laser rangefinders, thermal sensors, distance/bump sensors, microphones, and 2D cameras

Amazon.com

Seattle, WA
May 2008 - August 2008

Software Development Engineer Intern
  • Implemented a major student-oriented textbook promotion in the Amazon Prime group

  • Experience working with service-oriented architectures (Java/C++), dynamic page generation

  • Dealt with large-scale reliability and latency constraints

  • Project led to more than 50,000 new Prime subscriptions

University of Virginia

Charlottesville, VA
Summer 2007
Department of Computer Science / Medical Center

Computer Applications in Medicine, NSF Research Experience for Undergraduates
  • Working with Dr. Mark Williams (Gerontology), wrote algorithms and data analysis tools using MATLAB

  • Supervised machine learning on a large corpus of accelerometer data to look for anomalies in the gait of at-risk geriatric patients, using unobtrusive and inexpensive hardware

Applied Research Laboratories

Austin, TX
June 2006 - May 2007

Senior Student Associate
  • Research and development using Java, Java3D, and CORBA; granted DoD Secret security clearance

  • Prototyped 3D desktop environments, created various testing utilities and front-end widgets

teaching

Intro to Robotics, K-2, AltSchool

Fall 2015 - Present

  • Creating curriculum and teaching hands-on introductory robotics courses

  • Built line-followers and NERF robots from scratch, prototyping with Makey Makeys, Scratch, Lego

How To Make (almost) Anything, MIT Media Lab

Fall 2011

Teaching Assistant to Neil Gershenfeld
  • Helped to run and teach the hands-on crash course on personal fabrication

  • Taught students skills such as computer-controlled cutting, molding and casting, basic electronics

publications

Adam Setapen. Creating Robotic Characters for Long-Term Interaction. Master's thesis, MIT. Readers: Cynthia Breazeal, Rosalind Picard, David DeSteno.

Nadia Cheng, Maxim Lobovsky, Steven Keating, Adam Setapen, Katy Gero, Anette Hosoi, and Karl Iagnemma. Design and Analysis of a Robust, Low-cost, Highly Articulated Manipulator Enabled by Jamming of Granular Media. 2012 IEEE International Conference on Robotics and Automation (ICRA 2012).

Natalie Freed, Jie Qi, Adam Setapen, Hayes Raffle, Leah Buechley, and Cynthia Breazeal. Sticking Together: Handcrafting Personalized Communication Interfaces. 2011 ACM International Conference on Interaction Design and Children (IDC 2011).

W. Bradley Knox, Adam Setapen, and Peter Stone. Reinforcement Learning with Human Feedback in Mountain Car. In AAAI 2011 Spring Symposium, Help Me Help You: Bridging the Gaps in Human-Agent Collaboration, Palo Alto, CA, March 2011.

Adam Setapen, Michael Quinlan, and Peter Stone. Beyond Teleoperation: Exploiting Human Motor Skills with MARIOnET. In AAMAS 2010 Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Toronto, Canada, May 2010.

Adam Setapen, Michael Quinlan, and Peter Stone. MARIOnET: Motion Acquisition for Robots through Iterative Online Evaluative Training (Extended Abstract). In The Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, May 2010.

Adam Setapen. Exploiting Human Motor Skills for Training Bipedal Robots. Undergraduate Honors Thesis/Technical Report HR-09-02. Committee: Peter Stone (chair), Dana Ballard, Gordon Novak.

invited talks

Droidcon 2012, Berlin, March 14, 2012: The Robot In Your Pocket.

360iDev 2013, Denver, September 9, 2013: Your Code Just Ran Across The Floor.

honors, awards and appointments

  • University of Texas College of Natural Sciences Dean’s Honored Graduate (2010)

  • Motorola Endowed Scholar (2007 – 2009)

  • Director of the Campus Technology Agency, University of Texas (2007)

  • Student Government Technology Liaison, University of Texas (2006 - 2007)

  • Single national recipient of the Käthe Wilson Memorial Scholarship to study in Germany (2005)

  • Department of Defense Secret Security Clearance

  • Freshman Research Initiative, algorithmic game theory, University of Texas (2005)

  • Chosen as peer tutor in Data Structures and Algorithms course, University of Texas (2006)

  • University of Texas honors list, Fall 2005 - Spring 2010

technical chops

Relevant Graduate Coursework:

Autonomous Robotics, Machine Learning, Cognitive Science, Natural Language Processing, Object Recognition, Affective Computing, Computational Neuroscience, Autonomous Multiagent Systems, Algorithmic Game Theory, Cryptography, Algorithms, Programming Languages, How to Make (almost) Anything, Sensor Applications for Interactive Environments, Technologies for Creative Learning

Programming Languages:

C, C++, Python, Java, C#, Objective-C, Lisp, Haskell

Frameworks:

iOS, Android, MATLAB, Eclipse, ROS

Design:

SolidWorks, AutoCAD, Rhino, Maya, Adobe Suite, Eagle PCB

Electronics:

Sensor prototyping and integration, power regulation and management, motor controllers

Fabrication:

CNCs, mills, lathes, laser cutters, waterjets, 3D printers (SLA, SLS, FDM, MJM), mold making



contact

asetapen [at] gmail [dot] com

LinkedIn

GitHub



biography

Adam Setapen is a computer scientist and roboticist who is passionate about making software and products that are easy to use and self-improving. With over ten years of concentrated academic research and real-world experience bringing consumer robots to market, he has become an expert in building and programming intelligent electromechanical systems. Adam thrives in environments that emphasize learning and collaboration, in which he can fully devote his skills to products and causes he cares about.

Adam has published papers on machine learning, robot design, learning from demonstration, and novel robot control interfaces. His academic research looks at how humans can bootstrap autonomous systems with sparse datasets obtained through real-world interactions. Since entering industry, his work has focused on making products and algorithms that are easy to use and self-improving. He has held positions as Lead Hardware Engineer for AltSchool, as a Roboticist for Romotive, 3D Robotics, and TRACLabs, Inc., and as a Software Engineer for Formlabs. Adam is a hacker at heart (and considers himself a "full-stack" roboticist), building his own robots and expressive objects to test his algorithms. He loves empowering people to be builders, teaching hands-on robotics courses and spending as much time in the shop as he can.

Adam is currently president of r-bots LLC, where he prototypes hardware and software to make the ideas of startups become products. He also works as an expert educator for AltSchool, developing curriculum and teaching robotics to K-8 students. Adam received a Master's degree from the MIT Media Lab where he spent his time building fuzzy dragon robots in the Personal Robots Group. Adam also earned an M.S. studying machine learning in the Learning Agents Research Group and a B.S. in the Turing Scholars Computer Science program, both at the University of Texas at Austin. He is a certified Yoga instructor, an eager snowboarder, an avid musician, and a bit of a coffee snob.

AltSchool

As Lead Hardware Engineer at AltSchool, I designed, prototyped, and maintained hardware devices for use by educators and students in the classroom. This included video cameras, microphones, wearables, and augmented spaces.

I continue to work with AltSchool as an expert educator, teaching robotics classes and helping educators create robotics and programming curricula for K-8 students.

Relevant Technologies

Rapid prototyping, SolidWorks, Eagle PCB, Arduino, IP cameras, Python, Django, Node.js, installations and infrastructure


DragonBot

Collaborators: Natalie Freed, Fardad Faridi, Sigurður Örn, Marc Strauss, Jesse Grey, Matt Berlin, Iliya Tsekov

As robots begin to appear in people's everyday lives, it's essential that we understand natural ways for humans and machines to communicate, share knowledge, and build relationships. For years, researchers have tried to make robots more socially capable by inviting human subjects into a laboratory to interact with a robotic character for a few minutes. But the laboratory doesn't share the complexity of the real world, and it offers little hope of long-term interaction. Enter the modern smartphone, which packs the essential functionality for a robot into a tiny, always-connected package. The DragonBot platform is an Android-based robot specifically built for social learning through real-world interactions.

DragonBot is all about data-driven robotics. If we want robots capable of social interaction, we simply need a lot more examples of how humans interact in the real world. DragonBot's cellphone makes the platform deployable outside of the laboratory, and the onboard batteries are capable of powering the robot for over seven hours. This makes DragonBot perfect for longitudinal interactions - learning over time and making the experience more personalized. DragonBot is a "blended reality" character - one that can transition between physical and virtual representations. If you remove the phone from DragonBot's face, the character appears on the phone's screen in a full 3D model, allowing for interaction on the go.

I designed and built DragonBot from scratch, building on the lessons I learned creating Nimbus. The robot uses the Android phone for all of its onboard computation, communicating with custom-built cloud services for computationally heavy tasks like face detection and speech recognition. The phone handles motor control, 3D animation, image streaming, data capture, and much more. DragonBot uses a delta parallel manipulator with upgraded DC motors, custom motor controllers (made by Sigurður Örn), and precision-machined linkages. Two extra motors were added: head tilt (letting the robot look at objects on a tabletop or up at a user) and a wagging tail (which improves children's perception of the robot's animacy).
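To make the offloading concrete, here's a minimal sketch of how a client might hand a camera frame to a cloud face-detection service. It's written in Python rather than the robot's Android/Java stack, and the endpoint URL and JSON fields are hypothetical stand-ins for the custom services:

import requests

CLOUD_ENDPOINT = "http://example.com/detect_faces"  # hypothetical URL

def detect_faces_remotely(jpeg_bytes, timeout=2.0):
    """POST a JPEG frame and return a list of face bounding boxes."""
    try:
        resp = requests.post(
            CLOUD_ENDPOINT,
            files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
            timeout=timeout,
        )
        resp.raise_for_status()
        # Assumed response shape: {"faces": [{"x":..,"y":..,"w":..,"h":..}]}
        return resp.json().get("faces", [])
    except requests.RequestException:
        return []  # fall back to onboard behavior if the cloud is unreachable

# Usage: boxes = detect_faces_remotely(open("frame.jpg", "rb").read())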

I'm currently using DragonBot to build models of joint attention in human-robot interactions. Most models of attention are based entirely on visual stimuli, but there is a lot of information contained in other sensory modalities about the social dynamics of an interaction. My ongoing work attempts to improve social attention through language, symbolic labeling, pointing, and other non-verbal behavior. Through easy-to-use teleoperation interfaces and intelligent shared autonomy, my Master's thesis aims to make it much easier to "bootstrap" a robot's performance through large datasets of longitudinal interactions.

Relevant Technologies

Rapid Prototyping, SolidWorks, Eagle PCB, aesthetic / fabric design, Android / Java, C, Python, OpenCV, Machine Learning, Cloud architectures


Romo

I was a Roboticist at Romotive, where we built a small iPhone-based robot to help teach children programming concepts through a lovable embodied character named Romo. While at Romotive I led a software team of seven people, taking ownership of Romo's personality and autonomy and coordinating the robot's software architecture.

One of my primary contributions for Romo was a best-in-class iOS framework for realtime computer vision called RMVision. This framework allowed our robot to expertly track faces, follow lines on the floor, detect changes in brightness, and use natural training by a person to chase brightly colored objects. Using a combination of OpenCV and hardware-accelerated OpenGL shaders, this framework was able to squeeze every bit of performance out of both legacy and modern iOS devices.
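For flavor, here's a minimal sketch of the color-tracking idea in Python/OpenCV; RMVision itself was Objective-C with GPU shaders, and the HSV bounds below are illustrative stand-ins for the colors Romo learned from its user:

import cv2
import numpy as np

def track_colored_object(frame_bgr, hsv_lo=(100, 120, 80), hsv_hi=(130, 255, 255)):
    """Return the (x, y) centroid of the largest blob in an HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

Each frame, the centroid feeds the drive controller: steer toward the blob's horizontal offset and you get a robot that chases its favorite color.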

Relevant Technologies

iOS development, OpenCV, hardware-accelerated computer vision, Flash

Video: Developing embodied animations for Romo.

3D Robotics

As a Roboticist at 3D Robotics, I prototyped hardware and software for the leading US drone company. I was in charge of the main components of the video streaming and update system for Solo, a "smart drone" capable of creating cinematic aerial video. I developed realtime computer vision prototypes and production-ready code using a combination of GStreamer (embedded), OpenCV, and GPUImage (iOS). I also traveled to China to help with the production of Solo, where I created hardware jigs and software tests for the assembly line.
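As a sketch of the streaming idea, here's a desktop Python/GStreamer pipeline for H.264 over RTP; Solo's real pipeline ran on embedded Linux with a hardware encoder, so the elements, host, and port below are illustrative assumptions:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Test source -> software H.264 encode -> RTP payload -> UDP out.
pipeline = Gst.parse_launch(
    "videotestsrc is-live=true "
    "! x264enc tune=zerolatency bitrate=2000 "
    "! rtph264pay config-interval=1 pt=96 "
    "! udpsink host=127.0.0.1 port=5600"
)
pipeline.set_state(Gst.State.PLAYING)

# A receiver (e.g. the companion app) would depayload and decode the stream:
#   udpsrc port=5600 caps="application/x-rtp, encoding-name=H264"
#   ! rtph264depay ! avdec_h264 ! autovideosink

try:
    GLib.MainLoop().run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)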

Relevant Technologies

Yocto (Embedded Linux), Python, GStreamer, iOS (Swift and Objective-C), ROS and OpenCV, Machine learning

Video: Developing the "dronie" with Phu Nguyen, Kellyn Loehr, and Eric Liao.

Nimbus

Collaborators: Marc Strauss, Hasbro

Nimbus is an exploration into using delta parallel manipulators for highly expressive tabletop robot characters. At the core of the platform lies a four degree-of-freedom delta manipulator, able to move the robot's head in all three translational directions and around a single rotational axis. Parallel manipulators, typically used in manufacturing pick-and-place robots, are also particularly well suited for creating expressive "squash-and-stretch" characters. Because animating the motion of the robot is as simple as controlling a single inverse kinematics handle, even people without animation expertise can easily program believable motions for Nimbus.
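Here's a rough Python sketch of the closed-form inverse kinematics for the three rotary arms of a delta stage (Nimbus adds a fourth, rotational axis on top of this); the geometry constants are illustrative, not Nimbus's actual dimensions:

import math

# Assumed geometry (mm): F = base triangle side, E = effector triangle side,
# RF = upper-arm length, RE = forearm length.
F, E, RF, RE = 200.0, 60.0, 100.0, 240.0
TAN30 = math.tan(math.pi / 6)

def _arm_angle_yz(x0, y0, z0):
    """Shoulder angle for one arm, with the target in that arm's frame."""
    y1 = -0.5 * TAN30 * F              # shoulder joint position
    y0 -= 0.5 * TAN30 * E              # account for effector platform size
    a = (x0 * x0 + y0 * y0 + z0 * z0 + RF * RF - RE * RE - y1 * y1) / (2 * z0)
    b = (y1 - y0) / z0
    d = -(a + b * y1) ** 2 + RF * (b * b * RF + RF)
    if d < 0:
        raise ValueError("target out of reach")
    yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1)   # elbow, outer solution
    zj = a + b * yj
    return math.degrees(math.atan2(-zj, y1 - yj))

def delta_ik(x, y, z):
    """Three shoulder angles (degrees) placing the effector at (x, y, z), z < 0."""
    angles = []
    for i in range(3):                 # rotate the target into each arm's frame
        c, s = math.cos(2 * math.pi * i / 3), math.sin(2 * math.pi * i / 3)
        angles.append(_arm_angle_yz(x * c + y * s, y * c - x * s, z))
    return angles

# print(delta_ik(0, 0, -200))  # straight down: all three angles equal

An animator only ever touches (x, y, z), the single IK handle; the solver takes care of the rest.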

Nimbus also represents an exploration into robot "furs" that move organically with the kinematic constraints of the platform. Collaborating with engineers on the Soft Goods team at Hasbro, we created a sewing pattern that preserves the volume of the character while deforming it like a balloon being squashed and stretched. Using passive elements like long-pile fur and silicone-cast hands and feet, Nimbus aims to increase believability with a very minimal number of manipulators.

The furry exterior also has fabric capacitive electrodes sewn in, allowing for detection of touch and pre-touch in six distinct locations on the robot's body. The robot wirelessly receives information about people nearby from a Microsoft Kinect hidden in the environment, and Nimbus was programmed to mimic anyone in front of it. The video below shows the robot's motion, and illustrates the robot moving along with humans and expressing elation when the human's motion coincides with its own.

Relevant Technologies

Rapid Prototyping, Fabric design, Molding and casting, SolidWorks, Eagle PCB, Android / Java, C, Python, Kinect and Point-Cloud Library (PCL)

Videos: the robot's motion and mimicry, plus some mechanical prototypes along the way.

Playtime Computing

Collaborators: David Robert, Natalie Freed

The Playtime Computing system is a platform for blended-reality play: an interactive, collaborative media experience that treats the screen and the real world as one continuous space. On-screen audiovisual media (e.g., virtual environments and story-world characters) extend into the physical environment via digital projectors, robotics, realtime behavior capture, and tangible interfaces. Player behavior is tracked using 3D motion capture as well as other sensors such as cameras and audio inputs.

Characters in this system can seemingly transition smoothly from the physical world to the virtual on-screen world through a physical enclosure that metaphorically acts as a portal between the virtual and the real. Any events or changes that happen to the physical character in the real world are carried over to the virtual world. Digital assets can be transitioned from the virtual to the physical world. These blended reality characters can either be programmed to behave autonomously, or their behavior can be controlled by the players.

My primary contribution was building the "trans-reality portal," the enclosure that transports the robot between physical and virtual representations. I also wrote the image-stitching code that makes the eight projectors output a continuous environment, using a Gaussian pattern from each projector and a single camera image of the scene to back-calculate the projector positions. This is where I had my first exposure to powerful realtime animation techniques through Touch Designer, under the guidance of David Robert. I learned a ton about setting up large audiovisual installations, exploiting graphics supercomputers, and building robot houses.
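A simplified Python/OpenCV sketch of that calibration step: find each projector's bright Gaussian blobs in the camera image, then fit a homography from camera space into that projector's pixel space. The threshold and the four-point minimum are assumptions:

import cv2
import numpy as np

def blob_centroids(camera_gray, thresh=200):
    """Centroids of bright spots in the camera image, largest blobs first."""
    _, mask = cv2.threshold(camera_gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        m = cv2.moments(c)
        if m["m00"] > 0:
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(pts, dtype=np.float32)

def projector_homography(projected_pts, observed_pts):
    """Fit a camera->projector map from >= 4 corresponding point pairs."""
    H, _ = cv2.findHomography(observed_pts, projected_pts, cv2.RANSAC)
    return H  # pre-warp content with cv2.warpPerspective to align the seams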

Relevant Technologies

Rapid prototyping, Arduino, Motor control, motion capture (Vicon), Touch Designer, C


Formlabs

I worked with Formlabs when it was still a startup of about 10 people. I set up the company's first websites and handled the design and development of PreForm, the software Formlabs uses to stage and prepare 3D models before sending them to the printer.

Relevant Technologies

C++, Python, Qt Framework, OpenGL, 3D modeling

Photos courtesy of Andy Ryan and Formlabs.

MDS

Collaborators: Nick dePalma, Sigurður Örn, Jin Joo Lee, Jason Alonso

The MDS platform, which stands for mobile, dexterous, and social, is a humanoid robot designed to interact naturally with people. I spent a few weeks working on Maddox, the newest MDS robot in the fleet. I wrote low-level Linux drivers and calibration code for quick initialization of the robot's motor positions. Through working on MDS, I became familiar with the challenges of animating a highly sophisticated humanoid, solving issues in high-level motion synthesis and low-level motor control.

Relevant Technologies

Motor control, Python, C, Java, Linux administration

Video: Developing a Jeopardy question for Maddox, the MDS robot.

MARIONET

Collaborators: Peter Stone, Michael Quinlan

MARIONET, or Motion Acquisition for Robots through Iterative Online Evaluative Training, is a framework I developed with my undergraduate and Master's adviser, Dr. Peter Stone.

Although machine learning has improved the rate and accuracy at which robots are able to learn, there still exist tasks for which humans can improve performance significantly faster and more robustly than computers. While some ongoing work considers the role of human reinforcement in intelligent algorithms, the burden of learning is often placed solely on the computer. These approaches neglect the expressive capabilities of humans, especially our ability to quickly refine motor skills. MARIONET's paradigm centers around a human in a motion-capture laboratory who "puppets" a robot in realtime. This mechanism allows for rapid motion development on different robots, with a training process that provides a natural human interface and requires no technical knowledge. Fully implemented and tested on two robotic platforms (one quadruped and one biped), our research demonstrated that MARIONET is a viable way to directly transfer human motor skills to robots.
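A minimal sketch of the realtime puppeting loop, in Python: each mocap frame yields a human joint angle, which is clamped to the robot's limits and rate-limited before being sent as a motor target. The joint names and limits are illustrative; the actual system streamed Vicon data to AIBO and Nao robots:

import math

JOINT_LIMITS = {"left_knee": (0.0, math.radians(130))}   # assumed limits (rad)
MAX_STEP = math.radians(5)                               # per-frame slew limit

def retarget(joint, human_angle, last_command):
    """Map a captured human joint angle to a safe robot motor command."""
    lo, hi = JOINT_LIMITS[joint]
    target = min(max(human_angle, lo), hi)               # clamp to robot range
    step = min(max(target - last_command, -MAX_STEP), MAX_STEP)
    return last_command + step                           # rate-limited command

# Per frame: cmd = retarget("left_knee", mocap_angle, cmd); send cmd to the motor.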

Relevant Publications

Adam Setapen, Michael Quinlan, and Peter Stone. Beyond Teleoperation: Exploiting Human Motor Skills with MARIOnET. In AAMAS 2010 Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Toronto, Canada, May 2010.

Adam Setapen, Michael Quinlan, and Peter Stone. MARIOnET: Motion Acquisition for Robots through Iterative Online Evaluative Training (Extended Abstract). In The Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, May 2010.

Adam Setapen. Exploiting Human Motor Skills for Training Bipedal Robots. Undergraduate Honors Thesis/Technical Report HR-09-02. Committee: Peter Stone (chair), Dana Ballard, Gordon Novak.

Relevant Technologies

Machine learning, Motion Capture (Vicon), Robot Kinematics, C++, Qt, MATLAB, Sony AIBO, Aldebaran Nao


SnakeBot

Collaborators: Nadia Cheng, Maxim Lobovsky, Steven Keating, Katy Gero, Anette Hosoi, Karl Iagnemma

This highly articulated snake-like robot uses non-traditional actuators: each segment of the manipulator contains granular media that transitions between a fluid-like state and a rigid, jammed one when a vacuum is applied. Combined with traditional off-board motors and tension cables, this lets the manipulator reach and hold complex configurations. I helped design the software that drove the motion of the platform.

Relevant Publications

Nadia Cheng, Maxim Lobovsky, Steven Keating, Adam Setapen, Katy Gero, Anette Hosoi, and Karl Iagnemma. Design and Analysis of a Robust, Low-cost, Highly Articulated Manipulator Enabled by Jamming of Granular Media. 2012 IEEE International Conference on Robotics and Automation (ICRA 2012).


electrello

I've wanted an electric cello since I was old enough to realize they existed. My parents, both professional classical musicians, started me on the cello when I was three years old. But I gravitated towards an electric guitar as a rebellious teenager, and since then I've anxiously waited to combine the soothing tones of the cello with the warm hum of a vintage tube amp. When I took Neil Gershenfeld's whirlwind class - How To Make (almost) Anything - I knew I had to design and build a cello to call my own.

Most electric cellos are either too expensive or don't have the same "feel" as a traditional instrument. electrello is a low-cost instrument that retains the feel of a traditional cello while allowing the performer to move more freely, due to the motion provided by the four-bar linkages which the player grips with their legs. The bow is outfitted with a wireless accelerometer and vibration motor, packed into a compact 3D-printed enclosure that can fit on any cello bow. The accelerometer records the movements of the bow and can store this data for analysis or use it in realtime. For example, an audio effect - like distortion - could be applied to the sound based on bow speed, intensifying the faster passages of a piece. The vibration motor is primarily an idea for remote lessons, where a teacher can provide haptic feedback to a student in an unobtrusive way.
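A small Python sketch of that bow-speed effect; the sample interval, leak factor, and gain mapping are illustrative assumptions, not the instrument's actual firmware:

import numpy as np

def bow_speed(accel, dt=0.01, leak=0.995):
    """Leaky integration of bow acceleration (m/s^2) into speed (m/s)."""
    v, speeds = 0.0, []
    for a in accel:
        v = leak * v + a * dt          # the leak suppresses integration drift
        speeds.append(abs(v))
    return np.array(speeds)

def distort(audio, drive):
    """Soft-clip distortion; drive >= 1, higher = dirtier."""
    return np.tanh(drive * audio) / np.tanh(drive)

# Faster bowing means more drive, intensifying the fast passages:
# drive = 1.0 + 8.0 * np.clip(bow_speed(accel_frame).mean() / 0.5, 0, 1)
# out_block = distort(in_block, drive)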

The body of the instrument contains an Android phone, which can wirelessly communicate with the bow and display relevant information based on the instrument's sound. Also, the integrated microphone can stream the sound over the internet, for a web-based performance or a remote teaching session. A $1.50 piezo and a simple instrumentation amplifier capture the vibrations from the bridge and convert them into an audio signal, and I have plans to add a magnetic coil pickup to allow for a more grungy and distorted tone. Originally, I wanted the phone to act as an effects box and transcription device, but at the time cellphones couldn't handle simultaneous analog-to-digital and digital-to-analog conversion.

Relevant Technologies

Rapid prototyping, musical instrument design, woodworking, electronics design, analog filters / amplifiers, IMUs, Android

More information

The project is well documented on my final project website, and all design files and code can be freely downloaded at the bottom of the page. Video/audio coming soon!

TRACBot

Collaborators: Aaron Hill, Dr. Patrick Beeson, Dr. David Kortenkamp

TRACBot is a differential-drive robot I built from the ground up while interning at TRACLabs, Inc. I designed the robot to work with the Player/Stage/Gazebo software stack, the predecessor to the now-popular ROS framework. I integrated a wide variety of sensors, such as LIDAR, thermal sensors, infrared rangers, cameras, and microphones, and I helped design the software architecture to exploit this rich sensory data. After my internship ended, I was hired as a part-time programmer to create simulated 3D models and environments for the robot. Working on TRACBot exposed me to problems in robotics I might never encounter in academia, and it was an incredible learning experience.

Relevant Technologies

Robot design, sensor prototyping, software drivers, C++, Python, Player/Stage, Gazebo


The Cnidarian

Collaborators: Emma Freed, Natalie Freed, Pol Pla I Conesa, Jie Qi, Xiao Xiao

When Naomi Darian is poisoned by jellyfish venom, she transforms into The Cnidarian - a jellyfish super-villain created for the TEI 2011 Design Challenge. A custom dress outfitted with electroluminescent tentacles and a pulsing hood shrouds the mysterious Cnidarian. She attacks in a flash, with the palms of her gloves housing ultra-bright bulbs from a pair of hacked disposable cameras. I did the electronics for the EL wire, the motion control for the hood, and composed the music for the video (superhero theme song, bucket list item checked).

Relevant Technologies

Costume design, motion control, EL wire, Ableton Live

Photos courtesy of Andy Ryan.

More information

More information can be found on the Cnidarian project blog.

Telescrapbook

Collaborators: Natalie Freed and Jie Qi

Telescrapbook is a set of wirelessly connected remote scrapbooks that are both educational and customizable. Telescrapbook presents I/O Stickers: adhesive sensors and actuators that children can use to create personalized remote communication interfaces. By attaching I/O Stickers to special greeting cards, children can invent ways to communicate with long-distance loved ones through personalized, connected messages. Children decorate these cards with their choice of craft materials, creatively expressing themselves while making a functioning interface. The low-bandwidth connections leave room for children to design not only the look and function, but also the signification of the connections.

Telescrapbook is the wonderful work of Jie Qi and Natalie Freed, who let me help out with some coding and soft-sensor making.

Relevant Publications

Natalie Freed, Jie Qi, Adam Setapen, Hayes Raffle, Leah Buechley, and Cynthia Breazeal. Sticking Together: Handcrafting Personalized Communication Interfaces. 2011 ACM International Conference on Interaction Design and Children (IDC 2011).

Relevant Technologies

C, Arduino


SEEDpower

SEEDpower is an integrated solution for power management and regulation on small-to-medium-sized robots. With full isolation of logic and motor power sources, the board supports 3-channel input (up to three batteries) and 4-channel output (motor voltage, +12V, +5V, and +3.3V). Any two of the input batteries may be placed in series or parallel (using on-board jumpers), and the output is fully protected with both fuses and flyback diodes. The board supports "plug-and-play" charging, using an onboard relay to switch to an external supply whenever the robot is plugged in.

I built a few custom charging stations to charge the batteries inside DragonBot and Huggable while simultaneously powering the robots from an external supply. Each station charges up to three lithium-polymer batteries and provides external power via two 110W power supplies. The lockable charging stations are kid-friendly, having only a single power umbilical with an industrial-grade polarized connector. The front of each station has an LED matrix indicating battery levels and current draw for the external supplies.

Relevant Technologies

Electronics design, LiPo power management, Eagle PCB, Rapid Prototyping

Video: The SEEDpower board delivers juice to DragonBot, Huggable, and many other projects from the Personal Robots Group.



artbots

Robots don't always have to perform a function. Building robots and physical objects that evoke strong emotions has always been a passion of mine. Here are some of the less-than-functional robots I've made for the purpose of artistic expression.

Photos and videos: Verna, Cuboogie, food_clock, and wiggler.

Portable, Inexpensive, and Unobtrusive Accelerometer-based Geriatric Gait Analysis

Collaborators: Chris Gutierrez, Dr. Mark Williams

In the summer of 2007, I was chosen for an NSF Research Experience for Undergraduates at the University of Virginia focusing on computing in medicine. In a joint venture between the Department of Computer Science and the School of Medicine, I spearheaded a project titled "Portable, Inexpensive, and Unobtrusive Accelerometer-based Geriatric Gait Analysis." Collaborating closely with a gerontologist, Dr. Mark Williams, we attached wireless accelerometers to the ankles, wrists, and waists of geriatric patients and recorded their walking movements. Using signal processing and supervised machine learning techniques, we were able to detect diseases such as Alzheimer's, spastic hemiparesis, and spastic paraparesis with surprising accuracy. We also developed GaitMate, a tool for aiding physicians in using this machine learning data for diagnosis in clinical gait analysis. Dr. Williams has continued to build on my work, and plans to release a commercial version in the near future. Applications of this research include prediction and confirmation of geriatric disorders, telemedicine, and long-term analysis.
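A schematic Python version of that pipeline: slice each recording into windows, compute simple per-window statistics, and train a supervised classifier. The window length, features, and stand-in data are assumptions; the original work was done in MATLAB:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=128):
    """Per-window mean/std/peak features from one accelerometer axis."""
    feats = []
    for i in range(0, len(signal) - win + 1, win):
        w = signal[i:i + win]
        feats.append([w.mean(), w.std(), np.abs(w).max()])
    return np.array(feats)

# Stand-in data: X holds windows from labeled recordings,
# y marks 0 = typical gait, 1 = anomalous gait.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(0, s, 4096)) for s in (1.0, 2.5)])
y = np.repeat([0, 1], len(X) // 2)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# clf.predict(window_features(new_recording)) flags suspicious windows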

Relevant Technologies

MATLAB, IMUs, Supervised Machine Learning


Robocup

Collaborators: Peter Stone

For two years I worked on the UT Austin Villa robot soccer team. During this time, I worked on creating motion primitives through human training (imitation learning to teach the robots how to walk and kick). I also worked on models of teamwork for passing and helped build a set of development tools for the Sony AIBO and Aldebaran Nao.

Relevant Technologies

Machine learning, C++, Python, Player/Stage, Gazebo, Qt, Sony AIBO, Aldebaran Nao


Code

Here are some old but interesting projects I completed during my undergraduate career at the University of Texas at Austin.

Operating System

A bootable x86 operating system I developed with my good friend Jose Falcon for my undergraduate operating systems course.

SLAM Simulator

A Simultaneous Localization and Mapping (SLAM) simulator designed in my graduate robotics course. It displays a map of a mobile robot's probabilistic position in its environment using a particle filter.
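A toy Python sketch of the particle-filter update the simulator visualized, with a 1-D stand-in motion and sensor model:

import numpy as np

rng = np.random.default_rng(0)
particles = rng.uniform(0, 10, size=500)        # candidate robot positions

def pf_step(particles, control, measurement, landmark=8.0, noise=0.3):
    """One predict-weight-resample cycle against a single range measurement."""
    particles = particles + control + rng.normal(0, 0.1, particles.size)
    expected = np.abs(landmark - particles)      # range each particle predicts
    w = np.exp(-0.5 * ((measurement - expected) / noise) ** 2)
    w /= w.sum()
    idx = rng.choice(particles.size, particles.size, p=w)   # resample
    return particles[idx]

# particles = pf_step(particles, control=0.5, measurement=6.2)
# The particle density approximates the robot's position belief.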

Genetic Algorithm

A genetic algorithm for learning to play keepaway in the robotic soccer domain. I designed this algorithm in my undergraduate course, Autonomous Multiagent Systems.
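A toy Python version of that GA loop: evaluate a population of parameter vectors, keep the fittest, and refill with mutated copies. The fitness function below is a stand-in for scoring keepaway episodes:

import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, dim=8, pop_size=30, elite=5, sigma=0.1, generations=50):
    pop = rng.normal(0, 1, (pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-elite:]]           # keep the fittest
        children = parents[rng.integers(0, elite, pop_size - elite)]
        children = children + rng.normal(0, sigma, children.shape)  # mutate
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

# best = evolve(lambda p: -np.sum(p ** 2))  # stand-in fitness: maximize -|p|^2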

Pipelined Processor

A pipelined processor designed and implemented (using an extended version of the LC-3 architecture) for my undergraduate computer architecture course with Daniel Chimene.

  • Language: Verilog

  • Requirements: Verilog simulator (such as VCS)


Fun with Lambda Calculus! A monadic parser in Haskell.

A monadic parser, typechecker, and evaluator for a simply-typed lambda calculus augmented with booleans, natural numbers, fix-points and references.

RSA in Java

An efficient Java implementation of the RSA encryption/decryption protocol.
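For illustration, here's textbook RSA sketched in Python (the project itself was Java, and real-world RSA needs padding such as OAEP, which this omits):

import math
import secrets
from sympy import randprime

def keygen(bits=256):  # small modulus for demo speed; real keys are 2048+ bits
    e = 65537
    while True:
        p = randprime(2 ** (bits - 1), 2 ** bits)
        q = randprime(2 ** (bits - 1), 2 ** bits)
        phi = (p - 1) * (q - 1)
        if p != q and math.gcd(e, phi) == 1:
            break
    d = pow(e, -1, phi)                # modular inverse (Python 3.8+)
    return (p * q, e), (p * q, d)

def encrypt(pub, m):
    n, e = pub
    return pow(m, e, n)

def decrypt(prv, c):
    n, d = prv
    return pow(c, d, n)

pub, prv = keygen()
msg = secrets.randbelow(pub[0])
assert decrypt(prv, encrypt(pub, msg)) == msg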