curriculum vitæ

research statement

I am a roboticist focused on how everyday people can train robots through intuitive social and physical interactions. Through focused academic research and real-world experience bringing consumer robots to market, I have tackled the problem of transferring knowledge between humans and robots from many angles. With a background in computer science and a passion for building things, I thrive when designing, prototyping, and programming socially competent robots that harness adaptive algorithms to improve their performance over time.

education

Massachusetts Institute of Technology

September 2010 - August 2012
M.S., Media Arts and Sciences, Personal Robots Group @ MIT Media Lab

University of Texas at Austin

August 2005 - May 2010
M.S., Computer Science, concentration in Artificial Intelligence, minor in Cognitive Science
B.S., Turing Scholars Honors Computer Science

experience

Romotive, Inc.

San Francisco, CA
October 2012 - Present

Roboticist
  • Technical lead on machine learning and human-robot interaction

  • Designed and implemented a computer vision framework for iOS harnessing OpenCV and GPU filters

  • Led the design, implementation, and documentation of a robotics SDK for iOS developers

  • Implemented facial detection (GPU-based Viola-Jones), facial recognition (local binary patterns), a persistent memory system, a scripted interaction environment, and audio pitch recognition and synthesis

Formlabs, Inc.

Cambridge, MA
June 2012 - September 2012

Software Engineer
  • Designed and implemented the UI and UX for a low-cost high-resolution 3D printer (the Form 1)

  • Worked in depth on models of complex 3D geometries in C++ using OpenGL ES 2.0

MIT Media Lab

Cambridge, MA
September 2010 - August 2012

Research Assistant
  • Focus on cloud-based robot architectures, affective robotics, and applied machine learning

  • Built and programmed DragonBot, an expressive and inexpensive robot platform powered by an Android phone. DragonBot was used to secure a $10M NSF grant for socially assistive robots.

  • Helped build Playtime Computing, an interactive and immersive robotic play-space for children

  • Rapid design and prototyping of electromechanical systems

University of Texas at Austin

Austin, TX
August 2008 - July 2010

Graduate Research Assistant
  • Focus on motion acquisition for robots through human training, using motion capture to directly "puppet" humanoid and quadruped robots

  • Emphasis on machine learning by combining reinforcement learning with learning from demonstration

  • Worked under Peter Stone in the Learning Agents Research Group

  • Member of the Austin Villa RoboCup team, Standard Platform League (Aldebaran Nao humanoids)

  • Member, Reinforcement Learning Reading Group and Agents that Learn from Humans Reading Group

TRACLabs, Inc.

Houston, TX
May 2009 - January 2010

Intern/Programmer
  • Designed and built TRACBot, an autonomous mobile robot to showcase planning algorithms

  • Designed and partially implemented a software architecture using Player/Stage/Gazebo/ROS

  • Sensor framework included 3D time-of-flight cameras, laser rangefinders, thermal sensors, distance/bump sensors, microphones, and 2D cameras

Amazon.com

Seattle, WA
May 2008 - August 2008

Software Development Engineer Intern
  • Implemented a major student-oriented textbook promotion in the Amazon Prime group

  • Experience working with service-oriented architectures (Java/C++), dynamic page generation

  • Dealt with large-scale reliability and latency constraints

  • Project led to more than 50,000 new Prime subscriptions

University of Virginia

Charlottesville, VA
Summer 2007
Department of Computer Science / Medical Center

Computer Applications in Medicine, NSF Research Experience for Undergraduates
  • Worked with Dr. Mark Williams (Gerontology) to write algorithms and data analysis tools in MATLAB

  • Applied supervised machine learning to a large corpus of accelerometer data to look for anomalies in the gait of at-risk geriatric patients, using unobtrusive and inexpensive hardware

Applied Research Laboratories

Austin, TX
June 2006 - May 2007

Senior Student Associate
  • Research and development using Java, Java3D, and CORBA; granted DoD Secret security clearance

  • Prototyped 3D desktop environments, created various testing utilities and front-end widgets

teaching

How To Make (almost) Anything, MIT Media Lab

Fall 2011

Teaching Assistant to Neil Gershenfeld
  • Helped to run and teach the hands-on crash course on personal fabrication

  • Taught students skills such as computer-controlled cutting, molding and casting, and basic electronics

publications

Adam Setapen. Creating Robotic Characters for Long-Term Interaction. Master's thesis, MIT. Readers: Cynthia Breazeal, Rosalind Picard, David DeSteno.

Nadia Cheng, Maxim Lobovsky, Steven Keating, Adam Setapen, Katy Gero, Anette Hosoi, and Karl Iagnemma. Design and Analysis of a Robust, Low-cost, Highly Articulated Manipulator Enabled by Jamming of Granular Media. To appear, 2012 IEEE International Conference on Robotics and Automation (ICRA 2012).

Natalie Freed, Jie Qi, Adam Setapen, Hayes Raffle, Leah Buechley, and Cynthia Breazeal. Sticking Together: Handcrafting Personalized Communication Interfaces. 2011 ACM International Conference on Interaction Design and Children (IDC 2011).

W. Bradley Knox, Adam Setapen, and Peter Stone. Reinforcement Learning with Human Feedback in Mountain Car. AAAI 2011 Spring Symposium - Help Me Help You: Bridging the Gaps in Human-Agent Collaboration, Palo Alto, CA, March 2011.

Adam Setapen, Michael Quinlan, and Peter Stone. Beyond Teleoperation: Exploiting Human Motor Skills with MARIOnET. In AAMAS 2010 Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Toronto, Canada, May 2010.

Adam Setapen, Michael Quinlan, and Peter Stone. MARIOnET: Motion Acquisition for Robots through Iterative Online Evaluative Training (Extended Abstract). In The Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, May 2010.

Adam Setapen. Exploiting Human Motor Skills for Training Bipedal Robots. Undergraduate Honors Thesis/Technical Report HR-09-02. Committee: Peter Stone (chair), Dana Ballard, Gordon Novak.

invited talks

Droidcon 2012, Berlin, 03.14.2012: The Robot In Your Pocket.

360iDev 2013, Denver, 09.09.2013: Your Code Just Ran Across The Floor.

honors, awards and appointments

  • University of Texas College of Natural Sciences Dean’s Honored Graduate (2010)

  • Motorola Endowed Scholar (2007 – 2009)

  • Director of the Campus Technology Agency, University of Texas (2007)

  • Student Government Technology Liaison, University of Texas (2006 - 2007)

  • Single national recipient of the Käthe Wilson Memorial Scholarship to study in Germany (2005)

  • Department of Defense Secret Security Clearance

  • Freshman Research Initiative, algorithmic game theory, University of Texas (2005)

  • Chosen as a peer tutor for the Data Structures and Algorithms course, University of Texas (2006)

  • University of Texas honors list, Fall 2005 - Spring 2010

technical chops

Relevant Graduate Coursework:

Autonomous Robotics, Machine Learning, Cognitive Science, Natural Language Processing, Object Recognition, Affective Computing, Computational Neuroscience, Autonomous Multiagent Systems, Algorithmic Game Theory, Cryptography, Algorithms, Programming Languages, How to Make (almost) Anything, Sensor Applications for Interactive Environments, Technologies for Creative Learning

Programming Languages:

C, C++, Java, C#, Objective-C, Lisp, Haskell, Python, JavaScript

Frameworks:

iOS, Android, MATLAB, Eclipse, ROS

Design:

SolidWorks, AutoCAD, Rhino, Maya, Adobe Suite, Eagle PCB

Electronics:

Sensor prototyping and integration, power regulation and management, motor controllers, analog filter design

Fabrication:

CNCs, mills, lathes, laser cutters, waterjets, 3D printers (SLA, SLS, FDM, MJM), mold making



contact

asetapen [at] media [dot] mit [dot] edu

+1.512.524.9682



biography

Adam Setapen is a roboticist who likes to explore the intersection of science, engineering, design, and art. His research focuses on the social dynamics of humans interacting with robots, harnessing machine learning to improve performance and increase personalization. His work aims to push data-driven robotics forward by making accessible agents, both hardware and software, that can exist outside the laboratory while collecting and analyzing large datasets of simple "real-world" interactions. Through novel control interfaces for shared autonomy, Adam's work looks at how humans can bootstrap autonomous systems with data obtained from real-world interactions.

In addition to his work on software for intelligent social agents, Adam is a hacker at heart, building his own robots and expressive objects to frame his research. By deeply integrating principles from animation and design into a robot's electromechanical systems, his work aims to find better ways for physical computers to socially interact with people.

Adam is currently a Roboticist at Romotive, where he is heading up the software team. Adam received a Master's degree from the MIT Media Lab, where he spent his time building furry dragon robots in the Personal Robots Group. Adam also earned an M.S. studying machine learning in the Learning Agents Research Group and a B.S. in the Turing Scholars Computer Science program, both at the University of Texas at Austin. He has held positions at TRACLabs, Inc., Amazon.com, The University of Virginia, and Applied Research Laboratories.

DragonBot

Collaborators: Natalie Freed, Fardad Faridi, Sigurður Örn, Marc Strauss, Jesse Grey, Matt Berlin, Iliya Tsekov

As robots begin to appear in people's everyday lives, it's essential that we understand natural ways for humans and machines to communicate, share knowledge, and build relationships. For years, researchers have tried to make robots more socially capable by inviting human subjects into a laboratory to interact with a robotic character for a few minutes. But the laboratory doesn't capture the complexity of the real world, and it offers little possibility of long-term interaction. Enter the modern smartphone, which packs the essential functionality for a robot into a tiny, always-connected package. The DragonBot platform is an Android-based robot built specifically for social learning through real-world interactions.

DragonBot is all about data-driven robotics. If we want robots capable of social interaction, we simply need a lot more examples of how humans interact in the real world. DragonBot's cellphone makes the platform deployable outside of the laboratory, and the onboard batteries are capable of powering the robot for over seven hours. This makes DragonBot perfect for longitudinal interactions - learning over time and making the experience more personalized. DragonBot is a "blended reality" character - one that can transition between physical and virtual representations. If you remove the phone from DragonBot's face, the character appears on the phone's screen as a full 3D model, allowing for interaction on the go.

I designed and built DragonBot from scratch, building on the lessons I learned creating Nimbus. The robot uses the Android phone for all of its onboard computation, communicating with custom-built cloud services for computationally heavy tasks like face detection or speech recognition. The phone performs motor control, 3D animation, image streaming, data capture, and much more. DragonBot uses a delta parallel manipulator with updated DC motors, custom motor controllers (made by Sigurður Örn), and precision-machined linkages. Two extra motors were added - head tilt (letting the robot look at objects on a tabletop or up at a user) and a wagging tail (which improves children's perception of the robot's animacy).
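The cloud-offload pattern is simple at its core: the phone captures a frame, ships it to a remote service, and gets lightweight results back. Below is a rough Python stand-in for that round trip - the robot's actual client ran on Android, and the endpoint and response format here are hypothetical, not the project's real API:

    import cv2
    import requests

    # Hypothetical service URL and JSON response shape, for illustration only.
    FACE_SERVICE = "http://example.com/detect_faces"

    def detect_faces_remotely(frame):
        """JPEG-encode a camera frame, POST it to the service, return face boxes."""
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            return []
        resp = requests.post(FACE_SERVICE, data=jpeg.tobytes(),
                             headers={"Content-Type": "image/jpeg"}, timeout=2.0)
        resp.raise_for_status()
        return resp.json().get("faces", [])  # e.g. [{"x": .., "y": .., "w": .., "h": ..}]

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(detect_faces_remotely(frame))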

I'm currently using DragonBot to build models of joint attention in human-robot interactions. Most models of attention are based entirely on visual stimuli, but other sensory modalities carry a lot of information about the social dynamics of an interaction. My ongoing work attempts to improve social attention through language, symbolic labeling, pointing, and other non-verbal behavior. Through easy-to-use teleoperation interfaces and intelligent shared autonomy, my Master's thesis aims to make it much easier to "bootstrap" a robot's performance through large datasets of longitudinal interactions.


Playtime Computing

Collaborators: David Robert, Natalie Freed

The Playtime Computing system is a technological platform for a blended-reality, interactive, and collaborative media experience that takes place both on-screen and in the real world as one continuous space. On-screen audiovisual media (e.g., virtual environments and story-world characters) extend their presence into the physical environment through digital projectors, robotics, real-time behavior capture, and tangible interfaces. Player behavior is tracked using 3D motion capture as well as other sensors such as cameras and audio inputs.

Characters in this system can seemingly transition smoothly from the physical world to the virtual on-screen world through a physical enclosure that metaphorically acts as a portal between the virtual and the real. Any events or changes that happen to the physical character in the real world are carried over to the virtual world. Digital assets can be transitioned from the virtual to the physical world. These blended reality characters can either be programmed to behave autonomously, or their behavior can be controlled by the players.

My primary contribution was building the "trans-reality portal", the enclosure that transports the robot between physical and virtual representations. I also wrote the image-stitching code that makes the eight projectors output a continuous environment, using a Gaussian pattern from each projector and a single camera image of the scene to back-calculate the projector positions. This is where I got my first exposure to powerful realtime animation techniques in TouchDesigner, under the guidance of David Robert. I learned a ton about setting up large audiovisual installations, exploiting graphics supercomputers, and building robot houses.
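As a rough illustration of the back-calculation step, here is a small Python/OpenCV sketch that finds where a projector's Gaussian test pattern lands in camera coordinates. It assumes, for simplicity, one captured frame per projector; the real installation worked from a single image of the whole scene and ran inside TouchDesigner rather than OpenCV:

    import sys
    import cv2

    def pattern_centroid(image_path):
        """Locate the brightest spot in a captured frame -- where the projector's
        Gaussian test pattern hits the scene, in camera pixel coordinates."""
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (41, 41), 0)  # smooth away specular noise
        _, peak_val, _, peak_loc = cv2.minMaxLoc(blurred)
        return peak_loc, peak_val

    # One frame per projector, captured while only that projector shows its pattern.
    for path in sys.argv[1:]:
        (x, y), brightness = pattern_centroid(path)
        print(f"{path}: pattern centered at ({x}, {y}), peak {brightness:.0f}")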


electrello

I've wanted an electric cello since I was old enough to realize they existed. My parents, both professional classical musicians, started me on the cello when I was three years old. But I gravitated towards an electric guitar as a rebellious teenager, and since then I've anxiously waited to combine the soothing tones of the cello with the warm hum of a vintage tube amp. When I took Neil Gershenfeld's whirlwind class - How To Make (almost) Anything - I knew I had to design and build a cello to call my own.

Most electric cellos are either too expensive or don't have the same "feel" as a traditional instrument. electrello is a low-cost instrument that retains the feel of a traditional cello while letting the performer move more freely, thanks to the four-bar linkages the player grips with their legs. The bow is outfitted with a wireless accelerometer and a vibration motor, packed into a compact 3D-printed enclosure that fits on any cello bow. The accelerometer records the movements of the bow and can store this data for analysis or use it in realtime. For example, an audio effect - like distortion - could be applied to the sound based on bow speed, intensifying the faster passages of a piece. The vibration motor is primarily an idea for remote lessons, where a teacher can provide haptic feedback to a student in an unobtrusive way.
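To make the bow-speed idea concrete, here is a small Python sketch of how accelerometer activity could steer a distortion effect. It is an illustration only - the thresholds, sample rate, and tanh waveshaper are placeholder choices of mine, not electrello's actual signal chain:

    import numpy as np

    FS = 44100  # audio sample rate (Hz)

    def bow_drive(accel_xyz, lo=0.2, hi=3.0):
        """Map bow activity (RMS of accelerometer magnitude minus gravity)
        to a distortion drive between 1.0 (clean) and 10.0 (heavy)."""
        mag = np.linalg.norm(accel_xyz, axis=1)
        activity = np.sqrt(np.mean((mag - 9.81) ** 2))
        t = np.clip((activity - lo) / (hi - lo), 0.0, 1.0)
        return 1.0 + 9.0 * t

    def distort(audio, drive):
        """Soft-clipping waveshaper; more drive means more saturation."""
        return np.tanh(drive * audio) / np.tanh(drive)

    # Placeholder inputs: a short accelerometer buffer from the bow and an
    # audio block from the piezo pickup. Faster bowing raises the drive.
    rng = np.random.default_rng(0)
    accel = rng.normal(0.0, 1.5, size=(50, 3)) + np.array([0.0, 0.0, 9.81])
    audio = np.sin(2 * np.pi * 220 * np.arange(2048) / FS)
    print("drive:", round(bow_drive(accel), 2))
    processed = distort(audio, bow_drive(accel))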

The body of the instrument contains an Android phone, which can wirelessly communicate with the bow and display relevant information based on the instrument's sound. The integrated microphone can also stream the sound over the internet for a web-based performance or a remote teaching session. A $1.50 piezo and a simple instrumentation amplifier capture the vibrations from the bridge and convert them into an audio signal, and I have plans to add a magnetic coil pickup for a more grungy, distorted tone. Originally, I wanted the phone to act as an effects box and transcription device, but at the time cellphones couldn't handle simultaneous analog-to-digital and digital-to-analog conversion.

More information

The project is well documented on my final project website, and all design files and code can be freely downloaded at the bottom of the page. Video/audio coming soon!

food_clock

food_clock is a graduate student's best friend - it will alert you to free food, and it will tell you BEFORE everyone else finds out. Let me explain: there is a webcam at the MIT Media Lab called FoodCam, which is placed above a table where people bring leftovers and extra food. When food is put on the table, a button on the wall is pushed, which generates an RSS event. An email goes out to all the students, and the frenzy begins. But there are 7 seconds between the RSS event and the email trigger, and that valuable time can make all the difference when there's just a single slice of pepperoni remaining. And so food_clock was born, and so their stomachs were full.
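The core trick is simply to watch the feed directly instead of waiting for the email. A minimal Python sketch of that polling loop follows; the feed URL below is a placeholder, since the real FoodCam feed lives on the Media Lab network:

    import time
    import feedparser

    FEED_URL = "http://example.com/foodcam/rss"  # placeholder, not the real feed
    seen = set()

    while True:
        for entry in feedparser.parse(FEED_URL).entries:
            key = entry.get("id", entry.get("link"))
            if key not in seen:
                seen.add(key)
                # Act on the RSS event directly, beating the ~7 second email delay.
                print("FOOD ALERT:", entry.get("title", "something is on the table"))
        time.sleep(2)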


food_clock was a project for How To Make (almost) Anything.

SnakeBot

More information soon, publication pending

Relevant Publications

Nadia Cheng, Maxim Lobovsky, Steven Keating, Adam Setapen, Katy Gero, Anette Hosoi, and Karl Iagnemma. Design and Analysis of a Robust, Low-cost, Highly Articulated Manipulator Enabled by Jamming of Granular Media. To appear, 2012 IEEE International Conference on Robotics and Automation (ICRA 2012).

Nimbus

Collaborators: Marc Strauss, Hasbro

Nimbus is an exploration into using delta parallel manipulators for highly expressive tabletop robot characters. At the core of the platform lies a four degree-of-freedom delta manipulator, able to move the robot's head in all three translational directions and around a single rotational axis. Parallel manipulators, typically used in manufacturing pick-and-place robots, are also particularly well suited for creating expressive "squash-and-stretch" characters. Because animating the motion of the robot is as simple as controlling a single inverse kinematics handle, even people without animation expertise can easily program believable motions for Nimbus.
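For a sense of why that single handle is enough, here is a compact Python sketch of textbook delta-robot inverse kinematics: given a target effector position, it returns the three shoulder angles (the fourth, rotational degree of freedom is omitted). The geometry constants are placeholders, not Nimbus's actual dimensions:

    import math

    # Placeholder geometry (mm); not Nimbus's real dimensions.
    F, E = 70.0, 30.0     # base and effector triangle side lengths
    RF, RE = 60.0, 120.0  # upper-arm and lower-arm lengths
    TAN30 = math.tan(math.pi / 6.0)

    def _arm_angle(x0, y0, z0):
        """Shoulder angle (degrees) for the arm lying in the YZ plane; z points down."""
        y1 = -0.5 * TAN30 * F          # base joint position
        y0 -= 0.5 * TAN30 * E          # shift target to the effector joint
        a = (x0*x0 + y0*y0 + z0*z0 + RF*RF - RE*RE - y1*y1) / (2.0 * z0)
        b = (y1 - y0) / z0
        d = -(a + b*y1)**2 + RF*(b*b*RF + RF)
        if d < 0:
            raise ValueError("target is outside the workspace")
        yj = (y1 - a*b - math.sqrt(d)) / (b*b + 1.0)   # elbow, outer solution
        zj = a + b*yj
        theta = math.degrees(math.atan(-zj / (y1 - yj)))
        return theta + 180.0 if yj > y1 else theta

    def delta_ik(x, y, z):
        """Three shoulder angles that place the effector at (x, y, z), z below the base."""
        c, s = math.cos(2.0*math.pi/3.0), math.sin(2.0*math.pi/3.0)
        return (_arm_angle(x, y, z),
                _arm_angle(x*c + y*s, y*c - x*s, z),   # frame rotated +120 degrees
                _arm_angle(x*c - y*s, y*c + x*s, z))   # frame rotated -120 degrees

    print(delta_ik(0.0, 0.0, -100.0))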

Nimbus also represents an exploration into robot "furs" that move organically with the kinematic constraints of the platform. Collaborating with engineers on the Soft Goods team at Hasbro, we created a sewing pattern for the platform that preserves the volume of the character while deforming like a balloon being squashed and stretched. Using passive elements like long-pile fur and silicone-cast hands and feet, Nimbus aims to increase believability with a very minimal number of manipulators.

The furry exterior also has fabric capacitive electrodes sewn in, allowing detection of touch and pre-touch in six distinct locations on the robot's body. The robot wirelessly receives information about any people nearby from a hidden Microsoft Kinect, and Nimbus was programmed to mimic any person in front of it. The accompanying video shows the robot's motion, moving along with humans and expressing elation when the human's motion coincides with the robot's.


MDS

Collaborators: Nick dePalma, Sigurður Örn, Jin Joo Lee, Jason Alonso

The MDS platform, which stands for mobile, dexterous, and social, is a humanoid robot designed to interact naturally with people. I spent a few weeks working on Maddox, the newest MDS robot in the fleet. I wrote low-level Linux drivers and calibration code for quick initialization of the robot's motor positions. Through working on MDS, I became familiar with the challenges of animating a highly sophisticated humanoid, solving issues with high-level motion synthesis and low-level motor control.


The Cnidarian

Collaborators: Emma Freed, Natalie Freed, Pol Pla I Conesa, Jie Qi, Xiao Xiao

When Naomi Darian is poisoned by jellyfish venom, she transforms into The Cnidarian - a jellyfish super-villain created for the TEI 2011 Design Challenge. A custom dress outfitted with electroluminescent tentacles and a pulsing hood shrouds the mysterious Cnidarian. She attacks in a flash, with the palms of her gloves housing ultra-bright bulbs from a pair of hacked disposable cameras. I did the electronics for the EL wire, the motion control for the hood, and composed the music for the video.

More information

More information can be found on the Cnidarian project blog.

Telescrapbook

Collaborators: Natalie Freed and Jie Qi

Telescrapbook is a set of wirelessly connected remote sticker books that are both educational and customizable. Telescrapbook presents I/O Stickers: adhesive sensors and actuators that children can use to create personalized remote-communication interfaces. By attaching I/O Stickers to special greeting cards, children can invent ways to communicate with long-distance loved ones through personalized, connected messages. Children decorate these cards with their choice of craft materials, creatively expressing themselves while making a functioning interface. The low-bandwidth connections leave room for children to design not only the look and function, but also the signification of the connections.

Telescrapbook is the wonderful work of Jie Qi and Natalie Freed, who let me help out with some coding and soft-sensor making.

Relevant Publications

Natalie Freed, Jie Qi, Adam Setapen, Hayes Raffle, Leah Buechley, and Cynthia Breazeal. Sticking Together: Handcrafting Personalized Communication Interfaces. 2011 ACM International Conference on Interaction Design and Children (IDC) 2011.


Robot Charging Station

I built a few custom charging stations that charge the batteries inside DragonBot while simultaneously powering the robot from an external supply. Each station charges up to three lithium polymer batteries and provides external power via two 110W power supplies. The lockable charging stations are kid-friendly, with only a single power umbilical terminated in an industrial-grade polarized connector. The front of each station has an LED matrix indicating power levels for the batteries and current draw for the external supplies.


SEEDpower

SEEDpower is an integrated solution for power management and regulation on small to medium-sized robots. With full isolation of logic and motor power sources, the board supports 3-channel input (up to three batteries) and 4-channel output (motor voltage, +12V, +5V, and +3.3V). Any two of the input batteries may be placed in series or parallel (using on-board jumpers), and the output is fully protected with both fuses and flyback diodes. The board supports "plug-and-play" charging, using an onboard relay to switch to an external supply whenever the robot is plugged in.

The SEEDpower board delivers juice to both DragonBot and the Huggable - two projects from the Personal Robots Group.


MARIONET

MARIONET, or Motion Acquisition for Robots through Iterative Online Evaluative Training, is a framework I developed with my undergraduate and master's adviser, Dr. Peter Stone.

Although machine learning has improved the rate and accuracy at which robots are able to learn, there still exist tasks for which humans can improve performance significantly faster and more robustly than computers. While some ongoing work considers the role of human reinforcement in intelligent algorithms, the burden of learning is often placed solely on the computer. These approaches neglect the expressive capabilities of humans, especially our ability to quickly refine motor skills. In this work, we propose a general framework for Motion Acquisition for Robots through Iterative Online Evaluative Training (MARIONET). Our paradigm centers on a human in a motion-capture laboratory who "puppets" a robot in realtime. This mechanism allows for rapid motion development for different robots, with a training process that provides a natural human interface and requires no technical knowledge. Fully implemented and tested on two robotic platforms (one quadruped and one biped), our research has demonstrated that MARIONET is a viable way to directly transfer human motor skills to robots.
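The papers below describe the full framework; as a toy illustration of the realtime puppeting step, here is a Python sketch that turns one motion-capture frame into robot joint targets by measuring angles between markers. Marker names, the joint mapping, and the sample data are all hypothetical, and the iterative evaluative-training loop that sits on top is not shown:

    import numpy as np

    def joint_angle(a, b, c):
        """Angle at marker b formed by markers a and c (e.g., an elbow angle)."""
        u, v = a - b, c - b
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    def retarget(frame):
        """Map one mocap frame (marker name -> 3D position) to robot joint targets."""
        return {
            "right_elbow": joint_angle(frame["r_shoulder"], frame["r_elbow"], frame["r_wrist"]),
            "left_elbow":  joint_angle(frame["l_shoulder"], frame["l_elbow"], frame["l_wrist"]),
        }

    # One hypothetical frame; a real session would stream frames from the mocap
    # system and send the resulting targets to the robot's motors at a fixed rate.
    frame = {
        "r_shoulder": np.array([0.0, 1.4, 0.0]),
        "r_elbow":    np.array([0.3, 1.1, 0.0]),
        "r_wrist":    np.array([0.3, 0.8, 0.2]),
        "l_shoulder": np.array([0.0, 1.4, 0.4]),
        "l_elbow":    np.array([-0.3, 1.1, 0.4]),
        "l_wrist":    np.array([-0.3, 0.8, 0.2]),
    }
    print(retarget(frame))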

Relevant Publications

Adam Setapen, Michael Quinlan, and Peter Stone. Beyond Teleoperation: Exploiting Human Motor Skills with MARIOnET. In AAMAS 2010 Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Toronto, Canada - May 2010.

Adam Setapen, Michael Quinlan, and Peter Stone. MARIOnET: Motion Acquisition for Robots through Iterative Online Evaluative Training (Extended Abstract). In The Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, May 2010.

Adam Setapen. Exploiting Human Motor Skills for Training Bipedal Robots. Undergraduate Honors Thesis/Technical Report HR-09-02. Committee: Peter Stone (chair), Dana Ballard, Gordon Novak.


TRACBot

Collaborators: Aaron Hill, Dr. Patrick Beeson, Dr. David Kortenkamp

TRACBot is a differential-drive robot I built from the ground up while interning at TRACLabs, Inc. I designed the robot to work with the Player/Stage/Gazebo software stack, the predecessor to the now-popular ROS framework. I integrated a wide variety of sensors such as LIDAR, thermal sensors, infrared rangers, cameras, and microphones. I also helped design the software architecture to exploit this rich sensory data. After my internship ended, I was hired as a part-time programmer to fabricate simulated 3D models and environments for the robot. Working on TRACBot exposed me to problems in robotics I might never encounter in academia, and it was an incredible learning experience.


Portable, Inexpensive, and Unobtrusive Accelerometer-based Geriatric Gait Analysis

Collaborators: Chris Gutierrez, Dr. Mark Williams

In the summer of 2007, I was chosen for an NSF Research Experience for Undergraduates at the University of Virginia focusing on computing in medicine. In a joint venture between the Department of Computer Science and the School of Medicine, I spearheaded a project titled "Portable, Inexpensive, and Unobtrusive Accelerometer-based Geriatric Gait Analysis." Collaborating closely with a gerontologist, Dr. Mark Williams, we attached wireless accelerometers to the ankles, wrists, and waists of geriatric patients and recorded their walking movements. Using signal processing and supervised machine learning techniques, we were able to detect conditions such as Alzheimer's, spastic hemiparesis, and spastic paraparesis with surprising accuracy. We also developed GaitMate, a tool to help physicians use this machine learning output for diagnosis in clinical gait analysis. Dr. Williams has continued to build on my work and plans to release a commercial version in the near future. Applications of this research include prediction and confirmation of geriatric disorders, telemedicine, and long-term analysis.
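The original pipeline was written in MATLAB; the Python sketch below only illustrates the general shape of such a supervised approach - windowed accelerometer data, simple per-axis features, and a classifier - using random stand-in data rather than any patient recordings:

    import numpy as np
    from scipy.signal import welch
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    FS = 100  # hypothetical sampling rate (Hz)

    def window_features(window):
        """Per-window features: mean, std, and dominant frequency for each axis."""
        feats = []
        for axis in window.T:
            f, pxx = welch(axis, fs=FS, nperseg=min(128, len(axis)))
            feats += [axis.mean(), axis.std(), f[np.argmax(pxx)]]
        return feats

    # Stand-in data: (n_windows, samples, 3 axes) plus binary labels. In practice
    # each window would come from the ankle/wrist/waist accelerometers.
    rng = np.random.default_rng(0)
    X_raw = rng.normal(size=(200, 256, 3))
    y = rng.integers(0, 2, size=200)

    X = np.array([window_features(w) for w in X_raw])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("cross-validated accuracy (stand-in data):", cross_val_score(clf, X, y, cv=5).mean())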


Code

Here are some interesting projects I completed during my time at the University of Texas at Austin.

Operating System

A bootable x86 operating system I developed with my good friend Jose Falcon for my undergraduate operating systems course.

SLAM Simulator

A Simultaneous Localization and Mapping (SLAM) simulator designed in my graduate robotics course. It displays a map of a mobile robot's probabilistic position in its environment using a particle filter.
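The simulator itself isn't posted here, but the particle filter it relies on is easy to sketch. Below is a minimal Python version of one filter step - predict, weight by measurement likelihood, resample - for a robot localizing against known landmarks; the map, noise levels, and poses are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    LANDMARKS = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
    N = 1000
    particles = rng.uniform([0.0, 0.0], [10.0, 8.0], size=(N, 2))
    weights = np.full(N, 1.0 / N)

    def predict(particles, motion, noise=0.1):
        """Move every particle by the commanded motion plus Gaussian noise."""
        return particles + motion + rng.normal(0.0, noise, particles.shape)

    def update(particles, weights, ranges, sigma=0.5):
        """Re-weight particles by how well their predicted ranges match the measurement."""
        w = np.ones(len(particles))
        for lm, z in zip(LANDMARKS, ranges):
            d = np.linalg.norm(particles - lm, axis=1)
            w *= np.exp(-0.5 * ((d - z) / sigma) ** 2)
        w *= weights
        w += 1e-300  # guard against all-zero weights
        return w / w.sum()

    def resample(particles, weights):
        """Draw a new particle set in proportion to the weights."""
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # One filter step for a robot that actually sits at (3, 4) after moving (1, 0).
    true_pose = np.array([3.0, 4.0])
    ranges = np.linalg.norm(LANDMARKS - true_pose, axis=1) + rng.normal(0.0, 0.2, len(LANDMARKS))
    particles = predict(particles, np.array([1.0, 0.0]))
    weights = update(particles, weights, ranges)
    particles, weights = resample(particles, weights)
    print("estimated pose:", particles.mean(axis=0))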

Genetic Algorithm

A genetic algorithm for learning to play keepaway in the robotic soccer domain. I designed this algorithm in my undergraduate course, Autonomous Multiagent Systems.

Pipelined Processor

A pipelined processor designed and implemented (using an extended version of the LC-3 architecture) for my undergraduate computer architecture course with Daniel Chimene.

  • Language: Verilog

  • Requirements: Verilog simulator (such as VCS)


Fun with Lambda Calculus! A monadic parser in Haskell.

A monadic parser, typechecker, and evaluator for a simply-typed lambda calculus augmented with booleans, natural numbers, fixpoints, and references.

RSA in Java

An efficient Java implementation of the RSA encryption/decryption protocol.