As robots begin to appear in people's everyday lives, it's essential that we understand natural ways for humans and machines to communicate, share knowledge, and build relationships. For years, researchers have tried to make robots more socially capable by inviting human subjects into a laboratory to interact with a robotic character for a few minutes. But the laboratory lacks the complexity of the real world, and it rules out any genuinely long-term interaction. Enter the modern smartphone, which packs the essential functionality for a robot into a tiny, always-connected package. The DragonBot platform is an Android-based robot built specifically for social learning through real-world interactions.
DragonBot is all about data-driven robotics. If we want robots capable of social interaction, we simply need many more examples of how humans interact in the real world. DragonBot's phone makes the platform deployable outside the laboratory, and the onboard batteries can power the robot for over seven hours. This makes DragonBot well suited to longitudinal interactions, in which the robot learns over time and personalizes the experience. DragonBot is a "blended reality" character: one that can transition between physical and virtual representations. If you remove the phone from DragonBot's face, the character appears on the phone's screen as a full 3D model, allowing interaction on the go.
I designed and built DragonBot from scratch, building on the lessons I learned through creating Nimbus. The robot uses the Android phone for all of its onboard computation, communicating with custom-built cloud services for computationally heavy tasks like face detection or speech recognition. The phone performs motor control, 3D animation, image streaming, data capture, and much more. DragonBot uses a delta parallel manipulator with updated DC motors, custom motor controllers (made by Sigurður Örn), and precision-machined linkages. Two extra motors were added: head tilt (letting the robot look at objects on a tabletop or up at a user) and a wagging tail (which improves children's perception of the robot's animacy).
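A delta parallel manipulator positions its platform by driving three arms arranged 120° apart, so commanding a pose means solving the standard delta inverse-kinematics problem. As an illustration only (DragonBot's actual control code runs on the phone and is not shown here), the sketch below implements the textbook delta IK solution in Python; the function name and the link-length defaults are placeholders, not DragonBot's real dimensions.

```python
import math

def delta_ik(x, y, z, f=0.069, e=0.026, rf=0.052, re=0.100):
    """Inverse kinematics for a symmetric 3-arm delta manipulator.

    f:  side length of the base equilateral triangle (m)
    e:  side length of the end-effector triangle (m)
    rf: upper (driven) arm length (m)
    re: lower parallelogram link length (m)
    Returns the three motor angles in radians (0 = horizontal arm);
    raises ValueError if the target is out of reach.
    (Placeholder geometry, not DragonBot's real dimensions.)
    """
    def arm_angle(x0, y0, z0):
        # Work in one arm's plane: base joint sits at (0, y1, 0).
        y1 = -f / (2.0 * math.sqrt(3.0))
        # Shift the target to the effector's attachment point for this arm.
        y0 -= e / (2.0 * math.sqrt(3.0))
        # Intersect the upper-arm circle with the lower-link sphere:
        # the knee lies on the line z = a + b*y.
        a = (x0*x0 + y0*y0 + z0*z0 + rf*rf - re*re - y1*y1) / (2.0 * z0)
        b = (y1 - y0) / z0
        d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)  # discriminant
        if d < 0:
            raise ValueError("target unreachable")
        yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1.0)  # knee position
        zj = a + b * yj
        return math.atan2(-zj, y1 - yj)

    angles = []
    for phi in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0):
        # Rotate the target point into each arm's local frame.
        xr = x * math.cos(phi) + y * math.sin(phi)
        yr = -x * math.sin(phi) + y * math.cos(phi)
        angles.append(arm_angle(xr, yr, z))
    return angles
```

By symmetry, a target directly below the base center (e.g. `delta_ik(0.0, 0.0, -0.08)`) yields the same angle for all three motors, which is a handy sanity check for any delta IK implementation.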
I'm currently using DragonBot to build models of joint attention in human-robot interactions. Most models of attention are based entirely on visual stimuli, but other sensory modalities carry a great deal of information about the social dynamics of an interaction. My ongoing work attempts to improve models of social attention by incorporating language, symbolic labeling, pointing, and other non-verbal behavior. Through easy-to-use teleoperation interfaces and intelligent shared autonomy, my Master's thesis aims to make it much easier to "bootstrap" a robot's performance through large datasets of longitudinal interactions.
Rapid Prototyping, SolidWorks, Eagle PCB, aesthetic / fabric design, Android / Java, C, Python, OpenCV, Machine Learning, Cloud architectures
Adam Setapen. Creating Robotic Characters for Long-Term Interaction. Master's thesis, MIT. Readers: Cynthia Breazeal, Rosalind Picard, David DeSteno.