Height | mm
Width | mm
Depth | mm
Weight | g
Materials |
Manufacturing method | 3D printing
Processor |
Sensors |
Our proof of concept uses mobile sensor systems to extend the capabilities of current intelligent personal assistants (IPAs), while using nonverbal communication cues to suggest emotion.
Should we care how we treat AIs? They certainly don't care; they're machines with no emotions. Yet there is pushback that says yes: "the tone of voice that we use in a growing percentage of our daily interactions matters—not for Alexa, not for Amazon, not for Apple, but for us." Amazon's contribution to this topic was a direct response to parents worried about smart speakers affecting children's social development: the newly announced Echo Dot Kids Edition includes a "Magic Word" feature that encourages words like 'please' when kids ask questions. But should our systems be encoded to force a level of politeness and respect from us? Perhaps the underlying issue is actually a lack of empathy. We built a prototype with animated responses because nonverbal cues are a rich medium for emotional expression. An emotional relationship has room for empathy, and empathy has the potential to cultivate more intuitively positive communication.
Beyond emotive animation, we had to understand the social issues surrounding the role of the home assistant, because the project proposes something that is at least functionally equivalent. We ask questions like "are the branded associations and cultural connections of our current home assistants problematic?" because, as Rachel Withers writes in a Slate article, "it's as if missing housewives are being replaced with more servile smart-home wives, reinforcing the cultural connection between 'women' and 'subservience' to boot." By creating emoto we establish a different category of 'character' assistant, rather than something purely speech based that is easily anthropomorphized, with all the cultural implications that carries.
We really resonated with something Golden Krishna, author of the book "The Best Interface Is No Interface", said in an interview: "Let's work towards escaping this world of screen-zombies." Emoto provides an alternative to the passive content-feed consumption plaguing our every idle moment; it's a defined place and reason to set your phone down at home. The idea of the user's smartphone changing roles into emoto raises exciting interaction design challenges because it completely changes how users might engage with content on their phones. How is content consumed through a character? Would emoto present notifications at set intervals, or see that the user is busy and delay distractions? Could it know that rain made traffic slow, so the dinner reservation should be moved back? Having more contextually relevant functionality beyond basic phone abilities also implies rethinking communication between third-party apps. Emoto as a physical prototype shows how an endearing and expressive home assistant could change the way we relate to AIs and smartphones. The interaction design implies AI software taking on app management. Conceptually, it provokes important questions about human-AI communication, how the character design of intelligent assistants affects behavior, and the paradigm shift of interaction moving from mobile devices out towards intelligent environments.
Current home assistants, smart speakers, and robots (MIT's Jibo, Baidu's Raven) don't take advantage of the AI in your pocket: your phone. Conversational UIs aren't emotional, and at technology's current level an emotional voice would be uncanny, which is why we explore nonverbal animated expression. Beyond that, the unique combination of robot, display, and sensor systems extends the capabilities of current IPAs and drives interesting new ways for users to interact with content on their phones.
Regarding embodying the concept of an AI as a sidekick rather than an assistant: an AI assistant presented with a human voice comes with certain expectations. Because our project is branded as a whole new category of character rather than a 'human' assistant, it's a relatively blank canvas, and users infer and project meaning as they get to know and interact with emoto. Initial frustrations with an NLP system like Siri come from expecting more intelligence than the technology currently provides.
We specifically don’t intend for emoto to be anthropomorphized into a person. Instead, we drew inspiration from endearing pop culture examples where viewers attached emotional value to intrinsically robotic characteristics. The qualities of the motors we used are apparent in emoto’s quick head-tilts. Welcoming the motor as the material creates motions that feel inherently robotic.
By combining robotic hardware, the smartphone's display, and its sensor systems (dual cameras, IR sensors, gyroscope, accelerometer, magnetometer, etc.), we open the door for more dynamic two-way communication; body language and gestures become available. Can you imagine the dynamics of a FaceTime call if you directed the camera angle with gesture or voice?
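As a rough sketch of that idea, the snippet below maps a hand position reported by some vision pipeline to a pan-servo angle, the kind of loop that could let a gesture steer the camera during a video call. The function name, angle range, and the idea of a normalized hand position are all assumptions for illustration, not part of emoto's published software.

```python
# Hedged sketch: steer a pan servo from a (stubbed) hand-detection result.
def hand_x_to_pan_angle(hand_x_norm: float,
                        pan_min_deg: float = 45.0,
                        pan_max_deg: float = 135.0) -> float:
    """Map a normalized hand x-position (0.0 = left edge of the camera frame,
    1.0 = right edge) to a pan-servo angle."""
    hand_x_norm = max(0.0, min(1.0, hand_x_norm))  # clamp to the frame
    return pan_min_deg + hand_x_norm * (pan_max_deg - pan_min_deg)


# Example: a hand detected two-thirds of the way across the frame
print(hand_x_to_pan_angle(0.66))  # -> roughly 104 degrees
```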
Once docked, your phone takes on the role of AI sidekick. While docked, a user can engage with content on their phone in a more meaningful and expressive way. Imagine you have emoto stationed in the kitchen with you, and as you're preparing a meal emoto is following along. While current smart home assistants might read instructions aloud as the user prompts them, emoto would instead be able to observe the steps being taken and proactively suggest the next ones rather than waiting for further instruction. Because emoto's settings can be toggled to access calendar apps, it can anticipate when the user is ready to leave home and ambiently transition from the active and aware character of emoto back to the smartphone the user is familiar with.
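The docked/undocked role change described above can be thought of as a small state machine. The sketch below is one way to frame it, with dock detection and the calendar lookup stubbed out; the triggers named here (charging state, an upcoming event) are assumptions, not a specification of how emoto actually decides.

```python
# Minimal sketch of the phone's role change when docked in emoto.
from enum import Enum, auto


class Role(Enum):
    PHONE = auto()      # ordinary smartphone, undocked
    SIDEKICK = auto()   # docked: the animated, proactive emoto character


def next_role(current: Role, docked: bool, leaving_soon: bool) -> Role:
    """Docking switches the phone into the sidekick role; an upcoming calendar
    event eases it back to a plain phone before the user heads out."""
    if not docked:
        return Role.PHONE
    if leaving_soon:
        return Role.PHONE  # ambient hand-off back to the familiar smartphone
    return Role.SIDEKICK


print(next_role(Role.PHONE, docked=True, leaving_soon=False))    # Role.SIDEKICK
print(next_role(Role.SIDEKICK, docked=True, leaving_soon=True))  # Role.PHONE
```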
Contains videos and case studies.
The instructions to build the hardware for your own phone-charging robot.
The software to control the Emoto robot.
These are the .stl files to make your own Emoto.