Project Mobii, short for Mobile Interior Imaging, was a collaboration between Intel and Ford to explore how drivers interact with technology in the car and how that interaction can be made more intuitive and predictive.
Ford currently uses exterior vehicle cameras for driver-assist features such as lane-keeping assist and lane departure warning. The Mobii research examined new applications for interior cameras, including driver authentication. The use of facial recognition software offered improved privacy controls and enabled Mobii to identify different drivers and automatically adjust features based on each individual's preferences.
Use Cases and Overall Flow:
- Upon entering the vehicle, the driver is authenticated by Project Mobii through a front-facing camera using facial recognition software.
- The in-car experience is then personalized to display information specific to that driver, such as calendar, music and contacts.
- If Project Mobii detects a passenger in the car, a privacy mode activates to display only navigation.
- If Mobii does not recognize the driver, a photo is sent to the primary vehicle owner’s smartphone. That owner can then set permissions and specify features that should be enabled or disabled.
- If the driver is the child of the vehicle owner, for example, restrictions could be automatically set to require safety belt use and to limit speed, audio volume or mobile phone use while driving.
- Gesture recognition software enables intuitive interaction for the driver. A combination of natural gestures and simple voice commands can simplify tasks such as turning the heat up and down or opening and closing the sunroof while driving.
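To make the flow above concrete, here is a minimal sketch of the decision logic it implies. This is purely illustrative; Mobii's actual implementation is not public, and every name here (`Driver`, `decide_display_mode`, the mode strings) is hypothetical.

```python
# Illustrative sketch only: Project Mobii's real software is not public.
# All class and function names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Driver:
    name: str
    recognized: bool  # did facial recognition match a known profile?


def decide_display_mode(driver: Driver, passenger_present: bool) -> str:
    """Choose what the dashboard shows, following the flow described above."""
    if not driver.recognized:
        # Unknown driver: a photo would be sent to the vehicle owner's
        # smartphone, and the system waits for permissions to be set.
        return "await_owner_permission"
    if passenger_present:
        # Privacy mode: display only navigation.
        return "navigation_only"
    # Recognized driver, alone: personalized view (calendar, music, contacts).
    return "personalized"
```

For example, `decide_display_mode(Driver("Alex", recognized=True), passenger_present=True)` yields `"navigation_only"`, matching the privacy-mode behavior described above.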
As part of this project, I collaborated with the team on interaction design, but my main contribution was as the lead UX researcher. Given the project's length and complexity, I designed a phased approach:
- The first phase was a concept research exploration to define the usages and gain insights on the direction of the ideas.
- The second phase was conducted in a vehicle, but with a low-fidelity Wizard of Oz prototype.
- The third and final phase used a high-fidelity prototype built into the vehicle. For this phase we travelled to Detroit and worked alongside Ford researchers and engineers. To close out the project, we reported the findings to Ford and Intel senior leaders and moderated a discussion on the impact of the findings as well as next steps.
The Mobii research was a collaboration between Intel ethnographers, anthropologists and engineers alongside Ford research engineers, and incorporated perceptual computing technology to offer a more enjoyable and intuitive vehicle experience.
“The use of interior imaging is purely research at this point,” Mascarenas said. “However, the insights we’ve gained will help us shape the customer experience in the long term.”
As the lead researcher on the team, I helped uncover and communicate those insights. It is very gratifying to have such a deep impact on the future of this space, and to receive recognition from senior leaders in the industry.
I learned a lot during this project, including how to conduct research in freezing temperatures, with systems that kept falling, a poor internet connection, and a culture very different from the one I work with in my day-to-day job. I made many friends in Detroit and at Ford, and I am looking forward to seeing some of these neat features in their vehicles soon!
Here’s the official video explaining Mobii in its entirety: