Create digital from physical
June 2015 – ongoing for Microsoft
How many times a day do we snap a photo, or look at videos that other people have shared?
We do this so regularly now that I sometimes forget how different this experience was just a few years ago.
And I love that while the cameras and sensors we use keep evolving, our intentions
(to document and share particular moments in time) have remained mostly constant.
In July 2015, I joined a three-person team tasked with envisioning and realizing new 3D capture and mixed reality products.
The challenge: Assuming a future where 3D content and VR/AR/MR would be pervasive, how might people capture,
remix/modify, view, and share the things that matter to them?
A note about this case study:
Over the course of this project, I served as both technical design and product design lead, responsible for
operationalizing the creative team and collaborating with directors to define the product vision. I shepherded the project and team as it grew from
three people to forty-five.
This case study focuses on work done from mid 2017 onwards, when I led a shift in the product in response to changes in strategy.
Understanding Capture Intentions
What are people really trying to do, and how could it be easier?
Working with product partners across the company, we started by focusing on the needs of our target customers,
conducting on-site research with a diverse set of customer groups.
We then created a framework to help our executive stakeholders better
understand people's motivations, behaviors, and pain points around 3D capture.
Given our audience's pain points around 3D capture, we developed a set of scenarios and worked with stakeholders to prioritize them against
the goals of Windows (our parent organization) and the 3D For Everyone initiative. The executive team settled on a scenario in which a user could easily create models from the
physical world using their mobile device.
Teaching New Capture Behaviors
Our goal was to make the experience as predictable, forgiving, and enjoyable as possible.
Drawing from familiar capture workflows, we tried out several different approaches before settling on our current architecture.
One of our biggest design challenges was teaching people an entirely new behavior in real time,
while the underlying tracker was still in development. We needed to give users enough guidance to understand what
they needed to do to capture successfully, without overwhelming them.
To this end, we focused on timely, visual feedback and building a mental model where the user felt that
their actions were driving the capture results, and where their movements mapped 1:1 with what they saw on screen and in the resulting model.
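To make the feedback idea concrete, here is a minimal, hypothetical sketch (not our production code) of one way real-time capture guidance can work: bucket the angles from which the user has already captured the subject, so the UI can render a progress indicator and point toward arcs of the orbit that still need scanning. The function name, binning scheme, and angle convention are illustrative assumptions.

```python
def coverage_feedback(view_angles, bin_width_deg=30):
    """Bucket the yaw angles (degrees, 0-360) at which frames were captured
    into fixed-width bins around the subject, and report which arcs of the
    capture orbit are still missing.

    Returns (coverage_fraction, missing_bin_starts) so a UI layer could
    render a progress ring and nudge the user toward uncovered arcs.
    """
    n_bins = 360 // bin_width_deg
    # Map each captured angle to its bin index.
    seen = {int(a % 360) // bin_width_deg for a in view_angles}
    # Bins with no captured frames, reported by their starting angle.
    missing = [b * bin_width_deg for b in range(n_bins) if b not in seen]
    return len(seen) / n_bins, missing
```

For example, after capturing at 0°, 35°, and 70°, three of twelve 30° bins are covered, and the UI would direct the user toward the arc starting at 90°.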
Using VR to prototype and test AR
We created many lo-fi prototypes in order to quickly test out our assumptions and push our technology further,
taking the most promising prototypes forward into the product and iterating. My team uses whatever tools suit the goals and timeline,
so prototypes might be sketches, paper prototypes, UI clickthroughs, videos, or full 3D interactive simulations.
To test real-time interactions that relied heavily on user movement, we used off-the-shelf VR headsets
(Windows Mixed Reality and Oculus) to prototype our Mixed Reality interactions. This allowed us to rapidly build and test an ideal experience without
needing finished hardware or software, and helped us communicate our experience goals. It also enabled us to measure and study the effect of feedback on
user motion and its subsequent impact on model quality.
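As an illustration of the kind of measurement such prototypes enable, here is a hedged sketch of a simple motion-smoothness metric one might compute from logged headset positions to compare feedback conditions. The function, metric, and sampling scheme are illustrative assumptions, not our actual instrumentation.

```python
def motion_smoothness(positions, dt):
    """Crude smoothness metric: mean magnitude of the discrete acceleration
    along a trajectory of 3D positions sampled every dt seconds.
    Lower values suggest steadier scanning motion.

    positions: list of (x, y, z) tuples logged from the headset.
    dt: sample interval in seconds.
    """
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def mag(v):
        return sum(c * c for c in v) ** 0.5

    # Finite-difference velocities, then accelerations.
    vels = [sub(p1, p0) for p0, p1 in zip(positions, positions[1:])]
    accs = [sub(v1, v0) for v0, v1 in zip(vels, vels[1:])]
    if not accs:
        return 0.0
    return sum(mag(a) for a in accs) / (len(accs) * dt * dt)
```

A perfectly steady sweep scores 0, while hesitant stop-and-go motion scores higher, giving a single number to compare across prototype feedback variants.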
It has been both hugely challenging and rewarding to work on a project so tightly coupled with research technology. I love
the uncertainty and experimentation that come with creating new mixed reality/holographic
experiences while being at the forefront of computer vision research for tracking and reconstruction. As the far
future becomes the near future, I'm looking forward to seeing these tools enter people's lives and become part of their regular workflows.
There are many things I'd love to change and improve, but all this is speculation until we see our products in the market.
I've learned so much from this project, and look forward to learning more once the project is released.