Sofia Design
UX · UI Design Portfolio

Blippar

The world we see is a treasure trove of information ready to be unlocked. Blippar is an app that identifies the things around you and surfaces content you might find useful in your context.


 

Overview

Blippar is a leading augmented reality (AR) company. The company is renowned for creating consumer AR experiences for big clients such as Coca-Cola, Max Factor and Emirates.

Previously, Blippar operated on an agency model; now it is striving to become a consumer product in its own right, used by millions of people each day.

 

Team & Role

UX Designer in a Human Interface team of three. Partners: Manuel Colom (UX Lead) and Richard Picot (Mid-Weight UX Designer).

Duration: 3 months
Tools: Pen & Paper, Sketch, Principle, Invision, Motion 5, Final Cut Pro

 


The Brief

Blippar's tech stack includes a proprietary 3D engine used to deliver cross-platform AR experiences, a knowledge graph with content on several million entities and a computer vision layer capable of recognising millions of objects in the world.

  • Bring all three elements together in one unifying experience, allowing a user to point their smartphone at anything and have it recognised, display content in the AR space, and allow for further exploration.
  • The solution should be able to accommodate user profiles (a feature to be added later that would make users recognisable to the Blippar app).
  • The app should be built upon the existing tech stack, work across iOS and Android, and be designed and developed within 3 months.
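
As a rough illustration of how the three stack layers might compose into that unifying experience (all protocol and type names here are hypothetical, not Blippar's real engine API):

```swift
import Foundation

// Hypothetical composition of the three stack layers described above.
protocol VisionLayer {
    func recognise(frame: Data) -> String?        // identified entity, if any
}
protocol KnowledgeGraph {
    func content(for entity: String) -> [String]  // facts to surface
}
protocol AREngine {
    func render(cards: [String])                  // draw content in AR space
}

struct BlippExperience {
    let vision: VisionLayer
    let graph: KnowledgeGraph
    let engine: AREngine

    // Point the camera at anything: recognise it, fetch related content,
    // and display it in the AR space for further exploration.
    func handle(frame: Data) {
        guard let entity = vision.recognise(frame: frame) else { return }
        engine.render(cards: graph.content(for: entity))
    }
}
```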

 

User & Audience

The goal was to create a product that might be useful to anyone looking to identify something they see and gather information about it - a Google for the visual world. However, we worked off the assumption that the primary users for such a product would initially be tech early adopters, Millennials and Gen-Z.

 


What We Knew

From prior research, we were able to use the following insights to keep our ideas grounded:

  1. People are generally overwhelmed by large amounts of content
  2. Don't state the obvious
  3. Optimise for 2 minutes (max) of engagement
  4. People don't want endless exploration on a mobile phone
  5. Personalisation is important


 

 

Key Challenges

Interaction

Working in AR is an interesting challenge in that its interaction conventions are yet to be defined. Established interaction design principles still apply, but many key problems had never been tackled before and would require validation. These details needed to be defined quickly.

How willing are users to consume content in the AR space? How will the user receive feedback from the recognition process? How should visual recognition errors be handled?

Interacting with content in AR can be challenging: how could we avoid users having to point their phones at things in uncomfortable positions whilst maintaining a strong AR element throughout?

Validation

The second challenge was validating these ideas. In UX, it is common to mock up a UI and validate it with a quick test in Invision. With AR, everything happens in the camera, so it can be very difficult to produce prototypes without the time-consuming process of taking them into code.

This made progressively validating ideas even trickier under the time constraints. Instead, we logged assumptions during the design process and would work with data scientists to validate or disprove them later.

Content

Presenting content in AR was yet another challenge. It had to be short and relevant, displayed in a way that would accommodate recognition errors, and allow users to dive deeper should they wish.

Tech

We had plenty of ideas that we felt could work, but the final challenge was finding one that could operate within the technological constraints of the Blippar AR engine.

Imagining augmented content

Ideation

It is very easy to create a picture in your mind of what content in AR might look like - for instance, something like the image above, perhaps?

However, current technology is still limited; constraints on tracking and localising an object within the camera view meant many of our ideas were not yet technically feasible.

Through this process, we ended up with a rough outline for a journey that would allow for varied content density and meet the project goals. 

Varied levels of information at each point of a user's journey

Stakeholder feedback pushed the team to introduce more layers of content whilst remaining in the camera view. The cards alone were not engaging enough, so we introduced the idea of an AR heads-up display (HUD) for specific content verticals that might benefit from it.

With the aim of resonating with our target users, we also used this opportunity to introduce a number of entertaining models that could serve a purpose in the AR space. These included displaying the current weather conditions over your head, promotional materials and characters for treasure hunts.

The Solution

The solution we created was a card-based system. The user points their camera at an object and is presented with a small card identifying it. Following this, a number of content cards display bite-sized information. The user could tap on any of these and continue to learn about the object on a separate, detailed page.

Cards would stack below the screen into a history, allowing a user to retrieve information without having to hold their arm up to the object they were previously scanning.
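
As a rough sketch of the card model and history stack described above (all names and fields here are hypothetical, not Blippar's actual implementation):

```swift
import Foundation

// Hypothetical sketch of the card-based system described above.
struct RecognitionCard {
    let entityName: String  // the identified object's name
    let summary: String     // bite-sized fact shown on the card
    let detailURL: URL?     // link to the separate detail page
}

// Cards stack into a history so a user can review content without
// holding their arm up to the object they were scanning.
final class CardHistory {
    private(set) var cards: [RecognitionCard] = []

    func push(_ card: RecognitionCard) {
        cards.append(card)
    }

    // Most recent card first, for display in the history tray.
    var newestFirst: [RecognitionCard] { Array(cards.reversed()) }
}
```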


Learnings and Future Iterations

Our solution met all the project requirements; however, a number of learnings surfaced:

- General feedback has been that cards trigger too easily; it can be overwhelming to point the app at something and have a stream of information displayed
- People enjoy the novelty of experiencing content in the AR space
- Users aren't particularly interested in exploring the deeper information page of an object
- Trying to replicate flat UI elements such as table views in a 3D engine creates a rift between user expectations and the technical implementation
- Attempting to recognise millions of objects causes a large number of false positives in general use

Future Iterations:
- Scale back the number of recognisable objects, improving performance on those with key use cases and reducing the likelihood of false positives
- Introduce a buffer that requires user input to start the recognition process rather than having it trigger automatically (see the first sketch below)
- Leverage new AR technologies such as Apple's ARKit to let HUDs track to their point of origin, removing the need for information cards that end up in a history stack (see the second sketch below)
- Don't attempt to keep users in the app past the AR experience; hand them off to the web should they want to learn more


Looking Back...

This project was a particularly challenging one; in many ways it was almost impossible to implement what was required in the given timeframe.

Defining an appropriate MVP to test our assumptions, and launching that, would have produced a much more focused and reliable product that we could have iterated on more quickly based on insights.

I learnt that, in the right scenarios, going high-fidelity quickly can be extremely helpful when communicating ideas to team members. It also encourages detailed thinking sooner, helping to lower the chance of key design details being overlooked because too much time was spent wireframing. In this project, we jumped straight from whiteboard sketches into high-fidelity mockups. This approach may not be right for every project, but it was extremely helpful in this situation.