FLEX AI: Zero-to-one experience and interface design of a security camera company’s cloud-based AI model creation tool

Company: Hanwha Vision America

Hero image for FLEX AI
Project Summary

I led the design and launch of a product focused on ease of use, despite being unable to conduct any user research.

Timeline & Team

Sep 2023 – June 2024
(10 months)

Responsibilities
  • Experience design

  • Visual design

  • Clickable prototype

  • Adapting to technical limitations

  • Ensuring a cohesive experience and look and feel

Results
  • First of 4 teams to be ready for official deployment

  • Beta client purchased $600,000 worth of cameras to implement FLEX AI in their security system

Company Overview

Hanwha Vision is a B2B video surveillance company headquartered in Korea, and one of the top surveillance solution providers in the United States.

They provide high-quality security cameras and smart software that helps businesses monitor their properties. Their products help prevent theft, improve safety, and offer peace of mind by allowing users to keep an eye on their premises in real-time or review recorded footage later.

Hoping to pivot from hardware manufacturer to software solutions provider, Hanwha Vision wanted to create a portfolio of cloud-based tools that their customers can subscribe to.

Process

Research

  • Stakeholder interviews

  • Understanding competitors

Milestone 1: Proof of concept

  • Happy path of creating a single model

Milestone 2: MVP

  • Retrain and compare models

  • Adopting HQ's design system

Milestone 3: MMP

  • Integration with other Hanwha cloud products

  • Product licensing

  • Ease of use

Research

Understanding the concept behind Hanwha's existing WiseDetector

We conducted multiple stakeholder interviews with our AI team, led by Paul Lee and Huan-Yu Wu, asking questions such as:

  • What information does the AI team need from the user to create a model?

  • How do we make a more accurate model?

  • How does footage from day vs night affect the model?

  • How many images are needed to create a model?

team call

WiseDetector

wisedetector screenshot
  • Hidden within Device Manager

  • Old UI

  • Uses videos directly from cameras in the users’ systems

  • Slow, because it runs on the local computer’s CPU

Frigate

frigate screenshot
  • Uses videos directly from cameras in the users’ systems

  • Not a fully custom model – Frigate comes with preset models, and the users’ annotations help improve the official models

Target audience

There are many parties involved in the sales of Hanwha’s products, but we envisioned our users to be either the salesperson or the end customer.

Hanwha sales flow

Salespeople may offer to use FLEX AI to build a model in order to secure a deal.

End customers may want to create their own models for their own purposes.

Milestone 1 requirements

Our first goal was to launch a Proof of Concept at GSX, one of the biggest trade shows for the security industry.

Goal of GSX:

  • Show potential customers what FLEX AI could do

  • Pique potential customers' interest

  • (Internally) prove to Hanwha HQ that this is a project we can handle, and is worth working on

User flow & wireframes

We mapped out a flowchart of our user journey.

Hanwha sales flow

We brought the wireframes to the AI team for alignment.

Hanwha sales flow
Milestone 1 design

Our Dev team turned our prototype into reality.

Hanwha sales flow

Part 1: Annotate

  1. Go to the Annotate tab

  2. Upload a video

  3. Draw bounding boxes

  4. Repeat for multiple frames

  5. Train model

Hanwha sales flow

Part 2: Upload video for Verify

  1. Go to the Verify tab

  2. Upload a video

Hanwha sales flow

Part 3: Verify results

  1. Play a video on the Verify tab

  2. Watch bounding boxes appear

  3. Use the Confidence Level slider to filter out bounding boxes with low confidence scores
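The Confidence Level slider above is, in essence, a threshold filter over the model’s detections. A minimal sketch of the idea, with hypothetical detection data and field names (not FLEX AI’s actual implementation):

```python
# Hypothetical sketch of the Confidence Level slider's filtering logic.
# Each detection carries a label, a confidence score, and a bounding box.

def filter_detections(detections, threshold):
    """Keep only the bounding boxes at or above the slider's threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"label": "forklift", "confidence": 0.92, "box": (40, 60, 200, 180)},
    {"label": "forklift", "confidence": 0.35, "box": (300, 50, 380, 120)},
]

# At a 50% threshold, only the high-confidence box remains visible.
visible = filter_detections(detections, 0.5)
```

Dragging the slider simply re-runs this filter, so low-confidence boxes disappear in real time without re-processing the video.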

GSX Outcome

Fred, our PM, talked to over 100 salespersons and potential customers, giving us more insight into who wants to use FLEX AI and what they’re trying to detect.

We had a very small corner of Hanwha Vision’s booth, but there was a lot of foot traffic and interest in FLEX AI. The audience loved the slick interface and ease of use; they were amazed at what could be done in so few steps.

Potential use cases for FLEX AI

Goal of GSX:

  • ✅ Show potential customers what FLEX AI could do

  • ✅ Pique potential customers' interest

  • ✅ (Internally) prove to Hanwha HQ that this is a project we can handle, and is worth working on

Milestone 2 requirements

On to the next stage: Building an MVP

Goal of Milestone 2:

  • Combining FLEX AI, DM Pro, Sightmind, and OnCloud into 1 overarching Cloud Platform

  • Broaden scope: add ability to improve on a model

HQ Team feedback

In order to understand other products’ look and feel, I reached out to the Korean HQ Design team for a demo of their products, as well as to get feedback on FLEX AI.

Example of DM Pro's designs

DM Pro is a cloud-based device manager, used to check hardware health and maintenance status.

Example of Sightmind's design

Sightmind is a business analytics platform.

Both designs used modular layouts, rounded corners, and ample white space. We incorporated these into our designs, but kept our own font size and color contrast to make sure we stayed ADA compliant.

What FLEX AI looked like before GSX and feedback from the HQ Design team

Before

What FLEX AI looked like after GSX and feedback from the HQ Design team

After

New flow: Improving a model

What is an "improved" model to the user?

Option 1: quantitative metrics

  • "Accuracy score"

  • Number of correct detections
    (True positives)

  • Number of incorrect detections
    (False positives)

  • Number of missed detections
    (False negatives)

Metrics would be ideal for the salesperson to prove an increase in accuracy.
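The quantitative option above maps directly onto standard detection metrics. A minimal sketch computing them from the three counts a user’s score sheet would produce (the counts here are illustrative, not real project data):

```python
# Standard detection metrics derived from the Option 1 counts:
# true positives (correct), false positives (incorrect), false negatives (missed).

def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)   # share of detections that were correct
    recall = tp / (tp + fn)      # share of actual objects that were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 45 correct detections, 5 incorrect, 10 missed
p, r, f1 = detection_metrics(45, 5, 10)
```

A single blended score like F1 could stand in for the "accuracy score" a salesperson would quote, while the raw counts stay available for anyone who wants the detail.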

Example of DM Pro's designs

The trade-off is that a lot more work is needed to set this up correctly:

  1. Upload a video that is not used for training

  2. User annotates frames that contain the object to create a "score sheet"

  3. After the latest version of the model is applied, the user goes through all the annotated frames to grade the version.

Option 2: qualitative approach

By showing the two versions of the model side-by-side, the user can visually compare the difference in the older and newer versions. This also allows for more nuanced comparisons of accuracy, like if the boxes are tighter or if there are more correct boxes.

This would be ideal for the end user, as the output mimics what they will actually see on their VMS, meaning that they can easily tell if the model is good enough for them.

Example of DM Pro's designs

What actually happens when a model is improved?

Example of DM Pro's designs

This mental model helped align the Front End Tech Lead, the Back End Tech Lead, and the Product team. We concluded that:

  • The model doesn’t actually learn from false positives, so we focused on encouraging the user to mark true positives

  • We removed the idea of allowing model reversion: the user can delete individual incorrect annotations, or start the project over.

Milestone 2 (MVP) success

Goal of MVP:

  • ✅ Combining FLEX AI, DM Pro, Sightmind, and OnCloud into 1 overarching Cloud Platform

  • ✅ Broaden scope: ability to improve on a model

At ISC West 2024, we began to see other companies also starting to work on custom object detection models.

Home stretch goals before launch:

  • Focus on making it easy

  • Continue matching other Cloud Products

Launch Prep

With the help of the AI team, we added 3 features to make model creation easier:

  1. Suggested frames to improve

When improving the model, we use AI to generate a list of frames it thinks will be the most beneficial in creating the next version. We then pick the top 10 of these frames for the user to draw bounding boxes on again.

  2. AI tightening of boxes

With a click, AI will try to recognize the object that the user is trying to draw a box around, and tighten the individual boxes.

Example of DM Pro's designs
  3. Reference points

If there are certain moments the user deems crucial to recognizing an object, e.g. when a forklift first enters the warehouse, they can bookmark these moments on the video to serve as a clear point of comparison.

Example of DM Pro's designs
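The "suggested frames" feature above can be thought of as ranking frames by how unsure the current model is about them. A minimal sketch, assuming a simple uncertainty heuristic (confidence scores hovering near 0.5); this heuristic and all names are illustrative, not Hanwha’s actual algorithm:

```python
# Hypothetical sketch of suggesting frames for re-annotation.
# Heuristic assumption: a frame is "uncertain" when its detections'
# confidence scores sit close to the 0.5 decision boundary.

def frame_uncertainty(confidences):
    """Score in [0, 1]; higher when detections hover near 0.5 confidence."""
    if not confidences:
        return 0.0
    return sum(1 - abs(c - 0.5) * 2 for c in confidences) / len(confidences)

def suggest_frames(frames_to_confidences, k=10):
    """Return the k frame ids the current model is least sure about."""
    ranked = sorted(frames_to_confidences,
                    key=lambda f: frame_uncertainty(frames_to_confidences[f]),
                    reverse=True)
    return ranked[:k]

frames = {"f1": [0.51, 0.48], "f2": [0.95, 0.97], "f3": [0.6]}
# f1's detections sit closest to 0.5, so it is suggested first.
```

Annotating the frames the model is least sure about tends to improve the next version faster than annotating frames it already handles confidently.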

How does the user know what to do at each stage?

During MVP, we considered using steppers to show the user which stage of the project they’re currently on, as well as to serve as navigation.

However, we decided against it for multiple reasons:

  • Since the process is not linear, the visual made it look more confusing than it is.

  • It took up a lot of visual space and emphasis.

  • Technical restrictions: it would be hard to build in the short timeframe.

Example of DM Pro's designs

We also considered using tutorials that would let a first-time user, or a user who hadn’t used FLEX AI in over 3 months, learn about updates.

As much as our whole team liked the idea, it unfortunately had to be scoped out as the dev team needed to focus on building out core functionality.

We looked into whether Amplitude, the usability metrics platform we were about to install, offered some kind of tutorial tool, but even installing Amplitude was scoped out.

Example of DM Pro's designs

We decided to simply use page titles with tooltips to help users if they were lost. This kept the scope manageable for the dev team, and the user still clearly knows where to go to figure out what to do.

We also had a separate Help page, which evolved into a knowledge base containing all of our FAQs; users could contact the Hanwha Customer Support team from there if needed.

Example of DM Pro's designs

Terminology

Since our goal was to make this tool as easy as possible to understand, we decided not to use technical terms, even though the jargon may be more accurate.

"annotate" ➡️ "draw bounding box"

"inference" ➡️ "applying the model" or "processing"

"source clips" ➡️ "video clips"

To fully integrate with the Cloud Portal, we added a dark mode theme, and added licensing functions.

Example of DM Pro's designs
Outcomes and learnings

As of 6/21/24, we are all set to launch, with no remaining high- or medium-severity bugs.

Through the Sales Team’s beta program, Dallas-Fort Worth Airport decided to integrate FLEX AI into their existing system, purchasing another 200 cameras in the process.

Key outcomes

  • Despite the lack of research, successfully launched a product that is bringing, and will continue to bring, revenue to the company

  • The number of annotations needed to create a custom object detection model dropped from 200 to 50

  • Adapted quickly to all the frequent scope and style guide changes

  • Distilled a complex AI tool into simple steps for users to follow

What I learned

  • Sit in on technical discussions when possible, to understand feasibility, scope limitations, and alignment

  • Be proactive in double-checking edge cases and error cases with the development team

Future improvements

Integration is key to further simplify the process for end users.

FLEX AI is not meant to be a daily tool; its output is.

As of 2024, our only integration point is on the input side: importing videos that have been tagged in the users’ VMS, if they use WAVE or OnCloud.

Future features that can reduce friction and increase integration:

  • Direct deployment of models to cloud connected cameras (coming 2025)

  • Plug-in to allow annotating directly in the VMS

Potential use cases for FLEX AI

How to measure success?

  • % of users successfully creating a model

  • Amount of time spent to create a model

  • Average number of annotations used to create a model

  • % of users improving a model

  • Average number of versions a user creates

  • % of users renewing the annual subscription

Other things I’d like to know:

  • Who is using FLEX AI?

  • At which step do most users who don’t finish creating a model fall off?

  • Average number of projects a user has simultaneously

Want to see more?