IstoVisio (syGlass) | B2B
Designed a 0-to-1 AI Voice Command System to Simplify Complex Tasks
MY ROLE: FOUNDING PRODUCT DESIGNER
TIMELINE: 2 months
AI SYSTEM: OPENAI WHISPER
LAUNCH: JANUARY 2025
syGlass is software that allows scientists to explore very large and detailed 3D images, like brain scans or microscope pictures, in virtual reality. Instead of looking at flat pictures on a screen, scientists can step inside the data, move around it, and see it from every angle. This makes it easier and faster to understand complex information.
PROBLEM TO SOLVE
syGlass users spend too much time on routine tasks because the software requires them to repeat the same multi-step actions over and over. These extra steps slow their workflow, waste valuable time, and increase frustration, often leading to mistakes or delays. Solving this is crucial to improving productivity, reducing errors, and keeping users engaged with the product.
SOLUTION
Adding voice commands to replace multi-click actions speeds up routine tasks and keeps users focused by reducing workflow interruptions.
IMPACT
This reduced average task time from 12 seconds to 7, cutting time per task by roughly 41%.
Uncovering the Problem, Navigating the Barriers
While reviewing customer feedback and support tickets during Q3 planning, I noticed a common pain point: users were frustrated by repetitive menu-based tasks. Some even suggested adding a Siri-like assistant to make things faster.
A full voice assistant wasn’t possible due to limited resources and privacy concerns. Instead, I designed a lightweight voice command system that runs only when needed.
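To make that "runs only when needed" idea concrete, here is a minimal sketch in Python, assuming the open-source openai-whisper and sounddevice packages. The function name and fixed-length recording window are simplifications for illustration, not the production syGlass implementation.

```python
# Minimal sketch of on-demand, local speech recognition. Audio is
# captured only for a short window after the user triggers listening,
# and transcription happens entirely on the local machine.
import sounddevice as sd
import whisper

SAMPLE_RATE = 16_000  # Whisper models expect 16 kHz mono audio

model = whisper.load_model("base.en")  # small English model, runs locally

def capture_command(seconds: float = 3.0) -> str:
    """Record a short clip when the user triggers listening, then transcribe."""
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()  # block until the recording window closes
    result = model.transcribe(audio.flatten(), fp16=False)
    return result["text"].strip().lower()
```

Because nothing is sent over the network, this pattern sidesteps the privacy concerns that ruled out an always-on assistant.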
Understanding the Problem
Before jumping into solutions, I wanted to be sure voice control would actually help. I spoke with eight scientists across Europe and the U.S., watching how they worked and hearing about their daily frustrations.
USER NEEDS
Precise control
Instead of carefully clicking through multiple steps, users could say what they want and get the exact view they need.
Quick visibility toggles
Turning data layers on and off could be done instantly with a short command.
Faster access to settings
Frequently used settings could be opened right away, without digging through menus.
These insights shaped my design approach. By replacing repeated clicks with simple spoken commands, I could help users stay focused, work faster, and feel less frustrated.
Designing for Adoption
To make voice commands feel natural and useful, I first uncovered the main challenges different users faced and then designed specific solutions to overcome them.
The table below shows the key adoption barriers and how each was addressed through design.
Defining the Voice Command Experience
I explored best practices from voice systems like Amazon Alexa, Meta AI Assistant, and Google Assistant to gather inspiration for simple and intuitive interactions.
I then mapped out a high-level user flow to visualize how people would use voice commands step by step. This helped me pinpoint the moments where accuracy mattered most and where the experience had to feel effortless.
Refining the Design Through Real-World Challenges
As I moved from concept to implementation, key challenges surfaced that shaped the final design.
Challenge 1: Diverse Speech Patterns
To support different accents and levels of English fluency, I built a multi-variant voice model that accepts at least three ways to say each command.
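As an illustration, here is a minimal sketch of how such a vocabulary could be represented. The command names and phrasings are hypothetical examples, not the actual syGlass command set.

```python
# Hypothetical multi-variant vocabulary: each canonical command accepts
# several phrasings, so no single wording is ever required.
COMMAND_VARIANTS = {
    "toggle_volume":    ["toggle volume", "show volume", "hide volume"],
    "decrease_opacity": ["decrease opacity", "lower opacity", "opacity down"],
    "open_settings":    ["open settings", "show settings", "go to settings"],
}

def match_command(transcript: str) -> str | None:
    """Map a transcribed phrase onto its canonical command, if any."""
    phrase = transcript.strip().lower()
    for command, variants in COMMAND_VARIANTS.items():
        if phrase in variants:
            return command
    return None  # unrecognized: hand off to error recovery (Challenge 3)
```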
Challenge 2: Making Commands Easy to Learn
To make voice commands easy to learn, I organized them into a clear structure (toggle vs. slider commands) and built supportive learning touchpoints, shown in the figures below.
Pic. 1 In-VR Guide – Quick command lookup
Pic. 2 Tutorial – Helps users learn commands quickly
Pic. 3 Web Documentation – Detailed guide for advanced use
Challenge 3: Improving Error Recovery
To make error recovery seamless, I first mapped the voice command structure (toggle vs. slider commands).
This allowed the system to offer suggestions when part of a command was recognized (e.g. the command name) but not the full phrase.
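A minimal sketch of that suggestion logic, using Python's standard difflib; the phrase list here is a small hypothetical subset of the vocabulary, not the shipped command set.

```python
# Sketch of partial-match recovery: when the full phrase isn't recognized
# but is close to a known command, suggest the nearest phrasings instead
# of failing silently.
import difflib

KNOWN_PHRASES = [
    "toggle volume", "hide volume",          # toggle commands
    "increase opacity", "decrease opacity",  # slider commands
    "open settings",
]

def suggest_commands(transcript: str, limit: int = 2) -> list[str]:
    """Return the closest known phrasings for a near-miss transcript."""
    return difflib.get_close_matches(
        transcript.strip().lower(), KNOWN_PHRASES, n=limit, cutoff=0.5
    )

# A misheard phrase still produces a useful prompt, e.g.
# suggest_commands("decrease capacity")[0] -> "decrease opacity"
```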
I also added visual feedback while the system is listening, so users know their voice is being detected and don't repeat commands unnecessarily.
Clear prompts when the system doesn't understand (e.g. "I didn't understand that. Try saying 'Decrease opacity' or 'What can I say'") helped users recover quickly and stay confident using voice commands.
Reflection
Designing with Whisper reminded me that AI can hallucinate: it can mishear, misinterpret, or confidently return the wrong result. This makes error handling and fallbacks just as critical as core features.
By planning for errors with suggestions, confirmations, and visual feedback, I could build trust and keep users engaged even when the AI got things wrong.
If I Had More Time
I would explore personalizing the voice system, allowing users to train custom commands and adapt recognition to their speech patterns over time. This could further improve accuracy, reduce frustration, and make the experience feel more natural.