Dashboard exploration — early concept phase
Context
MeisterTask had never shipped an AI feature. The question wasn't whether to build one; it was what would actually be worth building. I joined as the sole designer on a 5-person discovery team with 9 weeks and no predetermined answer.
Phase 1 was discovery only. No shipping. What we produced had to be strong enough to justify further investment — or honest enough to kill the idea.
We started with the wrong hypothesis. That turned out to be the most useful thing we did.
Process
The wrong hypothesis
We assumed users wanted AI to automate task creation. Interviews said otherwise. The actual friction wasn't creating tasks — it was knowing which ones mattered and when.
Research synthesis — interview themes mapped against feature assumptions
Three hard rules
After the pivot, we defined three constraints that shaped every design decision:
- AI should surface, not decide.
- Every suggestion should be dismissable.
- Trust should be earned incrementally, not asked for upfront.
V1 concept
V2 after constraints applied
Prototype & testing
We built a high-fidelity prototype covering 6 key flows and ran 8 participant sessions. The feedback shaped the beta brief that went to the board.
Prototype flow — daily digest concept
Results
Discovery concluded with a board presentation and green light for Phase 2. The work reduced R&D decision time significantly and established the design principles carried into the beta.
The strongest AI concepts did not replace user judgment. They reduced cognitive load, surfaced context, and made the next action easier to trust.
- Surface signals and context instead of over-automating decisions.
- Make every suggestion editable, dismissable, and low-risk to ignore.
- Introduce AI through narrow, credible moments rather than broad promises.
- Started with an automation-first hypothesis around task creation.
- Used interviews and prototype feedback to identify prioritisation and context recovery as the real problem.
- Refined the direction around assistive, explainable AI patterns that earned trust incrementally.
Learnings
The wrong hypothesis is data
Starting with an assumption you're willing to kill is more useful than starting with a brief you're meant to execute. The pivot was expensive short-term and clarifying long-term.
Constraints on AI UX need to be designed, not implied
Saying 'the AI should feel helpful' produces nothing. Saying 'it can suggest, never decide, and must be dismissable in one tap' produces a system.
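A rule that concrete can be encoded directly in a data model. Below is a hypothetical sketch in TypeScript; none of these names (`Suggestion`, `accept`, `dismiss`) come from the actual MeisterTask codebase, they only illustrate how "suggest, never decide" becomes enforceable structure.

```typescript
// Hypothetical sketch: "can suggest, never decide, dismissable in one tap"
// expressed as a data model. Names are illustrative, not MeisterTask's.

type SuggestionStatus = "pending" | "accepted" | "dismissed";

interface Suggestion {
  id: string;
  summary: string;          // what the AI proposes, in plain language
  rationale: string;        // the context behind it: surfaced, not hidden
  status: SuggestionStatus; // always starts "pending"; nothing applies itself
}

// The only state transitions are an explicit user accept
// and a single-action dismiss. There is no auto-apply path.
function accept(s: Suggestion): Suggestion {
  return { ...s, status: "accepted" };
}

function dismiss(s: Suggestion): Suggestion {
  return { ...s, status: "dismissed" };
}
```

The point of the sketch is that the constraint lives in the type: a suggestion has no way to change the underlying task until a user action moves it out of `pending`.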
Discovery has its own deliverables
The output wasn't a product. It was a shared mental model that let five people — and eventually a board — agree on what we were actually building.