
Smart TV Navigation via Smartphone Remote

A data-driven UX project redesigning Smart TV navigation through a smartphone interface, combining benchmark testing, iterative prototyping, and A/B testing to reduce interaction cost and navigation errors in platform-scale media systems.

Quantitative UX · Benchmark Testing · A/B Testing · Interaction Design · B2B2C Platform UX

Overview

A quick snapshot of what this was, who did what, and the scope that shaped the work.

My Role

  • UX Researcher & Interaction Designer
  • Usability benchmarking and problem definition
  • Ideation and low-fidelity exploration (10+10 sketches)
  • Experimental design and hypothesis formulation
  • Quantitative usability testing (clicks, errors, comfort)
  • Statistical analysis (t-tests, hypothesis testing)
  • Iterative prototyping and design refinement

Note: This was a small-team project; research design, analysis, and synthesis were my clearly attributable responsibilities.

Context

  • Type: Applied UX research & design project
  • Duration: Multi-phase (benchmark → prototype → A/B testing)
  • Domain: Smart TV platforms, second-screen control
  • Users: General Smart TV users navigating apps and channels
  • System Type: B2B2C platform (TV OS, content providers, end users)
  • Focus: Reducing interaction friction under hardware constraints + platform expectations

Stakeholder Lenses

This project sits inside a multi-actor system. Each perspective below shows how incentives, risks, and responsibilities shift.

Smart TV user

  • Wants fast switching across apps, channels, and search
  • Is shaped by mobile interaction patterns and touch familiarity
  • Experiences friction from deep menus and repeated clicks
  • Often fails due to discoverability rather than ability

Platform / TV OS

  • Must support broad device constraints and legacy remotes
  • Optimizes for scalability across apps and input methods
  • Navigation conventions can lag behind content complexity
  • Must balance system consistency with app-level variation

Content providers / apps

  • Compete for attention and placement on a shared surface
  • Prefer visibility and short paths to content
  • Benefit when navigation encourages exploration and dwell time
  • Can create inconsistent patterns that raise user confusion

Hardware + input constraints

  • Legacy remotes were optimized for channel switching
  • Directional input increases step cost for app navigation
  • Second-screen control can remove motor friction
  • But risks adding complexity or mode switching

UX researcher / designer

  • Defines measurable interaction cost and error outcomes
  • Uses benchmarking + controlled testing to justify decisions
  • Tests trade-offs (depth vs clutter, speed vs learnability)
  • Refines toward a synthesis design that survives data

Problem & Research Question

Smart TVs have become complex content platforms, but navigation is still constrained by legacy interaction models optimized for channel switching.

Research question: How can Smart TV navigation be redesigned using a smartphone interface to reduce interaction cost (clicks) and navigation errors—without increasing cognitive overload or reducing discoverability in platform-scale media systems?


Constraints

These weren’t “project limitations” — they actively shaped design decisions and experimental structure.

Familiarity & Expectation

  • Users already familiar with physical remotes
  • Platform expectations shaped by mobile interaction patterns
  • Comfort and habit influence perceived usability

Discoverability & Overload

  • Limited discoverability of advanced controls
  • Risk of overwhelming users with excessive on-screen controls
  • Trade-off between information visibility and clutter

Testing Validity

  • Learning effects risked inflating performance for whichever prototype came second
  • Mitigated by counterbalancing task order across participants (a sketch follows this list)
  • Measured behavioral outcomes (clicks, errors), not just stated preference
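
A minimal sketch of what that counterbalancing looks like in practice. This is illustrative Python, not project code; the task names and the alternating assignment rule are assumptions:

    # Hypothetical counterbalancing scheme: alternate which prototype each
    # participant sees first, so learning effects are spread evenly across
    # conditions instead of favoring whichever prototype comes second.
    TASKS = ["launch an app", "switch channels", "search for a title"]  # illustrative tasks

    def assign_order(participant_id: int) -> list[str]:
        # Even-numbered participants start with Prototype A, odd-numbered with B.
        prototypes = ["A", "B"] if participant_id % 2 == 0 else ["B", "A"]
        return [f"Prototype {p}: {task}" for p in prototypes for task in TASKS]

    for pid in range(4):
        print(pid, assign_order(pid))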

Approach & Method

A mixed-method, metric-driven workflow: benchmark → prototypes → controlled A/B testing → statistical + qualitative synthesis.

Method Summary

  • Initial usability observations and benchmarking against expected task paths
  • Defined key metrics: number of clicks, navigation errors (deviations), subjective comfort
  • Low-fidelity ideation (10+10 sketches) to explore interaction alternatives
  • High-fidelity interactive prototypes (Prototype A and Prototype B)
  • Controlled A/B testing with counterbalanced task order
  • Quantitative analysis using t-tests (α = 0.05); an analysis sketch follows this list
  • Qualitative observations and post-test interviews
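
A minimal sketch of that click-count comparison, assuming the within-subjects design implied by counterbalancing. The numbers are invented placeholders, not the study's data:

    from scipy import stats

    # Invented per-participant click counts for one task (NOT the study's data).
    clicks_a = [6, 5, 7, 6, 5, 8, 7, 5]    # Prototype A (information-dense)
    clicks_b = [9, 8, 10, 9, 7, 9, 11, 8]  # Prototype B (minimal, deeper paths)

    # Paired t-test: counterbalancing implies each participant used both prototypes.
    t_stat, p_value = stats.ttest_rel(clicks_a, clicks_b)

    ALPHA = 0.05
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("significant at α = 0.05" if p_value < ALPHA else "not significant")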

Method Flow

01
Benchmark navigation: Observe friction and compare real paths to expected task flows.
02
Ideate + sketch alternatives: 10+10 low-fidelity explorations to test interaction directions early.
03
Build Prototype A & B: Interactive hi-fi variants that encode different trade-offs (density vs minimalism).
04
A/B test + analyze: Counterbalanced tasks; measure clicks/errors/comfort; t-tests at α = 0.05; synthesize with qual notes (a per-trial record sketch follows).
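
For concreteness, one way the per-trial measurements from step 04 could be recorded. This is a hypothetical sketch; the field names and rating scale are assumptions, not the project's actual dataset schema:

    from dataclasses import dataclass

    @dataclass
    class Trial:
        participant_id: int
        prototype: str   # "A" or "B"
        task: str        # e.g. "search for a title"
        clicks: int      # selections needed to complete the task
        errors: int      # deviations from the expected task path
        comfort: int     # post-task subjective rating (assumed 1-7 scale)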

Key Decisions

Decisions that shaped the interaction model, testing validity, and what “better” meant (measured, not assumed).

  • Chose to leverage users’ learned touch interaction patterns rather than replicate the constraints of a physical remote.
  • Instead of debating preference, the project explicitly compared information density (Prototype A) against minimalism (Prototype B) using clicks, errors, and comfort metrics.
  • Embedded critical controls contextually to reduce mode switching and navigation cost during multi-step TV tasks.
  • When aesthetic preference conflicted with results, decisions prioritized reductions in clicks and navigation errors: the goal was improvements that survive measurement, not just taste.

Deliverables & Outputs

Artifacts that support replicability: what was measured, how it was tested, what changed, and why.

Benchmark usability report

Baseline findings and task-path friction analysis to define what “improvement” needed to mean.

Placeholder: benchmark report

Protocol + hypotheses

Experimental protocol, counterbalancing plan, and hypothesis documentation (α = 0.05).

Placeholder: protocol and hypothesis document
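
Stated generically, the hypothesis pair for the click-count comparison looks like the following. This is a standard two-tailed formulation, not the document's exact wording:

    H_0 : \mu_A = \mu_B      \text{ (mean clicks per task do not differ between prototypes)}
    H_1 : \mu_A \neq \mu_B   \text{ (two-tailed alternative)}
    \text{reject } H_0 \text{ when } p < \alpha = 0.05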

Prototypes + analysis

Two high-fidelity prototypes (A/B), testing dataset, statistical analysis, and a final refined synthesis prototype.

Placeholder: prototypes and analysis

Course: CCT480_A12_Team4

Outcomes & Impact

What changed, what didn’t, and what that revealed about navigation friction in platform ecosystems.

What the testing found

  • Prototype A significantly reduced clicks compared to Prototype B (p < 0.05)
  • Error rates showed no significant difference on some tasks, pointing to discoverability issues rather than motor difficulty
  • Users preferred clearer signaling (from A) and reduced friction / infinite scrolling (from B)

So what?

The final design synthesized the best-performing elements of both prototypes:

  • Explicit information visibility where decision speed mattered
  • Progressive disclosure where overload was harmful
  • A control model that aligns with mobile familiarity while respecting TV context

Core insight: Reducing clicks does not automatically reduce errors—discoverability and signaling often dominate outcomes in platform navigation.

Learnings & Reflection

What this project reinforced about familiarity, discoverability, and the logic of platform-scale UX.

What changed in my thinking

  • Familiarity strongly shapes perceived usability, even when the familiar option is objectively less efficient
  • Reducing clicks does not automatically reduce errors
  • Discoverability is often a bigger barrier than interaction complexity

What it reinforced about platform UX

Platform UX requires balancing user efficiency, content-provider visibility, and system scalability. This project reinforced that good UX decisions must survive data, not just intuition.

Benchmarking · Experimental design · Statistics · Discoverability · B2B2C trade-offs