Sole UX Researcher · Child Users (11–13) · Systems-Level Findings

Children’s Perceptions and Reactions to Deceptive Design in Video Games

A systems-level UX research project examining how children perceive deceptive design in games—and why awareness alone fails when product incentives, UI pressure, and power asymmetries shape user behavior.

Qualitative Research · Scenario-Based Interviews · Thematic Analysis · Trust & Safety · B2B2C Incentives

Overview

A quick snapshot of what this was, who did what, and the scope that shaped the work.

My Role

  • Research ideation and framing
  • Method design + pilot studies
  • Ethics approval (research involving children)
  • Participant recruitment
  • Data collection (interviews)
  • Analysis + synthesis (thematic analysis)
  • Writing + defense

Note: Supervisors provided academic guidance and validated the coding.

Context

  • Type: Academic, independent research
  • Level: Master’s thesis
  • Duration: ~12 months
  • Funding: University-funded
  • Team: Solo (with supervisor & co-supervisor for review)
  • Domain: Games, digital platforms, child users

Stakeholder Lenses

This project sits inside a multi-actor system; each stakeholder lens shifts the incentives, risks, and responsibilities at play.

Child user (11–13)

  • Can often recognize manipulative patterns
  • Feels pressure despite awareness
  • Limited agency in monetized systems
  • Blames self rather than system

Parent / Guardian

  • Relies on disclosure and parental controls
  • Limited visibility into moment-to-moment pressure
  • Responsibility without real leverage
  • Often positioned as the “fix” for systemic issues

Platform / Developer

  • Incentivized to maximize engagement and spend
  • Uses disclosure as ethical cover
  • Frames compliance as user choice
  • Success metrics conflict with child wellbeing

Regulator / Policy maker

  • Focuses on transparency and consent mechanisms
  • Struggles to regulate emotional pressure
  • Lag between design practice and policy language
  • Relies on evidence like this study

Researcher / UX Practitioner

  • Sees awareness ≠ agency
  • Identifies system-level responsibility gaps
  • Moves beyond “dark pattern” labeling
  • Advocates for incentive-aware design

Problem & Research Question

Many platforms rely on disclosure and user awareness to mitigate harm. This project tested whether that assumption holds for children.

Research question: Can children (ages 11–13) recognize deceptive design patterns in games—and if they recognize them, does that recognition meaningfully enable resistance in systems designed to encourage compliance through emotional and economic pressure?


Constraints

These weren’t “project limitations” — they actively shaped method choice, recruiting strategy, and scope.

Ethics & Consent

  • Ethics approval for research involving children
  • Parental consent + verbal assent
  • Strict anonymization (participant IDs only)

Operational Limits

  • Interview time cap (≤ 1 hour)
  • Fixed compensation ($30)
  • No behavioral intervention allowed

Recruitment Reality

  • Recruitment disrupted by bot sign-ups
  • Snowball sampling only
  • Sensitive data handling + retention rules

Approach & Method

Scenario-based interviews were used to reduce judgment and elicit reasoning about real product pressure without putting children “on trial.”

Method Summary

  • Qualitative interviews anchored in video-game scenarios
  • Hypothetical framing (third-person) to reduce social desirability bias
  • Pilot studies to refine scenarios and questions
  • Semi-structured discussion: perception + reaction
  • Full transcription and thematic analysis
  • Structured codebook development (themes, subthemes, codes)
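The theme → subtheme → code hierarchy behind the codebook can be sketched as a simple data structure. This is a minimal illustration in Python; every theme, subtheme, and code name below is invented for the example and is not the thesis's actual codebook.

```python
from dataclasses import dataclass, field

@dataclass
class Code:
    name: str
    definition: str  # when an analyst should apply this code

@dataclass
class Subtheme:
    name: str
    codes: list[Code] = field(default_factory=list)

@dataclass
class Theme:
    name: str
    subthemes: list[Subtheme] = field(default_factory=list)

# Hypothetical entries for illustration only.
codebook = [
    Theme(
        name="Perception of deceptive design",
        subthemes=[
            Subtheme(
                name="Recognizing pressure tactics",
                codes=[
                    Code("countdown-timer", "Names a limited-time offer as a pressure tactic"),
                    Code("social-comparison", "Notices items used to signal status among peers"),
                ],
            )
        ],
    ),
    Theme(
        name="Reaction under pressure",
        subthemes=[
            Subtheme(
                name="Awareness without resistance",
                codes=[Code("felt-trapped", "Recognizes the tactic but reports complying anyway")],
            )
        ],
    ),
]

def all_codes(book):
    """Flatten the hierarchy into (theme, subtheme, code) tuples for tallying."""
    return [
        (t.name, s.name, c.name)
        for t in book
        for s in t.subthemes
        for c in s.codes
    ]
```

Flattening the hierarchy this way makes it easy to count code frequencies and trace each coded excerpt back to its theme, which mirrors how the perception → reaction link was tracked in analysis.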

Method Flow

01 Frame the problem: deceptive design + child vulnerability + the "awareness as safeguard" assumption.
02 Design scenarios + pilots: iterate on prompts to elicit reasoning without leading.
03 Interviews + transcription: time-capped sessions; strict anonymization and data handling.
04 Thematic analysis + codebook: build structured themes and connect perception → reaction.

Key Decisions

Decisions that shaped validity, honesty in responses, and alignment with the research question.
  • Targeted ages 11–13: old enough to articulate reasoning, yet still structurally vulnerable in incentive-driven systems.
  • Used hypothetical third-person scenarios to reduce social desirability bias and encourage honest reasoning about "what someone might do."
  • Intentionally tracked how noticing a tactic translated (or failed to translate) into action under emotional and economic pressure.
  • Set aside compelling threads that didn't directly answer the central research question, avoiding "interesting, but off-target" drift.
  • Drew scenarios from widely played titles so children could reason from familiar patterns and real experiences, not abstract hypotheticals.

Deliverables & Outputs

Artifacts that would matter to an evaluator (and to future teams working on trust, safety, and monetization ethics).

Thesis (Public Archive)

Master’s thesis (publicly archived; link forthcoming).

Placeholder: thesis cover

Scenario Video Materials

Video scenarios used to anchor discussion and reduce direct self-report pressure.

Placeholder: scenario storyboard

Codebook + Framework

Structured themes/subthemes and an analytical framework linking perceptions to reactions.

Placeholder: codebook table

Outcomes & Impact

Key takeaways that matter for platform design, monetization ethics, trust & safety, and child-centered UX policy.

What the study found

  • Many children recognize deceptive or manipulative design
  • Awareness often does not translate into resistance
  • Participants frequently described feeling trapped or without alternatives
  • Responsibility for harm is diffused across systems (design, monetization, guardianship)

So what?

The work contributes evidence relevant to:

  • Platform design and incentive structures
  • Monetization ethics in B2B2C ecosystems
  • Trust & safety practices and policy
  • Child-centered UX and regulatory debates

Core insight: “Users should just be aware” is not a protection strategy when the system is engineered to pressure compliance.

Learnings & Reflection

The shift from UI-level explanations to systems-level accountability.

What changed in my thinking

  • User awareness is an insufficient safeguard when incentive structures and emotional pressure dominate choice
  • Ethical responsibility in B2B2C systems cannot be placed solely on users
  • Designing for vulnerable populations requires systems-level thinking, not UI-level fixes

How UX research helped

UX research can reveal harm even when products are “working as intended” — especially when success metrics are aligned with monetization, not user wellbeing.

Policy relevance · Ethics & governance · Power asymmetries · Child-centered methods