Rethinking Consent for AI Apps



A privacy-first AI assistant that replaces upfront consent walls with contextual, just-in-time permissions.
Overview
Clarity is a privacy-first conversational AI for managing daily stress and anxiety. Rather than requesting broad permissions at onboarding, it introduces contextual consent by asking for data only at moments where value is immediately clear.
Design Question
How might we design AI consent so users feel informed, respected, and in control at the moment consent is requested?
Research: Understanding the Trust Gap
Foundational Survey
I surveyed 22 participants to understand baseline privacy behaviors and emotional responses to common consent patterns in everyday apps.

Users don’t ignore consent because they don’t care about privacy, but because current consent patterns feel overwhelming and unavoidable.
Competitive Audit
I reviewed how leading AI wellness apps establish trust within the first 60 seconds of use.
| App | First 60 seconds | Trust breakdown | Opportunity |
| --- | --- | --- | --- |
| Wysa | Asks for emotional context immediately | No clear data explanation | Explain privacy at the moment data is needed |
| Woebot | Requests personal background early | Value not demonstrated first | Prioritize value exchange before requesting data |
| Replika | Creates an emotional bond instantly | Blurred AI and human boundaries | Explicitly define AI identity |
Bot Persona & Voice
Safety-First Boundaries:
The bot avoids emotional mimicry and human role-play to reduce perceived manipulation.
Calm & Grounded Communication:
Responses are supportive, neutral, and use plain language, avoiding urgency or dependency cues.
Explicit AI Identity:
The bot clearly states it is an AI, not a human, from the first interaction.
Language constraints in practice
Clarity uses language like:
“You’re in control of what I remember.”
“This is optional.”
“I’m an AI, not a human counselor.”
Clarity avoids:
“I know exactly how you feel.”
Any phrasing that pressures users into enabling features or sharing data.
Clarity replaces upfront consent barriers with conversational requests for data only when it becomes meaningful to the user.
Pattern 1 — Onboarding Trust (First 30 Seconds)
Establishing transparency by clearly stating AI identity and privacy rules before requesting permissions.



Pattern 2 — Just-In-Time Memory Request
Framing memory requests around immediate usefulness only after a user experiences a helpful interaction.






Memory is requested only after the system demonstrates a concrete benefit tied to the user’s own language.
Pattern 3 — Proactive Insights (Advanced Opt-In)
Triggering advanced tracking only after identifying a concrete behavioral pattern to ensure a clear value-exchange.






Pattern 4 — Crisis Safety Override
Immediately prioritizing emergency resources when high-risk language is detected.
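The override can be sketched as a check that runs before any consent logic (the keyword list here is illustrative only; a production system would need a clinically validated risk classifier and locale-appropriate resources):

```python
# Illustrative terms only, not a clinical screening list.
CRISIS_TERMS = {"hurt myself", "end my life", "suicide"}


def respond(message: str, has_memory_consent: bool) -> str:
    """Crisis safety overrides consent state: resources are surfaced
    regardless of what the user has or has not opted into, which is why
    has_memory_consent is deliberately ignored on the crisis path."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return ("I'm an AI and can't provide crisis support, but help is "
                "available right now: call or text 988 (US) or your local "
                "emergency number.")
    return "Thanks for sharing. Want to try a grounding exercise?"
```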






In crisis moments, safety overrides all other interactions.
Privacy Dashboard
A centralized space where users can review, control, and revoke consent at any time.
1. Full visibility into what data is active and why
2. Granular permission control without breaking core functionality
3. Safety tools always accessible, regardless of consent state
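The three properties above suggest a simple data model (a hedged sketch; the field names are assumptions): each permission carries its stated purpose for display, and revoking it also deletes the data collected under it, while nothing safety-critical reads these flags.

```python
from dataclasses import dataclass, field


@dataclass
class Permission:
    name: str
    purpose: str  # shown in the dashboard: why this data is active
    enabled: bool = False


@dataclass
class PrivacyDashboard:
    permissions: dict[str, Permission] = field(default_factory=dict)
    stored_data: dict[str, list[str]] = field(default_factory=dict)

    def revoke(self, name: str) -> None:
        """Revoking a permission also deletes the data collected under it.

        Core chat and safety tools never consult these flags, so revocation
        cannot break baseline functionality."""
        if name in self.permissions:
            self.permissions[name].enabled = False
            self.stored_data.pop(name, None)
```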


Interactive Prototype
Clarity requests memory only after a helpful exchange, allowing me to observe how consent feels when value is already established and how the system responds to acceptance or refusal.
Projected Business Impact
Ethical consent design can create measurable product value when aligned with real user behavior.
↓ 15–25%
Onboarding drop-off
Friction is delayed until value is demonstrated, reducing early abandonment.

↑ ~15% lift
in 30-day retention
Users are more likely to continue when they feel informed and in control of data sharing.

45–60%
Voluntary consent activation
Users enable features because they understand the benefit.
Why these estimates are reasonable
72.7% of survey participants preferred permissions being requested only when needed. The projected lift reflects alignment between observed user preference and the redesigned consent timing.
Strategic takeaway
Privacy-centered design becomes a compounding trust advantage, not a tradeoff.
This case study represents a conceptual project created for portfolio purposes.
