Learning Plan 2: Bring Your Lens to Execution
How to Make AI Outputs Reflect Your Professional Discipline
Welcome to Your Learning Plan
You eliminated your bottlenecks. You can run queries, build prototypes, pull data. You're executing in adjacent disciplines.
But something's off.
Your outputs look generic. They work, but they don't reflect how you think. A UX person's dashboard looks like an engineer's dashboard. Their prototype feels like a PM's prototype. The execution happened, but the lens got lost.
This is the difference between low-vibe and high-vibe work with AI.
Low-vibe is transactional. You prompt, you approve or reject, you start over. The AI does the work; you're just quality control. High-vibe is collaborative. You steer, iterate, build momentum. Your professional discipline shapes every output because you're actively embedding your perspective in the work.
By the end of this plan, you'll:
- Know what makes your professional lens distinct (and valuable)
- Embed that lens in how you prompt, iterate, and evaluate AI outputs
- Move from transactional AI use to collaborative steering
- Steer together with teammates so artifacts reflect multiple lenses
Let's surface your lens.
Why This Matters
Two people use the same AI to build the same deliverable. One produces generic output. The other produces something that reflects years of professional judgment.
The difference isn't the tool. It's the lens.
That difference shows up in how people prompt. Give an engineer and a UX person the same task (build an analytics dashboard) and they'll approach it differently.
Engineer might prompt:
"Build a React component that fetches user data from /api/metrics. Display DAU, session duration, feature adoption. Use React Query, implement efficient re-rendering with useMemo, handle loading and error states."
UX person might prompt:
"I need an analytics dashboard showing user engagement. Before generating UI: what context am I missing about how stakeholders currently get this data? Does this leverage our existing component library? How does this fit with our other admin views?"
Same deliverable. Different lens. The engineer's prompt optimizes for technical architecture. The UX prompt starts with context, checks platform fit, and considers the horizontal view.
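To make the engineer's lens concrete, here is roughly the kind of component that first prompt might produce. This is a minimal sketch, assuming TanStack Query v5; the endpoint and metric names come from the prompt above, and the response shape is invented for illustration.

```tsx
// A rough sketch of what the engineer's prompt might produce.
// Assumptions: TanStack Query v5 and a /api/metrics endpoint returning
// the three metrics named in the prompt; the response shape is invented.
import { useMemo } from 'react';
import { useQuery } from '@tanstack/react-query';

interface Metrics {
  dau: number;
  avgSessionMinutes: number;
  featureAdoption: Record<string, number>; // feature name -> adoption rate (0..1)
}

async function fetchMetrics(): Promise<Metrics> {
  const res = await fetch('/api/metrics');
  if (!res.ok) throw new Error(`Metrics request failed: ${res.status}`);
  return res.json();
}

export function MetricsDashboard() {
  const { data, isPending, isError, error } = useQuery({
    queryKey: ['metrics'],
    queryFn: fetchMetrics,
  });

  // Memoize the sorted adoption list so it isn't rebuilt on unrelated
  // re-renders (the prompt's useMemo requirement).
  const adoptionRows = useMemo(
    () => Object.entries(data?.featureAdoption ?? {}).sort(([, a], [, b]) => b - a),
    [data]
  );

  if (isPending) return <p>Loading metrics...</p>;
  if (isError) return <p role="alert">Failed to load metrics: {String(error)}</p>;

  return (
    <section>
      <h2>User Engagement</h2>
      <p>DAU: {data.dau.toLocaleString()}</p>
      <p>Avg session: {data.avgSessionMinutes} min</p>
      <ul>
        {adoptionRows.map(([feature, rate]) => (
          <li key={feature}>
            {feature}: {(rate * 100).toFixed(1)}%
          </li>
        ))}
      </ul>
    </section>
  );
}
```

Notice what the sketch optimizes for: fetching, memoized derivation, loading and error states. Nothing in it asks whether a dashboard is the right answer in the first place.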
The professionals getting the most value from AI aren't the ones who prompt the most. They're the ones whose prompts carry their professional judgment into execution.
The Three-Step Method
Step 1: Acknowledge — Surface Your Lens
You can't apply what you can't articulate.
Most professionals operate on intuition built over years. You know something's wrong with a design, a data interpretation, or a product flow. You can't always explain why. That intuition is your lens, and surfacing it is the first step to embedding it in AI execution.
Start by answering three questions about your discipline:
1. What do you notice first when you look at work in your field?
UX people might notice context gaps, problem framing, platform consistency, or whether the work leverages existing patterns. Engineers often gravitate toward error handling, edge cases, and scalability concerns. For a PM, it's usually unclear user value, missing success metrics, or scope creep.
Write down the first three things you instinctively check.
2. What mistakes do you catch that others miss?
Think about feedback you've given. What do you flag that non-specialists overlook? UX people catch when teams jump to solutions before understanding the real problem, or when new work breaks consistency with existing platform patterns.
Write down three mistakes you consistently catch.
3. What questions do you ask that others don't?
When someone presents work, what's your go-to question? "Have we understood the full context?" "What analogous products could we learn from?" "Does this leverage existing components?" "How does this fit with everything else we've built?"
Write down three questions that feel automatic to you.
Your answers form the core of your professional lens. These instincts took years to develop. They're what makes your perspective valuable when you execute in any medium.
Step 2: Become — Apply Your Lens to AI Execution
Your lens means nothing if it stays in your head.
Embedding your lens happens at three moments: when you prompt, when you iterate, and when you evaluate. Most people focus only on prompting. The real leverage is in iteration and evaluation.
Prompting with your lens
Generic prompts produce generic outputs. Lens-embedded prompts tell the AI what perspective to adopt.
Example for UX:
"Act as a UX professional. Before jumping to solutions, clarify the context and real problem. Consider analogous products we can learn from. When working on interfaces, prioritize leveraging existing components over creating new ones. Check whether this adds visual clutter or breaks consistency across the platform. Evaluate by asking: Does this fit horizontally (across features) and vertically (within this flow)? Are we introducing patterns that conflict with established ones?"
Iterating with your lens
Low-vibe work is approve/reject. You get output, judge it good or bad, start over if bad.
High-vibe work is collaborative steering. You get output, identify what's working, push the work in the direction your lens suggests. Give specific feedback:
- "The visual hierarchy isn't clear. Make the primary action more prominent and reduce the density of secondary information."
- "This survey has leading questions. Reframe questions 3 and 7 to be more neutral."
- "The error handling is incomplete. What happens if the API times out? Add fallback states."
Evaluating with your lens
Before you ship, run the output through your discipline's quality checks.
A UX professional reviewing AI-generated work might ask:
- Have we understood the full context?
- Are we solving the real problem or a symptom?
- What analogous products or contexts could we learn from?
- Does this leverage existing components or create unnecessary new ones?
- Does this add visual clutter or break experiences elsewhere on the platform?
- How does this look horizontally (across features) and vertically (within this flow)?
Step 3: Support — Steer Together
Your lens catches problems from one angle. Your team has different angles.
Collaborative steering means bringing multiple lenses to the same artifact at the same time. Not sequential handoffs where UX finishes, then engineering reviews, then PM weighs in. Parallel steering where different perspectives shape the work together as it develops.
One person drives (prompts the AI, makes edits). Others steer from their lens. This can happen in person or async.
In person
Open a shared screen. Fast iteration cycles. Good for complex artifacts.
Async via Slack
Driver posts output to a thread. Others respond with their lens. Works across time zones.
Hybrid
Start async to gather perspectives, go sync for decision points.
Tools that enable this:
- Figma Make lets you generate and iterate on UI with AI while others watch or comment.
- Lovable and Replit let you build working prototypes together, with AI assistance, and see changes in real time.
- Claude or ChatGPT on a shared screen works for docs, specs, and analysis.

The specific tool matters less than the pattern: one driver, multiple lenses, rapid iteration.
This isn't design-by-committee. One person still owns the artifact. But they're steering with input from different angles, catching problems from perspectives they wouldn't see alone.
Tools & Resources
For surfacing your lens
- Review your last 10 pieces of feedback (email, Slack, docs)
- Ask 2 colleagues: "What do I always notice that others miss?"
- Write down your instinctive evaluation questions
For lens-embedded prompting
- Claude (claude.ai) for nuanced prompts and longer context
- ChatGPT (chatgpt.com) for quick iteration cycles
- Figma Make for UI generation with your lens embedded
- Lovable for building working prototypes
- Replit for collaborative code with AI assistance
For team lens sharing
- Notion or Confluence for storing lens prompts
- Slack threads for async steering sessions
- Team wiki for cross-discipline checklists
Measure Your Progress
Week 1: Surface your lens
- Answered the three lens questions (notice, catch, ask)
- Wrote your discipline's lens prompt template
- Tested lens prompt on one real deliverable
Week 2: Apply and iterate
- Used lens prompt for 3 AI-assisted tasks
- Practiced iterative steering (3-4 rounds per output)
- Evaluated output using your discipline's quality checks
- Output quality noticeably reflects your perspective
Week 3: Steer together
- Invited 2 teammates to steer a shared artifact
- Completed one collaborative steering session (3+ rounds)
- Practiced both driving and steering roles
- Artifact reflects multiple lenses, not just yours
Week 4: Build the habit
- Collaborative steering is default for cross-discipline work
- Team catches conflicts and tradeoffs early
- Cycle times noticeably shorter than sequential handoffs
- Ready for Learning Plan 3 (Collaborative Discovery)
What's Next
You've learned to bring your lens to individual execution.
But lens-embedded work becomes powerful when entire teams do it in parallel. UX building prototypes that reflect UX thinking: context, real problems, platform coherence. PMs writing specs with product logic baked in. Engineers building with system constraints surfaced early. All happening simultaneously on real artifacts.
That's collaborative discovery. That's when months of sequential handoffs compress to weeks of parallel iteration.
Learning Plan 3: Collaborative Discovery
Shows you how to structure team workflows for parallel execution, combining multiple lenses on shared artifacts without creating chaos.
Coming soon. You're on the early access list.
Your Action Plan This Week
Today:
- Answer the three lens questions (30 minutes)
- Write your first lens prompt template
This week:
- Use lens prompt for one real deliverable
- Practice iterating with discipline-specific feedback
- Evaluate output using your quality checks
Next week:
- Invite 2 teammates to steer a shared artifact
- Run one collaborative steering session (3+ rounds)
- Practice driving and steering roles
The difference between generic AI outputs and lens-embedded outputs is whether your professional judgment shapes the work.
You just learned how to bring your lens to execution. Now go apply it.
About this Learning Plan
This is part of the Generalista Capability Expansion Framework: a structured system for building cross-discipline skills without burning out or losing your specialty.
Created by John Garvie, Head of Design at Evisort AI (Workday). Former design leader at Uber, Amplitude, and LinkedIn. First UX Researcher to transition into design leadership at Uber.