AI Practice
Grounding AI in Real User Behavior
AI systems inherit the quality of the understanding that precedes them. The research documented here is the work that makes intelligent systems trustworthy.
The Prerequisite
Automation only succeeds when the work it replaces or augments is already visible, structured, and trusted.
That precondition is not technical. It is epistemological: surfacing what is tacit, quantifying what is assumed, and making implicit logic explicit before any system is asked to act on it. Applied anthropology has been doing this work for decades under different names. The core disciplinary claim has not changed: you cannot design intelligent systems around how people say they work. You have to study how they actually work.
That training shapes how I conduct research. When executives and analysts are in the same room, seniority does not determine where I direct my attention. Operational specificity does. The person closest to the work holds the knowledge the system needs. Finding that person, earning their candor, and translating what they know into a form a system can act on — that is the prerequisite work. Most organizations have not done it yet.
Evidence, Not Opinion
The portfolio you are reading was built conversationally using Vercel's V0. Design direction came from Anthropic's Claude. The assistant you can talk to in the corner of this page is also Claude, configured to answer questions about this body of work. These are not decorative choices. They are evidence.
Building a production-quality portfolio this way required understanding what these tools can and cannot do, how to decompose a design problem into prompts that produce usable output, how to verify and refine that output iteratively, and how to integrate an AI assistant into a product experience in a way that actually serves visitors rather than distracting them. That is applied AI work in the product development space. The artifact you are looking at is the proof.
The same disposition applies to the research and product work described throughout this portfolio: AI tools are most useful when the person using them understands both the capability and the limits, and has done the groundwork to deploy them responsibly.
How I Work With AI Tools
I use AI tools extensively in my research practice, not as a replacement for judgment but as a way to extend analytical capacity. The approach follows a consistent pattern:
Decomposing the Work
Breaking complex research problems into discrete, well-scoped tasks that can be addressed independently. Clarity about what each component requires makes it possible to identify where AI assistance adds value and where human judgment remains essential.
Parallelizing Across Teams
Structuring work so that multiple streams can progress simultaneously without creating dependencies that block progress. This applies to human teams and to AI-assisted workflows alike.
Verification Loops
Building checkpoints where outputs are validated against source material, domain knowledge, and stated objectives. AI-generated content requires the same scrutiny as any other input to a research process.
Iteration Toward Deliverables
Treating initial outputs as drafts that require refinement, not finished products. The value of AI assistance is in accelerating the iteration cycle, not in eliminating it.
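The pattern above can be sketched in code. This is a minimal, hypothetical illustration of the draft-verify-refine cycle, not an implementation of any specific tool: `generate_draft` and the check functions stand in for whatever produces and evaluates each iteration.

```python
# Minimal sketch of the verification-loop pattern: treat each output as a
# draft, validate it against explicit checks, and feed issues back into the
# next iteration. All names here are hypothetical stand-ins.
from typing import Callable, Optional


def refine_until_verified(
    generate_draft: Callable[[str, list], str],
    checks: list,  # each check returns an issue string, or None if it passes
    task: str,
    max_iterations: int = 5,
):
    """Iterate until a draft passes every verification checkpoint."""
    feedback: list = []
    for iteration in range(1, max_iterations + 1):
        draft = generate_draft(task, feedback)
        issues = [issue for check in checks if (issue := check(draft))]
        if not issues:
            return draft, iteration  # passed every checkpoint
        feedback = issues  # next iteration addresses the flagged issues
    raise RuntimeError(f"No draft passed verification in {max_iterations} rounds")


# Toy usage: a stand-in "tool" that only cites sources once asked to.
def toy_tool(task: str, feedback: list) -> str:
    return f"{task} [sources cited]" if feedback else task


def cites_sources(draft: str) -> Optional[str]:
    return None if "[sources cited]" in draft else "Missing source citations"


result, rounds = refine_until_verified(toy_tool, [cites_sources], "Summarize findings")
```

The point of the sketch is the shape of the loop, not the stubs: verification is explicit, issues are fed forward, and the cycle terminates rather than trusting any single output.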
Where the Work Shows Up
Trustworthy agentic systems in regulated environments require a domain taxonomy that reflects how banking professionals actually think about their work. The taxonomy documented in this project was challenged across three rounds of external testing and held up at the structural level throughout. Agents don't negotiate ambiguity. They execute on context. The six-domain IA framework is that context, built from the user research side and validated externally before any AI system was asked to act on it.
The goal was automation. The problem was that the decision logic any automated system would need existed only in analyst memory, transferred informally through turnover and invisible to any tool that had not first sat with the people doing the work. The research produced a structured translation of that logic into explicit triage rules. That document was the prerequisite. Without it, automation could only reduce steps. With it, automation could be trusted.
The product team wanted to build an executive-facing scoring system. The research included both executives and analysts in the same sessions. The detail that mattered came from the analysts. Their operational specificity revealed that the problem was not the absence of a score — it was that the work driving performance outcomes was invisible to the people responsible for it. The research invalidated the original thesis and pointed toward a simpler, more feasible direction: making operational work visible and connected to financial outcomes.
The Performance Kanban prototype extends the same argument to property management operations. Site teams execute work that connects directly to NOI and asset valuation. That connection is rarely visible to the people making it. Making it visible changes behavior, enables accountability, and creates the operational clarity any AI system supporting those teams would require.
What I Can Help Organizations Do
- Surface the tacit knowledge that exists only in the heads of experienced practitioners.
- Document decision logic in forms that can inform automation design.
- Quantify operational dynamics to establish baselines for measuring AI impact.
- Identify where AI assistance adds genuine value versus where it introduces risk.
- Design verification frameworks that maintain human oversight without eliminating efficiency gains.
- Build organizational understanding of what responsible AI adoption actually requires.
The research methods I have developed across financial services, real estate technology, and enterprise software are directly applicable to AI readiness work. The question is whether organizations are willing to do the foundational work before deploying the technology.
I can help organizations get there.