Multifamily Analytics & Reporting
The product team had a measurement solution. Research confirmed the problem was visibility. Those are not the same thing.
Overview
RealPage was building a scoring system to help senior multifamily executives assess portfolio health at a glance. The concept was internally driven, still unproven with clients, and already under pressure: data science teams working to define the scoring logic were discovering that the number of client properties that could be fully scored shrank dramatically as product subscription requirements became more precise.
I was brought in to lead exploratory research that could ground the product direction in actual client behavior. Before the first session was scheduled, the internal framing had already shifted, from executive users to the asset managers and data analysts whose workflows any future automation would have to accommodate.
My job was to understand how the work actually gets done: what tools people trust, where reporting breaks down, and what the gap looks like between the data analysts produce and the decisions it's meant to support. Every session included executives and analysts together. The operational detail the analysts provided, specific, workflow-level, and granular in ways executive accounts rarely are, was what made the problem visible at the resolution the research required.
Research Approach
With a compressed five-week window and a product direction still evolving internally, the research had to be designed for credibility as much as discovery. Participants were senior representatives from large, institutionally complex client organizations. Arriving underprepared was not an option.
Phase 1: Preparation
Before any session was scheduled, I built detailed profiles for each client organization using publicly available data: portfolio size, geographic footprint, growth trajectory, operating model, and investment thesis. These were not background reading. They were working tools that shaped how I framed each session, established credibility with participants early, and accelerated synthesis by grounding responses in firm-specific context rather than generic patterns.
Phase 2: Field Research
I conducted three remote sessions across TruAmerica, UBS Realty Investors, and FPI Management with nine participants in total; every session included multiple people from the client side. Each session was moderated as a behavioral inquiry focused on how work actually gets done: what tools people trust, where reporting breaks down, and what triggers a decision to investigate further or take action.
Phase 3: Synthesis
Following the sessions, I developed two personas — Analyst and Asset Manager — not as report appendices, but as facilitation tools for a cross-team synthesis session conducted before the final report was complete. The goal was to share personas informed directly by the client sessions while the product direction was still being discussed.
Key Findings
Research conducted across three operationally distinct firms made one thing clear: client priorities, operating models, and internal expertise informed asset management decisions far more than any single feature or product.
"Don't just give us a dashboard with more widgets and metrics. Give us recommendations. Tell us what to do."
Operating Model Shapes Everything
No two firms used data the same way. TruAmerica prioritized portfolio-level control through proprietary financial models. UBS used market data defensively, to challenge appraisers and protect valuations. FPI needed standardization at scale across diverse client portfolios. A single scoring architecture couldn't serve all three without becoming meaningless to at least two.
Analyst Dependency Is a Structural Risk
Across all three firms, complex reporting capabilities were concentrated in one or two individuals. When those people were unavailable, reporting stalled. Simplifying tool access wasn't a nice-to-have. It was a continuity risk.
This is also an AI readiness finding. Analytical capability concentrated in one or two individuals, undocumented, untransferable, and invisible to any system that hasn't sat with those people, is precisely the condition that makes automation fragile. Before any intelligent system could support these workflows, that knowledge would need to be surfaced and made explicit. It hadn't been.
Manual Effort Fills Every Gap
All three clients exported data to Excel for further manipulation. Reconciling date-of-record differences, exception criteria, and source discrepancies between RealPage, Yardi, and internal systems consumed an estimated 30 percent or more of analyst time before any actual analysis began.
Data Confidence Is Institutional Knowledge
RealPage and Yardi apply different inclusion and exclusion criteria, creating surface-level discrepancies that resolve when data is evaluated at the right altitude and over the right time horizon. Analysts who had done that reconciliation work knew the differences were negligible for portfolio-level decisions, and adoption tracking confirmed it: properties using RealPage tools outperformed those that didn't. That knowledge hadn't traveled. The organizational knowledge gap was the client's problem to solve. RealPage's opportunity was narrower: make the methodology transparent enough that no one needed an internal champion to trust the output.
Familiarity Beats Abstraction Every Time
A new scoring system asks users to learn how it weights functional areas, understand how those areas ladder up to a composite, and then decide whether to trust it before it can tell them anything they don't already know. Executives who already speak the language of cap rate compression, NOI trends, and occupancy velocity don't need a translation layer. They need clearer line of sight between the work happening on the ground and the outcomes they're accountable for. That's a visibility problem, not a measurement problem.
It is also an AI readiness problem. Any system built on the product team's framing, a composite score abstracting operational complexity into a single number, would have been wrong not because the engineering was inadequate, but because the problem framing was. The analyst sessions are what made that visible.
Impact & Reflection
Research findings were delivered to product leadership before the scoring model had been formally committed to internally. The study didn't rule out the concept. A score-based model could offer baseline utility for certain executive use cases. What it surfaced was more uncomfortable: the operating model variation across client firms meant the effort required to make a universal score meaningful would likely outweigh the value it delivered, and the data infrastructure required to power it was still collapsing inward under its own requirements.
Simpler paths to decision support existed and were documented. The Performance Kanban concept, developed independently in response to what the research revealed, proposed connecting valuation targets to operational execution at the site level, bidirectionally, without requiring users to learn a new scoring language before the tool could tell them anything useful. It was acknowledged as a strong idea. It was not pursued.
The team had aligned around the scoring direction before findings were delivered. Organizational circumstances prevented wider circulation. What the research had done, redirecting the problem frame before significant development resources were committed to the wrong direction, didn't fully register at the time. It does in retrospect.
What this project sharpened was:
- my ability to operate with rigor inside significant constraint
- my domain expertise, built from the ground up in a field I had no prior background in
- my capacity to prepare for and moderate sessions with nine senior participants across three institutionally complex firms
- my ability to produce a synthesis that was honest about what the research found and useful to a team that wasn't fully ready to hear it
That calibration, between what you know, what you can say, and what the room can hold, is something I carried forward.
Artifacts
Client Profile: TruAmerica Multifamily
One of three client profiles developed from public sources prior to field research. Each profile documented portfolio size, growth trajectory, geographic footprint, operating model, and investment thesis, not as background reading, but as preparation tools that shaped how each session was framed and accelerated synthesis by grounding participant responses in firm-specific context.
Arriving at sessions with institutional clients at this level of preparation was not optional. It was the baseline for credibility. A researcher who doesn't know that TruAmerica manages 60,000 units across 200 properties, relies on third-party operators, and pursues a value-add strategy in Class B markets cannot ask the questions that surface the gaps.
This profile represents the standard of domain preparation applied across all three participating firms.
View Profile
Multifamily Financial Calculator
A dynamic 10-year financial model built to support domain fluency and prototype fidelity. Adjustable inputs include unit mix, occupancy by year, rental rates, purchase price, interest rate, and operating expenses — with cascading effects on NOI, DSCR, cap rate, IRR, and a GP/LP waterfall distribution. Originally developed to teach designers how multifamily metrics interact in practice. Used during MFA to generate financially coherent mock data for prototypes, where scenario narratives, before and after a performance intervention, reconcile as a matter of course rather than as visual approximation.
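The core relationships the calculator models can be illustrated with a minimal sketch. This is not the actual tool: the figures are hypothetical, the horizon is a single year, and IRR and the GP/LP waterfall are omitted; it only shows how NOI, cap rate, and DSCR cascade from a few operating inputs.

```python
# Illustrative single-year multifamily metrics. All inputs are hypothetical
# examples, not data from any client firm or from the actual calculator.

def annual_metrics(units, avg_monthly_rent, occupancy, opex_ratio,
                   purchase_price, annual_debt_service):
    gross_potential_rent = units * avg_monthly_rent * 12
    effective_income = gross_potential_rent * occupancy   # vacancy-adjusted
    operating_expenses = effective_income * opex_ratio
    noi = effective_income - operating_expenses           # Net Operating Income
    cap_rate = noi / purchase_price                       # unlevered yield on price
    dscr = noi / annual_debt_service                      # debt service coverage
    return {"noi": noi, "cap_rate": cap_rate, "dscr": dscr}

m = annual_metrics(units=200, avg_monthly_rent=1500, occupancy=0.94,
                   opex_ratio=0.40, purchase_price=45_000_000,
                   annual_debt_service=1_600_000)
print(f"NOI: ${m['noi']:,.0f}  Cap rate: {m['cap_rate']:.2%}  DSCR: {m['dscr']:.2f}")
```

The cascade is the point: nudge occupancy or rents and NOI moves, which moves cap rate and DSCR in lockstep, which is why mock data generated this way reconciles across a prototype's before-and-after scenarios instead of merely looking plausible.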
View Calculator
Multifamily Investment Strategy Decision-Making Guide
A structured domain primer developed to equip designers and researchers working across RealPage's Data and Analytics and Investment Intelligence products with a working understanding of how multifamily investors actually think. Covers Buy/Sell/Hold strategy, repositioning risk profiles by investor type, the Capital Triangle framework, and capital stack dynamics — with explicit attention to how the same asset can be valued differently depending on who is doing the assessing and what they are optimizing for.
The guide was not developed for clients. It was developed for internal teams who were designing solutions for clients they didn't yet fully understand. The premise is straightforward: workflow design improvements in a domain this financially consequential require researchers and designers to speak the language before they can contribute meaningfully.
View Guide
Additional artifacts, including participant screeners, synthesis frameworks, and usability test scripts, are available upon request.