Multifamily Analytics & Reporting
When the product vision shifted mid-project, research clarified what a scoring model could and couldn't do, and mapped a clearer path to decision support for the client executives it was meant to serve.
Overview
RealPage was building a scoring system to help senior multifamily executives assess portfolio health at a glance. The concept was internally driven, still unproven with clients, and already under pressure: as product subscription requirements became more precise, the data science teams defining the scoring logic were discovering that the number of client properties that could be fully scored shrank dramatically.
I was brought in to lead exploratory research that could ground the product direction in actual client behavior. Before the first session was scheduled, the internal framing had already shifted, from executive users to the asset managers and data analysts whose workflows any future automation would have to accommodate.
My job was to understand how those people actually work: what tools they trust, where existing systems break down, and whether a standardized scoring model would support or complicate the decisions it was designed to simplify. The answer, it turned out, was neither a clean yes nor a clean no.
Research Approach
With a compressed five-week window and a product direction still evolving internally, the research had to be designed for credibility as much as discovery. Participants were senior representatives from large, institutionally complex client organizations. Arriving underprepared was not an option.
Phase 1: Preparation
Before any session was scheduled, I built detailed profiles for each client organization using publicly available data: portfolio size, geographic footprint, growth trajectory, operating model, and investment thesis. These were not background reading. They were working tools that shaped how I framed each session, established credibility with participants early, and accelerated synthesis by grounding responses in firm-specific context rather than generic patterns.
Phase 2: Field Research
I conducted three remote sessions across TruAmerica, UBS Realty Investors, and FPI Management, with nine participants in total; every session included multiple people from the client side. Each session was moderated as a behavioral inquiry focused on how work actually gets done: what tools people trust, where reporting breaks down, and what triggers a decision to investigate further or take action.
Phase 3: Synthesis
Following the sessions, I developed two personas — Analyst and Asset Manager — not as report appendices, but as facilitation tools for a cross-team synthesis session conducted before the final report was complete. The goal was to share personas informed directly by the client sessions while the product direction was still being discussed.
Key Findings
Research conducted across three operationally distinct firms made one thing clear: client priorities, operating models, and internal expertise informed asset management decisions far more than any single feature or product.
"Don't just give us a dashboard with more widgets and metrics. Give us recommendations. Tell us what to do."
Operating Model Shapes Everything
No two firms used data the same way. TruAmerica prioritized portfolio-level control through proprietary financial models. UBS used market data defensively, to challenge appraisers and protect valuations. FPI needed standardization at scale across diverse client portfolios. A single scoring architecture couldn't serve all three without becoming meaningless to at least two.
Analyst Dependency Is a Structural Risk
Across all three firms, complex reporting capabilities were concentrated in one or two individuals. When those people were unavailable, reporting stalled. Simplifying tool access wasn't a nice-to-have. It was a continuity risk.
Manual Effort Fills Every Gap
All three clients exported data to Excel for further manipulation. Reconciling date-of-record differences, exception criteria, and source discrepancies between RealPage, Yardi, and internal systems consumed an estimated 30 percent or more of analyst time before any actual analysis began.
Data Confidence Is Institutional Knowledge
RealPage and Yardi apply different inclusion and exclusion criteria, creating surface-level discrepancies that resolve when data is evaluated at the right altitude and over the right time horizon. Analysts who had done that reconciliation work knew the differences were negligible for portfolio-level decisions, and adoption tracking confirmed it: properties using RealPage tools outperformed those that didn't. That knowledge hadn't traveled. The organizational knowledge gap was the client's problem to solve. RealPage's opportunity was narrower: make the methodology transparent enough that no one needed an internal champion to trust the output.
Familiarity Beats Abstraction Every Time
A new scoring system asks users to learn how it weights functional areas, understand how those areas ladder up to a composite, and then decide whether to trust it before it can tell them anything they don't already know. Executives who already speak the language of cap rate compression, NOI trends, and occupancy velocity don't need a translation layer. They need clearer line of sight between the work happening on the ground and the outcomes they're accountable for. That's a visibility problem, not a measurement problem.
Impact & Reflection
Research findings were delivered to product leadership before the scoring model had been formally committed to internally. The study didn't rule out the concept. A score-based model could offer baseline utility for certain executive use cases. What it surfaced was more uncomfortable: the operating model variation across client firms meant the effort required to make a universal score meaningful would likely outweigh the value it delivered, and the data infrastructure required to power it was still collapsing inward under its own requirements.
Simpler paths to decision support existed and were documented. The Performance Kanban concept, developed independently in response to what the research revealed, proposed connecting valuation targets to operational execution at the site level, bidirectionally, without requiring users to learn a new scoring language before the tool could tell them anything useful. It was acknowledged as a strong idea. It was not pursued.
The team had aligned around the scoring direction before findings were delivered. Organizational circumstances prevented wider circulation. I moved on to other project work.
What this project sharpened was:
- my ability to operate with rigor inside significant constraint
- my domain expertise, built from the ground up in a field I had no prior background in
- my capacity to prepare for and moderate sessions with nine senior participants across three institutionally complex firms
- my ability to produce a synthesis that was honest about what the research found and useful to a team that wasn't fully ready to hear it
That calibration, between what you know, what you can say, and what the room can hold, is something I carried forward.
Artifacts
Below are representative samples of the research artifacts produced during this project. Proprietary details have been anonymized to protect confidentiality.
Additional artifacts, including participant screeners, synthesis frameworks, and usability test scripts, are available upon request.