Contextual Inquiry · Behavioral Observation

Online Help & Guides

Contextual inquiry and behavioral observation to guide a platform migration and surface unmet user needs across RealPage's enterprise help system.

Overview

RealPage's Online Help and Guides system supports clients across a portfolio of more than 100 products. In 2022, the system was approaching end-of-life on its legacy platform and migrating to a new HTML5 environment called Magellan. The migration was inevitable, but it created an opportunity to do something the platform transition alone could not: understand how users actually sought help, and where the system was failing them.

This research effort emerged not from a recognized performance crisis but from a mandatory infrastructure change. The product team had analytics, but much of the data lacked explanatory power: counts of help access by technical source, for instance, revealed nothing meaningful about behavior. One signal stood out: users of RealPage's AI Revenue Management product (AIRM) represented the highest volume of Help system usage. That gave me a credible entry point.

The migration created a narrow but real opportunity: to ensure the new platform was shaped by behavioral insight rather than inherited assumptions from the legacy system. That's what the research was for.

Research Approach

The core methodological choice was direct observation over stated preference. Relying on retrospective feedback or prior survey data would not have surfaced how users actually navigated support content under real conditions. I structured the research around contextual inquiry in live production environments, watching users seek help while doing their actual work, not staged tasks in a controlled setting.

Phase 1: Internal Recruiting & Product Familiarization

I began by recruiting internally, targeting RealPage employees using AIRM. Observing how internal users, including Revenue Advisors with deep product knowledge, engaged with Help while navigating a highly specialized, analytically intensive product gave me two things: a picture of how even expert users experienced friction, and enough product familiarity to conduct credible sessions with external clients.

One internal session was particularly revealing. A Revenue Advisor demonstrated how brittle search performance required a workaround: adding a hyphen to search queries to yield usable results. The workaround worked, but its existence revealed something important: users had adapted to system failures rather than reporting them, and the adaptations themselves were invisible to the product team.

Phase 2: External Client Recruiting

For external recruiting, I leaned on internal participants and Sales Account Managers within the Data & Analytics (D&A) suite to identify qualified clients. Participants were free to navigate any RealPage product during sessions, but all independently selected tools from the D&A suite: AIRM, Business Intelligence (BI), or Performance Analytics Benchmarking (PAB). This self-selection was itself a data point: the users most engaged with Help were concentrated in analytically intensive, high-complexity workflows.

After several sessions, I sought clarification from internal product experts to ensure I had interpreted complex workflows correctly. When I encountered a gap between my understanding and a participant's described experience, I preserved the participant's account as stated, distinguishing between user perception and actual system behavior in my reporting rather than quietly correcting the record.

Phase 3: Synthesis & Recommendation Development

Across seven sessions, I coded for patterns in search behavior, navigation friction, content expectations, and moments of vulnerability: instances where users were stuck, uncertain, or had to leave the platform to find answers. This synthesis drove three targeted recommendation areas, each grounded in observed behavior rather than general UX principles.

Key Findings

Sessions surfaced friction across search, navigation, and content format, but one pattern emerged with enough consistency to warrant treatment as a structural gap rather than an isolated complaint.

"I used the feedback from your study to drive a conversation with the Help Center dev team and the Lumina team to improve the Help Center search page. I used your data to support my proposed changes."

— RealPage Product Manager, post-study follow-up

Users Needed Narrative Walkthroughs, Not Just Reference Content

The most consistent and consequential finding across sessions: the Help system lacked clear, sequential walkthroughs for complex or infrequent workflows. Users navigating high-stakes, low-frequency tasks found reference articles but not guidance. Participants expressed a consistent preference for content that framed where they were starting from, walked them through a process step by step, used real product screenshots for orientation, and explained the reasoning behind a workflow rather than just the mechanics. Several participants had responded to this gap by creating such guides themselves for their own teams. When expert users invest time building guides for their colleagues, it's because the official resource failed to meet a need they consider essential. The system wasn't just incomplete; users had concluded it wasn't worth waiting for.

Search Performance Required Workarounds

Users had developed adaptations to compensate for unreliable results. The hyphen workaround (adding a punctuation character to queries to force more relevant results) was not documented anywhere. It had been passed between colleagues informally. Users who didn't know the workaround either abandoned search and browsed manually, or left the platform to search externally. Both outcomes represented a complete failure of the help system's core function.

Filter Ambiguity Created Context Confusion

Users frequently switched between products during sessions, and the Help system's search filters didn't provide clear visual indication of their current scope. When results appeared sparse or irrelevant, users often didn't recognize that a product-scoped filter was constraining their search; they concluded the content didn't exist. This was a perception problem with real behavioral consequences: users abandoned valid searches, escalated to Support unnecessarily, or constructed their own workarounds.

Feedback Collection Was Structurally Buried

The existing feedback mechanism was positioned at the bottom of Help pages, which could run to considerable length for complex topics. Users who encountered problems mid-page, the most valuable moment to capture feedback, had no accessible path to report it without scrolling past the content they were trying to use. The friction wasn't just inconvenient; it meant that the feedback the product team most needed was the least likely to be submitted.

Impact & Reflection

This project operated without formal research funding and outside the core product team's established workflow. Research findings reached the product team through persistent relationship-building and direct presentation rather than a formal research operations structure. As a result, the path from insight to action was more circuitous than ideal, but it did produce documented action.

Search page improvements were driven directly by study data, as confirmed by the product manager's follow-up. Findings were used to frame problem statements and desired future states presented at an executive steering committee meeting, contributing to continued investment in the Help Center. The Magellan migration was informed by behavioral insight it would not otherwise have had.

What this project reinforced:

  • The value of an analytical entry point. Starting with the one meaningful signal in a weak analytics dataset (AIRM users' disproportionate Help volume) gave the research a credible anchor and shaped recruiting in a way that produced rich, domain-specific behavioral insight. A vaguer starting point would have produced vaguer findings.
  • The diagnostic power of user adaptations. The hyphen workaround and the colleague-built guides were not just anecdotes. They were evidence of failures the system hadn't acknowledged, problems users had given up reporting because reporting didn't seem to lead anywhere. Finding those adaptations required being present during real work, not asking retrospective questions.
  • The importance of research operations infrastructure. The fragmented nature of this project (no formal funding, no established access pathway, no dedicated recruiting support) made everything harder than it needed to be. It pushed me toward building more visible internal and external contact networks to support future research access, and toward advocating more proactively for the organizational support that sustained research engagement requires.

Artifacts

The following materials were produced during this engagement:

  • Research plan (PDF) — study design, recruiting rationale, and session structure
  • Participant recruiting message (PDF) — outreach framing used for internal and external sessions
  • Search recommendation documentation — annotated before/after examples with design rationale
  • Feedback modal recommendation — floating draggable modal concept with interaction specification
  • Self-directed learning content recommendations — narrative walkthrough format guidance with example structure
  • Session synthesis notes — coded patterns across seven sessions

Research plan and recruiting materials are available as public PDFs. Detailed session notes and client-specific observations are subject to NDA restrictions and can be discussed in portfolio conversations.