Universal Banker Application: Information Architecture Framework
A shared mental model of banking operations is the foundation every core experience can build on.
Overview
Fiserv's Universal Banker Application (UBA) is being built feature-by-feature from a legacy product called Integrated Teller. Over time, Integrated Teller has accumulated 13 tabs, each representing a product-centric addition rather than a task-based user need. This structure is migrating directly into UBA's navigation, creating what I describe as a "junk drawer" that would severely undermine any meaningful usability testing.
The problem is straightforward: participants won't be able to properly orient themselves in any related prototype, not because the features or workflows within them are poorly designed, but because the navigation structure itself is incoherent. Testing workflows under these conditions would contaminate every study with navigation and orientation issues unrelated to the actual task being evaluated.
I was brought in as a contract researcher to scope and execute usability studies as features become ready for development. The team's working approach is to resolve primary navigation later, once more features are complete. That sequence keeps development moving, but it creates compounding costs: usability studies conducted against an unresolved navigation structure surface orientation problems unrelated to the features being evaluated, and cross-functional calls spend time re-establishing shared terminology that a stable structure would already settle. What I saw was a more foundational problem: one that could be worked abstractly and validated externally before a single feature was finalized, and one that needed to be solved before usability testing could be meaningful. What started as a tactical engagement has evolved into a multi-phase validation effort with potential application across multiple Fiserv core banking and teller products, positioning the work as a foundation for the broader "One Fiserv" initiative to create visual and structural consistency across the product suite.
The work spans three completed rounds of external tree testing with screened banking professionals, SME sessions between rounds that sharpened both task wording and structural clarity, and a final moderated round with known Fiserv clients. The validation trajectory is clear: an initial 67.1% pass rate in v1 rose to 85.6% in v2, then exceeded 97% in v3 with senior client participants whose discriminating critique helped sharpen labels to conform to established mental models and semantic associations. Category boundaries are now obvious to users without needing to read domain rules. The challenge ahead is organizational adoption: getting the validated framework implemented as the navigation foundation for UBA and potentially across Fiserv's broader product suite.
Research Approach
The work follows a structured progression from internal structural analysis through iterative external validation with banking professionals.
Phase 1: Structural Analysis
I reverse-engineered the 13 existing Integrated Teller tabs into 6 first-principles, task-based categories. Each category was defined by inclusion and exclusion criteria documented in governance materials designed to prevent future navigation accumulation. Representative features were mapped to the proposed structure to demonstrate how the taxonomy would accommodate both current functionality and future additions.
The framework prioritizes task-oriented mental models over product-centric legacy organization. Categories are designed to answer "what am I trying to accomplish?" rather than "which product feature is this?"
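As a rough illustration of the consolidation logic, the mapping can be thought of as a many-to-one table from legacy tabs to task-based domains, with a simple check that the result is strictly reductive. The tab and domain names below are invented placeholders, not the actual validated labels:

```python
# Illustrative sketch of a many-to-one consolidation mapping.
# All tab and domain names here are hypothetical stand-ins.
CONSOLIDATION = {
    # legacy Integrated Teller tab -> proposed task-based domain
    "Checks":          "Transactions",
    "Cash Advances":   "Transactions",
    "Wires":           "Transactions",
    "Customer Lookup": "Customer Management",
    "Account Notes":   "Customer Management",
    "Teller Limits":   "Administration",
}

# Confirm the mapping is total over the listed tabs and reductive:
# every legacy tab lands in a domain, and there are fewer domains.
domains = set(CONSOLIDATION.values())
assert len(domains) < len(CONSOLIDATION)
print(f"{len(CONSOLIDATION)} legacy tabs -> {len(domains)} domains")
```

The real consolidation follows the same shape at larger scale: 13 legacy tabs collapsing into 6 task-based domains.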
Integrated Teller · Legacy Navigation → Proposed IA
13 Legacy Categories → 6 Proposed Domains
Phase 2: Unmoderated Tree Test → Round 1
To test whether the proposed structure made intuitive sense to banking professionals who had never encountered UBA or Integrated Teller, I designed and fielded a tree test through UserTesting with a screened panel of banking VPs and system administrators from multiple countries. The test included 18 tasks representative of common banking workflows, with success measured by first-click accuracy, task completion rate, and ultimate correct selection.
The initial overall pass rate was 67.1% — a strong result for a first-iteration tree test conducted with unfamiliar, international participants navigating an abstract interface. The test also surfaced specific areas where task wording and category labels could be refined without requiring structural changes to the underlying framework.
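For concreteness, the success measures named above can be computed from per-task result records like the fabricated ones below; real tree-testing tools export similar data, though field names vary by platform:

```python
# Illustrative tree-test scoring. The result records are fabricated
# for the example; they are not data from the actual study.
results = [
    # (participant, task, first_click_correct, final_selection_correct)
    ("P1", "T1", True,  True),
    ("P1", "T2", False, True),   # recovered after a wrong first click
    ("P2", "T1", True,  True),
    ("P2", "T2", False, False),  # never found the correct node
]

first_click = sum(r[2] for r in results) / len(results)
pass_rate   = sum(r[3] for r in results) / len(results)

print(f"First-click accuracy: {first_click:.1%}")  # 50.0%
print(f"Overall pass rate:    {pass_rate:.1%}")    # 75.0%
```

The gap between first-click accuracy and final pass rate is itself diagnostic: participants who recover after a wrong first click suggest label friction rather than structural failure.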
Phase 3: SME Refinement Sessions
Between rounds of external testing, I conducted sessions with internal subject matter experts from other Fiserv core banking and teller products. These conversations served as a structured refinement bridge: SMEs reviewed the IA framework, the navigation model, and the tree test task wording, surfacing terminology mismatches and labeling friction that external participants had signaled through their navigation behavior but could not always articulate precisely.
Their feedback informed targeted adjustments to category labels, navigational groupings, and task phrasing ahead of the second round of testing. SMEs also confirmed that the framework was not only applicable to UBA, but could extend across their own product areas, an early signal of cross-product viability.
Phase 4: Unmoderated Tree Test → Round 2
The second tree test was designed to measure the impact of SME-informed refinements with a larger participant pool. The study used 9 tasks (compared to 18 in round 1) and 10 screened participants (compared to 5), allowing for a sharper focus on the most structurally significant navigation decisions and greater confidence in the results.
The improvement was substantial. Overall task success rose to 85.6%, an 18.5 percentage point gain over round 1. Four participants achieved a perfect 9/9 score, and six of ten scored 90% or higher. Multiple tasks reached 90-100% success rates, and post-test participant feedback consistently described the structure as clear, intuitive, and aligned with how they expect banking system settings to be organized.
Phase 5: Moderated Tree Test → Round 3 (Complete)
The third and final round of tree testing was conducted with 10 senior-level Fiserv client participants, including system administrators, VPs of Operations, and other roles directly relevant to UBA's intended user base. Unlike the two prior unmoderated rounds, this study was moderated, allowing for deeper dialogue on labeling choices and category logic.
The results exceeded expectations: greater than 97% cumulative task success. More valuable than the score was the quality of participant feedback. Senior client participants provided discriminating critique that helped sharpen labels to conform to established mental models and semantic associations. The refinements improved category boundary clarity to the point where the structure is now obvious to users without needing to read domain rules.
The framework is validated. The path forward is organizational adoption.
Key Findings
Domain Knowledge as Foundation
Building a credible information architecture for a core banking product requires more than card sorting and category logic. It requires understanding what banking professionals actually do, the regulatory environment they operate within, and the institutional variation that shapes how different organizations prioritize workflows.
Before proposing any structural consolidation, I invested in understanding compliance and prudential regulations governing banking transactions; the competitive landscape across traditional banks, credit unions, neo-banks, and fintechs; how institution type, size, and risk profile shape operational priorities and top-level metrics; and the geographic variation in regulatory requirements that Fiserv products need to accommodate across their global client base.
This domain knowledge wasn't background context. It was the analytical foundation that made the 13-to-6 consolidation defensible rather than arbitrary. Without it, the framework would have been a designer's guess. With it, it became a structure banking professionals could navigate without ever having seen the product.
This same logic extends directly to AI readiness. A validated domain taxonomy, one that reflects how banking professionals actually think about their work, confirmed across three rounds of testing with no fundamental structural revision, is the prerequisite for trustworthy agentic systems in regulated environments. Agents don't negotiate ambiguity. They execute on context. If the context is a taxonomy that hasn't been validated against real mental models, the system is confidently operating on someone's internal assumption. Deloitte's February 2026 analysis of Domain-Driven Design in legacy banking modernization makes the same argument from the technical side, that bounded context clarity is the foundation modernization requires. This work approaches that same boundary from the user research and design side, and arrives at the same conclusion.
Client Validation Exceeds 97% — Round 3
The final moderated round with 10 senior-level Fiserv client participants achieved greater than 97% cumulative task success. Participants included system administrators, VPs of Operations, and other roles directly relevant to UBA's intended user base. Their critique refined the labels against established mental models and semantic associations, leaving category boundaries clear enough that the structure is obvious without reading domain rules.
The framework is validated across three rounds of testing with no fundamental structural revision required.
Navigation Structure Validated — Round 1
The first round of external validation tested whether the proposed structure made intuitive sense to banking professionals with no prior exposure to UBA or Integrated Teller. Eighteen tasks were fielded through a screened UserTesting panel of banking VPs and system administrators from multiple countries.
The overall pass rate of 67.1% was a strong result for a first-iteration tree test conducted on an abstract interface with unfamiliar international participants. One participant achieved 94.4% accuracy — 17 of 18 tasks correct — demonstrating that the framework's ceiling was reachable. The test also identified specific task wording and labeling friction that pointed toward clear, targeted refinements without requiring structural overhaul.
SME Sessions Sharpen the Framework
Between rounds of external testing, sessions with internal subject matter experts from other Fiserv core banking and teller products provided a structured opportunity to interrogate the framework from the inside. SMEs reviewed the IA categories, the navigation model, and the tree test task wording, identifying terminology mismatches and labeling ambiguities that external participants had signaled but couldn't always name precisely.
Their feedback informed targeted adjustments to category labels, navigational groupings, and task phrasing, changes designed to reduce cognitive friction without altering the underlying structural logic the first round had already validated.
Validation Strengthens Across Rounds
The impact of SME-informed refinements is most clearly visible in the jump from round one to round two. With 10 participants and 9 tasks, the second tree test was designed for sharper focus and greater statistical confidence. The results were unambiguous.
Overall task success rose to 85.6%, an 18.5 percentage point improvement over round one. Where one participant had cleared 90% in round one, six did in round two. Four participants achieved a perfect 9/9 score. Multiple tasks reached 90–100% success rates across the panel. Post-test feedback consistently described the structure as clear, intuitive, and aligned with how participants expect banking system settings to be organized.
The pattern is not just improvement; it is the framework becoming reliably learnable across a broader range of participants.
Cross-Product Applicability Confirmed
SME sessions surfaced a finding that extended beyond UBA's immediate scope. Every SME who reviewed the framework confirmed it would work for their own core banking and teller products — not just the product it was originally designed for.
This cross-product signal positions the IA framework as potential shared infrastructure across Fiserv's banking product suite — a structural foundation that could reduce fragmentation, improve consistency, and lower training and support overhead for clients using multiple Fiserv products. It is an early but meaningful indication that the framework's logic is sound at a category level, not just within a single product context.
Governance Documentation Prevents Regression
The inclusion and exclusion criteria documented for each IA category create a decision-making framework for future feature additions. As UBA continues to expand, this governance layer provides clear guidance on where new functionality belongs and why, preventing the gradual accumulation of misplaced features that produced the navigation problem this work was designed to solve. The value of this documentation compounds over time: it is most useful not at launch, but at every subsequent product decision that follows.
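As a loose sketch of how documented inclusion and exclusion criteria could gate future feature placement, the check below reduces the prose rules to keyword sets. The domain names and keyword rules are invented placeholders; the actual governance materials are richer prose criteria, and ambiguous features would escalate to human review:

```python
# Hypothetical governance check: exclusion rules veto a match even
# when inclusion rules fire. All names and keywords are placeholders.
GOVERNANCE = {
    "Transactions":   {"include": {"deposit", "withdrawal", "transfer"},
                       "exclude": {"limit", "override"}},
    "Administration": {"include": {"limit", "override", "role"},
                       "exclude": set()},
}

def place_feature(keywords: set) -> str:
    """Return the first domain whose include rules match and whose
    exclude rules do not; return None to escalate to review."""
    for domain, rules in GOVERNANCE.items():
        if keywords & rules["include"] and not keywords & rules["exclude"]:
            return domain
    return None

print(place_feature({"transfer"}))           # Transactions
print(place_feature({"limit", "override"}))  # Administration
print(place_feature({"unknown"}))            # None -> escalate
```

The point of the sketch is the shape of the decision, not the rules themselves: exclusion criteria do as much work as inclusion criteria in keeping categories from reaccumulating into a junk drawer.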
Notably, across every phase of this work, from initial structural analysis through three rounds of external tree testing and multiple SME sessions, the governing logic of the framework has remained intact. Minor terminology adjustments have improved clarity at the surface, but the categorical structure itself has required no fundamental revision. Each iteration has confirmed its correctness rather than challenged it.
"We're blocked from meaningful usability tests because the current navigation is incoherent. Participants will get lost in structure, not features. If we adopt this framework now, we can more readily validate workflows."
Impact & Reflection
The validation trajectory on this project is clear and strong. Three completed rounds of external tree testing with screened banking professionals, SME sessions that sharpened the framework between rounds, and a final moderated round with known Fiserv clients: the research has done what research is supposed to do, reducing uncertainty, building evidence, and pointing toward a decision.
What the research has delivered to date:
A validated IA framework that rose from 67.1% to 85.6% to greater than 97% overall task success across three rounds, culminating in a moderated study with 10 senior-level Fiserv client participants whose critique sharpened labels against established mental models, making category boundaries obvious without reading domain rules.
Cross-product applicability confirmed by SMEs from multiple Fiserv product areas.
Governance documentation that gives the team a durable decision-making tool for future feature additions.
What remains unresolved is not the research. It is the organizational question of when and how the findings get acted on. That gap between evidence and adoption is a reality of doing strategic research inside a product development process that is still in motion. The team is heads-down on feature delivery, and structural questions like navigation are easy to defer when shipping feels more urgent. Naming that dynamic honestly is more useful than pretending it isn't there, or understating what's actually at stake.
There is a broader reason this work matters beyond navigation. A validated domain model, one that banking professionals recognize without training, is a prerequisite for any credible agentic AI integration. If internal teams and clients cannot agree on how banking operations are categorized, scoping AI agents against that structure is guesswork. This IA framework is not just a navigation fix. It is the semantic foundation that any future automation, intelligent routing, or agent-assisted workflow would need to operate reliably.
What this project continues to reinforce is that the work worth doing is rarely the work you were asked to do. I was engaged to run usability studies. What I saw was a problem that would have made those studies unreliable until it was solved. Solving it first — abstractly, rigorously, and in advance of feature completion — is the kind of contribution that doesn't always show up on a sprint board but shapes everything that follows, from Jira tickets to leadership messaging.
The moderated client round will close the external validation loop. What comes next is a communication and alignment challenge, and one I intend to meet directly.