Universal Banker Application: Information Architecture Framework
When legacy navigation becomes a usability blocker, systematic research and validation can create a foundation that scales beyond a single product.
Overview
Fiserv's Universal Banker Application (UBA) was being built feature by feature from a legacy product called Integrated Teller. Over time, Integrated Teller had accumulated 13 tabs, each representing a product-centric addition rather than a task-based user need. This structure was migrating directly into UBA's navigation, creating what I can only describe as a "junk drawer" that would undermine any meaningful usability testing.
The problem was straightforward: participants wouldn't be able to find features, not because the features were poorly designed, but because the navigation structure itself was incoherent. Testing workflows under these conditions would contaminate every study with findability issues unrelated to the actual task being evaluated.
I was brought in as a contract researcher to run usability studies as frequently as possible as features became ready for development. What started as a tactical engagement evolved into a strategic framework with potential application across multiple Fiserv core banking and teller products, positioning the work as a foundation for the broader "One Fiserv" initiative to create visual and structural consistency across the product suite.
The challenge was twofold: reverse-engineer a coherent structure from 13 legacy tabs, then validate it externally with banking professionals who had never seen the interface before, all while operating as a contractor without formal authority to enforce adoption.
Research Approach
The work followed a structured progression from internal analysis through external validation to cross-product verification.
Phase 1: Structural Analysis
I reverse-engineered the 13 existing tabs into 6 first-principles, task-based categories. Each category was defined by inclusion and exclusion criteria documented in governance materials designed to prevent future "junk drawer" accumulation. Representative features were mapped to the proposed structure to demonstrate how the taxonomy would accommodate both current functionality and future additions.
The framework prioritized task-oriented mental models over product-centric legacy organization. Categories were designed to answer "what am I trying to accomplish?" rather than "which product feature is this?"
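The consolidation can be pictured as a simple many-to-one mapping from legacy tabs to task-based domains. A minimal sketch follows; the actual tab and domain names are not listed in this case study, so every label below is hypothetical.

```python
# Sketch of the consolidation mapping. All tab and domain names below are
# hypothetical placeholders, since the real labels are internal to Fiserv.
LEGACY_TO_DOMAIN = {
    # legacy product-centric tab -> proposed task-based domain
    "Deposits": "Transactions",
    "Withdrawals": "Transactions",
    "Transfers": "Transactions",
    "CIF Lookup": "Customer Management",
    "Account Opening": "Customer Management",
    "Holds": "Account Servicing",
    "Stop Payments": "Account Servicing",
    "Cash Drawer": "Teller Operations",
    "Vault": "Teller Operations",
    "End of Day": "Teller Operations",
    "Reports": "Reporting & Audit",
    "Overrides": "Approvals & Overrides",
    "Supervisor": "Approvals & Overrides",
}

# 13 legacy tabs collapse into 6 task-based domains
assert len(LEGACY_TO_DOMAIN) == 13
assert len(set(LEGACY_TO_DOMAIN.values())) == 6
```

The point of the mapping is that the right-hand side answers "what am I trying to accomplish?" while the left-hand side only answers "which product feature is this?"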
[Interactive artifact: Integrated Teller legacy navigation mapped to the proposed IA, tracing consolidation paths from 13 legacy categories into 6 proposed domains. Secondary research established the consolidation hypothesis; tree testing validated it at 67.1%.]
Phase 2: External Validation
To test whether the proposed structure made intuitive sense to banking professionals, I designed and fielded a tree test using UserTesting. Participants were banking VPs and system administrators from multiple countries: people who had never seen UBA or Integrated Teller and had no exposure to Fiserv's internal product structure.
The tree test included 18 tasks representative of common banking workflows. Participants navigated the proposed taxonomy to locate where they would expect to find specific functionality. Success was measured by first-click accuracy, task completion rate, and ultimately reaching the correct selection, with or without backtracking, since tree tests strip away the visual and contextual cues of a real interface.
Results were strong:
- Overall pass rate: 67.1% (considered strong for first-iteration tree testing with unfamiliar international users on an abstract interface)
- Top performer: 94.4% success rate (17/18 tasks correct)
- Second-tier performance: 77.7% success rate (14/18 tasks correct)
- Nearly half of all tasks scored between 80% and 100% correct first-click accuracy
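The per-participant metrics above can be computed directly from raw task results. A minimal sketch, with a hypothetical session mirroring the top performer's 17-of-18 score:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    correct_first_click: bool   # first click landed on the right branch
    reached_correct_node: bool  # ultimately chose the correct category

def score_participant(results):
    """Return (pass_rate, first_click_accuracy) as fractions of tasks.

    A task counts as passed if the participant ultimately reached the
    correct node, with or without backtracking.
    """
    n = len(results)
    pass_rate = sum(r.reached_correct_node for r in results) / n
    first_click = sum(r.correct_first_click for r in results) / n
    return pass_rate, first_click

# Hypothetical session: 17 of 18 tasks correct, like the top performer
session = [TaskResult(True, True)] * 17 + [TaskResult(False, False)]
pass_rate, first_click = score_participant(session)
print(f"{pass_rate:.1%}")  # 94.4%
```

The overall 67.1% figure is the same calculation aggregated across all participants and tasks.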
The test also surfaced specific task wording improvements and areas where category labels could be refined to better match user expectations.
Phase 3: Internal Validation
Because security restrictions blocked UserTesting internally and among known clients, I briefed a designer on the team on the clickable Figma prototype I needed; they built it within a few hours. I used it to validate the framework with internal subject matter experts (SMEs) from other Fiserv core banking and teller products, and I plan to test a refined version, incorporating any changes from the SME sessions, with screened client participants.
SMEs confirmed that the framework was not just applicable to UBA but could work across their own product areas. This cross-product validation positioned the IA framework as a potential unifying structure for the broader "One Fiserv" initiative, an effort to make all Fiserv products feel like they were designed by the same company. The framework also demonstrated utility across both administrative and operational contexts.
Key Findings
The research surfaced both structural validation and strategic opportunity.
Navigation Structure Validated Externally
A 67% pass rate on first-iteration tree testing with well-screened, non-client, international participants provided strong evidence that the proposed structure aligned with banking professional mental models. The fact that top performers achieved 94% accuracy demonstrated that the taxonomy could support expert-level efficiency once learned.
Wording and labeling refinements identified through the study created a clear path for iteration without requiring structural changes to the underlying framework.
Cross-Product Applicability Confirmed
Internal SMEs from multiple Fiserv products validated that the framework could extend beyond UBA. Each SME who reviewed the structure confirmed it would work for their own core + teller UI products, creating an opportunity to establish a shared IA foundation across this category of products.
This finding positioned the work not as a one-off UBA refinement or validation, but as infrastructure that could reduce fragmentation, improve consistency, and lower training costs for customers using multiple Fiserv products.
Governance Documentation Prevents Regression
The inclusion/exclusion criteria documented for each category created a decision-making framework for future feature additions. This governance layer was designed to prevent the re-emergence of "junk drawer" navigation by providing clear guidance on where new functionality should be placed and why.
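A governance layer like this can be expressed as an executable placement check. The sketch below is hypothetical in its domain names and criteria tags, since the actual criteria live in internal documentation; it only illustrates the decision rule that a new feature must match exactly one domain or else go to review.

```python
# Hypothetical sketch: each proposed domain carries explicit inclusion and
# exclusion criteria; a new feature must match exactly one domain, otherwise
# it is escalated for governance review instead of landing in a "junk drawer".
CRITERIA = {
    "Transactions": {
        "include": {"deposit", "withdrawal", "transfer"},
        "exclude": {"reporting"},
    },
    "Customer Management": {
        "include": {"profile", "kyc", "account-opening"},
        "exclude": {"transfer"},
    },
}

def place_feature(tags: set[str]) -> str:
    """Return the single domain whose criteria match, or raise for review."""
    matches = [
        domain
        for domain, c in CRITERIA.items()
        if tags & c["include"] and not tags & c["exclude"]
    ]
    if len(matches) != 1:
        raise ValueError(f"Needs governance review; candidates: {matches}")
    return matches[0]

print(place_feature({"deposit"}))  # Transactions
```

The exclusion criteria matter as much as the inclusion criteria: they are what stop a feature from plausibly fitting two domains and quietly re-fragmenting the navigation.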
Domain Knowledge Required Strategic Investment
Building this framework required understanding:
- Compliance and prudential regulations governing banking and financial transactions
- The competitive landscape among financial institutions (traditional banks, neo-banks, fintechs, credit unions, etc.)
- How institution type, size, and risk profile shape workflow priorities
- Geographic variation in regulatory requirements and operational models
This domain knowledge wasn't incidental. It was foundational to designing a structure that could accommodate the range of banking contexts Fiserv products needed to support.
"We're blocked from meaningful usability tests because current nav is incoherent. Participants will get lost in structure, not features. If we adopt this framework now, we can validate workflows immediately."
Impact & Reflection
The IA framework achieved strong external validation and cross-product SME interest. However, as a contractor without formal authority, I could not enforce adoption or claim implementation.
What I delivered:
- A validated framework (67% external success, 94% top performer)
- Governance documentation with inclusion/exclusion criteria
- Cross-product SME confirmation of applicability
- A strategic positioning opportunity (One Fiserv foundation)
What I cannot claim:
- That the framework was implemented in UBA
- That other products adopted it as a standard
- That it unblocked usability testing as intended
- That it shaped the One Fiserv initiative direction
As a contractor, I delivered research and strategic frameworks to leadership for consideration. I did not have visibility into organizational adoption decisions or the authority to drive implementation.
What this project reinforced:
The value of building domain expertise before proposing solutions. Understanding banking compliance, competitive landscapes, and institutional variation was not optional — it was the foundation that made the framework credible.
The importance of external validation when internal stakeholders may be skeptical. A 67% pass rate with international banking professionals carried more weight than internal opinion alone.
The reality of contractor constraints. Strategic scope does not guarantee strategic authority. I operated at Staff-level research scope while navigating the limitations of contractor status: no formal decision-making power, no long-term ownership, no visibility into what happened after delivery.
That limitation shaped how I position this work. I can claim delivery and validation. I cannot claim organizational impact without evidence of adoption. That honesty matters more than inflating contribution.
Additional Contributions
This section is reserved for supplementary contributions related to the Universal Banker Application project that extend beyond the core IA framework work documented above.
Artifacts
Below are representative samples of the research artifacts produced during this project. As contractor work, artifact sharing may be subject to NDA restrictions. Summaries and anonymized examples can be provided in portfolio conversations.