
General Motors
A side-by-side comparison of what happens when UX research drives decisions versus when it doesn't — told from the perspective of the only person who had a seat on both teams.
Client
General Motors
Service Offered
Digital Strategy
Year
2023
Timeline
12 months



Project overview:
I (Kyle) was the sole UX designer on two enterprise projects running simultaneously at the same company. Team A — building a centralized data platform — invited me into the process from day one. I led user interviews, built empathy maps and personas, and used the MoSCoW framework to drive prioritization. Team B — building a separate internal tool — never included UX in the project scope. Research wasn't rejected — it was never on the table. I watched both teams operate in real time, with the same organizational constraints, the same stakeholder dynamics, and the same designer available. The difference in how they functioned was night and day.
Project process:
Team A brought me in from day one. I conducted 6–10 user interviews that surfaced a finding nobody had measured before — users were spending half their working time verifying data instead of doing their actual job. That single stat changed the project's priority overnight. I synthesized the interviews into empathy maps, personas, and journey maps that became the team's shared reference point. When disagreements came up, people pointed to the research instead of their opinions. When we hit a design crossroads with two competing architecture approaches and strong advocates on both sides, a SWOT analysis resolved it in one meeting. Sprints moved smoothly. Ideation sessions were focused. Requirements were simple to write because everyone already understood the problem they were solving.

Team B never included UX in scope. Without interviews or shared artifacts, every meeting became a negotiation between assumptions. There was no framework for saying "not now" to a feature request, so scope expanded unchecked. Features were designed, built, and presented to stakeholders — only to be rejected because the assumptions behind them were never validated. This happened repeatedly. The application grew more chaotic with each cycle as layers of unvalidated decisions stacked on top of each other. The team wasn't less talented or less experienced. They just never had a shared understanding of who they were building for or why.
Final results:
Team A shipped with confidence. Decisions were grounded in evidence instead of politics. The MoSCoW framework gave the team something they'd never had before — a shared, defensible "no." A post-research impact survey scored the UX process 4.13 out of 5 for shifting team understanding and alignment. Sprints ran on time with minimal rework because the foundation was solid before a single pixel was designed.

Team B continued to cycle through rework. Features were built and discarded. Meetings ran in circles without resolution. The application became increasingly difficult to use as competing visions were stitched together without a unifying framework. There was no measurement system in place to even identify where things were going wrong — the team could feel the dysfunction but couldn't diagnose it.

Same company, same designer, same timeframe. The only variable was whether UX research had a seat at the table.
Contact Us
contact@nopixa.design
Bridging the Gap Between Brands and Audiences