Case study

Internal tools need love too

My approach to diagnosing, prioritising, and solving complex UX and architecture problems in internal tools.

Chris Brett

When the fix became the problem

There's an irony in how often operational software creates more work than it saves. These tools usually start with the best intentions: a solution gets built to solve an initial problem, but as the business grows and requirements change, things get bolted on rather than rethought. Before long, the people who depend on the tool are the ones it slows down the most.

This article is a case study of one such situation to remind myself of the lessons learnt and the steps taken to address this kind of problem.


Listening before looking

It started with conversations. I'd been speaking with operations analysts who used the internal task management system daily, handling KYC (know your customer) checks, fraud investigations, disbursement failures, compliance reviews, and so on. The frustration was consistent enough that I arranged a meeting with the head of operations to get a clearer picture.

What emerged was a system that had grown without a clear information hierarchy. Tasks that should take minutes were taking an average of 50 minutes to resolve. Some, where ongoing customer interaction was required, stretched to five hours. Cycle time was a core concern, and it was easy to see why.

The data that analysts needed was spread between the internal tool and Intercom conversations. Finding it meant constant context switching. Actions on a task were scattered across the interface, with no clear sense of what was needed next or what state a task was actually in. And underpinning all of it was a single UI design trying to serve over 60 different task types.

That last point was the root of most of the other problems.

Structure, not features

Before touching any code, I spent time understanding what "fixed" would actually look like. A few things became clear quickly.

The existing system wasn't broken because it lacked features. It was broken because it lacked structure. The information hierarchy was flat: everything competed for attention equally, so analysts had to do the cognitive work of prioritising it themselves, on every task, every time.

There were also pockets of good design already in the system. The financial and fincrime task templates were noticeably more considered than the rest. They had clearer action sets, better data grouping, and a more logical flow. The goal wasn't to start from scratch; it was to understand what made those templates work and apply that thinking consistently across everything else.

I also had no PM on this project, which meant thinking carefully about prioritisation myself. With 60+ task types and a live operations team depending on the tool, I couldn't redesign everything at once. I ranked task types by monthly volume and worked from the top down, highest-impact changes first.

Working through it

Screenshots before sketches

I worked through screenshots of all 60+ task types, cataloguing the patterns: what data appeared where, what actions existed, how state was communicated, where the inconsistencies were. This wasn't glamorous work, but it gave me a ground truth to design from rather than assumptions.

The picture that emerged was clear: analysts needed to know three things at a glance on any task:

  1. What is the task?
  2. What state is it in?
  3. What do I do next?

The existing design made none of those easy.

One shell, sixty faces

The solution was a TaskShell component, a consistent structural wrapper that every task type could slot into. It gave every task the same skeleton: a header with task identity and status, a sidebar with associated user data and references, a main content area for task-specific information, a consistent action zone, and an activity timeline.

The task-specific content lived inside the shell as a child component. This meant each of the 60+ task types could have its own logic and data display without diverging on layout, hierarchy, or interaction patterns.
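
As a rough sketch of that composition, assuming a React and TypeScript stack (the prop names and markup below are illustrative, not the production code), the shell looked something like this:

  import React from "react";

  // Everything the shell needs to render the shared skeleton. Each task
  // type supplies its own content as children.
  interface TaskShellProps {
    title: string;              // task identity, shown in the header
    status: string;             // current task state, shown beside the title
    sidebar: React.ReactNode;   // associated user data and references
    actions: React.ReactNode;   // the consistent action zone
    timeline: React.ReactNode;  // the activity timeline
    children: React.ReactNode;  // task-specific content for this task type
  }

  export function TaskShell(props: TaskShellProps) {
    return (
      <div className="task-shell">
        <header>
          <h1>{props.title}</h1>
          <span className="status">{props.status}</span>
        </header>
        <aside>{props.sidebar}</aside>
        <main>{props.children}</main>
        <footer className="actions">{props.actions}</footer>
        <section className="timeline">{props.timeline}</section>
      </div>
    );
  }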

The shell also supported a layered action system. Default actions were available across all task types, but each template could define its own additional actions relevant to that specific workflow. Analysts always had a consistent baseline, with the right contextual tools surfaced when needed.

The main content area was equally flexible. Rather than a fixed layout, each template could define its own tabs, grouping related information however made sense for that task type. A KYC task surfaced identity checks and document reviews in separate tabs. A disbursement failure grouped transaction data and provider status together. The structure was consistent; the content wasn't constrained by it.
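
To make that concrete, here's a hedged sketch of how a template might declare its extra actions and its tabs on top of the shell's defaults. The interfaces, the merge helper, and the KYC example are illustrative assumptions, not the actual schema:

  import React from "react";

  interface TaskAction {
    label: string;
    onTrigger: (taskId: string) => void;
  }

  interface TaskTab {
    label: string;
    render: (taskId: string) => React.ReactNode;
  }

  // A template layers its own actions and tabs onto the shared shell.
  interface TaskTemplate {
    actions?: TaskAction[];  // merged with the defaults below
    tabs: TaskTab[];         // each template groups its content its own way
  }

  // Default actions available on every task type.
  const defaultActions: TaskAction[] = [
    { label: "Assign to me", onTrigger: (id) => console.log("assign", id) },
    { label: "Add note", onTrigger: (id) => console.log("note", id) },
  ];

  // Hypothetical KYC template: identity checks and document reviews as
  // separate tabs, plus one workflow-specific action.
  const kycTemplate: TaskTemplate = {
    actions: [
      { label: "Request documents", onTrigger: (id) => console.log("request", id) },
    ],
    tabs: [
      { label: "Identity checks", render: (id) => <p>Checks for task {id}</p> },
      { label: "Document review", render: (id) => <p>Documents for task {id}</p> },
    ],
  };

  // The shell renders the defaults plus whatever the template adds.
  function actionsFor(template: TaskTemplate): TaskAction[] {
    return [...defaultActions, ...(template.actions ?? [])];
  }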

This was the shift that mattered most: moving from one design trying to do everything to a structure that gave every task type a consistent foundation to build on, while remaining completely customisable.

Ship by volume, not by feature

So how was this rolled out? Rather than a big-bang release, I worked through task types in phases: financial operations first, given their volume, then compliance, and so on. This let the operations team see improvements incrementally and meant I could incorporate feedback as I went. Building small but flexible, and iterating quickly, was the key here.

Easier to find, faster to resolve

The analysts responded well. The feedback was direct: it was easier to get at the data they needed. For a team spending hours a day inside this tool, that matters a lot.

From a technical standpoint, it was noted how straightforward it would be to add new task type templates going forward. That was intentional: the architecture was built to scale without friction, so future work wouldn't require unpicking what was already there. Documenting the decisions made along the way was crucial to that.
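
As an illustration of that extension point (the registry name and keys are assumptions, not the real codebase), task types map to templates in a registry, so supporting one more type is a single new entry rather than new UI:

  import React from "react";

  interface TaskTemplate {
    tabs: { label: string; render: (taskId: string) => React.ReactNode }[];
  }

  // Task types map to templates; the shell handles everything else.
  const templateRegistry: Record<string, TaskTemplate> = {
    kyc_check: {
      tabs: [{ label: "Identity checks", render: (id) => <p>Checks for {id}</p> }],
    },
    disbursement_failure: {
      tabs: [{ label: "Transaction data", render: (id) => <p>Details for {id}</p> }],
    },
    // A new task type going forward is one more entry here.
  };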


Note(s) to self

Working without a PM forced a kind of discipline I didn't expect to appreciate. Every prioritisation decision, every trade-off, every scoping call was mine to make. It pushed me to think beyond the component level, to understand the business context well enough to make good calls about where to spend time.

The deeper lesson was about where problems actually live. The analysts didn't need more features. They needed better structure. Recognising that early, and resisting the temptation to jump straight into building, is what made the difference.

Good tooling doesn't announce itself. It just gets out of the way and lets users get on with their work.