
Artifex

Product Design Case Study

Artifex is an AI-powered tool created to streamline document review and commitment letter drafting for a legal firm.

[Product mockup]

Project Overview

Client

Legal Firm


Challenge

Time-intensive document reviews and repetitive drafting workflows were creating bottlenecks—reducing efficiency and limiting the firm’s ability to expand.

My Role

Lead Product Designer


Solution

Artifex: An AI-powered tool leveraging Large Language Models (LLMs) to automate document analysis, streamline commitment letter drafting, and enable natural language querying—reducing manual effort and accelerating legal workflows.

Impact

Lawyers loved it. Metrics proved it.

  • 74% reduction in average document review time

  • 6X increase in commitment letters generated per week

  • 3X increase in case volume capacity without additional headcount

  • 92% decrease in manual drafting effort

  • 80% of users reported increased confidence in document accuracy

So… How did we get there?


Let’s rewind to the messy, manual workflows we set out to fix.

THE PROCESS - STEP 1

User Research

In-Person Workshop

We facilitated a full-day workshop onsite with legal operations, technology stakeholders, and senior attorneys to:


  • Align on business goals

  • Map out the current high-level process

  • Identify friction points and error risks

  • Understand use cases

  • Prioritize automation opportunities

[Workshop meeting notes and workflow map]
Contextual Interviews

After the onsite workshop, we set out to identify the most impactful AI use cases for the MVP by taking a deeper look into the firm’s document analysis workflows.


To do this, I led a series of contextual interviews with lawyers and legal assistants to observe their day-to-day tasks and uncover key friction points. We focused on:

  • How commitment letters were created and customized

  • The methods used to compare multiple documents

  • How discrepancies and inconsistencies were identified and flagged

  • Where delays and inefficiencies emerged when handling high volumes of work

Who Are We Designing For?


Grounded in our discovery insights, I created a persona to keep the real user—and their challenges—at the center of every design decision.

[Persona]

THE PROCESS - STEP 2

Define

Clarifying the Problem

 

After gathering rich insights during user research, we faced a critical step: translating a wide range of pain points into a clear, actionable product vision. While many promising AI use cases emerged, we knew we had to focus our efforts to deliver the most impact.

 

Rather than trying to build every possible function, we worked closely with stakeholders to identify the core tasks that lawyers repeatedly struggle with when handling large volumes of documents. Using an impact vs. effort matrix, we prioritized the most high-value, feasible opportunities for an MVP.

 

We defined two core problem areas to solve:

 

  1. Helping lawyers process and analyze legal documents more efficiently

  2. Reducing the time and manual effort required to draft commitment letters


Narrowing the Focus

 

Based on our discovery research, we created an impact-effort matrix to evaluate and prioritize potential AI use cases. This helped us identify which features would deliver the highest value with the least complexity.
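The ranking logic behind an impact-effort matrix can be sketched in a few lines. This is a purely illustrative Python example; the feature names and 1–5 scores below are invented for demonstration and are not the project's real data.

```python
# Hypothetical candidate features with invented impact/effort scores (1-5).
features = [
    {"name": "clause comparison",    "impact": 5, "effort": 2},
    {"name": "auto-draft letters",   "impact": 5, "effort": 3},
    {"name": "natural-language Q&A", "impact": 4, "effort": 3},
    {"name": "analytics dashboard",  "impact": 2, "effort": 4},
]

# Rank by impact-to-effort ratio: high value, low complexity first.
ranked = sorted(features, key=lambda f: f["impact"] / f["effort"], reverse=True)

for f in ranked:
    print(f"{f['name']}: {f['impact'] / f['effort']:.2f}")
```

In practice the scoring was a collaborative stakeholder exercise rather than a computation, but the sorting principle is the same: features in the high-impact, low-effort quadrant rise to the top of the MVP list.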

 

We ultimately centered the solution around three essential document review functions, supported by AI, along with an auto-drafting capability for commitment letters. These areas addressed the most common pain points and directly contributed to our core goal:

 

To improve the speed, accuracy, and confidence with which lawyers review and act on legal documents.

[Impact vs. effort matrix]

THE PROCESS - STEP 3

Ideation

We translated user needs into a clear information architecture and wireframes that mapped out key workflows. To validate structure and layout early, we conducted wireframe testing through task-based feedback sessions with legal stakeholders.

Information Architecture

[Information architecture diagram]

Initial Wireframes

[Initial wireframes]
Wireframe Feedback Session

To validate our information architecture and early design decisions, I conducted a quick wireframe feedback session with legal stakeholders.

 

Participants were given key tasks—such as locating a specific document, uploading a file, or initiating an AI function—and were asked to talk through where they would click and why. This “think aloud” approach helped us assess whether the structure and feature placement aligned with user expectations. They were also encouraged to share any reactions or thoughts as they went through the wireframes.

 

I also presented multiple design variations for key components, inviting participants to compare options and share preferences. The session surfaced valuable insights around terminology, hierarchy, and intuitive access to core actions—enabling us to make targeted refinements before moving into high-fidelity prototyping.

Refining Wireframes

Following our wireframe feedback session, I iterated on the designs to address user input and improve clarity, usability, and alignment with real-world workflows. These refinements focused on simplifying interactions, enhancing layout hierarchy, and ensuring each screen supported the intended task flow.

Refining the Data Inconsistencies Results Page
[Wireframe refinements]

Based on stakeholder feedback, I revised the data inconsistencies results page by removing the left-side panel, which included unnecessary functions for this context. I also simplified the results summary to present clear, bite-sized insights in a cleaner, more focused layout.

THE PROCESS - STEP 4

Design & High-Fidelity Prototyping

With a validated structure and clear user needs, we moved into the high-fidelity design phase. This stage focused on defining reusable components and aligning the experience with the legal firm's brand identity.
 

I built a component system to ensure consistency across matter details, document views, and AI interactions—prioritizing clarity, speed, and accessibility. Design decisions were guided by both legal industry norms and the need for intuitive, lightweight interactions.


The final high-fidelity prototype brought the experience to life, showing how users would navigate, upload, compare documents, and interact with AI—all within a seamless, modern interface tailored for legal workflows.

As part of the high-fidelity design process, I explored multiple layout and visual treatments for several pages, including the Evaluate Clauses page. I created two distinct mockups with different approaches to layout, hierarchy, and visual tone:

Option A: Featured a darker aesthetic with a bold, high-contrast interface and a larger results summary panel for quick top-level insights.

Option B: Maintained stronger consistency with the rest of the product’s visual system and prioritized real estate for clause-level details—supporting deeper reading and analysis.

[Evaluate Clauses mockups: Option A and Option B]

To evaluate which direction better aligned with user expectations and workflows, I conducted a quick comparison test with stakeholders and legal users. I asked participants to review both mockups and share which felt more intuitive, effective, and aligned with how they work—and why.

What I Learned

Users appreciated the boldness and clarity of Option A, but their attention was drawn to the large numbers in the summary, which didn’t provide useful information. Option B was preferred overall for its consistency with other pages and its emphasis on detailed content. Users felt more grounded and found it easier to scan the results summary.

 

Feedback highlighted the importance of visual continuity across tools, especially for users navigating quickly between different parts of the system. Throughout the process, I gathered feedback at each stage to ensure the evolving designs were meeting user needs and expectations—not just visually, but functionally. These insights directly shaped the final layout and component behavior for the experience.

With a tight turnaround and development on the horizon, we conducted a focused validation session with key stakeholders to gather quick, actionable feedback on the high-fidelity prototype. We walked through the end-to-end experience—document upload, AI interaction, and letter generation—capturing real-time reactions and suggestions. Feedback centered on terminology clarity, button placement, and streamlining repetitive interactions. 

 

Given the urgency, we prioritized rapid iteration: making targeted design refinements within 48 hours before handing off the prototype to development. This approach ensured the MVP aligned with stakeholder expectations while keeping the delivery timeline on track.

THE PROCESS - STEP 5

Design to Developer Handoff

To ensure a smooth and efficient transition from design to development, I worked closely with the engineering team throughout the handoff process—providing not just visuals, but context, structure, and ongoing support.

Design System Refinement

I expanded and refined the design system to better support implementation—establishing consistent component behaviors and interaction states. This became a reliable source of truth for the development team and ensured a cohesive UI across the entire experience.


Knowing this was just the beginning of a scalable platform, I intentionally invested in a strong design foundation early. That upfront effort not only accelerated development but also reduced rework and decision fatigue as new features were added.

User Stories & Collaboration

I translated key workflows and pain points into clear, actionable user stories that captured the “why” behind each feature. These stories helped developers understand the real-world legal context and user intent—not just what to build, but why it matters.

 

We collaboratively reviewed these stories, and developers broke them down into tasks and sub-tasks, allowing for parallel work and clarity on scope. Together, we prioritized the backlog based on technical complexity, business impact, and MVP needs.

Ongoing Support & Iterative Testing

Throughout development, I remained embedded in the process—clarifying edge cases, reviewing in-progress builds, and supporting iterative QA. We conducted rolling reviews of completed stories to ensure the implementation aligned with the intended user experience.

THE PROCESS - STEP 6

User Acceptance Testing (UAT)

To ensure the final product met both functional and experiential expectations, I led and supported a structured User Acceptance Testing (UAT) process with stakeholders. Our goal was to validate the accuracy of AI-generated outputs, ensure a bug-free experience, and confirm that the solution addressed the real-world use cases surfaced during discovery.

Live Demos & Iterative Feedback

Throughout the build, we conducted live demo sessions to walk stakeholders through functionality and allow early feedback. This built familiarity with the tool and helped testers feel confident and invested in the final stages.

Designing the UAT Process

Drawing on my deep understanding of the firm’s workflows, I developed a step-by-step UAT guide to make testing seamless and accessible. I collaborated with QA resources to ensure full coverage of high-priority scenarios and edge cases. To empower testers and reduce friction, I:
 

  • Created step-by-step walkthroughs using Scribe for common actions

  • Built a centralized Google Site hub that linked key testing resources and workflows

  • Integrated this with Jira, enabling testers to log feedback independently and have it instantly converted into dev tickets
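The feedback-to-ticket step above can be sketched roughly as follows. This is a minimal, hypothetical Python example that only builds the JSON body for Jira’s standard `POST /rest/api/2/issue` endpoint; the project key, label, and field choices are invented for illustration and are not the firm’s actual configuration.

```python
def feedback_to_jira_payload(summary: str, details: str, reporter: str,
                             project_key: str = "ARTX") -> dict:
    """Build the JSON body for Jira's POST /rest/api/2/issue endpoint.

    Note: "ARTX", the "Bug" issue type, and the "uat-feedback" label are
    hypothetical placeholders, not the real project's settings.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": summary,
            # Prefix the description so developers can trace the ticket
            # back to the UAT session and the tester who logged it.
            "description": f"[UAT feedback from {reporter}]\n\n{details}",
            "labels": ["uat-feedback"],
        }
    }

payload = feedback_to_jira_payload(
    "Clause summary shows wrong matter name",
    "Steps: upload a document, run clause analysis, open the summary panel.",
    "tester@example.com",
)
print(payload["fields"]["summary"])
```

In the real integration, testers logged feedback through forms on the Google Site hub, and the resulting tickets were created automatically; the sketch shows only the shape of the data that flows into Jira, not the transport.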


Testing Approach
 
  • Conducted structured testing across a wide range of matters and document types

  • Validated data accuracy and AI behavior under different inputs

  • Tested real-time document uploads, clause analysis, chat interaction, and auto-drafting workflows

  • Monitored user feedback for usability gaps, friction points, and enhancement ideas


Outcome

The UAT process not only helped us resolve bugs and validate core features before launch—it also surfaced enhancement ideas that were rolled into our next phase of work. My close involvement ensured a feedback loop between users, design, and development, making the product stronger and more aligned to real needs.


Final Thoughts & Lessons Learned

This project was a powerful example of how user-centered design and AI can come together to solve real, complex problems in a high-stakes legal environment. By narrowing scope with intention, prioritizing what mattered most to users, and validating early and often, we delivered a solution that had immediate and measurable impact.

Key Takeaways


  • Focus wins: Choosing a small number of high-impact, high-need use cases helped us build trust with users and deliver value fast.

  • Stakeholder involvement is everything: Bringing lawyers into the process early—from research through testing—built alignment and momentum.

  • Design systems are time multipliers: Investing in scalable components early on saved time for both design and development down the line.

  • Structure enables agility: Creating clear UAT workflows and feedback loops allowed us to move quickly without sacrificing quality. 

What's Next for Artifex?

 

The MVP laid a strong foundation, and feedback gathered during testing helped define the roadmap for future enhancements—including advanced clause scoring, analytics dashboards, and more intelligent drafting assistance. With user trust and adoption already in place, the next phase is focused on scaling and deepening the AI experience.

Still Here? You’re My Kind of Reader!

 

If you’ve made it this far, congratulations—I hope it was an insightful read.


Have questions or want to chat about the work? I’d love to hear from you!
