This is the last in a three-part series of posts that will explore how we do product development at DevMynd – something we call “Human-Centered Agile.” You may want to read part one and part two before reading this particular post.
A Process for HCD+Agile
Any article that expounds on the conjoining of the philosophies of agile and human-centered design would feel somewhat perfunctory if it didn’t also attempt to propose a process. There are innumerable ways to implement process around agile and HCD – what follows is by no means comprehensive, or even ideal in every scenario. One of the values of agile wins here: “responding to change over following a plan.” The reader will need to determine the best practices for their own needs.
The process above can be broken down into roughly three phases: research, experience design, and construction. While the graphic above suggests a linear flow, that is merely a limitation of the visualization; in practice the process loops, repeats, and doubles back as needed.
During the research phase, the primary objective is to take raw research and turn it into product concepts. At DevMynd, we do this through the following steps:
- Research through interview, observation, ethnographic & psychographic study, and survey
- Download (often parallel) research streams into a single set of themes and key learnings
- Synthesize these themes and key learnings into insights, statements that either surprise us or affirm our assumptions
- Identify the opportunities that these insights produce through “how might we” statements
- Benchmark how similar and orthogonal problems are being solved by other products in the market or homegrown solutions
Sidebar: When we describe our process we often get the question: “where do personas fit in?” The answer is that we don’t really find them to be effective. They are a handy reference but usually fail to capture the full picture: they either highlight only the attributes that the designer or team cares about, or they capture superfluous attributes that are just noise. We often see personas used as a crutch to justify not doing enough primary research. Instead, we like to use “archetype users” – real research subjects whom we have interviewed or observed and who represent a cohort of users. This results in a much richer model against which to check our assumptions.
Now that we have some product concepts we need to further define the product. It’s important at this stage to do just enough planning. Too much planning kicks us into waterfall; too little, and we run the risk of missing large objectives that will inevitably change the project’s direction and cause waste. Our general progression is the following:
- Brainstorm product concepts that fulfill these opportunities and position well against alternatives, frequently involving low-fidelity sketching
- Test our product concepts with users to validate or refine our approach; ideally this involves the original research subjects
- Frame the product with a project charter and a set of high-level capabilities and features; the core user journeys are often drawn out here
- Plan for construction: estimation, staffing, constraints, etc.
Depending on the project, this is where we may also develop the business model or business case – particularly true if this product is opening a new market. This may involve defining the overall product strategy, value propositions, pricing model, and forecasting return-on-investment.
Building something is where agile begins to come to the fore. At DevMynd we typically use a combination of Scrum and Kanban methodologies but adhere dogmatically to neither. For example, we do fixed-duration sprints, sprint planning, and regular retrospectives, but we also pull from a continuous backlog and stress limiting work-in-progress (WIP).
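To make the WIP idea concrete, here is a minimal sketch (not DevMynd tooling, just an illustration) of how a WIP limit acts as an invariant on a Kanban column: new work cannot be pulled in until something already in the column is finished.

```python
class Column:
    """One Kanban column with an optional work-in-progress limit."""

    def __init__(self, name, wip_limit=None):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def can_accept(self):
        # A column with no limit always accepts; otherwise enforce the cap.
        return self.wip_limit is None or len(self.cards) < self.wip_limit

    def pull(self, card):
        if not self.can_accept():
            raise RuntimeError(
                f"WIP limit ({self.wip_limit}) reached for '{self.name}'"
            )
        self.cards.append(card)

    def finish(self, card):
        self.cards.remove(card)


in_progress = Column("In Progress", wip_limit=2)
in_progress.pull("story-101")
in_progress.pull("story-102")
print(in_progress.can_accept())  # False: finish something before starting more
```

The point of the invariant is cultural as much as mechanical: when `pull` fails, the team swarms on finishing existing work instead of starting new work.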
The visualization above zooms in on the agile process and highlights, in particular, how design fits in. The key takeaway is that design of the user interface should happen in close proximity to the writing of code: design (UX and UI) occurs roughly in tandem with implementation, or perhaps slightly ahead.
Sidebar: A warning for the designers on the team. Because your progression through the user journeys is often faster than the developers’, you will feel a pull to run ahead of implementation. Doing so risks two major failure modes: 1) creating designs that are difficult to implement or overly complex, and 2) designing things that will ultimately not be introduced into the product. This second failure can be demoralizing for the designer and can simultaneously cloud the judgement of the product owner.
Connecting the Dots
The construction phase is where we get our hands dirty in the details. The danger here is that we can lose sight of our original research and insights. If HCD and agile are truly going to work together we must connect our strategic insights to our tactical decisions. There are numerous ways to accomplish this but the way we do it is to keep the research insights up on the wall in the physical project space.
As we hold our product planning and user story writing meetings we can continually ask ourselves: “which insight and opportunity does this feature serve?” If we can’t answer that question then it implies we either don’t need it or we are working from assumption and incomplete information.
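One lightweight way to keep that question answerable is to maintain an explicit mapping from backlog items to the insights they serve and flag anything unmapped. The sketch below is illustrative only; the insight IDs and feature names are hypothetical.

```python
# Hypothetical insight IDs distilled from the research phase.
insights = {"I1", "I2", "I3"}

# Each proposed feature is tagged with the insight(s) it serves.
backlog = {
    "one-tap reorder": {"I1"},
    "export to CSV": set(),  # serves no insight: question it or cut it
    "guided onboarding": {"I2", "I3"},
}

# Features that cannot answer "which insight does this serve?"
unjustified = [name for name, served in backlog.items()
               if not served & insights]
print(unjustified)  # ['export to CSV']
```

Anything in the `unjustified` list is exactly the case the question above is meant to surface: we either don’t need it, or we are working from assumption and incomplete information.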
Despite all of our efforts to align our product decisions with user value, there will be things that are missed. Performing user testing at regular intervals is the best way to ensure we stay on track and deliver the value our users expect. There are a few best-practices that will improve the outcomes of your user testing efforts:
- Use a Mixed Panel – subjects should be a mix of individuals that were part of the original research and fresh subjects who did not have a hand in co-creation – the reactions from both will be valuable.
- Plan Ahead – create scripted test scenarios ahead of time that are as “real world” as possible so that each test run can be compared apples-to-apples.
- Observe Don’t Direct – it’s important to set up test scenarios for your subjects and let them discover the good and the bad of your design on their own, only intervene when they become seriously stuck.
- Record if Possible – if it’s ok with users it can be helpful to screen capture (or video record in the case of non-digital products) each test session; often things that are missed in the live session are later caught while reviewing recorded material.
- Create an Analysis Rubric – results of a usability test can often be qualitative (which is good) but it’s also helpful to set up a quantitative rubric that can be used to measure your test scenarios as a benchmark for future product iterations.
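A quantitative rubric can be as simple as scoring each scenario on a few fixed dimensions per subject and averaging. The dimensions, scale, and numbers below are illustrative assumptions, not a DevMynd standard; the point is that the resulting score becomes a benchmark future iterations can be measured against.

```python
from statistics import mean

# One entry per test subject for a single scenario, each dimension on a 0-5
# scale (hypothetical dimensions: task completion, time-on-task, error
# recovery, and reported satisfaction).
sessions = [
    {"completed": 5, "time": 4, "errors": 3, "satisfaction": 4},
    {"completed": 5, "time": 3, "errors": 4, "satisfaction": 5},
]


def scenario_score(sessions):
    """Average each rubric dimension across subjects, then overall."""
    dims = sessions[0].keys()
    per_dim = {d: mean(s[d] for s in sessions) for d in dims}
    return per_dim, mean(per_dim.values())


per_dim, overall = scenario_score(sessions)
print(overall)  # 4.125 -- the benchmark for the next product iteration
```

Re-running the same scripted scenarios after each milestone and comparing `overall` (and the per-dimension breakdown) gives the apples-to-apples comparison the “Plan Ahead” practice calls for.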
Another important facet of this process is that it requires any sufficiently complex project to be broken down into smaller milestones. This is particularly true of projects that will likely require many months or even years to tackle in full. There are numerous reasons for this:
- It creates opportunities to put parts of the solution into the market for testing, feedback, and course correction at regular intervals.
- Discovery and definition do not need to be done up front for the entire system which avoids falling into waterfall or “analysis paralysis”. Each milestone can loop back to the beginning of the process and take a fresh look at research.
- The team builds a sense of momentum and accomplishment that staves off the “death march” of extremely long projects.
Exactly how milestones are broken down varies wildly based on the project. And, the definition of milestones can change over the course of a large initiative.
I hope these articles, lengthy though they were, will inspire you to take a fresh look at your processes – not just from the perspective of what you do to build software, but why. Time and time again I am encouraged that we have pushed the boundaries of product development through the combination of human-centered design and agile philosophies. Take the leap; I think you will be too.