Five Myths of the UX Design Process
*Figure: ISO 9241-210, Human-centred design for interactive systems*
Since founding UXLabs I’ve been involved in all sorts of design projects: both large and small, from simple to complex, start-up to corporate. In that time I’ve noticed some practices that seem to work well, and an even greater number that don’t. In this post I summarise a few as slightly tongue-in-cheek ‘myths’ of the UX design process.
1. Requirements analysis is best left to the experts
There are many techniques for gathering and managing requirements, with varying degrees of formality. Often they are gathered by business analysts in stakeholder workshops and then documented as line items in an ever-expanding table, each with an associated priority. That's a good start, but I've often found myself also needing estimates of the level of effort (from both a design and a development perspective) and of readiness to deliver. The former is relatively transparent and easy to acquire. The latter covers factors such as the availability of data, resources, or skills/capabilities; alignment with corporate strategy and with other ongoing projects or workstreams; in short, anything that determines whether a particular requirement represents a practicable proposition at a given time, regardless of its priority or ease of implementation. These estimates are harder to obtain, but they play a key role: the table can now be sorted by priority, then readiness to deliver (as a secondary key), then level of effort (as a tertiary key) to produce a more reliable project 'roadmap' or baseline for subsequent sprint planning.
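As a rough sketch of the sorting step described above (the field names and example data here are invented for illustration, not taken from any real requirements tool), the three-key sort can be expressed directly:

```python
# Hypothetical requirements table: lower numbers mean higher priority,
# greater readiness to deliver, and lower effort, respectively.
requirements = [
    {"id": "R1", "priority": 1, "readiness": 3, "effort": 5},
    {"id": "R2", "priority": 1, "readiness": 1, "effort": 2},
    {"id": "R3", "priority": 2, "readiness": 1, "effort": 8},
]

# Sort by priority (primary key), readiness to deliver (secondary key),
# then level of effort (tertiary key) to produce a candidate roadmap.
roadmap = sorted(
    requirements,
    key=lambda r: (r["priority"], r["readiness"], r["effort"]),
)

for r in roadmap:
    print(r["id"])  # prints R2, R1, R3 in that order
```

The same idea works in a spreadsheet's multi-column sort; the point is that the ordering becomes reproducible rather than a matter of debate.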
2. Design is a purely creative activity
Given infinite time, it is possible in principle to exhaustively explore any given design space and identify an optimal solution for any design problem. In practice, of course, available time is finite, so design exploration needs to be focused and prioritised. Consequently there needs to be a shared understanding of the dimensions of the design space (so that we understand the bounds of the landscape we are exploring) and a rationale for choosing among candidate areas (so that we can agree on the areas that are likely to be most productive). In search-related projects, the dimensions typically correspond to:
- User: for what types of users are we designing, and what is their relative priority?
- Task: what types of search task are we supporting: known-item, exploratory, something else?
- Context: there are many aspects of context that are important, but one that is particularly pertinent for search-related projects is data: what information assets are we concerned with, and how do they map onto users’ mental models?
- Complexity: what level of complexity are we aiming to support in each of the scenarios: a simple, limited interaction, or something more demanding/realistic? Many design projects over-emphasise simple ‘findability’ tasks and ignore more complex types of information behaviour. However, many users’ information needs extend over multiple sessions, across multiple platforms and multiple locations, involving multiple media and multiple people. The real challenge in this context is supporting behaviours such as analysis, sense-making and discovery-oriented problem solving (see Designing the Search Experience for further details).
3. You can’t track design exploration with numbers
One of the shortcomings of traditional requirements analysis techniques is that they are reductionist, in that the requirements can end up being represented in a fragmented, atomic form. Taken out of context, they provide little or no sense of how they combine to form meaningful experiences that deliver value to an end user. One way to address this is by combining groups of requirements into a single, coherent narrative or scenario. These can take many forms, from a single sentence to a highly structured dialogue, but they share the property of aggregating requirements into meaningful, goal-oriented stories.
Once you’ve mapped from scenarios to requirements, you can use the mapping as an audit tool to assess the status of each requirement against the scope of the design work. This type of audit trail provides a degree of transparency and accountability to ensure that commitments made during the analysis stage are appropriately accounted for in the design exploration stage.
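The audit described above is essentially a coverage check over the scenario-to-requirement mapping. A minimal sketch (scenario names and requirement IDs are invented for illustration):

```python
# Map each scenario to the requirement IDs it aggregates into a
# goal-oriented story. These labels are hypothetical examples.
scenarios = {
    "S1: find a known report": ["R1", "R2"],
    "S2: explore a new topic": ["R2", "R4"],
}

# The full set of requirements committed to during analysis.
all_requirements = {"R1", "R2", "R3", "R4"}

# Requirements covered by at least one scenario in the design scope.
covered = {req for reqs in scenarios.values() for req in reqs}

# The audit: anything left over has no scenario, and hence no
# representation in the design exploration stage.
uncovered = all_requirements - covered
print(sorted(uncovered))  # ['R3']
```

Running this kind of check at each design review makes it obvious when an analysis-stage commitment has quietly dropped out of scope.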
4. Focus on a single optimal design solution
A core principle of user-centred design is that prototypes should be iteratively tested with end users and updated based on their feedback. But I’d go a step further and suggest that user testing should be based on a set of alternative candidate designs. This provides an opportunity to make direct quantitative comparisons between them, and qualitative feedback is often more useful when framed in terms of what’s better or worse among a set of tangible alternatives than as an abstract discussion of what’s missing from a single solution or what an improvement might look like. In addition, for search-related projects the selection of alternatives should be based on an understanding of the various stages in the information journey, e.g.
- Opening game: the stage in the user journey prior to query articulation; typically this will be some sort of home or dedicated search page
- Middle game: the main phase in the search experience, where queries are clarified, disambiguated, refined, etc.
- End game: the target or payload pages, which typically contain the content items themselves. Often these are out of scope for a simple search design project, but a sample end game page is usually included for completeness and authenticity
5. Facets and values are just ‘content’
It’s tempting to think that once the interaction design has been delivered, e.g. as a set of wireframes, the design work is essentially complete. But this isn’t the case for search projects, particularly those that employ some form of faceted search. In these cases there needs to be a separate deliverable documenting the form and content of the individual facets and how they should behave. This includes issues such as:
- The transition logic – i.e. how choices made by the user in the opening game affect the middle game
- The precedence rules – i.e. when and how a particular facet should be displayed
Wireframes aren’t a suitable vehicle for documenting these relationships and constraints, but these details play a key role in the quality of the search experience and should be defined and documented as a key part of the design activity.
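One way to see why wireframes fall short here is that precedence rules are conditional logic, not layout. A hypothetical sketch (the facet names and conditions below are invented for illustration, not from any real search product):

```python
# Each rule pairs a facet with a predicate over the user's current
# selections; the predicate decides whether the facet is displayed.
def visible_facets(selections, rules):
    """Return the facets whose precedence rule passes for this state."""
    return [facet for facet, rule in rules if rule(selections)]

rules = [
    # Category is always shown in the middle game.
    ("category", lambda sel: True),
    # Brand only appears once a category is chosen (transition logic:
    # a choice made earlier in the journey shapes the middle game).
    ("brand", lambda sel: "category" in sel),
    # Price is hidden for the 'support articles' category.
    ("price", lambda sel: sel.get("category") != "support articles"),
]

print(visible_facets({}, rules))                       # ['category', 'price']
print(visible_facets({"category": "laptops"}, rules))  # ['category', 'brand', 'price']
```

Whatever form the deliverable takes (tables, decision rules, or code), the point is that these relationships need to be stated explicitly somewhere other than the wireframes.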
Published at DZone with permission of Tony Russell-Rose, DZone MVB. See the original article here.