A Task-based Model of Search
A little while ago I posted an article called Findability is just So Last Year, in which I argued that the current focus (dare I say fixation) of the search community on findability was somewhat limiting, and that in my experience (of enterprise search, at least), there are a great many other types of information-seeking behaviour that aren’t adequately accommodated by the ‘search as findability’ model. I’m talking here about things like analysis, sensemaking, and other problem-solving oriented behaviours.
Now, I’m not the first person to have made this observation (and I doubt I’ll be the last), but it occurs to me that one of the reasons the debate exists in the first place is that the community lacks a shared vocabulary for defining these concepts, and when we each talk about “search tasks” we may actually be referring to quite different things. So to clarify how I see the landscape, I’ve put together the short piece below. More importantly, I’ve tried to connect the conceptual (aka academic) material to current design practice, so that we can see what difference it might make if we had a shared perspective on these things. As always, comments & feedback welcome.
A Task-based Model of Search
In A Taxonomy of Enterprise Search we reviewed various models of information seeking, from an early focus on queries and documents through to a more contemporary notion of search as an information journey driven by dynamic information needs. Continuing this thread of moving from the ‘micro’ to the ‘macro’ level leads us, inevitably, to context.
In this article, we apply a task-based lens, examining the various layers of context that influence the search process. But to understand the effects of these influences in a principled manner, we need first to establish a framework and vocabulary for the key concepts and their relationships. The graphic below presents such a framework, based on the work of Järvelin and Ingwersen (2004).
This model represents the multiple levels of task context as a set of layers and the criteria by which they are evaluated. These start from the micro level (the ‘information retrieval context’) and extend outwards to the macro level (the ‘socio-organizational and cultural context’).
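To make the layered structure concrete, here is a minimal sketch in Python. The layer names and evaluation criteria are paraphrased from the model; the code itself is purely illustrative, not part of the original framework:

```python
from enum import IntEnum

class SearchLayer(IntEnum):
    """Layers of task context, ordered from micro to macro."""
    INFORMATION_RETRIEVAL = 1
    INFORMATION_SEEKING = 2
    WORK_TASK = 3
    SOCIO_ORGANIZATIONAL = 4

# How success is typically judged at each layer (paraphrased labels)
EVALUATION_CRITERIA = {
    SearchLayer.INFORMATION_RETRIEVAL: "system metrics (precision, recall)",
    SearchLayer.INFORMATION_SEEKING: "quality of information vs. the need",
    SearchLayer.WORK_TASK: "performance of the overall task",
    SearchLayer.SOCIO_ORGANIZATIONAL: "collective and cultural expectations",
}

def enclosing_layers(layer: SearchLayer) -> list[SearchLayer]:
    """Return the outer layers that provide context for the given layer."""
    return [l for l in SearchLayer if l > layer]
```

The point of the ordering is that each layer is evaluated within, and motivated by, the layers that enclose it.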
The Information Retrieval Layer
At the most granular level (the innermost layer in the figure), we have information retrieval. This layer is typified by simple, focused tasks such as finding a specific document or resource related to a keyword query. An example might be a shopper searching an online bookstore for the latest Harry Potter book, or an engineer searching a parts database for a component with the serial number 123-456. These tasks are often referred to as known item searches. They may involve a number of iterations, but are usually confined to a single session. The success of tasks at this level is commonly evaluated using system-oriented metrics such as precision and recall.
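Precision and recall are simple to state formally: precision is the fraction of retrieved items that are relevant, and recall is the fraction of relevant items that are retrieved. A quick sketch, with the part numbers invented for illustration:

```python
def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """Compute precision and recall for a single query.

    precision = |retrieved ∩ relevant| / |retrieved|
    recall    = |retrieved ∩ relevant| / |relevant|
    """
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# The engineer's known-item search for serial number 123-456:
retrieved = {"123-456", "123-457", "123-458"}  # what the system returned
relevant = {"123-456"}                          # the single known item
p, r = precision_recall(retrieved, relevant)    # precision = 1/3, recall = 1.0
```

For a known-item search, recall of 1.0 simply means the target was found; the low precision reflects the extra noise the user had to scan past.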
The Information Seeking Layer
At the next level, we have information seeking. This layer is associated with broader, more complex tasks that attempt to satisfy a perceived information need or problem (Marchionini, 1995). An example might be a shopper trying to find shoes to match their interview suit, or an engineer trying to find components that are compatible with a particular product design.
At this level, users need to exercise judgment regarding which strategies to adopt, such as where, how and when to look for information (Wilson et al., 2010). This can include determining which particular repositories to search and which approaches to adopt. For example, they may choose to browse, enter a keyword query, or apply some combination of the two approaches. Users may find themselves performing a series of information retrieval tasks as part of a broader information-seeking session. The success of tasks at this level is usually evaluated by assessing the quality of information acquired relative to the information need.
The Work Task Layer
The information need that motivates information seeking is itself motivated by a further level of search: the work task. This layer is characterized by higher-level tasks that are created when the user recognizes an information need based on either an organizational need or personal motive (Marchionini, 1995). An example of an organizational need might be an engineer trying to understand product lifecycles and manage the risks associated with component obsolescence. An example of a personal motive, on the other hand, might be a shopper who wants to understand the available options in selecting an affordable home entertainment system for their family.
Work tasks are situated in an organizational setting and are likely to reflect local resources, constraints and working practices. This can include which particular approaches may be used to satisfy a given information need, and what resources are available, e.g. reference materials, libraries, human experts, etc. (Wilson et al., 2010). Evaluation at this level commonly focuses on assessing performance of the overall task. For the engineer mentioned above, this could mean developing product designs that use parts from preferred suppliers involving a minimal risk of obsolescence.
The Socio-organizational and Cultural Context Layer
Finally, we have the highest level in the model: the socio-organizational and cultural context. This level influences not only the overall task requirements but also the collective importance attached to meeting them. For example, the expectations associated with completing a given work task may be perceived differently when considered within the context of a large public sector organization, a small start-up business, or a home-based hobby.
In the remainder of this article, we’ll be primarily considering the first three levels of this model: information retrieval, information seeking, and work task.
Designing across Layers
The model above provides a useful lens through which to view the various layers of the search task. But more importantly, it provides a framework for understanding what type of design support is most likely to be effective at each level.
To illustrate, let’s return to our shopper who is trying to understand the various options in selecting an affordable home entertainment system. The overall goal is driven by a personal motive at the work task level, but in satisfying this goal, they will need to undertake a number of sub-tasks across several layers of the task context. We can examine the effect of context at each level, and explore what kinds of design support are appropriate. We can also start to think about search as a series of sub-tasks, reflecting the stages in the information seeking process.
At the outset, they are likely to be constrained by a lack of domain knowledge (e.g. of the main product types), and may be unsure of what questions to ask or even where to ask them. Perhaps they start by searching the website of an electrical retailer such as Comet. Unfortunately, tasks at this level are often poorly supported by online retailers, and a query for “home entertainment” returns an opaque list of product categories, which relies on the user knowing the terminology and which category to select.
But behind the tab labeled “Videos and Advice” lies a resource which is much more appropriate for this level of task. Instead of product categories, a query for “home entertainment” on PluggedIn returns content much better suited to goals at the work task level. This includes tutorial information in the form of “buyer’s guides” and “how to guides” alongside product reviews and user-generated content from topical forums and commentary streams. In contrast to the product category listing seen previously, this material provides far greater support for activities associated with the work task level, such as exploration and learning. In addition, it supports serendipitous discovery of latent needs through the provision of inspirational articles and expert reviews.
After exploring and reviewing this material, the user may start to formulate a more specific idea of the options open to them and the various tradeoffs involved. In so doing, the focus shifts from the higher-level work task to a set of information-seeking sub-tasks associated with those specific options. As their understanding deepens, they may wish to start collecting a list of ideas or candidates to investigate in greater detail at a later date. What kind of support can be provided to facilitate this? A simple but effective example can be seen at Amazon, which supports iterative information seeking via a personalized history panel that includes recent searches and recently viewed items. This is augmented by a facility for users to create, organize, and share their own lists.
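The mechanics behind such a history panel can be surprisingly simple. Here is a minimal sketch (not Amazon's actual implementation) of a most-recent-first, deduplicated, capped history of searches and viewed items:

```python
from collections import OrderedDict

class SearchHistory:
    """Per-user history panel: recent searches and recently viewed
    items, most recent first, deduplicated and capped in length."""

    def __init__(self, max_items: int = 5):
        self.max_items = max_items
        self._searches: OrderedDict[str, None] = OrderedDict()
        self._viewed: OrderedDict[str, None] = OrderedDict()

    def _touch(self, store: OrderedDict, key: str) -> None:
        store.pop(key, None)            # re-inserting moves key to most recent
        store[key] = None
        while len(store) > self.max_items:
            store.popitem(last=False)   # drop the oldest entry

    def record_search(self, query: str) -> None:
        self._touch(self._searches, query)

    def record_view(self, item: str) -> None:
        self._touch(self._viewed, item)

    @property
    def recent_searches(self) -> list[str]:
        return list(reversed(self._searches))

    @property
    def recently_viewed(self) -> list[str]:
        return list(reversed(self._viewed))
```

Repeating an earlier search simply promotes it back to the top of the panel, which matches how these panels typically behave.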
As the user starts to get a clearer idea of their needs and the most appropriate products, they may wish to review and verify the details of these particular items on independent sites. In this context, the focus shifts from information seeking to a set of specific information retrieval sub-tasks. This is the level that traditionally has been best supported by online retailers, and there are many examples of design support for such tasks. One notable example can be found at Samsung, which supports search via keyword queries using a particularly immersive style of auto-suggest.
This facility not only helps users enter valid product names and details accurately, but also provides character-by-character interactive guidance through the product suggestions shown in the dialog overlay.
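At its core, character-by-character suggestion is prefix matching against a product index. A minimal sketch using a sorted list and binary search (a production system would use a dedicated search index or trie; the product names here are invented):

```python
import bisect

class AutoSuggest:
    """Case-insensitive prefix suggestions over a sorted product list."""

    def __init__(self, products: list[str]):
        self._products = sorted(p.lower() for p in products)

    def suggest(self, prefix: str, limit: int = 5) -> list[str]:
        """Return up to `limit` product names starting with `prefix`."""
        prefix = prefix.lower()
        # In a sorted list, all matches are contiguous from this index
        start = bisect.bisect_left(self._products, prefix)
        out = []
        for name in self._products[start:start + limit]:
            if not name.startswith(prefix):
                break
            out.append(name)
        return out
```

Calling `suggest` on every keystroke yields the progressively narrowing list that drives this style of interface; the same index answers both the one-character and the full-name query.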
In summary, search tasks can be understood at three primary levels: information retrieval, information seeking and work task. Each of these layers provides a unique lens through which to view the search experience and understand the types of design support that are appropriate at each level. In a future article, we’ll build on our understanding of the task context to explore the physical context in greater depth, examining the fundamental influences that guide and shape the mobile search experience.
- Ann Blandford and Simon Attfield (2010). Interacting with Information. Morgan & Claypool.
- Kalervo Järvelin and Peter Ingwersen (2004). “Information seeking research needs extension towards tasks and technology”, Information Research, Vol. 10, No. 1 (October 2004).
- Gary Marchionini (1995). Information Seeking in Electronic Environments. Cambridge University Press.
- Max L. Wilson, Bill Kules, m.c. schraefel and Ben Shneiderman (2010). “From Keyword Search to Exploration: Designing Future Search Interfaces for the Web”, Foundations and Trends® in Web Science, Vol. 2, No. 1, pp. 1-97.