
When to Use Which User Experience Research Methods

By Christian Rohrer

The field of user experience is blessed (or cursed) with a very wide range of research methods, ranging from tried-and-true methods such as lab-based usability studies to those that have been more recently developed, such as desirability studies (to measure aesthetic appeal).

You can't use the full set of methods on every project, but most design teams benefit from combining insights from multiple research methods. The key question is what to do when. To better understand when to use which method, it is helpful to realize that they differ along three dimensions:

  • Attitudinal vs. Behavioral
  • Qualitative vs. Quantitative
  • Context of Website or Product Use

    The following chart illustrates where several popular methods appear along these dimensions:

    Each dimension provides a way to distinguish between studies in terms of the questions they answer and the kinds of purposes they are most suited for.

    The Attitudinal vs. Behavioral Dimension

    This distinction can be summed up by contrasting "what people say" with "what people do" (very often quite different). The purpose of attitudinal research is usually to understand, measure, or inform change of people's stated beliefs, which is why attitudinal research is used heavily in marketing departments.

    While most usability studies should rely more on behavior, methods that use self-reported information can still be quite useful. For example, card sorting provides you with insights about users' mental model of an information space, which can help you determine the best information architecture for your site. Surveys measure attitudes or collect self-reported data that can help track or discover important issues with your site. Focus groups tend to be less useful for usability purposes, for a variety of reasons.
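
    As a rough illustration of how open card-sort results can inform information architecture, the sketch below tallies how often participants placed pairs of cards in the same group. The card names and groupings are invented for this example; the original article includes no code.

        from collections import defaultdict
        from itertools import combinations

        # Hypothetical open card-sort results: each participant grouped the same
        # cards into whatever categories made sense to them.
        sorts = {
            "p1": [{"pricing", "plans"}, {"login", "account", "settings"}],
            "p2": [{"pricing", "plans", "account"}, {"login", "settings"}],
            "p3": [{"pricing", "plans"}, {"login", "account"}, {"settings"}],
        }

        # Count how often each pair of cards landed in the same group.
        co_occurrence = defaultdict(int)
        for groups in sorts.values():
            for group in groups:
                for a, b in combinations(sorted(group), 2):
                    co_occurrence[(a, b)] += 1

        # Pairs grouped together by most participants are strong candidates for
        # living under the same section of the site's information architecture.
        for pair, count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
            print(f"{pair}: grouped together by {count} of {len(sorts)} participants")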

    On the other end of this dimension, methods that focus mostly on behavior usually seek to understand "what people do" with minimal interference from the method itself. A/B testing only changes the site's design, but attempts to hold all else constant, in order to see the effect of site design on behavior, while eyetracking seeks to understand how users visually interact with interface designs.
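
    To make the "hold all else constant" idea concrete, here is a minimal sketch of an A/B test: users are bucketed deterministically so that the design variant is the only systematic difference between groups, and a simple two-proportion z-test asks whether the behavioral difference is larger than chance. The experiment name, user IDs, and conversion counts are all hypothetical.

        import hashlib
        from math import sqrt
        from statistics import NormalDist

        def assign_variant(user_id: str, experiment: str = "checkout-redesign") -> str:
            """Deterministically bucket a user so the design variant is the only
            systematic difference between the A and B groups."""
            digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
            return "B" if int(digest, 16) % 2 else "A"

        def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
            """Compare conversion rates between variants with a two-proportion z-test."""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            pooled = (conv_a + conv_b) / (n_a + n_b)
            se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            p_value = 2 * (1 - NormalDist().cdf(abs(z)))
            return p_a, p_b, z, p_value

        # Invented counts, purely for illustration.
        p_a, p_b, z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=535, n_b=10_000)
        print(assign_variant("user-42"))
        print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")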

    Between these two extremes lie the two most popular methods we use: usability studies and field studies. They utilize a mixture of self-reported and behavioral data, and can move toward either end of this dimension, though leaning toward the behavioral side is generally recommended.

    The Qualitative vs. Quantitative Dimension

    The basic distinction here is that, in qualitative studies, the data is gathered directly, whereas in quantitative studies, the data is gathered indirectly, through an instrument such as a survey or a web-server log. In field studies and usability studies, for example, the researcher directly observes how people use technology (or not) to meet their needs. This gives researchers the ability to ask questions, probe behavior, or even adjust the study protocol to better meet its objectives. Analysis of the data is usually not mathematical.

    By contrast, insights in quantitative methods are typically derived from mathematical analysis, since the instrument of data collection (e.g., a survey tool or web-server log) captures such large amounts of data that it must be coded numerically.
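
    The sketch below shows what "indirect, numerically coded" collection can look like in practice: a handful of invented web-server log lines are aggregated into counts and rates rather than interpreted through direct observation. The log format and values are made up for illustration.

        from collections import Counter

        # Hypothetical web-server log lines: timestamp, user id, path, status code.
        log_lines = [
            "2024-05-01T10:02:11 u17 /search 200",
            "2024-05-01T10:02:40 u17 /results 200",
            "2024-05-01T10:03:05 u21 /search 200",
            "2024-05-01T10:03:30 u21 /error 500",
            "2024-05-01T10:04:12 u34 /search 200",
            "2024-05-01T10:04:58 u34 /results 200",
        ]

        # The instrument has already reduced behavior to countable records, so the
        # analysis is aggregation: "how many" and "how much" rather than "why".
        page_hits = Counter(line.split()[2] for line in log_lines)
        error_rate = sum(line.split()[3] == "500" for line in log_lines) / len(log_lines)

        print(page_hits.most_common())          # how many visits each page received
        print(f"error rate: {error_rate:.1%}")  # how much of the traffic failed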

    Due to the nature of their differences, qualitative methods are much better suited for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job answering how many and how much types of questions. The following chart illustrates how the first two dimensions affect the types of questions that can be asked:

    The Context of Product Use Dimension

    The final distinction has to do with how and whether participants in the study are using the website or product in question. This can be described by:

  • Natural or near-natural use of the product
  • Scripted use of the product
  • Not using the product during the study
  • A hybrid of the above

    When studying natural use of the product, the goal is to minimize interference from the study in order to understand behavior or attitudes as close to reality as possible. Many ethnographic field studies attempt to do this, though there are always some observation biases. Intercept surveys and data mining/analytic techniques are quantitative examples of this.

    A scripted study of product usage is done in order to focus the insights in very specific ways, such as on a redesigned flow. The degree of scripting can vary quite a bit, depending on the study goals. For example, a benchmarking study is usually very tightly scripted so that it can produce reliable usability metrics.
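
    As an illustration of the kind of metrics a tightly scripted benchmarking study produces, the sketch below computes a task success rate and mean time-on-task with a rough 95% confidence interval. The session data is invented, and the normal-approximation interval is just one of several ways to report the uncertainty.

        from math import sqrt
        from statistics import mean, stdev

        # Hypothetical results from one tightly scripted benchmarking task:
        # (task completed?, time on task in seconds) for each participant.
        sessions = [(True, 48.2), (True, 55.9), (False, 120.0), (True, 62.3),
                    (True, 44.7), (True, 71.5), (False, 118.4), (True, 58.0)]

        success_rate = sum(1 for done, _ in sessions if done) / len(sessions)

        times = [t for done, t in sessions if done]   # time on task for successful trials
        m, s = mean(times), stdev(times)
        margin = 1.96 * s / sqrt(len(times))          # rough 95% CI (normal approximation)

        print(f"task success: {success_rate:.0%}")
        print(f"time on task: {m:.1f}s ± {margin:.1f}s (95% CI)")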

    Studies where the product is not used are conducted to examine issues that are broader than usage and usability, such as a study of the brand or larger cultural behaviors.

    Hybrid methods use a creative form of product usage to meet their goals. For example, participatory design allows users to interact with and rearrange design elements and discuss why they made certain choices.

    Most of the methods in the chart can move along one or more dimensions, and some do so even in the same study, usually to satisfy multiple goals. For example, field studies can focus on what people say (ethnographic interviews) or what they do (extended observation); desirability studies and card sorting have both qualitative and quantitative versions; and eyetracking can be scripted or unscripted.

    Phases of Product Development (the time dimension)

    Another important distinction to consider when making a choice among research methodologies is the phase of product development and its associated objectives.

    1. STRATEGIZE: In the initial phase of product development, you are typically considering new ideas and opportunities for the future. Research methods in this phase can vary greatly.
    2. OPTIMIZE: Eventually, you will reach a "go/no-go" decision point, when you transition into a period when you are continually improving the design direction you have chosen. Research in this phase is mainly formative and helps you reduce the risk of execution.
    3. ASSESS: At some point, the website or product will be in use by enough people that you can begin measuring how well you are doing.

    The table below summarizes these goals and lists typical research approaches and methods associated with each:

    Product Development Phase: Strategize
      Goal: Inspire, explore, and choose new directions and opportunities
      Approach: Qualitative and quantitative
      Typical methods: Ethnographic field studies, focus groups, diary studies, surveys, data mining or analytics

    Product Development Phase: Optimize
      Goal: Inform and optimize designs in order to reduce risk and improve usability
      Approach: Mainly qualitative (formative)
      Typical methods: Card sorting, field studies, participatory design, paper prototype and usability studies, desirability studies, customer emails

    Product Development Phase: Assess
      Goal: Measure product performance against itself or its competition
      Approach: Mainly quantitative (summative)
      Typical methods: Usability benchmarking, online assessments, surveys, A/B testing

    Art or Science?

    While many user experience research methods have their roots in scientific practice, their aims are not purely scientific and still need to be adjusted to meet stakeholder needs. This is why the characterizations of the methods here are meant as general guidelines, rather than rigid classifications.

    In the end, the success of your work will be determined by how much of an impact it has on improving the user experience of the website or product in question. These classifications are meant to help you make the best choice at the right time.

