
October 29 2013

13:30

Five Tips for Conducting Scientific Research in the UX World

Although research plays a pivotal role in the practice of user-centered design, much has been written about how to approach it in a “quick and dirty” manner. Why the rush? I believe that the application of a more rigorous, scientific methodology could lend some much-needed credibility to our approach.

My love story with research began almost a decade ago. One day, while working as a novice prototyper, I was instructed to get feedback from customers. So — awkwardly — I introduced my ideas to potential users. Some told me what they liked; others gently glossed over what they would improve. I came away feeling accomplished.

Little did I know. My subsequent training as a scientific researcher helped me see the error of my ways. I realized that, in that moment, I had used biased responses to inform my design. I heard what I wanted to hear, not necessarily what I needed to hear.

A rigorous approach to research provides a much clearer path to unbiased findings, findings that go a long way towards informing our design. This article covers five perspectives to that end. Starting with research plans, we’ll cover details of testing methodologies and even the role of the researcher herself. Finally, we’ll discuss the ways these tips apply to our research in practice.

Go back to where it all began

All scientific research projects begin with a research plan, a document that outlines:

  • The problem (or the research questions) to be explored,
  • A summary of existing literature,
  • The hypothesis(es) or an extrapolation of any patterns evident in the existing literature,
  • The research participants who will take part (more on this, below),
  • The data collection methodology(ies) to be employed,
  • The planned analysis methods, and
  • Any expected results.

The goal in writing a research plan is to make certain that the entire team understands not only the purpose of the study but also that each aspect of it has been given due consideration.

Developing a sound research plan requires that we begin with an extensive review of existing theories, models, and other research studies. This ensures that we aren’t reinventing the wheel. For instance, if the study is based around the System Usability Scale, the best thing to do is to read the original paper to truly understand the scale. Finding original research is more valuable than pretty diagrams or the popularity of the source. Valuable academic citation sites include Google Scholar and Microsoft Academic Search. While there’s always the risk of playing a game of “telephone”, these documents often go through extensive committee review, which minimizes the chance that they will contain incorrect information.
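To see the payoff of going back to the source: the System Usability Scale has a specific scoring procedure (odd-numbered items score as the response minus 1, even-numbered items as 5 minus the response, and the sum is multiplied by 2.5) that is easy to get wrong if you only skim secondhand summaries. Here is a minimal sketch of that procedure in Python; the example responses are invented for illustration.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 responses.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The summed contributions are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A fairly positive (hypothetical) participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```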

Determine the number of participants beforehand

Sample size has been a hot topic for a while now. Some researchers assert that five participants will suffice [2]; others calculate their sample size based on the power that they want to achieve [3]; still others believe that a higher number carries a lower risk [4]. My take is that the sample size depends on the methodology of the study.

For example, a qualitative, exploratory study of mobile phone usage behavior needs rich, descriptive data, so the number of participants depends on the richness of the information received. A quantitative study, by contrast – say, one measuring the effects of mobile phone usage on behavior – depends on confidence limits and intervals as well as the planned analysis methods. The more analytical you want to be, the bigger your sample size needs to be.
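Both regimes can be made concrete with a few lines of Python. The first half below computes the cumulative problem-discovery curve behind the five-user heuristic, 1 - (1 - p)^n, using an average per-participant discovery rate of roughly 0.31 reported in that line of research [2]; the second half runs a conventional power analysis [3] with statsmodels. The medium effect size (Cohen's d = 0.5), 80% power, and alpha = 0.05 are illustrative assumptions, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

# Expected proportion of usability problems found after n participants,
# per the cumulative-discovery model 1 - (1 - p)^n with p ~= 0.31.
p = 0.31
for n in range(1, 11):
    print(f"{n:2d} participants -> ~{1 - (1 - p) ** n:.0%} of problems found")

# Sample size per group for a quantitative comparison: detect a medium
# effect (Cohen's d = 0.5) with 80% power at alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"~{n_per_group:.0f} participants per group")  # roughly 64
```

At five participants the discovery model already predicts about 84% of problems found, while the comparable quantitative study needs an order of magnitude more people – which is exactly why the methodology, not habit, should set the number.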

Either way, the key is to determine the number of participants before conducting our research and to continue researching until we’ve hit that number. This ensures that we aren’t swayed by early trends that might ultimately cause us to miss subtle issues. The Three Mile Island tragedy is a painful reminder of the severity of subtle issues.

Don’t let your interests conflict

Scientific research focuses on objectivity. For that reason, it always begins with getting approval from an Institutional Review Board (IRB), an academic organization that approves and monitors any research involving humans. The IRB requires that all researchers state they do not have a conflict of interest in the research project at hand.

So, what does this imply for UX designers? Simple: designers shouldn’t research their own designs.

Designers inevitably design things that make sense to themselves. This is beneficial in some ways, but it also paves the way for hundreds of biases to affect decision making. In order to gather unbiased research to inform designs, a trained, unbiased researcher needs to have the final say on the questions, and decipher the answers. This helps avoid experimenter biases like interpretive bias and observer bias.

Test the test

Pilot tests are tests of a research plan. For scientific researchers, pilot tests are necessary in order to ensure the validity of the research plan and help identify possible problems with it [5]. Ideally, pilot tests are conducted with a group of users that are representative of the target audience.

The pilot test works exactly like the proposed one, but instead of looking for data, it allows us to catch errors in the test itself. For example, if we are pilot-testing a survey and users don’t understand the word “cumbersome”, we might remove it from our final survey. With a survey, we’ll also time how long users take to complete it, make sure that every question is understood correctly, and ask the participants for candid feedback.

If we’re doing a usability test, we’ll provide the instructions and watch participants complete the tasks we plan to assign, to ensure that our instructions are clear; we’ll remind them to think aloud and to be frank with their opinions, as they would in an actual test; and, most important, we’ll take notes every time they ask that a question be repeated or clarified.

Make sure to stick to the planned script and behave as though this were a regular research study. Ask for honest feedback on how users would improve the overall study, and use your expertise as a researcher to apply their answers accordingly.

Typically, the results of a pilot test are used only to modify the actual test. Results such as survey answers or time taken to complete tasks should not be incorporated into the final results of the research, to ensure consistency.

De-bias, de-stress, de-tect

Scientific research often requires extensive vetting of researchers – the people conducting the research – prior to their participation in a project. The latest trend in the UX world is to get everyone involved with the research. As a researcher, nothing excites me more. That said, it is extremely important to acknowledge that a researcher’s inexperience and the number of open (versus hidden) observers can be inversely proportional to a study’s overall “success.”

For instance, usability testing (arguably the most common research method in the UX world) can be extremely stressful for participants [6]. Aside from being asked to ‘perform’, users are sometimes put in unnatural conditions, which can be very nerve-wracking. This, in turn, could hinder performance and risk invalidating our findings.

Another thing that affects performance is the fact that participants change their behaviour when they know they’re being observed, otherwise known as the Hawthorne effect. Worse still, this effect is only exacerbated as the number of observers increases. So while it’s definitely good to get more people involved and invested in research, there are a few precautions we should take in order to minimize the potential negative effects.

  1. First, whenever we’ve got a group of people involved in the research process, we should always ensure the facilitator of a research session has some experience and training, so that they’re not unknowingly influencing participants. Keep an eye out for leading questions and analyze the results accordingly.
  2. Second, either keep the observers hidden or keep them to a minimum. A researcher’s main job is to keep the data as pure as possible (objectivity, remember?), and a stressed participant does not provide reliable data.
  3. Third, remind users that we had nothing to do with the design, so that they aren’t hesitant to give negative feedback.
  4. Fourth, always remind the user that we’re testing the product and not them. This is (hopefully) old news, but users need to be reminded of it constantly.
  5. Fifth, and finally, always keep an eye out (or an ear, if the session is remote) for any sign of stress. If the participant starts to appear stressed, immediately change the topic or consider ending the session. The key here is to note the difference between a stressful or frustrating design interaction and a stressful research session. The former provides valuable insight while the latter can produce unreliable data.

Repurposing the scientific method

In summary, I suggest taking a leaf out of the scientific researchers’ book:

  1. Plan your study out and read the sources to get accurate information.
  2. Choose a number of participants before testing, and stick to it regardless of the first trends that appear.
  3. Be alert! Watch out for bias when conducting research.
  4. Test your tests.
  5. Avoid biases, stress, and leading questions.

Most importantly, don’t shy away from rigor in the research process; it’s the only thing that can lead to truly dependable results!

References

  1. Fin, P. (2006). Bias and Blinding: Self-Fulfilling Prophecies and Intentional Ignorance.
  2. Nielsen, J. (1993). Usability Engineering. Boston: AP Professional.
  3. Kraemer, H. C. & Thiemann, S. (1987). How Many Subjects? Statistical Power Analysis in Research. Thousand Oaks, CA, US: Sage Publications, Inc.
  4. Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379-383.
  5. van Teijlingen, E. & Hundley, V. (2001). The importance of pilot studies. Social Research Update, Issue 35.
  6. Schrier, J. (1992). Reducing stress associated with participating in a usability test. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 36, 1210. DOI: 10.1177/154193129203601606.

The post Five Tips for Conducting Scientific Research in the UX World appeared first on UX Booth.

July 23 2013

13:30

Interviewing the Interviewer, Part 1: A Chat with Steve Portigal

Whether you’re new to the practice or a user research veteran, there’s always something to learn. So when researchers Maish Nichani and Steve Portigal got together to talk, we were delighted to listen in. In the following two-part series, Maish and Steve take turns discussing some of the oft-overlooked aspects of the craft.

Over the course of his career, Steve Portigal has interviewed hundreds of people – families eating breakfast, hotel maintenance staff, architects, radiologists, home-automation enthusiasts, credit-default swap traders, even rock musicians. He’s also the founder of Portigal Consulting, where his work has informed the development of dozens of products and services. In his new book “Interviewing Users: How to Uncover Compelling Insights” (recently published by our friends at Rosenfeld Media), Steve sheds light on his seemingly simple but rigorous practice.

Maish Nichani is a UX practitioner and principal at Pebble Road, an enterprise UX consultancy. He and Steve are friends. Prompted by the release of Steve’s book, the two of them get together to really discuss aspects of their work. They had a whole book to go on, after all!

Included in the first half of the transcript are the differences between interviewing and day-to-day conversation, as well as what Steve describes as the “tipping point” that occurs during research. Later this week, we’ll present the second half – a sort-of reverse interview – in which Steve asks Maish what he thinks about the current and future state of the profession. And if that wasn’t enough reason to check back in, we’re also running a contest, giving away three copies of Steve’s book. Details below!


Thanks, Steve, for taking the time to chat! What is the hardest part, do you think, for newcomers to grasp when it comes to interviewing users?
I don’t think people really grasp that there’s a big difference between talking to people – something we do every day – and leading an interview. Some people decide to treat interviews exactly like conversations, whereas others decide to act like what they think “real interviewers” do (e.g., a list of questions that are read from a sheet of paper and don’t ever turn into an interaction). Both groups are missing out. Developing an understanding of the ways that interviewing inherits some aspects of normal conversation and the ways in which it differs is what separates newbies from those with a bit of skill.
What is an appropriate response to give clients who insist on specifying aspects of your research methodology?
Whenever a client approaches me and has already specified the approach we should take with their study, that’s usually time for a conversation. Sometimes teams create a research plan as a stake in the ground when what they actually want is feedback and a recommended approach. Sometimes, though, their plan is a good one, and we might just suggest one or two changes to see if they are amenable. I can’t count the number of times I’ve received a detailed request, exclaimed “what?!” and then had a really excellent conversation to better understand the reasons behind it. Obviously, no one should take a project where they don’t believe the method is going to produce results. An examination of a prescribed approach is one of the first tests of (the potential for) good collaboration.
A common stakeholder complaint regarding user interviews is that they take too much time. How do you respond to clients who insist on lean research?
It’s a red flag when someone approaches me with schedule concerns. Whether it’s their version of a lean process or not, I want to be sure that it’s feasible to address their issues with the given resources. Otherwise, it’s a project that’s going to fail – and no one wants to take that on!
I provide a “typical,” phased research schedule in the book:

As well as a version with highly compressed phases:

My job is to help clients be mindful of the tradeoffs they’re making as they build a project schedule. The more time we spend, the more complex issues we can explore and the more certainty we will have about our conclusions. It isn’t always necessary to reach “the ultimate depths of complexity” with “the ultimate heights of certitude,” though. Clients should adjust the schedule while being aware of the tradeoffs.
In your book, you suggest interviewers use “transitional rituals.” What are these rituals and why are they important?
In the same way that interviews are not the same as everyday conversations, the time we spend with research participants is separate from the time we spend doing our “regular jobby stuff.” Transition rituals help interviewers switch contexts, providing a more objective interview. For me, this sometimes means assembling the materials and equipment, checking that I have the documents, etc. That’s sufficient. Someone else might want to remind themselves that what they are about to do is focus on the participant. That also has the benefit of reminding them to let go of the stuff-at-the-office – the report they have to give, the meetings they’re missing, etc.
You go on to mention a certain “tipping point” that happens during interviews where the interviewee shifts from giving short answers to telling stories. Can you shed more light on that?
Almost all interviews (if done well) get to a point at which the interviewer receives a great deal of information without feeling as though they’re “pulling” it out. For some interviewees, this happens in 30 seconds; for others it might take 30 minutes, 60 minutes, or more. Ultimately, it’s an unpredictable element. It doesn’t always happen, but oftentimes, when running an interview, I have the realization “Oh, now we’re there!”
Are transcripts of interviews necessary? Do memos or notes suffice in some situations?
Notes taken during or after an interview are filled with inaccuracies. It’s just beyond human capacity to fully capture everything. You need an audio or video record. Whether you later transcribe those (my preference) or just watch them again is up to you, but notes are not the same as the definitive recording of the interview.
How do you identify insights when going through interview data? In other words, what makes an insight an insight?
Insights come from successive refinement. I like to have conversations with my team throughout the research process about what we’re hearing. That way, when we’re actually going through the data, it’s not the first time we’ve reflected on what is interesting. Later I go through data with two filters on: the first is looking for specific things that I’ve already identified as areas to understand; the second is looking for things that strike me as interesting. But going through data is just about gathering individual data points; it’s when you put them all together into something new (e.g., synthesis) that you start to be able to report an insight. As far as defining the term, ugh; I’ll let someone else worry about it!
Last question! What are some tips for design teams to spread the use of research findings inside their organization?
In short, it’s best to look for opportunities to share findings throughout the process [Ed: notice a pattern?], not just when you’ve got “findings.” I cover this in more detail in my presentation “Championing Contextual Research in your Organization.”


That’s all, folks! Thanks again, Steve and Maish, for sharing your knowledge with us. Here’s a summary of Steve’s points:

  • Leading an interview is very different from everyday conversation. This subtle difference makes all the difference.
  • Methods-first briefs (in which a client prescribes a process) provide opportunities for researchers to meet clients and understand their approach.
  • Research can’t be rushed. Time is commensurate with outcome.
  • Transitional rituals provide time to remove our own hat and wear our participant’s.
  • Tipping points indicate states of flow during an interview, a natural outpouring of information.
  • Always record interviews when you can. Don’t depend on memory or scribbled notes.
  • Insights come from two points of view: what’s specified as part of research and what’s personally interesting!

In Part 2, later this week, we’ll share the “reverse interview” in which Steve Portigal asks Maish how he and his team work to hone their skills over time, as well as how research might be stored in the future.

As for the book giveaway, longtime readers know the drill. To enter, simply follow @uxbooth on Twitter and leave a comment on this post answering the question: What’s the most surprising thing you’ve learned while conducting user research? Be sure to include your Twitter handle in your comment and to leave it before this Thursday at midnight PST. We’ll contact winners over Twitter. Good luck!


The post Interviewing the Interviewer, Part 1: A Chat with Steve Portigal appeared first on UX Booth.


July 02 2013

13:30

The Art of Guerilla Usability Testing

Guerrilla usability testing is a powerful technique. Designer Martin Belam describes it as “the art of pouncing on lone people in cafes and public spaces, [then] quickly filming them whilst they use a website for a couple of minutes.” Let’s skip the pouncing part and instead focus on its subtleties, including how to obtain and share feedback with our team.

I recently worked on a quickstart project in which my team was asked to build a responsive website in a short amount of time. We were given very little time to code (let alone conduct research) for the endeavor, yet by employing guerilla usability testing along the way we collected feedback on the brand position. Eventually, we aligned our designs to both customer expectations and business goals.

Once a week throughout the project, we tested different kinds of prototypes to bring the business’s ideas to life. For example, while mid-development, we sketched a mobile version of the site on index cards and did a quick assessment. This revealed navigational problems (which guided us to rethink a key point in the customer journey) and even ended up shaping a bit of the brand’s media material. What’s more, guerilla usability testing opened our stakeholders’ eyes so that they challenged their own, innate assumptions about “the user.”

We iterated through our design ideas using lo-fi techniques like paper prototyping. Sketch by Chris Cheshire.

The bottom line? Guerilla usability testing presented itself as an easy-to-perform technique for refining the user experience. It helped us validate (and invalidate) critical assumptions at cheap cost and with rapid speed.

Breaking it down

It’s hard to see the magic that guerrilla usability testing affords and not want in on the action, right? Here are some basic questions to consider before getting started:

  1. What shall we test?
  2. Where will we test?
  3. With whom will we test? and, of course,
  4. How will we test?

What shall we test?

One of the best parts about this kind of testing is that it can be done with almost anything, from concepts drawn on the back of napkins to fully functioning prototypes. Steve Krug recommends testing things earlier than we think we should and I agree – get out of the building as soon as possible.

Test what the product could be so as to shape what the product should be. Even loosely defined UI sketches can be a great way to evaluate a future product. In fact, recent research shows that lower-fidelity prototypes can be more valuable for exploring both high- and low-level user interactions.

Where do we test?

Where we conduct tests affects how we perform and document our work. For instance, if we’re testing a new mobile app for a retail chain, we might go to the store itself and walk the aisles; if we’re working on “general” office software, we might test it with coworkers in a different part of the office; etc. The point is: let context drive the work.

With whom do we test?

When designing for the mass market, it’s easy enough to ask friendly-looking strangers if they have a couple minutes to spare. Public spaces and shopping centers present some of the best places to do this on account of the sheer amount of foot traffic they receive (as well as the relaxed nature of the environment). With more specific user sets, however, it’s useful to target subjects based on their context (see above): a mixture of location and behavior.

Coffeeshops are great because you’ll often find test subjects from varying cultural backgrounds and different age ranges.

How do we test?

Testing is fairly straightforward: have participants talk aloud as they perform tasks. Use the think-aloud protocol to test overall product comprehension rather than basic task completion. The key is to watch customers fiddle with a product and silently evaluate its usability. As Sarah Harrison explains, “Observing users is like flossing – people know they’re supposed to do it every day, but they don’t. So just do it. It’s not a big deal.”

Always start with open-ended, non-leading questions like:

  1. What do you make of this?
  2. What would you do here?
  3. How would you do [that]?

By answering these kinds of questions, participants tell a loose story in which they explain how they perceive a product. Along the way, we can generate ideas for how to improve things in the next iteration.

Employing the technique

Guerrilla usability testing is very much about adapting to the situation. That said, here are some helpful hints that I find consistently work in different international contexts:

  1. Beware the implicit bias. While coffeeshops are a great place to find test participants, focusing on people who frequent them introduces bias to our work. Simply acknowledging this implicit bias can help designers neutralise subjective experiences and account for individual differences. Remember to target different genders and be fair in who you approach.
  2. Explain what’s going on. Designers should be honest about who we are, why we’re testing, and what sort of feedback we’re looking to receive. Oftentimes, it’s best to do this with a release form, so that people are fully aware of the implications of their participation – like whether the footage will just be used internally or shared globally at conferences. These sorts of release forms, while tedious to carry around, help establish trust.
  3. Be ethical. Of course, being honest doesn’t mean we need to be fully transparent. Sometimes it’s useful to skip certain information, like if we worked on the product they’re testing. Alternatively, we might tell white lies about the purpose of a study. Just make sure to always tell the truth at the end of each session: trust is essential to successful collaboration.
  4. Make it casual. Lighten up tests by offering cups of coffee and/or meals in exchange for people’s time. Standing in line or ordering with a test subject is a great opportunity to ask questions about their lifestyle and get a better feel for how a test might go.
  5. Be participatory. Break down barriers by getting people involved: ask them to draw – on a napkin or piece of notebook paper, for example – what they might expect to see on the third or fourth screen of a UI flow. This doesn’t have to be a full-blown user interface necessarily, just a rough concept of what’s in their head. You never know what you’ll learn by fostering imagination.
  6. Don’t lead participants. When you sense confusion, ask people what’s going through their head. Open them up by prodding, saying “I don’t know. What do you think?”. People in testing situations can often feel as though they are being tested (as opposed to the product itself), and therefore can start to apologise or shut down.
  7. Keep your eyes peeled. It’s important to capture passing thoughts for later analysis. Ethnographic observation is one good way to record what you were thinking during tests. Don’t get too hung up on formalised notes, though; most of the time your scribbles will work just fine. It’s about triggering memories, not showing them off at an academic conference.
  8. Capture the feedback. A key part of any testing process is capturing what we’ve learned. While the way in which we do this is definitely a personal choice, there are a few preferred tools available: apps like Silverback or UX Recorder collect screen activity along with a test subject’s facial reaction. Other researchers build their own mobile rigs. The important part to remember here is to use tools that fit your future sharing needs.
  9. Be a timecop. Remember, this isn’t a usability lab with paid users. Be mindful of how much time you spend with test subjects and always remind them that they can leave at any point during the test. The last thing you’d want is a grumpy user skewing your feedback.

Sharing the feedback

Conducting the tests is only half the battle, of course. To deliver compelling and relevant results from guerilla usability tests, designers need to strategically decide how we’ll share our findings with our colleagues.

When analysing and preparing captured feedback, always consider your audience. The best feedback is the kind that understands stakeholders and kickstarts important conversations between them. For example, developers who need to evaluate bugs will have different needs than executives who want to prioritise new features.

Next, when delivering feedback, align it with your audience’s expectations. Try editing clips in iMovie or making slides in PowerPoint. Your co-workers are probably as busy as you, so an edited-down “trailer” that highlights relevant results, or a bullet-point summary with powerful quotes, is always a good way to keep people listening.

Go guerilla

At the end of the day, guerilla usability testing comes in many forms. There’s no perfection to the art. It is unashamedly and unapologetically impromptu. Consider making up your own approach as you go: learn by doing.

Note: Thanks to Andrew for providing lots of feedback on early drafts of this article.



The post The Art of Guerilla Usability Testing appeared first on UX Booth.

July 31 2012

13:30

Running a Successful User Workshop

For UX professionals, talking to real users is undoubtedly an important part of the process. Our clients are experts in their industries and we are experts in ours but the best way to learn what users do, think, and want is to ask them directly.

Users aren’t fictional; who do you want to speak to?

That’s where user workshops come in handy. In essence, user workshops are sessions where we invite people who have a connection with our client to meet and talk about their experiences. Through workshops we can draw together groups of real people and quiz them about their behaviours and opinions.

User workshops won’t tell you exactly what to do but, if run correctly, they can give you invaluable insight at the crucial early stages of a project. They can also be relatively inexpensive to run.

Group dynamics can also lead to interesting discoveries that we can’t get from other forms of user consultation. For example, a user may be inspired by the debate and thus encouraged to share views when they may otherwise remain silent.

All this sounds good, yes? Here’s some advice to help you set up your next user workshop.

Before your workshop

The best workshops take planning. Start by talking with your colleagues and/or clients about your reasons for wanting to hold a user workshop:

  • Who do you want to speak to?
  • What information are you hoping to obtain?
  • What topics of discussion would presumably provide the most insight?

Next, recruit well. Recruiting users is always the hardest job because it takes longer than you think. It doesn’t matter whether you use a specialist recruiter (e.g. Acumen) or drag people in off the street, as long as you determine who you want to speak to, how many people you need, and what you will give them for their time. Be sure to incentivise well: £25-£50 in hard cash is enough to ensure people turn up, but not so much that they just come for the money.

Create profiles of the kind of people you want to talk to – preferably people who already understand the product or service you are discussing. Be sure to get the numbers right: up to six people is a good number for a simple discussion; 12 to 18 is better if you want something more involved, with breakout sessions.

Prepare your users. Whatever time you set aside for your workshop, you don’t want to spend a large chunk of it explaining why people are in the room and then another chunk sitting in silence while everyone has a good think about it. Instead, send participants an email or letter about a week before the actual session that explains where it will be, how long it will take, what you want to talk about, and what they should bring with them (do you want them to look at a website beforehand?).

Get the setting right. If you can, visit the intended venue for your workshop to make sure it’s up to the job – Is it big enough and in the best location? Does it have the right facilities? – Get a feel for where everyone will sit and move around.

Kickoff

Workshops will inevitably be an alien environment for your users. For some, the idea of sharing their thoughts with a roomful of strangers will be downright scary. At the start of the session you should smile and run through a simple checklist, saying something akin to:

  • The session will largely involve open discussion and I want you to speak up!
  • I’ll give you support but I want you to do the majority of the talking.
  • Be vocal. The more you say, the more I will understand what to do next.
  • Be honest. All your views – positive and negative – are important.
  • I am not the client and anything you say will remain confidential.
  • Finally, and most importantly, there is no such thing as a silly idea. I want to approach things afresh, without preconceptions or perceived limitations.

Make the most of your time

At this point you have achieved something relatively rare – you have gathered users together in one room. Don’t waste the opportunity! Work hard to avoid dull monologues peppered with uncomfortable silences. One of the best ways to get the blood pumping and conversation flowing is to get people out of their chairs. Hand out post-it notes and pens. As you progress through the session, encourage people to scribble down their ideas and stick them on the nearest wall.

You are looking for breadth, not depth so avoid spending too long on any specific subject (e.g. the pros and cons of social media); don’t be afraid to close discussions down if you think a particular topic has been wrung dry.

Remember you are dealing with different personalities. Encourage the quiet ones to speak and wrestle the talking stick from the verbose, but do it in a nice way. Mike B. Fisher provides an excellent observation on differing personality traits and how to deal with them in his article, Understanding User Personalities.

Don’t hold the baton!

And when it comes to talking: don’t hold the baton. People naturally want to fill silences; shutting up will encourage others to talk. When I put a question to a group I count to ten in my head before saying anything else. I usually get to about six before someone says something and… off we go!

Record the events. If you’re doing it right, you’ll be too busy running the session to take notes so get someone else to help you, or use a Dictaphone or a video camera to record what people say and do. For me, voice recordings are most useful. I can play them over and over again and a lot of what users say then tends to sink into my brain through osmosis.

And if you’ve used post-it notes, take pictures of everything before you take them off the wall – you’ll soon forget how they were laid out.

Report back

However you communicate what you’ve learned, make sure you do it in a quick, easy-to-understand way. Nobody reads 150-page, wordy reports so be economical with the detail.

I prefer to sit and talk through the findings. I may support the discussion with simple, visual presentations and use recorded clips if they help to make a particular point.

I will sometimes create a more involved document to be circulated to the masses, but even then I avoid wordiness. I rely hugely on images and simple bullet-point lists, and aim to produce something that anyone can skim in a couple of minutes to get the general gist.

In sum: next time, don’t just think about your users, talk to them!


The post Running a Successful User Workshop appeared first on UX Booth.

June 05 2012

13:30

5 Useful Lies to Tell User Research Participants

If you’ve ever run a research session or usability test, you’ll know they can be tricky to facilitate. After all, you’re dealing with people; and people come with a whole host of existing preconceptions, personalities, emotions, and experiences. One thing that can help you gain more honest – and thereby useful – feedback from research participants is, in fact, to lie to them.

Data is a tricky thing. Not only must it be properly contextualized and analyzed in order to bear useful information, it must also be collected and collated in a prudent fashion to begin with. Researchers go through this high level of detail to ensure the validity of their results. Dr. Marion Joppe of Ryerson University provides a more exacting definition:

Validity determines whether the research truly measures that which it was intended to measure or how truthful the research results are.

User researchers can increase the validity of their results in a variety of ways. Sometimes they conduct research “on-the-road” – known as ethnographic research – to interact with participants in their context of use. Other researchers go as far as recreating the environmental setting in which the product will be used. For example, if testing a television or video game, they might rearrange their lab to feel like a living room (comfy sofa, pictures on the wall, etc.). If the product being tested is something that’s mostly used in the evenings, they might change the lighting in the room. If participants would often be interrupted while doing a particular task, the researchers might frequently interrupt their participants during the test. You get the idea.

In his 1994 paper Usability: Practical Methods for Testing and Improvement, Miles Macleod posed the following questions to aid the validity of research:

  • Are you looking at the right things to be representative of real-world use?
  • Are you collecting the right data and the right amounts of it?
  • Are you analyzing the raw data reliably?

The general consensus across these approaches – and Macleod’s questions – is that increasing the validity of a test requires scrutiny and planning. We’re all aware that planning is a good thing, but what if you have planned accordingly and just want to ensure more worthwhile results? That’s where lying – err, deception – comes in. Two types of deception are commonly used by researchers to gather results with a greater degree of validity: active deception, in which participants are misinformed about certain aspects of a study, such as its true purpose; and passive deception, in which they are not made aware of certain aspects of a study.

It is often necessary to deceive users during research because giving participants complete information will likely change how they view what they’re doing, how they think, what they do, and what they say. In turn, the results are less valid. Robert Kerr provided a good example of this back in March, known as “the Good Subject”: a respondent who – upon knowing the true purpose of the study – will be eager to say and do the things they think the experimenter wants, rather than what they would do naturally.

Anything we can do to uncover more valid results is a step in the right direction. To that end, here are a number of lies that you can use to obtain more valid results.

Tell them you had nothing to do with the project

“I’ve not worked on this at all so please feel you can be honest in your opinions”

Telling the participant you designed the thing they’re testing will very likely ruin the validity of the research. Non-confronters, people-pleasers, and the aforementioned Good Subjects tend to go out of their way to avoid conflict and will therefore refrain from making negative remarks. Instead, they’ll be full of overwhelming praise even if they noticeably struggle on many of the tasks.

Even if you are the person who designed the product being tested, just omit that information. If they ask, lie. Say you’re not part of the design team at all; you’re just “a researcher.” In fact, even if they don’t ask, you’re better off denying any affiliation with the software whatsoever – they’re probably thinking it.

Play dumb

“I’m actually not familiar with this software so I’m afraid I can’t help you. Would you mind spending another minute on this task whilst talking me through your thoughts and expectations?”

Even if you deny having designed the product that they’re about to use, respondents will likely assume you know it well. If you’re asked for help, use your judgement and assess the length of time the participant has already spent on the task. If they haven’t tried long enough – and they’re not overly stressed – play dumb. This can often instantly refocus them. Another option is to state that you would be “unable to help them, as in real life”; however, this implies that you do know how to complete the task, which can add to their frustration and performance pressure.

Still not sure what “playing dumb” is? The user may ask you “What should I press here?” To which you might say “What would you expect to press?” This is a good start, but you might increase the power of your response (in addition to switching the responsibility of the task back to the user) by adding, “I’m actually not familiar with this software so I’m afraid I can’t help you. Would you mind spending another minute on this task whilst talking me through your thoughts and expectations?”

Lie about the purpose of the study

“We’re just making sure that everything works as you’d expect it to”

By telling the user the true purpose of the study you risk contaminating the results. Research respondents will likely pay more attention and put more focus on any task they know you’re analyzing. This isn’t how they would normally interact with what you’re testing, of course. To keep their reaction as realistic as possible, it’s useful to lie about the purpose of your study.

Lying about a study’s purpose is one of the oldest tricks in the book, according to Allan Kimmel. His 2001 research paper found it to be one of the most common practices amongst seasoned researchers. Though it’s easy – even natural – to do, be sure you tell the truth after the test has concluded. More on this later.

Lie about the number of people observing the test

“One or two people might pop into the room next door to watch for a bit, is that ok?”

User research sometimes takes place in a room with a two-way mirror so that the researcher’s client(s) can observe the test in an adjoining room. If there are lots of people behind the two-way mirror observing, don’t let the user know this or it will put them under immense pressure. When respondents know they’re being watched, they often feel pressure to say positive things and perform well.

Lie about how well they’re doing

“Oh, fantastic; that’s really useful!”

Speaking of performing well, some users – especially first timers or shy respondents – may need the occasional bit of reassurance and/or encouragement. A good example would be “Oh, fantastic; that’s really useful!” (even if it isn’t) whilst keeping your body language fairly neutral. This phrase can also be used to gain more comments from the user and can be very effective at helping users to feel more comfortable expressing what they dislike.

Lavishing praise might not seem like a lie per se, but it’s just as powerful. Give it sparingly. Overly positive reinforcement can actually encourage a very specific response from the user, leading to confirmation bias.

One important caveat

Okay, you understand the notion of research validity and you’ve got a bevy of lies you just can’t wait to tell. What’s the catch? Although lying can help you get more valid results, it’s very important that you don’t violate the ethical guidelines set by the APA (American Psychological Association):

  • Any deception must be justified in terms of significant scientific, educational, or applied value that outweighs any risks to participants.
  • It must not cause physical pain or emotional distress.
  • The researcher must debrief the participant at the end of the session.

These ethical guidelines particularly apply to Lie #3. Always explain the true purpose of a study at the end of the research session. This should be done carefully, to ensure the user is clear about the importance of the lie(s) and how telling the truth would likely have changed their responses. It’s a good chance for them to reflect further, and you may find that at this point – when the participant is relaxed because, in their mind, the research is over – some of the most useful insights can be gleaned.

Roundup

Remember these are white lies that aren’t intended to harm the participant in any way. Using them can help participants feel at ease which encourages more honest responses and therefore higher research validity. You don’t need to use them all the time.

Before you put them through their paces, carefully consider the aim of your research and the impact of each lie. You may wish to test out just one or two lies until you feel comfortable. It may feel odd at first, but remember it’s for the benefit of both the user (to put them at ease) and the research (to gain better results) as a whole. When you notice the user’s posture or facial expression visibly relax, you’ll know the lie has worked well.

Are you ready to start lying to get more from your research? It’ll be our little secret!


The post 5 Useful Lies to Tell User Research Participants appeared first on UX Booth.

March 06 2012

14:30

Make your Findings Effective: Choosing the Right Method to Present Research


Image by Aaron Sarauer

As user researchers we have a common goal: to provide a crystal clear picture of our research. To that end, our presentation must be both believable and digestible for our audience; stakeholders need to quickly scan whereas developers require specific details in order to take action. Likewise, the medium we use to present our findings is important.

Conventional mediums such as PowerPoint or Word might work great in some situations, whereas others call for unconventional ones such as spreadsheets, video, storyboards, and/or sticky notes. To present our findings effectively, it’s imperative that we correctly match all three: the audience, the medium, and the message itself.

Understand your audience


Image by CJ Isherwood

Just as designers should understand for whom they design, so should researchers understand for whom they research. Be sure to ask the following questions before choosing a presentation method:

  • Who is your audience? What is their primary role? (e.g. designers, executives, marketing, other user researchers, programmers, community, etc.)
  • What is your relationship with this audience? Is it new or ongoing?
  • What is the industry of this project? (e.g. government, health care, medicine, etc.)
  • What development cycle will this research be adopted into? (e.g. agile, hybrid, or waterfall)

In my experience (which consists of new and repeat stakeholders, both designers and executives) providing an executive summary in an email that also includes a link to the entire report is well received.

An executive summary email is pretty easy to create. Just make sure to focus on the important details and avoid any research jargon – save that kind of stuff for your actual report! More generally, structure your email as follows:

  1. A brief description of the goal, methodology, and participant demographics
  2. Notable findings
  3. High- and medium-level issues (in a bulleted list) together with participant quotes (recommended) or even video snippets
  4. Task success ratings and metrics studied (if applicable; a sketch of one such metric follows this list)
  5. Recommendations and next steps (if applicable)
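Where task success ratings are reported, small UX samples make the raw completion rate misleading on its own, so it helps to pair it with a confidence interval. Below is a minimal sketch in Python using the adjusted Wald (Agresti-Coull) interval, which behaves well at small sample sizes; the 7-of-10 figures are invented example data.

```python
import math

def adjusted_wald(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) confidence interval for a
    task-completion rate; z=1.96 gives a ~95% interval."""
    p_adj = (successes + z ** 2 / 2) / (n + z ** 2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z ** 2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 7 of 10 participants completed the task.
low, high = adjusted_wald(7, 10)
print(f"70% completion, 95% CI roughly {low:.0%}-{high:.0%}")  # ~39%-90%
```

Reporting “70% completion (95% CI 39%-90%)” tells stakeholders how much weight the number can actually bear.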

When working with larger audiences – such as the open source community – the approach is different, however. Larger audiences include a variety of roles with varying degrees of involvement. For them, we provided a comprehensive report posted on the community website, with the option to react in the comments.

Establish credibility

Before your results are taken seriously your credibility must be established. This is easier said than done, of course. One quick way to gain the favor of your audience is to present your research in a way that makes sense to them.

While working with new stakeholders or stakeholders who are not familiar with user experience, a traditional report (PowerPoint, Text report) works well. Executives might not have the time to read an entire report but it’s reassuring to them (thereby, adding to your credibility) that the information is available to them. Over time, stakeholders may have more confidence in your data. Ask them about the effectiveness of your presentation and then pivot your method accordingly.

Credibility is directly correlated with the trust stakeholders have in your research methods, you as a person and user-centered design more generally. It is important that we act prudently in this regard, not just for ourselves but for our profession as a whole.

Manage Expectations

Time is also an important factor affecting our presentation, as user research projects can range anywhere from just a few hours to a few months. Regardless, we must ensure that the information we’re after is delivered in a timely fashion to help the project succeed!

Manage the expectations of your stakeholders by informing them that the presentation of your findings affects your overall turnaround time. In agile environments, for example, writing a report is often considered wasteful as the time spent can delay the team from pivoting. As such, many researchers consider rather unconventional approaches: a compact, bulleted list, a high level summary via email (see above), or a quick conference call. Any of these can provide enough information to jump start productive solutions. In all cases, though, be sure to inform your stakeholders about your choice of medium beforehand in order to establish/retain trust.

These sorts of lightweight approaches usually work well when a researcher is accessible to the team on a day-to-day basis (especially when dealing with prototypes). However, when the user researcher is an external resource, it’s recommended that they invest the time to generate a report, providing as much information as possible. That way the client can revisit the findings, in totality, at a later date.

Track progress transparently


Image by Eirik Refsdal

Finally, issue tracking is an invaluable asset, as it not only provides visibility into the state of a product’s usability but also aids in decision making. A spreadsheet log of issues can be effective enough for smaller projects. For bigger projects, there are other tools available, such as Sifterapp, JIRA, Lighthouseapp, or even a home-grown application. Choose your issue tracking solution based on a project’s complexity and shelf life.

Over time, analyze the issue tracking data to seek out patterns. Tracking the number of urgent/high-priority issues per feature will make it easier to prioritize and focus; it is also a great way to validate design changes. For live features, more rigorous tracking is recommended, whereas for iterative prototypes, lighter tracking can work. A minimal sketch of this kind of analysis follows.
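Here is a minimal sketch of that per-feature analysis in Python with pandas. The column names (“feature”, “severity”) and the sample rows are hypothetical, standing in for whatever your spreadsheet or tracker exports.

```python
import pandas as pd

# Hypothetical issue log; in practice, load it with pd.read_csv("issues.csv").
issues = pd.DataFrame({
    "feature":  ["checkout", "checkout", "search", "profile", "search"],
    "severity": ["urgent", "high", "high", "low", "urgent"],
})

# Count urgent/high issues per feature to see where to focus first.
hot_spots = (
    issues[issues["severity"].isin(["urgent", "high"])]
    .groupby("feature")
    .size()
    .sort_values(ascending=False)
)
print(hot_spots)  # checkout: 2, search: 2 in this made-up log
```

Re-running the same count after a design change is a quick way to check whether the change actually reduced the problem.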

In all cases, it’s important that issue tracking be done transparently. Sharing your ongoing findings with the wider organization is useful in evangelizing usability and raising awareness about the concerns within a product. Consider creating a central platform to share your findings (especially one with the option to comment). Sharable documents (like Google Docs) or an intranet are good places to start.

Take it to the next level

With so much overlap among these pointers and variables, it’s difficult to offer a one-size-fits-all solution. However, by considering our audience, our medium, and our message together, we can make better choices with regards to how we present our research.

Certainly, I’ve only scratched the surface here. How do you present your findings?

The author wishes to thank Lisa Rex and Jeff Noyes for their feedback on this article.


November 15 2011

14:00

Design Studios: The Good, the Bad, and the Science

It’s a studio light. Get it?

Much has been written about the design studio methodology within the design community. In order to really understand how and why design studios work, though, designers must look beyond design—in particular, to social psychology and behavioral economics.

Too often, design problems are tackled ad hoc. A team stumbles into a design problem and they chase the first idea they come up with. If the idea dead-ends, the team may start over on a different solution. As soon as a viable solution appears, the team moves on to executing it.

At Case Commons, we use print-outs and post-its to share and critique ideas in our design studios.

In contrast, the design studio methodology offers a structured approach to design problems. While variations of design studio have been proposed—from Ungar and White’s initial 2008 case study on the methodology to Will Evans’ more sophisticated variation described earlier this year—all share the same basic flow.

Someone frames a design problem. They assemble a team of designers, developers, and product managers. Each team member individually brainstorms ideas. Each, then, shares his ideas with the group, which offers criticism and feedback. After more rounds of individual brainstorming and group discussion, a smaller team takes the resulting ideas and produces a final design.

Two criteria set design studios apart from other approaches:

  • Design studio brings together a group with diverse and balanced skill-sets such as design, product management, and development.
  • Design studio prescribes a specific process including one or more iterations of: individual brainstorming followed by group critique (explanation, feedback, voting). At the end, the surviving ideas are synthesized in a final design.

The Kennedy administration’s 1961 Bay of Pigs fiasco was one of many US foreign policy blunders attributed to groupthink and the failure to develop alternatives.

The JFK Presidential Library, image public domain

Developing alternatives

The starting point for understanding why design studios work is that they begin with brainstorming. Too often, in the ad hoc approach to design, the team runs with the first idea that comes to mind, before exploring alternatives.

Since the early 1990s, behavioral economists have used findings from cognitive science to challenge the idea that humans always act rationally. Behavioral economics has catalogued many of the ways in which we make sub-optimal decisions (economically and otherwise) due to cognitive biases inherent in our thinking. The bias most relevant to the design process is confirmation bias, our tendency to seek information that confirms our hypothesis and unconsciously filter that which negates it.

In the typical ad hoc design approach, described above, a team starts with a single idea and begins to vet it. As it does so, the likelihood grows that the team will fail to discover flaws in the proposed solution, or better alternatives. If the team hits a dead-end, they may explore alternatives; otherwise, confirmation bias pushes the team down its initial, possibly sub-optimal path. The risk, obviously, is that the team is ignoring other, better solutions that may be out there.

Design studio mitigates confirmation bias in two ways. First, it starts with an explicit brainstorming period, which gets lots of ideas out on the table before people get wedded to any one idea. (One improvement on the design studio methodology would be to encourage and reward the most extreme ideas possible. Because ideas begin to converge after the initial brainstorm, it is important to cast as wide an initial net as possible.) Second, design studio turns confirmation bias against itself. Confirmation bias causes individuals to fall in love with the initial ideas they develop during brainstorming. This inoculates individuals against groupthink by giving them time to develop and fall in love with their own ideas before being exposed to other people’s.

Design thinking has its own language for what is going on here: divergence (the creation of new alternatives) and convergence (the cross-pollination and merging of disparate ideas). For most groups, convergence occurs naturally; people are polite and avoid conflict, so compromise and consolidation of ideas happens easily. Divergence is, for many teams, less natural, and design studio are a great way of enforcing this crucial step.

Groups vs. individuals

What about the fact that design studios encourage a large group to tackle the design problem? We hear all the time that “two heads are better than one” and that diverse groups are better than uniform groups. Scientists in the 1960s developed a complementary task model that theorized that groups outperform individuals because they combine individuals with complementary talents and resources. This idea still informs our common sense today.

Research since the 1960s, however, has shown that it’s not that simple. Over the last century, social science has taught us that groups are subject to a whole range of dysfunctions such as conformity, groupthink, self-censorship, free-riding, and so forth. We’ve all experienced ineffective groups. The design community is particularly hostile to groups doing “design by committee.” Given all the problems with groups, when, if ever, are groups able to outperform individuals? Instead of using design studios, should we just have individuals do design?

Group vs. individual performance

In 1982, Hill published the most-cited paper (PDF) on group versus individual performance. Hill found that whether groups or individuals perform better depended on the type of task:

  • Brainstorming problems: When brainstorming, research shows that the best results come from having individuals brainstorm on their own and then pooling their ideas. The reasons groups perform worse at brainstorming include production blocking (when one person talks, the others in the group are inhibited from producing ideas), social loafing (individuals don't try as hard when they are judged as a group), and groupthink (individuals are less likely to pursue riskier, idiosyncratic leads in a group). In one experiment, researchers asked subjects to shout and clap as loudly as possible; people clapped a remarkable 20-30% less loudly in a group than individually.
  • Creativity problems: When performing a creative task, groups outperform individuals—if all individuals are of similar skill level. However, the research shows that weak and even average members drag down group performance, and that this effect is stronger when the task is more challenging. For difficult creative problems, if a group contains both high- and low-performing individuals, any one of the high-performing individuals would probably out-perform the group on his or her own. For most creative problems, though, it’s safe to assume a group will generally perform as well or better than an individual.
  • Complex problems: Sometimes a group cannot even agree on what the problem is, or the problem may contain multiple overlapping sub-problems. For complex problems such as these, groups are about as good as their second-best member. Why? Complex problems usually don't have intuitively obvious answers, so group members have to convince one another. Whichever individual best understands the problem will have to convince his or her peers; the group, then, will only be as good as the second-best individual and his or her ability to grasp the solution put forth by the best individual. Since it's usually hard to predict who the strongest individual will be for a given task, using a group makes sense.

These findings support the design studio methodology:

  • Identifying and framing a design problem is often a complex problem. This happens outside the scope of the design studio, before it is set up, and research suggests it's best done in a group setting.
  • The sketching phase of a design studio is a brainstorming problem, which research agrees should be performed individually.
  • The group sharing/feedback/discussion phase of a design studio is a creative problem: critiquing ideas, using proposed ideas to better understand the problem, searching for novel combinations of ideas, etc. Research agrees that it’s usually good to do this as a group. For harder problems or teams with widely varying skill levels, though, it may actually be better to have a single, highly skilled individual take the group’s ideas and design independently.

In fact, not only do social scientists support the design studio—social scientists essentially invented the design studio 50 years ago, except they called it Nominal Group Technique (NGT). NGT, though, only calls for one round of brainstorming and discussion, so technically the design studio is more akin to iterative-NGT.

What to Make of the Research

The overall moral of this story is that the design studio methodology is research-backed and theoretically sound. Compared to the usual ad hoc way designs often come together, teams would do well to incorporate design studios into their process. That said, there are a few lessons to keep in mind as you apply it in your organization:

The Kumar Model theorizes that design thinking involves a mix of research, analysis, synthesis and delivery. The design studio is a tool of the synthesis phase.

  • Design studio doesn’t prescribe how to analyze the domain or frame the design problem. Much design theory makes a distinction between analysis and synthesis. In that frame of reference, design studio is mostly about synthesis, and presupposes that the team has already done analysis (domain research, user research, etc.) before the studio. This is not a fault of design studio, but it is worth noting as a constraint. Do your homework before the studio.
  • The research on group vs. individual performance gives two cautions about the group sharing/feedback/discussion phase of design studios. One, be careful who you invite: low-performing individuals drag down groups. Two, groups under-perform individuals for the toughest design problems; sometimes it’s best to go with a single high-performing individual.
  • NGT, upon which design studio is based, was originally a one-pass process. Research has noted over the years how much a single pass limits the cross-fertilization of ideas and the generation of novel combinations. Design studio is at least two passes (design-discuss-design), but this still yields relatively little cross-fertilization. The more iteration in a design studio, the better.

Hopefully this has shed some light on why design studios can work. What have your experiences been, though? Have you led or participated in them, and have you found them to be effective? I’d love to hear your real-life stories!



July 21 2011

13:00

Creating Gesture Guidelines for Tablets, Part 2

A participant during the Guessability Study

How do you come up with the right gesture for an app or a game? If there is no precedent, then you’re on your own. Here I’ll discuss a 4-step method that’ll allow you to create gestures for specific actions, with validation from end-users.

In the first part of this series, we discussed the importance of having guidelines for gestural tablet interaction. Now that we understand the need to get the interaction techniques right, we’ll learn how to create gestures for specific actions.

How to conduct a gesture creation study

The Basics

Gesture creation is a four-step process: a guessability study, a rating study, gesture creation, and a gesture-meaning association test. Each stage feeds into and informs the next. This method can be used to discover gestures for more than one action; I used it for 18 distinct actions during my study.

Setting up the Gesture-Meaning Association test

Guessability study

Show subjects a short, two-slide animation: before-and-after screenshots that show the outcome of the gesture. Here you can view the video I used for the zoom-in action.

Using a drawing app that can capture multiple touch inputs at once, such as Doodle Buddy, set the “before” screenshot as the app’s background.

While recording the screen, ask the participant to draw the gesture that they feel would invoke the desired action.

Setting up the camera to record the gesture creation.

Take a screenshot of the completed gesture and stop recording on the camera.

Ask the participant to rate their created gesture against the statement, “The gesture I picked is a good match for its intended purpose,” on a 5-point Likert scale from “Strongly Disagree” to “Strongly Agree.”

If you want to be able to identify similarities between participants, then aim for 10 or more users at this stage.
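If you plan to compare gestures across participants later, it helps to log each trial in a consistent structure from the start. Here is a minimal sketch of what such a record might look like; the field names and sample values are my own illustration, not part of the original method:

```python
from dataclasses import dataclass

@dataclass
class GuessabilityTrial:
    """One participant's proposed gesture for one action."""
    participant_id: str   # e.g. "P01"
    action: str           # the action shown in the animation, e.g. "zoom-in"
    gesture_label: str    # your shorthand for the drawn gesture, e.g. "spread"
    screenshot_file: str  # screenshot of the completed gesture
    video_file: str       # camera recording of the participant drawing
    self_rating: int      # 1-5 Likert: "good match for its intended purpose"

# Hypothetical example data for two participants and one action
trials = [
    GuessabilityTrial("P01", "zoom-in", "spread", "p01_zoom.png", "p01_zoom.mov", 5),
    GuessabilityTrial("P02", "zoom-in", "double-tap", "p02_zoom.png", "p02_zoom.mov", 4),
]
```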

Rating study

Now invite 3 other people to watch each gesture video and ask them to rate the effectiveness of the gesture for the intended action; use the same question that was asked in the Guessability study.

After all the raters have scored each gesture, compute an average score; this identifies whether there is consensus that specific gestures are a good match for the intended action.
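As a rough sketch of that bookkeeping, assuming you have stored each rater’s 1-5 score per gesture (the gesture identifiers and scores below are illustrative):

```python
from statistics import mean

# ratings[gesture_id] -> the 1-5 Likert scores given by each rater
ratings = {
    "P01-zoom-in": [5, 4, 4],
    "P02-zoom-in": [2, 3, 2],
}

# Average across raters; a high mean suggests consensus that the
# gesture is a good match for its intended action.
averages = {g: mean(scores) for g, scores in ratings.items()}

for gesture, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{gesture}: {avg:.2f}")
```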

Gesture creation

With all the information you have compiled from the previous two stages, you can identify similarities and issues from the collection of gestures created during the Guessability study.

In my study, for example, I noticed that many participants used letters and symbols to represent certain actions. These symbolic gestures therefore became the basis for many of the gesture/action pairings I created.

Moreover, I noted that gestures that were used in advertising smartphones and tablet devices received high approval ratings; I therefore decided not to adapt or change gesture/action pairings that were already well-known.

For this stage in the process, the way you select gestures depends on your overall goals. For example, if you’re designing a game and you want the action to be challenging to invoke, then selecting the most popular gesture might not be the best choice for you. However, this process will provide you with all the information necessary to make these important decisions.
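If you do want to know which gesture was most popular for each action, a simple tally works; the gesture-elicitation literature (e.g., Wobbrock et al.) also uses an agreement score for exactly this kind of consensus measure. Below is a sketch, assuming you assigned a shorthand label to each drawn gesture while reviewing your recordings; the labels themselves are invented for illustration:

```python
from collections import Counter

# Labeled gestures per action, collected during the Guessability study.
proposals = {
    "zoom-in": ["spread", "spread", "double-tap", "spread", "circle"],
    "delete":  ["x-mark", "swipe-left", "x-mark", "scribble", "x-mark"],
}

for action, labels in proposals.items():
    counts = Counter(labels)
    top_gesture, top_count = counts.most_common(1)[0]
    total = len(labels)
    # Agreement score: sum over gesture groups of (group size / total)^2.
    # 1.0 means every participant proposed the same gesture.
    agreement = sum((c / total) ** 2 for c in counts.values())
    print(f"{action}: top = {top_gesture} ({top_count}/{total}), "
          f"agreement = {agreement:.2f}")
```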

Gesture-meaning association test

So you have what you believe are ideal gestures for specific actions. Now it’s time to validate the gesture/action pairings you have created.

A small set of users—3 to 5 would be ideal—will be provided with several pieces of paper: half will be the names of the actions and the other half will be the actual gestures. The participants will be asked to match the gesture/action pairings that they believe are correct.

Setting up the gesture-meaning association test

This allows you to measure the accuracy of selection: you can discover which pairings were challenging for participants to match and which were easier. Moreover, you can note the speed with which selections were made. Were certain pairings selected only through a process of elimination?
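Scoring the matching task is then a simple tally. A minimal sketch, with the pairings and response format assumed purely for illustration:

```python
# The intended pairings you settled on in the gesture creation stage.
intended = {"zoom-in": "spread", "delete": "x-mark", "undo": "counter-circle"}

# Each participant's matching: action -> the gesture card they paired with it.
responses = [
    {"zoom-in": "spread", "delete": "x-mark", "undo": "counter-circle"},
    {"zoom-in": "spread", "delete": "counter-circle", "undo": "x-mark"},
    {"zoom-in": "spread", "delete": "x-mark", "undo": "counter-circle"},
]

# Per-pairing accuracy reveals which pairings were easy or hard to match.
for action, gesture in intended.items():
    correct = sum(1 for r in responses if r.get(action) == gesture)
    print(f"{action} <-> {gesture}: {correct}/{len(responses)} matched correctly")
```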

Final thoughts

This method can go a long way to ensure that the gesture/action pairings you are using in your apps or games are the best they can be.

There will always be actions that are challenging to depict with a gesture, yet this method allows you to identify these, providing you with opportunities to design around such constraints.

Give it a go.

Lead image for this article (on UX Booth homepage) courtesy of quinn.anya

