
November 19 2013

14:30

Fail Fast, Fail Often: An Interview with Victor Lombardi

Retrospectives are common. You’ve likely conducted one before. But how many companies are actually good at them? How many have the courage to be open and honest about their own shortcomings? My experience tells me that very few are. And that’s why Victor Lombardi’s recently released book is so necessary: unlike the retrospectives designers are used to seeing, Lombardi’s stories are full of objective, thoughtful, and insightful commentary.

An award-winning product designer, Victor Lombardi has had a hand in over 40 different software and internet projects over the course of his career. And during that time he’s clearly paid attention to one thing: namely, all of the different ways in which a project can unfold. His new book, Why We Fail, tells over a dozen stories of projects gone awry.

So why do design projects fail? Many reasons. Lombardi attempts to answer the question from a number of angles: product ideation, design, development, and marketing. After reading his book, we brought additional questions to the discussion: How does bias factor in? Or branding? And, on a different level, what can we learn from healthcare.gov?

Our full interview appears below. Additionally (as is always the case when we interview an author published by Rosenfeld Media) the publisher has graciously offered to give away a few books to readers. More information on that follows the interview!


Hey, Victor! Thanks for taking the time to chat. Throughout the book, you note a wide variety of places in which cognitive biases might affect an organization (“survivorship bias,” for example, is a perspective that exclusively favors success). Were you aware of bias and its effects from the outset or did you simply start to see bias the further you delved into your research?
I wasn’t expecting to hear about bias when I interviewed people for the book. Maybe that’s because I didn’t think people would open up this way. But they did.

I think it’s good therapy for us to talk through not only what we did during a project but also what we thought and felt. From there I brushed up on my psychology—Max Bazerman’s “Blind Spots” was particularly helpful—to explain the cognitive science behind the issues that led to failures.

Many companies find it (understandably) difficult to financially justify a culture that “embraces” failure. What advice do you have for them?
If senior management rules by ego, believing that the people at the top have the best ideas, then I’ve got nothing to say. They won’t hear my message.

For others, I think the overt message of “fail fast” is actually better framed as “experiment fast.” The most effective innovators succeed through experimentation. They’ve updated the traditional R&D department by stepping out of the lab and interacting directly with customers, running thoughtful experiments, and executing them quickly to learn what works and what doesn’t.

Anyone doing user-centered design is already 80% of the way there. It makes a huge difference just to shift your process towards the scientific method, phrasing research questions as hypotheses and iteratively testing them. A key difference is in the results: instead of a lot of usability data to analyze and interpret, you get a true or false result. This makes it much easier to decide what to do next.
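
To make that true-or-false framing concrete, here’s a minimal sketch in Python (assuming the statsmodels library is available; the checkout task, the 60% baseline, and all counts are invented for illustration):

    # Hypothesis: "The redesigned checkout completes above the current
    # 60% baseline." All numbers below are made up.
    from statsmodels.stats.proportion import proportions_ztest

    successes = 41   # participants who completed checkout with the redesign
    n = 50           # total participants tested

    # One-sided test: is the observed rate significantly above the baseline?
    stat, p_value = proportions_ztest(count=successes, nobs=n,
                                      value=0.60, alternative='larger')

    hypothesis_supported = p_value < 0.05   # a single true/false outcome
    print(f"p = {p_value:.3f} -> hypothesis supported: {hypothesis_supported}")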

I recommend reading up on methods like customer development and lean startup, or starting with the final chapter of my book.

In chapter four you recount the story of Wesabe and Mint, two startups that approached the financial space from slightly different perspectives. Wesabe asked users to manually upload their financial data (in the name of privacy and security) whereas Mint.com automated this task (at the risk of perceived security). Both were minimum viable products, but one failed while the other succeeded. Can you speak a little about what startups can learn, generally, from Wesabe and Mint.com’s subtle differentiation?
Wesabe was a useful service with a smart Web 2.0 strategy. Given more time and investment it would still be around. But certain classes of startups are dependent on attracting large numbers of customers in order to attract more investment. Mint.com chose features and designed them in a way that excelled at attracting customers. They won the competition even though Wesabe was superior in many ways.

But this isn’t true in every case. In the book I cover a broad spectrum of products: startups and mature products; mobile, web, and desktop software; hardware; and services. Different situations resulted in different lessons. I summarize the lessons at the end of each case study.

One of my favorite case studies in the book is Google Wave, in which you suggest that the first sign of trouble was that everyone had a different definition of what a “wave” actually was. Personally, I think this speaks to the strong connection between user experience, semantics and branding. How do we fail in this regard and how might we do better?
The UX field generally is not good at the conceptual design stage of creating new products compared to, say, industrial design or architecture. We fall in love with our first idea, and we can quickly and cheaply move from idea to working prototype—it isn’t natural to stay in the idea stage for a while to explore alternate solutions.

It’s unfortunate that Google Wave failed because the problem still exists. The solution was close. …maybe “Concept Design” should be my next book ;-)

Chapter 7, titled “Do the right thing,” tells the story of Plaxo and Classmates.com, two companies that each decided to employ dark patterns to “better” their business. What other kinds of stories/examples exhibiting bad behavior did you consider including in this chapter?
In cases like Classmates.com I had no doubt the behavior was unethical. Others were less clear cut. Some of the things Plaxo did [ed: such as mass emailing its members’ contacts] that annoyed us back then are now accepted practice. So it’s relative. I decided against including others because there was no smoking gun, so I’ll refrain from mentioning them here as well. If you really want to know, you’ll have to buy me a drink sometime.

Last question! I know it’s a bit premature, but what, if anything, do you think designers might learn from the (highly publicized) failure of healthcare.gov?
Let’s say we solved the myriad political and vendor-integration problems that plagued the project. What’s left are some intriguing customer experience issues. One seems to be that a long registration process is required before the customer can view the prices of health plans, because the plans and prices are determined by your registration information. I don’t know how they ended up with that design, but the decision sounds like a policy choice made around a conference table rather than through a design process that included running experiments.

What you can do if you find yourself in this situation is to acknowledge, out loud, that the goal of not showing prices prematurely is a good one, but the solution of making the customer do a lot of work up front is risky because more people will abandon the process before receiving any value from the site (see Wesabe vs. Mint). To mitigate this risk, you can generate alternate designs, mock them up, and test them out with customers.

Offhand, I can think of a few options:

  • Let visitors browse plans upon arrival and show the range of prices next to each plan to give people a general idea of cost. Then show them the actual prices after registration.
  • Show some realistic content so visitors know what factors will influence the price, like “Sally, a single mother of two in New York will pay $100/month for Plan X which includes benefits A, B, and C.”
  • If just a bit of data is needed to determine price, like state and income, just ask for that data and require registration later, when people are ready to buy a plan (sketched below).
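
To illustrate that last option, here’s a minimal sketch in Python. It’s purely hypothetical: the plans, base prices, state multipliers, and subsidy rule are all invented, and nothing here reflects how healthcare.gov actually computes prices.

    # Invented example: estimate plan prices from minimal inputs (state,
    # income) so visitors see value before being asked to register.
    BASE_MONTHLY_PRICE = {"Plan A": 220, "Plan B": 310, "Plan C": 450}
    STATE_FACTOR = {"NY": 1.20, "TX": 0.95, "CA": 1.10}  # invented multipliers

    def estimated_prices(state: str, annual_income: float) -> dict:
        """Rough, non-binding estimates; exact prices follow registration."""
        factor = STATE_FACTOR.get(state, 1.0)
        subsidy = 0.3 if annual_income < 30_000 else 0.0  # invented rule
        return {plan: round(base * factor * (1 - subsidy))
                for plan, base in BASE_MONTHLY_PRICE.items()}

    print(estimated_prices("NY", 28_000))
    # {'Plan A': 185, 'Plan B': 260, 'Plan C': 378}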

Thanks, again, for taking the time, Victor! Your book was a pleasure to read.

If you’re as jazzed about learning from failure as we are, I’d strongly suggest entering for a chance to win a copy of your own, courtesy of our friends over at Rosenfeld Media. To enter, simply follow UX Booth on Twitter and leave a comment on this post answering the question: What’s your favorite story of design failure (one you’ve witnessed firsthand or otherwise) and what lessons do you think it provides? Rather than pick the winners at random, as we usually do, we’ll work with Victor to pick the three best stories of failure. Their authors will receive copies of the book. Entries must be made by midnight PST on November 21st. Good luck!


The post Fail Fast, Fail Often: An Interview with Victor Lombardi appeared first on UX Booth.

October 29 2013

13:30

Five Tips for Conducting Scientific Research in the UX World

Despite the fact that research plays such a pivotal role in the practice of user-centered design, much has been written about how to approach it in a “quick and dirty” manner. Why the rush? I believe that the application of a more rigorous, scientific methodology could lend some much-needed credibility to our approach.

My love story with research began almost a decade ago. One day, while working as a novice prototyper, I was instructed to get feedback from customers. So — awkwardly — I introduced my ideas to potential users. Some told me what they liked; others gently glossed over what they would improve. I came away feeling accomplished.

Little did I know. My subsequent training as a scientific researcher helped me see the error of my ways. I realized that, in that moment, I used biased responses to inform my design. I heard what I wanted and not necessarily what I needed.

A rigorous approach to research provides a much clearer path to unbiased findings, findings that go a long way towards informing our design. This article covers five perspectives to that end. Starting with research plans, we’ll cover details of testing methodologies and even the role of the researcher herself. Finally, we’ll discuss the ways these tips apply to our research in practice.

Go back to where it all began

All scientific research projects begin with a research plan, a document that outlines:

  • The problem (or the research questions) to be explored,
  • A summary of existing literature,
  • The hypothesis(es) or an extrapolation of any patterns evident in the existing literature,
  • The research participants who will take part (more on this, below),
  • The data collection methodology(ies) to be employed,
  • The planned analysis methods, and
  • Any expected results.

The goal in writing a research plan is to make certain that the entire team understands not only the purpose of the study but also that each aspect of the study has been given due consideration.

Developing a sound research plan requires that we begin with an extensive review of existing theories, models, and other research studies. This ensures that we aren’t reinventing the wheel. For instance, if the study is based around the System Usability Scale, the best thing to do is to read the original paper to truly understand the scale. Finding original research is more valuable than pretty diagrams or the popularity of the source. Valuable academic citation sites include Google Scholar and Microsoft Academic Search. While there’s always the risk of playing a game of “telephone”, original papers often go through extensive committee review, which minimizes the chance that they contain incorrect information.

Determine the number of participants beforehand

Sample size has been a hot topic for a while now. Some researchers assert that five participants will suffice [2]; others calculate their sample size based on the statistical power they want to achieve [3]; still others believe that larger samples simply carry less risk [4]. My take is that the sample size depends on the methodology of the study.

For example, a qualitative, exploratory study on mobile phone usage behavior needs descriptive, insightful data, so the number of participants depends on the richness of the information received. But a quantitative study, such as one measuring the effects of mobile phone usage on behavior, depends on confidence limits and intervals as well as on the analysis methods. The more analytical you want to be, the bigger your sample size needs to be.
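
For the quantitative case, the required sample size can be computed directly from the desired confidence and power. Here’s a minimal sketch in Python (assuming the statsmodels library; the “medium” effect size of 0.5 is an invented example):

    # Per-group sample size needed to detect a medium effect
    # (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                              alpha=0.05,
                                              power=0.80)
    print(round(n_per_group))  # -> 64 participants per group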

Either way, the key is to determine the number of participants before conducting our research and to continue researching until we’ve hit that number. This ensures that we aren’t swayed by early trends that might ultimately cause us to miss subtle issues. The Three Mile Island tragedy is a painful reminder of the severity of subtle issues.

Don’t let your interests conflict

Scientific research focuses on objectivity. For that reason, it always begins with getting approval from an Institutional Review Board (IRB), a committee that approves and monitors any research involving humans. The IRB requires that all researchers state they do not have a conflict of interest in the research project at hand.

So, what does this imply for UX designers? Simple: designers shouldn’t research their own designs.

Designers inevitably design things that make sense to themselves. This is beneficial in some ways, but it also paves the way for hundreds of biases to affect decision making. In order to gather unbiased research to inform designs, a trained, unbiased researcher needs to have the final say on the questions and decipher the answers. This helps avoid experimenter biases like interpretive bias and observer bias.

Test the test

Pilot tests are tests of a research plan. For scientific researchers, pilot tests are necessary in order to ensure the validity of the research plan and help identify possible problems with it [5]. Ideally, pilot tests are conducted with a group of users that are representative of the target audience.

The pilot test works exactly like the proposed one, but instead of looking for data, it allows us to catch errors in the test itself. For example, if we are pilot-testing a survey and users don’t understand the word “cumbersome”, we might remove that word from our final survey. With a survey, we’ll also time how long users take to complete it, make sure that every question is understood correctly, and ask the participants for candid feedback.

If we’re doing a usability test, we’ll provide the instructions and watch participants complete the tasks we plan to assign, to ensure that our instructions are clear; we’ll remind them to think aloud and to be frank with their opinions, as they would in an actual test; and, most important, we’ll take notes every time they ask for a question to be repeated or clarified.

Make sure to stick to the planned script and behave as though this were a regular research study. Ask for honest feedback on how users would improve the overall study, and use your expertise as a researcher to apply their answers accordingly.

Typically, results of a pilot test are used only to modify the actual test. To ensure consistency, data gathered during the pilot (survey answers, time taken to complete tasks, and so on) should not be incorporated into the final results of the research.

De-bias, de-stress, de-tect

Scientific research often requires extensive vetting of researchers — the people conducting the research — prior to their participation in a project. The latest trend in the UX world is to get everyone involved with the research. As a researcher, nothing excites me more. That said, it is extremely important to acknowledge that a study’s overall “success” tends to drop as the researcher’s inexperience and the number of open (versus hidden) observers grow.

For instance, usability testing (arguably the most common research method in the UX world) can be extremely stressful for participants [6]. Aside from being asked to “perform”, users are sometimes put in unnatural conditions, which can be very nerve-wracking. This, in turn, could hinder performance and risk invalidating our findings.

Another thing that affects performance is the fact that participants change their behaviour when they know they’re being observed, otherwise known as the Hawthorne effect. Worse still, this effect is only exacerbated as the number of observers increases. So while it’s definitely good to get more people involved and invested in research, there are a few precautions we should take in order to minimize these potential negative effects.

  1. First, whenever we’ve got a group of people involved in the research process, we should always ensure the facilitator of a research session has some experience and training so that they’re not unknowingly influencing participants. Keep an eye out for leading questions and analyze the results accordingly.
  2. Second, either keep the observers hidden or keep their number to a minimum. A researcher’s main job is to keep the data as pure as possible (objectivity, remember?), and a stressed participant does not provide reliable data.
  3. Third, remind users that we had nothing to do with the design, so that they aren’t hesitant to give negative feedback.
  4. Fourth, always remind the user that we’re testing the product and not them. This is (hopefully) old news, but users need to be reminded of it constantly.
  5. Fifth, and finally, always keep an eye out (or an ear, if the session is remote) for any sign of stress. If the participant starts to appear stressed, immediately change the topic or consider ending the session. The key here is to note the difference between a stressful or frustrating design interaction and a stressful research session. The former provides valuable insight while the latter produces unreliable data.

Repurposing the scientific method

In summary, I suggest taking a leaf out of the scientific researchers’ book:

  1. Plan your study out and read the sources to get accurate information.
  2. Choose a number of participants before testing, and stick to it regardless of the first trends that appear.
  3. Be alert! Watch out for bias when conducting research.
  4. Test your tests.
  5. Avoid biases, stress, and leading questions.

Most importantly, don’t shy away from rigor in the research process; it’s the only thing that can lead to truly dependable results!

References

  1. Fin, P. (2006). Bias and Blinding: Self-Fulfilling Prophecies and Intentional Ignorance.
  2. Nielsen, J. (1993). Usability Engineering. Boston: AP Professional.
  3. Kraemer, H. C. & Thiemann, S. (1987). How Many Subjects? Statistical Power Analysis in Research. Thousand Oaks, CA: Sage Publications.
  4. Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379–383.
  5. van Teijlingen, E. & Hundley, V. (2001). The importance of pilot studies. Social Research Update, Issue 35.
  6. Schrier, J. (1992). Reducing Stress Associated with Participating in a Usability Test. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 36, 1210. DOI: 10.1177/154193129203601606.

The post Five Tips for Conducting Scientific Research in the UX World appeared first on UX Booth.


July 25 2013

15:15

Interviewing the Interviewer, Part 2: A Chat with Maish Nichani

Earlier this week we shared part one of Steve Portigal and Maish Nichani’s conversation concerning the practice of user research. In the second half of their conversation, Steve asks Maish how he and his firm structure their approach to design research, as well as how Maish thinks research might be facilitated in the future.


Hey, Maish! What are some internal initiatives you’ve used to develop interviewing skills within your own organization?
PebbleRoad is a small team of “T-shaped” researchers, so when it comes to collaborating on a project we work to ensure that the “horizontal parts” of our T’s overlap. For example, everyone needs to know the process and methodology of research interviewing.
We used to think we could rely on our people to carry the knowledge of our research process, but we were wrong. We kept finding interesting stuff we did many years ago that nobody knew about! To establish common ground, we created processes and tools to guide our practice, including:
  • A toolkit, comprised of guides, templates and forms that frame the practice;
  • Artifacts, which are physical things that give quick access to our shared knowledge;
  • Resources, such as collections of books and articles for new knowledge; and
  • Rituals, which are regular activities to keep the ideas flowing such as sharing sessions called Jolt Fridays!
Interesting! What’s in the toolkit, then?
The research toolkit consists of:

The toolkit is kept in our Google Drive and can be updated by everyone on the team. It helps establish a common ground upon which further conversations can take place. For example, we can quickly start discussing the research need and plan activities to meet that need.

I’m assuming the books and articles are physically present in your space?
Yes, we have a nice collection of 300+ books in our library. We read a lot! Current favourites include Service Design – From Insight to Implementation by Andy Polaine, Lavrans Løvlie & Ben Reason, Thinking Fast and Slow by Daniel Kahneman, and Designing the Search Experience by Tony Russell-Rose and Tyler Tate. Our entire catalog is on LibraryThing!
Could you give me an example of an artifact? Where does something like that live?
In one of your webinars, Steve, you outlined the different types of interview questions one could ask and what they are used for. We found that list to be very useful when writing the interview planning guide, so we created an interview-types keychain!

This is an example of an artifact. When we want to use it, we open the ring and spread all the cards on the table. We then discuss which types of questions will be most useful for the interview. When we’re done, we put the cards back on the ring. In fact, for those who want to create their own version, download the PDF or InDesign file.

Okay, so, I’ve got to know: what happens on Jolt Fridays? It sounds dramatic!
Jolt sessions typically happen on a Friday morning and anyone on the team can volunteer to host one. The host arranges for breakfast, chooses a topic, and researches it beforehand. We’ve had Jolt sessions on topics like How to Listen, Digital Marketing, Gamification and, yes, Qualitative Data Analysis. We also have screenings, such as the documentary Art & Copy: Inside Advertising’s Creative Revolution. The big idea is to widen the organization’s collective thinking – to give it a jolt!
In the past you mentioned that you were developing an app to facilitate user research. What is the app? Who uses it and for what?
A few years ago we spent almost half a year developing a research data management app called Insightico. Sadly, we “archived” the project late last year.

The idea was to collect and store research data in chunks. Researchers could import audio and video recordings, PDF files, PPT templates, and other types of files into the app. These files could then be “chunked” up; for example, a video file could be broken up into individual segments and tagged. The big idea was that if we had chunked and tagged data from all sources, we’d be in a better position to find patterns. Also, since chunks exist independently, we could mix and mash up chunks from different projects for a session on crazy connections!
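
The chunk-and-tag model Maish describes maps naturally onto a simple data structure. Here’s a minimal sketch in Python; it’s a hypothetical reconstruction, since Insightico’s actual data model was never published:

    # Hypothetical sketch of the chunk-and-tag idea; all names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        project: str          # which study the chunk came from
        source: str           # e.g. "interview-03.mp4"
        start_s: float        # segment start within the recording
        end_s: float          # segment end
        note: str             # what the researcher observed
        tags: set = field(default_factory=set)

    def chunks_tagged(chunks, tag):
        """Pull matching chunks across *all* projects for pattern-finding."""
        return [c for c in chunks if tag in c.tags]

    library = [
        Chunk("job-market", "interview-01.mp4", 120.0, 185.5,
              "Jobseeker expects employers to adapt to her schedule",
              {"entitlement"}),
        Chunk("productivity", "interview-07.mp4", 40.0, 95.0,
              "Manager links flexible hours to output",
              {"entitlement", "productivity"}),
    ]
    print([c.project for c in chunks_tagged(library, "entitlement")])
    # -> ['job-market', 'productivity']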

The app was hosted in the cloud and expensive to maintain. We did this without any external investment. It was our pet project.

Very quickly we learned that ideation is an “all-at-once” activity. Researchers are more effective when we can “see” all findings at the same time, not linearly like a blog post, which is how we presented them in the application. We know of only two ways of doing that today: Post-It Notes on a wall and spreadsheets!

The app design team moved on with their projects and the development team disbanded, so we had no other option but to “archive it”. I say “archive,” not “kill,” because I still think the idea has merit. If we can get a good team in place we might just have a go at it again!

But it sounds like the technology had a limitation in that it didn’t support the “all-at-once” way of engaging with the data? Is that something you would design around if you came back to the project?
Yes. Even the pro research apps, like MaxQDA, are trying to add more visualization capabilities, but they’re not there yet. I think the problem is screen size. Unless we get something larger that many people can interact with simultaneously (multi-touch, multi-user), the physical wall with yellow stickies is going to win. Collaborative ideation needs something like what Jeff Han demoed at TED a few years ago!
You’ve explored creating very analog and very digital tools to support the research process. What do you think the tradeoffs are in the different approaches?
Digital is fantastic for capturing data and analytics, but when it comes to inference we need to take it out of the computer, as Jon Kolko advises.

My biggest peeve is that we should be able to reuse findings across projects. This is where digital can really help, but only if we’re strict about collecting findings the right way. Vijay Kumar’s recent book, 101 Design Methods, talks of a “User Observations Database” that students can refer to when doing background scans. This database contains results of observation studies across different projects. This is the kind of reuse I’m interested in, and it seems to be something for which digital is extremely well suited.

If you could wave a magic wand and create any kind of tool or artifact to support the research process, what would it be?
Research is really all about creating new knowledge, and the more people who have access to that knowledge, the better. Currently, our research findings and insights are all locked up in (what Karen McGrane calls) “blobs.” We need them, instead, to be “chunked” (as Sara Wachter-Boettcher says) so that they can travel more freely and be mixed and mashed up to create, again, new knowledge. I don’t know of any existing project or initiative, but I was thinking about using a schema (like what is already available on schema.org) for research findings. That way anyone writing up research findings could use the same markup, and then search engines and specialist apps could read and move those findings more efficiently.
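
To show the shape of that idea, here’s a hypothetical sketch in Python that emits JSON-LD. Note that “ResearchFinding” is not a real schema.org type; it and its properties are invented to illustrate the kind of shared vocabulary Maish proposes:

    # "ResearchFinding" is NOT an existing schema.org type; it and its
    # properties are invented purely for illustration.
    import json

    finding = {
        "@context": "https://schema.org",
        "@type": "ResearchFinding",
        "name": "Young jobseekers feel entitled to flexible work",
        "method": "Contextual interviews",
        "project": "Singapore job market study",
        "dateCreated": "2013-07",
    }
    print(json.dumps(finding, indent=2))
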
A while back, I asked my team to avoid actively discussing examples from a previous project while synthesizing a current one, because it felt awkward. While our clients hire us for the benefit of our expertise, is prior “data” off limits? Every time I hear about sharing across projects, I think about potential legal or ethical issues. What do you think is our obligation to keep our data and our insights from a project to just that project?
The main objective of research is to point to a design direction or decision. The multitude of these decisions leads to a design solution: in our case, websites, intranets, or apps. If we keep this frame, then the criss-crossing of findings across projects becomes a necessity. As an example, we recently did a project on the job market in Singapore. One of the core findings was a deep sense of “entitlement” among young Singaporean jobseekers. The reasons for this attitude are “macro” in nature—recent economic success, changing culture, political freedom, etc.

Now, our latest project centers on workforce productivity. It would be a disservice to ignore a possible causal link between the entitlement mindset and a productive workforce. This is the type of reuse I’m referring to: not identifiable data, but broad strokes of understanding.


And that’s a wrap! Thanks, again, Steve and Maish; we’re always delighted to listen in.

Considering that the interview ended on such an ethical conundrum, I’m curious what readers think: is sharing research data a good thing or a bad thing? What about knowledge, the flashes of understanding we get as researchers? Finally, if anyone knows about a schema for structuring research findings, please share it with Maish in the comments below.


The post Interviewing the Interviewer, Part 2: A Chat with Maish Nichani appeared first on UX Booth.

July 23 2013

13:30

Interviewing the Interviewer, Part 1: A Chat with Steve Portigal

Whether you’re new to the practice or a user research veteran, there’s always something to learn. So when researchers Maish Nichani and Steve Portigal got together to talk, we were delighted to listen in. In the following two-part series, Maish and Steve take turns discussing some of the oft-overlooked aspects of the craft.

Over the course of his career, Steve Portigal has interviewed hundreds of people – families eating breakfast, hotel maintenance staff, architects, radiologists, home-automation enthusiasts, credit-default swap traders, even rock musicians. He’s also the founder of Portigal Consulting, where his work has informed the development of dozens of products and services. In his new book, “Interviewing Users: How to Uncover Compelling Insights” (recently published by our friends at Rosenfeld Media), Steve sheds light on his seemingly simple but rigorous practice.

Maish Nichani is a UX practitioner and principal at PebbleRoad, an enterprise UX consultancy. He and Steve are friends. Prompted by the release of Steve’s book, the two of them got together to really discuss aspects of their work. They had a whole book to go on, after all!

Included in the first half of the transcript are the differences between interviewing and day-to-day conversation as well as what Steve describes as the “tipping point” that occurs during research. Later this week, we’ll present the second half – a sort-of reverse interview – in which Steve asks Maish what he thinks about the current and future state of the profession. And if that wasn’t enough reason to check back in, we’re also running a contest, giving away three copies of Steve’s book. Details below!


Thanks, Steve, for taking the time to chat! What is the hardest part, do you think, for newcomers to grasp when it comes to interviewing users?
I don’t think people really grasp that there’s a big difference between talking to people – something we do every day – and leading an interview. Some people decide to treat interviews exactly like conversations, whereas others decide to act like what they think “real interviewers” do (e.g., a list of questions read from a sheet of paper that never turns into an interaction). Both groups are missing out. Developing an understanding of the ways that interviewing inherits some aspects of normal conversation and the ways in which it differs is what separates newbies from those with a bit of skill.
What is an appropriate response to give clients who insist on specifying aspects of your research methodology?
Whenever a client approaches me and has already specified the approach we should take with their study, that’s usually time for a conversation. Sometimes teams create a research plan as a stake in the ground when what they actually want is feedback and a recommended approach. Sometimes, though, their plan is a good one, and we might just suggest one or two changes to see if they are amenable. I can’t count the number of times I’ve received a detailed request, exclaimed “what?!” and then had a really excellent conversation to better understand the reasons behind it. Obviously, no one should take a project where they don’t believe the method is going to produce results. An examination of a prescribed approach is one of the first tests of (the potential for) good collaboration.
A common stakeholder complaint regarding user interviews is that they take too much time. How do you respond to clients who insist on lean research?
It’s a red flag when someone approaches me with schedule concerns. Whether it’s their version of a lean process or not, I want to be sure that it’s feasible to address their issues with the given resources. Otherwise, it’s a project that’s going to fail – and no one wants to take that on!
I provide a “typical,” phased research schedule in the book:

As well as a version with highly compressed phases:

My job is to help clients be mindful of the tradeoffs they’re making as they build a project schedule. The more time we spend, the more complex issues we can explore and the more certainty we will have about our conclusions. It isn’t always necessary to reach “the ultimate depths of complexity” with “the ultimate heights of certitude,” though. Clients should adjust the schedule while being aware of the tradeoffs.
In your book, you suggest interviewers use “transitional rituals.” What are these rituals and why are they important?
In the same way that interviews are not the same as everyday conversations, the time we spend with research participants is separate from the time we spend doing our “regular jobby stuff.” Transition rituals help interviewers switch contexts, supporting a more objective interview. For me, this sometimes means assembling the materials and equipment, checking that I have the documents, etc. That’s sufficient. Someone else might want to remind themselves that what they are about to do is focus on the participant. That also has the benefit of reminding them to let go of the stuff-at-the-office – the report they have to give, the meetings they’re missing, etc.
You go on to mention a certain “tipping point” that happens during interviews, where the interviewee shifts from giving short answers to telling stories. Can you shed more light on that?
Almost all interviews (if done well) get to a point at which the interviewer receives a great deal of information without feeling as though they’re “pulling” it out. For some interviewees, this happens in 30 seconds; for others it might take 30 minutes, 60 minutes, or more. Ultimately, it’s an unpredictable element. It doesn’t always happen, but oftentimes when running an interview I have the realization: “Oh, now we’re there!”
Are transcripts of interviews necessary? Do memos or notes suffice in some situations?
Notes taken during or after an interview are filled with inaccuracies. It’s just beyond human capacity to fully capture everything. You need an audio or video record. Whether you later transcribe those (my preference) or just watch them again is up to you, but notes are not the same as the definitive recording of the interview.
How do you identify insights when going through interview data? In other words, what makes an insight an insight?
Insights come from successive refinement. I like to have conversations with my team throughout the research process about what we’re hearing. That way, when we’re actually going through the data, it’s not the first time we’ve reflected on what is interesting. Later I go through data with two filters on: the first is looking for specific things that I’ve already identified as areas to understand; the second is looking for things that strike me as interesting. But going through data is just about gathering individual data points; it’s when you put them all together into something new (i.e., synthesis) that you start to be able to report an insight. As far as defining the term, ugh; I’ll let someone else worry about it!
Last question! What are some tips for design teams to spread the use of research findings inside their organization?
In short, it’s best to look for opportunities to share findings throughout the process [Ed: notice a pattern?], not just when you’ve got “findings.” I cover this in more detail in my presentation “Championing Contextual Research in your Organization.”


That’s all, folks! Thanks again, Steve and Maish, for sharing your knowledge with us. Here’s a summary of Steve’s points:

  • Leading an interview is very different from everyday conversation. This subtle difference makes all the difference.
  • Methods-first briefs (in which a client prescribes a process) provide opportunities for researchers to meet clients and understand their approach.
  • Research can’t be rushed. Time is commensurate with outcome.
  • Transitional rituals provide time to remove our own hat and wear our participant’s.
  • Tipping points indicate states of flow during an interview, a natural outpouring of information.
  • Always record interviews when you can. Don’t depend on memory or scribbled notes.
  • Insights come from two points of view: what’s specified as part of the research and what’s personally interesting!

In Part 2, later this week, we’ll share the “reverse interview” in which Steve Portigal asks Maish how he and his team work to hone their skills over time, as well as how research might be stored in the future.

As for the book giveaway, longtime readers know the drill. To enter, simply follow @uxbooth on Twitter and leave a comment on this post answering the question: What’s the most surprising thing you’ve learned while conducting user research? Be sure to include your Twitter handle in your comment and to leave it before this Thursday at midnight PST. We’ll contact winners over Twitter. Good luck!


The post Interviewing the Interviewer, Part 1: A Chat with Steve Portigal appeared first on UX Booth.

July 02 2013

13:30

The Art of Guerilla Usability Testing

Guerrilla usability testing is a powerful technique. Designer Martin Belam describes it as “the art of pouncing on lone people in cafes and public spaces, [then] quickly filming them whilst they use a website for a couple of minutes.” Let’s skip the pouncing part and instead focus on its subtleties, including how to obtain feedback and share it with our team.

I recently worked on a quickstart project in which my team was asked to build a responsive website in a short amount of time. We were given very little time to code (let alone conduct research), yet by employing guerilla usability testing along the way we collected feedback on the brand position. Eventually, we aligned our designs with both customer expectations and business goals.

Once a week throughout the project, we tested different kinds of prototypes to bring the business’s ideas to life. For example, mid-development, we sketched a mobile version of the site on index cards and did a quick assessment. This revealed navigational problems (which guided us to rethink a key point in the customer journey) and even ended up shaping a bit of the brand’s media material. What’s more, guerilla usability testing opened our stakeholders’ eyes so that they challenged their own innate assumptions about “the user.”

We iterated through our design ideas using lo-fi techniques like paper prototyping. Sketch by Chris Cheshire.

The bottom line? Guerilla usability testing presented itself as an easy-to-perform technique for refining the user experience. It helped us validate (and invalidate) critical assumptions cheaply and quickly.

Breaking it down

It’s hard to see the magic that guerrilla usability testing affords and not want in on the action, right? Here are some basic questions to consider before getting started:

  1. What shall we test?
  2. Where will we test?
  3. With whom will we test? and, of course,
  4. How will we test?

What shall we test?

One of the best parts about this kind of testing is that it can be done with almost anything, from concepts drawn on the back of napkins to fully functioning prototypes. Steve Krug recommends testing things earlier than we think we should, and I agree – get out of the building as soon as possible.

Test what the product could be so as to shape what the product should be. Even loosely defined UI sketches can be a great way to evaluate a future product. In fact, recent research shows that lower-fidelity prototypes can be more valuable for evaluating both high- and low-level user interactions.

Where do we test?

Where we conduct tests affects how we perform and document our work. For instance, if we’re testing a new mobile app for a retail chain, we might go to the store itself and walk the aisles; if we’re working on “general” office software, we might test it with coworkers in a different part of the office; and so on. The point is: let context drive the work.

With whom do we test?

When designing for the mass market, it’s easy enough to ask friendly looking strangers if they have a couple minutes to spare. Public spaces and shopping centers present some of the best places to do this on account of the sheer amount of foot traffic they receive (as well as the relaxed nature of the environment). With more specific user sets, however, it’s useful to target subjects based on their context (see above): a mixture of location and behavior.

Coffeeshops are great because you’ll often find test subjects from varying cultural backgrounds and different age ranges.

How do we test?

Testing is fairly straightforward: have participants think aloud as they perform tasks. Use the think-aloud protocol to test overall product comprehension rather than basic task completion. The key is to watch customers fiddle with a product and silently evaluate its usability. As Sarah Harrison explains, “Observing users is like flossing – people know they’re supposed to do it every day, but they don’t. So just do it. It’s not a big deal.”

Always start with open-ended, non-leading questions like:

  1. What do you make of this?
  2. What would you do here?
  3. How would you do [that]?

By answering these kinds of questions, participants tell a loose story in which they explain how they perceive a product. Along the way, we can generate ideas for how to improve things in the next iteration.

Employing the technique

Guerrilla usability testing is very much about adapting to the situation. That said, here are some helpful hints that I find consistently work in different international contexts:

  1. Beware the implicit bias. While coffeeshops are a great place to find test participants, focusing on people who frequent them introduces bias into our work. Simply acknowledging this implicit bias can help designers neutralise subjective experiences and account for individual differences. Remember to target different genders and be fair in who you approach.
  2. Explain what’s going on. Designers should be honest about who we are, why we’re testing, and what sort of feedback we’re looking to receive. Oftentimes, it’s best to do this with a release form, so that people are fully aware of the implications of their participation – for example, whether the recording will be used only internally or shared globally at conferences. These sorts of release forms, while tedious to carry around, help establish trust.
  3. Be ethical. Of course, being honest doesn’t mean we need to be fully transparent. Sometimes it’s useful to withhold certain information, like whether we worked on the product being tested. Alternatively, we might tell white lies about the purpose of a study. Just make sure to always tell the truth at the end of each session: trust is essential to successful collaboration.
  4. Make it casual. Lighten up tests by offering cups of coffee and/or meals in exchange for people’s time. Standing in line or ordering with a test subject is a great opportunity to ask questions about their lifestyle and get a better feel for how a test might go.
  5. Be participatory. Break down barriers by getting people involved: ask them to draw – on a napkin or piece of notebook paper, for example – what they might expect to see on the third or fourth screen of a UI flow. This doesn’t necessarily have to be a full-blown user interface, just a rough concept of what’s in their head. You never know what you’ll learn by fostering imagination.
  6. Don’t lead participants. When you sense confusion, ask people what’s going through their head. Open them up by prodding, saying “I don’t know. What do you think?” People in testing situations can often feel as though they are being tested (as opposed to the product itself), and can therefore start to apologise or shut down.
  7. Keep your eyes peeled. It’s important to capture passing thoughts for later analysis. Ethnographic observation is one good way to record what you were thinking during tests. Don’t get too hung up on formalised notes, though; most of the time your scribbles will work just fine. It’s about triggering memories, not presenting them at an academic conference.
  8. Capture the feedback. A key part of any testing process is capturing what we’ve learned. While the way in which we do this is definitely a personal choice, there are a few preferred tools available: apps like Silverback or UX Recorder collect screen activity along with a test subject’s facial reaction. Other researchers build their own mobile rigs. The important part to remember here is to use tools that fit your future sharing needs.
  9. Be a timecop. Remember, this isn’t a usability lab with paid users. Be mindful of how much time you spend with test subjects and always remind them that they can leave at any point during the test. The last thing you’d want is a grumpy user skewing your feedback.

Sharing the feedback

Conducting the tests is only half the battle, of course. To deliver compelling and relevant results from guerilla usability tests, designers need to decide strategically how to share findings with colleagues.

When analysing and preparing captured feedback, always consider your audience. The best feedback is the kind that understands stakeholders and kickstarts important conversations between them. For example, developers who need to evaluate bugs will have different needs than executives who want to prioritise new features.

Next, when delivering feedback, align it with your audience’s expectations. Try editing clips in iMovie or making slides in PowerPoint. Your co-workers are probably as busy as you, so an edited-down “trailer” that highlights relevant results, or a bullet-point summary along with powerful quotes, is always a good way to keep people listening.

Go guerilla

At the end of the day, guerilla usability testing comes in many forms. There’s no perfection to the art. It is unashamedly and unapologetically impromptu. Consider making up your own approach as you go: learn by doing.

Note: Thanks to Andrew for providing lots of feedback on early drafts of this article.

The post The Art of Guerilla Usability Testing appeared first on UX Booth.
