Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies

Online usability testing allows usability practitioners to get simultaneous feedback about their web and software applications from thousands of users. This book offers tried-and-tested methodologies for conducting online usability studies, giving practitioners the guidance they need to collect a wealth of data.

Bibliographic Details
Authors: Albert, Bill; Tullis, Tom; Tedesco, Donna
Format: E-Book / Book
Language: English
Published: Burlington, Mass.: Morgan Kaufmann / Elsevier Science & Technology, 2010
Edition: 1
ISBN: 9780123748928, 0123748925
Online access: Full text
Table of Contents:
  • Title Page -- Preface -- Table of Contents -- 1. Introduction -- 2. Planning the Study -- 3. Designing the Study -- 4. Piloting and Launching the Study -- 5. Data Preparation -- 6. Data Analysis and Presentation -- 7. Building Your Online Study Using Commercial Tools -- 8. Discount Approaches to Building an Online Study -- 9. Case Studies -- 10. Ten Keys to Success -- References -- Index
  • Front Cover -- Beyond the Usability Lab: Conducting Large-scale Online User Experience Studies -- Copyright -- Table of Contents -- Preface -- Acknowledgments -- Dedication -- Author Biographies -- Bill Albert -- Tom Tullis -- Donna Tedesco -- Chapter 1: Introduction -- 1.1 What Is an Online Usability Study? -- 1.2 Strengths and Limitations of Online Usability Testing -- 1.2.1 Comparing designs -- 1.2.2 Measuring the user experience -- 1.2.3 Finding the right participants -- 1.2.4 Focusing design improvements -- 1.2.5 Insight into users' real experience -- 1.2.6 Where are users going (click paths)? -- 1.2.7 What users are saying about their experiences -- 1.2.8 Saving time and money -- 1.2.9 Limitations of online usability testing -- 1.3 Combining Online Usability Studies with Other User Research Methods -- 1.3.1 Usability lab (or remote) testing -- 1.3.2 Expert review -- 1.3.3 Focus groups -- 1.3.4 Web traffic analysis -- 1.4 Organization of the Book -- Chapter 2: Planning the Study -- 2.1 Target Users -- What are the users' primary goals in using the product? -- 2.2 Type of Study -- Comprehensive usability or user experience study -- Usability or user experience baseline -- Competitive evaluation -- Live site vs prototype comparison -- Feature- or function-specific test -- 2.3 Between-Subjects versus Within-Subjects -- Order and sequence effects -- Task effects -- 2.4 Metrics -- 2.4.1 Task-based data -- Task success (also known as task completion) -- Task times (also known as "time on task") -- Efficiency -- Clickstream data -- Self-reported data -- Task comments or verbatims -- 2.4.2 End-of-session data -- Overall self-reported data -- Overall assessment tools -- Comments or verbatims -- 2.5 Budget and Timeline -- 2.5.1 Budget -- Technology costs -- Recruiting costs -- Participant incentives -- People time -- 2.5.2 Timeline -- Study A
  • Study B -- Study C -- Study D -- 2.6 Participant Recruiting -- 2.6.1 True intent intercept -- 2.6.2 Panels -- How they work -- Panelist incentives -- Integrated services -- Cost -- Quality of panelists -- 2.6.3 Direct and targeted recruiting -- Emailing -- Posting on the Internet -- Posting in paper ads -- Friends, family, and co-workers -- 2.7 Participant Sampling -- 2.7.1 Number of participants -- 2.7.2 Sampling techniques -- 2.8 Participant Incentives -- 2.9 Summary -- Chapter 3: Designing the Study -- 3.1 Introducing the Study -- 3.1.1 Purpose, sponsor information, motivation, and incentive -- 3.1.2 Time estimate -- 3.1.3 Technical requirements -- 3.1.4 Legal information and consent -- 3.1.5 Instructions -- 3.2 Screening Questions -- 3.2.1 Types of screening questions -- 3.2.2 Misrepresentation checks -- 3.2.3 Exit strategy -- 3.3 Starter Questions -- 3.3.1 Product, computer, and Web experience -- 3.3.2 Expectations -- 3.3.3 Reducing bias later in the study -- 3.4 Constructing Tasks -- 3.4.1 Making the task easy to understand -- 3.4.2 Writing tasks with task completion rates in mind -- 3.4.3 Anticipating various paths to an answer -- 3.4.4 Multiple-choice answers -- 3.4.5 Including a "none of the above" option -- 3.4.6 Including a "don't know" or "give up" option -- 3.4.7 Randomizing task order and answer choices -- 3.4.8 Using a subset of tasks -- 3.4.9 Self-generated and self-selected tasks -- 3.4.10 Self-reported task completion -- 3.5 Post-Task Questions and Metrics -- 3.5.1 Self-reported data -- 3.5.2 Open-ended responses -- 3.6 Post-session Questions and Metrics -- 3.6.1 Overall rating scales -- 3.6.2 Overall assessment tools -- 3.6.3 Open-ended questions -- 3.7 Demographic Questions and Wrap-Up -- 3.7.1 Demographic questions -- 3.7.2 Wrap-up -- 3.8 Special Topics -- 3.8.1 Progress indicators -- 3.8.2 Pausing -- 3.8.3 Speed traps
  • 3.9 Summary -- Chapter 4: Piloting and Launching the Study -- 4.1 Pilot Data -- 4.1.1 Technical checks -- 4.1.2 Usability checks -- 4.1.3 Full pilot with data checks -- 4.1.4 Preview of results -- 4.2 Timing the Launch -- 4.2.1 Finding the right time to launch -- 4.2.2 Singular and phased launches -- 4.3 Monitoring Results -- 4.4 Summary -- Chapter 5: Data Preparation -- 5.1 Downloading/Exporting Data -- 5.2 Data Quality Checks -- 5.3 Removing Participants -- 5.3.1 Incomplete data -- 5.3.2 Participants who misrepresent themselves -- 5.3.3 Mental cheaters -- Extremely poor or abnormal performance -- Speed traps -- Inconsistent responses -- 5.3.4 Tips on removing participants -- 5.4 Removing and Modifying Data for Individual Tasks -- 5.4.1 Outliers -- 5.4.2 Contradictory responses -- 5.4.3 Removing a task for all participants -- 5.4.4 Modifying task success -- 5.5 Recoding Data and Creating New Variables -- 5.5.1 Success data -- 5.5.2 Time variables -- 5.5.3 Self-reported variables -- 5.5.4 Clickstream data -- 5.6 Summary -- Chapter 6: Data Analysis and Presentation -- 6.1 Task Performance Data -- 6.1.1 Task success -- Binary task success -- Tasks with multiple correct answers -- Breakdown of task completion status -- Calculating task success rates -- Confidence intervals -- 6.1.2 Task times -- All task times or only successful times? -- Mean, median, or geometric mean? -- Confidence intervals -- 6.1.3 Efficiency -- Number of tasks correct per minute -- Percent task success per minute -- 6.2 Self-Reported Data -- 6.2.1 Rating scales -- Top-2-box scores -- 6.2.2 Open-ended questions, comments, and other verbatims -- Task-based comments -- Open-ended questions at the end of the study -- 6.2.3 Overall assessment tools -- 6.3 Clickstream Data -- 6.4 Correlations and Combinations -- 6.4.1 Correlations
  • 6.4.2 Combinations (or deriving an overall usability score) -- 6.5 Segmentation Analysis -- 6.5.1 Segmenting by participants -- 6.5.2 Segmenting by tasks -- 6.6 Identifying Usability Issues and Comparing Designs -- 6.6.1 Identifying usability issues -- Focusing on problem tasks -- Analysis of errors -- Analysis of comments -- 6.6.2 Comparing alternative designs -- Make sure comparisons are valid -- Use confidence intervals and t tests -- 6.7 Presenting the Results -- 6.7.1 Set the stage appropriately -- 6.7.2 Make the participants real -- 6.7.3 Organize your data logically -- 6.7.4 Tell a story -- 6.7.5 Use pictures -- 6.7.6 Simplify data graphs -- 6.7.7 Show confidence intervals -- 6.7.8 Make details available without boring your audience -- 6.7.9 Make the punch line(s) clear -- 6.7.10 Clarify the next steps -- 6.8 Summary -- Chapter 7: Building Your Online Study Using Commercial Tools -- 7.1 Loop11 -- 7.1.1 Creating a study -- 7.1.2 From the participant's perspective -- 7.1.3 Data analysis -- 7.1.4 Summary of strengths and limitations -- 7.2 RelevantView -- 7.2.1 Creating a study -- 7.2.2 From the participant's perspective -- 7.2.3 Data analysis -- 7.2.4 Summary of strengths and limitations -- 7.3 UserZoom -- 7.3.1 Creating a study -- 7.3.2 From the participant's perspective -- 7.3.3 Data analysis -- 7.3.4 Summary of strengths and limitations -- 7.4 WebEffective -- 7.4.1 Creating a study -- 7.4.2 From the participant's perspective -- 7.4.3 Data analysis -- 7.4.4 Summary of strengths and limitations -- 7.5 Checklist of Questions -- 7.6 Summary -- Chapter 8: Discount Approaches to Building an Online Study -- 8.1 The Basic Approach -- 8.2 Measuring Task Success -- 8.3 Ratings for Each Task -- 8.4 Conditional Logic for a Comment or Explanation -- 8.5 Task Timing -- 8.6 Randomizing Task Order -- 8.7 Positioning of Windows
  • 8.8 Random Assignment of Participants to Conditions -- 8.9 Pulling it all Together -- 8.10 Summary -- Chapter 9: Case Studies -- 9.1.1 Background -- 9.1.2 Access Task Survey tool -- 9.1.3 Methodology -- 9.1.4 Results -- 9.1.5 Discussion and conclusions -- 9.2 Using Self-Guided Usability Tests During the Redesign of IBM Lotus Notes -- 9.2.1 Methodology -- Tasks -- Participants -- 9.2.2 Results -- 9.2.3 Self-guided usability testing: Discussion and conclusions -- Limitations -- Lessons learned -- Acknowledgments -- Reference -- Biographies -- 9.3.1 Project background -- 9.3.2 Why a longitudinal study design -- 9.3.3 Task structure -- 9.3.4 Data gathering technology and process -- 9.3.5 Respondent recruiting and incentives -- 9.3.6 Lab study and online data gathering methodology verification -- 9.3.7 Data analysis -- 9.3.8 Results and discussion -- Nomenclature analysis findings -- Content, features, and functions -- Imagery analysis findings -- Interactive quality findings -- 9.3.9 Conclusion -- References -- Biographies -- 9.4 An Automated Study of the UCSF Web Site -- 9.4.1 Methodology -- 9.4.2 Results and discussion -- 9.4.3 Conclusions -- Biographies -- 9.5 Online Usability Testing of Tax Preparation Software -- 9.5.1 Methodology -- 9.5.2 Results and discussion -- 9.5.3 Advantages and challenges -- 9.5.4 Conclusions -- Biographies -- 9.6 Online Usability Testing: FamilySearch.org -- 9.6.1 Study goals -- 9.6.2 Why online usability testing? -- 9.6.3 Methodology -- Recruiting -- Compensation -- Study mechanics -- Tools -- Limitations -- 9.6.4 Metrics and data -- 9.6.5 Results and discussion -- 9.6.6 Data and user experience -- 9.6.7 Getting results heard and integrated -- 9.6.8 Conclusions -- 9.6.9 Lessons learned -- Biography -- 9.7 Using Online Usability Testing Early in Application Development: Building Usability in From the Start
  • 9.7.1 Project background