What is Usability Testing?
USABILITY TESTING measures how easy to use and user-friendly a software system is. Here, a small set of target end-users 'use' the software system to expose usability defects. This testing mainly focuses on the user's ease of using the application, flexibility in handling controls, and the ability of the system to meet its objectives. It is also called User Experience (UX) Testing.
Usability testing is more concerned with the design intuitiveness of the product and is tested with users who have no prior exposure to it. Examples of the types of software commonly used for usability testing include Silverback by Clearleft (local testing, Mac), Morae by TechSmith (local testing, Windows), UserVue by TechSmith (remote testing), and various hosted remote, unmoderated testing services.
This testing is recommended during the initial design phase of the SDLC, as it gives more visibility into the expectations of the users.
In this tutorial, you will learn-
Why do Usability Testing
Aesthetics and design are important: how a product looks often shapes a user's first judgment of how well it works.
There are many software applications and websites which fail miserably once launched because users are left struggling with questions and issues like the following -
- Where do I click next?
- Which page needs to be navigated?
- Which Icon or Jargon represents what?
- Error messages are not consistent or effectively displayed
- Session time is not sufficient.
In Software Engineering, Usability Testing identifies usability errors in the system early in the development cycle and can save a product from failure.
Example Usability Testing Test Cases
The goal of this testing is to satisfy users and it mainly concentrates on the following parameters of a system:
The effectiveness of the system
- Is the system easy to learn?
- Is the system useful and does it add value to the target audience?
- Are the content, colors, icons, and images used aesthetically pleasing?
Efficiency
- Little navigation should be required to reach the desired screen or webpage, and scrollbars should be used infrequently.
- Uniformity in the format of screen/pages in your application/website.
- Option to search within your software application or website.
Accuracy
- No outdated or incorrect data like contact information/address should be present.
- No broken links should be present.
User Friendliness
- Controls used should be self-explanatory and must not require training to operate
- Help should be provided for the users to understand the application/website

Alignment with the above goals helps in effective usability testing.
How to do Usability Testing: Complete Process
The usability testing process consists of the following phases -
Planning: During this phase, the goals of the usability test are determined. Having volunteers sit in front of your application and recording their actions is not a goal in itself. You need to determine the critical functionalities and objectives of the system and assign your testers tasks that exercise these critical functionalities. During this phase, the usability testing method, the number and demographics of usability testers, and the test report formats are also determined.
Recruiting: During this phase, you recruit the desired number of testers as per your usability test plan. Finding testers who match your demographic (age, sex, etc.) and professional (education, job, etc.) profile can take time.
Usability Testing: During this phase, usability tests are actually executed.
Data Analysis: Data from usability tests is thoroughly analyzed to derive meaningful inferences and give actionable recommendations to improve the overall usability of your product.
Reporting: Findings of the usability test are shared with all concerned stakeholders, which can include designers, developers, clients, and the CEO.
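As an illustration of the Data Analysis phase above, here is a minimal sketch that turns raw per-tester results into a task success rate and mean time on task. The session records, task name, and record format are hypothetical, not from any particular tool:

```python
# Minimal sketch of the data-analysis phase: summarizing per-task results
# from usability test sessions. The data below is made up for illustration.
from statistics import mean

# Each record: (tester_id, task_id, completed, seconds_taken)
sessions = [
    ("T1", "checkout", True, 95),
    ("T2", "checkout", True, 120),
    ("T3", "checkout", False, 240),
    ("T4", "checkout", True, 80),
    ("T5", "checkout", False, 300),
]

def summarize(records, task_id):
    rows = [r for r in records if r[1] == task_id]
    completed = [r for r in rows if r[2]]
    return {
        "task": task_id,
        "success_rate": len(completed) / len(rows),
        "mean_time_on_success": mean(r[3] for r in completed),
    }

report = summarize(sessions, "checkout")
print(report)  # 3 of 5 testers succeeded, taking about 98 s on average
```

Metrics like these feed directly into the Reporting phase, where they back up the qualitative observations with numbers.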
Methods of Usability Testing: 2 Techniques
There are two methods available to do usability testing -
- Laboratory Usability Testing
- Remote Usability Testing
Laboratory Usability Testing: This testing is conducted in a separate lab room in the presence of observers. The testers are assigned tasks to execute. The role of the observer is to monitor the behavior of the testers and report the outcome of the testing. The observer remains silent during the course of testing. In this testing, both observers and testers are present in the same physical location.
Remote Usability Testing: Under this testing, observers and testers are remotely located. Testers access the System Under Test remotely and perform the assigned tasks. The tester's voice, screen activity, and facial expressions are recorded by automated software. Observers analyze this data and report the findings of the test. An example of such software - http://silverbackapp.com/
How many users do you need?
Research (Virzi, 1992 and Nielsen & Landauer, 1993) indicates that 5 users are enough to uncover 80% of usability problems. Some researchers suggest other numbers.
The truth is, the actual number of users required depends on the complexity of the given application and your usability goals. An increase in usability participants results in increased cost, planning, participant management, and data analysis effort.
But as a general guideline, if you are on a small budget and interested in DIY usability testing, 5 is a good number to start with. If budget is not a constraint, it is best to consult experienced professionals to determine the number of users.
UX Testing Checklist
The primary goal of this testing is to find crucial usability problems before the product is launched. The following things have to be considered to make the testing a success:
- Start the UX testing during the early stage of design and development
- It's a good practice to conduct usability testing on your competitor's product before you begin development. This will help you determine usability standards for your target audience
- Select the appropriate users to test the system (can be expert users, non-expert users, or a 50-50 mix of expert and non-expert users)
- Use a bandwidth shaper. For instance, if your target audience has poor network connectivity, limit the network bandwidth to, say, 56 Kbps for your usability testers.
- Testers need to concentrate on critical & frequently used functionalities of the system.
- Assign a single observer to each tester. This helps the observer accurately note the tester's behavior. If an observer is assigned to multiple testers, results may be compromised
- Educate designers and developers that the outcomes of this testing are not a sign of failure but a sign of improvement
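The bandwidth-shaping tip in the checklist above can be implemented on a Linux test machine with the `tc` traffic-control utility. A minimal sketch, assuming the network interface is named eth0 (verify the name on your machine first) and that you have root access:

```shell
# Throttle the test machine's interface to roughly 56 Kbps using a
# token bucket filter (tbf). Requires root; eth0 is an assumption.
sudo tc qdisc add dev eth0 root tbf rate 56kbit burst 16kbit latency 400ms

# Remove the limit after the test session:
sudo tc qdisc del dev eth0 root
```

Dedicated bandwidth-shaping tools and browser developer-tools network throttling are alternatives when you cannot change the OS-level configuration.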
Usability Testing Advantages
As with anything in life, usability testing has its merits and de-merits. Let's look at them
- It helps uncover usability issues before the product is marketed.
- It helps improve end-user satisfaction
- It makes your system highly effective and efficient
- It helps gather true feedback from your target audience who actually use your system during a usability test. You do not need to rely on 'opinions' from random people.
Usability Testing Disadvantages
- Cost is a major consideration in usability testing. It takes lots of resources to set up a Usability Test Lab. Recruiting and management of usability testers can also be expensive
However, these costs pay for themselves in the form of higher customer satisfaction, retention, and repeat business. Usability testing is therefore highly recommended.
Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] It is more concerned with the design intuitiveness of the product and tested with users who have no prior exposure to it. Such testing is paramount to the success of an end product as a fully functioning app that creates confusion amongst its users will not last for long.[2] This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.
Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are food, consumer products, web sites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.
What it is not[edit]
Simply gathering opinions on an object or document is market research or qualitative research rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product.[3] However, often both qualitative and usability testing are used in combination, to better understand users' motivations/perceptions, in addition to their actions.
Rather than showing users a rough draft and asking, 'Do you understand this?', usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the parts and materials, they are asked to put the toy together. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.
Methods[edit]
Setting up a usability test involves carefully creating a scenario, or realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes (dynamic verification). Several other test instruments such as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on the product being tested (static verification). For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and ask him or her to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can see problem areas, and what people like. Techniques popularly used to gather data during a usability test include think aloud protocol, co-discovery learning and eye tracking.
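As a concrete example of the pre- and post-test questionnaires mentioned above, one widely used standardized instrument (not named in this article) is the System Usability Scale (SUS). The sketch below implements its standard scoring rule; the example response sets are made up:

```python
# Scoring the System Usability Scale (SUS), a common post-test questionnaire.
# Responses are 1-5 Likert ratings for 10 statements; odd-numbered items are
# positively worded, even-numbered items negatively worded.
def sus_score(responses):
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items contribute (rating - 1), even items (5 - rating)
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible score: 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A single SUS score says little on its own; it is most useful for comparing design iterations or benchmarking against competing products.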
Hallway testing[edit]
Hallway testing, also known as guerrilla usability, is a quick and cheap method of usability testing in which people—e.g., those passing by in the hallway—are asked to try using the product or service. This can help designers identify 'brick walls', problems so serious that users simply cannot advance, in the early stages of a new design. Anyone but project designers and engineers can be used (they tend to act as 'expert reviewers' because they are too close to the project).
Remote usability testing[edit]
In a scenario where usability evaluators, developers and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. The former involves real time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately.[4] Numerous tools are available to address the needs of both these approaches.
Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx. WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test.[5] However, synchronous remote testing may lack the immediacy and sense of 'presence' desired to support a collaborative testing process. Moreover, managing inter-personal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants' in their native environment.[6] One of the newer methods developed for conducting a synchronous remote usability test is by using virtual worlds.[7]
Asynchronous methodologies include automatic collection of user's click streams, user logs of critical incidents that occur while interacting with the application and subjective feedback on the interface by users.[8] Similar to an in-lab study, an asynchronous remote usability test is task-based and the platform allows researchers to capture clicks and task times. Hence, for many large companies, this allows researchers to better understand visitors' intents when visiting a website or mobile site. Additionally, this style of user testing also provides an opportunity to segment feedback by demographic, attitudinal and behavioral type. The tests are carried out in the user's own environment (rather than labs) helping further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas quickly and with lower organizational overheads. In recent years, conducting usability testing asynchronously has also become prevalent and allows testers to provide feedback in their free time and from the comfort of their own home.
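The click-stream collection described above can be reduced to per-task timings with a small script. This is a sketch under the simplifying assumption that the log is a flat list of (user, task, timestamp) tuples; real tools emit richer formats:

```python
# Sketch: deriving task times from an asynchronous test's click-stream log.
# The log format and data below are illustrative assumptions.
clicks = [
    ("u1", "search", 0.0), ("u1", "search", 4.2), ("u1", "search", 9.8),
    ("u2", "search", 0.0), ("u2", "search", 15.5),
]

def task_times(log):
    """Return elapsed time per (user, task), first click to last click."""
    spans = {}
    for user, task, ts in log:
        key = (user, task)
        first, last = spans.get(key, (ts, ts))
        spans[key] = (min(first, ts), max(last, ts))
    return {k: last - first for k, (first, last) in spans.items()}

print(task_times(clicks))
# {('u1', 'search'): 9.8, ('u2', 'search'): 15.5}
```

Aggregating such timings across many remote participants is what lets asynchronous studies scale without an observer present for each session.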
Expert review[edit]
Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product.
A heuristic evaluation or usability audit is an evaluation of an interface by one or more human factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on usability principles, such as the 10 usability heuristics originally defined by Jakob Nielsen in 1994.[9]
Nielsen's usability heuristics, which have continued to evolve in response to user research and new devices, include:
- Visibility of system status
- Match between system and the real world
- User control and freedom
- Consistency and standards
- Error prevention
- Recognition rather than recall
- Flexibility and efficiency of use
- Aesthetic and minimalist design
- Help users recognize, diagnose, and recover from errors
- Help and documentation
Automated expert review[edit]
Similar to expert reviews, automated expert reviews provide usability testing but through the use of programs given rules for good design and heuristics. Though an automated review might not provide as much detail and insight as reviews from people, they can be finished more quickly and consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.
A/B testing[edit]
In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design), which aims to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might impact a user's behavior. Version A might be the one currently used, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements like copy text, layouts, images and colors.
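Deciding whether version B really outperforms version A usually requires a significance test on the observed conversion rates. A sketch using a standard two-proportion z-test; the visitor and conversion counts below are made up:

```python
# Two-proportion z-test for an A/B experiment (one-sided: is B better than A?).
# The traffic figures here are invented for illustration.
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value: probability of a lift this large arising by chance
    return z, 1 - NormalDist().cdf(z)

z, p = ab_test(conv_a=200, n_a=10000, conv_b=260, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p suggests B's lift is real
```

In practice the sample size should be fixed before the experiment starts; peeking at the p-value repeatedly and stopping early inflates the false-positive rate.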
Multivariate testing or bucket testing is similar to A/B testing but tests more than two versions at the same time.
Number of test subjects[edit]
In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests—typically with only five test subjects each—at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. 'Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford.'[10]
The claim of 'Five users is enough' was later described by a mathematical model[11] which states the proportion of uncovered problems U as
U = 1 − (1 − p)^n
where p is the probability of one subject identifying a specific problem and n the number of subjects (or test sessions). This model approaches the number of real existing problems asymptotically as n grows.
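The Nielsen/Landauer model can be evaluated directly. Using p = 0.31, the average problem-discovery rate reported in their study (treat it as illustrative rather than universal):

```python
# Evaluating U = 1 - (1 - p)^n, the proportion of usability problems
# uncovered by n test subjects. p = 0.31 is the average discovery rate
# from Nielsen and Landauer's data; real projects vary widely.
p = 0.31
for n in (1, 3, 5, 10, 15):
    u = 1 - (1 - p) ** n
    print(f"{n:2d} users -> {u:.1%} of problems found")
```

With this p, five users uncover roughly 84% of problems, while each additional user past that point yields sharply diminishing returns, which is the asymptotic behavior the model describes.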
In later research Nielsen's claim has eagerly been questioned with both empirical evidence[12] and more advanced mathematical models.[13] Two key challenges to this assertion are:
- Since usability is related to the specific set of users, such a small sample size is unlikely to be representative of the total population so the data from such a small sample is more likely to reflect the sample group than the population they may represent
- Not every usability problem is equally easy to detect. Intractable problems decelerate the overall process, and under these circumstances the progress of the process is much shallower than predicted by the Nielsen/Landauer formula.[14]
It is worth noting that Nielsen does not advocate stopping after a single test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice per week during the entire development cycle, using three to five test subjects per round, and with the results delivered within 24 hours to the designers. The number of users actually tested over the course of the project can thus easily reach 50 to 100 people. Research shows that user testing conducted by organisations most commonly involves the recruitment of 5-10 participants[15].
In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers will recruit test subjects across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly.[16] Later on, as the design smooths out, users should be recruited from the target population.
When the method is applied to a sufficient number of people over the course of a project, the objections raised above become addressed: The sample size ceases to be small and usability problems that arise with only occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over. While it's true that the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.
Example[edit]
A 1982 Apple Computer manual for developers advised on usability testing:[17]
- 'Select the target audience. Begin your human interface design by identifying your target audience. Are you writing for businesspeople or children?'
- Determine how much target users know about Apple computers, and the subject matter of the software.
- Steps 1 and 2 permit designing the user interface to suit the target audience's needs. Tax-preparation software written for accountants might assume that its users know nothing about computers but are expert on the tax code, while such software written for consumers might assume that its users know nothing about taxes but are familiar with the basics of Apple computers.
Apple advised developers, 'You should begin testing as soon as possible, using drafted friends, relatives, and new employees':[17]
Our testing method is as follows. We set up a room with five to six computer systems. We schedule two to three groups of five to six users at a time to try out the systems (often without their knowing that it is the software rather than the system that we are testing). We have two of the designers in the room. Any fewer, and they miss a lot of what is going on. Any more and the users feel as though there is always someone breathing down their necks.
Designers must watch people use the program in person, because[17]
Ninety-five percent of the stumbling blocks are found by watching the body language of the users. Watch for squinting eyes, hunched shoulders, shaking heads, and deep, heart-felt sighs. When a user hits a snag, he will assume it is 'on account of he is not too bright': he will not report it; he will hide it ... Do not make assumptions about why a user became confused. Ask him. You will often be surprised to learn what the user thought the program was doing at the time he got lost.
Education[edit]
Usability testing has been a formal subject of academic instruction in different disciplines.[18]
See also[edit]
References[edit]
- ^Nielsen, J. (1994). Usability Engineering, Academic Press Inc, p. 165
- ^Mejs, Monika (2019-06-27). 'Usability Testing: the Key to Design Validation'. Mood Up team - software house. Retrieved 2019-09-11.
- ^Dennis G. Jerz (July 19, 2000). 'Usability Testing: What Is It?'. Jerz's Literacy Weblog. Retrieved June 29, 2016.
- ^Andreasen, Morten Sieker; Nielsen, Henrik Villemann; Schrøder, Simon Ormholt; Stage, Jan (2007). What happened to remote usability testing?. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '07. p. 1405. doi:10.1145/1240624.1240838. ISBN 9781595935939.
- ^Dabney Gough; Holly Phillips (2003-06-09). 'Remote Online Usability Testing: Why, How, and When to Use It'. Archived from the original on December 15, 2005.
- ^Dray, Susan; Siegel, David (March 2004). 'Remote possibilities?: international usability testing at a distance'. Interactions. 11 (2): 10–17. doi:10.1145/971258.971264.
- ^Chalil Madathil, Kapil; Joel S. Greenstein (May 2011). Synchronous remote usability testing: a new approach facilitated by virtual worlds. Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems. CHI '11. pp. 2225–2234. doi:10.1145/1978942.1979267. ISBN 9781450302289.
- ^Dray, Susan; Siegel, David (2004). 'Remote possibilities?'. Interactions. 11 (2): 10. doi:10.1145/971258.971264.
- ^'Heuristic Evaluation'. Usability First. Retrieved April 9, 2013.
- ^'Usability Testing with 5 Users (Jakob Nielsen's Alertbox)'. useit.com. 2000-03-13.; references Jakob Nielsen; Thomas K. Landauer (April 1993). 'A mathematical model of the finding of usability problems'. Proceedings of ACM INTERCHI'93 Conference (Amsterdam, The Netherlands, 24–29 April 1993).
- ^Virzi, R. A. (1992). 'Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough?'. Human Factors. 34 (4): 457–468. doi:10.1177/001872089203400407.
- ^'Testing web sites: five users is nowhere near enough - Semantic Scholar'. semanticscholar.org. 2001.
- ^Caulton, D. A. (2001). 'Relaxing the homogeneity assumption in usability testing'. Behaviour & Information Technology. 20 (1): 1–7. doi:10.1080/01449290010020648.
- ^Schmettow, Heterogeneity in the Usability Evaluation Process. In: M. England, D. & Beale, R. (ed.), Proceedings of the HCI 2008, British Computing Society, 2008, 1, 89-98
- ^'Results of the 2020 User Testing Industry Report'. www.userfountain.com. Retrieved 2020-06-04.
- ^Bruce Tognazzini. 'Maximizing Windows'.
- ^ a b c Meyers, Joe; Tognazzini, Bruce (1982). Apple IIe Design Guidelines (PDF). Apple Computer. pp. 11–13, 15.
- ^Breuch, Lee-Ann; Mark Zachry; Clay Spinuzzi (April 2001). 'Usability Instruction in Technical Communication Programs'. Journal of Business and Technical Communication. 15 (2): 223–240. doi:10.1177/105065190101500204.