Usability Tests and Heuristic Evaluations in Assessing Prototypes of Interface Designs

Usability Tests

In order for a product to be usable, it should possess the following characteristics: usefulness, efficiency, effectiveness, learnability, satisfaction, and accessibility. Usefulness is a measure of how well a product enables the user to accomplish a specific goal or task. In addition, usefulness refers to the degree of motivation a person has in deciding to use the product. Meanwhile, efficiency refers to the amount of time a user spends in accomplishing a certain goal or task using the product (Rubin & Chisnell, 2008). Effectiveness is described as the degree of consistency with which a product behaves as expected. It is also a measure of the degree of ease users experience in using the product. Learnability refers to the ability of the user to operate the system with a definite degree of competence after a certain amount of training. It also refers to the likelihood that users who have not used the system for a period of time will be able to relearn how to operate it (Rubin & Chisnell, 2008). Satisfaction includes a user's feelings, perceptions, and opinions about the system; this information is usually collected through written and oral means (Rubin & Chisnell, 2008). Finally, accessibility involves the capability of the system to allow persons with disabilities to perceive, understand, navigate, and interact with it (Kappel, 2003).

In performing usability tests, two basic principles should always be remembered: (1) a designer's perception of the system design is different from that of the target audience; and (2) it is better to conduct multiple tests with a few users than to run a single test with a large number of users (Silver, 2005). Usability tests usually involve either getting an audience evaluation of the system or having a usability design expert perform a heuristic evaluation of the program. One drawback of heuristic evaluation is that the expert reviewer may not share the same perceptions as the target users or may identify errors that target users do not consider problems. In contrast, an audience evaluation of the system is a credible representation of a real-world user's needs and perceptions of the system (Silver, 2005). In determining the ideal number of users for a usability test group, the following guidelines are recommended: (1) too few users yield inferior results, since the users may not be able to identify most of the problems; and (2) too many users increase the chance that most users will identify the same problems and reduce the chances that the users will identify less obvious errors (Silver, 2005).

There are four types of usability tests from which a developer may choose, depending on the situation: (1) exploratory; (2) assessment; (3) evaluation; and (4) comparison. Exploratory testing is usually performed in the early part of the design stage. It has two primary objectives: first, to verify that the functions selected for the system are useful and appropriate for the user; and second, to determine the degree to which the system design matches the user's mental model of the system. A mental model is described as a user's assumptions and expectations regarding how certain tasks are accomplished (Silver, 2005). One important feature of an exploratory test is the high degree of freedom a developer can attain in developing early designs of the system.
By using tools such as paper screens and system prototypes with limited functionality, the developer can collect important information and feedback from the users. This allows the developer to determine if the initial design matches user perceptions of the system. In addition, exploratory tests enable developers to detect serious flaws in the design before a more concrete plan of the system is created (Rubin & Chisnell, 2008). In an exploratory test, a moderator may provide the user with screenshots of the system and ask whether all the types of information or functions the user expected to find in the system are present and appropriate. In addition, the user may be asked if the information in the screens is easy to comprehend. Moreover, the user may be asked to identify which function he or she would select to perform a particular task, and in turn the moderator presents the user with the corresponding screen. This is done to identify any confusion over the functions presented in the system design (Silver, 2005). Results of the exploratory test can serve as the basis for changes in the functions or tasks to be used in the system. Furthermore, these results may also affect how specific elements are placed or named (Silver, 2005). This allows the developer to create a solid framework within which all the details of the system will fall into place (Rubin & Chisnell, 2008).

Assessment tests are the most common type of usability test. They are usually performed in the middle part of development or before the system is completed. Normally, this test is done when the features identified in the early stages of development and the navigation structure are in place. The purpose of an assessment test is to verify that the planned features have been incorporated correctly into the system (Silver, 2005). In an assessment test, a moderator asks the user to perform a specific series of tasks. At this stage the moderator does not assist the user in any way while the test is in progress. The moderator observes how the user performs each task and takes note of any issue or problem the user encounters (Silver, 2005). The goal is to observe and evaluate how well the concept has been implemented in the system (Rubin & Chisnell, 2008).

Evaluation tests are conducted after the system has been released and is operational. The objective is to identify any modifications to the design that would improve its level of usability (Silver, 2005). Usually, a standard or benchmark is utilized as a means of verifying that the system meets a specific set of standards prior to release (Rubin & Chisnell, 2008). In cases where no standard or benchmark is available, an evaluation test may be used to create a standard for future systems. A benchmark would therefore ensure that the level of usability inherent in the product is maintained as more functions are created and implemented in future versions of the system (Rubin & Chisnell, 2008).

Meanwhile, a comparison test is performed to compare multiple approaches to system design. This type of test may be conducted at any time during development (Silver, 2005). In the early stages, the developer may compare two contrasting approaches to system implementation with the aim of determining which is preferred by the target users. When conducted in the middle of development, a comparison test may be used to gauge the effectiveness of a specific element in the system.
On the other hand, a comparison test done at the final stage of system development can determine how the system measures up to a competitor's product (Rubin & Chisnell, 2008). In a comparison test, a moderator may present the user with a number of designs. The user is then asked to perform certain tasks using each design to determine which one is easiest for the user. Normally, each design will have its strong and weak points; a developer may therefore incorporate the strong points of each design into a single design (Silver, 2005).

Usability tests follow a specific set of procedures. Weber (2004) enumerated six crucial steps in the conduct of usability testing: (1) identify test objectives and methods; (2) create test materials; (3) identify and recruit participants; (4) set up the test environment; (5) perform the test; and (6) analyze the data and generate test results. The first step involves formulating the purpose and objectives of the usability test. In this stage the developer chooses the best type of usability test to perform based on the information that needs to be collected. Aside from the goals and objectives, a usability test should also take into consideration the following elements: participants, the test environment, and the personnel involved in the test (Silver, 2005). Participants are chosen based on set criteria, and the characteristics and number of participants are determined. Environmental factors in the test environment include the test venue, computer hardware, and software. Moreover, the roles of the facilitator or moderator and the participant should be laid out (Silver, 2005). The test plan should take into consideration the following data: (1) the steps the user took to accomplish a specific task; (2) the time spent performing a particular task; (3) the participant's perception of how easy or difficult a certain task is; (4) the participant's overall level of satisfaction with the system being tested; and (5) the number of times the user diverged from the most ideal way of performing a task (Silver, 2005).

After the test plan has been laid out, the materials needed for the test proper are made and printed out. Test materials include a screening questionnaire, consent form, task sheets, and post-test questionnaire or interview forms. A screening questionnaire may include questions regarding demographic data, computer and Internet use, and so on. Meanwhile, consent forms are used whenever user input is recorded through video, audio, or both. The consent form should clearly state how the information will be used and should show that the user has freely given permission based on the conditions stated (Silver, 2005). A task sheet contains the specific instructions or tasks the user is required to carry out during the course of the test (Silver, 2005). It is advisable to create a script to follow during the test, and each user should be asked to act out the same set of scenarios to ensure the reliability of the results. Usability tests performed in the early stages of system development usually focus on general problems related to icons, terminology, navigation, and organization, while usability testing performed at later stages usually focuses on specific user tasks (Weber, 2004). Post-test questionnaires are provided to test participants after they have performed the activities and scenarios specified in the task sheet.
Questions usually focus on users' overall perception of the system in terms of overall usability, the ease or difficulty of completing required tasks, best features, user comments, and suggestions for improvement (Silver, 2005).

The next step in usability testing is the selection of test participants. Participants should be individuals who best represent the target audience. One way of ensuring this is through the use of screening questionnaires: data on each potential participant are stored in a database, from which developers can select the final list based on specific parameters (Silver, 2005). On average, five test participants are enough. If test participants are sourced internally, the developer might consult with human resources to determine which personnel to invite, or might simply approach several personnel who might be interested in participating. Another source is the existing customer base. However, if test participants are to be sourced externally, a form of compensation should be considered (Weber, 2004).

After planning the test, preparing test materials, and sourcing participants, the next stage involves preparing the test venue. Depending on the parameters agreed upon, a simple setup is sufficient: the test venue may include a table or desk, two chairs, a computer, and Internet access if necessary. If a pen-and-paper test is preferred, computer hardware is left out (Weber, 2004). In some cases a video camera may be used to record the test or may be connected to an external monitor so that other observers can view the test without interrupting the participants. Developers may also opt to set up the test venue to simulate a real-life environment, or to perform the test using the same equipment that target users have access to (Weber, 2004).

The next step in conducting a usability test is the test proper itself. To ensure a successful test, Silver (2005) recommended the following guidelines: (1) participants should be at ease with the test environment; (2) facilitators should not “lead” the participants, keeping body language and facial expressions neutral; (3) a degree of flexibility should be maintained during the test proper; (4) facilitators should be calm and patient with the participants; and (5) facilitators should not jump immediately to conclusions. According to Weber (2004), the main functions of a facilitator are to observe, take notes, maintain a stress-free test environment, and encourage participants to voice their opinions about the system.

After the test proper, participants are given post-test questionnaires to complete, in which they give their opinions, comments, and suggestions regarding the system. Based on the results, the developer may create recommendations to further improve the system. Silver (2005) provided the following guidelines for making effective recommendations: (1) recommendations should be concise; (2) they should include identified problem areas as well as positive observations of the system; and (3) user comments must be arranged in a logical manner. In addition, the inclusion of an executive summary presenting an overview of important findings is recommended.

The following examples of usability tests involve two websites, Garden.com and TowerRecords.com. Both websites performed usability studies to determine their overall level of usability and to identify usability issues.
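The quantitative measures reported in studies like these are straightforward aggregates over recorded test sessions. The sketch below (in Python) illustrates how per-task averages of the kind cited in the Garden.com report might be computed; the session log format and the sample data are hypothetical, not taken from either study.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    """One participant's attempt at one task (hypothetical log format)."""
    seconds_on_task: float    # time spent to accomplish the task
    scroll_clicks: int        # number of mouse clicks on scroll bars
    nav_clicks: int           # number of navigational clicks
    pages_visited: list[str]  # every page loaded, in order

def summarize(sessions: list[Session]) -> dict[str, float]:
    """Reduce raw session logs to the averages a usability report cites."""
    return {
        "avg_seconds_on_task": mean(s.seconds_on_task for s in sessions),
        "avg_scroll_clicks": mean(s.scroll_clicks for s in sessions),
        "avg_nav_clicks": mean(s.nav_clicks for s in sessions),
        "avg_pages_visited": mean(len(s.pages_visited) for s in sessions),
        "avg_unique_pages": mean(len(set(s.pages_visited)) for s in sessions),
    }

# Two fabricated sessions for illustration:
logs = [
    Session(480.0, 35, 11, ["home", "search", "plants", "cart", "plants"]),
    Session(545.0, 40, 10, ["home", "plants", "cart", "checkout"]),
]
print(summarize(logs))
```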
Garden.com’s initial usability test revealed flaws in the website which became the basis for recommendations for the redesign of the site. A second usability test was then conducted to check for usability issues after the redesign had been performed. A total of six test users participated in the study. Four scenarios were created for the test and installed in a prototype system. Each scenario followed a script, and the test participants were informed that they would be working on a prototype, meaning that some options or functions were not available (Ergosoft Laboratories, 1998). The following were measured during the test: (1) time spent to accomplish each task; (2) scrolling clicks, or the number of mouse clicks on scroll bars; (3) navigational clicks; (4) total number of pages visited; and (5) number of unique pages visited. In addition, post-test questionnaires were used to assess ease of use, and the moderator’s input was also considered, including the moderator’s summary of observations, usability report card scores, and participants’ comments (Ergosoft Laboratories, 1998).

The results of the test were as follows: (1) users took an average of 512.5 seconds to accomplish a task; (2) users clicked on scroll bars an average of 37.45 times; (3) users clicked an average of 10.97 times while navigating through the system; (4) users visited an average of 12 pages to accomplish a task; and (5) users visited an average of 7.92 unique pages to accomplish a task. Overall, users gave the system a mean usability rating of 73.675 (Ergosoft Laboratories, 1998). Based on the usability report card, there were some improvements in comparison with the first test; however, none of the improvements were statistically significant. In summary, the redesigned system garnered positive responses from the test participants. Some issues regarding categorization exist, which affect ease of use among browse-dominant users. It was also observed that users prefer websites that behave like software applications, are capable of automatically updating information, and have a help function (Ergosoft Laboratories, 1998).

TowerRecords.com performed a usability test to evaluate the level of usability of the website for users located in Northern Europe. Secondary objectives included evaluating core website tasks and gauging customer perception of whether the website gives good value compared to local and foreign competitors. The test participants were six individuals from Denmark, and a Danish usability professional was engaged to act as test moderator. The test venue was a rented meeting room in Copenhagen, and each test was scheduled to run for one and a half hours (DialogDesign, 2001). The usability test was composed of three stages: (1) interview; (2) performing test tasks; and (3) debriefing. The interview stage consisted of signing agreement and consent forms, after which participants were interviewed about their expectations of the website before they were able to view it. After the interview, test participants were asked to complete specific tasks using the TowerRecords.com website (DialogDesign, 2001). One day prior to the test, the facilitator asked the test participants to think about music they would buy online; however, the facilitator did not disclose which website was involved in the test. In addition, the test participants were informed that they would be allowed to purchase a product worth up to $45 from the website as compensation.
Otherwise, they would be given a gift certificate of $35 as a token of appreciation (DialogDesign, 2001). The decision to provide this type of compensation was based on the assumption that participants’ attempts to complete a task would be more realistic if driven by genuine motivation. For the duration of the test, participants were encouraged to think aloud and to voice their comments or opinions regarding the website (DialogDesign, 2001). The final stage of the test involved asking participants to fill out questionnaires. Participants were asked to think aloud while answering the questionnaire in order to gather verbal comments about the website, and were then interviewed about their experience and overall impression of the website (DialogDesign, 2001).

The study’s positive observations (strong points), areas for improvement (weak points), and recommendations can be summarized as follows. Regarding positive observations: first, the users considered the checkout process smooth and convenient; second, the users appreciated the website’s capability to provide images of album covers and music samples while they browsed for music they wanted to purchase; third, the users noted that the website can identify mistyped artist or album names, make the necessary correction, and present the corrected information to the user (DialogDesign, 2001). Areas for improvement included the removal of empty links, correction of database entries, refinement of the search algorithm, and more support for first-time or inexperienced music store customers. From these results the following recommendations were made: (1) allocate more resources to ensuring that broken links and incorrect database entries are minimized, if not eliminated; (2) refine the search engine algorithms; and (3) have members of the project team conduct more research on consumer needs and behaviour (DialogDesign, 2001).

Heuristic Evaluations

Compared with other usability inspection techniques, heuristic evaluation, or expert evaluation, is viewed as the most widely used and popular method of determining the usability of a system (Hurtado, Ruiz & Torres, 2009). Developed in the 1990s, heuristic evaluation is based on a defined set of rules and guidelines, significantly fewer in number than previous system usability guidelines. A heuristic evaluation involves a minimum of two expert evaluators, with the problem-identification rate going up as the number of evaluators increases (Rosenbaum, 2008). Heuristic evaluation works effectively when system development schedules have little room for flexibility: compared with other usability evaluation methods, it does not require the creation of scripts or the recruitment of test participants, and it provides immediate feedback to developers. This enables developers to identify immediate and obvious usability issues, leaving more time to address complex and advanced issues later (Rosenbaum, 2008). A major weakness of heuristic evaluation is that the evaluators are surrogate users, that is, expert evaluators acting as users. Expert evaluators cannot fully represent a target user group, nor are they likely to encounter the same problems experienced by typical users of the system (Rosenbaum, 2008).
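The relationship between the number of evaluators and the problem-identification rate mentioned above is often modelled with the Nielsen-Landauer formula, a standard result in the usability literature that is not given in the sources cited here: the expected proportion of problems found by n independent evaluators is roughly 1 - (1 - λ)^n, where λ is the probability that a single evaluator detects any given problem. A minimal sketch, assuming the frequently quoted average of λ ≈ 0.31:

```python
def proportion_found(n: int, lam: float = 0.31) -> float:
    """Expected share of usability problems detected by n independent
    evaluators under the Nielsen-Landauer model. The default lam (the
    chance that one evaluator finds a given problem) is a commonly
    quoted average, assumed here rather than taken from the sources."""
    return 1 - (1 - lam) ** n

for n in range(1, 6):
    print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems found")
# One evaluator finds ~31%, two find ~52%, five find ~84%: each
# additional evaluator helps, but with diminishing returns.
```

The same diminishing-returns curve underlies the earlier guideline that around five test participants are usually enough, and that very large test groups mostly rediscover the same problems.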
Heuristic evaluation is based on ten categories: (1) flexibility and efficiency of use; (2) recognition rather than recall; (3) user control and freedom; (4) error prevention; (5) consistency and standards; (6) aesthetic and minimalist design; (7) visibility of system status; (8) help users recognize, diagnose, and recover from errors; (9) match between the system and the real world; and (10) help and documentation (Natt och Dag, Regnell, Madsen & Aurum, 2001).

The Bureau of Labor Statistics (BLS) performed a heuristic evaluation on a prototype of its first website to identify usability issues and to explore the potential of heuristic evaluation for future BLS systems. Two experts conducted the evaluation of the system. Prior to the start of the evaluation, two documents were prepared: (1) a project overview enumerating the project objectives, target audience, and expected usage patterns of an online public access system; and (2) a list of heuristics or usability principles. The heuristics developed for the study were: (1) speak the users’ language; (2) consistency; (3) minimize the users’ memory load; (4) flexibility and efficiency of use; (5) aesthetic and minimalist design; (6) chunking; (7) progressive levels of detail; and (8) navigational feedback (Levi & Conrad, 1996). The evaluation revealed numerous problems with the prototype, mainly against the “speak the users’ language”, “aesthetic and minimalist design”, and navigation heuristics. Although the evaluation identified a significant number of violations of the heuristics, some drawbacks were also noted. First, there was no baseline data against which to compare the results. Second, the evaluation was performed on just one type of machine, browser, and local network. Third, since some users had difficulty distinguishing the boundary between the browser and the web page, some of their comments or identified issues may have pertained to the browser, which is beyond the developers’ control (Levi & Conrad, 1996).

Comparison and Contrast

Usability testing and heuristic evaluation each have their own strengths and weaknesses. In terms of the test users involved, usability testing attempts to identify and recreate a typical user of a system based on demographic or other existing organizational data; based on these parameters, developers identify potential test participants. Heuristic evaluations, on the other hand, depend on usability experts as evaluators. Expert evaluators acting as typical users of the system are not likely to represent the target user audience, and they are more likely to depend on their expertise and knowledge, which differ significantly from how an average user thinks. In terms of preparation time, usability tests involve the creation of agreement and consent forms, task sheets, and scripts, as well as the identification and recruitment of test participants. In contrast, a heuristic evaluation only involves assembling a panel of expert evaluators, who also double as test users, and requires a minimal amount of documentation. Heuristic evaluation is therefore more suitable in development environments where schedules have a limited degree of flexibility. In terms of the number of guidelines, usability testing may involve a considerable number of guidelines, whereas heuristic evaluation usually involves around 8 to 10 usability principles.
This is because usability testing guidelines tend to be more specific, while heuristic evaluation principles are more likely to be generalizations or aggregates of several similar guidelines.

Overall, the choice between usability testing and heuristic evaluation depends on a number of factors: (1) the time and resources available to the organization; (2) the nature of the system itself; and (3) the development stage at which the usability assessment is to be performed. Preston (2004) emphasized the importance of including real users as testers in the evaluation process because of the amount of real-world data they can provide. There are instances, however, where this is not feasible, so it is up to the developer to choose which usability assessment method or methods to utilize.

References

DialogDesign (2001). Usability test of www.TowerRecords.com. Retrieved from http://www.dialogdesign.dk/tekster/Tower_Test_Report.pdf

Ergosoft Laboratories (1998). Garden.com usability study 2. Retrieved from http://www.ict.griffith.edu.au/marilyn/uidweek10/ergosoft.pdf

Hurtado, N, Ruiz, M & Torres, J (2009). Using a dynamic model to simulate the heuristic evaluation of usability. In Gross, T, Gulliksen, J, Kotze, P, Oestreicher, L, Palanque, P & Prates, RO (Eds.), Human-Computer Interaction – INTERACT 2009 (pp. 912-915). New York: Springer.

Kappel, G (2003). Web engineering: The discipline of systematic development of web applications. Hoboken, NJ: John Wiley & Sons.

Levi, MD & Conrad, FG (1996). A heuristic evaluation of a World Wide Web prototype. Interactions, 3(4), 50-61.

Natt och Dag, J, Regnell, B, Madsen, OS & Aurum, A (2001). An industrial case study of usability evaluation in market-driven packaged software development. In Smith, MJ, Salvendy, G, Harris, D & Koubek, RJ (Eds.), Usability evaluation and interface design: Cognitive engineering, intelligent agents, and virtual reality (pp. 425-429). Mahwah, NJ: Lawrence Erlbaum Associates.

Preston, A (2004). Types of usability methods. Usability Interface, 10(3), 15.

Rosenbaum, S (2008). The future of usability evaluation: Increasing impact on value. In Law, E, Hvannberg, E & Cockton, G (Eds.), Maturing usability: Quality in software, interaction, and value (pp. 344-380). New York: Springer.

Rubin, J & Chisnell, D (2008). Handbook of usability testing: How to plan, design, and conduct effective tests. Indianapolis, IN: Wiley Publishing.

Silver, M (2005). Exploring interface design. Clifton Park, NY: Thomson Delmar Learning.

Weber, JH (2004). Is the help useful? How to create online help that meets your users’ needs. Whitefish Bay, WI: Hentzenwerke Publishing.