Questionnaire for User Interaction Satisfaction

The Questionnaire for User Interaction Satisfaction (QUIS) is a tool developed to assess users' subjective satisfaction with specific aspects of the human-computer interface. It was developed in 1987 by a multi-disciplinary team of researchers at the University of Maryland Human–Computer Interaction Lab. The QUIS is currently at Version 7.0, which includes a demographic questionnaire, a measure of overall system satisfaction along six scales, and measures of nine specific interface factors: screen factors, terminology and system feedback, learning factors, system capabilities, technical manuals, on-line tutorials, multimedia, teleconferencing, and software installation. In addition to English, it is available in German, Italian, Portuguese, and Spanish.

Background

When the QUIS was developed, a large number of questionnaires concerning users' subjective satisfaction already existed. However, few of these focused exclusively on the user's evaluation of the interface itself. This gap motivated the development of the QUIS.

Version 1.0

In 1987, Ben Shneiderman presented a questionnaire that directed users' attention to their subjective rating of the human-computer interface. While this questionnaire was a strong step toward focusing on users' evaluations of an interface, no empirical work had been done to assess its reliability or validity.

Version 2.0

This original questionnaire consisted of 90 questions in total. Of these, 5 were concerned with rating a user's overall reaction to the system. The remaining 85 were organized into 20 groups, each consisting of a main component question followed by related subcomponent questions.
The reliability of the questionnaire was found to be high, with Cronbach's alpha = .94.
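
Cronbach's alpha is the internal-consistency statistic used throughout the QUIS studies. As a general formula (stated here for reference, not taken from the QUIS papers), it relates the variances of the individual items to the variance of the total score:

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where k is the number of items, \sigma^{2}_{Y_i} is the variance of item i across respondents, and \sigma^{2}_{X} is the variance of the total scores. Values near 1 indicate that the items measure the same underlying construct.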

Version 3.0

QUIS Version 2.0 was modified and expanded into three major sections. The first section contained three questions about the type of system under evaluation and the amount of time spent on that system. In the second section, four questions focused on the user's past computer experience. The last section, Section III, contained the modified version of QUIS Version 2.0, now comprising 103 questions. These modifications included changing the rating scale from 1-10 to 1-9, with 0 used for "not applicable". This simplified future data entry for the questionnaire, since a maximum rating no longer required two keystrokes (as "10" would have), which in turn reduced response bias from subjects.
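
A minimal sketch of that single-keystroke encoding (the function name and error handling are invented for illustration, not taken from the QUIS software):

    # Single-keystroke data entry on the 1-9 scale, with 0 reserved for
    # "not applicable"; illustrative code, not the original QUIS software.
    def parse_rating(keystroke: str):
        """Map one keystroke to a rating, or None for 'not applicable'."""
        if keystroke == "0":
            return None                 # 0 means "not applicable"
        if len(keystroke) == 1 and "1" <= keystroke <= "9":
            return int(keystroke)       # every valid rating is one keystroke
        raise ValueError(f"invalid response: {keystroke!r}")

Because no response needs more than one key press, entries like "10" can never occur, which removes one source of data-entry error.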

Version 4.0

Chin, Norman and Shneiderman (1987) administered the QUIS Version 3.0, and a subsequent revised Version 4.0, to an introductory computer science class learning to program in CF PASCAL. Participants were assigned either to a batch-run programming environment on an IBM mainframe or to an interactive syntax-directed editor environment on an IBM PC. They evaluated the environment they had used during the first 6 weeks of the course with Version 3.0. Then, for the next 6 weeks, the participants switched programming environments and evaluated the new system with the QUIS Version 4.0.
Although Version 4.0 appeared to be reliable, the study was limited by its sampling: the users doing the evaluation were drawn entirely from an academic community. There was a clear need to determine whether the reliability of the QUIS would generalize to other populations of users and products, such as the members of a local PC User's Group.

Version 5.0

Another study, using QUIS Version 5.0, was carried out with a local PC User's Group. In order to look at ratings across products, the participants were divided into four groups. Each group rated a different product. The products were:

  1. a product the rater liked
  2. a product the rater disliked
  3. a command-line system (CLS)
  4. a menu-driven application (MDA)

This investigation examined the reliability and discriminability of the questionnaire. In terms of discriminability, the researchers compared the ratings for software that was liked against the ratings for software that was disliked. Lastly, a mandatory CLS was compared with a voluntarily chosen MDA. The researchers found the overall reliability of QUIS Version 5.0, using Cronbach's alpha, to be .939.
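
How such a figure is obtained can be sketched in a few lines of illustrative Python (this is not the original analysis code): Cronbach's alpha is computed from a matrix of ratings with one row per respondent and one column per item.

    # Cronbach's alpha over a respondents-by-items matrix of ratings;
    # illustrative code, not the study's original analysis.
    import numpy as np

    def cronbach_alpha(ratings: np.ndarray) -> float:
        """ratings: one row per respondent, one column per questionnaire item."""
        k = ratings.shape[1]                              # number of items
        item_variances = ratings.var(axis=0, ddof=1)      # per-item variance
        total_variance = ratings.sum(axis=1).var(ddof=1)  # variance of totals
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)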

Version 5.5

Even though QUIS Version 5.0 was a powerful tool for interface evaluation, interface issues limited the utility of its on-line version. Previous versions of the QUIS had been laid out in a very linear fashion, with one question shown on each screen. This format could not capture the hierarchical nature of the question sets in the QUIS, which limited the continuity between questions. QUIS 5.5 presented related sets of questions on the same screen, improving question continuity within a set and reducing the time subjects spent navigating between questions.
Users of the QUIS had often avoided the on-line version because it failed to record specific user comments about the system. This was not acceptable, since such comments are often vital for usability testing. In response, QUIS Version 5.5 collected and stored comments online for each set of questions.
The output format of the QUIS data had also been a source of frustration: the original format made analysis confusing and error-prone. QUIS Version 5.5 stored data in a format that could be easily imported into most popular spreadsheet and statistical analysis applications.
Overall, the most significant change in Version 5.5 was improved flexibility. Prior versions required experimenters to use all questions in all areas, even though most often only a subset of the 80 questions was actually applicable to the interface under evaluation. QUIS Version 5.5 allowed experimenters to select subsets of the QUIS questions to display, saving subjects and experimenters time and effort.
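
A minimal sketch of this grouping, subset selection, and spreadsheet-friendly export (set names and question wordings are invented for illustration, not the actual QUIS items):

    # QUIS 5.5-style flexibility; names and wordings are illustrative only.
    QUESTION_SETS = {
        "screen":      ["Characters on the screen", "Organization of information"],
        "terminology": ["Use of terms throughout the system", "Messages on screen"],
        "learning":    ["Learning to operate the system", "Exploring by trial and error"],
        "manuals":     ["Clarity of the technical manual"],
    }

    def select_questions(chosen_sets):
        """Keep only the question sets applicable to the interface under test."""
        return {name: QUESTION_SETS[name] for name in chosen_sets}

    def export_row(ratings):
        """Flatten one respondent's ratings into a tab-delimited line that
        spreadsheet and statistics packages can import directly."""
        return "\t".join("NA" if r is None else str(r) for r in ratings)

    # A system with no printed manual can simply drop that set:
    subset = select_questions(["screen", "terminology", "learning"])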

Version 5.5 - Development of the Web-Based QUIS

Standard HTML forms were used to let users interact with the QUIS Version 5.5. The online version's layout was very similar to the paper version of the questionnaire, displaying multiple questions per page with comment areas at the end of each section. To ensure that users considered each question, a response was required for every question (users were able to answer "Not Applicable"). Client-side JavaScript was used both to validate and to format the user's responses. The data for each section of the QUIS were time-stamped and recorded on the client computer. At the end of the questionnaire, the data from all sections of the QUIS were gathered together and sent as a single piece back to the server where the QUIS was deployed. This method of data collection ensured that only completed questionnaires were entered, and prevented concurrency issues between users.
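
The article does not name the server-side technology, so the following is only a sketch of the single-submission design (Flask, the endpoint path, and the file name are all chosen for illustration): the server accepts the whole questionnaire in one POST and appends exactly one record per completed questionnaire.

    # Illustrative server-side handler; the original implementation is not
    # described in the article. One POST carries all sections at once, so
    # only completed questionnaires are ever recorded.
    import json, time
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/quis/submit", methods=["POST"])   # hypothetical endpoint
    def submit():
        payload = request.get_json()               # every section in one piece
        payload["received_at"] = time.time()       # server-side timestamp
        # One atomic append per questionnaire avoids half-filled records and
        # concurrency problems between simultaneous respondents.
        with open("quis_responses.jsonl", "a") as f:
            f.write(json.dumps(payload) + "\n")
        return {"status": "ok"}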

Version 5.5 Paper vs. Online Study

This study compared responses from the paper and on-line formats of the QUIS Version 5.5. Most prior studies of computerized and paper tests had been interested in assessing their equivalence, and overall their results had not indicated significant differences. Twenty subjects evaluated WordPerfect using both the paper and online formats of the QUIS Version 5.5. Each administration of the QUIS was preceded by a practice session to refamiliarize the subject with the interface. As the researchers expected, the format of the questionnaire did not affect users' ratings. However, subjects using the online format wrote more in the comment sections than those who used the paper format, and their comments provided better feedback in terms of problems, strengths, and examples. These results indicated that the online QUIS format provides more, and higher-quality, information to developers, researchers, and human factors experts than the paper-and-pencil format.

Version 6.0

QUIS Version 5.5 was expanded into Version 6.0 and used in a study of the American Voice and Robotics (AVR) "Guardian" system.

Version 7.0

The QUIS Version 7.0 is an updated and expanded version of the previously validated QUIS 5.5. It is arranged in a hierarchical format and contains: (1) a demographic questionnaire, (2) six scales measuring overall reaction ratings of the system, (3) measures of four specific interface factors (screen factors, terminology and system feedback, learning factors, and system capabilities), and (4) optional sections to evaluate specific components of the system. These specific components include:

  1. technical manuals and online help
  2. on-line tutorials
  3. multimedia
  4. Internet access
  5. software installation

Additional space for the rater to comment on the interface is also included within the questionnaire. The comment space is headed by a statement that prompts the rater to comment on each of the specific interface factors.
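
The hierarchical layout just described can be summarized as a data structure; the section names below come from the text above, while the representation itself is invented for illustration:

    # QUIS 7.0 layout sketched as a nested structure (illustrative only).
    QUIS_7 = {
        "demographic_questionnaire": True,
        "overall_reaction_scales": 6,
        "interface_factors": [
            "screen factors",
            "terminology and system feedback",
            "learning factors",
            "system capabilities",
        ],
        "optional_components": [
            "technical manuals and online help",
            "on-line tutorials",
            "multimedia",
            "Internet access",
            "software installation",
        ],
    }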

Current

In addition to English, the QUIS 7.0 is currently available in the following languages: German, Italian, Portuguese (Brazilian), and Spanish.
In Fall 2011, a group of University of Maryland students began work on updating the QUIS Version 7.0.

Competitors

The Software Usability Measurement Inventory (SUMI) is a comparable questionnaire for assessing users' perceptions of software usability.

References

  1. "About". 2018-04-25. Archived from the original on 2018-04-25.
  2. ^ Chin, J. P., Diehl, V. A. and Norman, K. L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. Proceedings of SIGCHI '88, (pp. 213-218), New York: ACM/SIGCHI.
  3. Harper, B. D. & Norman, K. L. (1993). Improving User Satisfaction: The Questionnaire for User Interaction Satisfaction Version 5.5. Proceedings of the 1st Annual Mid-Atlantic Human Factors Conference, (pp. 224-228), Virginia Beach, VA
  4. ^ Slaughter, L. A., Harper, B. D. & Norman, K. L. (1994). Assessing the Equivalence of Paper and On-line versions of the QUIS 5.5. Proceedings of the 2nd Annual Mid-Atlantic Human Factors Conference, (pp. 87-91), Washington, D.C.
  5. Wallace, D. F. & Norman, K. L., & Plaisant, C. (1988). The American Voice And Robotics "Guardian: System: A Case Study In User Interface Usability Evaluation. Technical Report (CAR-TR-392). College Park, MD: Human-Computer Interaction Laboratory, Center for Automation Research, University of Maryland.
  6. "What is SUMI?". Archived from the original on 2011-11-21. Retrieved 2011-12-08.