Evaluation and Assessment
Testing and Grading Issues (University of Michigan, Center for Research on Learning and Teaching).
This well-designed site has links that cover multiple aspects of testing and grading from a basic introduction to testing and measurement to such specific issues as improving multiple choice tests and advice on writing and grading essay questions.
Assessment Terminology: A Glossary of Useful Terms (New Horizons for Learning).
This site contains clear definitions of assessment terms from accountability to validity.
"Quizzes, Tests, and Exams," Barbara Gross Davis (University of California, Berkeley). From Tools for Teaching, Jossey-Bass. 1993.
An overview of testing, including descriptions of many kinds of tests and guidelines for constructing effective tests.
“How to Prepare Better Tests: Guidelines for University Faculty,” by Beverly B. Zimmerman, Richard R. Sudweeks, Monte F. Shelley, and Bud Wood (Brigham Young University).
A handbook that covers developing a test; preparing, assembling, and administering a test; evaluating and scoring a test; and interpreting test results.
Evaluating Student Projects (University of North Carolina at Chapel Hill, Center for Teaching and Learning).
Describes guidelines and assessment tools for evaluating/grading student projects.
Characteristics of a Good Grading System (University of Minnesota, Center for Teaching and Learning Services).
Five important points for designing a good grading system for a course.
"Improving College Grading," Gerald S. Hanna and William E. Cashin, Kansas State University. (IDEA Paper No. 19, January 1988). PDF/Adobe Acrobat.
This paper discusses the limitations of percentage and class-curve grading and recommends anchor grading, which the authors feel is a more meaningful grading system.
Grading Systems (University of Minnesota, Center for Teaching and Learning Services).
Defines and describes norm-referenced, criterion-referenced, and other grading systems.
Assessment of Teaching & Learning (University of Southern California, Center for Excellence in Teaching).
The numerous links on this comprehensive site address a variety of issues in student and program assessment.
VALUE: Valid Assessment of Learning in Undergraduate Education (Association of American Colleges and Universities).
Completing a brief sign-in form gives free access to 15 downloadable rubrics in three areas: 1) Intellectual and Practical Skills, such as Critical thinking or Written communication; 2) Personal and Social Responsibility, such as Ethical reasoning; and 3) Integrative and Applied Learning. Each rubric was developed by teams of faculty and educational professionals and reflects “faculty expectations for essential learning across the nation.”
“How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty,” by Steven J. Burton, Richard R. Sudweeks, Paul F. Merrill, and Bud Wood (Brigham Young University).
Links to a 33-page handbook on constructing effective multiple-choice test items. Good introduction to the use of multiple-choice tests, including discussion of pros and cons, measuring higher-level objectives, varieties, and guidelines.
"Improving Multiple-Choice Tests," Victoria L. Clegg and William E. Cashin, Kansas State University. (IDEA Paper No. 16, Kansas State University, September 1986).
Presents strengths and limitations of multiple-choice tests. Makes recommendations for when multiple-choice items should be used, offers detailed instructions on how to construct them, and suggests methods for organizing the entire test.
Improving Multiple Choice Questions (University of North Carolina at Chapel Hill, Center for Teaching and Learning, For Your Consideration #8, November 1990).
Discusses the need to design exam items at three levels—recall, application, and evaluation—in order to achieve validity and reliability in multiple-choice exams that test higher order cognitive skills as well as factual information. Offers guidelines for writing questions and analyzing the responses after the test is given.
Authentic Assessment Toolbox, Jon Mueller, North Central College, Naperville, Illinois.
A guide for constructing multiple-choice test questions. Includes terminology; list of guidelines; and a section on creating higher-level comprehension, application, and analysis items, all with examples.
Writing Multiple Choice Questions that Demand Critical Thinking (University of Oregon, Teaching Effectiveness Program).
Many practical suggestions for writing effective items plus a detailed set of techniques for writing several different types of multiple-choice questions that demand higher order thinking, with examples of each.
"Writing Multiple Choice Questions which Require Comprehension," Russell A. Dewey, Ph.D., Georgia Southern University (Emeritus).
Another guide to writing multiple-choice questions. Has a useful section on ways to defeat the "test-wise" strategies of students who don’t study.
“Writing Multiple Choice Test Items” by D.M. Zimmaro (University of Texas at Austin).
Guidelines for writing test items, with a particular focus on items related to Bloom’s Taxonomy of Educational Objectives. Provides numerous examples.
"Writing Multiple-Choice Test Items," Jerard Kehoe, Virginia Polytechnic Institute and State University.
General how-to information on constructing multiple-choice tests.
“How Can We Construct Good Multiple-Choice Items?” by Derek Cheung, Chinese University of Hong Kong and Robert Bucat, University of Western Australia.
Eight guidelines for constructing multiple-choice items, each with examples of poorly formed and well-formed multiple-choice items. Although the examples are for chemistry, the eight guidelines can apply to any discipline.
"More Multiple-Choice Item Writing Do’s and Don’ts," Robert B. Frary, Virginia Polytechnic Institute and State University. Practical Assessment, Research & Evaluation, 4 (11).
A list of recommendations for writing multiple-choice test items. Covers content, structure, options, and errors to avoid. Includes examples of do’s and don’ts for each topic.
Designing and Managing MCQs (University of Cape Town).
A faculty handbook for developing multiple-choice questions. Includes examples from first year courses in Philosophy of Education, Sociology, and Economics, plus a section on scoring and statistics.
"Constructing Written Test Questions for the Basic and Clinical Sciences" (National Board of Medical Examiners).
Although written for the sciences, this comprehensive manual has information useful to those writing multiple-choice test items in all subject areas. The complete manual is downloadable (free of charge for educational purposes) in PDF format, available in English, Spanish, or Russian.
"Clinical Teaching," Thomas L. Schwenk, University of Michigan Medical School (CRLT Occasional Paper #1, 1987, University of Michigan, Center for Research on Learning and Teaching).
Discusses four factors required for successful clinical teaching and applies them to bedside teaching.
"Five Microskills for Clinical Teaching," developed by Kay Gordon and Barbara Meyer; adapted by David Irby and updated by Tom Greer (University of Washington School of Medicine).
Describes five microskills for effective clinical teaching, with examples of their application to two cases. Includes a simulation exercise for practicing these skills.
Clinical Education (University of Medicine and Dentistry of New Jersey, Center for Teaching Excellence).
An extensive collection of links to web resources on all aspects of clinical teaching and education.
“A Guide to Clinical Performance Testing,” Neal Whitman, University of Utah Medical Center. (IDEA Paper No. 7, Kansas State University, February 1982).
This paper describes the steps necessary for developing an effective performance evaluation in the health professions as well as other disciplines. Covers clarifying purposes, establishing performance goals and objectives, and several methods for measuring student performance such as checklists, observation logs, and anecdotal records. Stresses the need for multiple data sources and instruments.
Evaluating Clinical Performance (University of Medicine and Dentistry of New Jersey).
Links to several sites that provide material on evaluating clinical performance and competence, including assessment tools.
Strategies in Clinical Teaching (University of Kansas School of Medicine).
Eight mini-teaching modules covering quick facts about various aspects of clinical teaching, designed to help community-based faculty teach more effectively.
Professional Learning: online access to a book chapter on developing expertise in medical education.
Resources specifically for the College of Osteopathic Medicine were compiled and annotated by Christina Dokter, Curriculum Development Specialist, College of Osteopathic Medicine (based on faculty and student survey results), drawing on material from the College of Human Medicine when appropriate.