"...there seems to be no escape from the conclusions that the two types of exams are measuring identical things" (Paterson, 1926, p. 246). This conclusion should not be surprising; after all, a well written essay item requires that the student (1) have a store of knowledge, (2) be able to relate facts and principles, and (3) be able to organize such information into a coherent and logical written expression, whereas an objective test item requires that the student (1) have a store of knowledge, (2) be able to relate facts and principles, and (3) be able to organize such information into a coherent and logical choice among several alternatives.
9. TRUE
Both objective and essay test items are good devices for measuring student achievement. However, as seen in the previous quiz answers, there are particular measurement situations where one item type is more appropriate than the other. Following is a set of recommendations for using either objective or essay test items (adapted from Robert L. Ebel, Essentials of Educational Measurement, 1972, p. 144):
1 Sax, G., & Collet, L. S. (1968). An empirical comparison of the effects of recall and multiple-choice tests on student achievement. Journal of Educational Measurement, 5(2), 169–173. doi:10.1111/j.1745-3984.1968.tb00622.x
Paterson, D. G. (1926). Do new and old type examinations measure different mental functions? School and Society, 24, 246–248.
When to Use Essay or Objective Tests
Essay tests are especially appropriate when:
the group to be tested is small and the test is not to be reused.
you wish to encourage and reward the development of student skill in writing.
you are more interested in exploring the student's attitudes than in measuring his/her achievement.
you are more confident of your ability as a critical and fair reader than as an imaginative writer of good objective test items.
Objective tests are especially appropriate when:
the group to be tested is large and the test may be reused.
highly reliable test scores must be obtained as efficiently as possible.
impartiality of evaluation, absolute fairness, and freedom from possible test scoring influences (e.g., fatigue, lack of anonymity) are essential.
you are more confident of your ability to express objective test items clearly than of your ability to judge essay test answers correctly.
there is more pressure for speedy reporting of scores than for speedy test preparation.
Either essay or objective tests can be used to:
measure almost any important educational achievement a written test can measure.
test understanding and ability to apply principles.
test ability to think critically.
test ability to solve problems.
test ability to select relevant facts and principles and to integrate them toward the solution of complex problems.
In addition to the preceding suggestions, it is important to realize that certain item types are better suited than others for measuring particular learning objectives. For example, learning objectives requiring the student to demonstrate or to show may be better measured by performance test items, whereas objectives requiring the student to explain or to describe may be better measured by essay test items. Matching learning objective expectations with certain item types can help you select an appropriate kind of test item for your classroom exam as well as provide a higher degree of test validity (i.e., testing what is supposed to be tested). To further illustrate, several sample learning objectives and appropriate test items are provided below.
Learning Objectives
Most Suitable Test Item
The student will be able to categorize and name the parts of the human skeletal system.
Objective Test Item (M-C, T-F, Matching)
The student will be able to critique and appraise another student's English composition on the basis of its organization.
Essay Test Item (Extended-Response)
The student will demonstrate safe laboratory skills.
Performance Test Item
The student will be able to cite four examples of satire that Twain uses in .
Essay Test Item (Short-Answer)
After you have decided to use an objective exam, an essay exam, or a combination of both, the next step is to select the kind(s) of objective or essay item that you wish to include on the exam. To help you make such a choice, the different kinds of objective and essay items are presented in the following section. The various kinds of items are briefly described and compared to one another in terms of their advantages and limitations for use. Also presented is a set of general suggestions for the construction of each item variation.
II. Suggestions for Using and Writing Test Items
The multiple-choice item consists of two parts: (a) the stem, which identifies the question or problem, and (b) the response alternatives. Students are asked to select the one alternative that best completes the statement or answers the question. For example:
Sample Multiple-Choice Item
(a)
(b)
*correct response
Advantages in Using Multiple-Choice Items
Multiple-choice items can provide...
versatility in measuring all levels of cognitive ability.
highly reliable test scores.
scoring efficiency and accuracy.
objective measurement of student achievement or ability.
a wide sampling of content or objectives.
a reduced guessing factor when compared to true-false items.
different response alternatives which can provide diagnostic feedback.
Limitations in Using Multiple-Choice Items
Multiple-choice items...
are difficult and time consuming to construct.
lead an instructor to favor simple recall of facts.
place a high degree of dependence on the student's reading ability and instructor's writing ability.
Suggestions For Writing Multiple-Choice Test Items
1. When possible, state the stem as a direct question rather than as an incomplete statement.
Undesirable:
Desirable:
2. Present a definite, explicit and singular question or problem in the stem.
Undesirable:
Desirable:
3. Eliminate excessive verbiage or irrelevant information from the stem.
Undesirable:
Desirable:
4. Include in the stem any word(s) that might otherwise be repeated in each alternative.
Undesirable:
5. Use negatively stated stems sparingly. When used, underline and/or capitalize the negative word.
Undesirable:
Desirable:
Item Alternatives
6. Make all alternatives plausible and attractive to the less knowledgeable or skillful student.
Undesirable
Desirable
7. Make the alternatives grammatically parallel with each other, and consistent with the stem.
Undesirable:
8. Make the alternatives mutually exclusive.
Undesirable:
The daily minimum required amount of milk that a 10 year old child should drink is
9. When possible, present alternatives in some logical order (e.g., chronological, most to least, alphabetical).
Undesirable
Desirable
10. Be sure there is only one correct or best response to the item.
Undesirable:
11. Make alternatives approximately equal in length.
Undesirable:
12. Avoid irrelevant clues such as grammatical structure, well known verbal associations or connections between stem and answer.
Undesirable: (grammatical clue)
of water behind the dam.
13. Use at least four alternatives for each item to lower the probability of getting the item correct by guessing.
14. Randomly distribute the correct response among the alternative positions throughout the test so that each position (a, b, c, d, and e) serves as the correct response in approximately equal proportion.
15. Use the alternatives "none of the above" and "all of the above" sparingly. When used, such alternatives should occasionally be used as the correct response.
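Suggestion 13 rests on simple probability: with k plausible alternatives, a blind guess succeeds 1/k of the time, so adding alternatives shrinks the score expected from guessing alone. The following sketch illustrates this (the 40-item exam length is an arbitrary assumption for illustration):

```python
from fractions import Fraction

def expected_guess_score(num_items: int, num_alternatives: int) -> Fraction:
    """Expected number of items a blind guesser answers correctly."""
    return num_items * Fraction(1, num_alternatives)

# On a hypothetical 40-item exam, guessing alone yields on average:
for k in (2, 3, 4, 5):
    print(f"{k} alternatives: {float(expected_guess_score(40, k)):.1f} correct")
```

Moving from three to four alternatives drops the expected guessing score on such an exam from about 13 items to 10, which is why four or more alternatives are usually recommended.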
A true-false item can be written in one of three forms: simple, complex, or compound. Answers can consist of only two choices (simple), more than two choices (complex), or two choices plus a conditional completion response (compound). An example of each type of true-false item follows:
Sample True-False Item: Simple
The acquisition of morality is a developmental process.
True
False
Sample True-False Item: Complex
Sample True-False Item: Compound
The acquisition of morality is a developmental process.
True
False
Advantages In Using True-False Items
True-False items can provide...
the widest sampling of content or objectives per unit of testing time.
an objective measurement of student achievement or ability.
Limitations In Using True-False Items
True-false items...
incorporate an extremely high guessing factor. For simple true-false items, each student has a 50/50 chance of correctly answering the item without any knowledge of the item's content.
can often lead an instructor to write ambiguous statements due to the difficulty of writing statements which are unequivocally true or false.
do not discriminate between students of varying ability as well as other item types.
can often include more irrelevant clues than do other item types.
can often lead an instructor to favor testing of trivial knowledge.
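The 50/50 guessing factor noted above compounds over a whole test, and the binomial distribution quantifies it. The sketch below is illustrative only (the 20-item length and 70% cutoff are assumptions, not figures from the text); it computes the chance that a student with no knowledge of the content reaches a given score by guessing alone:

```python
from math import comb

def prob_at_least(n_items: int, n_correct: int, p: float = 0.5) -> float:
    """Probability that blind guessing yields at least n_correct of n_items."""
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(n_correct, n_items + 1))

# Chance of reaching 70% (14 of 20) on a true-false quiz by pure guessing:
print(f"{prob_at_least(20, 14):.4f}")
```

Even when few guessers reach a high cutoff, guessing still inflates every simple true-false score: the expected score with zero knowledge is already 50%.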
Suggestions For Writing True-False Test Items
1. Base true-false items upon statements that are absolutely true or false, without qualifications or exceptions.
Undesirable:
Desirable:
2. Express the item statement as simply and as clearly as possible.
Undesirable:
Desirable:
3. Express a single idea in each test item.
Undesirable:
Desirable:
4. Include enough background information and qualifications so that the ability to respond correctly to the item does not depend on some special, uncommon knowledge.
Undesirable:
Desirable:
5. Avoid lifting statements from the text, lecture or other materials so that memory alone will not permit a correct answer.
Undesirable:
Desirable:
6. Avoid using negatively stated item statements.
Undesirable:
Desirable:
7. Avoid the use of unfamiliar vocabulary.
Undesirable:
Desirable:
8. Avoid the use of specific determiners which would permit a test-wise but unprepared examinee to respond correctly. Specific determiners refer to sweeping terms like "all," "always," "none," "never," "impossible," "inevitable," etc. Statements including such terms are likely to be false. On the other hand, statements using qualifying determiners such as "usually," "sometimes," "often," etc., are likely to be true. When statements do require the use of specific determiners, make sure they appear in both true and false items.
Undesirable:
required to rule on the constitutionality of a law. (T)
easier to score than an essay test. (T)
Desirable:
180°. (T)
other molecule of that compound. (T)
used for the metering of electrical energy used in a home. (F)
9. False items tend to discriminate more highly than true items. Therefore, use more false items than true items (but no more than 15% additional false items).
In general, matching items consist of a column of stimuli presented on the left side of the exam page and a column of responses placed on the right side of the page. Students are required to match the response associated with a given stimulus. For example:
Sample Matching Test Item
Advantages In Using Matching Items
Matching items...
require short periods of reading and response time, allowing you to cover more content.
provide objective measurement of student achievement or ability.
provide highly reliable test scores.
provide scoring efficiency and accuracy.
Limitations in Using Matching Items
Matching items...
have difficulty measuring learning objectives requiring more than simple recall of information.
are difficult to construct due to the problem of selecting a common set of stimuli and responses.
Suggestions for Writing Matching Test Items
1. Include directions which clearly state the basis for matching the stimuli with the responses. Explain whether or not a response can be used more than once and indicate where to write the answer.
Undesirable:
Desirable:
2. Use only homogeneous material in matching items.
Undesirable:
1.
2.
3.
4.
5.
a.
b.
c.
d. O
e.
f.
Desirable:
1.
2.
3.
4.
a. SO
b.
c.
d. O
e. HCl
3. Arrange the list of responses in some systematic order if possible (e.g., chronological, alphabetical).
Undesirable
Desirable
1.
2.
3.
4.
a.
b.
c.
d.
e.
a.
b.
c.
d.
e.
4. Avoid grammatical or other clues to the correct response.
Undesirable:
1.
2.
3.
4.
Desirable:
5. Keep matching items brief, limiting the list of stimuli to under 10.
6. Include more responses than stimuli to help prevent answering through the process of elimination.
7. When possible, reduce the amount of reading time by including only short phrases or single words in the response list.
The completion item requires the student to answer a question or to finish an incomplete statement by filling in a blank with the correct word or phrase. For example:
Sample Completion Item
According to Freud, personality is made up of three major systems, the _________, the ________ and the ________.
Advantages in Using Completion Items
Completion items...
can provide a wide sampling of content.
can efficiently measure lower levels of cognitive ability.
can minimize guessing as compared to multiple-choice or true-false items.
can usually provide an objective measure of student achievement or ability.
Limitations of Using Completion Items
Completion items...
are difficult to construct so that the desired response is clearly indicated.
are more time consuming to score when compared to multiple-choice or true-false items.
are more difficult to score since more than one answer may have to be considered correct if the item was not properly prepared.
Suggestions for Writing Completion Test Items
1. Omit only significant words from the statement.
Undesirable:
called a nucleus.
Desirable:
.
2. Do not omit so many words from the statement that the intended meaning is lost.
Undesirable:
Desirable:
3. Avoid grammatical or other clues to the correct response.
Undesirable:
decimal system.
Desirable:
4. Be sure there is only one correct response.
Undesirable:
.
Desirable:
.
5. Make the blanks of equal length.
Undesirable:
and (Juno) .
Desirable:
and (Juno) .
6. When possible, delete words at the end of the statement after the student has been presented a clearly defined problem.
Undesirable:
.
Desirable:
is (122.5) .
7. Avoid lifting statements directly from the text, lecture or other sources.
8. Limit the required response to a single word or phrase.
The essay test is probably the most popular of all types of teacher-made tests. In general, a classroom essay test consists of a small number of questions to which the student is expected to demonstrate his/her ability to (a) recall factual knowledge, (b) organize this knowledge and (c) present the knowledge in a logical, integrated answer to the question. An essay test item can be classified as either an extended-response essay item or a short-answer essay item. The latter calls for a more restricted or limited answer in terms of form or scope. An example of each type of essay item follows.
Sample Extended-Response Essay Item
Explain the difference between the S-R (Stimulus-Response) and the S-O-R (Stimulus-Organism-Response) theories of personality. Include in your answer (a) brief descriptions of both theories, (b) supporters of both theories and (c) research methods used to study each of the two theories. (10 pts. 20 minutes)
Sample Short-Answer Essay Item
Identify research methods used to study the S-R (Stimulus-Response) and S-O-R (Stimulus-Organism-Response) theories of personality. (5 pts. 10 minutes)
Advantages In Using Essay Items
Essay items...
are easier and less time consuming to construct than are most other item types.
provide a means for testing student's ability to compose an answer and present it in a logical manner.
can efficiently measure higher order cognitive objectives (e.g., analysis, synthesis, evaluation).
Limitations In Using Essay Items
Essay items...
cannot measure a large amount of content or objectives.
generally provide low test and test scorer reliability.
require an extensive amount of instructor's time to read and grade.
generally do not provide an objective measure of student achievement or ability (subject to bias on the part of the grader).
Suggestions for Writing Essay Test Items
1. Prepare essay items that elicit the type of behavior you want to measure.
Learning Objective:
The student will be able to explain how the normal curve serves as a statistical model.
Undesirable:
Describe a normal curve in terms of: symmetry, modality, kurtosis and skewness.
Desirable:
Briefly explain how the normal curve serves as a statistical model for estimation and hypothesis testing.
2. Phrase each item so that the student's task is clearly indicated.
Undesirable:
Discuss the economic factors which led to the stock market crash of 1929.
Desirable:
Identify the three major economic conditions which led to the stock market crash of 1929. Discuss briefly each condition in correct chronological sequence and in one paragraph indicate how the three factors were inter-related.
3. Indicate for each item a point value or weight and an estimated time limit for answering.
Undesirable:
Compare the writings of Bret Harte and Mark Twain in terms of settings, depth of characterization, and dialogue styles of their main characters.
Desirable:
Compare the writings of Bret Harte and Mark Twain in terms of settings, depth of characterization, and dialogue styles of their main characters. (10 points, 20 minutes)
4. Ask questions that will elicit responses on which experts could agree that one answer is better than another.
5. Avoid giving the student a choice among optional items as this greatly reduces the reliability of the test.
6. For classroom examinations, it is generally recommended to administer several short-answer items rather than only one or two extended-response items.
Suggestions for Scoring Essay Items
ANALYTICAL SCORING:
Each answer is compared to an ideal answer and points are assigned for the inclusion of necessary elements. Grades are based on the number of accumulated points, either absolutely (e.g., A = 10 or more points, B = 6–9 points, etc.) or relatively (e.g., A = top 15% of scores, B = next 30% of scores, etc.).
GLOBAL QUALITY:
Each answer is read and assigned a score (e.g., grade, total points) based either on the total quality of the response or on the total quality of the response relative to other student answers.
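The two grading models described above can be expressed procedurally. The sketch below is only an illustration: the point cutoffs and percentage bands are the example figures from the descriptions, not fixed rules, and the lower bands are invented for completeness.

```python
def absolute_grade(points: int) -> str:
    """Analytical scoring, absolute model: fixed point cutoffs (cutoffs assumed)."""
    if points >= 10:
        return "A"
    if points >= 6:
        return "B"
    return "C"  # remaining bands omitted for brevity

def relative_grades(scores: list[int]) -> list[str]:
    """Relative model: top 15% of scores get A, next 30% get B, the rest C."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    grades = [""] * len(scores)
    for rank, i in enumerate(order):
        frac = rank / len(scores)
        grades[i] = "A" if frac < 0.15 else ("B" if frac < 0.45 else "C")
    return grades
```

For example, absolute_grade(11) returns "A" regardless of how classmates score, while relative_grades applied to ten papers awards "A" to roughly the top one or two papers no matter what their raw point totals are.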
Example Essay Item and Grading Models
"Americans are a mixed-up people with no sense of ethical values. Everyone knows that baseball is far less necessary than food and steel, yet they pay ball players a lot more than farmers and steelworkers."
WHY? Use 3-4 sentences to indicate how an economist would explain the above situation.
Analytical Scoring
Global Quality
Assign scores or grades on the overall quality of the written response as compared to an ideal answer. Or, compare the overall quality of a response to other student responses by sorting the papers into three stacks:
Read and sort each stack again, dividing each into three more stacks.
In total, nine discriminations can be used to assign test grades in this manner. The number of stacks or discriminations can vary to meet your needs.
Try not to allow factors which are irrelevant to the learning outcomes being measured to affect your grading (e.g., handwriting, spelling, neatness).
Read and grade all class answers to one item before going on to the next item.
Read and grade the answers without looking at the students' names to avoid possible preferential treatment.
Occasionally shuffle papers during the reading of answers to help avoid any systematic order effects (e.g., Sally's "B" work always followed Jim's "A" work and thus looked more like "C" work).
When possible, ask another instructor to read and grade your students' responses.
Another form of a subjective test item is the problem solving or computational exam question. Such items present the student with a problem situation or task and require a demonstration of work procedures and a correct solution, or just a correct solution. This kind of test item is classified as a subjective type of item due to the procedures used to score item responses. Instructors can assign full or partial credit to either correct or incorrect solutions depending on the quality and kind of work procedures presented. An example of a problem solving test item follows.
Example Problem Solving Test Item
It was calculated that 75 men could complete a strip on a new highway in 70 days. When work was scheduled to commence, it was found necessary to send 25 men on another road project. How many days longer will it take to complete the strip? Show your work for full or partial credit.
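For scoring reference, the sample item above follows standard man-day (inverse proportion) reasoning, assuming every man works at the same constant rate; a short check of the expected solution:

```python
# Total job size, assuming each man works at the same constant rate:
total_work = 75 * 70          # 5250 man-days
remaining_men = 75 - 25       # 50 men actually available
new_duration = total_work / remaining_men   # 105 days
extra_days = new_duration - 70              # 35 days longer than planned
print(int(extra_days))        # -> 35
```

Working through the item this way before administration (see suggestion 7 below) also produces the model solution needed to assign partial credit consistently.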
Advantages In Using Problem Solving Items
Problem solving items...
minimize guessing by requiring the students to provide an original response rather than to select from several alternatives.
are easier to construct than are multiple-choice or matching items.
can most appropriately measure learning objectives which focus on the ability to apply skills or knowledge in the solution of problems.
can measure an extensive amount of content or objectives.
Limitations in Using Problem Solving Items
Problem solving items...
require an extensive amount of instructor time to read and grade.
generally do not provide an objective measure of student achievement or ability (subject to bias on the part of the grader when partial credit is given).
Suggestions For Writing Problem Solving Test Items
1. Clearly identify and explain the problem.
Undesirable:
Desirable:
2. Provide directions which clearly inform the student of the type of response called for.
Undesirable:
Desirable:
3. State in the directions whether or not the student must show his/her work procedures for full or partial credit.
Undesirable:
Desirable:
4. Clearly separate item parts and indicate their point values.
A man leaves his home and drives to a convention at an average rate of 50 miles per hour. Upon arrival, he finds a telegram advising him to return at once. He catches a plane that takes him back at an average rate of 300 miles per hour.
Undesirable:
Desirable:
5. Use figures, conditions and situations which create a realistic problem.
Undesirable:
Desirable:
6. Ask questions that elicit responses on which experts could agree that one solution and one or more work procedures are better than others.
7. Work through each problem before classroom administration to double-check accuracy.
A performance test item is designed to assess the ability of a student to perform correctly in a simulated situation (i.e., one that approximates the situation in which the student will ultimately be expected to apply his/her learning). The concept of simulation is central in performance testing; a performance test will simulate to some degree a real life situation to accomplish the assessment. In theory, a performance test could be constructed for any skill and real life situation. In practice, most performance tests have been developed for the assessment of vocational, managerial, administrative, leadership, communication, interpersonal and physical education skills in various simulated situations. An illustrative example of a performance test item is provided below.
Sample Performance Test Item
Assume that some of the instructional objectives of an urban planning course include the development of the student's ability to effectively use the principles covered in the course in various "real life" situations common for an urban planning professional. A performance test item could measure this development by presenting the student with a specific situation which represents a "real life" situation. For example,
An urban planning board makes a last minute request for the professional to act as consultant and critique a written proposal which is to be considered in a board meeting that very evening. The professional arrives before the meeting and has one hour to analyze the written proposal and prepare his critique. The critique presentation is then made verbally during the board meeting; reactions of members of the board or the audience include requests for explanation of specific points or informed attacks on the positions taken by the professional.
The performance test designed to simulate this situation would require the student being tested to role-play the professional's part, while students or faculty act the other roles in the situation. Various aspects of the "professional's" performance would then be observed and rated by several judges with the necessary background. The ratings could then be used both to provide the student with a diagnosis of his/her strengths and weaknesses and to contribute to an overall summary evaluation of the student's abilities.
Advantages In Using Performance Test Items
Performance test items...
can most appropriately measure learning objectives which focus on the ability of the students to apply skills or knowledge in real life situations.
usually provide a degree of test validity not possible with standard paper and pencil test items.
are useful for measuring learning objectives in the psychomotor domain.
Limitations In Using Performance Test Items
Performance test items...
are difficult and time consuming to construct.
are primarily used for testing students individually and not for testing groups. Consequently, they are relatively costly, time consuming, and inconvenient forms of testing.
generally do not provide an objective measure of student achievement or ability (subject to bias on the part of the observer/grader).
Suggestions For Writing Performance Test Items
Prepare items that elicit the type of behavior you want to measure.
Clearly identify and explain the simulated situation to the student.
Make the simulated situation as "life-like" as possible.
Provide directions which clearly inform the students of the type of response called for.
When appropriate, clearly state time and activity limitations in the directions.
Adequately train the observer(s)/scorer(s) to ensure that they are fair in scoring the appropriate behaviors.
III. TWO METHODS FOR ASSESSING TEST ITEM QUALITY
This section presents two methods for collecting feedback on the quality of your test items. The two methods include using self-review checklists and student evaluation of test item quality. You can use the information gathered from either method to identify strengths and weaknesses in your item writing.
Checklist for Evaluating Test Items
EVALUATE YOUR TEST ITEMS BY CHECKING THE SUGGESTIONS WHICH YOU FEEL YOU HAVE FOLLOWED.
____ When possible, stated the stem as a direct question rather than as an incomplete statement.
____ Presented a definite, explicit and singular question or problem in the stem.
____ Eliminated excessive verbiage or irrelevant information from the stem.
____ Included in the stem any word(s) that might have otherwise been repeated in each alternative.
____ Used negatively stated stems sparingly. When used, underlined and/or capitalized the negative word(s).
____ Made all alternatives plausible and attractive to the less knowledgeable or skillful student.
____ Made the alternatives grammatically parallel with each other, and consistent with the stem.
____ Made the alternatives mutually exclusive.
____ When possible, presented alternatives in some logical order (e.g., chronologically, most to least).
____ Made sure there was only one correct or best response per item.
____ Made alternatives approximately equal in length.
____ Avoided irrelevant clues such as grammatical structure, well known verbal associations or connections between stem and answer.
____ Used at least four alternatives for each item.
____ Randomly distributed the correct response among the alternative positions throughout the test so that each position (a, b, c, d, and e) served as the correct response in approximately equal proportion.
____ Used the alternatives "none of the above" and "all of the above" sparingly. When used, such alternatives were occasionally the correct response.
____ Based true-false items upon statements that are absolutely true or false, without qualifications or exceptions.
____ Expressed the item statement as simply and as clearly as possible.
____ Expressed a single idea in each test item.
____ Included enough background information and qualifications so that the ability to respond correctly did not depend on some special, uncommon knowledge.
____ Avoided lifting statements from the text, lecture, or other materials.
____ Avoided using negatively stated item statements.
____ Avoided the use of unfamiliar language.
____ Avoided the use of specific determiners such as "all," "always," "none," "never," etc., and qualifying determiners such as "usually," "sometimes," "often," etc.
____ Used more false items than true items (but not more than 15% additional false items).
____ Included directions which clearly stated the basis for matching the stimuli with the response.
____ Explained whether or not a response could be used more than once and indicated where to write the answer.
____ Used only homogeneous material.
____ When possible, arranged the list of responses in some systematic order (e.g., chronologically, alphabetically).
____ Avoided grammatical or other clues to the correct response.
____ Kept items brief (limited the list of stimuli to under 10).
____ Included more responses than stimuli.
____ When possible, reduced the amount of reading time by including only short phrases or single words in the response list.
____ Omitted only significant words from the statement.
____ Did not omit so many words from the statement that the intended meaning was lost.
____ Avoided grammatical or other clues to the correct response.
____ Included only one correct response per item.
____ Made the blanks of equal length.
____ When possible, deleted the words at the end of the statement after the student was presented with a clearly defined problem.
____ Avoided lifting statements directly from the text, lecture, or other sources.
____ Limited the required response to a single word or phrase.
____ Prepared items that elicited the type of behavior you wanted to measure.
____ Phrased each item so that the student's task was clearly indicated.
____ Indicated for each item a point value or weight and an estimated time limit for answering.
____ Asked questions that elicited responses on which experts could agree that one answer is better than others.
____ Avoided giving the student a choice among optional items.
____ Administered several short-answer items rather than 1 or 2 extended-response items.
Grading Essay Test Items
____ Selected an appropriate grading model.
____ Tried not to allow factors which were irrelevant to the learning outcomes being measured to affect your grading (e.g., handwriting, spelling, neatness).
____ Read and graded all class answers to one item before going on to the next item.
____ Read and graded the answers without looking at the students' names to avoid possible preferential treatment.
____ Occasionally shuffled papers during the reading of answers.
____ When possible, asked another instructor to read and grade your students' responses.
____ Clearly identified and explained the problem to the student.
____ Provided directions which clearly informed the student of the type of response called for.
____ Stated in the directions whether or not the student must show work procedures for full or partial credit.
____ Clearly separated item parts and indicated their point values.
____ Used figures, conditions and situations which created a realistic problem.
____ Asked questions that elicited responses on which experts could agree that one solution and one or more work procedures are better than others.
____ Worked through each problem before classroom administration.
____ Prepared items that elicited the type of behavior you wanted to measure.
____ Clearly identified and explained the simulated situation to the student.
____ Made the simulated situation as "life-like" as possible.
____ Provided directions which clearly informed the students of the type of response called for.
____ When appropriate, clearly stated time and activity limitations in the directions.
____ Adequately trained the observer(s)/scorer(s) to ensure that they were fair in scoring the appropriate behaviors.
STUDENT EVALUATION OF TEST ITEM QUALITY
Using ICES Questionnaire Items to Assess Your Test Item Quality
The following set of ICES (Instructor and Course Evaluation System) questionnaire items can be used to assess the quality of your test items. The items are presented with their original ICES catalogue number. You are encouraged to include one or more of the items on the ICES evaluation form in order to collect student opinion of your item writing quality.
102--How would you rate the instructor's examination questions? (Excellent / Poor)
116--Did the exams challenge you to do original thinking? (Yes, very challenging / No, not challenging)
103--How well did examination questions reflect content and emphasis of the course? (Well related / Poorly related)
118--Were there "trick" or trite questions on tests? (Lots of them / Few if any)
114--The exams reflected important points in the reading assignments. (Strongly agree / Strongly disagree)
122--How difficult were the examinations? (Too difficult / Too easy)
119--Were exam questions worded clearly? (Yes, very clear / No, very unclear)
123--I found I could score reasonably well on exams by just cramming. (Strongly agree / Strongly disagree)
115--Were the instructor's test questions thought provoking? (Definitely yes / Definitely no)
121--How was the length of exams for the time allotted? (Too long / Too short)
125--Were exams adequately discussed upon return? (Yes, adequately / No, not enough)
109--Were exams, papers, reports returned with errors explained or personal comments? (Almost always / Almost never)
IV. ASSISTANCE OFFERED BY THE CENTER FOR INNOVATION IN TEACHING AND LEARNING (CITL)
The information on this page is intended for self-instruction. However, CITL staff members will consult with faculty who wish to analyze and improve their test item writing. The staff can also consult with faculty about other instructional problems. Instructors wishing to acquire CITL assistance can contact [email protected] .
V. REFERENCES FOR FURTHER READING
Ebel, R. L. (1965). Measuring educational achievement. Prentice-Hall.
Ebel, R. L. (1972). Essentials of educational measurement. Prentice-Hall.
Gronlund, N. E. (1976). Measurement and evaluation in teaching (3rd ed.). Macmillan.
Mehrens, W. A., & Lehmann, I. J. (1973). Measurement and evaluation in education and psychology. Holt, Rinehart & Winston.
Nelson, C. H. (1970). Measurement and evaluation in the classroom. Macmillan.
Payne, D. A. (1974). The assessment of learning: Cognitive and affective. D.C. Heath & Co.
Scannell, D. P., & Tracy, D. B. (1975). Testing and measurement in the classroom. Houghton Mifflin.
Thorndike, R. L. (1971). Educational measurement (2nd ed.). American Council on Education.
Center for Innovation in Teaching & Learning
249 Armory Building 505 East Armory Avenue Champaign, IL 61820
Brame, C. (2013). Writing good multiple choice test questions. Retrieved from https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/.
Constructing an Effective Stem
Constructing Effective Alternatives
Additional Guidelines for Multiple Choice Questions
Considerations for Writing Multiple Choice Items that Test Higher-order Thinking
Additional Resources
Multiple choice test questions, also known as items, can be an effective and efficient way to assess learning outcomes. Multiple choice test items have several potential advantages:
Reliability: Reliability is defined as the degree to which a test consistently measures a learning outcome. Multiple choice test items are less susceptible to guessing than true/false questions, making them a more reliable means of assessment. The reliability is enhanced when the number of MC items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from problems with scorer inconsistency that can plague scoring of essay questions.
Validity: Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple choice item much more quickly than an essay question, tests based on multiple choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment.
The key to taking advantage of these strengths, however, is construction of good multiple choice items.
A multiple choice item consists of a problem, known as the stem, and a list of suggested solutions, known as alternatives. The alternatives consist of one correct or best alternative, which is the answer, and incorrect or inferior alternatives, known as distractors.
1. The stem should be meaningful by itself and should present a definite problem. A stem that presents a definite problem allows a focus on the learning outcome. A stem that does not present a clear problem, however, may test students’ ability to draw inferences from vague descriptions rather than serving as a more direct test of students’ achievement of the learning outcome.
2. The stem should not contain irrelevant material, which can decrease the reliability and the validity of the test scores (Haladyna and Downing 1989).
3. The stem should be negatively stated only when significant learning outcomes require it. Students often have difficulty understanding items with negative phrasing (Rodriguez 1997). If a significant learning outcome requires negative phrasing, such as identification of dangerous laboratory or clinical practices, the negative element should be emphasized with italics or capitalization.
4. The stem should be a question or a partial sentence. A question stem is preferable because it allows the student to focus on answering the question rather than holding the partial sentence in working memory and sequentially completing it with each alternative (Statman 1988). The cognitive load is increased when the stem is constructed with an initial or interior blank, so this construction should be avoided.
1. All alternatives should be plausible. The function of the incorrect alternatives is to serve as distractors, which should be selected by students who did not achieve the learning outcome but ignored by students who did achieve the learning outcome. Alternatives that are implausible don’t serve as functional distractors and thus should not be used. Common student errors provide the best source of distractors.
2. Alternatives should be stated clearly and concisely. Items that are excessively wordy assess students’ reading ability rather than their attainment of the learning objective.
3. Alternatives should be mutually exclusive. Alternatives with overlapping content may be considered “trick” items by test-takers, excessive use of which can erode trust and respect for the testing process.
4. Alternatives should be homogenous in content. Alternatives that are heterogeneous in content can provide cues to students about the correct answer.
5. Alternatives should be free from clues about which response is correct. Sophisticated test-takers are alert to inadvertent clues to the correct answer, such as differences in grammar, length, formatting, and language choice in the alternatives. It’s therefore important that alternatives
have grammar consistent with the stem.
are parallel in form.
are similar in length.
use similar language (e.g., all unlike textbook language or all like textbook language).
6. The alternatives “all of the above” and “none of the above” should not be used. When “all of the above” is used as an answer, test-takers who can identify more than one alternative as correct can select the correct answer even if unsure about other alternative(s). When “none of the above” is used as an alternative, test-takers who can eliminate a single option can thereby eliminate a second option. In either case, students can use partial knowledge to arrive at a correct answer.
7. The alternatives should be presented in a logical order (e.g., alphabetical or numerical) to avoid a bias toward certain positions.
8. The number of alternatives can vary among items as long as all alternatives are plausible. Plausible alternatives serve as functional distractors, which are those chosen by students who have not achieved the objective but ignored by students who have achieved the objective. There is little difference in difficulty, discrimination, and test score reliability among items containing two, three, and four distractors.
Additional Guidelines
1. Avoid complex multiple choice items , in which some or all of the alternatives consist of different combinations of options. As with “all of the above” answers, a sophisticated test-taker can use partial knowledge to achieve a correct answer.
2. Keep the specific content of items independent of one another. Savvy test-takers can use information in one question to answer another question, reducing the validity of the test.
When writing multiple choice items to test higher-order thinking, design questions that focus on higher levels of cognition as defined by Bloom’s taxonomy. A stem that presents a problem requiring application of course principles, analysis of a problem, or evaluation of alternatives is focused on higher-order thinking and thus tests students’ ability to do such thinking. In constructing multiple choice items to test higher-order thinking, it can also be helpful to design problems that require multilogical thinking, where multilogical thinking is defined as “thinking that requires knowledge of more than one fact to logically and systematically apply concepts to a …problem” (Morrison and Free, 2001, p. 20). Finally, designing alternatives that require a high level of discrimination can also contribute to multiple choice items that test higher-order thinking.
Burton, S. J., Sudweeks, R. R., Merrill, P. F., & Wood, B. (1991). How to prepare better multiple choice test items: Guidelines for university faculty.
Cheung, D., & Bucat, R. (2002, June 20-21). How can we construct good multiple-choice items? Paper presented at the Science and Technology Education Conference, Hong Kong.
Haladyna, T. M. (1999). Developing and validating multiple-choice test items (2nd ed.). Lawrence Erlbaum Associates.
Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.
Morrison, S., & Free, K. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40, 17-24.
Your Article Library
Essay Test: Types, Advantages and Limitations | Statistics
After reading this article you will learn about: 1. Introduction to Essay Test 2. Types of Essay Test 3. Advantages 4. Limitations 5. Suggestions.
Introduction to Essay Test:
The essay tests are still commonly used tools of evaluation, despite the increasingly wider applicability of the short answer and objective type questions.
There are certain outcomes of learning (e.g., organising, summarising, integrating ideas and expressing in one’s own way) which cannot be satisfactorily measured through objective type tests. The importance of essay tests lies in the measurement of such instructional outcomes.
An essay test may give full freedom to the students to write any number of pages. The required response may vary in length. An essay type question requires the pupil to plan his own answer and to explain it in his own words. The pupil exercises considerable freedom to select, organise and present his ideas. Essay type tests provide a better indication of pupil’s real achievement in learning. The answers provide a clue to nature and quality of the pupil’s thought process.
That is, we can assess how the pupil presents his ideas (whether his manner of presentation is coherent, logical and systematic) and how he concludes. In other words, the answer of the pupil reveals the structure, dynamics and functioning of pupil’s mental life.
The essay questions are generally thought to be the traditional type of questions which demand lengthy answers. They are not amenable to objective scoring as they give scope for halo-effect, inter-examiner variability and intra-examiner variability in scoring.
Types of Essay Test:
There can be many types of essay tests:
Some of these are given below with examples from different subjects:
1. Selective Recall.
e.g. What was the religious policy of Akbar?
2. Evaluative Recall.
e.g. Why did the First War of Independence in 1857 fail?
3. Comparison of two things—on a single designated basis.
e.g. Compare the contributions made by Dalton and Bohr to Atomic theory.
4. Comparison of two things—in general.
e.g. Compare Early Vedic Age with the Later Vedic Age.
5. Decision—for or against.
e.g. Which type of examination do you think is more reliable: oral or written? Why?
6. Causes or effects.
e.g. Discuss the effects of environmental pollution on our lives.
7. Explanation of the use or exact meaning of some phrase in a passage or a sentence.
e.g., Joint Stock Company is an artificial person. Explain ‘artificial person’ bringing out the concepts of Joint Stock Company.
8. Summary of some unit of the text or of some article.
9. Analysis.
e.g. What was the role played by Mahatma Gandhi in India’s freedom struggle?
10. Statement of relationship.
e.g. Why is knowledge of Botany helpful in studying agriculture?
11. Illustration or examples (your own) of principles in science, language, etc.
e.g. Illustrate the correct use of subject-verb position in an interrogative sentence.
12. Classification.
e.g. Classify the following into Physical change and Chemical change with explanation. Water changes to vapour; Sulphuric Acid and Sodium Hydroxide react to produce Sodium Sulphate and Water; Rusting of Iron; Melting of Ice.
13. Application of rules or principles in given situations.
e.g. If you sat halfway between the middle and one end of a seesaw, would a person sitting on the other end have to be heavier or lighter than you in order to make the seesaw balance in the middle? Why?
14. Discussion.
e.g. Partnership is a relationship between persons who have agreed to share the profits of a business carried on by all or any of them acting for all. Discuss the essentials of partnership on the basis of this definition.
15. Criticism—as to the adequacy, correctness, or relevance—of a printed statement or a classmate’s answer to a question on the lesson.
e.g. What is wrong with the following statement?
The Prime Minister is the sovereign Head of State in India.
16. Outline.
e.g. Outline the steps required in computing the compound interest if the principal amount, rate of interest and time period are given as P, R and T respectively.
17. Reorganization of facts.
e.g. The student is asked to interview some persons and find out their opinion on the role of the UN in world peace. In the light of the data thus collected he/she can reorganise what is given in the textbook.
18. Formulation of questions (problems and questions raised).
e.g. After reading a lesson the pupils are asked to raise related problems and questions.
19. New methods of procedure.
e.g. Can you solve this mathematical problem by using another method?
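Some of the item types above have model answers a marker can verify exactly. As a hypothetical worked illustration of the compound-interest outline (type 16), here is a minimal Python sketch; the function name and the sample values P = 1000, R = 5, T = 2 are invented for illustration:

```python
# Worked answer to the compound-interest outline (illustrative only).
# Steps: (1) compute the annual growth factor (1 + R/100),
#        (2) raise it to the power T to get the amount factor,
#        (3) multiply by P to get the final amount,
#        (4) subtract P to get the compound interest.
def compound_interest(p, r, t):
    """Interest earned on principal p at r% per year,
    compounded annually for t years."""
    amount = p * (1 + r / 100) ** t
    return amount - p

# Invented sample values: P = 1000, R = 5, T = 2
print(compound_interest(1000, 5, 2))  # approximately 102.5
```

A marking scheme for such an item can then award value points for each of the four steps in the outline, not merely for the final number.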
Advantages of the Essay Tests:
1. It is relatively easier to prepare and administer a six-question extended-response essay test than to prepare and administer a comparable 60-item multiple-choice test.
2. It is the only means that can assess an examinee’s ability to organise and present his ideas in a logical and coherent fashion.
3. It can be successfully employed for practically all the school subjects.
4. Some of the objectives such as ability to organise ideas effectively, ability to criticise or justify a statement, ability to interpret, etc., can be best measured by this type of test.
5. Logical thinking and critical reasoning, systematic presentation, etc. can be best developed by this type of test.
6. It helps to induce good study habits such as making outlines and summaries, organising the arguments for and against, etc.
7. The students can show their initiative, the originality of their thought and the fertility of their imagination as they are permitted freedom of response.
8. The responses of the students need not be completely right or wrong. All degrees of comprehensiveness and accuracy are possible.
9. It largely eliminates guessing.
10. They are valuable in testing the functional knowledge and power of expression of the pupil.
Limitations of Essay Tests:
1. One of the serious limitations of the essay tests is that these tests do not give scope for larger sampling of the content. You cannot sample the course content so well with six lengthy essay questions as you can with 60 multiple-choice test items.
2. Such tests encourage selective reading and emphasise cramming.
3. Moreover, scoring may be affected by spelling, good handwriting, coloured ink, neatness, grammar, length of the answer, etc.
4. The long-answer type questions are less valid and less reliable, and as such they have little predictive value.
5. It requires excessive time on the part of students to write; for the assessor, reading essays is very time-consuming and laborious.
6. It can be assessed only by a teacher or competent professionals.
7. Improper and ambiguous wording handicaps both the students and valuers.
8. Mood of the examiner affects the scoring of answer scripts.
9. There is halo effect-biased judgement by previous impressions.
10. The scores may be affected by his personal bias or partiality for a particular point of view, his way of understanding the question, his weightage to different aspect of the answer, favouritism and nepotism, etc.
Thus, the potential disadvantages of essay type questions are:
(i) Poor predictive validity,
(ii) Limited content sampling,
(iii) Score unreliability, and
(iv) Scoring constraints.
Suggestions for Improving Essay Tests:
The teacher can sometimes, through essay tests, gain improved insight into a student’s abilities, difficulties and ways of thinking and thus have a basis for guiding his/her learning.
(A) While Framing Questions:
1. Give adequate time and thought to the preparation of essay questions, so that they can be re-examined, revised and edited before they are used. This would increase the validity of the test.
2. The item should be so written that it will elicit the type of behaviour the teacher wants to measure. If one is interested in measuring understanding, he should not ask a question that will elicit an opinion; e.g.,
“What do you think of Buddhism in comparison to Jainism?”
3. Use words which themselves give directions e.g. define, illustrate, outline, select, classify, summarise, etc., instead of discuss, comment, explain, etc.
4. Give specific directions to students to elicit the desired response.
5. Indicate clearly the value of the question and the time suggested for answering it.
6. Do not provide optional questions in an essay test because—
(i) It is difficult to construct questions of equal difficulty;
(ii) Students do not have the ability to select those questions which they will answer best;
(iii) A good student may be penalised because he is challenged by the more difficult and complex questions.
7. Prepare and use a relatively large number of questions requiring short answers rather than just a few questions involving long answers.
8. Do not start essay questions with such words as list, who, what, whether. If we begin the questions with such words, they are likely to be short-answer questions and not essay questions, as we have defined the term.
9. Adapt the length of the response and complexity of the question and answer to the maturity level of the students.
10. The wording of the questions should be clear and unambiguous.
11. It should be a power test rather than a speed test. Allow a liberal time limit so that the essay test does not become a test of speed in writing.
12. Supply the necessary training to the students in writing essay tests.
13. Questions should be graded from simple to complex so that all the testees can answer at least a few questions.
14. Essay questions should provide value points and marking schemes.
(B) While Scoring Questions:
1. Prepare a marking scheme, suggesting the best possible answer and the weightage given to the various points of this model answer. Decide in advance which factors will be considered in evaluating an essay response.
2. While assessing the essay response, one must:
a. Use appropriate methods to minimise bias;
b. Pay attention only to the significant and relevant aspects of the answer;
c. Be careful not to let personal idiosyncrasies affect assessment;
d. Apply a uniform standard to all the papers.
3. The examinee’s identity should be concealed from the scorer. In this way we can avoid the “halo effect” or bias which may affect the scoring.
4. Check your marking scheme against actual responses.
5. Once the assessment has begun, the standard should not be changed, nor should it vary from paper to paper or reader to reader. Be consistent in your assessment.
6. Grade only one question at a time for all papers. This will help you minimise the halo effect by letting you become thoroughly familiar with just one set of scoring criteria and concentrate completely on them.
7. The mechanics of expression (legibility, spelling, punctuation, grammar) should be judged separately from what the student writes, i.e. the subject matter content.
8. If possible, have two independent readings of the test and use the average as the final score.
What's on the Tests
All ACCUPLACER tests use a multiple-choice format except for WritePlacer®, which is an essay test. There’s no time limit on the tests, so you can focus on doing your best to demonstrate your skills.
ACCUPLACER uses the latest computer-adaptive technology, which means the questions you see are based on your skill level. Your response to each question determines the difficulty level of the next question, so it’s important to give each question as much thought as you can before selecting your answer.
Remember: No one passes or fails ACCUPLACER tests, but it’s important to complete the test using your best effort, so you can get an accurate measure of your academic skills and be placed in the appropriate course.
Get resources to help you practice for the tests.
Inside the Tests
Reading Test
The Reading test assesses your ability to derive meaning from a range of texts and to determine the meaning of words and phrases in short and extended contexts. Passages on the test cover a range of content areas, writing modes, and complexities. Both single and paired passages are included.
Writing Test
The Writing test evaluates your ability to revise and edit multiparagraph text.
Arithmetic Test
The Arithmetic test focuses on computation, order of operations, estimation and rounding, comparing and ordering values in different formats, and recognizing equivalent values across formats. The Arithmetic test assesses the following knowledge and skills:
Whole Number Operations
Fraction Operations
Decimal Operations
Number Comparisons and Equivalents
Quantitative Reasoning, Algebra, and Statistics (QAS) Test
The Quantitative Reasoning, Algebra, and Statistics (QAS) test assesses the following knowledge and skills:
Rational Numbers
Ratio and Proportional Relationships
Algebraic Expressions
Linear Equations
Linear Applications and Graphs
Probability and Sets
Descriptive Statistics
Geometry Concepts
Advanced Algebra and Functions (AAF) Test
The Advanced Algebra and Functions (AAF) test assesses the following knowledge and skills:
Radical and Rational Equations
Polynomial Equations
Exponential and Logarithmic Equations
Trigonometry
WritePlacer Essay
The WritePlacer essay measures your ability to write effectively, which is critical to your academic success. Your score is based on your ability to express, organize, and support your opinions and ideas. The position you take on the essay topic doesn’t affect your score. The following characteristics of writing will be considered:
Purpose and Focus: The extent to which you present information in a unified and coherent manner, clearly addressing the issue.
Organization and Structure: The extent to which you order and connect ideas.
Development and Support: The extent to which you develop and support ideas.
Sentence Variety and Style: The extent to which you craft sentences and paragraphs demonstrating control of vocabulary, voice, and structure.
Mechanical Conventions: The extent to which you express ideas using Standard Written English.
Critical Thinking: The extent to which you communicate a point of view and demonstrate reasoned relationships among ideas.
ACCUPLACER for English Language Learners
If English isn't your first language, you may be asked to take one or more ACCUPLACER placement tests for English language learners to assess your English language skills.
The ESL Language Use test measures your proficiency in using correct grammar in English sentences.
The ESL Listening test measures your ability to listen to and understand one or more people speaking in English. Conversations take place in a wide range of locations including lecture halls, grocery stores, and libraries.
The ESL Reading Skills test measures your ability to read English through the comprehension of short passages.
The ESL Sentence Meaning test measures how well you understand the meaning of sentences in English.
About the ACT Test
The ACT ® test motivates students to perform to their best ability. Test scores reflect what students have learned throughout high school and provide colleges and universities with excellent information for recruiting, advising, placement, and retention.
Many times, students who are not considering higher education rethink their plans when they see their ACT test results. This is especially true for underrepresented students. To support college and career planning, the ACT also offers a career exploration component to help students identify career options.
Who Typically Takes the ACT
The ACT test is designed for the 10th, 11th, and/or 12th grade levels to provide schools and districts with the data necessary to position students for success after high school.
Did You Know?
More than 1.34 million students in the 2022 high school graduating class took the ACT test.
ACT test scores are accepted by all four-year US colleges and universities, including highly selective institutions.
The ACT is not an aptitude or an IQ test. Questions are directly related to what students have learned in high school courses.
The ACT is administered both nationally and internationally each year, with additional state and district test dates.
The ACT is approved for use in state models for federal and state accountability.
ACT College and Career Readiness Standards
The standards are empirically derived descriptions of the essential skills and knowledge students need to become ready for college and career, giving clear meaning to test scores and serving as a link between what students have learned and what they are ready to learn next.
When students take the ACT test, high school educators and counselors receive valuable information for guidance and curriculum development. K-12 professionals use ACT reports to:
Guide students toward college and career readiness
Assist students with college and career planning
Evaluate the effectiveness of instruction
Plan changes and improvements in curriculum
The ACT Test User Handbook
This handbook offers educators the most comprehensive information for K-12 professionals about the ACT test. In addition to detailed information about updates to the test, accommodations, and reports, the handbook offers helpful tips about:
Preparing for and Taking the ACT
ACT Reports and Services
Uses of ACT Data
ACT Scores
What the ACT Measures
The ACT contains four multiple-choice tests—English, mathematics, reading, and science—and an optional writing test. These tests are designed to measure skills that are most important for success in postsecondary education and that are acquired in secondary education. The score range for each of the four multiple-choice tests is 1–36. The Composite score is the average of the four test scores rounded to the nearest whole number.
The ACT English test puts an examinee in the position of a writer who makes decisions to revise and edit a text. Short texts and essays in different genres provide a variety of rhetorical situations. Passages are chosen for their appropriateness in assessing writing and language skills and to reflect students’ interests and experiences.
The ACT mathematics test assesses the skills students typically acquire in courses taken through grade 11. The material covered on the test emphasizes the major content areas that are prerequisites to successful performance in entry-level courses in college mathematics. Knowledge of basic formulas and computational skills are assumed as background for the problems, but recall of complex formulas and extensive computation are not required.
The ACT reading test measures the ability to read closely, reason logically about texts using evidence, and integrate information from multiple sources. The test questions focus on the mutually supportive skills that readers must bring to bear in studying written materials across a range of subject areas. Specifically, questions will ask you to determine main ideas; locate and interpret significant details; understand sequences of events; make comparisons; comprehend cause-effect relationships; determine the meaning of context-dependent words, phrases, and statements; draw generalizations; analyze the author’s or narrator’s voice and method; analyze claims and evidence in arguments; and integrate information from multiple texts.
The ACT science test measures the interpretation, analysis, evaluation, reasoning, and problem-solving skills required in the natural sciences. The test presents several authentic scientific scenarios, each followed by a number of multiple-choice test questions. The content of the test includes biology, chemistry, Earth/space sciences (e.g., geology, astronomy, and meteorology), and physics. The questions require you to recognize and understand the basic features of, and concepts related to, the provided information; to examine critically the relationship between the information provided and the conclusions drawn or hypotheses developed; and to generalize from given information to gain new information, draw conclusions, or make predictions.
The optional ACT writing test is an essay test that measures writing skills taught in high school English classes and entry level college composition courses. The test consists of one writing prompt that describes a complex issue and provides three different perspectives on the issue. You are asked to read the prompt and write an essay in which you develop your own perspective on the issue. Your essay must analyze the relationship between your own perspective and one or more other perspectives. You may adopt one of the perspectives given in the prompt as your own, or you may introduce one that is completely different from those given.
Complete information about the ACT test is available in the technical manual.
ACT High School Report
The ACT High School Report provides comprehensive information about a student's needs, interests, background, and abilities. The report includes the following sections:
Identifying Information
Scores and Predictive Data
College Readiness
Information about Colleges
College Selection Items
Educational and Vocational Plans
Educational Needs and Interests
Interest Inventory Scores and Map Regions
You can also see the questions that students answer when they register to take the ACT test.
For training on how to use data from the ACT test for advising and curriculum development, see the list of available videos, webinars, and workshops that ACT offers.
Electronic Score Reporting
Data from the ACT High School Report are available in ASCII flat file format, delivered online for high schools and districts to import into any system set up to receive the data. Records are available on demand as scores are released. Data are also provided for the current testing year as well as the three previous testing years.
The current High School Record Layout (xlsx) is the key to interpreting the file you receive from ACT. It identifies location, field name, and field content for each data element.
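Because the file is fixed-width, importing it amounts to slicing each record at the positions the record layout specifies. The field names and positions below are hypothetical placeholders, not ACT's actual layout — the real locations and names come from the current High School Record Layout file. A minimal Python sketch of the parsing step might look like:

```python
# Hypothetical fixed-width layout as (field name, start, end), using the
# 1-based inclusive positions a record-layout spreadsheet typically lists.
# These fields are invented for illustration; consult ACT's High School
# Record Layout for the actual data elements.
LAYOUT = [
    ("last_name", 1, 16),
    ("first_name", 17, 28),
    ("grad_year", 29, 32),
    ("composite_score", 33, 34),
]

def parse_record(line: str) -> dict:
    """Slice one fixed-width record into named fields, trimming padding."""
    return {name: line[start - 1:end].strip() for name, start, end in LAYOUT}

# Build a sample record padded to the layout's 34-character width.
sample = "Doe".ljust(16) + "Jane".ljust(12) + "2025" + "28"
print(parse_record(sample))
# → {'last_name': 'Doe', 'first_name': 'Jane', 'grad_year': '2025', 'composite_score': '28'}
```

In practice you would read the delivered file line by line and feed each record through a parser like this into your student information system.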
Help your students do their best on test day!
ACT offers numerous ways for students to prepare for test day, including:
The Official ACT Self-Paced Course, Powered by Kaplan — Bite-sized, on-demand lessons offer the perfect mix of structure and flexibility.
The Official ACT Live Online Class, Powered by Kaplan — A whole team of expert teachers keeps you engaged in a virtual classroom.
The Official ACT Subject Guides — Individual prep books perfect for students who want to practice a specific subject to improve their test scores.
The Official ACT Prep Guide — An ACT-authorized prep book, with three practice tests, each with an optional writing test, plus access to hundreds of additional questions online.
Preparing for the ACT — This free booklet includes helpful test information, a complete practice test, and a sample writing prompt.
Sample Test Questions — Practice questions to give you a taste of what to expect on the ACT test.
Live and On-Demand Webinars — Choose from live and recorded webinars to help make the most of your ACT test prep options.
ACT Online Prep
Help your students prepare for the ACT test. The same research and expertise that has made the ACT test the most widely used college entrance exam in the nation was used to develop ACT Online Prep. Benefits of the program for schools and districts include:
Ability to monitor performance with the Administrator’s Dashboard—See how long your students are spending in the system, how they’re performing on the practice questions and tests, and the areas in which a whole class may need targeted help.
Flexible, personalized learning paths—The ACT Online Prep system drives students through the courses so they can review independently, at their own pace, without falling behind.
Confidence-building experiences—Using the practice questions and tests, students will familiarize themselves with the structure of the actual ACT test. There will be no surprises on test day for your students.
A free mobile app for students so that they can review for the ACT anytime, anywhere.
Please note that ACT test preparation materials are copyrighted and may not be copied or distributed without ACT's prior written permission.
Purchase ACT Online Prep annual licenses for students in your school or district. Discounts are available for GEAR UP organizations and for schools where more than 50% of students qualify for free or reduced-price lunch.
After your order is processed, each school-level administrator will receive an email from ACT that includes a quick-start guide and instructions for activating their account. The administrator can then upload students, assign instructors, and create classrooms.
Administration User Guide
The ACT Online Prep Administration User Guide (PDF) provides all the information you need to set up classrooms, instructors, and students as well as to monitor performance.
Order ACT Fee Waivers
ACT provides a variety of materials you can use to help your students learn about and prepare for the test.
Use ACT's account-based ordering platform to request fee waivers.
Additional Support Items:
How to Order Fee Waivers
Fee Waiver Eligibility Requirements and Procedures
Prebilled Voucher Order Form
Alternate Format Practice Test Materials Order Form
High School Codes
An ACT/SAT Common High School Code allows students attending your school to send official ACT and SAT scores directly and automatically to your school. Forms are available to apply for a code, to request name or address changes, or to indicate a school is no longer in operation. No payment is required for these services.
Apply for a High School Code
Make Changes to an Existing High School Code or Deactivate a High School Code
Become an ACT Test Center
Help students take the guesswork out of finding a test center. Your school can request to serve as a test center for students taking the ACT on a national test date. As a test center, your school provides:
A staff member to serve as the Test Supervisor
Other school staff willing to serve as room supervisors and proctors
Space for testing on ACT National Testing dates
Quiet areas—free from distractions and other events
Classrooms, preferably, with full-sized desks
Plenty of space between examinees, for security purposes
A secure location to store test materials
Complete the ACT Test Center Request form if you would like your institution to become an ACT Test Center.
Test Security
More than three thousand colleges, universities, and scholarship agencies use ACT test scores to make decisions about admission, scholarship awards, and course placement. Because these institutions, as well as the examinees, rely on the integrity of ACT test scores, ACT takes seriously the importance of reporting valid test scores.
In addition to conducting our own internal score reviews, ACT regularly receives inquiries from college admissions officers, high school counselors, and others who have concerns about an individual examinee's score.
Score Inquiry
You can report concerns using ACT's Score Inquiry form. ACT will review the inquiry and investigate the validity of the scores. If you prefer, you may submit an inquiry anonymously online or by calling 855.382.2645 to use our dedicated Test Security Hotline.
If ACT initiates a score review, ACT will notify the examinee directly. For privacy reasons, ACT generally does not discuss the details of a score review with anyone other than the examinee unless the examinee expressly authorizes us to do so by executing an Authorization to Release Personal Information form.
Only official score recipients will receive notice of ACT’s decision regarding the validity of the scores. For your institution to be an official score recipient, the examinee must request that ACT send the score report to your institution. An examinee can send official scores by logging into his/her ACT web account and choosing "Send Your Scores."
For complete details of ACT's score review process, please see Procedures for Investigating Testing Irregularities and Questioned Test Scores (PDF).
Additional Information
Authorization to Release Information (PDF)
Procedures for Investigating Testing Irregularities and Questioned Test Scores (PDF)
Mail: ACT Test Security (53) P.O. Box 168 Iowa City, IA 52243-0168
What Makes a Good Test?
It Measures What It Purports to Measure
This should go without saying. However, tests like the SAT prove that this isn't always the case. For many decades, the test was the "Scholastic Aptitude Test," and it purported to measure intelligence. We helped push the College Board to admit that the SAT did no such thing and the test was renamed in 1994 (fittingly, the acronym "SAT" now stands for nothing at all). The College Board also promoted the idea that the SAT measures high school studies in a way that eliminated differences in grades across schools and classes; The Princeton Review helped show that the test measures almost nothing taught in high school. Finally, the test's advocates claim it predicts college success. In fact, the correlation between college performance and SAT scores is weaker than that indicated by high school transcripts and not much better than family income and other purely socioeconomic indicators.
It's Unbiased
High-stakes tests should be unbiased. That doesn't mean that every demographic group should score equally well; it simply means that similar students should achieve similar scores. Women score 40 to 50 points lower on the SAT, for example, than do men, though they have better grades in both high school and college. Since SAT scores determine both admissions and scholarship/financial aid awards, women are doubly penalized.
At the same time, tests can help highlight unequal outcomes, as with the provision in No Child Left Behind that requires states to report scores separately for each subgroup of students (boys and girls, rich and poor, white and non-white, etc.). This disaggregated data helps expose systems that are failing the kids most in need.
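The idea behind disaggregated reporting can be sketched in a few lines: instead of one overall average, compute and report each subgroup's average separately so that gaps stay visible. The subgroup labels and scores below are invented for illustration:

```python
from collections import defaultdict

def mean_by_subgroup(records):
    """Report the average score separately for each subgroup."""
    totals = defaultdict(lambda: [0.0, 0])  # subgroup -> [sum, count]
    for subgroup, score in records:
        totals[subgroup][0] += score
        totals[subgroup][1] += 1
    return {group: s / n for group, (s, n) in totals.items()}

# Invented example data: the aggregate average here is 22.5, which would
# hide the 7-point gap that subgroup reporting exposes.
scores = [("low_income", 18), ("low_income", 20),
          ("not_low_income", 25), ("not_low_income", 27)]
print(mean_by_subgroup(scores))
# → {'low_income': 19.0, 'not_low_income': 26.0}
```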
It's Fair, Open, and Has Reasonable Policies
More than 20 years ago, New York State Senator Ken LaValle promulgated the first "Truth-in-Testing" laws. They required that any admissions test given in New York be subject to independent review of its content and scoring, that test items and scoring be publicly released, and that there be due process for any student accused of cheating or other irregularities.
Although the Educational Testing Service (the designers of the SAT) lobbied hard against the laws, claiming they would raise the cost and difficulty of administering the test so greatly that it could no longer be given in New York, no such calamities took place. In fact, the testing companies have since stated that Truth-in-Testing has made their tests better. We believe that all high-stakes tests, including those used for K-12 accountability, should be subject to similar rules.
It Promotes Good Education
High-stakes tests are more than snapshots or benchmarks. They are powerful motivators of whatever behavior, good or bad, will lead most directly to higher scores. You can see this demonstrated when some schools drop recess in favor of narrow drill-and-kill practice sessions, when others provide their teachers with high-quality professional development designed to improve teaching and learning, or in the hard work kids do in our test prep courses to learn skills that are useful only on tests.
Psychometricians tend to focus on a test's accuracy, precision, and reliability. More important than those, though, are the teaching and learning behaviors that the test promotes. The best way to prepare for an essay test is to write a lot; not surprisingly, teachers in states giving essay tests have their students write more. Though an essay test may be no better than a multiple-choice test at assessing writing skills (and somewhat more expensive), it has the inherent advantage of actually getting teachers to get kids to write.
Test Preparation Should Be Stress-Relieving, Not Stress-Inducing
If you give high-stakes tests, people will prepare for them. The question is whether that preparation is efficient, equitable, and effective, and whether it distorts or disrupts the larger learning context. Our twenty-five years of test prep experience have taught us that, above all, there is a time and a place for it, ideally one that is as minimally intrusive and time-consuming as possible. When we work with schools, we show teachers how to avoid test prep and instead focus on the real work of education, keeping their classrooms from becoming dominated by deadening exercises intended primarily to raise test scores. Assessment should be the tail, not the dog.
From the outset, we've made sure that the fruits of our research and development have been widely available through inexpensive books, free distance learning, and courses underwritten by The Princeton Review Foundation. And we work with thousands of schools at every socioeconomic level to help them deal with testing and admissions issues, almost always at a cost savings relative to their existing approaches, and with better outcomes.
Finally, we measurably improve performance. We hire outside firms to assess our results and participate in every third-party study proposed to us. We encourage our customers to ask us, as well as our competitors, for documentation of any performance claims.
Through all of these approaches, and with a consistently honest voice about testing, we've tried to relieve, rather than stimulate, anxiety.
The GRE® General Test
One test for graduate, business and law school
Overview of the Analytical Writing Measure
Analytical Writing Measure (beginning September 22, 2023)
The Analytical Writing measure of the GRE General Test administered beginning September 22, 2023, assesses your critical thinking and analytical writing skills by assessing your ability to:
articulate and support complex ideas
construct arguments
sustain a focused and coherent discussion
It doesn’t assess specific content knowledge.
The Analytical Writing measure consists of a 30-minute “Analyze an Issue” task. This task presents an opinion on an issue and instructions on how to respond. You’re required to evaluate the issue, consider its complexities and develop an argument with reasons and examples to support your views.
You’ll use a basic word processor developed by ETS to type your essay responses. The word processor contains the following functionalities: insert text, delete text, cut-and-paste and undo the previous action. Tools such as a spellchecker and grammar checker are not available.
Analytical Writing Measure before September 22, 2023
The Analytical Writing measure of the GRE General Test administered before September 22, 2023, assesses your critical thinking and analytical writing skills by assessing your ability to:
construct and evaluate arguments
The Analytical Writing measure consists of two separately timed analytical writing tasks:
The "Analyze an Issue" task presents an opinion on an issue and instructions on how to respond. You’re required to evaluate the issue, consider its complexities and develop an argument with reasons and examples to support your views.
The "Analyze an Argument" task requires you to evaluate an argument according to specific instructions. You’ll need to consider the logical soundness of the argument rather than agree or disagree with the position it presents.
The two 30-minute tasks are complementary. The Issue task requires you to construct your own argument, while the Argument task requires you to evaluate someone else's argument.
Preparing for the Analytical Writing measure
Everyone — even the most practiced and confident of writers — should spend time preparing for the Analytical Writing measure to understand the skills measured and how the tasks are scored. It may also be useful to review the scoring guides, sample topics, scored sample essay responses and rater commentary for each task.
The tasks in the Analytical Writing measure relate to a broad range of subjects — from the fine arts and humanities to the social and physical sciences — but don’t require specific content knowledge. Each task has been tested by actual GRE test takers to ensure that it possesses several important characteristics, including the following:
GRE test takers, regardless of their field of study or special interests, understood the task and could easily respond to it.
The task elicited the kinds of complex thinking and persuasive writing that university faculty consider important for success in graduate school.
The responses were varied in content and in the way the writers developed their ideas.
Published topic pools for the Analytical Writing measure
To help you prepare for the Analytical Writing measure, the GRE Program has published the entire pool of tasks from which your test tasks will be selected. You might find it helpful to review the Issue and Argument pools:
Issue Topic Pool (PDF)
Argument Topic Pool (PDF) (the Argument task was removed from the General Test beginning September 22, 2023)
Test-taking strategies for the Analytical Writing measure (in the General Test beginning September 22, 2023)
Before taking the GRE General Test, review the strategies, sample topics, sample essay responses with rater commentary, and scoring guide for the task. This will give you a deeper understanding of how raters evaluate essays and the elements they're looking for in an essay.
It is important to budget your time. Within the 30-minute time limit, allow sufficient time to consider the issue and the specific instructions, plan a response, and compose your essay. You want your essay response to be the best possible example of your writing that you can produce under the testing conditions.
Save a few minutes at the end of the timed task to check for obvious errors. An occasional spelling or grammatical error won’t affect your score, but serious and persistent errors detract from the overall effectiveness of your writing and lower your score accordingly.
Essay Test: The Ultimate Guide with The Best Strategies
An essay test, a fundamental tool in academic assessment, measures a student's ability to express, argue, and structure their thoughts on a given subject through written words. This test format delves deeper into a student's critical thinking and writing skills unlike other conventional exam types.
Psychology: A Concise Introduction 5th Edition: Ch 5 Study ...
An essay test measures _____, and a multiple-choice test measures _____. recall; recognition. Which of the following theories of forgetting argues that the forgotten information was in long-term memory but is no longer available? storage decay theory.
Improving Your Test Questions
measure almost any important educational achievement a written test can measure. test understanding and ability to apply principles. ... An essay test item can be classified as either an extended-response essay item or a short-answer essay item. The latter calls for a more restricted or limited answer in terms of form or scope. An example of ...
Writing Good Multiple Choice Test Questions
In addition, the objective scoring associated with multiple choice test items frees them from problems with scorer inconsistency that can plague scoring of essay questions. Validity: Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple choice item much ...
PDF AN APPROACH TO ESSAY TESTS
AN APPROACH TO ESSAY TESTS Essay tests are tests for which students give long, written answers to test questions. They are sometimes called subjective tests because they are graded on the basis of the judgments, opinions, and preferences of the person who reads them. When you write answers to an essay test, one instructor might think your
PDF Strategies for Essay Writing
Harvard College Writing Center 2 Tips for Reading an Assignment Prompt When you receive a paper assignment, your first step should be to read the assignment
PDF Essay Tests
part of class tests. Essay tests differ from essays written outside class in several ways that make them more intimidating: they are timed, the questions/prompts are not known before the test begins, and help or advice is not permitted. Following the steps below will help you prepare to write a successful essay under these conditions. But let us
PDF Essay Exams: Common Question Types
Essay Exams: Common Question Types, Spring 2009. Rev. Summer 2014. 1 of 2 Essay Exams: Common Question Types When approaching any essay exam, it is important to identify what kind of response is expected—that is, what is being asked of you and what information you are required to include.
Writing Test Prep
The ACT writing test is a 40-minute essay test that measures your writing skills. The test consists of one writing prompt that will describe a complex issue and present three different perspectives on that issue. It is a paper-and-pencil test. You will write your essay in pencil (no mechanical pencils or ink pens) on the lined pages of an ...
Essay Test: Types, Advantages and Limitations
1. It is relatively easier to prepare and administer a six-question extended-response essay test than to prepare and administer a comparable 60-item multiple-choice test. 2. It is the only means that can assess an examinee's ability to organise and present his ideas in a logical and coherent fashion. 3.
PDF HOW TO WRITE BETTER TESTS
to use. Two-thirds of the faculty surveyed said they preferred the essay format but could not use it because of the size of their classes. They used essay tests only in small classes. 3. Time Available to Prepare and Score Test It takes a long time to score an essay test. By contrast, it takes a long time to construct a multiple-choice test.
PDF How to Prepare Better Multiple-Choice Test Items: Guidelines for
... the distinction the students must make in order to identify the correct answer. Multiple-choice items are amenable to item analysis, which enables the teacher to improve the item by replacing distractors that are not functioning properly. In addition, the distractors chosen by the student may be used to diagnose m...
PDF A Separate title page
An essay test is a direct procedure for assessing candidates' writing ability. Essay tests are comprehensive tests which target at ... about the connection between what a test claims to measure and what it actually measures. All of these definitions imply the relative nature of validity, i.e., that no test
What's on the Tests
WritePlacer Essay. The WritePlacer essay measures your ability to write effectively, which is critical to your academic success. Your score is based on your ability to express, organize, and support your opinions and ideas. ... The ESL Listening test measures your ability to listen to and understand one or more people speaking in English ...
About the ACT Test
The optional ACT writing test is an essay test that measures writing skills taught in high school English classes and entry level college composition courses. The test consists of one writing prompt that describes a complex issue and provides three different perspectives on the issue. You are asked to read the prompt and write an essay in which ...
What Makes a Good Test?
The best way to prepare for an essay test is to write a lot; not surprisingly, teachers in states giving essay tests have their students write more. Though it may be no better than a multiple-choice test at assessing writing skills (and somewhat more expensive) it has the inherent advantage of actually getting teachers to get kids to write.
GRE General Test Analytical Writing Overview
The Analytical Writing measure of the GRE General Test administered before September 22, 2023, assesses your critical thinking and analytical writing skills by assessing your ability to: articulate and support complex ideas. construct and evaluate arguments. sustain a focused and coherent discussion. It doesn't assess specific content knowledge.
Psych 201 Final Chapters 6 & 7 Flashcards
The reason students may perceive an essay test as being "more difficult" than a multiple-choice test is probably that an essay relies on the ability to _____ without any retrieval cues, while the multiple-choice test measures _____. use recall; recognition.
What Is the ACT? A Complete Guide
The ACT is one of the most popular standardized college entrance exams in the U.S. The three-hour test measures college readiness in English, math, reading, and science. Test-takers can also sit for an optional essay section. Scoring well on the ACT is a strong indicator of success your first year of college.
An essay test measures _____ and a multiple-choice test measures
An essay test measures recall while a multiple-choice test measures recognition. Recall requires one to generate an answer and then decide if it seems correct, a two-step process which is typically more challenging. Recognition, however, simply involves identifying the correct answer from a given set of options. Both testing methods assess ...
Opinion
Readers offer different answers in response to a guest essay. Also: Protests on campus; China's adoption policy; Trump the fixer; hearing loss.