An assessment may contain a mixture of items that are automatically scored and items that need human intervention (and even individual interactions within a single test item may be mixed). Some items might be simple multiple-choice questions that can be auto-scored in TAO, whilst others are open-ended questions that need a human scorer to mark them. As the assessment industry becomes increasingly digitized, institutions need a streamlined and secure way of measuring and storing assessments of all types, including those that contain open response sections.
TAO Grader, TAO’s open response scoring software, enables teachers and other subject matter experts (SMEs) to manually score the tests that have been set up in the TAO platform, providing a more efficient solution for grading open response and essay questions. With TAO Grader, written responses filter into TAO’s reporting tools in an electronic format, rather than as a digital scan. This makes it much easier to assign scores, move and share test response files, designate internal scorers who complete the first grading pass, and assign reviewers to confirm or challenge those marks.
While automated scoring eliminates grading work for many types of assessment questions, like multiple choice or gap match, there may still be test sections that require written responses, which then need to be graded by a teacher or subject matter expert. Many complications can arise when grading essays and other open response questions by hand without a manual scoring engine, ranging from difficulties deciphering handwriting to challenges storing results and scanning written assessment sections into an assessment reporting solution.
TAO Grader solves these challenges by providing educators with a streamlined, intuitive approach to accessing and grading open response questions online, effectively eliminating the need to print, score and store large volumes of paper-based assessments by hand.
A crucial advantage of a manual scoring system is the ability to assign and grade responses in teams. TAO enables asynchronous team scoring, reducing the operational overhead required when printing, organizing and scoring exams by hand. Not only does team-based grading lighten the workload for scorers and shorten the results feedback time, but it also enables subject matter experts to participate in scoring and helps eliminate scorer-to-test-taker bias.
For example, groups of scorers in TAO may be randomly assigned questions for grading either on a subject category basis or based on user-defined test-taker groups. In either case, test-taker names are hidden from the scorer’s view to enable objective, unbiased scoring.
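As a rough illustration of the concept, randomized, anonymized assignment can be thought of as drawing a scorer from the pool for each subject and replacing the test-taker’s name with an opaque ID. The sketch below is illustrative Python under those assumptions; the function and field names are hypothetical and do not reflect TAO Grader’s internal implementation:

```python
import random
import uuid

def assign_scorers(responses, scorers_by_subject, seed=None):
    """Randomly distribute anonymized responses among scorers for each subject.

    `responses` is a list of dicts with 'subject' and 'test_taker' keys;
    `scorers_by_subject` maps a subject category to its pool of scorers.
    All names here are hypothetical -- TAO Grader handles this internally.
    """
    rng = random.Random(seed)
    assignments = []
    for response in responses:
        pool = scorers_by_subject[response["subject"]]
        assignments.append({
            "scorer": rng.choice(pool),        # random draw from the subject pool
            "response_id": str(uuid.uuid4()),  # opaque ID; the test-taker's name
            "subject": response["subject"],    # is deliberately never exposed
        })
    return assignments

tasks = assign_scorers(
    [{"subject": "math", "test_taker": "Ada"},
     {"subject": "writing", "test_taker": "Grace"}],
    {"math": ["scorer_a", "scorer_b"], "writing": ["scorer_c"]},
)
```

Note that the scorer only ever sees the generated `response_id`, which is what makes the grading pass blind.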
High-stakes and large-scale exams, like the ACT or SAT in the United States and the Matura or Baccalaureate exams in Europe, are organized into sections by specific subjects or learning outcomes. Grading open-ended assessment questions in different subjects, or competencies within a subject, often requires that scorers have niche expertise in those topics.
In this case, manual scoring systems like TAO Grader that allow you to categorize assessment items and assign scorers by subject, rather than to the whole test, can lead to faster, more effective and streamlined scoring. This functionality also helps simplify scoring workflows, ensuring that the right subject matter experts are scoring the appropriate test questions.
TAO Grader’s project organization and grading queue for open-ended responses are tailored to both small-scale, low-stakes and national, high-stakes assessment programs. As an out-of-the-box solution, TAO’s open response grader is fully integrated into the TAO platform and offers streamlined, end-to-end workflows for organizing, storing and sharing results and student data. The solution is multilingual and can easily be extended to support new languages.
Rubrics are scoring tools designed to describe or classify performance levels for test-takers based on a number of criteria, and they play a key role in consistently scoring open-ended responses in online assessment. By laying out specific criteria in a rubric, educators can communicate the learning objectives and the level of mastery on which they will evaluate their test-takers. Additional advantages of rubric-based scoring of open responses in online assessment include:
- Laying out an unbiased, uniform scoring mechanism
- Facilitating quicker grading for educators and feedback for test-takers
- Improving communication with test-takers by clearly defining grading expectations
- Inspiring critical thinking
TAO’s manual scoring software provides a built-in solution for attaching digital PDF rubrics to facilitate the grading of written test responses. Students can understand the expectations of the test and how they will be evaluated, while teachers receive responses immediately after submission and have a consistent set of guidelines to grade against, supporting faster feedback.
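To make the idea concrete, a rubric can be modeled as a set of criteria, each with described performance levels worth a certain number of points. The structure below is a generic sketch of that concept, not TAO’s data model:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    levels: dict[int, str]  # points -> description of that performance level

# A hypothetical two-criterion essay rubric
essay_rubric = [
    Criterion("Thesis clarity", {0: "No clear thesis",
                                 1: "Thesis present but vague",
                                 2: "Clear, focused thesis"}),
    Criterion("Use of evidence", {0: "No supporting evidence",
                                  1: "Some evidence, loosely tied to claims",
                                  2: "Well-chosen, well-integrated evidence"}),
]

def total_score(rubric, awarded):
    """Sum awarded points, checking each mark is a defined level for its criterion."""
    by_name = {c.name: c for c in rubric}
    for name, points in awarded.items():
        if points not in by_name[name].levels:
            raise ValueError(f"{points} is not a defined level for {name!r}")
    return sum(awarded.values())

print(total_score(essay_rubric, {"Thesis clarity": 2, "Use of evidence": 1}))  # -> 3
```

Because every mark must map to a described level, two scorers grading the same response are pushed toward the same interpretation of each criterion.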
With TAO Grader, the users who perform grading and rescoring tasks are known as Scorers and Reviewers, respectively. After assigning subject categories to assessment questions, you can assign groups of designated scorers and reviewers to test items or sections. Workflows then enable these users to visualize and walk through their scoring tasks in a streamlined, efficient manner.
Individual test-takers’ names are anonymized during scoring to eliminate scorer bias. Additionally, designating a reviewer to confirm or challenge the original scorer’s rating further ensures an objective approach when double-blind scoring is necessary.
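Conceptually, the review pass either confirms the first mark or overrides it. A minimal sketch of that reconciliation logic, with hypothetical names rather than TAO’s workflow engine, might look like:

```python
from dataclasses import dataclass

@dataclass
class ScoringTask:
    """A double-blind scoring task -- hypothetical structure, not TAO's data model."""
    response_id: str                    # anonymized ID; the name is never exposed
    scorer_mark: float | None = None    # first grading pass
    reviewer_mark: float | None = None  # optional second pass

    def final_mark(self) -> float | None:
        # The reviewer's mark, when given, confirms or replaces the scorer's.
        if self.reviewer_mark is not None:
            return self.reviewer_mark
        return self.scorer_mark

task = ScoringTask("resp-7f3a")
task.scorer_mark = 4.0
task.reviewer_mark = 5.0      # reviewer challenges and overrides the first pass
print(task.final_mark())      # -> 5.0
```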
TAO open response grading software includes a feature that allows other scoring project users to “rate the rater,” i.e. to grade different scorers on their scoring abilities. For example, you may have a scorer who consistently marks lower or higher than a reviewer for certain questions or subject categories. In this case, it is valuable to evaluate that user and change or remove their scoring projects. For high-stakes assessment especially, it’s crucial that scorers are vetted and up to par so as not to interfere with student progress by assigning lower or higher scores than are warranted.
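One simple way to surface such a pattern is to compare each scorer’s marks against the reviewer’s marks on the same responses. The sketch below computes a mean signed difference per scorer; it is an illustrative helper, not a TAO Grader API:

```python
from collections import defaultdict

def scorer_bias(score_pairs):
    """Mean signed difference (scorer mark - reviewer mark) per scorer.

    `score_pairs` is a list of (scorer, scorer_mark, reviewer_mark) tuples.
    A consistently positive value suggests the scorer marks high;
    a consistently negative value suggests they mark low.
    """
    diffs = defaultdict(list)
    for scorer, scorer_mark, reviewer_mark in score_pairs:
        diffs[scorer].append(scorer_mark - reviewer_mark)
    return {scorer: sum(d) / len(d) for scorer, d in diffs.items()}

print(scorer_bias([("alice", 4, 5), ("alice", 3, 5), ("bob", 5, 5)]))
# -> {'alice': -1.5, 'bob': 0.0}
```

A scorer whose average drifts far from zero for a given subject category is a natural candidate for re-training or reassignment.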
A modern, easy-to-use solution for non-tech-savvy users can help scorers complete grading tasks remotely. Today’s educators are looking for flexibility in running their EdTech solutions, leveraging multiple devices and browsers. Having the ability to score assessments on a tablet or laptop makes it possible to grade responses remotely and still provide fast feedback to students.
Accessibility is always a top concern in online assessment, but it is all too often overlooked. Following WCAG 2.1 AA accessibility requirements and Section 508 guidelines, TAO’s online assessment solutions are designed for accessibility from the ground up to ensure access across all aspects of the testing cycle. The web-based user interface for TAO Online Manual Scoring Professional is innately accessible and can be used with a screen reader by users with visual impairments.
Sometimes, different aspects of a response need to be awarded separate scores, meaning that multiple scores are needed for one item. This is described as multi-trait scoring. In TAO, designating multi-trait scoring means that an item has more than one outcome which needs to be manually scored. For example, an essay question might take into account traits like “grammatical accuracy” and “word choice,” which apply to the same response but require separate scores.
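Conceptually, a multi-trait item carries one score per declared outcome rather than a single mark. A generic sketch of that idea, not TAO’s outcome handling, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class MultiTraitScore:
    """One manual score per trait (outcome) of a single item -- illustrative only."""
    item_id: str
    traits: dict[str, float] = field(default_factory=dict)

    def set_trait(self, trait: str, points: float) -> None:
        self.traits[trait] = points

    def total(self) -> float:
        # Each trait is scored independently; the item total is their sum.
        return sum(self.traits.values())

essay = MultiTraitScore("essay-01")
essay.set_trait("grammatical accuracy", 3.0)
essay.set_trait("word choice", 4.0)
print(essay.total())  # -> 7.0
```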
A robust online scoring system needs to support open-response questions in multiple languages in order to facilitate different types of assessments, like those testing writing fluency. TAO’s online manual scoring software is multilingual from the ground up, supporting open-response grading across different languages.