RATER MODULE ENHANCEMENTS
The latest OWL Testing Software release, OWL v. 5.2.0, includes the beta release of two new item types as well as enhancements to the rating process. These enhancements touch the rating module, the assessment-processing algorithms, and the list pages and reports associated with manual ratings.
In the rating module, rubrics will no longer have the rightmost column pre-selected as a default rating. Every rubric must now be fully rated, or the response marked as unratable, before the “Commit Rating” button can be clicked.
Rubric Pre-Selections Removed | When a rater opens a response to perform a manual assessment in the OWL Rater module, the rightmost column of the rubric will no longer be pre-selected. As a result, the rater cannot click “Commit Rating” until they have selected an option on every rubric associated with the OWL Activity. An exception is made when the response is deemed unratable; this exception is detailed below.
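The commit-enablement rule above can be sketched as follows. This is a minimal illustration, not OWL's actual implementation; the function and field names are hypothetical.

```python
def can_commit_rating(rubric_selections, marked_unratable):
    """Return True when the "Commit Rating" button should be enabled.

    rubric_selections: dict mapping each rubric name to the selected
    column (or None when the rater has not yet made a selection).
    marked_unratable: True when the rater has checked "Unratable".
    """
    # An unratable response may be committed without full ratings.
    if marked_unratable:
        return True
    # Otherwise every rubric must have an explicit selection.
    return all(choice is not None for choice in rubric_selections.values())

# One rubric still unselected -> commit stays disabled.
print(can_commit_rating({"Fluency": "Level 3", "Grammar": None}, False))  # False
# Marked unratable -> commit is allowed despite the gap.
print(can_commit_rating({"Fluency": "Level 3", "Grammar": None}, True))   # True
```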
“Unratable” and “Technical Issues” Designations | Two new check boxes have been added to the response rating interface.
- Performing a Rating - Raters can now mark a test taker’s response as having “Technical Issues” or as being “Unratable”. If a proficiency level cannot be assigned to part of a test taker’s response due to the quality of the testing session, a rater can still commit an assessment on the ratable portion of the response. This feature removes the need for a “No Rating” column in rubrics and for a “Technical Issues” comment or rubric column.
- Viewing a Rating - When viewing the assessment in OWL Rater, the “Unratable” and/or “Technical Issues” labels appear in the header of the Test, Section or Item to which the status has been assigned.
- Reporting Results - For reporting purposes, these designations appear in the various Assessment Reports under the headings “Ratable Status” (unratable/ratable) and “Technical Status” (satisfactory/unsatisfactory). These reports include:
- Test Scores
- Test Rubric Ratings
- Section Scores
- Section Rubric Ratings
- Item Scores
- Item Rubric Ratings
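The mapping from the two rater check boxes to the report columns named above can be sketched like this. The column names and values come from the release notes; the function itself is illustrative, not OWL's API.

```python
def report_status_columns(unratable, technical_issues):
    """Translate the rater's two check boxes into the values shown
    under the report headings described in the release notes."""
    return {
        "Ratable Status": "unratable" if unratable else "ratable",
        "Technical Status": "unsatisfactory" if technical_issues else "satisfactory",
    }

# A response flagged unratable but with no technical problems:
print(report_status_columns(unratable=True, technical_issues=False))
# {'Ratable Status': 'unratable', 'Technical Status': 'satisfactory'}
```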
ASSESSMENT LIST CHANGES
This screen capture of the Assessment List page is numbered to correspond with the changes outlined below.
View Final (1) - View the finalized assessment of the examinee's response.
formerly: NA - New Command
change to assessment: None
"Processing" Status (2) - Additional services have been added to the processing of assessment data within the OWL system. While this requires NO added steps on the part of the OWL User, it does mean that you may occasionally see the word “processing” under status in the Response table of the Assessment Page. At these times, the icons for reviewing and reopening/editing a particular assessment are marked as “NA”.
If this is the case, simply wait approximately 30 seconds and refresh the page using the refresh button at the top right-hand side of the List Page.
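The wait-and-refresh step above amounts to polling until the status leaves "processing". A minimal sketch, assuming a caller-supplied `fetch_status` function (OWL does not expose such an API in these notes; this only illustrates the workflow):

```python
import time

def wait_until_processed(fetch_status, timeout=120, interval=30):
    """Poll a status source until the assessment leaves the
    "processing" state, mirroring the manual ~30-second refresh.

    fetch_status: hypothetical zero-argument callable returning the
    current status string, e.g. "processing" or "complete".
    """
    deadline = time.monotonic() + timeout
    status = fetch_status()
    while status == "processing" and time.monotonic() < deadline:
        time.sleep(interval)
        status = fetch_status()
    return status
```

In practice a rater would simply refresh the List Page; the sketch just makes the retry loop explicit.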
View (3) - Click to view the assessment as the examinee would see it (with the randomization the examinee experienced during the test session).
formerly: View as Examinee
change to assessment: None
Review (4) - Click to inspect the rater’s assessment of the response.
change to assessment: None (if you close without clicking "Commit Rating")
change to assessment: A new assessment will be created with you as the rater. (if you make changes, and click “Commit Rating”)
Edit (5) - Use this command if you wish to reopen the assessment.
change to assessment: You will become the rater of record for the edited assessment. No new rating will be created.
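The differing rater-of-record outcomes for Review and Edit can be summarized in a small sketch. The record shape and function names are hypothetical; only the behavior (new record on a committed review, in-place change on an edit) comes from the notes above.

```python
def apply_review(assessment, reviewer, committed_changes):
    """Review: closing without committing changes nothing; committing
    changes produces a NEW assessment with the reviewer as rater."""
    if not committed_changes:
        return assessment  # original record, untouched
    # A new record; in the real system a fresh id would be assigned.
    return {**assessment, "rater": reviewer}

def apply_edit(assessment, editor):
    """Edit: the editor becomes rater of record on the SAME
    assessment; no new rating is created."""
    assessment["rater"] = editor
    return assessment
```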
Unratable/Technical Issues (6) - (Yes/No) Indicates whether the response has been marked unratable or as having technical issues.
formerly: NA - New
OTHER USER-REQUESTED ENHANCEMENTS
External Item Type (beta) | We have beta released to a limited number of users a new item type called an “External Item”. This item type allows information from external sources to be linked with an examinee’s response record in the OWL TMS. In this way, users can evaluate examinees on a variety of different responses, including video files, images, PDFs, Word documents, and other file types. OWL raters can then evaluate all of a test taker’s responses from multiple sources in a single interface, the OWL Rater module.
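Conceptually, an External Item links a file held elsewhere to a response record. The sketch below shows one possible shape of such a link; every field name here is illustrative, not OWL's actual schema.

```python
# Hypothetical external-item link attached to a response record.
external_response = {
    "examinee_id": "EX-1042",                       # illustrative id
    "item_type": "external",
    "source_url": "https://example.com/uploads/interview.mp4",
    "media_type": "video/mp4",                      # video, image, PDF, etc.
}

# Raters would see this alongside the examinee's other responses
# in the OWL Rater module.
print(external_response["media_type"])  # video/mp4
```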
Drop Down Item Builder (beta) | The Item Builder for creating Drop Down Items has been further modified to reflect improvements recommended by the beta testing process. This item continues to be a beta feature.
Testing Session Algorithms | OWL is moving from “Finalization Algorithms” toward “Response Processing Algorithms.” This gives each unique rating process greater flexibility to be customized. Potential applications include SPEAK assessment processing and averaging scores across sections.
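One of the potential applications named above, averaging scores across sections, could be expressed as a small response-processing step. A minimal sketch, assuming numeric section scores; nothing here reflects OWL's actual algorithm:

```python
def average_section_scores(section_scores):
    """A simple response-processing step: compute the arithmetic
    mean of an examinee's section scores."""
    if not section_scores:
        raise ValueError("at least one section score is required")
    return sum(section_scores) / len(section_scores)

print(average_section_scores([3.0, 4.0, 5.0]))  # 4.0
```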