OWL Articles

17 March 2017

Dr. Thekla Fall is NECTFL 2017 Brooks Award Winner

Congratulations to the Northeast Conference on the Teaching of Foreign Languages award winner, Dr. Thekla Fall. Thekla was recently honored with NECTFL's 2017 Nelson H. Brooks Award for Outstanding Leadership in the Profession. Read more on the NECTFL website.

Dr. Fall is a world language consultant and retired curriculum supervisor from Pittsburgh Public Schools. She is also a frequent contributor to the OWL Testing Software blog. You can check out some of her articles on issues facing world language learning and assessment.


15 January 2015

OWL Founder to speak at CALICO Conference

Because OWL goes beyond prepackaged solutions by creating a unique solution for each of its users, custom activities can be created to practice and test the most difficult-to-measure parts of language acquisition: speaking and cultural interaction. With OWL, language departments have the features and controls needed to manage both on-campus and distance assessment programs. No special software needs to be installed on the computers; each student uses a web browser to access the test from virtually anywhere in the world, no matter which language the student is learning. Likewise, raters can rate the speech samples from any web-enabled computer, anywhere and at any time. This significantly reduces the complexity of delivering assessments and practice activities in a synchronous mode.

This session will demonstrate a variety of activities in all four communicative modes, including practice activities, tutorials, and both high- and low-stakes assessments. Demonstrations will be in a variety of languages. The presentation will be interactive, with time available for questions and demonstrations of specific techniques.

Topics covered will include:

  • Rubric Builder – Creating both analytic and holistic, as well as scored and unscored rubrics

  • Test Creation – A wide variety of item types, randomization and item banking

  • Activity Management – Scheduling testing, self-registration and test security

  • Managing Rating – Blind ratings, managing multiple ratings and automating a percent of re-ratings (see the sketch following this list)

  • Rating Module – Use of written and spoken feedback

  • Reporting – Pivot, item analysis and grade-book reports

  • Integration – Single Sign On, Student Information System, grade-book
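
Of these, the automated re-rating mentioned under Managing Rating is the easiest to picture in miniature. OWL's internal implementation is not public, so the Python sketch below is only a minimal illustration of the general quality-control technique: randomly sampling a fixed percentage of rated responses for a blind second rating. The function and parameter names (select_for_rerating, rerate_percent) are hypothetical.

    import random

    def select_for_rerating(response_ids, rerate_percent, seed=None):
        """Randomly pick a fixed percentage of rated responses for a
        blind second rating (a common rating quality-control step)."""
        rng = random.Random(seed)  # a fixed seed makes the audit reproducible
        k = max(1, round(len(response_ids) * rerate_percent / 100))
        return rng.sample(response_ids, k)  # sampled without replacement

    # Example: flag 10% of 200 responses for an independent second rating.
    flagged = select_for_rerating(list(range(200)), rerate_percent=10, seed=42)
    print(len(flagged))  # 20

Seeding the sampler is a design choice: it lets an administrator reproduce exactly which responses were selected if the re-rating pool is ever questioned.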


11 May 2010

New Language Testing Software a Win-Win for Students & Faculty

TERRY MARES — The Department of Modern Languages at the College of Staten Island recently implemented OWL's oral testing software. The platform is now being used by more than 800 CSI students at all proficiency levels of French, Spanish, and Italian. The Department also plans to employ the software for the assessment of Mandarin Chinese and Basic Arabic later this semester.

Commenting on the choice of the new platform, Valeria Belmonti, Director of CSI's Modern Languages Media Center, says, “The Media Center was looking for a user-friendly application to provide enough flexibility for faculty to customize oral exams according to languages and levels. Another important criterion was to find software that includes a built-in assessment module that would allow faculty to grade oral exams on the same platform used by students to record oral exams. Training efficiencies are quickly realized by having students and professors using the same software. We also needed an online application that would allow faculty to grade exams on and off campus.”

What does this platform have that old evaluation methods didn’t? According to Belmonti, “OWL allows us to integrate multimedia into the questions, time students’ answers, randomize questions in various ways, and apply different grading rubrics and/or points systems to different sections of an oral exam. OWL’s ability to generate reports also makes it easy for the Department to record and analyze the results of oral exams. [In addition] the OWL built-in audio recorder has particularly streamlined the process of oral exams, since students now access the questions and record their answers using the same interface.”

Gerry Milligan, Acting Chair of the Modern Languages Department, Assistant Professor of Italian, and Italian Studies Coordinator, states, “The software is useful for faculty because it allows for swift oral exams [which is particularly useful when a large number of students require an oral exam at any given time]…Also, the software allows faculty to create an electronic portfolio of students’ verbal skills at critical moments in their training.

“This is particularly useful for outcomes assessment because the portfolios allow faculty to determine students’ performance at each level of our language sequence. Finally, the possibility of giving the students oral feedback is fundamental to error correction of pronunciation, a central issue in second language acquisition.”

As for the benefits to students, Milligan points out that “the software, unlike the online activities provided by the students’ workbooks, allows students to create a portfolio of verbal recordings. They can practice their verbal skills and even receive recorded oral feedback from the instructor. They can also listen to their previous recordings in order to practice their speech as well as monitor their own progress. Ultimately, we are creating superior language learners and, in particular, better second language speakers.”

Professor of French and French Program Coordinator Kathryn Talarico notes that her students “seem to like the ability to see immediate results and get oral feedback from their instructor. Some students have said that they like the pressure of being timed in their answers since it forces them to think on their feet and to respond quickly. Since language study is all about communication, a serious program that tests oral skills (listening and speaking) makes learning a language more authentic.”

Regarding her opinion as an educator, Talarico states that, beyond the platform’s ease of use, “I have found that we can do more intensive oral testing and training of students, something that really isn’t done with any regularity or consistency at other colleges around the country. In French, we use the software for both testing and for systematic training of students’ pronunciation. The software allows instructors to leave oral feedback for students, so, for instance, if they mispronounce something or make a mechanical error, the instructor can record the correct answer. Students can get their grades with both oral and written feedback a few days after the test or exercise is over.”

Another faculty user, Sarah Pollack, Assistant Professor of Spanish and the Coordinator of the Spanish Program, says that, when it comes to her students, “Listening and speaking are probably the hardest language skills to acquire in the classroom. By incorporating this software into the curriculum, students will be afforded more opportunities to practice and be evaluated on these fundamental areas of communication. I also agree with Professor Talarico that the ability to leave oral feedback for students is incredibly beneficial, as they can get immediate, individualized feedback on their speaking–something that is difficult to do effectively in the classroom setting.”

Pollack also appreciates the flexibility of the software and notes that “the OWL platform allows us to create oral exercises that are close to real-life situations, and [this allows] students to practice language in a more authentic setting.” In addition, she says, “We can now efficiently create exercises and exams that evaluate all of our students in a creative and systematic fashion. My hope is that we can slowly increase our use of OWL in the Spanish Program until we are able to give as much weight to oral work as to written work.”


30 April 2010

Pittsburgh Public Schools World Language Competition

PITTSBURGH (April 30, 2010) – A district-wide competition that makes serious vocabulary learning fun may sound like an impossible combination, but that is just what the Pittsburgh Public Schools achieved by taking advantage of OWL’s state-of-the-art language practice technology. In a recent article released by the Pennsylvania State Modern Language Association (PSMLA), Dr. Thekla Fall describes how Pittsburgh Public Schools (PPS) used OWL’s Practice Activities for Language Students (PALS) module to create a district-wide language competition.

Background - Over the last 10 years, the district administered annual online speaking proficiency tests using OWL Testing Software. Using OWL’s reporting features to collect and analyze the results, the district determined that one of the main reasons students have difficulty moving up the Novice levels of the ACTFL Scale is their limited command of vocabulary. It was decided, therefore, to use USDE Title VI Foreign Language Assistance Program (FLAP) funding to purchase the online Practice Activities for Language Students (PALS) program, a supplementary component of OWL Testing Software.

The Competition - In December, PPS instituted the first district-wide PALS competition. According to Dr. Fall, the goal of the PALS contests is to motivate students to learn a large amount of contextualized vocabulary in a fun and interesting way within a relatively short time period. The expectation is that, with successful, focused, goal-oriented practice during the competition, students will become more effective and efficient independent users of PALS throughout the year.

The competition offered something for language students at every level from K-12. As one might expect, the students were highly motivated by the chance to win a pizza party. Some third-grade students even volunteered to come to school early, stay and work through lunch, and work in the bus room after school. Students played some of the activities over 50 times. Perhaps most importantly, their teacher has observed the students using the vocabulary from the practice activities in their daily exchanges.

Dr. Isabel Espino de Valdivia’s Japanese 2 class was the winning high school class. According to Dr. Valdivia, “The High School Japanese classes found the PALS activities especially valuable because they include a reading component. Students in level 1 and 2 learn Hiragana and Katakana, each of which has 46 symbols plus combinations. The reading in PALS is presented in a theme context with visuals and this helps students to make connections integrating the symbols at a new level in the brain. They are not just reading symbols in isolation but they are making meaning out of them, connecting them to the theme and visuals. This is especially critical for American students who only use the alphabet system.”

Unlike many contests that are one-shot, right-or-wrong efforts, the PALS program gives students unlimited opportunities not just to show what they know but also to learn as they go. The activity items are limited in number (8) to encourage rapid learning, and students receive immediate feedback. To increase their likelihood of success, students are encouraged to move through the activities from the easier receptive skills to the more difficult productive skills.

The Role of the OWL PALS Program - OWL’s PALS program bore the brunt of the organizational effort for the competition: presenting questions, scoring, collecting, and tallying the needed student data. The district was able to execute this monumental competition with relatively little fuss or bother! No buses were needed; students competed from their home schools throughout the two weeks.

  • Direct Curriculum Connection - The PALS Program included separate activities that were created by teachers during summer curriculum writing, ensuring a direct tie-in to the district’s curriculum.

  • Daily Feedback - Using OWL’s reporting capabilities, the district could provide students with daily postings of their progress relative to the other classes across multiple grades and schools. This left students highly motivated to increase their practice.

  • Multiple Languages - The district was able to create a competition that crossed all seven of its available languages -- Chinese, French, German, Italian, Japanese, Russian, and Spanish.

  • Ready Access - Because the PALS program runs on OWL’s web-based platform, teachers who wanted to participate could find ways to do so; they did not have to adhere to oft-limited computer lab schedules. Additionally, OWL’s PALS activities are available online throughout the school year; the goal is to give students (and parents) anywhere/anytime access. Students can practice vocabulary outside of class: at home, in daycare settings, libraries, community centers, etc. This enables students to be better prepared for real communicative exchanges with their teacher during class time.

  • Familiar Context - The contexts are similar to what students will encounter when taking a SOPI-type oral proficiency test.

  • 4 Practice Modes - Every PALS activity has 8 contextualized questions that are presented in 4 different modes (listening comprehension, reading, speaking, and writing, moving from receptive to productive skills). Hence, students are asked to practice each set of 8 questions in 4 different modes, as sketched below.
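
That 8-questions-by-4-modes structure is concrete enough to model. The following Python sketch shows one plausible way to represent a PALS-style activity, assuming only what the article states (8 contextualized questions per theme, practiced in four modes ordered from receptive to productive); the class and field names are hypothetical, not OWL's.

    from dataclasses import dataclass, field

    # The four practice modes, ordered from receptive to productive skills.
    MODES = ["listening", "reading", "speaking", "writing"]

    @dataclass
    class PalsActivity:
        """One themed activity with 8 contextualized questions,
        each practiced in all 4 modes (32 practice items total)."""
        theme: str
        questions: list = field(default_factory=list)  # exactly 8 questions

        def practice_sequence(self):
            """Yield (mode, question) pairs, receptive modes first."""
            for mode in MODES:
                for q in self.questions:
                    yield mode, q

    market = PalsActivity("At the market", [f"Q{i + 1}" for i in range(8)])
    print(sum(1 for _ in market.practice_sequence()))  # 32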

About OWL Testing Software
OWL Testing Software is a leading provider of language test building software to academia, business, and government markets. Built as a Web-based solution for test creation, administration, and management, OWL Testing Software is unique in its ability to create tests for all four communication skills – oral, aural, reading and writing. OWL is available as licensed software and as a hosted solution to meet the needs of the largest and smallest schools, businesses, and government agencies concerned with enhancing the language learning process and outcomes. Please visit www.owlts.com or call 877.695.3305 for more information.

For more information on the Pennsylvania State Modern Language Association, visit www.psmla.org or e-mail Dr. Thekla Fall at thekla.fall@gmail.com.

Fall, Thekla. (2010, Spring). PPS Launches District-Wide Foreign Language Competition. Pennsylvania Language Forum, 80(1), 80-81.


15 March 2012

Achievement Testing vs. Proficiency Testing

At a recent conference I overheard an instructor boasting that her second-year students were all at the Intermediate High level of oral proficiency. Years of experience and a wealth of test data suggest that this is extremely unlikely. But it is easy to see how she came to this conclusion. She likely used a rubric tied to the ACTFL Oral Proficiency Scale to rate her students' performance on textbook-related tests. She compared the proverbial apples to oranges, describing the results of one test with the proficiency scale of another. Because this is an error I see often, I believe it is important for language educators to understand the distinction.

Are your students 'road-ready'? - It is not unusual to find second-year students correctly replying to items like the past-tense questions given on book tests. The questions are directly tied to familiar contexts in the unit being studied, and good students memorize the information and reproduce it well on the corresponding test. We can liken this to performing well on the knowledge portion of a driver's test. Anyone who has taught a teenager to drive will surely agree that simply passing the “written exam” does not qualify the young driver to take your car out alone. We must know whether the student can take what has been learned from the driver's manual and apply it on the road. How PROFICIENT is the student in real-life situations? The ACTFL/SOPI proficiency tests simulate real-life situations and are not based on any given textbook. Their corresponding proficiency scale describes a student's ability to perform in real life, not to repeat what was learned in a recently studied chapter.

Let's look at a specific way I often see this demonstrated with language students: past-tense probes. This language skill requires extensive practice and cannot be attained in the month or two dedicated to a given unit of the textbook. Typically, students do not perform as well on past-tense probes on proficiency tests because these items require them to apply their learning to a variety of real-life language tasks that are not specifically tied to a given text. While a novice student may be able to conjugate the verbs they are focusing on in a given unit, they cannot readily produce the proper verb tenses in connected speech.

The “written” test... Most achievement tests determine the percentage of errors students make; the focus is on error correction. With discrete-point questions, students fill in the blank, translate, or add an ending. There is usually only one correct answer. Teachers look for, and reward, perfection: 100%! Good job!

Achievement tests are important tools to guide instructors. By looking at the errors on achievement tests, instructors can focus on and reinforce their students' skills in areas that are lacking, thus helping students build a strong foundation for learning.

Can you parallel park? Proficiency tests, on the other hand, are performance-based and are used for students to demonstrate what they know and what they can do. Their outcomes are judged using a specific scale or rubric. The scale applies over the student's entire academic life, not just a particular unit, semester, grade, or even degree. Students perform open-ended speaking or writing tasks and make expected errors: more at lower levels and ever fewer at higher levels. Unlike an “A” grade or percentage score on an achievement test, ratings on the ACTFL Proficiency Scale show students their increasing levels of proficiency from elementary school through graduate school. Results on the scale or rubric highlight a student's capabilities at that point in time and what they need to do to reach the next level. Teachers act as coaches to help students attain ever higher goals.

Proceed with caution... Educators should take care not to turn their proficiency test into an achievement test. If the exact same SOPI-type test is given year after year, it is possible that teachers will start to “teach the test” rather than “teach toward the test.” When teachers teach specific test items, students recall responses rather than applying what they have learned to create unique responses to new items. To help avoid this pitfall, schools can develop an item bank from which they select tasks to create their SOPI test each year. In this way, instructors are not burdened with designing a unique test from scratch each year, yet the test does not become practiced or stale. Even though different items are used each year, drawing from an item bank also allows you to track performance data on specific tasks over the long term.
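
As a rough illustration of that item-bank approach, the Python sketch below assembles each year's SOPI-type test by drawing from a shared task bank while excluding the tasks used the previous year. It is a minimal sketch of the technique just described, not OWL's implementation; assemble_sopi_test and the task IDs are hypothetical.

    import random

    def assemble_sopi_test(item_bank, n_tasks, used_last_year, seed=None):
        """Draw this year's tasks from a shared bank, skipping last year's
        tasks so the test stays fresh, while stable task IDs still allow
        tracking performance on a given task across years."""
        rng = random.Random(seed)
        fresh = [task for task in item_bank if task not in used_last_year]
        if len(fresh) < n_tasks:
            raise ValueError("Item bank too small; author more tasks first.")
        return rng.sample(fresh, n_tasks)

    bank = [f"task-{i:03d}" for i in range(40)]
    year1 = assemble_sopi_test(bank, n_tasks=6, used_last_year=set(), seed=1)
    year2 = assemble_sopi_test(bank, n_tasks=6, used_last_year=set(year1), seed=2)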

Choose the right vehicle... Both achievement and proficiency testing are useful, but they should be used for different purposes and at different times. The OWL Test Management System can make both types of assessments easier to manage. OWL makes it easy to create items, develop item banks, create tests, score items, and rate speech and writing samples. The OWL Community Library includes samples of SOPI-type tests and task banks in several languages, as well as several rubrics tied to the ACTFL Scale. Perhaps most important, OWL allows you to collect and store the results data from these exams over the life of your program and the careers of your students, raters, and instructors. Deficits can be identified and improvements made in all aspects of your program. For students, an electronic portfolio of performance can be created to motivate them to achieve their language learning goals.

You can visit actfl.org or www.cal.org for specific information on proficiency testing and rating materials and upcoming workshop information.


15 February 2010

Cheating? Everyone Used to Do It!

Dr. Thekla Fall, World Language Consultant (February 15, 2010) — Just look up the topic of “cheating on tests” on the internet and you might be surprised to find thousands of links—everything from studies showing the widespread prevalence of cheating to tips for students on how to cheat! Most of the methods for cheating have been around for a long time and are well known—such as crib sheets, passing down tests from student to student and year to year, or simply looking at someone else’s paper. During my college years, there were always rumors that frat houses had huge file cabinets filled with tests. I wondered, even then, why professors would give the same test over and over again.

Nowadays, students are finding new and unintended uses for modern technology in their quest to pass courses the easy way. Whether it’s using cell phones to photograph/email test questions, texting one another for answers, and/or using iPods as modern day crib sheets when the teacher isn’t looking, it appears that students are finding many new and creative ways to cheat.

What is even more disturbing, however, is that some teachers are now joining the ranks of cheaters! At one time, tests were used primarily to assess student learning. Nowadays, with the emergence of high-stakes testing and teacher/school accountability, test results are also used to assess the effectiveness of instruction. Some teachers (and even administrators) have succumbed to the pressure to produce higher test scores by giving students the questions ahead of time, giving students more time to respond, giving hints, and even changing incorrect answers on student papers. In my former position as district supervisor, about 50 other district administrators and I were each assigned to a school, for a week each year, to monitor the administration of the annual state tests. This is a huge commitment of dedicated staff time to make sure that the tests are administered properly in terms of time allotments, test distribution, in-class monitoring, and test collection; i.e., to make sure there is no cheating.

We all frown when we hear about students and staff cheating, but it shouldn’t just be something we shrug off by saying “well, everyone does it.” The bottom line is that it is dishonest, and there is much at stake! In addition to traditional uses of test data for grades, GPAs, student placement, class ranking, honors, scholarships, etc., tests are now used for feedback to the teacher on what needs to be re-taught and for accountability. This all works well—but only when tests are valid and reliable and when students take the tests in the prescribed manner. It is very important that hard working, honest students, teachers, or administrators aren’t cheated when accolades and rewards are given out and that students don’t leave school with a mindset that says everyone cheats or that cheating is ok.

So what can be done? Obviously, one of the best remedies to prevent cheating is to make sure that students learn the material in the first place. Learning objectives, instruction, and assessments must be aligned. If students are confident that they know what they are supposed to know, there is little incentive to cheat. Unfortunately, that may still leave some students who are inattentive in class, uninterested in the subject matter, unwilling/unable to do homework, or are simply looking for an easier way. For these students, the systemic use of computerized tests could make a dramatic difference in reducing the incidence of cheating.

Let’s consider some of the major ways that students and teachers cheat and what preventative measures can be taken to discourage these practices.

Students pass on tests from class to class and year to year
This practice is only successful when the same test is given over and over again. Certainly, teachers and professors must realize this practice increases the risk of cheating—so why do they do it?  Most likely, the answer is simply that the instructional materials purchased only include one test per unit or chapter. Since writing good test questions and tasks is time consuming and often complex, it can be difficult for the individual teacher to come up with several comparable, valid and reliable tests. That is why OWL Testing Software (OWLTS) enables teachers to collaborate across the country, to share good test items, and to create large item banks. OWLTS allows teachers to easily import and export test items complete with audio, visuals, and text as needed. In addition, OWLTS makes it easy for teachers to select and create multiple, yet comparable, versions of a particular test for different periods of the day, different days, and different years as needed.

Another reason that teachers may only have one test per chapter might be that checking and grading different versions of the same paper-and-pencil test could be confusing and more time consuming. OWLTS automatically scores all questions that have one definitive answer and provides a percentage for the total score, greatly simplifying test correction.

For open-ended questions, the teacher is able to specify one rubric for the different test versions. The total score is automatically computed.
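
The scoring arithmetic just described is straightforward. The Python sketch below shows one plausible way to combine auto-scored closed items with teacher-entered rubric points into a single percentage; it assumes one point per closed item and is illustrative only, not OWLTS code.

    def score_test(responses, answer_key, rubric_points, rubric_max):
        """Auto-score closed items against the key, add rubric points for
        open-ended items, and return the total as a percentage."""
        closed = sum(1 for item, answer in responses.items()
                     if answer_key.get(item) == answer)
        earned = closed + sum(rubric_points.values())
        possible = len(answer_key) + rubric_max
        return 100 * earned / possible

    key = {"q1": "b", "q2": "a", "q3": "d"}   # one definitive answer each
    answers = {"q1": "b", "q2": "c", "q3": "d"}
    rubric = {"speaking-task": 7}             # rated on a 10-point rubric
    print(f"{score_test(answers, key, rubric, rubric_max=10):.1f}%")  # 69.2%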

Once the test bank and scoring rubrics have been created, there is a considerable savings of time for the teacher, time which could be used to improve instruction.

Students look on someone else’s paper
This simple, age-old cheating technique can be stopped by changing the order in which items are presented on a test. OWLTS makes it easy for the teacher to specify that all of the test items (or items within a section) be listed in a random order so that the test appears different from screen to screen. Furthermore, cheating by secretive texting among class members would be difficult if students weren’t sure who was on which question.
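
A minimal sketch of that kind of randomization, in Python: seeding the shuffle with the test and student IDs gives every student a different but stable order (the same student reloading the test sees the same order, while neighbors see different ones). The seeding scheme here is an assumption for illustration, not a description of OWLTS internals.

    import random

    def shuffled_items(item_ids, student_id, test_id):
        """Return the test items in a per-student order, stable across
        reloads but different between neighboring students."""
        rng = random.Random(f"{test_id}:{student_id}")  # deterministic seed
        order = list(item_ids)
        rng.shuffle(order)
        return order

    items = ["q1", "q2", "q3", "q4", "q5"]
    print(shuffled_items(items, student_id="s017", test_id="midterm"))
    print(shuffled_items(items, student_id="s018", test_id="midterm"))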

Students cheat due to anxiety rooted in specific learning disabilities or test phobias
Students with individualized education plans may require considerable modifications to help them learn at their level. OWLTS has developed a simple way to create these modifications and assign specific versions of the tests to specific students. The modifications include (but are not limited to) an audio component for poor readers, word banks, limits on the number of choices, etc.

Giving students a practice test before the main event can do much to alleviate student anxiety. OWLTS makes it easy to create and administer practice tests to help students become familiar with both the software interface and the types of questions that they can expect to see on a specific test.

Even with all of these safeguards, teachers still need to be vigilant. Students may still bring in cheat sheets or some temporary form of body art (complete with formulas or verb endings), so the teacher needs to monitor the test takers. However, OWLTS can help by enabling teachers to specify a time limit for individual test questions, sections, or the entire test, thus limiting the time students have to use their iPods or cell phones when the teacher isn’t looking. OWLTS also enables teachers to choose whether or not students are allowed to go back to a prior part of the test, and it prevents students from using the browser to navigate elsewhere.
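
Enforcing a time limit on the server, rather than trusting the browser, is the essential point. Here is a bare-bones Python sketch of that idea, with hypothetical names and no claim to match how OWLTS does it:

    import time

    class TimedSection:
        """Track a per-section time limit server-side and reject answers
        that arrive after the limit, whatever the browser displays."""
        def __init__(self, limit_seconds):
            self.limit = limit_seconds
            self.started = time.monotonic()  # unaffected by clock changes

        def seconds_left(self):
            return max(0.0, self.limit - (time.monotonic() - self.started))

        def accept_answer(self, answer):
            if self.seconds_left() == 0:
                raise TimeoutError("Section time limit reached")
            return answer  # a real system would persist it here

    section = TimedSection(limit_seconds=300)  # a 5-minute section
    print(round(section.seconds_left()))       # roughly 300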

Teachers cheat to inflate student test scores on district-wide tests
Many of the advantages of OWLTS mentioned above will also discourage teachers from cheating. The most important difference is that teachers won’t have a hard copy of the test beforehand (preventing disclosure of questions ahead of time).

Teachers will see the test for the first time as they monitor the first class that takes the test. Furthermore, the administration of multiple, yet comparable, versions of a test reduces the likelihood that teachers will give out test questions to subsequent classes, since teachers won’t know what versions will be administered next.

The use of passwords and settings prevents unauthorized access to change students’ incorrect responses once they are saved. Comprehensive test data collection enables tracking of student progress throughout the year, across grade levels, and across schools, making it easier to recognize abnormalities as comparisons are made: for example, students from a particular school or teacher scoring consistently higher one year but much lower the next.
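
That kind of year-over-year comparison can be automated. The Python sketch below flags any school or teacher whose mean score swings sharply between consecutive years; the threshold and data shape are assumptions for illustration, and a flag is a prompt for a closer look, not proof of cheating.

    from statistics import mean

    def flag_score_swings(scores_by_year, threshold=15.0):
        """Flag groups whose mean score jumps or drops by at least
        `threshold` points between consecutive years."""
        flags = []
        for group, yearly in scores_by_year.items():
            years = sorted(yearly)
            for prev, curr in zip(years, years[1:]):
                delta = mean(yearly[curr]) - mean(yearly[prev])
                if abs(delta) >= threshold:
                    flags.append((group, prev, curr, round(delta, 1)))
        return flags

    data = {"School A": {2008: [70, 72, 68], 2009: [90, 92, 95]},
            "School B": {2008: [75, 80, 78], 2009: [77, 79, 81]}}
    print(flag_score_swings(data))  # [('School A', 2008, 2009, 22.3)]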

OWLTS also affords numerous other safeguards against an inadvertent (or deliberate) faux pas. The ability to set a time limit for the test and/or individual test items assures that all students across the district or campus are given the same amount of time to take the test. A setting that allows each student to take a test only once prevents students or teachers from going in to see the test beforehand and/or going in a second time to change an answer. The system also records the day and time each test is taken.

The good news is that with OWLTS, schools and colleges can be more certain that test data used to make critical educational decisions is accurate. Yes, teachers will still need to be vigilant and monitor the test administration since some students will always try to find another way to cheat. However, OWLTS can make it more difficult for these students to succeed, while making the job of test creation, administration, and rating/scoring so much more efficient for the teacher. With OWLTS, cheating may become a thing of the past. Instead of saying "everyone does it" people may start to say "everyone used to do it."

________

About the Author:
Thekla Fall is a world language consultant and retired curriculum supervisor from Pittsburgh Public Schools.

Further resources:
Bramucci, Robert S. (2003). How to Cheat: Techniques Used by Cheaters. Teachopolis. http://www.teachopolis.org/justice/cheating/cheating_how_to.htm
McTaggart, Jacquie. (2008). Why Some Teachers Cheat. EducationNews Commentaries. http://www.ednews.org/articles/why-some-teachers-cheat.html

This article originally appeared in ADVENTURES in Online Testing (Volume 4, Issue 1)


Volume 2, Issue 3
01 June 2009

Oral Proficiency Testing: Should Teachers be Involved in the Testing and Rating Process?

By Dr. Thekla Fall, World Language Consultant

As proficiency testing becomes more widespread in standards-based communicative language programs, the question arises: should a school district or institution of higher learning engage a company (or another institution) and outsource both the testing and the rating of student speech and writing samples? Or should the school or district control the testing and rating process itself?

Having a third party take over the entire testing and rating process may, on the surface, appear to be the easiest choice. After all, although it may cost more, it simplifies the process: all the teacher needs to do is send students to the computer lab, and sometime later the results appear. While appealing, farming out the testing and rating divorces the teacher from the testing process and does not serve to guide the teaching/learning process (Wiggins, 1998).

Clarifying this issue is critical because, as noted by Paul Black, professor emeritus at King's College London's School of Education, and Dylan Wiliam, head of the school and professor of educational assessment, “There is a body of firm evidence that formative assessment is an essential component of classroom work and that its development can raise standards of achievement.”

A recent Education Week article (2008), “Test Industry Split Over ‘Formative’ Assessment,” focuses on the current controversy. In the article, Ray Wilson, the executive director of assessment and accountability for the Poway Unified School District in Poway, Calif., is quoted: “I still contend that so long as a teacher doesn’t have primary control [over assessment], you will never have a truly formative assessment.” Testing expert Richard J. Stiggins, executive director of the Portland, Ore.-based Assessment Training Institute, maintains that “formative assessment isn’t something you buy—it is something you practice.” Thus, in addition to end-of-year or end-of-sequence summative assessments, teachers need a way to frequently assess the efficacy of their instruction and use the resulting feedback to directly impact student learning.

In-House Assessment Means Teacher Involvement

As a longtime supervisor of the world language program in a large urban school district, I was able to observe firsthand what happens when 65-75 teachers are engaged, annually, in the testing and rating process. From the beginning, we decided that it was crucial to involve foreign language teachers in both summative and formative assessments. Starting in 2003, the district implemented large-scale summative testing using OWL Testing Software to collect and rate student speech samples, in-house, using a variation of a Simulated Oral Proficiency Interview (SOPI) type test structure (Stansfield, 1996). For more than 10 years, district world language teachers:

  • developed a large bank of speaking tasks for proficiency testing;

  • rated the resulting student speech samples (OWL Testing Software can be set to prevent teachers from rating their own students, if the district so chooses);

  • analyzed the resulting data and used it to improve instruction;

  • developed proficiency-oriented instructional tools; and

  • participated in and contributed to staff development to make instruction more proficiency-oriented.

Teachers Support Testing

From the start, most teachers were supportive of district-wide testing because they saw the nationwide push for standards and accountability, and because they were directly involved in the development process. In the mid-1990s, teacher committees developed the first cassette-tape-mediated SOPI-like assessments. Most teachers agreed that although speaking is the most difficult skill to assess, it is also the most important skill to assess. By 2003, even though some non-techie teachers were taken out of their comfort zone by the need to administer the test in a computer lab using the new OWL Testing Software, no one complained. Teachers were willing to work through initial bugs and hardware/lab difficulties because they were invested in the system's development and realized its huge potential for simplifying the testing and rating process.

Rating Sessions are Invaluable Staff Development Opportunities

At the end of each school year, several weeks were set aside to rate 1,300 to 1,800 student speech samples in French, German, Italian, Japanese, and Spanish at four levels (5th grade, 8th grade, level 3 high school, and seniors). All teachers were asked to listen to and rate 20 speech samples. Rating sessions were preceded by a rubric review and calibration practice, and new teachers were given a more intensive rater-training session that included initial paired ratings. Most teachers learned to rate the lower-level test (No Rating to Intermediate Low); more experienced upper-level teachers rated the higher-level test (up to Intermediate High). This reduced the amount of rater training needed.

We found that the rating sessions provided invaluable staff development opportunities. To be successful in teaching for proficiency, teachers must have a thorough understanding of how proficiency is measured. Teachers began to understand the ACTFL Scale at a much deeper level when they used it to actually rate students’ speech samples. For many teachers, this was eye-opening in terms of what students can and cannot do with real-life tasks. Teachers began to appreciate the real difference between routine classroom achievement tests and life-simulating proficiency tests. Lower-level teachers were thrilled to actually hear what more advanced students could say, and they gained a better understanding of their role in preparing students to reach higher levels. Higher-level teachers gained a better appreciation for the work of the lower-level teachers. Inevitably, during the rating process, teachers began to talk about program articulation, instructional gaps, and what works and what doesn’t. They shared their tried-and-true tips with one another.

Throughout, the goal for graduating seniors was to attain an Intermediate Low level or higher of speaking proficiency (the standard level advocated by the PA State Board of Education). We found that periodic proficiency testing and annual ratings are powerful motivators—keeping both students and teachers focused on the goal. Students know where they are on the Scale and what they have to learn to get to the next level. Likewise, teachers spend five or more hours each year recalibrating, listening/rating student speech samples, and talking about the data. As a result, teachers not only are focused on a common goal, but also have a common language based on the ACTFL Scale for use across K-12 levels and across languages.

Teacher Initiated Remedies

Over the years, as results were posted, teachers reviewed and analyzed the data at team meetings and district-wide in-services. Unsatisfied with some of the ratings and finding the district’s older textbooks deficient, teachers decided they needed new instructional tools to help students attain higher levels of proficiency. With their deeper understanding of proficiency testing, teachers came up with the idea of developing whole-class protocols involving situations for communication to encourage students to rev up their speech to ever higher levels. It is unlikely that there would have been this level of teacher involvement if the tests had been farmed out. By doing the ratings in-house, the district identified lack of vocabulary as a major stumbling block for Novice-level students. This resulted in teachers also playing a major role in advocating for the vocabulary practice activities component of OWL Testing Software. This game-like feature encourages students to practice vocabulary in situational contexts with various responses. Teacher committees provide the contexts and vocabulary.

Teachers Identify Changes in Their Own Instruction

Most importantly, teachers started remarking on changes they were making in their instruction. For example, teachers stated that the test and rating process “served as a catalyst to make them more aware of the need to design more classroom experiences that engendered real-life speaking tasks and student interaction; to complement the textbook by filling in gaps regarding survival-level functional language tasks and vocabulary; to explain early in the year the PPS ORALS rubric to both students and parents—in other words, to unveil the objectives and the goals of the curriculum” (Fall 2007). This is an ongoing process as teachers see what works and what doesn’t.

Achievement Testing and Teacher-Based Tests

In addition to the district-wide oral proficiency testing, the district recently began using the latest iteration of OWL Testing Software to phase in annual district-wide pre- and post-achievement tests. Furthermore, the district is now giving teachers private accounts, making OWL available for individual classroom use. Teachers are encouraged, though, to share and collaborate on the development of effective formative assessments. Once this phase of the program is fully implemented, there will be additional data for teachers to make informed instructional decisions tied directly to their teaching, and students will have meaningful data to help focus their learning.

Formative Testing Increases Student Speaking Proficiency

Is in-house proficiency testing a panacea? No. Does it make a difference? Yes! This is demonstrated by the resulting test data. The district has seen definite trends over the six years of online testing: the percentage of students at lower levels of proficiency is decreasing, and more students are attaining the Intermediate Low goal. There is still room for improvement. What becomes clear is that substantive change for the better takes time; ingrained traditional teaching habits don’t change overnight. However, proficiency levels will rise when there is a sustained instructional/learning focus, data analysis that informs teaching, meaningful staff development, and the purchase of instructional materials (based on clearly defined needs).

Implications for New OWL Users

There are advantages for institutions that are just starting in-house proficiency testing at this time, since much of the early, iterative development work has been done. Many of the staff development tools, student/parent awareness materials, and instructional tools have been disseminated free of charge (see link to “additional resources” below). Also, the newest version of OWL Testing Software includes support components: a rater calibration feature, a vocabulary practice component, help buttons, and an individual teacher testing component that enables teachers to input their classroom tests for routine formative testing. All of these features will enable institutions and teachers to quickly focus on the formative and summative aspects of testing to guide teaching and learning.

In Conclusion

There certainly are times when it is appropriate and desirable to use outside, double-rated tests such as the official ACTFL OPI or CAL SOPI tests. Districts use them for validation studies and for students who demonstrate ACTFL Advanced levels of proficiency. However, when a high level of speaking proficiency is a major goal for all students, all teachers should thoroughly understand the test and the rating scale. Most importantly, when teachers help to create, administer and rate tests, and help analyze the resulting data, they become invested in the test, the process, and in seeing improved results.

________

About the Author:

Dr. Thekla Fall is a world language consultant and retired curriculum supervisor from Pittsburgh Public Schools.

Sources:

1. Black, Paul, and Dylan Wiliam. (1998). Inside the Black Box: Raising Standards Through Classroom Assessment. Phi Delta Kappan, 148.

2. Cech, Scott J. (2008). “Test Industry Split Over ‘Formative’ Assessment.” Education Week, Vol. 28, No. 4, 1, 15.

3. Fall, T., Adair-Hauck, B., & Glisan, E. (2007). “Assessing Students’ Oral Proficiency: A Case for Online Testing.” Foreign Language Annals, 40, 377-406.

4. Stansfield, C. W. (1996). Test Development Handbook: Simulated Oral Proficiency Interview (SOPI). Washington, DC: Center for Applied Linguistics.

5. Wiggins, G. (1998). Educative Assessment: Designing Assessments to Inform and Improve Student Performance. San Francisco: Jossey-Bass Publishers.


Volume 1, Issue 1
10 January 2008

Focus on Speaking

By Dr. Thekla Fall, World Language Consultant

If you were to ask students the main thing they would like to learn in your French class, what would most of them say?

Motivation

Students tell us what they want to learn—we just have to listen to them! Most don’t say, “I want to learn to do grammar drills” — or read or write a new language. They say they want to learn to SPEAK the language! This doesn’t mean that reading, writing, and grammar shouldn’t be taught—but it does mean that teachers and programs should emphasize SPEAKING if they want to motivate their students!

Whenever I say this to a group of teachers, inevitably one of them will say, “But my kids won’t speak in class!” Usually there is a reason for this reluctance. Perhaps somewhere along the line these students have had a bad experience. Perhaps they are just afraid of sounding foolish in front of their peers. Perhaps they have learned that by complaining they can get away with not doing it—the teacher will give in. After all, when students have to speak in class, they have to pay attention and think about what they are going to say—which is hard work. However, the exciting thing is that once students see that they can express themselves, at ever higher levels, they become more motivated and excited to continue!

Getting Students to Speak Up

After years of observing students’ behavior in the classroom, I’ve witnessed myriad ways teachers unwittingly discourage students from speaking the language. Some teachers only provide grammar practice, leaving communicative activities for level 3 and 4 students; sadly, by then, many students have dropped the class out of boredom or poor performance. And even when teachers do provide opportunities for speaking, some correct every utterance coming from the student’s mouth. This focus on the negative practically guarantees the student will be reluctant to speak again. Other tactics to avoid?

  • Giving students grades at random times during speaking practice. Keep practice and testing separate.

  • Allowing other students to laugh at or otherwise ridicule a student’s effort to speak.

  • Requiring or rewarding only memorized phrases and utterances.

So what positive steps can be taken to get students talking?

  • Get students talking on the first day. Keep it simple. Use cognates when possible; start with one-word responses, yes/no, either/or, and naming. Tracy Terrell1 and Stephen Krashen’s2 Natural Approach and language acquisition recommendations work well.

  • Engage students in choral practice: songs, rhymes, tongue twisters, etc. that let students hear themselves as they try out these new strange sounds—without being singled out.

  • Provide non-threatening, non-graded practice. Have students work in pairs and small groups. (Remember David W. Johnson and Roger T. Johnson’s famous words: “It’s hard to get left out of a pair!”)

  • Design communicative practice where the focus is on the communication rather than grammar. Save the grammar for a defined grammar review and practice at a different time—and then, be selective; don’t try to fix everything at once!

  • Personalize the communicative practice. Students like to talk about themselves and find out about their peers.

  • Integrate culture into communicative activities whenever possible.

  • Reward students’ efforts to speak and to express their own thoughts.

  • Help students create some rules of conduct themselves. Respect, not ridicule, during classroom speaking exercises should be the guiding principle. Give the students responsibility for creating a learning environment in which they each feel comfortable to speak.

  • Provide DAILY speaking practice. True communicative exercises, not memorized speech, take more effort but get students thinking in the target language.

  • Give students ample wait-time to THINK about their responses. As first described by Mary Budd Rowe: look around the room to encourage everyone to mentally come up with a response. Don’t call anyone’s name until after the wait-time. The Think-Pair-Share technique* developed by Frank Lyman and colleagues is an effective way to get all students thinking and involved.

  • Make speaking a significant part of all chapter and unit tests. As we have learned from educator, author, and consultant on education reform and assessment Grant Wiggins3, “what gets counted, counts” in the mind of students. If you don’t make speaking practice part of your tests, your students may be tempted to say, “why bother learning it if it isn’t on the test?”

  • Assess speaking proficiency using the ACTFL Scale, at periodic intervals throughout the program. Students will be motivated as they see their speaking proficiency is increasing.

Assessing Oral Proficiency

Often teachers will say that it is too difficult to assess speaking. Some teachers have huge classes that make it impossible to interview each student individually. Many teachers do not have access to sophisticated language labs. They don’t have time to develop taped SOPI-like tests and they find audio tapes extremely time consuming to rate. Fortunately, it is now easier to assess students’ proficiency and thereby motivate students.

With periodic proficiency assessments using the ACTFL Scale, students see that they are moving toward their speaking goal. For example, a student will know: I am at the Novice High level, and this is what I have to do to reach the next level. This is an important step in helping students become more responsible for their own learning and in motivating them to continue with the language! This is why OWL Testing Software was created.

OWL Testing Software makes it easy: easy to create speaking tests, and easy and far less time-consuming to rate speech samples. Best of all, the software will function in any computer lab, library, or similar space that connects to the school’s intranet.

Teaching and assessing speaking proficiency is one sure way to motivate your students to reach higher levels of overall language proficiency. So be sure to listen to your students when they say they want to learn to SPEAK the language!

* Think-Pair-Share is a cooperative discussion strategy developed by Frank Lyman and colleagues in Maryland. It gets its name from the three stages of student action, with emphasis on what students are to be DOING at each of those stages. See http://www.readingquest.org/strat/tps.html for details.

_____

Selected Bibliography & Further Reading

1 - Terrell, Tracy D. (1977) “A Natural Approach to Second Language Acquisition and Learning.” Modern Language Journal, 61, 325-337.

2 - Krashen, Stephen. (1982) Principles and Practice in Second Language Acquisition. New York: Pergamon Press.

3 - Wiggins, Grant. Presentation at Schenley High School. Pittsburgh Public Schools, 1985.

Adair-Hauck, B. et al. (2003) PSMLA Standards and Guide to Assessment: What to Teach and How to Test It! Pennsylvania State Modern Language Association.

Omaggio Hadley, A. (1993) Teaching Language in Context. 2nd Ed. Boston. MA: Heinle & Heinle.

Shrum, J.L. & Glisan, E. (2005). Teacher’s Handbook: Contextualized Language Instruction. 3rd ed. Boston, MA: Thomson Heinle.

Pittsburgh Public Schools, PA. (2004). Seven Best Practices for World Language Instruction. http://www.pps.k12.pa.us/pps/site/default.asp.

This article originally appeared in the OWL Testing Software Newsletter (Volume 1, Issue 1).