Scaling for non-incremental gains

I think scaling could be very useful for measuring a specific skill that was learned during a course.

For example, if you were teaching negotiating or presentation skills, it could be used to measure a student's progress from very little competency going into the course to much higher competency at the end.

Scaling in a short course setting like a college semester or corporate…

Questioning assumptions about teaching to the test

So often teachers complain that they must teach so much out of the book so that the students can pass some test. Then, in the next breath, the same teachers complain that the students forget much of what was taught and learned. Finally, teachers and students say that is how things are and nothing can be done about it.

But I am suggesting that it is not, in fact, something teachers and students have to do.

I am suggesting that teachers may be making assumptions about tests, for example the CET and the BEC, that are incorrect. Namely, I am suggesting that what is taught in the books may not be what is tested on the tests.

Let’s take these two tests as examples. Although they are two different tests, what I am suggesting is that if we did an analysis of actual CET and BEC test questions, we might not find the tested material in the CET and BEC books.

I feel that all of us as teachers may be holding too many assumptions about our craft. These assumptions may be hindering and even harming us and our students, causing us to waste time, energy, teaching and learning capacity, and even money.

The implications are tremendous. If you consider the hundreds of millions who have taken, are taking, or will take these tests, and what could otherwise have been done with that time, energy, capacity and money, it is our responsibility as professionals to be sure about these things.

I suggest that we reexamine all of the assumptions we hold about these kinds of tests.

Perhaps you are right. Perhaps I am wrong. But I suggest that it is a worthwhile effort to ask these questions. This is what I am currently doing with the CET (College English Test).

Have students check each other’s tests and quizzes?

How can you have students check each other's tests without cheating and "helping" a friend? If a student is checking another student's test or quiz paper, they may be tempted to change a few answers to help a buddy. What is a simple way to control that?

I like the idea of students correcting each other's tests and quizzes. I think the more time the students spend on the material, thinking about it and studying the questions and answers, the better they will understand it. So if students take a quiz and never see it again until it is handed back by the teacher, that is not as effective as having students correct each other's papers.

One thing that works for me is to give students a pen with a special color to use when they take the test or quiz. In my case, I give them all a green pen to use but purple or some other odd color would also be good.

Then when you collect the quiz papers, collect the pens as well. I then redistribute the papers to the students so that no student gets their own paper.
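
If you ever wanted to automate that redistribution for a long class list, a few lines of code can do it. Below is a minimal Python sketch of the idea; it simply reshuffles the list until nobody ends up with their own paper, and the student names are made up for illustration.

```python
import random

def redistribute(papers):
    """Pair each paper with a corrector so nobody checks their own.

    `papers` is a list of student names, one per paper. Reshuffles
    until no paper lands back with its owner (this finishes quickly
    in practice: roughly a 1-in-e chance of success per shuffle).
    """
    correctors = papers[:]
    while True:
        random.shuffle(correctors)
        if all(owner != checker for owner, checker in zip(papers, correctors)):
            return dict(zip(papers, correctors))

# Hypothetical class list: owner -> who corrects that owner's paper
print(redistribute(["Li Wei", "Chen Yang", "Wang Fang", "Zhao Lei"]))
```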

Let the students use their own pens when correcting another student's paper.

Teach the students how to give a score to the paper as well.

Before turning the papers in to you, let the students give the papers back to their owners. This allows each student to see and reflect on their errors. It also provides a way to check the scoring: if any mistakes were made in the correcting and scoring, the owner of the paper will be sure to spot them and can ask you about it, so you can make sure the score is correct.

Afterwards the students pass in their papers to you and you can enter their scores into your computer or score sheet.

[Photo: Dave’s students in China taking a quick 9-question quiz using green pens.]

IELTS preparation tips

Your training should focus on two areas:

1. Introducing the structure of the test: how it works, the timing of the different parts of the speaking test, some strategies for dealing with it, and so on. There are some basic errors that students can make that can cost them one or two band levels. Conversely, if the students understand these things and use proper strategies, it can raise their score one or two band levels. (Photo: My students doing a mock IELTS test.)

2. Raising their English level. This does not mean teaching them a lot of complex vocabulary or grammar rules. One of the best things you can do is provide a lot of English samples at their target level. If their target is Band 6.5, then provide a lot of samples of English at that level. This is especially true of your own speaking to the students. Although speaking at Band 6.5 is characterized by a small amount of advanced vocabulary, making a lot of errors with basic vocabulary and grammar is a sure way for students to crash and burn in the test.

There is some great free material at ESLPod [1], but I would suggest you keep something in mind if you use it. ESLPod features some bits of more advanced vocabulary. This specific vocabulary may be more useful and attainable for your students in a passive sense. That is, they may only learn it well enough to recognize it when they hear it or read it, but may not be able to produce it when speaking. However, when evaluating ESLPod, do not overlook all of the other speech and text in the podcasts, which consists of a lot of basic vocabulary and samples of grammar. Think of the podcasts as islands of slightly more advanced vocabulary in a sea of very good basic vocabulary and grammar. Expect your students to improve their speaking with the basics and their listening and reading with the slightly more advanced vocabulary.

Understand oral English testing in schools

1. According to research, it is too difficult even for highly trained examiners to measure English proficiency on a scale of more than 9-10 levels (covering no ability to highly proficient).

2. Even measuring just 10 levels, professional examiners can be wrong 27% of the time.

3. On such a scale, for classes that meet once or twice a week for 45 minutes, it may take a full year of training or more to improve one level.

4. Other factors weigh on the whole process and can cause inaccuracies. In some research on IELTS training it was found that after three months of intensive training, candidates could improve half a band. However, some candidates actually scored lower at the end of the training than they did at the beginning. Reasons for this included the candidate's state of mind on test day, familiarity or lack of familiarity with the subject matter of the test, faults in the testing system (IELTS), and so on. So it is possible to test your students at the beginning of a course and have them do worse on the test at the end. How would you give a grade in that case?

Alternatively, test the students on the course material at the beginning of the course. If the students are properly placed in the right level of class, they should score very low on such a test, but after training on the material they should score very high at the end of the course.

Since it is too late for that at this point, the next best thing is to review the material we taught the students and find some distinct and important points that we can test on. However, if the classes consisted of "well, what does everybody want to talk about today?", it may be impossible to test the students.

Finally, the idea of scoring by student attitude, participation in class, or some other behavior-focused method is, to my mind (though perhaps many would disagree), fraught with the most disadvantages. Such criteria make the test exactly one of classroom behavior, not of learning or language acquisition. Since most of us teachers focus on the student-centered classroom, we constantly question whether we are meeting the needs and capturing the interests of our students. Some days we do better than others. Some students (especially the very bright and the very dull) are more easily bored than others. We are tempted to punish our troublemakers and reward our so-called 'good' students through the scores we give them. But again, we would be scoring students on their behavior, which is perhaps not the best way to reflect their learning in our class.

The irony is that for many schools it really doesn't matter what score you give the students, because the school often does not treat the oral English class taught by a foreigner as a 'real' class. These grades often don't show up as part of the students' year-end scores.

Nonetheless, I think we should all be careful in how we go about making these kinds of decisions. There are certain ‘automatic’ impulses that we should be aware of and question. How often do we do things because that’s the way everyone else does them? How often do we do things because that’s the way it was done when we were students?

I say all of the above realizing that it may not exactly apply to certain aspects of the current discussion. Without a complete understanding of the situation, I know I may be misunderstanding some things. I am just trying to explore various factors and considerations for the purpose of reflection.

We are the teachers, and we are in a position of power with the students and the school. It is a great opportunity for us to explore and discover the best ways to do things for our students and ourselves.

High-speed correcting

If you find yourself with a large number of tests to correct and little time to do them, you could make a cut-out template to help you.

What I like to do, though, is make a recording of the answers on my computer, using the Sound Recorder under Windows Accessories. I then play the recording, sometimes even sped up to 200%, and check the papers.

The advantage of recording it on the computer as opposed to a tape recorder is that you don’t have to rewind to play it again.

How students learn for tests

When our students take the big exams, are the only questions they get right the ones the teacher "taught" them? I don't think so. I would like to know how effective teacher "teaching" is compared to indirect learning.

I think they are answering some questions on the test correctly for items that they were not “taught”. If so, then how did they learn them? I believe Comprehensible Input is playing a bigger role than we realize.

Krashen tells the story of how his French teacher wanted to speak only French to the class and was explaining a grammar point, in French. Finally, frustrated, she told them in English. However, her effort to explain it in French, all that French speaking, actually constituted Comprehensible Input for the students and helped their French.

Every time the teacher talks to the students in the L2, that is Comprehensible Input. Teachers are naturals at adjusting their English so students can understand them.

So between the teacher's speaking and the students' own study, they are getting a lot of CI.

Perhaps the student is reading a business text about international finance, and the teacher wants the student to learn some language about stocks, bonds, interest rates, the prime lending rate, etc. Perhaps the student has some degree of success in learning those terms, but there are many things in the text that the student was not studying yet was learning anyway, such as "carry on" when the text says "banks cannot carry on making risky loans" or something like that.

The Bob System: Tracking students for formative assessment

I am often called on to teach oral English. Unlike teaching written English, where the students submit a lot of writing samples, oral English offers less opportunity to sample the students' English ability.

My primary interest in using the Bob System and some sort of scoring system is in formative assessment. When I have a clear understanding of how the students are doing, I can try to make my training more effective in two ways.

First, are my students “getting it”? Am I helping them to learn what will be useful for them to know?

Second, I can customize my training to my students' specific needs. I may not be able to give each student individual training (that ability and technology will come in the future), but I can segment the class. I want to know who is doing well and who is doing poorly.

When I know this, I can offer extra training to those who need extra help. What about students who are doing very well in the class? Sometimes there is an academic ceiling in the classroom: bright students cannot go higher because the teacher is teaching to the "middle level" of the class. But if we know which students are doing very well, and how many of them there are, then we can focus on their needs by providing extra challenge.
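
To make the segmenting idea concrete, here is a minimal Python sketch of one way to split a class by running scores. The cutoff numbers and student names are invented for illustration, not official thresholds of any kind.

```python
def segment_class(scores, low_cutoff=60, high_cutoff=85):
    """Split a class into three groups for differentiated training.

    `scores` maps student name -> running formative score (0-100).
    The cutoffs are illustrative only; set them to fit your own
    scoring system.
    """
    groups = {"extra help": [], "on track": [], "extra challenge": []}
    for student, score in scores.items():
        if score < low_cutoff:
            groups["extra help"].append(student)
        elif score >= high_cutoff:
            groups["extra challenge"].append(student)
        else:
            groups["on track"].append(student)
    return groups

# Invented scores for illustration
print(segment_class({"Li Wei": 52, "Chen Yang": 74, "Wang Fang": 91}))
```
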
Tracking this sort of information can be very useful in other ways too, as in Action Research in the classroom. If you are monitoring many aspects of a student's performance and you have a student who always participates correctly, does the pairwork, groupwork and homework, listens and doesn't goof off, but does not seem to progress in their English from one term to the next, that would raise some very good questions for the teacher.

Of course, finally, the data that is collected can help in summative assessment. The teacher does not need to rely simply on a final exam for a score; the teacher will have a multidimensional way to look at the students.
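
As a rough sketch of what a multidimensional score could look like in practice, here is one possibility in Python. The dimensions and weights are purely illustrative, not a recommendation; choose your own mix.

```python
def summative_grade(components, weights):
    """Combine several assessment dimensions into one course score.

    Both dicts are keyed by dimension name. The dimensions and
    weights used below are hypothetical examples.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(components[k] * weights[k] for k in weights)

print(summative_grade(
    {"quizzes": 78, "speaking tests": 82, "homework": 90},
    {"quizzes": 0.4, "speaking tests": 0.4, "homework": 0.2},
))  # -> 82.0
```
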
[Photo: Some of my 300 college students that I taught weekly last term doing pairwork. Next term I will have 400 college students each week.]

Speaking evaluations made simple

This is a very complicated subject. It is not easy to conduct a speaking test, but I will go over just a few things about it and touch on them lightly. There are many ways to do speaking tests. I have studied them, tried some of them, and have settled on this way. It is similar to the way I was trained as an IELTS examiner, with a few differences.

QUESTIONS

Design three levels of questions.

(1) Easy questions which are answered with straight factual answers. “Where are you from?” “How long have you been here?” “What did you do yesterday?” “What do you like to do on the weekends?” These questions make little demand on the student and only very low-level students will have problems with these.

(2) Moderately difficult questions demand more from the student. These are questions asking a student to describe a city or restaurant, or to relate the story of a movie recently seen or a book recently read. "Tell me about your last holiday." "Describe your best friend."

(3) Difficult questions are those that require the student to give an opinion and justify their opinion with reasons. “Should students be required to wear school uniforms? Why?” “Should smoking be banned in all buildings? Why?”

Be aware that some questions are not only difficult to discuss in English; sometimes they are just plain difficult to discuss at all. I once designed the question, "If you had two weeks to live, what would you do?" This question was so deep that the students became extremely thoughtful in trying to give their answers, to the point that it interfered with any attempt to show fluency. Questions do not need to be so deep.

Although the question itself may be difficult at times, understanding the question should be simple. Remember, this is a speaking test, not a listening test. For example: "Given the opportunity to go on a round-the-world cruise or participate in a scientific exploration in Africa, which do you think could potentially be more beneficial for your career development?" Many low- and mid-level students would not be able to understand that question and therefore would not be able to speak on it. Make sure your questions are easily understandable.

I like to let the students ask each other the questions. This way I can focus on listening and evaluating. But I do not allow the students to prepare for the questions except for perhaps just a couple minutes before the interview.

Look me in the eye? In Western countries we have no problem looking into people's eyes when speaking to them, but this is something that Asians generally do not do. Therefore, when you conduct a speaking test with Asian students, it is best not to try to look deeply into their eyes or hold their gaze. Look elsewhere, shift your eyes around, or even just focus on your band descriptors or rubric.

EVALUATION

You can use a rubric or band descriptor to measure the student's level, such as the IELTS band descriptors or the Common European Framework.

You will notice in the IELTS descriptors that at Band 4 it says:

“Is able to talk about familiar topics but can only convey basic meaning on unfamiliar topics and makes frequent errors in word choice. Rarely attempts paraphrase.”

and then at Band 5 it says:

“Manages to talk about familiar and unfamiliar topics but uses vocabulary with limited flexibility. Attempts to use paraphrase but with mixed success.”

That is why it is important to design your interview questions with easy, moderate and difficult topics, so that the student has to try to produce a full range of English at different levels of challenge. The English of many students will begin to break down at the higher levels, and this allows you to see the limit of their English.

I put the band descriptors and all the students' names in an Excel spreadsheet. I give the student a score for each rating category (Fluency and coherence, Lexical resource, Grammatical range and accuracy) and the spreadsheet averages them into a final band score. Depending on the situation I will add formulas to work that score into a grade, average all the scores to compare one group with another, and so on.
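
In code, the spreadsheet's averaging amounts to something like the small Python sketch below. I am assuming a simple round-to-the-nearest-half-band rule here, which may not match official IELTS rounding exactly, so treat it as an illustration only.

```python
def overall_band(fluency, lexical, grammar):
    """Average the three category scores into one final band.

    IELTS reports bands in half steps, so the mean is snapped to the
    nearest 0.5. The exact rounding rule here is my assumption;
    check current IELTS practice before relying on it.
    """
    mean = (fluency + lexical + grammar) / 3
    return round(mean * 2) / 2  # snap to the nearest half band

print(overall_band(5.0, 6.0, 6.0))  # -> 5.5
```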

ACCURACY

The more realistic the task is (talking naturally about a topic the student may actually need to discuss, rather than some sort of T/F or multiple-choice exercise), the more difficult it is to test. So this sort of test will always be subjective, affected by your personal judgment of the student's performance.

One thing that helps is to base your judgment as closely as possible on the rubric or band descriptors you are using. You should never compare students to each other; that will lead you off track. Always compare to your chosen rubric.

I always record my test interviews. A couple of days later I will listen to some of the interviews and rescore them without looking at the score I gave the first time. If there is a strong correlation, that is good. If you find that you are scoring much differently the second time, then you need to try to understand why, and you may even need to rescore all your interviews. It happens that you can be in a certain mood that causes you to score differently. (Another good reason to record is to contribute to a record of the student's progress.)
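
If you would rather quantify "a strong correlation" than eyeball it, a Pearson correlation between the two sets of scores is one way to do it. Here is a self-contained Python sketch; the example bands are invented.

```python
from statistics import mean

def pearson(first_pass, second_pass):
    """Pearson correlation between first and second scorings.

    Near 1.0 means your two passes agree well; a low value is a
    signal to investigate and perhaps rescore all the interviews.
    """
    mx, my = mean(first_pass), mean(second_pass)
    cov = sum((x - mx) * (y - my) for x, y in zip(first_pass, second_pass))
    var_x = sum((x - mx) ** 2 for x in first_pass)
    var_y = sum((y - my) ** 2 for y in second_pass)
    return cov / (var_x * var_y) ** 0.5

# Invented example: bands from test day versus the blind rescore
print(pearson([5.5, 6.0, 4.5, 7.0], [5.5, 6.5, 4.5, 7.0]))  # ~0.97
```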

IELTS research has even shown that male interviewers will sometimes give attractive female candidates a slightly higher score, which leads to inaccuracy. If the interviewer is tired, sleepy or hungry, or has scored several high-level students in a row and suddenly gets a low-level student, it can affect his accuracy. To run an effective test you need to be aware of all of these things and try to guard against them affecting your judgment.