Thursday, 4 May 2017

Student Feedback dos and don'ts

Which of the following do you think has the biggest impact on 'student evaluation of teaching' (SET) feedback?
  1. How hard the course is 
  2. The grade the student gets 
  3. The teacher’s gender 
  4. The teacher’s personality 
  5. How ‘hot’ the teacher is 


Ready...SET...


There is a ton of research into SETs, starting over 80 years ago (Clayson 2009) and including, as of 1990, over 2,000 articles (Feldman in Felder 1992). The literature includes several meta-analyses and even one meta-analysis of meta-analyses (Wright & Jenkins-Guarnieri 2012). In short, it is a well-researched field. Since university professors' careers can depend on these evaluations, perhaps this isn't surprising. Despite the large body of research (or perhaps because of it?), the science is not settled (Spooren et al 2013). 

There are, however, some general observations which can be made, reasonably confidently, about the effect of certain variables on SETs. So, according to the literature,* which factors have the biggest effect on student feedback? What follows is my handy list of dos and don'ts to improve your student feedback.

Do be likeable!

One of the variables which correlates highly with positive student feedback is personality: there is a substantial relationship between a teacher’s personality and the feedback they will be given (Feldman 1986, Cardy and Dobbins 1986, Williams and Ceci 1997). Foote et al suggest that “[instructors] who score highly on evaluations may do so not because they teach well, but simply because they get along well with students” (2003:17). One researcher writes that personality is such a strong predictor of SET results that "the SET instrument could be replaced with a personality inventory with little loss of predictive validity” (Clayson online). 

There also seems to be something of a halo effect at work with SETs. Basically, one positive attribute (good looks, say) may cause people to believe other positive things about a person (that they are trustworthy, for instance). This is the reason attractive criminals receive shorter prison sentences than less attractive ones for the same crime. This means that student opinions of personality might colour other variables, and subsequently ‘likeable’ teachers may be judged positively in areas unrelated to ‘likeability’, such as teaching ability or professionalism. 


Does attraction affect scores?
This is problematic because it means the feedback you get will be tainted by the student's general opinion of you. The picture on the left shows some feedback I recently received. Clearly the student had a high opinion of my teaching. Ho-hum.

The last column asks how useful the virtual self-access centre (VSAC) was; the student has written 'very useful'. Now, being the teacher of the course, I can say with some confidence that I said not a word about the VSAC, nor did any part of the course use it. Studies seem to corroborate this phenomenon, showing that students are more than happy to report false information to either reward or punish teachers (Clayson & Haley 2011). It should be noted that the halo effect also works in reverse, so whatever happens, don't be disliked! 


Do be hot! 

Company promotes bribery
There is evidence that teachers who are perceived to be physically attractive tend to score more highly than their plainer colleagues. Riniolo et al (2006) found a 0.8-point advantage on a 5-point scale for ‘hot’ teachers. After analysing the ratemyprofessor.com website, where teachers can be given a ‘hot’ rating, Felton et al (2004) found that ‘sexy’ teachers generally rated more highly than ‘non-sexy’ teachers. The authors note:


If these findings reflect the thinking of American college students when they complete in-class student opinion surveys, then universities need to rethink the validity of student opinion surveys as a measure of teaching effectiveness (91).

Do be expressive!


Despite various methodological flaws, the landmark ‘Dr. Fox’ studies (Naftulin et al. 1973) created interest in the question of the validity of SETs and what exactly it is that students are assessing when they complete feedback. In this study (see the actual study in the video below), an actor lectured a group of medical students with a largely meaningless talk that he had learnt the previous day. The students were told the speaker, Myron Fox, was an expert in 'game theory'. 





The actor’s expressiveness and charm were seemingly enough for him to receive positive feedback from three separate audiences. Later researchers showed that even the meaningless talk was unnecessary. Ambady & Rosenthal's (1993) “thin slice” study asked students to evaluate teachers based on a silent 15-second clip of them teaching. The authors found a remarkable similarity between the term-end evaluations and those made after watching the short clips: 15 silent seconds was enough time to give an 'accurate' evaluation of the teacher. 


Do be a man!



Russell is annoying, his class is boring 
Researchers tend to agree that gender plays a minor role in overall evaluation. That is, one gender is not consistently rated lower than the other. In fact, “when significant differences were found, they generally favoured the female teacher” (Feldman in Pounder 2007). So what does 'be a man' mean? Well, despite this seeming equality, different genders may be rated on the basis of stereotyped views of gender (Laube et al 2007). For example, the most highly scoring men were described as ‘funny’ whereas the lowest scoring men were ‘boring’. In contrast, the highest scoring women were ‘caring’ whereas the lowest scoring were either 'too smart' or 'not smart enough', or were simply a ‘bitch’ (Sprague & Massoni 2005).


There is also the question of whether a male teacher has to work as hard to get a top SET score as a female teacher. Women may suffer from the ‘Ginger Rogers effect’. That is "Ginger Rogers, one-half of the famous dance-team of 1930s movies, had to do everything Fred Astaire did, only she had to do it backwards and in high heels" (Sprague & Massoni 2005:791).  


Do grade generously!


There is a reasonably strong correlation between the grade, expected or real, and the type of feedback a teacher gets. This correlation can be summarised thus, “to put it succinctly, university teachers can buy ratings with grades” (Hocutt in Pounder 2007:185). 

The highest rated prof on RateMyProfessor.com
Clayson (online) notes that in his research 50% of students asked admitted purposefully lowering or inflating feedback grades as retribution or reward, and adds that whether or not grades actually affect scores is perhaps less important than whether faculty believe this to be the case, as the belief is potentially enough to alter the way grades are given. Pounder backs this up, noting that “many university teachers believe that lenient grading produces higher SET scores and they tend to act on this belief” (Pounder 2007:185). However, it should be noted that this is something of a controversial area, with a large number of studies finding no relation between SET scores and grades (see Aleamoni 1999).


And if this isn't enough...

Here are a few more killer tips taken from the literature (Pounder 2007):

Do
  • bribe students with food 
  • let students leave early 
  • praise the class on its ability before doing SETs 
  • do the SETs when the weak students are absent 
  • do a ‘fun activity’ before the SETs 
  • stay in the room 
  • teach small classes 
Don't 


Not convinced yet? 


Here's a satisfied customer's testimony, from a remarkable paper published under the pen name "A Great Teacher". This teacher, faced with the prospect of losing his job over poor SETs, decided to throw out his morals and aim for good ratings. He stopped being such a 'tough' teacher and 'sucked up' to the students instead, making the course easy and trying to build rapport with his students:
What were the results of my experiment? The consequences for learning were not good. Students did less well than expected even on deliberately easy quizzes. Their final exam papers proved to be among the worst I had seen in years. Most students displayed only a superficial knowledge of the material. It was clear that some had concluded that with a kinder, gentler me, one didn’t need to work as hard. Although the pedagogical consequences were poor, the results for me were great! My [SET] scores went through the roof (2010:495-6)
And so, armed with this information, you too can become a well-loved teacher. Alternatively, you can treat student feedback with the caution it probably deserves.







* Seldin (2010) suggests “one can find empirical support for any common allegation pertaining to student ratings” (in Hughes and Pate 2013:50). It's also worth noting that all of this research was carried out (like much research) on American university students. There has been very little research carried out in this area on FL students.

Sunday, 23 April 2017

Red flags: Assessing sources

This was originally posted on 'The Scarlet Onion' webpage, which has since ceased to exist :( 

I once had an online 'discussion' with a chap claiming that the twin towers were, in fact, brought down by the US government and that 9.11 was all an inside job. He sent me a link to, in his words, a ‘peer reviewed academic journal article’ to back this theory up. 

The link led to the ‘Journal of 9.11 Studies’. One of the editors was Kevin Ryan, who coincidentally was also the article’s author. Editors do occasionally write articles for their own journals, but still...I was dubious. So I googled Kevin Ryan and found that he’d written a book on why 9.11 was actually 'an inside job' and had been fired from his previous job for his views. The article was about nano-thermites, a type of explosive (which he said the government used to blow up the twin towers), and yet Ryan was working on water testing. Things didn’t add up. 

Red flags
Imagine going to a restaurant and seeing no customers, the paint peeling and smoke coming from the kitchen. None of this necessarily means the food there is bad (hey, it might be great), but these are all the kind of things which Dorothy Bishop refers to as ‘red flags’. These red flags can act as a kind of early warning system. 

This brings me to a conversation I had on Twitter earlier in the year about the relative pros and cons of learning styles/MI and the Montessori Method. I was sent a link to a series of educational articles by a teacher who thought they supported her case. When I read them I noticed a number of red flags. I thought it would be useful to blog about these, and hopefully this will be useful for other teachers assessing the quality of sources. So here are a few things to look out for.

Mode of publication
Is the article in a book or on a website or in a journal? 

This matters because anyone can say anything in a book or on a website. Books often seem impressive, as if somehow putting something in a book makes it more weighty and serious. Usually, academic journals, which have been through peer review, are more likely to contain credible information than books. Websites are almost always a no-no, as there is usually zero quality control. Disclaimer: of course, it always depends on exactly what you are looking for and what kind of website/book it is.

What about the papers that I was directed to look at? They seem to be on a website, which is an initial red flag. However, when we get to the articles in the download section, they actually seem to be from real journals, namely the MASAUM Journal of Reviews and Surveys (volume 1), MASAUM Journal of Open Problems in Science and Engineering (volume 1) and the International Journal of Engineering Research and Applications (volume 2); so far so good. But (first red flag) why publish education articles in journals of engineering? 

Dubious journals  
So these articles are in journals but what kind of journals?

Ideally the journals would be peer-reviewed and well established. The first two of these journals fail the ‘well-established’ test, as they are seemingly only on their first or second volumes and are not available anywhere on the web. The third one actually does exist and looks (roughly) like a journal should. The site looks a bit cheap and unprofessional (a Gmail address for submissions, etc.) but it exists, which is a start.

Pay for play
A little bit of digging around the FAQ section of the third journal and we find this:

Q: How much do I have to pay for publication fee
A: $150

Paying for publication isn’t a great sign. As it’s an open-access journal (anyone can view the articles for free), this isn’t necessarily a bad thing, as long as the quality control isn’t affected. If we go to the ‘stats’ page, though, we can see that they accept roughly 40% of all submissions. This is quite high considering journals like ‘Applied Linguistics’ accept only about 10% of submissions. It’s also notable that they record stats monthly, and from those we can see they accept about 250 papers each month, which means each volume has well over 300 articles. For context, that’s about 7 years’ worth of ELTJ articles in one volume. 

Peer-reviewed
Peer review is sometimes mistakenly thought of as the ‘end’ of the process. But actually, all peer review means is that a couple of other people in the profession have read the article and think it’s good enough to be published. This doesn’t mean it’s perfect, or that no one can question it, just that it has reached a certain level of acceptability. After peer review, the academic community at large gets its teeth into it, and then we often see criticisms, repeat studies and sometimes retractions. The International Journal of Engineering Research and Applications is actually peer-reviewed, so that seems pretty reassuring. Except, considering they boast that peer review takes only 4-6 days (not the couple of months journals usually take), and considering they read about 800 papers a month with only about 27 reviewers, they are getting through those papers at an astonishing rate! You have to worry about quality control. 
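To see just how astonishing that rate is, here's a quick back-of-the-envelope calculation (a purely illustrative sketch in Python, using the rough figures the journal itself publishes):

```python
# Back-of-the-envelope check of the journal's claimed throughput,
# using the approximate numbers quoted above.
submissions_per_month = 800   # papers the journal says it reads each month
acceptance_rate = 0.40        # the 'stats' page reports roughly 40% accepted
reviewers = 27                # listed reviewers

accepted_per_month = submissions_per_month * acceptance_rate
papers_per_reviewer = submissions_per_month / reviewers

print(f"accepted per month: {accepted_per_month:.0f}")              # ~320
print(f"papers per reviewer per month: {papers_per_reviewer:.1f}")  # ~29.6
```

In other words, each reviewer would be handling roughly 30 papers a month, each supposedly turned around in 4-6 days, and the journal would be accepting over 300 papers a month. Whatever 'peer review' means here, it can't mean much.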

The author 
If you’re at all in doubt you can always check out the author’s credentials. According to his website Dr. Qais Faryadi (or ‘Dr. Prince’ as his website calls him) has a PhD in computer science and a master’s in Sharia Law as well as being an expert in curriculum design, criminal law, software engineering and Islam. He lectures on all these subjects and has even published a number of books (or rather ebooks) on such diverse topics as teaching, Islam and “Magnesium The Health Restorer: The Missing Link To Recovery”. 

The nose-test
When you actually start reading the paper, does it sound plausible? Does it read like an academic paper should? 

Dr. Prince starts his learning styles paper with the statement that:

This evaluation examines teaching and learning from the lenses of mind blowing scholars such as David Kolb, (1984), Honey, (1982), Dick and Carey model (1990), Anthony, Sudbury Model, VAK Model and Madeline. (Faryadi, 2012:222)

Language like this should set alarm bells ringing instantly. This paper isn’t going to be an unbiased review of learning styles, not when the writer believes their creators to be ‘mind blowing’. Not only that, the article is littered with errors. In this small section alone we have such oddities as:

  • ‘Honey’ should be ‘Honey and Mumford’ (as in the reference section)
  • He starts by talking about ‘scholars’ but then switches to ‘models’
  • Anthony? Who he?
  • Madeline? Who she?


Wouldn’t peer review usually sort out errors like these? Then again, with each reviewer getting through 30 or so papers a month in 4-6 days each, it’s perhaps not surprising that the quality suffers.

The article itself is a relatively unremarkable laundry list of things the author believes ‘good teachers’ should do. There is no attempt to critically engage with the various learning styles models presented or to talk about why and how they differ from each other.

So, in conclusion, we have sloppy education articles written by an expert in computer science and law who claims expertise in a number of other fields. They were published in science and engineering journals that either no longer exist or have very little reputation. The one journal that does exist charges for publication and has numerous credibility issues.


The food may be great here, it really might…but I’m going to look elsewhere.