Today I was lucky enough to get to visit Durrington High School. It was a great event and provided a lot of food for thought, and some decent lasagne. I haven’t been to an educational research conference before (or started a blog), but I have spent some of my career working on various action research projects myself, and turning to Twitter to find solutions to the problems our department faces has been refreshing and immediate.
I am just going to pick out some of the most important messages I took from the speakers I saw and try to see how they will change our department’s policy moving forward into next academic year.
Professor Daniel Muijs’ keynote address was valuable in many ways. I found it most comforting that a person who represents Ofsted was speaking from such a rational and informed position. Those of us who have been in the profession for over a decade will remember times when this was not always the case!
An area that many speakers mentioned, and one of which I have only a surface-level understanding, is Rosenshine’s work on instruction. This is something I want to bring into our department more deliberately, probably by using Oliver Caviglioli’s (@olivercavigliol) excellent summary.
It’s going to be important to avoid ‘plantation thinking’ and give my staff the freedom to adopt the ideas in a way that works for the students they teach, so I need to ensure my team understand the mental models that are needed beneath the actions (more on this later).
A major theme of the talks I went to was the limitations of assessment, probably due to my need for confirmation of my own cognitive biases. There were some great explanations of the value of assessment, of reporting to parents, and of issues with the system from both Tom Sherrington and Professor Becky Allen. Highlights included reminding teachers about the nature of the bell curve and how 50% of students have to “fail” in our education system because of the emphasis on “pass” grades. Professor Allen also demonstrated the power of noise in assessments using EEF trial data at KS2 – with a daughter just approaching KS1 SATs this was fascinating CPD for multiple reasons.
The key challenges we face as a department with assessment are:
- The reliability of the data we produce
- How we use the information to help inform the students of knowledge and/or skills they need to improve on.
- Creating the appropriate climate for the assessment so that it achieves the job we want it to achieve.
So I’m going to try to put some of their thoughts into these areas:
The reliability of the data we produce
We are fortunate as part of United Learning to be able to utilise the services of Ben Littlewood, United Learning’s science adviser. He has created a broad and challenging KS3 curriculum and the associated assessments to go along with each topic. The assessments are long (50 mins), so they provide a good overview of the content and a selection of the ‘working scientifically’ skills. This means we are able to give a decent indication of the attainment of the students we teach in a given topic on that day.
How we use the information to help inform the students of knowledge and/or skills they need to improve on.
Here I have made some mistakes, I admit: I once designed a test feedback sheet mainly for the purpose of informing my SLT that marking was happening. Luckily things are changing and the advice from higher up the food chain is sensible. I want us to bring some improvements to the post-test feedback lesson. I’ve been trialling the use of specifically designed low-stakes tasks that practise the application of the specific knowledge domain covered by the test. The idea is that the test feedback takes an entire lesson: 30 minutes of feedback using ‘show call’ and modelling of the more complex aspects of the questions, after which the students complete short tasks aimed at the areas they found difficult, which they will know from the table on the front of the paper. I’m hoping this will inform students and staff of the areas of strength and weakness in each topic.
Creating the appropriate climate for the assessment so that it achieves the job we want it to achieve.
“Teacher accountability is the enemy of inference”, Prof. Becky Allen.
These words really hit home for me. I am quite preoccupied on a day-to-day basis with trying to ensure the accountability machine plays no part in how my staff go about doing their job. We have a moral responsibility to the students to ensure they receive the best education we can give them, but too often the QA process creates a climate of scrutiny over support. If we create an assessment culture that is high stakes for the staff, then ultimately staff will teach to the test and we will not get an accurate picture of the students’ knowledge domain, as no test can cover all content. At the same time we need to raise the stakes of the testing for the students. This can be a double-edged sword, given vulnerable students’ propensity to quit when the going gets tough and to embrace the certainty of failure over the risks of trying and falling short. I think our best way of achieving this balance is to ensure the students have everything they need to revise and to encourage this, but to maintain a language of growth and support when administering and feeding back the test. We are starting to embrace knowledge organisers (KOs) this year, and using pre-release questioning to demonstrate that learning just takes effort will be a key part of our homework policy.
Peps Mccrea gave a fascinating insight into the concept of expertise in teaching. The parts that really resonated with me were the concept of experts’ use of energy and the expertise pyramid model (which I forgot to get a photo of).
His message that experts expend much less energy on the mundane and apply their focus to unexpected areas, to ascertain what will happen next, was really insightful. This will help when working with staff to try and manage more challenging classes, and it could be a good question stem for coaching conversations with strong teachers, to get them to consider their practice at a deeper level. I found myself reflecting on the work of LeBron James, arguably the world’s best basketball player, an expert in making the right pass at the right time.
His other main message to me was the idea that you have to have a solid foundation in mental models (pedagogy, psychology, subject specialism and your emotions) to be an expert. This resonated with me as I have worked on projects involving changing students’ mental models, but I had never thought about it as a reason some staff find it hard to take on certain techniques and gain positive impact from them. This may also explain why coaching works so well in developing teachers: they themselves are analysing their own models and adapting them. This is something I will have to remember when working with developing teachers.
My final session was focused on questioning by Sarah Donarski. Whilst there were areas of TLaC 2.0 and other familiar ideas from her blog, I found the discussion and debate aspects very interesting. As we have moved to low-stakes testing as ‘do now’ tasks, we have struggled with ‘locked-out’ students who refuse to try. I personally feel this is compounded by the idea of books as a proxy for learning. Students get neatness drilled into them and therefore become afraid to make mistakes. So I think as a department we need to counter those ideas both verbally in our instruction and in the type of task we set. I am going to pilot some good old-fashioned match-up style activities for the first level of recall with reluctant learners, to see if I can take advantage of the Ben Franklin effect and generate some momentum to then get them into more traditional questions.
So those are the main takeaways for me. It was an excellent event, and it’s nice to have so much of my improvement plan in place for next year already.