On Sunday 10 May I flew to Finland for a week of fact-finding. My mission: to learn from Finnish education leaders and teachers what it means to run a world-class education system. Over the course of several articles, I will recount my experiences and share my findings…
These articles are written in the form of diary entries based on the scribbled notes I made during various meetings with colleagues. Any errors and inaccuracies are entirely my own. I’d like to thank all the colleagues I met for their generosity and kindness. They made me feel most welcome in their country and selflessly gave me their time and expertise. I have chosen not to name these colleagues because I do not want to attribute opinions to them which they may not wish to be in the public domain. They were disarmingly honest and yet utterly professional at all times.
Tuesday 12 May (cont’d)
My first meeting of the day on Tuesday concerned quality assurance. I wanted to find out how the Finnish government ensured that schools and colleges provided both quality and value for money, and how individual organisations quality assured their own provision and each of their teachers.
I met K who is responsible for the process of quality assurance for the Espoo region. She kept apologising for her poor command of English and yet her English was near-perfect. Like many of the colleagues I met during my visit, she was humble and very honest about the limitations of – and faults with – the system in which she worked. I assume that this humility and honesty is a result of working in an autonomous, trusting system. Our high-stakes system of inspection – where rhetoric defeats reality – breeds a different sort of mindset, one where cracks are papered over and everything is portrayed in the best possible light.
We began by talking about state controls. I explained the Ofsted model to K and asked how Finland compared. Her short answer was: it doesn’t. I learnt that there is no inspectorate and the state does very little to interfere with, judge, or report on individual schools and colleges. The government and Board of Education are very ‘hands-off’. The system is built on high levels of trust and autonomy. But it wasn’t always this way. I gathered from K that there used to be more central control but that, as the education system improved, the government shrank back.
K, like many of the people I met in Finland, shook her head in bemusement at the very notion of Ofsted and expressed not a little relief that Finland didn’t operate such a system. Some other Finnish colleagues I met on my trip felt that our system of high-stakes inspection was a sign of distrust in teachers and education leaders, and not at all in keeping with being part of a respected profession.
Although Finland doesn’t have an inspectorate, that is not to say the government does not evaluate schools and colleges or set expectations about educational standards. I learnt, for example, that the Ministry of Education measures schools and colleges against a range of efficiency indicators which cover aspects of performance such as financial management, how individual institutions develop their staff, and how easily students get work once they graduate. Colleges are then publicly ranked on the Ministry’s website according to their efficiency levels, and higher performing institutions receive more government money than lower performing ones. This ranking, I was told, was regarded as being fair because it showed how effectively schools and colleges used public funds – the feeling was that taxpayers deserved to know what was being done with their money. But the government does not make judgments about the quality of teaching or leadership, nor does it involve itself in judgments about behaviour and safety. Quality assurance is the job of individual institutions.
In 2015, all 189 vocational colleges in Finland had to self-assess their quality systems and submit their findings to the Ministry. This submission was followed by external audits of a sample of 35 colleges – carried out as peer reviews – designed to make sure the self-assessments were valid and consistent across all institutions, in much the same way as we would moderate a sample of controlled assessments or exam papers. Finland’s Education Evaluation Centre now plans to make these sorts of assessments every third or fourth year.
The week I returned from Finland, we had a QAA review in our college. This is the Higher Education equivalent of Ofsted. And yet not. The system of ‘inspection’ for university-level provision is much more akin to Finland’s system of self-assessment and peer review.
In preparation for our visit from the QAA (whose reviewers are really peers from other institutions), we had to write a Self Evaluation Document (SED) and provide a range of supporting evidence. This was then reviewed over the course of three days in the form of meetings with QAA reviewers. Lessons were not observed and there were no interrogations. Instead, we sat in a room with reviewers and were gently probed to test out the judgments in our SED. The judgments were triangulated at all levels by talking to senior leaders, teachers, support staff and students. In fact, students played a significant role in the review.
Speaking as an old hand when it comes to Ofsted inspections (both in schools and in FE colleges), the process of Higher Education review felt, in comparison, professional, mature and supportive.
This is what the Finnish system of self-assessment feels like, too. It is designed to help schools and colleges assess their own provision with a view to improving it, not judging it. No arbitrary grades are attached to any description. It feels a lot more grown-up somehow.
The self-assessments carried out by colleges in Finland are made against 152 descriptors, each scored as either missing, beginning, developing, or advanced. The descriptors are taken from the European EFQM 2013 standards.
The purposes of the EFQM standards are: to add value for customers; to create a sustainable future; to develop organisational capability; to harness creativity and innovation; to lead with vision, inspiration and integrity; to manage with agility; to succeed through the talent of people; and to sustain outstanding results. The overarching EFQM criteria are as follows: Leadership; Strategy; People; Partnerships and Resources; Processes, Products and Services; Customer Results; People Results; Society Results; and Business Results.
Colleges must assess each criterion and provide documentary evidence to support each assessment. Each of the above criteria is marked out of 100 points, with the exception of Leadership and Strategy, which each carry 150 points, so that the total score is out of a round 1,000:
Leadership = 150 points
Strategy = 150 points
People = 100 points
Partnerships and resources = 100 points
Processes, products and services = 100 points
Customer results = 100 points
People results = 100 points
Society results = 100 points
Business results = 100 points
Total = 1,000 points
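To sanity-check the weighting above, the criteria and point values can be transcribed directly from the list. The names and weights below come from the article; the scoring helper itself is a hypothetical illustration, not any official EFQM tool.

```python
# The nine EFQM 2013 criteria and the point weightings described above.
# Criterion names/weights transcribed from the text; the helper is
# an illustrative assumption only.
EFQM_WEIGHTS = {
    "Leadership": 150,
    "Strategy": 150,
    "People": 100,
    "Partnerships and resources": 100,
    "Processes, products and services": 100,
    "Customer results": 100,
    "People results": 100,
    "Society results": 100,
    "Business results": 100,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Scale per-criterion scores (0.0-1.0) by their point weighting."""
    return sum(EFQM_WEIGHTS[name] * scores.get(name, 0.0)
               for name in EFQM_WEIGHTS)

# The two 150-point criteria plus seven 100-point criteria
# give a round total of 1,000 points:
assert sum(EFQM_WEIGHTS.values()) == 1000
```

A college scoring full marks on every criterion would therefore reach exactly 1,000 points, which is why Leadership and Strategy carry the extra 50 points each.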
Within each college, self-assessment is carried out by a group of representatives from all areas of the organisation as well as by occupational groups; each makes a self-assessment every year which feeds into a college-wide judgment and informs its annual development plan. The guiding principles behind this self-assessment activity are ‘confidence’ and ‘accountability’.
With the exception of the benchmarking activity I describe above which is carried out by college leaders peer-reviewing other colleges, there is no external checking of a college’s own assessment. The system is run from a position of trust and autonomy. As such, quality assurance is seen as a developmental process and honest judgments lead to frank reflections and improvements.
The college I visited also has a ‘balanced scorecard’ which is populated at subject level and then feeds into a college-wide scorecard. The scorecard contains a number of measures including – though not exclusively – the ones the Board of Education uses in its efficiency ratings. The scorecard shows the current year’s goal versus actual results, as well as the previous few years’ worth of results to give a sense of progress/emerging trends.
The measures in the scorecard I saw were as follows:
- Development discussions
- Teacher work experience in last 5 years
- Sickness absence (days per person)
- Retirement age on average
- Teachers’ required qualifications
- Personal development
- Student survey results – average score
- Staff survey results – average score
- International mobility
- Drop-outs (retention)
- Graduates (achievement)
- Employment (destinations)
- Further studies (progression)
- Operating margin
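The scorecard structure described above (current goal versus actual result, with previous years kept for trend-spotting) can be sketched as a simple data structure. This is purely an illustrative assumption about how such a record might be modelled; the measure name is taken from the list, but the figures and the trend rule are invented for the example.

```python
# Hypothetical sketch of one balanced-scorecard measure as described
# above: this year's goal vs actual, plus prior years for trends.
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    goal: float                 # this year's target
    actual: float               # this year's result
    history: list[float] = field(default_factory=list)  # previous years

    def trend(self) -> float:
        """Change on last year's result (positive = improving)."""
        return self.actual - self.history[-1] if self.history else 0.0

# e.g. a subject-level retention entry (figures invented)
retention = Measure("Drop-outs (retention)", goal=92.0, actual=90.5,
                    history=[88.0, 89.5])
```

Subject-level measures like this would then be aggregated into the college-wide scorecard, mirroring the bottom-up flow the college described.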
A number of benchmarking activities take place each year to help populate and validate the scorecard, including peer reviews, internal subject audits, and the robust gathering of feedback from customers such as students and staff (surveyed at key points, three times a year), adult education users, labour market training users and apprenticeship training users, as well as surveys of workplaces and interest groups.
As with self-assessment, the scorecard results are evaluated and used to inform future improvements and to refine the models and activities used in subsequent scorecard and quality assurance cycles. The results are also analysed at subject level and, on that basis, each subject area plans how to improve its results the following year. Individual subject leaders are responsible for their scorecards and take ownership of their own improvements.
Some of the internal audits follow themes such as: improving retention; improving achievement; and improving personal study plans. The themes are determined by what the evidence suggests needs to be improved most urgently.
Next time, I will share my observations of teaching and learning.
TO BE CONTINUED…
Follow me on Twitter: @mj_bromley