
Beyond the Top 100: Unpacking Australia’s School Rankings

In the latest NAPLAN results, Tasmania, the Australian Capital Territory, and the Northern Territory found themselves without a single school in this year’s coveted and much-reported “Top 100 schools list”. For many in these communities, this revelation has sparked questions about their education systems, the work of teachers, and broader implications of poor literacy and numeracy skills.

In this post, I delve into the nature of lists like the “Top 100 schools” to shed light on whether education ministers, school leaders, and teachers in some states and territories should be losing sleep over this phenomenon.

What Does It Mean to Be in the “Top 100”?

For a school to be ranked in the highest scoring 100 primary or secondary schools means it has outperformed the vast majority of Australian schools when you lump together the scores of all their students across all five domains of NAPLAN (i.e., reading, writing, spelling, grammar and punctuation, and numeracy). This does not say a great deal about individual student performance (you can have high-scoring students at any school) but simply reflects a given school’s average score across all tests for all its students.

As I’ve discussed in research articles and other blog posts (like this one), it’s worth noting that schools with more female students have tended to fare better statistically when you aggregate all students and scores in this way. This is because four of the five NAPLAN domains test literacy skills, and there are substantial gender gaps in favour of girls for all literacy tests (a phenomenon found in countries across the world). More girls generally mean higher average literacy scores for schools, resulting in higher aggregate NAPLAN scores.
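To make the arithmetic behind this effect concrete, here is a minimal sketch in Python. Every number is invented purely for illustration (none of it is real NAPLAN data); the point is simply that weighting domain means by a school's gender mix shifts the aggregate even when no individual student's performance changes.

```python
# Illustrative only: every number below is invented to show the arithmetic,
# not drawn from real NAPLAN data.

DOMAINS = ["reading", "writing", "spelling", "grammar_punctuation", "numeracy"]

def school_aggregate(domain_means):
    """Average a school's mean scale scores across the five NAPLAN domains."""
    return sum(domain_means[d] for d in DOMAINS) / len(DOMAINS)

def blended_means(prop_girls, girl_means, boy_means):
    """Weight girls' and boys' domain means by the school's gender mix."""
    return {d: prop_girls * girl_means[d] + (1 - prop_girls) * boy_means[d]
            for d in DOMAINS}

# Hypothetical cohort means: girls ahead on the four literacy domains,
# boys slightly ahead on numeracy.
girls = {"reading": 560, "writing": 570, "spelling": 565,
         "grammar_punctuation": 562, "numeracy": 548}
boys = {"reading": 545, "writing": 540, "spelling": 550,
        "grammar_punctuation": 546, "numeracy": 556}

for share_of_girls in (0.4, 0.5, 0.6):
    aggregate = school_aggregate(blended_means(share_of_girls, girls, boys))
    print(f"{share_of_girls:.0%} girls -> school aggregate {aggregate:.1f}")
# The aggregate rises with the share of girls, even though no individual
# student's performance has changed.
```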

So Which Schools Tend to Make the Cut?

Perhaps unsurprisingly, given their populations of ~8.3 million and ~6.8 million respectively, New South Wales and Victoria dominate every year’s “Top 100 schools list”. For 2023, these states accounted for 86 of the top primary schools and 79 of the top secondary schools. Queensland and Western Australia fought for the scraps (with just 6 of the highest-scoring primary schools each, and Queensland having 12 of the highest-scoring secondary schools compared with 8 in WA). South Australia snuck in with a couple of primary schools and one secondary.

Dominating the list each year are elite private schools and selective public schools. Elite private schools can charge upwards of $50k per child per year in NSW and VIC and this quite remarkable level of resourcing affords smaller class sizes and (as many of their websites suggest) more differentiated and personalised learning. Selective public schools are free to attend but only admit the highest achieving children from a given community (hence being selective). This enrolment process results in schools made up of essentially only high-achieving children, leading to higher-than-usual results on NAPLAN for these schools when all students are lumped together.

Meanwhile, Tasmania, the Australian Capital Territory, and the Northern Territory have historically struggled to secure spots in such top school lists. This can be explained by various factors, with the most obvious being their minuscule populations (~572k, ~466k, and ~252k respectively), private schools with comparatively low fees and resourcing, and the absence of selective schools that filter in the “best and brightest”.

Tasmania and the Northern Territory also face several contextual challenges, making their absence from top 100 lists easier to understand. With smaller populations come fewer high-performing students to push up the topmost extreme. Moreover, the prevalence of students attending schools in regional, rural, and remote communities, who statistically perform lower than their urban counterparts, further impacts the overall NAPLAN outcomes of schools in these areas.

Should We Worry About “Top 100 Schools” Lists?

With all this said, should there be concerns everywhere apart from NSW and VIC about lists like this each year? I don’t think so. For a school to make the grade in a region that does not have the most elite of private schools or selective schools would be quite remarkable when you consider the schools with which it is competing.

Personally, I think ACARA’s list of high-performing schools in each state and territory is more useful than the top 100, since this does take into account socio-educational advantage to highlight the schools around the country that are punching above their weight classes when it comes to NAPLAN outcomes.

Given that many education departments, Catholic Education Offices, and independent schools around the country have recently implemented policies that reflect decades of research into effective literacy and numeracy instruction, it will be interesting to see whether the results of these changes flow through to higher NAPLAN scores from next year.

Analysing the 2023 NAPLAN Test Results: A Whole New World of Testing

The release of NAPLAN test results always sparks conversations and debates about the state of education and student performance. The 2023 results are no exception, but this year’s results come with a twist that makes comparisons even more intriguing. With NAPLAN testing shifting two months earlier in the school year (from May to March), it was expected that student results would be lower than in previous years (when students had two additional months of learning before the tests). Of course, it doesn’t make much sense to compare the test results in 2023 with any of the previous tests (which began in 2008), since we would be comparing ripe apples with slightly less ripe apples. But I decided to go ahead and compare the 2023 results with the 2022 results anyway, just for fun, split by gender. Some of the findings were definitely unexpected!
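For readers who like to poke at the numbers themselves, here is a rough sketch of the kind of year-on-year comparison described above. The file name and column layout are hypothetical stand-ins (ACARA publishes national means in reports, not in this exact format), so treat this as a template rather than a working pipeline.

```python
# A rough template for the 2022-vs-2023 comparison described above. The file
# "naplan_means.csv" and its columns (year, domain, year_level, gender,
# mean_score) are hypothetical stand-ins for the published national means.
import pandas as pd

df = pd.read_csv("naplan_means.csv")

# Put the 2022 and 2023 means side by side for each domain / year level / gender.
wide = df.pivot_table(index=["domain", "year_level", "gender"],
                      columns="year", values="mean_score").reset_index()
wide["change"] = wide[2023] - wide[2022]

# Flag the surprising cases: groups that scored higher despite sitting the
# test two months earlier in the school year.
improved = wide[wide["change"] > 0].sort_values("change", ascending=False)
print(improved[["domain", "year_level", "gender", "change"]])
```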

Reading and Writing: A Mixed Bag

The 2023 reading results for male and female students in every tested year level (i.e., Years 3, 5, 7, and 9) all showed a downward trend compared to the 2022 results. This can likely be attributed to the earlier testing date, with 2023 students having less time to develop reading skills before the test.

There was a dip in writing results among primary school males and females that mirrored this trend, as was expected. However, in a fascinating twist, writing scores actually improved for male and female students in Year 7 and Year 9. In fact, compared with all NAPLAN writing tests since the test was modified in 2011, the 2023 scores were the highest ever for Year 7 males and females and for Year 9 males, while Year 9 females recorded their second-highest scores. How is this possible? Could something have changed in the marking process? This is unlikely, since nothing of the sort has been flagged by ACARA. Did secondary school students find the 2023 writing prompt easier to address in the limited test time? This is somewhat plausible. Could secondary school students be feeling more positive about NAPLAN testing in March than in May? Without more information, it’s not possible to know what has driven this marked increase in secondary writing test scores. But it’s certainly odd that students with two fewer months of preparation would perform higher on a test that, for all intents and purposes, seems equivalent to all recent writing tests.

Despite the positive news for secondary students, it should be pointed out that Year 9 males are still performing at a level equivalent to Year 7 females, demonstrating a persistent gender gap that merits further investigation. Year 9 males performed more than two years of equivalent learning behind Year 9 females (i.e., 24.12 months – yikes!).

Spelling and Grammar: Heading Down

Like reading, spelling scores were down for males and females in all tested year levels. Again, this was expected given the shift to earlier testing.

Grammar and punctuation results mostly followed the same downward pattern, with Year 3, Year 5, and Year 9 males and females all achieving lower scores than the 2022 students. Strangely, grammar and punctuation scores for Year 7 students of each gender were higher than those of the Year 7 students who sat the test in 2022.

As a noteworthy point from the data, Year 7 females outperformed Year 9 males for the first time in any NAPLAN grammar and punctuation test (or any NAPLAN literacy test). This can be explained by the considerable (but expected) decline in Year 9 male scores, while Year 7 female performance was somehow largely consistent with recent years, even with the earlier testing.

Numeracy: A Glint of Improvement

In terms of numeracy, all primary school males and females somehow scored higher than their 2022 counterparts (except for Year 5 females, whose scores in 2023 were slightly down). The numeracy test and its marking procedures presumably haven’t changed, so it’s unclear why there would be such clear improvements. Year 3 males managed to achieve their highest mean score of any NAPLAN numeracy test to date. With two fewer months of class time! 🤷‍♂️

On the other hand, secondary school students, regardless of gender, scored lower than the 2022 students. But again, this was expected, so no alarm bells yet.

Final Thoughts: Beyond the Numbers

While the 2023 NAPLAN results might not be directly comparable to previous years due to the changed testing timeline, they offer valuable insights into the dynamics of education and student performance. The interplay of gender, year levels, and subject areas provides a rich tapestry of information that policymakers, educators, and researchers can draw from to tailor interventions and strategies.

It was kind of shocking to see that in specific areas, the earlier 2023 testing procedure resulted in higher scores (i.e., secondary writing and primary numeracy). That said, all students would clearly benefit from the additional two months of learning about reading, spelling, grammar, and punctuation.

The 2023 results highlight the importance of considering the broader context surrounding NAPLAN test scores. As we move forward with this whole new world of NAPLAN testing, complete with four shiny new proficiency standards that replace the previous bands, it will be as intriguing as ever to see the rise and fall of student results across the country. These broad pictures of student achievement would not be possible without NAPLAN testing.

Figuring out figurative language in high-scoring narratives

Recently, I started a new research project with four colleagues to investigate the writing choices made by primary and secondary school students who scored highest of all Queensland students on the three most recent NAPLAN writing tests. I have done this sort of research in the past but always focused on successful persuasive writing across the tested year levels (i.e., 3, 5, 7, and 9). For our new project, named NAPtime, we will investigate the narrative writing choices valued by NAPLAN markers for the first time. The Queensland Curriculum and Assessment Authority (caretakers of completed NAPLAN tests up here) granted us access to the 285 highest-scoring Queensland writing samples written for the 2019, 2021, and 2022 NAPLAN tests (i.e., roughly 20-25 samples per year level for the three years of the test). In the next couple of years, my colleagues and I will use a variety of linguistic and rhetorical frameworks to identify patterns in the students’ writing and communicate our findings to the education community.

My first exploration of the successful writing samples will focus on the students’ use of figurative language to entertain their readers. Figurative language choices are often referred to as figurative devices, rhetorical devices, literary devices or figures of speech, and are commonly associated with poetry and argumentation, but high-quality narratives are also overflowing with artful and playful uses of figurative language. In fact, this is often what makes the stories we read so engaging.

Figurative language has been the focus of research and teaching for (literally) thousands of years. The figurative language choices I’ll be looking for in the NAPLAN writing samples were first identified by Aristotle and other rhetoricians way back in Ancient Greece. Aristotle and the classical rhetoricians who followed him outlined the ins and outs of the five canons of rhetoric—Invention, Arrangement, Style, Memory, and Delivery—which included everything a speaker or writer would need to discover, organise, and communicate compelling ideas through spoken and written texts. Of most relevance to our NAPtime research project is the third canon, Style, which concerns how we put our ideas into words that we communicate to others. This is the part of classical rhetoric that dealt with figurative language.

Figurative language in the Australian Curriculum: English

It’s quite amazing to see just how much emphasis is given to figurative language in the Australian Curriculum: English. Even a cursory glance will show this is one of the most underrated aspects of English teaching. Unlike certain other aspects of English that are only dealt with in niche sub-strands of the curriculum, figurative language can be found across all three strands (i.e., Language, Literature, and Literacy), spread out across a full eight sub-strands! While figurative language is taught from Year 1 to Year 10, it becomes especially prominent in the secondary school years, where it’s mentioned directly in six content descriptions for each secondary year level (i.e., 7, 8, 9, and 10). In this sense, teaching students to interpret and use figurative language is likely a regular part of every secondary English teacher’s day job.

Despite the wide reach of figurative language, this aspect of English is, arguably, treated in a fairly disjointed manner in the Australian Curriculum: English. Figurative language pops up here, there, and everywhere. It is described as serving many varied functions in different types of texts, such as enhancing and building up layers of meaning; shaping how readers interpret and react to texts; influencing audience emotions, opinions, and preferences; evaluating phenomena; and conveying information and ideas. At times, it is described as a stylistic tool of poetry, songs, and chants; at other times it’s a persuasive tool of argumentation; at other times it’s a literary tool of storytelling. All these uses make figurative language feel a bit like sand slipping through your fingers; nothing really ties it together.

The Australian Curriculum: English refers to 14 figurative devices explicitly (i.e., metaphor, simile, personification, onomatopoeia, assonance, alliteration, hyperbole, idiom, allegory, metonymy, ellipses, puns, rhetorical questions, and irony). This might seem like a lot, but more than 200 figurative devices have been identified in the writing of William Shakespeare alone (Joseph, 1947)! It would be interesting to know how and why these 14 figurative devices have been named in the curriculum.

Figurative language in the NAPLAN writing tests

Another place educators come across figurative devices is in the NAPLAN writing marking guides. The persuasive writing version of the test includes a criterion named Persuasive devices, which involves “The use of a range of persuasive devices to enhance the writer’s position and persuade the reader” (ACARA, 2013, p. 6). In the glossary of the persuasive writing marking guide, nine figurative devices are mentioned: alliteration, simile, metaphor, personification, idiom, puns, irony, hyperbole, and rhetorical questions. The guide also includes some descriptions of the effects of other figurative devices (e.g., parallelism, anaphora, epistrophe) without mentioning the technical names (e.g., “Words or phrases at the beginning or end of successive clauses or statements” refers to anaphora and epistrophe).

The NAPLAN narrative writing marking guide (ACARA, 2010) drops the Persuasive devices criterion and replaces it with another named Character and setting, which involves “The portrayal and development of character” and “The development of a sense of place, time and atmosphere” (p. 4). Only metaphor and simile are mentioned in the glossary as part of key vocabulary choices, while ellipsis is mentioned as a key resource for building text cohesion.

What can we take from the emphasis on figurative language in these marking guides? It seems the designers of the NAPLAN writing test expect students to use figurative language in both versions, but only the persuasive version really sets markers up to identify the use of specific figurative devices. There is possibly an assumption here that figurative language is more important in persuasive writing than in narrative writing. When you add the Australian Curriculum’s substantial but disjointed emphasis on figurative language into the mix, it’s quite likely that some Australian teachers would feel unsure about which aspects of figurative language to teach and in which genres.

Our approach in the NAPtime research

Educators and curriculum designers in contemporary settings might get a better grip on figurative devices if we follow the lead of classical rhetoricians who divided them into two categories: schemes and tropes. Both can be described as fundamental to how we put together sentences in written or spoken texts.

Simply put, a scheme (from the Greek word schēma, meaning form or shape) involves changing the usual pattern or arrangement of words in a sentence. A well-known scheme is alliteration, which involves the repetition of initial phonemes in two or more adjacent words, such as when Professor McGonagall from Harry Potter described students as “behaving like a babbling, bumbling band of baboons!”

A trope (from the Greek word tropein, meaning to turn) involves changing the normal meaning of words in a sentence. A well-known trope is metaphor, which involves making a comparison between two different things that have something in common, such as when Mrs Dursley from Harry Potter is compared to a crane (i.e., a long-necked bird): “she spent so much of her time craning over garden fences, spying on the neighbours”.

Dividing the 14 figurative devices mentioned in the Australian Curriculum: English and the nine in the NAPLAN persuasive writing marking guide into schemes and tropes shows that these documents strongly favour tropes (i.e., nine tropes vs. three schemes in the curriculum and eight tropes vs. one scheme in the NAPLAN marking guide). A key interest of my research into high-scoring NAPLAN narratives will be to determine how the students used schemes and tropes to entertain their readers, and how well these key policy documents reflect the choices valued in the NAPLAN writing context.

I will pay close attention to the following 19 schemes and 17 tropes that are particularly useful in contemporary writing (Corbett & Connors, 1999). Clearly, this is more than double the number mentioned in the curriculum and NAPLAN, and some may not have been used much or at all by the high-scoring students. It’s also possible that some devices were only used in certain year levels, so there is potential for interesting findings here. If we discover that NAPLAN markers rewarded students for using figurative devices that do not even appear in the key policy documents guiding our teachers, there will be fascinating implications for the usefulness, equity, and ongoing enhancement of these documents.
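As a sense of how this tallying might look in practice, here is a hypothetical sketch, assuming each high-scoring sample has already been hand-coded for the devices it contains. The device labels and sample data are invented for illustration; the real analysis will involve far more careful linguistic coding.

```python
# Hypothetical sketch of the tallying step only, assuming each high-scoring
# sample has been hand-coded with the figurative devices it contains.
# Device names and sample data are invented for illustration.
from collections import Counter

SCHEMES = {"parallelism", "alliteration", "anaphora"}            # subset only
TROPES = {"metaphor", "simile", "personification", "hyperbole"}  # subset only

# (year level, devices identified in one sample)
coded_samples = [
    (3, ["simile", "alliteration"]),
    (5, ["metaphor", "personification", "anaphora"]),
    (7, ["metaphor", "hyperbole", "parallelism"]),
    (9, ["metaphor", "simile", "alliteration", "anaphora"]),
]

counts_by_year = {}
for year_level, devices in coded_samples:
    counts_by_year.setdefault(year_level, Counter()).update(devices)

for year_level, counts in sorted(counts_by_year.items()):
    scheme_uses = sum(n for device, n in counts.items() if device in SCHEMES)
    trope_uses = sum(n for device, n in counts.items() if device in TROPES)
    print(f"Year {year_level}: {scheme_uses} scheme uses, "
          f"{trope_uses} trope uses ({dict(counts)})")
```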

Without further ado, here is a table of the schemes and tropes that I will look for in my first NAPtime article, with pronunciations, definitions, and examples:

| Scheme | Definition | Example |
| --- | --- | --- |
| Parallelism | Refers to when words, word groups, or clauses in a sequence have a similar structure | He enjoyed studying English, history, and science. |
| Isocolon (ī-sō-cō’-lon) | A type of parallelism that occurs when parallel elements not only share a similar structure but also have the same length, such as the same number of words or even syllables | In this classroom, minds expand, ideas ignite, and knowledge flourishes. |
| Climax | Works together with parallelism; occurs when words, word groups, or clauses are arranged to build up in importance or intensity | By the end of the school year, students will be armed with skills, wisdom, and a burning desire to make their mark on the world. |
| Antithesis (an-tith’-ə-sis) | A type of parallelism that occurs when contrasting ideas are placed side by side | Despite the rules and routines, the class had wild bursts of creativity. They seemed to value both conformity and rebellion. |
| Anastrophe (ə-‘na-strə-fē) | When the usual word order of a clause or sentence is inverted | A place of endless possibilities, a school is. |
| Parenthesis (pə-ren’-thə-sis) | The insertion of a verbal unit that interrupts the normal flow of a sentence’s structure | A school—with students hurrying between classrooms and the sound of slamming lockers—is a vibrant and dynamic place. |
| Apposition | Placing two elements side by side, where the second element serves as an example or modification of the first | The teacher, a tireless advocate for learning, guides the students with dedication and passion. |
| Ellipsis | The intentional omission of a word or words that can be easily understood from the context | You can enter the Year 5 classroom down the corridor, and Year 6 up the stairs. |
| Asyndeton (a-sin’-də-ton) | The purposeful omission of conjunctions between connected clauses | Books, pencils, notebooks, a backpack filled to the brim—all essentials for a day of learning. |
| Polysyndeton (pol-ē-sin’-də-ton) | The purposeful use of many conjunctions | The young student struggled to carry her books and her pens and her laptop and her calculator and her highlighters to class. |
| Alliteration | The repeated use of the same sound at the start of several consecutive words | A boisterous banter of students blended with the rhythmic rattle of rolling backpacks. |
| Assonance | The repeated use of similar vowel sounds in stressed syllables of consecutive words, with different consonant sounds before and after them | The playful students stayed late to engage in debate. |
| Anaphora (ə-naf’-ə-rə) | The repeated use of the same word or words at the start of several consecutive clauses | In this class we pursue our dreams. In this class we discover our potential. In this class we become who we are meant to be. |
| Epistrophe (ə-pis’-trō-fē) | The repeated use of the same word or words at the ends of consecutive clauses | In the classroom, we learn. In the hallways, we learn. In the library and the gym, we learn. Everywhere in this school, we learn. |
| Epanalepsis (ə-pon-ə-lep’-sis) | The repeated use of a word or words at the end of a clause that was used at the beginning of the same clause | Learning to write is the most important part of learning. |
| Anadiplosis (an-ə-di-plō’-sis) | The repeated use of the last word of one clause at the beginning of the next clause | Education is the key to unlocking doors, and doors lead to endless possibilities for a life lived well. |
| Antimetabole (an-tē-mə-tab’-ō-lē) | The repeated use of words in successive clauses, but in reverse grammatical order | In this class you will not only learn to read, but you will read to learn. |
| Chiasmus (kī-əz’-mus) | When the grammatical structure in successive word groups or clauses is reversed | As teachers, we shape our students, but then our students shape us. |
| Polyptoton (pō-lip’-tə-tahn) | The repeated use of words that are derived from the same root word | The new learnings of the learners helped them learn most of all. |

| Trope | Definition | Example |
| --- | --- | --- |
| Metaphor | The comparison of two different things by implying a connection between them | Schools are fertile gardens where knowledge takes root and young minds can bloom. |
| Simile | The comparison of two different things by using ‘like’ or ‘as’ to make the comparison explicit | The children gathered around the teacher, like bees around a hive. |
| Synecdoche (si-nek’-də-kē) | When a part of something is used to represent the whole thing | Many hands helped make the school fair a success. |
| Metonymy (mə-tahn’-ə-mē) | The substitution of a word or word group with another that is closely associated with or suggestive of the intended meaning | The pen is mightier than the sword. |
| Pun: Antanaclasis (an-ta-nak’-la-sis) | The intentional use of one word in two or more different ways | If you never learn the content, you’ll never learn to be content. |
| Pun: Paronomasia (par-ə-nō-mā-zha) | The intentional use of words that sound similar but have different meanings | The teacher plainly explained how the plane’s crash was unplanned. |
| Pun: Syllepsis | The intentional use of a word in a way that modifies two or more other words, with each of those words understanding the original word differently | The teacher did not raise her voice or her hopes. |
| Anthimeria | One part of speech is substituted for another | The student papered the hallway with his artistic skills. |
| Periphrasis (pə-rif’-ə-sis) | The use of a descriptive word or word group instead of a proper noun, or the use of a proper noun to refer to a quality or characteristic associated with it | Sarah was crowned the Queen of Knowledge for her amazing academic results. |
| Personification | Giving human qualities or abilities to things that are not human | The library books whispered enticing stories, beckoning the students to embark on magical adventures. |
| Hyperbole (hī-pur’-bə-lē) | The intentional use of exaggerated terms to emphasise meaning | For maths we were forced to sit and work through a thousand complex equations. |
| Litotes (lī’-tə-tēz) | The intentional use of understated terms to minimise meaning | Jim’s performance in the science fair was not unimpressive. |
| Rhetorical question | Posing a question, not to receive an answer, but to express a point indirectly | Can you deny the importance of education in a child’s life? |
| Irony | The use of words in a way that means the opposite of their literal meaning | The 50-page maths assignment was every student’s idea of a fun-filled holiday. |
| Onomatopoeia | The use of a word that imitates the sounds it describes | Over the courtyard she clashed and clattered on the way to the classroom. |
| Oxymoron | The combination of two terms that are usually contradictory or opposite to each other | The silent cacophony of the empty classroom filled the air. |
| Paradox | Making a statement that seems contradictory but that holds some truth | The more you learn, the more you realise you don’t know. |

I look forward to letting you know what we find. My hypothesis is that figurative language plays a much larger role in high-scoring narratives than the narrative marking guide suggests. If you are a teacher, how do you currently teach students to understand and use figurative devices in their own writing? Do you think it’s important for narrative writing?

References

Australian Curriculum, Assessment and Reporting Authority. (2010). National Assessment Program – Literacy and Numeracy: Narrative writing marking guide. https://www.nap.edu.au/_resources/2010_marking_guide.pdf

Australian Curriculum, Assessment and Reporting Authority. (2013). National Assessment Program – Literacy and Numeracy: Persuasive writing marking guide. https://www.nap.edu.au/_resources/amended_2013_persuasive_writing_marking_guide_-with_cover.pdf

Corbett, E. P. J., & Connors, R. J. (1999). Classical rhetoric for the modern student (4th ed.). Oxford University Press. 

Joseph, M. (1947). Shakespeare’s use of the arts of language. Columbia University Press.

3 simple ways to fix NAPLAN writing without scrapping the test


Many education stakeholders and social commentators dislike the NAPLAN writing test. They think it (and the whole suite of annual tests) should be scrapped. NAPLAN undeniably influences classroom practices in a large number of Australian schools, and it’s also raised stress levels for at least some groups of students and teachers (Gannon, 2019; Hardy & Lewis, 2018; Ryan et al., 2021). These are valid concerns.

But as Australia’s only large-scale standardised assessment of writing, the test has the potential to provide unique and useful insight into the writing development, strengths, and weaknesses of Australia’s primary and secondary school populations (here’s an example). Added to this, the political value of NAPLAN, and the immense time, energy, and money that’s been poured into the tests since 2008 make it unlikely that the tests will be scrapped anytime soon.

Instead of outright scrapping the tests, or keeping them exactly as they are (warts and all), a third option is to make sure the tests are designed and implemented as well as possible to minimise concerns raised since their introduction in 2008. I’ve given the design of the NAPLAN writing test a great deal of thought over the past decade; I’ve even written a PhD about it (sad but true). In this post, I offer 3 simple fixes ACARA can make to improve the writing test while simultaneously addressing concerns expressed by critics.

1. Fix how the NAPLAN writing test assesses different genres

What’s the problem? At present, the NAPLAN writing test requires students to compose either a narrative or a persuasive text each year, giving them 40 minutes to do so.

Why is this a problem? The singular focus on narrative or persuasive writing is potentially problematic for a test designed to provide valid and reliable comparisons between tests over time. Those who have taught narrative and persuasive writing in classrooms will know these genres often require very different linguistic and structural choices to achieve different social purposes. It’s OK to compare them for some criteria, like spelling, but less so for genre-specific criteria. ACARA know this too, because the marking criteria and guide for the narrative version of the test (ACARA, 2010) are not the same as those for the persuasive writing version (ACARA, 2013). Yet the results are compared as though all students completed the same writing task each year. There is a risk that randomly shifting between these distinct genres leads us to compare apples and oranges.

Also related to genre is the omission of informative texts (e.g., procedures, reports, explanations, etc.) in NAPLAN testing. Approaches to writing instruction like genre-based pedagogy, The Writing Revolution, and SRSD emphasise the importance of writing to inform. This is warranted by the fact that personal, professional, and social success in the adult world relies on being able to clearly inform and explain things to others. It’s not ideal that the significant time spent developing students’ informative writing skills across the school years is not currently assessed as part of NAPLAN.

What’s the solution? A better approach would be to replicate how the National Assessment of Educational Progress (NAEP) in the US deals with genres.

How would this improve NAPLAN? In the NAEP, students write two short texts per test instead of one, with these texts potentially requiring students to persuade (i.e., persuasive), explain (i.e., informative), or convey real or imagined experience (i.e., narrative). The NAEP is administered in Years 4, 8, and 12. Matching the typical development of genre knowledge (Christie & Derewianka, 2008), the Year 4 students are most likely to be asked to write narrative and informative texts, while those in Years 8 and 12 are most likely to write informative and persuasive texts. But students in all tested year levels can still be asked to write to persuade, explain, or convey experience, so knowledge about all the genres is developed in classrooms.

Why couldn’t we do something similar with NAPLAN? Including informative texts in our writing test would incentivise the teaching of a fuller suite of genres in the lead up to NAPLAN each year. Not including informative texts in NAPLAN is a little baffling since a large proportion of student writing in classrooms is informative.

2. Fix the NAPLAN writing prompt design

What’s the problem? At the start of every NAPLAN writing test, students receive a basic prompt that provides general information about the topic. Here’s an example from the 2014 test, which required a persuasive response:

[Image: the 2014 NAPLAN persuasive writing prompt]

As you can see, some useful information is provided about how children can structure their texts (e.g., Start with an introduction) and the sorts of writing choices they might like to make (e.g., choose your words carefully to convince a reader of your opinion). But how useful is this to a student who doesn’t have a lot of background knowledge about laws and rules?

My younger sister was in Year 5 in 2014 and she completed this writing test. She had previously been on two international school trips, and drew on these (and other) experiences to write an argument about raising Australia’s legal drinking age to 21, as it is in the US, and the many ways this would positively impact our society. Perhaps unsurprisingly, my sister achieved at the Year 9 standard for this persuasive text.

Why is this a problem? Compare my sister’s experience with a child from a lower socioeconomic area who had never been out of their local area, let alone Australia. It’s more challenging to suggest how rules or laws in our context should change if you don’t know about how these rules or laws differ in other places. The information provided in the prompt is far less useful if the child does not have adequate background knowledge about the actual topic.

Keeping the topic/prompt secret until the test is meant to make the test fairer for all students, yet differences in children’s life experiences already make a prompt like this one work better for some students than others. As an aside, in 2014 so many primary school students couldn’t answer this prompt that ACARA decided to write separate primary and secondary school prompts from 2015. This changed the test conditions in a considerable way, which might make it harder to reliably compare pre- and post-2014 NAPLAN writing tests, but I digress.

What’s the solution? A fairer approach, particularly for a prompt requiring a persuasive writing response, would be to provide students with select information and evidence for both sides of an issue and give them time to read through these resources. The students could then integrate evidence and expert opinion from their chosen side into their arguments (this is a fascinating process known as discourse synthesis, which I’d like to write about another time). Students could still freely argue whatever they liked about the issue at stake, but this would mean Johnny who never went out of his local area would at least have some information on which to base his ideas. Plus, we could potentially make the whole experience more efficient by making these supporting materials/evidence the same as those used to test students’ reading skills in the NAPLAN reading test.

How would this improve NAPLAN? Supporting information for the persuasive writing test (and the informative writing test if we can add that family of genres) would not need to be long: even just a paragraph of evidence on both sides would offer plenty for students to synthesise into their texts. We know that the test conditions and criteria influence what’s taught in classrooms, so there’s an opportunity to promote writing practices that will set students up for success in upper secondary and (for those interested) higher education contexts.

At the moment, students rarely include any evidence in their NAPLAN writing, even high-scoring students. Introducing some supporting information might help our students to get away from forming arguments based on their gut reactions (the kinds of arguments we encounter on social media).

3. Fix how the writing test positions students to address audiences

What’s the problem? Since 2008, NAPLAN writing prompts have had nothing to say about audience. Nothing in the prompt wording positions students to consider or articulate exactly who their audience is. Despite this, students’ capacity to orient, engage, and affect (for narrative) or persuade (for persuasive) the audience is one of the marking criteria. Put another way, we currently assess students’ ability to address the needs of an audience without the marker (or perhaps even the student) explicitly knowing who that audience is.

Why is this a problem? The lack of a specified audience leads many students to just start writing their narratives or persuasive texts without a clear sense of who will (hypothetically speaking) read their work. This isn’t ideal because the writing choices that make texts entertaining or persuasive are dependent on the audience. This has been acknowledged as a key aspect of writing since at least Aristotle way back in Ancient Greece.

Imagine you have to write a narrative on the topic of climate change. Knowing who you are writing for will influence how you write the text. Is the audience younger or older? Are they male or female? Do they like action, romance, mystery, drama, sports-related stories, funny stories, sad stories, or some other kind of story? What if they have a wide and deep vocabulary or a narrow and shallow vocabulary? There are many other factors you could list here, and all of these would point to the fact that the linguistic and structural choices we make when writing a given genre are influenced by the audience. The current design of NAPLAN writing prompts offers no guidance on what to do with the audience.

Others have noticed that this is a problem. In a recent report about student performance on the NAPLAN writing test, the Australian Education Research Organisation (AERO, 2022) described the Audience criterion as one of five that should be prioritised in classroom writing instruction. They argued: “To be able to write to a specific audience needs explicit teaching through modelling, and an understanding of what type of language is most appropriate for the audience” (p. 70). How can the marker know if a student’s writing choices are appropriate if an audience isn’t defined?

What’s the solution? A simple fix would be to give students information about the audience they’re entertaining, persuading, and/or informing. This is, again, how the NAEP in the US handles things, requiring that the “writing task specify or clearly imply an audience” (National Assessment Governing Board, 2010, p. vi). Audiences for the NAEP are specified by the context of the writing task and are age- and grade-appropriate, familiar to students, and consistent with the purpose identified in the writing task (e.g., to entertain) (see here for more). Another fix would be to ask students to select their own audience and record this somewhere above their response to the prompt.

How would this improve NAPLAN? Having more clarity around the intended audience of a written piece would position students to tailor their writing to suit specific reader needs. This would allow markers to make more accurate judgements about a child’s ability to orient the audience. If this isn’t fixed, markers will continue having to guess at who the student was intending to entertain or persuade.


Righting the writing test

Would these changes make the NAPLAN writing test 100% perfect? Well, no. There would still be questions about the weighting of certain criteria, the benefit/cost ratio of publicly available school results through MySchool, and other perceived issues (if anyone out there finds this interesting, I’d like to write about 3 more possible fixes in a future post). But the simple fixes outlined here would address several concerns that have plagued the writing test since 2008. This would influence the teaching of writing in positive ways and make for a more reliable and meaningful national test. The NAPLAN writing test isn’t going anywhere, so let’s act on what we’ve learnt from over a decade of testing (and from writing tests in other countries) to make it the best writing test it can be.

Seems pretty persuasive to me.

References

Australian Curriculum, Assessment and Reporting Authority. (2010). National Assessment Program – Literacy and Numeracy – Writing: Narrative marking guide. https://www.nap.edu.au/_resources/2010_Marking_Guide.pdf

Australian Curriculum, Assessment and Reporting Authority. (2013). NAPLAN persuasive writing marking guide. https://www.nap.edu.au/resources/Amended_2013_Persuasive_Writing_Marking_Guide-With_cover.pdf

Australian Education Research Organisation. (2022). Writing development: What does a decade of NAPLAN data reveal? https://www.edresearch.edu.au/resources/writing-development-what-does-decade-naplan-data-reveal/writing-development-what-does-decade-naplan-data-reveal-full-summary

Christie, F., & Derewianka, B. (2008). School discourse. Continuum Publishing Group.

Gannon, S. (2019). Teaching writing in the NAPLAN era: The experiences of secondary English teachers. English in Australia, 54(2), 43-56.

Hardy, I., & Lewis, S. (2018). Visibility, invisibility, and visualisation: The danger of school performance data. Pedagogy, Culture & Society, 26(2), 233-248. https://www.doi.org/10.1080/14681366.2017.1380073

Ryan, M., Khosronejad, M., Barton, G., Kervin, L., & Myhill, D. (2021). A reflexive approach to teaching writing: Enablements and constraints in primary school classrooms. Written Communication. https://doi.org/10.1177/07410883211005558

Gender gaps in literacy and numeracy

This post provides the key points of a journal article I recently had published in The Australian Educational Researcher, co-written with Belinda Hopwood (UTAS), Vesife Hatisaru (ECU), and David Hicks (UTAS). You can read the whole article here.

[Image: Girls better at literacy, boys better at numeracy?]

In recent years, there has been increased attention on gender gaps in literacy and numeracy achievement. This is due in part to international assessments of students’ reading achievement such as PIRLS and PISA (Lynn & Mikk, 2009) that have found gender differences in reading are universal, with girls from all participating countries significantly and meaningfully outperforming boys. Previous research has shown that girls score higher on reading tests and are more likely to be in advanced reading groups at school (Hek et al., 2019), while those who fall below the minimum standards for reading are more likely to be boys (Reilly et al., 2019). Large-scale assessments of numeracy have seen similarly consistent results, though with boys outperforming girls. So, what’s the situation in Australia?

Recently, three colleagues and I examined what 13 years of NAPLAN reading and numeracy testing might show about boys’ and girls’ performance in the Australian context. What has been lacking from international research is a clear picture of how reading and numeracy gender gaps widen or narrow across the primary and secondary school years. To provide this picture, we drew on publicly available NAPLAN results from the NAPLAN website (ACARA, 2021) and the Grattan Institute’s (Goss & Sonnemann, 2018) equivalent year level technique.
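For those curious about the mechanics, the equivalent year level idea can be pictured as mapping a scale score onto a reference curve of typical scores at each year of schooling. The sketch below uses invented reference points and student means purely to show the conversion; it is a simplification, not Grattan's actual calibration.

```python
# Simplified sketch of an "equivalent year level" conversion. The reference
# curve and the Year 9 means below are invented; Grattan's actual technique
# fits its own calibrated curve to NAPLAN scale scores.
import numpy as np

reference_years = np.array([3.0, 5.0, 7.0, 9.0])            # years of schooling
reference_scores = np.array([420.0, 490.0, 540.0, 580.0])   # typical scale scores

def equivalent_year_level(scale_score):
    """Interpolate a scale score onto the reference curve."""
    return float(np.interp(scale_score, reference_scores, reference_years))

girls_year9_mean, boys_year9_mean = 552.0, 534.0            # hypothetical means
gap_in_years = (equivalent_year_level(girls_year9_mean)
                - equivalent_year_level(boys_year9_mean))
print(f"Gap: {gap_in_years:.2f} equivalent years (~{gap_in_years * 12:.0f} months)")
```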

Findings: Gender gap in reading

We looked at the average reading performance of boys and girls across the four tested year levels of NAPLAN (i.e., Years 3, 5, 7, and 9) between 2008 and 2021. Girls improved consistently from Year 3 to Year 9, with approximately two years of progress made between each test. Boys progressed to a similar extent between Years 3 and 5, yet they fell behind the girls at a faster rate between Year 5 and Year 7. Specifically, boys made 1.95 years of progress between Year 3 and Year 5 and 1.92 years between Year 7 and Year 9, but only managed 1.73 years of progress between Year 5 and Year 7 (i.e., in the transition between primary and secondary school).


The average gap between boys and girls was wider for each increase in year level, with Year 3 males around 4 months of equivalent learning behind Year 3 females, Year 5 males 5 months behind, Year 7 males 7 months behind, and Year 9 males around 10 months of learning behind Year 9 females. While boys made more progress between Year 7 and Year 9 than between Year 5 and Year 7, this was also when girls made their most progress. In short, boys seem to keep up with girls reasonably well in the primary school years, but more boys struggle with reading as they transition into secondary school.

Findings: Gender gap in numeracy

The overall picture for numeracy is similar to reading, though with boys outperforming girls and the gender gap increasing across each tested year level. Boys made approximately two years of progress between each numeracy test, while girls consistently made just over 1.8 years of progress, so the gap widened at a steady rate: both groups progressed consistently between tests, but the higher rate of progress for boys produced a neatly widening gender gap over time.

What about the writing gender gap?

In 2020, I conducted a similar study that looked into the NAPLAN writing results, finding that on average, boys performed around 8 months of equivalent learning behind girls in Year 3, a full year of learning behind in Year 5, 20 months behind in Year 7, and a little over two years of learning behind in Year 9. Boys fell further behind girls with writing at every tested year level, yet the rate at which girls outperformed boys was greatest between Years 5 and 7. Our study into reading and numeracy has found that similar gaps exist in these domains too, though not to the same extent as writing. For ease of comparison, the following graph shows the extent and development of the gender gaps in numeracy, reading, and writing.

[Figure: Gender gaps in numeracy, reading, and writing (2008–2021)]

Why do more boys struggle with literacy as they transition into secondary school? For many (most?) Australian students, Year 7 is when they move physically from a primary school campus to a secondary school campus. This physical transition has been shown to impact student reading achievement, particularly for boys (Hopwood et al., 2017). For some students, reading achievement stalls in this transition or, in serious cases, declines to levels below that of their primary school years (Hanewald, 2013). Some students entering secondary school have failed to acquire in primary school the basic reading skills required for secondary school learning (Lonsdale & McCurry, 2004), stifling their future reading development (Culican, 2005). The secondary school curriculum is more demanding, and students are expected to be independent readers, able to decode and comprehend a range of complex texts (Duke et al., 2011; Hay, 2014). As argued by Heller and Greenleaf (2007), schools cannot settle for a modest level of reading instruction, given the importance of reading for education, work, and civic engagement. We need to know more about why this stage of schooling is difficult for many boys and how they can be better supported.

The analysis of the numeracy gender gap was quite different from both the reading and writing results. While previous international studies have suggested that the gender gap in numeracy only becomes apparent in secondary school (Heyman & Legare, 2004), this study showed that average scores for boys were higher than those of girls on every NAPLAN numeracy test, though to a lesser extent than the other domains. The widest numeracy gender gap of a little over 6 months of learning in Year 9 was smaller than the smallest writing gender gap of 8 months in Year 3.

Implications of gender gaps in literacy and numeracy

The findings suggest links between reading and writing development, in that more boys struggle with both aspects of literacy in the transition from primary to secondary school. While other researchers have looked at the numeracy gap over time using NAPLAN scale scores (e.g., Leder & Forgasz, 2018), by using the equivalent year level values, we’ve been able to show how the gender gap widens gradually from roughly 2 months of learning in Year 3 to 6 months of learning in Year 9. While this supports the general argument that, on average, boys outperform girls in numeracy and girls outperform boys in literacy tests, it also shows how the gaps are not equal.

References

Australian Curriculum, Assessment and Reporting Authority. (2021). NAPLAN national report for 2021. https://bit.ly/3q6NaC4

Culican, S. J. (2005). Learning to read: Reading to learn – A middle years literacy intervention research project. Final Report 2003–4. Catholic Education Office.

Goss, P., & Sonnemann, J. (2018). Measuring student progress: A state-by-state report card. https://bit.ly/2UVNxy5

Hanewald, R. (2013). Transition between primary and secondary school: Why it is important and how it can be supported. Australian Journal of Teacher Education, 38(1), 62–74.

Hek, M., Buchman, C., & Kraaykamp, G. (2019). Educational systems and gender differences in reading: A comparative multilevel analysis. European Sociological Review, 35(2), 169-186.

Heller, R. & Greenleaf, C. (2007). Literacy instruction in the content areas: Getting to the core of middle and high school improvement. Alliance for Excellent Education.

Heyman, G. D., & Legare, C. H. (2004). Children’s beliefs about gender differences in the academic and social domains. Sex Roles, 50(3/4), 227-236. https://doi.org/10.1023/B:SERS.0000015554.12336.30

Hopwood, B., Hay, I., & Dyment, J. (2017). Students’ reading achievement during the transition from primary to secondary school. Australian Journal of Language and Literacy, 40(1), 46-58.

Leder, G. C., & Forgasz, H. (2018). Measuring who counts: Gender and mathematics assessment. ZDM, 50, 687–697. https://doi.org/10.1007/s11858-018-0939-z

Lonsdale, M. & McCurry, D. (2004). Literacy in the new millennium. Australian Government, Department of Education, Science and Training.

Lynn, R., & Mikk, J. (2009). Sex differences in reading achievement. Trames, 13, 3-13.

Reilly, D., Neuman, D., & Andrews, G. (2019). Gender differences in reading and writing achievement: Evidence from the National Assessment of Educational Progress (NAEP). American Psychologist, 74(4), 445-458.

New teacher suitability: Walking the LANTITE rope

I recently read the Next Steps: Report of the Quality of Teacher Education Review by Expert Panel members Lisa Paul, Bill Louden, Malcolm Elliot, and Derek Scott, which focuses on a number of key issues facing ITE providers, pre-service teachers, and the teaching profession. Of the 17 recommendations made by the Expert Panel, the 11th involved modifying requirements for completing the Literacy and Numeracy Test for Initial Teacher Education (LANTITE). If these modifications are made, they have the potential to strongly influence whether certain cohorts of pre-service teachers complete their degrees and join the teaching profession.

What is LANTITE?

In case you haven’t heard of the LANTITE, it is a test used to assess the personal literacy and numeracy skills of pre-service teachers. In terms of difficulty, the LANTITE is designed at approximately a Year 9 standard. The Australian Government mandated in 2016 that all undergraduate (e.g., Bachelor of Education) and postgraduate (e.g., Master of Teaching) pre-service teachers must pass this test before graduation to meet their degree requirements.

From apple isle to big banana

In 2014, I started lecturing at the University of Tasmania (UTAS). In November 2021, I took up a new position at the University of Queensland (UQ). In the transition between teacher education providers, I have experienced the expected lumps and bumps of changing software platforms, learning new policies and practices (e.g., forms of assessment, modes of delivery), and shifting from a hybrid mode of delivery at UTAS to a more traditional, face-to-face approach at UQ. But something that caught me slightly off guard was the difference in pre-service teacher cohorts.

Starting a teaching degree in Australia

At UTAS, and many other Australian universities, there are several ways for prospective students to begin teaching degrees. The ATAR pathway, which requires students to score highly on a range of pre-tertiary subjects in senior secondary school (Year 11 and 12), is the preferred route for education providers. But with many people who did not achieve highly enough at school to meet ATAR requirements for teaching still keen to enter the teaching profession, universities have a range of alternative entry pathways that render ATAR scores preferable but unnecessary. For example, the UTAS webpage that explains entry requirements into their Bachelor of Education (Primary) degree states: “Applicants without senior secondary, tertiary or VET/TAFE study can complete a personal competency statement” and “applicants may be eligible for an offer if they have relevant work and/or life experiences which demonstrate a capacity to succeed in this course” (see here). In my four years as a UTAS pre-service teacher, three years as a PhD student, and eight years as an academic, it was commonplace to work with pre-service teachers who did not meet the ATAR requirements to enter a teaching degree.

By comparison, when I started working at UQ, I was shocked to learn about the small sizes of their pre-service teacher cohorts. I’ve taught up to 450 students in a single course at UTAS, but at UQ, it’s more like 45! When I asked about this, a UQ colleague explained that UQ does not really offer alternative entry pathways. In other words, the only pre-service teachers at UQ are those whose ATAR scores were approximately 75 or higher. After doing a little digging, I found that UQ does have an alternative entry pathway based on prior life experience, but of the 78 students admitted to the Bachelor of Education (Primary) degree in 2021, fewer than 5 entered without an ATAR. There are other ITE providers in Queensland that, like UTAS, have much larger student cohorts; I’m not sure how their entry requirements work, but my suspicion is that they may not be as strict as UQ.

Approaches to LANTITE testing

Other than entry requirements, UTAS and UQ have taken different approaches when it comes to the LANTITE test. At UTAS, Bachelor of Education students complete the LANTITE test in their fourth year, before commencing their final school placement. Considerable support is provided to upskill those who may have been accepted into the degree without adequate literacy and numeracy skills. All pre-service teachers complete an Academic Literacies course in their first semester that prepares them for the demands of academic writing and gives them practice at internal literacy and numeracy tests that replicate the LANTITE. Those who do not pass these practice tests are identified in the first year of their degree and supported to develop their skills well before they sit the actual LANTITE test in fourth year. In my experience, this approach has worked well for UTAS and its pre-service teachers.

At UQ, while pre-service teachers are also officially required to pass LANTITE before they graduate, it is commonplace for them to do so at the very beginning of their degree. It is treated more like an entry requirement rather than something students will work towards in their time at university. School of Education staff at UQ support pre-service teachers who do not pass the LANTITE test. It’s quite likely that, in the near future, I will be asked to offer such support to those who struggle with the literacy test. But overall, it seems to be the norm at UQ that pre-service teachers pass the LANTITE without issue as they commence their teaching degrees. Remember, almost all of these students scored a comparatively high ATAR score, so it might be expected that they would pass literacy and numeracy tests aimed at a Year 9 standard.

How this relates to the ITE review

This brings me, then, to the 11th recommendation of the Expert Panel on the ITE review. They have argued that the LANTITE test should be made available before students commence ITE degrees and that it should be passed in the first year of study. The first part of this recommendation was actually already agreed to by education ministers in 2020, with students set to be able to complete LANTITE before commencing their teaching degrees from 2023. In terms of requiring students to pass the LANTITE test before the end of their first year of study, the Expert Panel recommended this change to help students identify whether a career in teaching is likely to be suitable.

But hang on a minute – remember I mentioned that many of the pre-service teachers attending other universities around Australia did not achieve high ATAR scores at school? Passing the LANTITE test on entry may be a tough ask for a lot of these people. It’s not to say that they wouldn’t make fantastic teachers at the end of their degrees, but it seems that having considerable time and support at university can help them to develop the literacy and numeracy skills required to pass the LANTITE.

The Expert Panel acknowledged this might be a problem for pre-service teachers who enter their degrees without strong literacy and numeracy skills. The review cites the Australian Capital Territory Directorate, which argued that requiring LANTITE to be passed by the end of the first year of a teaching degree “could create a barrier to achieving diversity in the teaching workforce” (p. 59).

So, to support students described by the Expert Panel as culturally and linguistically diverse, the Panel suggests ITE providers should develop bridging courses in literacy and numeracy that students might be funnelled into if they don’t pass the test on entry. To me though, failing a test like LANTITE at the start of a degree, with a mountain still left to climb, is quite different from failing it a couple of months from graduation, when the mountain is already climbed. I think this change has the potential to prevent a lot of pre-service teachers from moving beyond the first year, since they might deem that their lack of skills makes them unsuitable for the profession.

The bottom line

For universities like UQ, where pre-service teachers are already expected to complete the LANTITE test in their first year of study, this particular change wouldn’t make a lot of difference. But it would likely be more problematic for ITE providers that accept much larger numbers of students into their teaching degrees, many of whom gain entry through alternative pathways without high ATAR scores.

I don’t think it’s unfair to expect that those hoping to join the teaching profession in the near future should have strong foundational literacy and numeracy skills, and the Expert Panel is betting that requiring them to pass this test in their first year would make this more likely. To me, the introduction of LANTITE in 2016 and its potential transformation into an entry/early requirement is one of many examples of significant change in the ITE space in Australia. It’s perhaps not the most newsworthy recommendation in a very interesting review, but this change will almost certainly impact who will and won’t become a teacher.

Does The Writing Revolution (TWR) work?

Will it fly?

In 2017, Judith Hochman and Natalie Wexler published The Writing Revolution (TWR): a book outlining a new way of thinking about and teaching writing. A key feature that sets TWR apart from other approaches is its suggestion that school students should only focus on sentence-level writing until this is mastered (i.e., the purposes and structures of written genres should only be added after a lot of work on sentences).

This is a somewhat controversial idea if you believe that the sentences we write are always influenced by what and why we’re writing. It also introduces the risk that children will spend much of their primary schooling (and even their secondary schooling, depending on when they start) repeating the same set of basic sentence tasks in every subject. But in taking a developmental approach, Hochman and Wexler argue that learning to write is challenging for young learners, and that focusing solely on sentences in the beginning greatly reduces the cognitive load. They say you can’t expect a child to write a strong paragraph, let alone a strong text, until they can write strong sentences. A brief document has been published on the TWR website outlining the theoretical ideas that underpin the approach, which you can read about here.

As I mentioned in my last post on TWR, there haven’t been any research studies or reports to verify if teaching the TWR way enables or constrains writing development… until now.

A reader named ‘Rebecca A’ left a comment on that post to say she’d found a report by an independent research and evaluation firm (Metis Associates) into the efficacy of a TWR trial in New York. The firm partnered with TWR in 2017 and spent some years evaluating how it worked with 16 NYC partner schools and their teachers. Partner schools were given curriculum resources, professional development sessions in TWR, and on-site and off-site coaching by TWR staff.

Evaluating TWR

Metis Associates were interested in TWR writing assessment outcomes, outcomes from external standardised writing assessments, and student attendance data. They compared the writing outcomes of students at partner schools with the outcomes of children at other schools. Teacher attitudes were also captured in end-of-year surveys.

This report did not go through a rigorous, peer-reviewed process, but if you want to know whether TWR works, it’s probably the best evidence currently out there. Also, keep in mind that the partner schools were very well supported by the TWR team with resourcing, PD, and ongoing coaching. In that sense, you might consider this a report of TWR under ideal circumstances.

If you work at a school using TWR or if you’re interested in the approach, I’d recommend reading the full report here. I will summarise the key findings of the report in the rest of this blog post.

Key finding 1: Teacher attitudes

Teachers at partner schools reportedly found the TWR training useful for their teaching and got the most value from the online TWR resource library. School leaders liked being able to reach out to the TWR team for support if necessary. Some teachers wanted more independence from the strict sequence and focus of TWR activities. Most, though, found that the approach had helped them to teach writing more effectively.

Key finding 2: Impact of TWR on student writing outcomes

But what about the development of students’ writing skills? TWR seems to have made a positive difference at the partner schools. TWR instruction helped students in each grade to advance somewhat beyond the usual levels of achievement. It’s not possible to say much more about this since the presentation of results is quite selective and we only see how the partner schools compared with non-partner schools for certain statistics, like graduation rates and grade promotion rates, which are likely influenced by all sorts of factors. The one writing assessment statistic that does include comparison schools is for the 2019 Regents assessment for students in Years 10, 11, and 12. In this case, those at TWR schools did better in Year 10, results between TWR and comparison schools were similar in Year 11, while comparison schools did better in Year 12. So, a mixed result. Being behind other schools is not really an issue if everyone is doing well, but it’s not immediately clear from this report how these results compare with grade-level expectations or previous results at the same schools.

Figure 4 from Ricciardi et al.’s (2020) evaluation of TWR in partner schools

Something that might explain the mixed outcome for senior secondary students is the tendency for teachers at partner schools to favour basic sentence-level strategies over paragraph or whole text/genre strategies in their teaching. Partner schools taught TWR from Year 3 through to Year 12, and 81% of teachers reported teaching the “because, but, so” strategy regularly (i.e., more than twice per week). By comparison, evidence-based strategies like sentence combining were far less commonly taught (i.e., regularly taught by just 22% of teachers). This suggests that schools using TWR need to be systematic and intentional about the strategies they teach, and to ensure they aren’t spending longer than needed on basic sentence-level activities, so students can get to the most valuable of what TWR offers, which I would argue is the single and multiple paragraph outlines and the genre work.

When only looking at partner school outcomes, the picture looks positive. The report shows percentages of students performing at Beginning, Developing, Proficient, Skilled, and Exceptional levels at the beginning and end of the year. At each partner school, percentages are all heading in the right direction with many more proficient and skilled writers at the end of the evaluation.

Conclusion

To summarise, in offering select outcomes and comparisons only, and in using metrics that aren’t entirely clear, the report highlights the need for rigorous, peer-reviewed studies to better understand how TWR works for different learners and teachers in different contexts. Despite its limitations, the report points to positive outcomes for the new approach to teaching writing. This is good news for the schools out there that have jumped on board the TWR train.

It also suggests that careful attention should be paid to the specific TWR strategies that dominate classroom instruction if students are to get the most out of it. If you are using the TWR approach, my advice would be not to spend a disproportionate amount of time on basic sentence work from the middle primary years, since well-supported approaches like SRSD and genre pedagogy have shown students can (and should?) be writing simple texts that serve different purposes from a young age.

I remain greatly intrigued by TWR. It turns the writing instruction game on its head and has made me question whether other approaches expect too much from beginning writers. Its approach seems to line up nicely with cognitive load theory, in gradually building the complexity and expectation as learners are prepared for it. There’s a lot at stake though if this specific combination of strategies doesn’t actually prepare students for the considerable challenge of genre writing in the upper primary and secondary school years. You could follow its strategies diligently across the school years but inadvertently limit your students’ writing development (in time, more research will tell us if this is the case).

I realise it’s anecdotal, but my 7-year-old son (just finished Year 1) and I have been talking about argumentative/persuasive writing at home for the last few weeks, and the discussions we’ve had and the writing he’s done as a result have been incredibly satisfying for both of us. To think that he should be limited to basic sentence writing, and not think about and address different purposes of writing (like persuading others about matters of personal significance) for years into his primary schooling, wouldn’t sit well with me after seeing what he’s capable of with encouragement and basic support grounded in a firm knowledge of language and text structures.

It’s also possible to see how students who struggle badly with writing could benefit from practice with basic sentence writing before much else. It was in a context filled with struggling writers that TWR was first conceived, and it is perhaps in such contexts that it is most useful.

I’m looking forward to additional research being conducted about TWR. If you work at a school using the approach and are able to comment on its usefulness in your context, that might be helpful for others thinking about giving it a go.

References

Hochman, J. C., & Wexler, N. (2017). The writing revolution: A guide to advancing thinking through writing in all subjects and grades. Jossey-Bass.

Ricciardi, L., Zacharia, J., & Harnett, S. (2020). Evaluation of The Writing Revolution: Year 2 report. Metis Associates. https://www.guidestar.org/ViewEdoc.aspx?eDocId=1956692

NAPLAN 2021: Making sense of the reading, numeracy, and writing results

The full NAPLAN results for 2021 were released by ACARA today. There were concerns that student performance would be negatively impacted by COVID, but my analysis of gender differences suggests there is a LOT to be optimistic about, particularly for primary school leaders, teachers, students, and parents.

(NOTE: To calculate months of equivalent learning, I used the Grattan Institute’s Equivalent Year Levels approach, which you can read about here)
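(For readers curious about the arithmetic, here is a minimal sketch, in Python, of how a scale-score gap might be converted into months of equivalent learning. The anchor scores and the linear interpolation below are my own illustrative assumptions, not the Grattan Institute’s actual mapping, which is non-linear and should be used for any real analysis.)

```python
# Illustrative sketch only: converting a NAPLAN scale-score gap into
# "months of equivalent learning". The anchor scores below are
# hypothetical, and the linear interpolation is a simplification of the
# Grattan Institute's (non-linear) Equivalent Year Levels approach.

# Hypothetical mean reading scale scores at two adjacent tested year levels
YEAR_3_MEAN = 438
YEAR_5_MEAN = 509

def months_of_learning(score_gap: float, lower_mean: float, upper_mean: float) -> float:
    """Convert a scale-score gap into months, assuming linear growth
    across the two school years (24 months) between tested year levels."""
    growth_per_month = (upper_mean - lower_mean) / 24
    return score_gap / growth_per_month

# Example: a 15-point reading gap between Year 3 girls and boys
gap_in_months = months_of_learning(15, YEAR_3_MEAN, YEAR_5_MEAN)
print(f"{gap_in_months:.2f} months of equivalent learning")  # ~5.07 months
```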

YEAR 3: For reading, Year 3 boys and girls performed better than on any previous NAPLAN test. The gender gap was also the widest ever at 5.16 months of equivalent learning in favour of girls. For numeracy, the Year 3 gender gap was the widest of any test to date in favour of boys at 2.52 months. For writing, boys and girls performed better than on any previous NAPLAN test, and the gender gap was the same as the previous test at 7.2 months in favour of girls.

YEAR 5: For reading, boys equalled their best performance on any test and girls had their best ever. The gender gap was the largest ever for reading at 5.76 months in favour of girls. For numeracy, boys equalled their best performance while girls performed similarly to 2019, leading to the widest ever gender gap of 4.68 months in favour of boys. For writing, boys had their best performance since 2010 and girls their best since 2015. The gender gap was the lowest ever at 9.72 months in favour of girls.

YEAR 7: For reading, boys and girls were down slightly from the previous test. The gender gap was 8.04 months in favour of girls. For numeracy, boys had their equal second-best performance while girls were down slightly. The gender gap was 5.52 months in favour of boys. For writing, boys and girls had their best performance since 2011. The gender gap was the second-lowest at 18.12 months in favour of girls.

YEAR 9: For reading, boys and girls performed lower than in 2019. The gender gap was 9.96 months in favour of girls. For numeracy, boys and girls were down from the previous test. The gender gap was 5.64 months in favour of boys. For Year 9 writing, boys had their best performance since 2011, and girls performed higher than in 2018 and 2019. The gender gap was the second-lowest ever at 20.52 months in favour of girls.

READING SUMMARY: Outstanding outcomes for primary students with their best ever performances on any NAPLAN reading test. Secondary reading was down from recent tests. With increased performance, the gender gap appears to be widening at the primary end.

NUMERACY SUMMARY: Primary school boys and girls did reasonably well on the numeracy test. Other than Year 7 boys, numeracy performance was down for secondary school students compared to recent tests. The gender gap appears to be widening at the primary end in favour of boys, though the gap is still considerably smaller than for reading and writing.

WRITING SUMMARY: Outstanding outcomes for primary and secondary boys and girls, with notable improvement over previous tests. The gender gap in favour of girls appears to be closing at all year levels but is still considerably wider than for any other NAPLAN test.

Key messages to take from the 2021 NAPLAN tests

Something is clearly working in Australia’s primary schools, particularly when thinking about reading and writing. At the primary end, the gender gaps are widening for reading and numeracy and closing for writing. As has been the case in all NAPLAN tests, girls are ahead on the literacy tests and boys are ahead on the numeracy test. The widest gender gap is still clearly associated with the writing test, with girls performing 7.2 months ahead in Year 3, 9.72 months in Year 5, 18.12 months in Year 7, and (a still concerning) 20.52 months in Year 9! Boys appear to be struggling to keep up with the increased writing demands in the transition from primary to secondary school.

While secondary students’ writing performance was higher than in previous tests, their reading and numeracy performances were down. In this sense, NAPLAN for 2021 might be a cause for celebration in primary schools and a cause for reflection in secondary schools.

Assessment tasks in a new reading course for teacher education


As with every tertiary field, assessment plays a major role in what initial teacher education (ITE) students learn in their teacher training. As part of my design of a new reading-focused course at the University of Queensland (UQ), which I introduced here, I’m thinking about the best ways of designing assessment tasks that will prepare ITE students for the realities of primary school reading instruction. In this post, I’ll outline important considerations when designing assessment tasks for tertiary contexts, my preliminary assessment ideas for the new reading course, and some things I’m still wondering about, which you might like to weigh in on.

Designing assessment tasks for a new reading course

First, here are six points I consider important when thinking about assessment task design in tertiary contexts:

1. Assessment tasks should accurately and reliably assess ITE students’ understandings and skills;

2. Assessment tasks should be practically useful for ITE students;

3. Assessment tasks should replicate the sorts of practices of effective classroom teachers;

4. Assessment tasks should include opportunities for planning for, teaching, assessing, and reflecting on student outcomes in that area (e.g., reading in this case);

5. Assessment tasks related to reading instruction should assess ITE students' knowledge and skills across all essential elements of reading - not just phonics and comprehension;

6. Assessment task expectations should be clearly communicated to ITE students through weekly lectures and tutorials and assessment-specific supports such as task walkthroughs or video conferences.

If assessment tasks are accurate and reliable, practically useful, similar to the practices of effective classroom teachers, focused on the core aspects of teaching and learning, based on all essential elements of reading, and clearly communicated, they should be well-received by ITE students as worthy of considerable time and effort.

Most university courses run for 13 weeks. While some courses may only have two assignments, it’s now common practice for course designers to include an early, low-stakes assignment. This means courses usually have three or four assignments in total. At UQ, lecturers are not allowed to include more than four assignments in a course, though we can have assignments that include multiple components.

I think it’s safe to say that marking assignments is not the most enjoyed aspect of teaching for educators in many contexts. While teacher educators don’t teach nearly as often as classroom teachers, they commonly mark assignments from upwards of 100 ITE students every semester. With courses usually including three or four assignments, often written tasks of 1,000 to 2,000 words each, marking can become a major part of a lecturer’s job in no time.

Preliminary assessment schedule for the new reading course

With this context out of the way, here’s my thinking so far for assessment in the reading course. For students to demonstrate their understandings of all essential elements of reading, I have to assess them more than four times (something I’m not allowed to do if these are standalone assignments). To get around this issue, the ITE students will complete six smaller tasks that will combine to become a reading instruction resource portfolio (i.e., Assessment Task 1). Then, they will design two reading lesson and assessment plans, one for early primary and one for upper primary (i.e., Assessment Task 2). I’ve explained my thinking for these tasks below.

Assessment Task 1: Reading instruction resource portfolio

I’d like the ITE students to spend the semester creating a portfolio of reading learning experiences and resources they’ll be able to use on their placements. If they see these portfolio resources as useful, I’m hoping the students will continue reflecting on and adding to these once they start teaching a class of their own.

The portfolio will include a shorter task for each essential element of reading (i.e., phonological awareness, phonics, comprehension, vocabulary, and fluency, plus oral language). These tasks will require ITE students to engage in a variety of practical activities, such as using the IPA to identify phonemes and graphemes in words and sentences, assessing elements of reading as they listen to children reading, critiquing and improving examples of problematic reading instruction, interpreting and making decisions based on reading data, and creating reading teaching resources.

Assessment Task 2: Early and upper primary reading lesson and assessment plans

The second assignment would involve ITE students planning two standalone lessons, with one focused on early primary and the other on upper primary. Students could base these lessons on any two of the essential elements of reading. They would be required to complement their lesson plans with assessment plans, detailing how they would integrate assessment opportunities into their learning experiences, and a theoretical rationale, justifying their choices.

These lessons would be designed with reference to key concepts underpinning the course, such as Rosenshine’s principles of explicit instruction (2012), the gradual release of responsibility model (Pearson & Gallagher, 1983), and the Response to Intervention approach (e.g., Fuchs & Fuchs, 2006) to explicitly support student learning about phonics and comprehension. ITE students would be expected to include learning intentions and success criteria in child-friendly language and examples of teacher talk, think-alouds, and modelling with one or several mentor texts. Their lesson plans would be written in the genre of procedural texts that could be picked up and used by other teachers if needed.

The ITE students’ lessons would be based on developmentally appropriate elements of reading. For instance, I’d expect them to plan phonological awareness or phonics lessons for early primary but not for upper primary, since the majority of school students will have mastered the decoding side of the simple view of reading (Gough & Tunmer, 1986) by upper primary school if teachers focus on approaches like systematic synthetic phonics in the early primary years. I’m excited to assess upper primary lesson plans on traditionally underrepresented elements of reading, such as fluency and vocabulary.

Three things I’m still wondering about

1. Have I missed anything that really should be here? I’ve tried to design practically useful tasks that involve the creation of teaching materials, assessment of children’s reading, the demonstration of knowledge of reading concepts, and decision-making based on reading data. Since this will be the only course dedicated to early reading in the degree, it’s important to include all major practices but also to go deep enough with what matters most.

2. Having shorter assessment tasks mapped against the weekly content should mean most students engage well each week AND that I can assess them for each essential element of reading. I’m slightly worried about the workload though for teaching staff. We would need to think carefully about how to efficiently provide personalised and useful feedback to avoid an overwhelming marking load.

3. For the second assignment, I thought ITE students could find a child, assess an element of their reading, design a learning experience to support their development, teach and assess it, and reflect on the experience. This would be an authentic experience, but I’m also aware that they need to plan whole-class experiences and that they could be teaching any class from Foundation to Year 6, so planning early and upper primary lessons seems important. I would include authentic tasks in the portfolio of resources. Does this all sound reasonable?

As a side note, I would love to integrate practical videos from actual teachers in actual classrooms for each essential element of reading throughout the course. ITE students often want to know about how to set up the classroom for literacy blocks, how to assess on the fly in busy classroom environments, and what reading programs and resources are most useful for different aspects of reading (and how long should be spent teaching different aspects of reading each day). These kinds of questions are best answered by expert, practising teachers. It would be excellent to have a short practical video to complement the lecture and tutorial content each week (such as this wonderful example of phonics instruction from Saint Augustine’s primary school originally shared by AITSL that I’m sure many of you would have seen). Imagine having something like this on fluency, and vocabulary, and phonological awareness, and so on!

It’s been extremely helpful to write about this process and receive ideas and support from many in the education community. I’ll shift back to translating key literacy research for my next post, but if you have any suggestions for how the assessment tasks in a new reading course could drive ITE student engagement and give students opportunities to practise the key practices of reading teachers, it would be great to hear them.

References

Fuchs, D., & Fuchs, L. S. (2006). Introduction to Response to Intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93–99. https://doi.org/10.1598/RRQ.41.1.4

Gough, P. B., & Tunmer, W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7, 6-10. http://dx.doi.org/10.1177/074193258600700104

Pearson, P. D., & Gallagher, M. C. (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8, 317-344.

Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, Spring 2012, 12-39. https://www.aft.org/sites/default/files/periodicals/Rosenshine.pdf

Designing a new early reading course for teacher education


This post outlines my initial design of a new, 13-week early reading course that will be completed by all BEd Primary and MTeach Primary initial teacher education (ITE) students at the University of Queensland (UQ).

In a recent post, I wrote about the opportunity for teacher educators to be transparent about their course (i.e., unit) design. Explaining what and how we teach, and seeking feedback from the wider education community, has the potential to support other teacher educators providing similar courses at other institutions, while also building confidence in how teacher educators prepare ITE students for careers in the classroom.

I’d like to invite you to be part of this kind of course design process at UQ for the new reading course.

My hope is that some of you will take me up on this offer and that this will improve how I prepare ITE students to plan for, teach, and assess reading in primary classrooms.

What helped me to design this initial outline

The new course’s design had four sources of inspiration: reading experts who have written about reading development and instruction from the science of reading perspective; the Australian Institute for Teaching and School Leadership (AITSL); the education community; and my previous teaching at the University of Tasmania (UTAS). I’ll briefly explain how each informed the course design before introducing the course’s key underpinning theories/concepts and weekly topics.

  1. Reading experts

In the past four years, I’ve spent a considerable amount of time skilling myself up on reading development and instruction. This knowledge building has occurred largely through reading. Reading books by well-known cognitive scientists like Stanislas Dehaene, Maryanne Wolf, Daniel T. Willingham, and Mark Seidenberg helped me learn about the complexities and importance of reading development.

I’ve also engaged regularly with the scholarly literature on reading instruction from the likes of Castles, Rastle, and Nation; Beck and McKeown; Rasinski; Stainthorp; Ehri; Moats; Rupley; Snow; Oakhill, Cain, and Elbro; and many more. New research papers about what works in the teaching of all elements of reading are published all the time, so keeping up with the evolving research landscape is important for every educator interested in teaching reading well.

Other reading experts are also having a major impact through the translation of reading research into practical guidance and training for teachers. It built my confidence to see that the topics covered by Pam Snow and Tanya Serry in their Science of language and reading introductory course were very similar to the sorts of topics I included in my teaching at UTAS. If you’re a teacher who never really learnt the ins and outs of reading development and key theories like the simple view of reading, Pam and Tanya’s courses are well worth the (very reasonable) price of admission.

Websites like Five from five and (FREE) professional learning offered through Think Forward Educators have also helped many teachers engage with and gain confidence in the teaching of reading from the SoR perspective. Many talented teachers and researchers are giving a great deal of time and energy to support others in this space for the sake of the children in today’s and tomorrow’s classrooms. With free resources and PL opportunities, every teacher has the opportunity to learn to teach reading effectively.

  2. AITSL

Due to changing accreditation standards (explained here), it’s likely that teacher education providers will soon be adding considerably more reading content to their programs. At universities like UQ, new 13-week courses on reading and reading instruction are being developed and introduced, which is brilliant for everyone involved. But since English education is a broad field, and since most teacher educators are not experts in every element of English and its teaching, there are lecturers out there who would not consider themselves experts in reading instruction. Clearly, this is a problem when a key part of the job is preparing beginning teachers to teach reading.

To help all teacher educators meet the new accreditation standards, AITSL has published a lofty 166-page document outlining in detail the sorts of English education topics they believe should be included in teacher education programs. They have outlined:

  • Sample program outlines, indicating several options for the flow of topics across multiple courses
  • 32 detailed modules (e.g., Language development) with suggested program year levels (i.e., should it be in the first, second, third, or fourth year of a degree), descriptions of content, suggestions for tutorial activities, key learning outcomes, references to build teacher educators’ knowledge of module content, and resources that can be used with initial teacher education students to support their learning.

It’s quite amazing how much useful information is in this AITSL document. Even if a lecturer is not an expert in reading, the guidance offered here means every university should be providing a first-rate experience when preparing ITE students to teach reading. I actually think any educator interested in reading instruction would gain from looking through this document and reading some of the many excellent references it lists. Think of it as a cheat sheet for all things reading instruction.

  3. The education community

Recently, I posted the following question on Twitter:

[Embedded tweet: my question about what to include in the new reading course]

Nearly 400 people interacted with this tweet and a very generous 24 took the time to share their thoughts. The wonderful Tina Daniel-Zitzlaff also reposted the question on her Facebook group, Reading Teachers Australia, and sent me the feedback since I’m not a Facebook user. On the whole, the responses highlighted the importance of: specific elements of reading not traditionally taught (well?) in teacher education; assessment and data literacy; and reading models/frameworks such as Scarborough’s Reading Rope. This was very helpful feedback, so thanks to those who shared their ideas and experiences.

  4. My previous teaching at the University of Tasmania

As a final source of inspiration, I’ve been teaching early reading instruction at UTAS since taking over the second core English course there in 2016 (prior to that I was coordinating the first core English course). At UTAS, I covered early reading and writing in one 13-week course, which I split into 7 weeks for reading and 6 weeks for writing.

With only 7 weeks, I had an initial overview of reading, followed by the five essential elements of reading, and a week on explicit reading instruction. While there were bits and pieces missing from this approach, it still allowed me to cover many important concepts and set me up well to prepare a full course for UQ.

Some points to bear in mind while looking at the new course design

  • This is a 13-week course, so I have one extra week to either cover a current topic over two weeks OR add something new into the mix. Have I missed anything or should something be spread out?
  • For each element of reading noted below, I will provide an introduction, key teaching practices, and key assessment practices.
  • While many teacher education programs prepare ITE students to become early childhood educators, BEd and MTeach ITE students at UQ are all training to become primary school teachers.

Key underpinning theories/concepts

For this new course, the following theories and concepts will underpin the weekly content:

  • The simple view of reading (Gough & Tunmer, 1986)
  • Scarborough’s reading rope (Scarborough, 2001)
  • The dual route cascaded model (Coltheart et al., 2001)
  • Rosenshine’s principles of explicit instruction (Rosenshine, 2012)
  • Response to Intervention (e.g., Fuchs & Fuchs, 2006)

Without further ado, here is the initial plan of weekly content:

Weekly topics for new early reading course

1. Overview and history of reading and reading instruction

2. Explicit reading instruction

3. Oral language (introduction, teaching, assessment)

4. Phonological awareness (introduction, teaching, assessment)

5. Concepts of print and alphabet knowledge (introduction, teaching, assessment)

6. Phonics (introduction, teaching, assessment)

7. Comprehension (introduction, teaching, assessment)

8. Vocabulary (introduction, teaching, assessment)

9. Fluency (introduction, teaching, assessment)

10. Supporting all readers

11. Links between reading and spelling

12. Course reflection: Managing the elements of reading

13. ?

How you can help inform the design and refinement of this reading course

At this stage, I’m looking for feedback from teachers and other education professionals about: (1) the selected underpinning theories/concepts, (2) the general flow of weekly topics, and (3) whether I should cover a current topic over two weeks or introduce another topic for the 13th week (the topic can be slotted in at any point in the course).

If you have any thoughts to share, you are most welcome to leave a comment below or reply to my initial tweet promoting this post on Twitter.

What’s next?

With key theories and weekly topics outlined, the next step will be designing the assessment tasks that will provide ITE students with opportunities to demonstrate their understandings of reading concepts and how to plan for, teach, and assess reading in classroom contexts.

Thank you to those who support me in designing (what I hope will be) a first-rate experience for ITE students at UQ. If any other teacher educators are currently designing or fleshing out reading-focused courses, I would like to support you in that process. Please reach out via Twitter or email and let’s talk about it (damon.thomas@uq.edu.au).

References

Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108(1), 204-256. https://doi.org/10.1037/0033-295X.108.1.204

Fuchs, D., & Fuchs, L. S. (2006). Introduction to Response to Intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93–99. https://doi.org/10.1598/RRQ.41.1.4

Gough, P. B., & Tunmer, W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7, 6-10. http://dx.doi.org/10.1177/074193258600700104

Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, Spring 2012, 12-39. https://www.aft.org/sites/default/files/periodicals/Rosenshine.pdf

Scarborough, H. S. (2001). Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In S. Neuman & D. Dickinson (Eds.), Handbook for research in early literacy (pp. 97–110). Guilford Press.