As I belong to the legion of education professionals usually on the receiving end of the term "top-down," I'm not too keen on the enterprise's top-down improvement initiatives.
Unlike many in that legion, though, my negative feelings about education's top-down improvement initiatives don't have much to do with the reduced autonomy and increased accountability they often bring. Rather, my disapproval comes from how many of these initiatives miss the mark on more crucial issues while consuming huge amounts of schools' improvement infrastructure.
In other words, it doesn't matter if it's a mandated teacher-evaluation system, a district-wide digital learning initiative, a statewide move to "proficiency-based learning," or whatever else. If the improvement initiative being rolled into scores of schools' professional-development and everyday-operational spaces isn't based on (a) how humans learn, (b) what individual schools' kids actually need most, or (c) positive results it has produced elsewhere, I'm not going to like it. Over my couple of decades working in and studying the education field, I've simply seen too much cash, time, and angst wasted on the wrong things, and too few positive student results in return.
What the academy taught me
This stance took root fifteen years ago, back when I was teaching in a school that learned to build effective improvements from within. I share a lot about this experience in my new book, What the Academy Taught Us: Improving Schools from the Bottom Up in a Top-Down Transformation Era.
Essentially, the book shows how our school produced "continuous school improvement" (a bottom-up transformation concept and process that is, rather encouragingly, catching on of late) well before "continuous school improvement" was even really a thing. Readers see how the school's leaders carefully determined improvement priorities, how teacher-teams were empowered to design and lead improvement strategies and actions, how teacher-leaders systematically monitored and supported those strategies and actions, and so on, all without the aid of processes like root-cause analyses and PDSA cycles. Improvement science hadn't really hit the education space at the time, of course, so we just had to figure things out as we went. (And yes, this also means readers will see how, with so few guideposts to follow, all our improvements had to trip around a bit before finding their right strides.)
Also, and less fortunately, the book shows how these site-level improvements were eventually run off the road by central office’s efforts at “transformational system change,” and how that’s turned out for the district as a whole in the past decade or so. (Spoiler alert: The “equitable achievement” this district sought to create with its “transformational system change” hasn’t exactly materialized to date. Again, this is where and why my anti-top-down bias first took root.)
Drawing on this real-life model of effective "bottom-up" school improvement and on my subsequent administrative and advisory experiences, the book offers a number of principles for school leaders to consider as they build improvements from within.
In all, I hope What the Academy Taught Us will be useful to the growing number of schools currently taking on the difficult work of truly "bottom-up," continuous school improvement. If we can get it right, I'm convinced it can take schools and their kids much further than sweeping top-down improvements and reforms ever did.
How not to blow the continuous school improvement moment
While I am of course encouraged by the enterprise's recent surge of interest in continuous school improvement, I and others are a bit concerned that we may be rushing from buzz-term to operation, as education tends to do with its Next Big Things, without fully understanding or appreciating all the parts.
Continuous school improvement is not solely about having tons of data and finding the right root causes; it's also about choosing appropriate, evidence-supported improvement strategies once those root causes are unearthed. I've argued for some time that education has been stuck for decades because of how poorly it chooses effective actions, and I can't say I've noticed that tendency changing yet among the schools and foundations currently working so hard on continuous improvement. (In fairness, I'm not actually in their network meetings to hear how it's all going. But when I see some of the recommendations, lessons learned, and trends that come out of them, I'm not very hopeful that they're as mindful of effective solution-selection as they'll need to be for all this to work.)
Having direct experience with widely used improvement processes (like the Public Education Leadership Project's "Problem-Solving Approach to Designing and Implementing a Strategy to Improve Performance" and Carnegie's PDSA cycles), for instance, I can confidently say that they don't do nearly enough to aid this shift toward better solution-selection. Between all the data study, root-cause determination, action-strategizing, and progress monitoring they facilitate, an explicit "Learn" element is still sorely lacking. ESSA's evidence guidance improves on this a bit by directing improvement planners to consult evidence at one of its stages, but in effect it's just an extra hoop to jump through with the word "evidence" taped on it, certainly not a mechanism for building the field's capacity as research- and evidence-guided intervention selectors.
As schools and districts begin designing better "bottom-up" improvement via continuous school improvement processes, improvement scientists and policymakers can help. First, of course, they can simply acknowledge the education field's historic struggle to get truly evidence-based practices into operation. Put simply, the answers to more effective instruction are not always "in the room," and we must stop assuming they are.
Next, improvement scientists and policymakers can help steer more effective intervention selection (and, to boot, permanently sharpen the research literacies of many educators) through a number of means. Here are a few to start:
- Improvement scientists can alter existing problem-solving processes to include explicit research/learning/vetting stages, ones that walk educators through evidence study much as current processes walk them through root-cause analyses.
- Policymakers can work to make school-improvement leaders more aware of available stores of educational research, and to ensure that those resources are readily accessible and digestible for the educators who find them. When so many working educators don't know that resources like IES's Regional Educational Laboratories even exist, it's safe to say that what's been built isn't terribly useful. (And people really wonder why so many teachers turn to Pinterest and Teachers Pay Teachers for their materials? Interesting.)
- Also, policymakers can steer more effective school-improvement work by funding and rigorously training school-improvement support teams. I was part of just such a team in Minneapolis Public Schools, and my state actually offers such support through its Regional Centers of Excellence. As it currently stands, however, that support is fairly minimal (six people make up the entire Centers of Excellence staff, for instance, to cover all of Minnesota) and is reserved first for chronically underperforming schools.
With regard to creating more truly bottom-up school improvement, continuous school improvement's current moment is loaded with potential, both for creating positive, context-specific change at the school level and for permanently building the research capacities of educators far and wide. My experience in the field has taught me that while doing bottom-up transformation right takes a lot of work, it's well worth digging in. We'd all be wise not to blow this moment.