Join the #ISCoachingCollab EU

Are you a Tech Coach, Instructional Coach, or Data Coach? Or are you in a Coaching-adjacent role like Tech Integrator, Curriculum Coordinator, or Team Lead? We are often the only person in our role at our school, so #VisualizeYourLearning is launching the #ISCoachingCollab to help us network, learn, and grow. Together.

We aim to hold virtual meetups 2-3 times per semester, launching in January 2023. The meetups will include learning, networking, problem-solving, upcoming professional development, and whatever else the Collab wants. And… we are proud to partner with Learning2 as part of their L2Threads, which aim to keep the conversations alive.

How do you join? Simple: fill out this form and you are in!

What is it?

  • A one-hour online meetup, 2-3 times per semester
  • Includes 20-30 minute learning sessions provided by members like you!
  • 15-25 minutes to network with other Coaching roles and dig into the learning

Why are we partnering with #Learning2?

  • The Collab is part of #Learning2 Threads, creating continuity between conversation spaces
  • #Learning2 has agreed to help us look for ways to incorporate the Collaborative into future in-person events
  • The #ISCoachingCollab is designed by participants and won’t be sponsored to avoid conflict of interest

How much does it cost?

  • It’s free!

Our next meetup will be on March 20 at 17:00 Central European Time.


Fall Data Priorities: 3 Strategies To Use Right Now For Creating A Data Rich Classroom

When we start the school year, we can have a lot of data goals in mind. Perhaps we have big ideas about using data portfolios or data walls. It can be an exciting and generative time while we still have energy. However, we really have only one reason to use data in the first few months:

The priority for data in the first few months of school is to get to know your students.

Our goal is to learn who they are as people, and who they are in their learning. Here are 3 strategies to get to know your students this fall:

Use a questionnaire to learn about your students

I ask my students just a few questions on the first day: What do you want me to call you in class? What pronouns do you prefer? What has been your relationship with learning my subject in the past? What is a hope you have for this year? Is there anything else you’d like to share?

Asking for names and pronouns signals that your room is safe and inclusive. Asking about past experiences can be an enlightening view into how a student approaches your class. I collect responses digitally with a Microsoft Form, but my goal with surveys is to keep them short for easy completion.

My Microsoft Forms survey
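
If you export the Form responses to a CSV, a few lines of Python can turn them into a quick roster reference for the first week. This is a minimal sketch, assuming a file named first_day_survey.csv whose column headers match the questions above; adjust both to your actual export.

    # Summarize first-day survey responses (hypothetical CSV export).
    # The file name and column headers are assumptions.
    import csv

    with open("first_day_survey.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    for r in rows:
        name = r["What do you want me to call you in class?"]
        pronouns = r["What pronouns do you prefer?"]
        hope = r["What is a hope you have for this year?"]
        print(f"{name} ({pronouns}): {hope}")

    print(f"\n{len(rows)} responses collected.")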

Review past file data

You may work on a team where transitions from year to year happen as part of your structures. If so, great. If not, you may be like me and need to review files. I look in two places: the last report card and standardized testing data.

It takes me about a day or so to review the report cards of 80+ students. I’m particularly looking at the narrative pieces for feedback on what each learner is like. This might be the most tedious process, but it’s super valuable.

Collect diagnostic data

Even with the last report card, we may not know whether gaps have formed or whether there are new areas for extension and growth. Therefore, I collect formal diagnostic data. However, this doesn’t mean I start every year with a pretest; in fact, I start every year with collaborative tasks and mindset activities.

Instead, I employ a gradual diagnostic. As part of students’ homework or in-class tasks, I ask them to complete 3 questions online targeting specific standards from the previous grade and the current grade. I target known “tricky” standards or use priority standards set by our school.

After a few weeks, I have a good idea of which standards my students are comfortable with and where we might need to remediate.

After this series of diagnostics, I can see we may want to spend time on 6.EE.1
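
One way to make that call is to tally correctness by standard across the weekly diagnostic questions. Here is a minimal sketch of such a tally; the (student, standard, correct) layout and the 70% cutoff are illustrative assumptions, not a fixed rule.

    # Tally gradual-diagnostic results by standard and flag candidates
    # for remediation. Data layout and the 70% cutoff are assumptions.
    from collections import defaultdict

    # (student, standard, answered_correctly), e.g. exported from your LMS
    responses = [
        ("Ana", "6.EE.1", False), ("Ana", "6.NS.3", True),
        ("Ben", "6.EE.1", False), ("Ben", "6.NS.3", True),
        ("Cal", "6.EE.1", True),  ("Cal", "6.NS.3", True),
    ]

    totals = defaultdict(lambda: [0, 0])  # standard -> [correct, attempted]
    for _, standard, correct in responses:
        totals[standard][0] += int(correct)
        totals[standard][1] += 1

    for standard, (correct, attempted) in sorted(totals.items()):
        rate = correct / attempted
        flag = "  <- consider remediation" if rate < 0.70 else ""
        print(f"{standard}: {correct}/{attempted} ({rate:.0%}){flag}")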

Again, our priority is to learn how to teach the students in front of us. They are different from last year’s students and will be different from next year’s. It’s our job as educators to use the information at our disposal to figure out what they need as learners.


The problem with patterns

David Morgan

Correlation and Causation 

When presented with numerical data, our brains (trained by years of secondary education) seek out patterns and then, if required, plot and graph the relationship. By the end of our school years we do this with relative ease and confidently share the conclusions the data shows.

However, just because a clear pattern emerges, have we satisfactorily established CAUSATION, or simply identified a CORRELATION? The uncomfortable truth is that we never really understood the difference.

In his 2006 documentary, An Inconvenient Truth, Al Gore stood next to a huge graph that tracked the temperature of the earth and the concentration of carbon dioxide. His conclusion was that rising CO2 had caused the rise in temperature and the resulting climate changes. Over the next few years, the counter-argument insisted that there was an undeniable correlation between the two variables but no causal link; indeed, some observers argued the relationship was the inverse, with temperature causing the rise in carbon dioxide. While this argument has since been resolved by the IPCC, the problem of causation and correlation remains.

Data in public policy is even more complex to analyse, and the problem of rushing to judgement on the link between two data sets is commonplace. This is best exemplified by the predictions regarding inner-city crime in the US during the nineties. Politicians predicting a crime wave linked to poverty, unemployment, and drug use were in abundance. However, explaining the eventual drop in crime rates proved equally elusive. This is famously covered by Levitt and Dubner in Freakonomics, where they establish a clear relationship between abortion rates and crime.

The question for educators is whether we condition students to assume every relationship shows causation without ever really considering correlation.

Many years ago I would have students investigate factors that affect how long a candle burns when covered by a glass beaker. Most would vary the size of the beaker and find that a bigger beaker held a larger volume of air, which allowed the candle to burn longer. Occasionally students would vary the size of the candle and would struggle to come to a conclusion. Every once in a while a student would suggest the colour of the candle; as a caring teacher, I would discourage them from this path. Was this the right thing to do? I now question that decision. I have no idea whether the coloured compounds in the wax have any effect on the rate of combustion. I know that there will be data, and I know that there could even be a pattern in that data. Establishing whether that pattern is causation or correlation, however, would be a much greater test of a student’s scientific knowledge than the pre-determined and well-known causation the rest of the class has found.
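
A quick simulation shows how easily a non-causal variable can still produce a pattern in a small sample. This is a sketch, not the classroom experiment itself: the burn-time model and all numbers are invented for illustration.

    # Beaker volume causes burn time; candle colour does not. Yet in a
    # sample of 12 trials, colour can still show a nonzero correlation.
    # All numbers here are invented for illustration.
    import random
    random.seed(1)

    def correlation(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    volumes = [random.uniform(200, 1000) for _ in range(12)]  # beaker, mL
    colours = [0] * 6 + [1] * 6                               # 0=white, 1=red
    random.shuffle(colours)
    # Burn time depends only on volume, plus measurement noise.
    times = [0.02 * v + random.gauss(0, 1.5) for v in volumes]

    print("volume vs time:", round(correlation(volumes, times), 2))
    print("colour vs time:", round(correlation(colours, times), 2))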

“Just because two variables have a statistical relationship with each other does not mean that one is responsible for the other.” – Nate Silver

No doubt we are teaching our students to collect and analyse data, but are we being too safe in our choice of experiments and investigations? By using the same old experiences, we condition them to expect and assume causation. Shouldn’t we be having them look at data that is less obvious and have them argue over causation and correlation? There won’t always be a clear-cut answer, but isn’t that more like real life anyway?

David Morgan is an international teacher and administrator currently based in Switzerland.

Gore, Al. An Inconvenient Truth. Directed by Davis Guggenheim. Hollywood, CA: Paramount, 2006.

Levitt, Steven D., and Stephen J. Dubner. Freakonomics. Harper Trophy, 2006.

Schools Have Data, Now What? Part 2: Three Steps to Know Your Data Better.

In our last post, Chris Smith wrote about the data pipeline: the process by which we clean, wrangle, and pre-process data so that we can visualize and make proper inferences. Today we give you another necessary step when telling data stories: Know your data.

In my role as a Data Coach, I spend the majority of my time helping people understand what a certain metric means and what plausible inferences can be made from the data. What I find most often is one of two scenarios: (1) people have little idea how a certain metric is calculated, and (2) people are overconfident in their ability to make inferences from data and often jump to conclusions. So here are three steps for you and your school to know your data better and avoid those two pitfalls.

Know The Math Behind The Data

Within reason, you must know a little bit of math to understand your data. You need to know how the metric is calculated so you can avoid making false assertions. Let me tell you a tale from my own experience:

I’ve worked at schools that take annual standardized tests. These tests produce a “Growth Projection” metric in the fall for each student, and when we take the test again, we see how many of our students met this projection. Here was my approximate data over 7 years:

Are you seeing what I’m seeing? My first 5 years I hovered in the high-40s to low-50s percent range before climbing to the 60-70 percent range in 2016-2017. The common first inference is that only around 50% of my students are meeting growth targets. While a perfect 100% is not a fair expectation, this is far too low, right? A previous administrator thought so, and I had to set goals to improve my scores. Do you side with that inference and action plan?

Not so fast. If you don’t know how those projections are calculated, how can you infer what percentage is appropriate? It turns out that in this test, each student’s growth projection is calculated from an average of a huge data set of similar students. You know the thing about averages, right? They’re in the middle, meaning approximately half your students will be above that number and half below. The target on this metric is therefore to have around 50% of your students meet their projections. I’d go as far as to say that when my data started to stretch into the 70% range, we had a real problem: I might have been teaching to the test, or something in the curriculum might have become too oriented toward the standardized test.
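
A tiny simulation makes the point concrete. This is a sketch under the one assumption stated above: each projection is the mean growth of similar students. The normal distribution and its parameters are invented for illustration.

    # If a growth projection is the *average* growth of similar students,
    # roughly half of any typical group will land at or above it.
    import random
    random.seed(42)

    projection = 8.0   # mean growth (points) of "similar students"; invented
    growths = [random.gauss(projection, 4.0) for _ in range(10_000)]
    met = sum(g >= projection for g in growths)

    print(f"Met projection: {met / len(growths):.0%}")  # prints ~50%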

Make sure you know (within reason) how your data is calculated to make accurate inferences.

Know The Limitations Of Your Data

This strategy takes humility. People who like data often point out that numbers feel more concrete, more like hard science, than individual perceptions. However, so often the second we get data, we start making inferences well beyond the scope of the data itself.

At its core, data only measures what it measured. For example: if your students take a history quiz, you may use the results to form ideas about how much your students understand about the unit or topic. However, that is not what the data measured; that is meaning we have added to the data, an inference. Concretely, all the data says is how that group of students did at answering those specific questions on that day. Any further conclusion about what students know, or what was taught well, is an inference made by us, not the data.

All data is limited this way. The SAT only measures how a student does on the SAT, and colleges have used that data to infer potential college success. The grades we assign in class only measure the sets of data we have collected, and we infer indirectly that they show proficiency. This is why I’m a big proponent of building multiple sets of data for high-stakes conclusions. It’s fine to use a single quiz to recommend that a student do a review assignment or that the teacher re-tool a lesson: it’s one small piece of data, but the stakes are low. However, placement tests that determine course placements? All we’ve measured is how a student does on one test, on one day, on those particular questions, yet we’ve determined potentially years of course options.
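
One way to put triangulation into practice is to require agreement across several independent measures before making a high-stakes call, and to route disagreements back to a human conversation. A minimal sketch; the measures and the voting rule here are hypothetical, not a recommendation engine.

    # Triangulate placement evidence: recommend only when independent
    # measures agree. The measures and voting rule are hypothetical.
    def placement_recommendation(placement_test: bool,
                                 classroom_grades: bool,
                                 teacher_judgment: bool) -> str:
        """Each argument is True if that measure supports advanced placement."""
        votes = [placement_test, classroom_grades, teacher_judgment]
        if all(votes):
            return "advanced"
        if not any(votes):
            return "standard"
        return "review"  # measures disagree: gather more data first

    print(placement_recommendation(True, True, True))   # advanced
    print(placement_recommendation(True, False, True))  # review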

Humility to understand limitations and willingness to do the work to triangulate data is a necessity in a positive data culture.

Appoint Experts Who Will Do the Research When Necessary

Large data sets are constantly changing and evolving. The NWEA MAP Growth tests have over 150 potential variables/metrics for each student, and reports from College Board and the IBO are similarly large. It’s impossible to study all such variables, and in my experience, it can be hard even for those organizations’ customer-facing representatives to know how every variable is measured. No shade thrown here; it’s just a lot.

At my school, as the Data Coach, one of my functions is to read documentation. Like all coaches, staying on top of our role requires reading research; for me, that sometimes means studying standardized-testing documentation and learning how metrics are calculated. With so much amazing data to dig into, someone needs to be able to clarify how the data is gathered, how it is measured, and how it can be used to generate theories. If your school has the means, designate a point person to become a specialist in specific data sets. Not a general “data person” who is “good at numbers,” but someone who will become an expert in standardized tests, someone who will become an expert in your database data sets, and so on. This person needs to be willing to follow my previous two recommendations: know the math and know the limitations.

