Make me a Match

TeacherMatch sounds like something straight off the cover of the Onion, especially when eyeing up the price tag, but it could be the new reality of the Madison Metropolitan School District. It is enough to make Yente, the matchmaker from Fiddler on the Roof, turn green with envy. The Madison Metropolitan School District purports to be stretching its dollars in the upcoming budget cycle, so what guarantees does this $273,000 venture promise?

$273,000 closer to a business model for education

A Chicago-based company, TeacherMatch, claims to use algorithms to predict the effect that a teacher candidate will have on value-added student test scores. Plausible or not, in an era when we are scrutinizing testing bias and adopting social-emotional learning standards, defining a good teacher solely by students’ standardized test scores is faulty.

Last week, I was called over to a student’s computer while administering the Measures of Academic Progress (MAP) test. He was taking a reading test, so I was surprised that this talented reader was stuck on a question so early on. I soon found out what was causing the looks of frustration. In order to answer the question, he had to know what was meant by the “Touch of Midas.” This information could not be found in the passage he was being tested on; it wasn’t even hinted at. It was simply a piece of background knowledge the test assumed. Of course, I could only encourage the student and remind him to “read carefully,” but it was disheartening, knowing that the answer was not actually there. Am I a less effective teacher because I hadn’t told my students the story of King Midas? Where exactly can you find King Midas in the Common Core State Standards?

The creators of TeacherMatch have boiled down a teacher’s value to four distinct categories: where the candidate went to college, a candidate’s drive or ability to work through challenges, content knowledge, and teaching skills.

Having attended the University of Wisconsin – Madison, a top school in Teacher Education, I feel fairly confident that I’d score okay in this first category. However, most of my tips and tricks as a teacher were picked up in actual classrooms, working with real students in diverse settings, or from my mentor teachers along the way. The University did get me ready for this challenge, but it wasn’t the end of my journey. My grade point average as an undergraduate was 3.84, and as a graduate student my 4.0 remains intact, but in no way do these numbers indicate my own level of perseverance in obtaining a bachelor’s and a master’s degree as a single parent against incredible odds. Nowhere in these numbers is it obvious that I was once labeled an “at risk” student myself, which is the strongest motivator imaginable. As for testing teacher skills, the best indicator of my effectiveness as a teacher can be found by watching me in a classroom. That is where I shine, no matter what shows up on paper.

Data does not define me as a teacher.

The University of Wisconsin, My Alma Mater

Incomplete or faulty criteria aside, TeacherMatch’s assessment methodologies are also cause for alarm. As a means of arriving at a rating, a candidate must answer 100 questions. Each question must be answered within two minutes, or the score is invalidated. So much for think time!

As a teacher, I have learned that arriving at the truth takes time and follow-up questions, but there is no room for this type of authentic interaction with TeacherMatch. The learners who benefit most from “think time” or extended answer time in my classroom are English Language Learners. For this reason alone, it is obvious that TeacherMatch comes complete with its own testing biases.

Seeing my student teachers interacting with my students gives me a window into who they will become as educators. Talking with teacher candidates about classroom community, their individual successes and failures with students, and their own journey of personal growth offers me insights into their humanity.

TeacherMatch moves schools another step closer to a business model and abandons the heart of teaching.

The Madison Metropolitan School District should leave the matchmaking to the administrators, teachers, and, of course, to Yente.

Matchmaker, Matchmaker,
Plan me no plans
I’m in no rush
Maybe I’ve learned
Playing with matches
A girl can get burned

Write the school board to tell them your thoughts about TeacherMatch.

There will be an open Board meeting with the vendor of TeacherMatch and MMSD BOE members available to answer questions on June 9th at La Follette High School, 702 Pflaum Road. The meeting will commence at 5:00 and will be followed by a public hearing on the budget at 6:00.


Photo courtesy of Erin Proctor, EA-MTI president

Banner photograph courtesy of Erin Proctor, EA-MTI president

This entry was posted in education, labor, politics. Bookmark the permalink.

3 Responses to Make me a Match

  1. Dear Karen: Thank you for your thoughtful reflections and challenges. It is clear that you are passionate about your craft and sincere in your love for education. As experienced teachers and administrators ourselves, we agree that teaching is a complex art. I did want to add some detail to the discussion. We spent several years doing the research that helped shape our selection assessment and that research was done in collaboration with 4 universities and hundreds of teachers.

    In fact, practicing teachers wrote the vast majority of our items and the rest were written by university professors. Also, our framework for the factors that matter most came from decades of educational research and heavy surveying of teachers. The four general areas are: qualifications, attitudinal factors, cognitive ability, and professional knowledge of teaching skills (the most important area). Those factors gleaned from the research and from teacher surveys were then used to shape our item construction. All of those items were then tested across thousands of teachers nationally, and every teacher in our study provided feedback after taking pilot tests.

    Also, our tool is not just based on standardized test scores. It is designed to evolve as our partner districts give us feedback data that includes multiple measures such as classroom observation results, student course surveys, parent surveys, and student data. Districts decide what data matters most to them and our assessment responds accordingly. We do not see our work as a static set of items, but rather as an ongoing action research project done in collaboration with partner districts.

    Finally, I would ask you to consider the current state of teacher hiring in our nation. It’s incredibly arbitrary and subjective. When I do workshops at conferences, I use an exercise called Resume Roulette where I give the room full of leaders the same stack of 20 resumes and I ask them to identify the two they would bring in for an interview. Every single time I do this exercise, all 20 get invited to an interview. Then I ask the group to identify the criteria they used to make the decision. The criteria are all over the place and are often contradictory within the same hiring teams. And we tend not to look at the factors that the research suggests are important. The current state of affairs is unfair for teachers and does not give every candidate a fair shake based on the things that matter most for student success. We are seeking to work with school districts to change this. We would never suggest using the results of our assessment to make the hiring choice. Instead, we see it as adding one more powerful data point that can be used in unison with sample lessons, interviews, etc. conducted by experienced administrators and teacher hiring teams.

    I am always open to discussing this more with anyone interested and appreciate the chance to weigh in here.

    Don Fraynd, CEO, TeacherMatch,
    UW-Madison class of 2000 – Go Badgers :-)!

  2. sglover says:

    Don Fraynd does himself no favors with his example of how “arbitrary and subjective” teacher selection supposedly is:

    I use an exercise called Resume Roulette where I give the room full of leaders the same stack of 20 resumes and I ask them to identify the two they would bring in for an interview. Every single time I do this exercise, all 20 get invited to an interview. Then I ask the group to identify the criterion they used to make the decision. The criteria are all over the place and are often contradictory within the same hiring teams.

    So…. Does each participant assess the 20 resumes solo? Are the participants split into groups? (Presumably the entire group doesn’t come to a collective decision, since in that case only two candidates would be chosen.) Are some of the 20 resumes egregiously horrible? Dazzlingly stellar?

    It’s not hard to imagine a “choose x from a pool of y” exercise in which it would be perfectly normal for every member of the pool to be chosen at least once. One would think that a guy who claims statistical competence would be aware of that. In any case, Fraynd seems to want to imply that the outcome he sees is awful, completely wrong. Maybe it is. Who knows? His own description is so muddled that it’s foolish to draw any conclusions from it. Which is odd, because in my experience mathematical competence tends to go with precision in language.
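    In fact, this is easy to check with a quick simulation. Fraynd never says how many leaders are in the room, so the numbers below are assumptions for illustration only: each participant independently picks 2 of 20 resumes at random.

    ```python
    import random

    def resume_roulette(n_resumes=20, n_participants=50, picks=2,
                        trials=5000, seed=0):
        """Estimate the probability that EVERY resume is chosen at least
        once when each participant independently samples `picks` resumes
        uniformly at random from the stack."""
        rng = random.Random(seed)
        all_covered = 0
        for _ in range(trials):
            chosen = set()
            for _ in range(n_participants):
                chosen.update(rng.sample(range(n_resumes), picks))
            if len(chosen) == n_resumes:
                all_covered += 1
        return all_covered / trials

    # A roomful of ~50 leaders: all 20 resumes get picked most of the time.
    print(resume_roulette(n_participants=50))
    # A small panel of 5: full coverage is essentially impossible.
    print(resume_roulette(n_participants=5))
    ```

    Any single resume survives one participant's draw with probability 18/20 = 0.9, so it goes unpicked by all n participants with probability 0.9^n; with a room of 50, "all 20 get invited" is the expected outcome even under pure chance, which is exactly the point: the exercise, as described, demonstrates nothing about hiring criteria.
    
    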

    On the other hand, fast talk and slipshod language are just perfect for a sales pitch. I suggest that the biggest concern of this “ongoing action research project” is: how can we divert public school funds into our shareholders’ pockets? It’s a vibrant, “disruptive” line of “work”, these days…
