Scientific expertise is not broadly distributed
an underappreciated obstacle to creating better content

In 2015, the MIT Online Education Policy Initiative report proposed a profession of content-creating "learning engineers". Basically, science post-docs with parallel training in education research and pragmatics. Sort of "combine expertise in science and education... and the science expertise is easy, just get good post-docs". I felt this badly underestimated the magnitude of effort needed on the science side, and from the science research community itself. The following was drafted as a response. It also became an exploration of whether content correctness and insight, so rare and so costly, were even necessary.

Introduction

Let us suppose that transformatively improving science education turns out to require that science education content be made much more correct and insightful. This may not be necessary (it seems an open question), but suppose it is. What would this cost? I suggest that creating more correct and insightful content will be far more costly, and will require far more support from the science research community, than is often assumed.

Education is full of surprises. Like these: What do you mean my lectures aren't working? What do you mean my students have rich ecologies of misconceptions? But they're doing OK on my exams! You've heard of those before. But what about this one: What do you mean my first-tier science graduate students and post-docs lack the science expertise to write coherent and numerate science education content?!? For any age level, even K?

Imagine a lunchtime party. You overhear an elderly gentleman ask: "So, you are a doctor?" "Yes, of medieval French literature." "Good, I have this question about my digestion..." Did you laugh? Ha! So many people just don't understand how expertise is distributed.

Then across the room, you see a five-year-old approaching a group of first-tier astronomy graduate students. She asks them, "I have a new set of finger paints! I'm painting space! What color is the Sun? Which paint should I use?" Did you laugh? Ha! How large a group would she need to ask, to get even 50-50 odds that they don't all give her the same wrong answer? Silly five-year-old - she didn't even ask if any of them had a research focus in spectroscopy! So many people just don't understand how expertise is distributed within the science research community.
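
(If you want the arithmetic behind that joke, here is a minimal back-of-envelope sketch. The prevalence figure is a hypothetical assumption of mine, not a measurement.)

    import math

    # Hypothetical assumption: a fraction p of astronomy graduate students
    # share the same wrong answer, independently of one another.
    p = 0.9

    # P(all n students give the same wrong answer) = p**n.
    # Solve p**n = 0.5 for n, the group size giving her even odds.
    n = math.log(0.5) / math.log(p)
    print(f"group size for even odds: about {n:.1f} students")  # ~6.6, so about seven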

The recent MIT Online Education Policy Initiative report advocates for a profession of "learning engineer". People with a foot in the sciences - good science graduate students and post-docs, able to follow the research literature, talk with researchers, and create correct and insightful education content. People also with a foot in education, having similar training in education research, technology, and pragmatics, and in current research on human learning. It's a daunting goal. But reading that, did you think it's the "correct and insightful" bit that may be the implausibly hard part?

Three stories

Here are three stories for perspective.

First story. Access to scientific expertise is harder than you think.

The Harvard-Smithsonian Center for Astrophysics is both a first-tier astronomy center and a leader in science education research. And it has a focus on the importance of misconceptions in education. And yet, for years I've asked CfA graduate students a longer form of this question: "A five-year-old asks: I'm told the Sun is a big hot ball! What color is the ball?" Almost all of them get it wrong. Wrong in a way that also compromises their understanding of light and color, of sensors, blackbodies, and more. There just happens to be a widespread misconception. And of the few who get it right, half-ish of them (a very small n) learned it in a CfA class on misconceptions in education, rather than in their "successful" first-tier undergraduate and graduate astronomy education.
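
(For the curious, a minimal sketch of the underlying physics - my own illustration, not part of the CfA question: evaluate Planck's law at a few visible wavelengths, for a roughly solar 5800 K blackbody, and notice how flat the spectrum is.)

    import math

    # Physical constants, and an approximate solar photospheric temperature.
    h = 6.626e-34   # Planck constant, J s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann constant, J/K
    T = 5800.0      # Sun's photosphere, roughly, in K

    def planck(lam):
        """Planck spectral radiance per unit wavelength, W / (m^2 sr m)."""
        return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

    for name, lam in [("blue", 450e-9), ("green", 550e-9), ("red", 650e-9)]:
        print(f"{name:5s} {planck(lam) / planck(550e-9):.2f}")

    # Prints roughly 0.99, 1.00, 0.88 - nearly flat across the visible band.
    # That flatness is why the Sun, seen from above the atmosphere, is
    # essentially white, not the yellow of the crayon-box convention.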

This case has been known to be broken for decades. The incentives and communication within the field have simply been inadequate to fix it. Despite astronomy education's focus on terminal introductory courses. If you check a well-thought-of text, written by astronomy professors and revised over years, it will likely get this wrong, or fudge it. Same with outreach content from NASA and ESA. A randomly selected professor of astronomy, with a focus on education, regrettably cannot be counted on to have the domain expertise needed to do a crayon sketch of the Sun.

Learning engineers might fix this. For example, by keeping at hand the various lists of things often done wrong, and simply not doing those. But what does this example say about the cost of creating content that is more correct? For I know of no reason to believe those lists are comprehensive. So what if this example wasn't on them?

Khan of Khan Academy once described his development process this way. He gets a box of textbooks from Amazon, distills them, and creates a video. Learning engineers might bring greater familiarity with the textbooks, and create more effective content. But that doesn't get you to a correct Sun color.

It seems discussions of learning engineers sometimes assume science domain expertise is widely distributed and easily accessible. Learning engineers can focus on learning, as the science is well in hand. And yet, even for something as basic as "what color is the Sun?", we needed something like 10x redundant checking of post-docs and textbooks to even detect that there's a well-known misconception. So I suggest an alternate model.

If correctness is needed, then learning engineers will have to fact-check even the most basic of concepts, directly with researchers whose research focus is that specific topic. They must assume their own understanding, and that of other random members of the field, lacks robustness and is compromised by misconceptions. Dealing with interdisciplinary questions will be especially challenging. Learning engineers will require a great deal of support from the science research community. And they will need to focus at least as much on the science as on the learning.

Second story - a shorter one. But is correctness needed?

Suppose you are writing a children's picture book about atoms. You've heard that a challenge in later teaching stoichiometry is students still not realizing that atoms are real physical objects. What graphics can you use for an atomic nucleus?

Can you find a naked-eye photograph of a nucleus? Many texts say "too small to see". An MIT physics professor, with a Nobel prize, may tell you nuclei only glow in X- and gamma rays. It takes talking with a physicist whose research focus is nuclear deformation to hear: yes, of course, a few can be made to glow visibly. And there's a nice photograph somewhere of a vacuum vessel window, with a bright green dot - a single nucleus, trapped and stripped, bombarded and fluorescing. But even then, good luck finding the photo. As an alternative, how about using cartoons of nuclear shapes, generated from QM simulations, as something a bit less bogus than the usual ball of red and blue marbles? Well, how good are you at wrestling with old Fortran code?

OK, that could be costly. But who cares? Maybe balls of red and blue marbles are good enough. I'll argue both sides. I believe it's an open question.

One side: the correctness and insightfulness of descriptive science education content simply does not matter. Misconceptions do not matter. Everything hard about science education - communication, numeracy, conservation laws, system decomposition, and so on - can all be taught in a fantasy land. Perhaps Minecraft. Indeed, these might best be taught in such a world. A world which can be optimized to destabilize naive misconceptions. A world which can replace our current cartoonish description of the physical world. Current assessment and content have so very little connection with expert understanding, not because they're badly dysfunctional, but because they're a synthetic setting for teaching skills. And they might work better if they were even less concerned with reflecting the real world. And do less harm.

The other side: perhaps correctness and insightfulness are critically important for descriptive science education content. Misconceptions are extremely toxic. Even at low doses, misconceptions poison the estimation, and the facile shifting of perspective, which are required for developing expert mastery. Fine-grained individual formative assessment of conceptual understanding and misconceptions is simply a prerequisite for transformative improvement of science education. Current content is simply far too slapdash.

So I don't know if greater insight and correctness are needed. But if they are, it's going to cost.

Third story. An intractable cost?

Consider an MIT professor writing a children's picture book on photosynthesis. They ask, what determines how much phytoplankton exists? They think it an important point to clarify. Now, this is their subfield. They have graduate student minion support. It's one line of a children's picture book. And yet it costs days of effort.

Perhaps they were wrong to believe it important, but for discussion, let's assume not.

So if you are a learning engineer, writing a children's picture book on photosynthesis, what should you do? Should you say, here's a concept important for understanding and transferable knowledge, but researching it is too hard, so let's punt? Should you say, excuse me, professor (with graduate students trailing like ducklings, struggling for a moment of attention), excuse me, could your research group burn a few days on this question of mine?

Which seems implausible. Something new would be needed to extract such effort from the science research community.

And yes, if you're wondering, the photosynthesis book got the color of the Sun wrong.

Bonus story. Last week (from the time of the first draft of this note) I was lucky to be a fly on the wall for a conversation between two whizzy cell biology professors, discussing what exactly we do and don't know about an aspect of cell division (the physics of anaphase). And sharing cautions to bear in mind when reading the papers from particular labs. And mentioning the misleadingness of an educationally common concept (the mitotic spindle).

Going to first-tier research talks in a variety of fields, I frequently think: that video, that graphic, that concept, that story... that should be part of every introduction to this topic, whenever it's taught. Down to primary school even. And it won't be any year soon. And just as the primary literature is only a partial substitute for these talks, the talks are only a partial substitute for these rare conversations.

How do you systematize the gathering of that?

Wrap-up

So where does that leave us? If our goal is a transformational improvement of educational content, it seems likely to require a not-small investment in new incentives, culture, and organization within the science research community itself.

Which needs to start being appreciated and discussed. And for now, perhaps we can at least be more thoughtful in our estimation of, and utilization of, available expertise.

Thanks to Sanjoy Mahajan for conversation, and the question 'But do you really need [that degree of correctness and insight]?'.

Page history

2016-12
Fixed typo, and tweaked title.
2016-11
Tweaked and posted here
2015-05
Drafted response
2015-04
MIT report published