Before reading the abstract, you might want to try it yourself:
"What color is the Sun?" - your answer?
A hypothetical 5-year-old asks "I'm told the Sun is a big hot ball! Awesome! What color is the ball?". Anecdotally, most astronomy graduate students will get this wrong. And some professors. So at least some science education misconceptions - basic, simple, significant, and easily falsifiable - are not restricted to introductory classes. They persist into the profession, where they compromise textbooks, despite those books' many collaborators. And they survive for decades, despite awareness efforts. Because the problem is largely ignored, it's an open question how typical it is, and how much educational content might be improved by addressing it. But for now, for correct crayon choice, ask a spectroscopist.
The last decade of science education research surprised the education community. Student understanding turned out to be poorly integrated, non-transferable, and riddled with misconceptions. But an assumption persists: that student understanding eventually gets fixed. That over the course of undergraduate and graduate work, ramshackle novice understanding becomes a qualitatively different expert understanding. At least within the field studied.
Here is one misconception that persists. I suggest it illustrates a larger problem, with implications for curriculum and content creation. And that we need to reassess who has sufficient expertise to create accessible and insightful content.
A 5-year-old comes up to you and asks "I'm told the Sun is a big hot ball! Awesome! What color is the ball?"
Take a moment, and say an answer.
Anecdotally, most astronomy graduate students will get this wrong. And at least some astronomy faculty. For correct crayon choice, it appears the 5-year-old should ask a spectroscopist.
Why? In textbooks, the question isn't a focus, and its treatment is usually ambiguous. I'll describe the underlying problem later. The issue is known, with years of folks pointing out confused students and colleagues. But awareness remains low, and little changes.
The intriguing question is how much better content might become if more and better experts were used. If a spectroscopist, rather than confused faculty, were asked "What color is the Sun?" If K picture books, and graduate astronomy texts, both got the crayon color right.
'A hypothetical 5-year-old asks "I'm told the Sun is a big hot ball. Awesome! What color is the ball?"' I've asked this, conversationally, for several years. Small n, some tens, mostly MIT and Harvard graduate students.
I see five common answers, plus a few outliers.
The answer I expect depends on the person.
Yellow is the most common answer. A follow-up question of "what color is sunlight?" almost always results in recognition that something is wrong. White is an infrequent answer in all groups, except for astronomy post-docs, for whom I can't say.
White is unambiguously the correct answer.
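The claim can be sketched numerically. Here is a minimal check (my illustration, not part of the original argument), under the assumption that the Sun radiates roughly as a ~5800 K blackbody: Planck radiance across the visible band is broad and nearly flat, so no single hue dominates.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T), from Planck's law."""
    x = H * C / (wavelength_m * K * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

T_SUN = 5800  # approximate solar effective temperature, K

# Radiance at representative blue, green, and red wavelengths
bands = {"blue 450 nm": 450e-9, "green 550 nm": 550e-9, "red 650 nm": 650e-9}
radiance = {name: planck_radiance(lam, T_SUN) for name, lam in bands.items()}
peak = max(radiance.values())

for name, b in radiance.items():
    print(f"{name}: {b / peak:.2f} of peak")
# All three bands come out within roughly 15% of each other,
# i.e. the light is balanced across the visible spectrum: white.
```

(Turning this spectrum into an exact CIE chromaticity would need color matching functions; the point here is just the flatness of the visible-band radiance.)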
So what is going on? "The Sun ball is white" is a concept accessible to 5-year-olds. So why do astronomy graduate students not know what color the Sun is? Why do physical-sciences graduate students not even know what color is? What implications might these have for science education content creation?
This failure mode is at least arguably atypical. But its existence has implications.
A misconception can... persist into graduate school. While this is a single example, there is literature on ramshackle understanding persisting into professions. At least some of the perception that "misconceptions resolve" appears similar to the familiar "our teaching simply must be working" fallacy.
A misconception can... be an artifact of the field, not just of pedagogy. With a feedback loop between misconception and poor content, stable for decades. In the current case, the field is unable to bring its collective understanding to bear on creating content. Diffusion of knowledge within the field, and textbooks' "hundreds of collaborators", are not sufficient here. Even in this field, with its singular emphasis on terminal introductory courses.
Creating content requires better experts. And more of them. Not merely random faculty, whose expertise outside their specific research area unsurprisingly looks much like a graduate student's. For example: an MIT microbiology professor writes a nifty children's picture book on photosynthesis. With such concern for correctness that it has an errata page... which doesn't (yet) mention the book's yellow Sun(s). This shouldn't surprise you now - an astronomy consult was needed for correct crayon choice.
Understanding can be poorly integrated and exercised. "Sunlight is white" and "the Sun is yellow" don't make sense together. Yet many astronomy graduate students believe both. This undetected conflict among conceptions demonstrates the existence of poorly integrated knowledge. But students also instantly recognize the two as inconsistent, when both are mentioned in these conversations. Suggesting a long-term failure to exercise knowledge - that in all their years of study, these two concepts haven't collided before.
Content could be better. Much. Consider integrated and transferable knowledge. Well shaped and chosen building blocks, exercised to knock off misconceptions, fit and fused together as foundation and scaffolding. If you believe "the Sun is yellow", that misshapen block interferes with others. Many related concepts don't make sense: blackbodies; star color; blue sky; how color works; and others. Making them in turn harder to learn. And discouraging exercise over this broken pavement.
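One concrete way a misshapen color block interferes (a sketch of my own, not from the original text, using the common "color is the peak wavelength" shortcut): Wien's displacement law puts the emission peak of a ~5800 K blackbody near 500 nm, in the green. If color really were the dominant frequency, the Sun would be green; no star looks green. It's the broad spectrum around the peak that sets the color.

```python
# Wien's displacement law: lambda_peak = b / T
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(temp_k):
    """Wavelength of peak blackbody emission, in nanometers."""
    return WIEN_B / temp_k * 1e9

# Approximate solar effective temperature: ~5800 K
print(f"{peak_wavelength_nm(5800):.0f} nm")  # near 500 nm, i.e. green
```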
Get it right, get it insightful, get it understood without misconceptions. Even if failures to achieve these were rare, their impact could still be large. And instead of rare, they are pervasive. Suggesting that improvements in content over the coming decades will not be a matter of refinement and decoration. But something much more profound.
How common is astronomy graduate students not knowing what color the Sun is? And astronomy professors? We don't know. These anecdotes don't stretch that far. I look forward to someone testing it.
What is clear, is that at least some graduate students are confused. In ways that surprise people. And which suggest content needs to be better. And can be.
The core question is "What color is the Sun?". The rest is there to constrain people's answers.
The "ball" helps clarify intent - 'object, viewed from space', not 'eyeball observation made through atmosphere'. It works well, but not always. Some "red or orange" responses remain. And it creates guarded "well, under <some eyeball condition>, it looks <color>" responses. These can be fixed with a reply like "yes, and viewed through rock it's black - but what is it itself?". Cone saturation almost never comes up.
The "5-year-old" is a better way of saying "one-word common color name". Getting such names from people is hard. As is creating recognition that not providing one constitutes failing to answer the question. Without the 5-year-old, puzzled people start riffing through everything they vaguely remember that might be related, and finally end with a "did I get it?". Some responses of "5700 Kelvin" remain, but can be fixed by replying "5-year-old!", or "the 5-year-old looks puzzled". The 5-year-old also makes the conversation more upbeat - 'helping a child', instead of 'suffering a quiz'. And it reduces the frequency of "well, does it really matter what color the Sun is?".
The "What color is sunlight?" follow-up is an easy way to demonstrate that some answer was wrong. The "sunlight is white" meme is widespread, and its recitation so reflexive, that the response almost always precedes the "oh, then that doesn't make sense, does it?" moment. It also provides a nice illustration of knowledge being poorly integrated, for later discussion.
Textbooks, sigh. How about just one:
Ah, deadlines. So this is online, but still needs work. Hopefully before any viral attention.
Abstract: clarify that the interesting bit is the insight provided by the failure, more than the failure itself.
Asking the question: Sketch the conversation protocol.
Getting it wrong: Discuss why white is the right answer? Sketch what little I know of the yellow failure's history and mechanisms? Mention that I welcome feedback and comments. Link to past awareness efforts & people? Describe the rainbow 'color is monochromatic' failure. Do I have anything useful to say about people's contortions when trying to answer? Mention composite misconceptions (eg 'greenish yellow because of sodium')? Time to upgrade post-docs from "seen" yellow to "expect" it? Sigh.
Implications: Focus on estimating 'how good could content become', and its implications for 'how we go about improving'. Link Linder thesis? Anything more recent, and surveyish? Discuss astro vs other fields. Culture of "colorful exaggeration"?
Other. Find the Sun touring museum exhibit that's without a single true-color image. Add an 'opportunity: survey for real data' page. Ask-question page: +pretty; user test. Get a better backstory from NSO. Try contacting the Mall SS person again.
Do less harm. Add a corrective for possible wingnut "see, scientists are confused" misinterpretation - my core point being the opposite: that outside of one's direct research area, if any, understanding gets dysfunctionally ramshackle.
Opportunities. Explain why the Sun is white. Here, it's a focus distraction. But maybe I should link to less obvious resources like the NATO/Spanish long-term sky color study. Another: sketch of misconceptions common in textbooks (color as dominant frequency; hot familiar object glow is blackbody;...). Another: just for happy warm fuzzies, it'd be nice to find a single intro astronomy textbook that gets color right. There's a genre of "silly things in textbooks", but not one of "we found a text that did a great job on concept X!". Another: find a list of most commonly used texts, and report how the top few handle Sun color. Another: write or sketch an OER "'Sun is white' supplement to your chapter on the Sun", illustrating what could be done. Another: explain/explore issues in colorizing Sun images to indicate instrument frequency.