## Saturday, August 29, 2015

### Threshold Concepts

I first encountered the notion of threshold concepts in David Didau's marvelous book. He provides links there to this article, by Jan Meyer and Ray Land—who first wrote about threshold concepts—and this one, by Glynis Cousin, which provides a briefer introduction to the idea. Meyer and Land describe threshold concepts this way:

A threshold concept can be considered as akin to a portal, opening up a new and previously inaccessible way of thinking about something. It represents a transformed way of understanding, or interpreting, or viewing something without which the learner cannot progress. As a consequence of comprehending a threshold concept there may thus be a transformed internal view of subject matter, subject landscape, or even world view. This transformation may be sudden or it may be protracted over a considerable period of time, with the transition to understanding proving troublesome.

Reading this, I recall my freshman year in college, when I took a lot of 101 courses: Anthropology 101, Biology 101 (I think it was 150 actually), even Theology 101 (though I'm quite sure it wasn't called that). What I enjoyed about these courses—though even then I was certain that I was not going to be a biologist, anthropologist, or theologian—was that they delivered these "previously inaccessible way[s] of thinking about something."

I highly recommend the more detailed exposition given at the links above.

#### The Cheese Stands Alone

When I think about the characteristics of these concepts, as outlined by Meyer and Land—they are transformative, often irreversible, integrative, bounded, and likely to be troublesome—it becomes clear that threshold concepts are also, in notable ways, isolated. The authors use complex numbers as one example:

A complex number consists of a real part ($$\mathtt{x}$$), and a purely imaginary part ($$\mathtt{iy}$$). The idea of the imaginary part in this case is, in fact, absurd to many people and beyond their intellectual grasp as an abstract entity. But although complex numbers are apparently absurd intellectual artifacts they are the gateway to the conceptualization and solution of problems in the pure and applied sciences that could not otherwise be considered.
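To see that "gateway" role in action (a quick illustration of my own, not the authors'): once the imaginary unit is admitted, the quadratic formula no longer has unsolvable cases. Python's standard cmath module, for instance, takes square roots of negative numbers in stride:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of ax^2 + bx + c = 0, real or not."""
    # cmath.sqrt returns a complex result even when the discriminant
    # is negative, so the same formula covers every case.
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 + 1 = 0 has no real solutions, but two complex ones:
print(quadratic_roots(1, 0, 1))   # (1j, -1j)
```

Nothing about the formula changed; admitting the "absurd" imaginary part is what turns a dead end into a computation.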

Notice the phrase "beyond . . . intellectual grasp." The authors themselves can't help but talk about a threshold concept as a node that lies cognitively above the learner, such that one can reach up from "below" to grasp it intellectually (or not). But if we accept the authors' characteristics of threshold concepts, it seems we should reject this implicit picture. Rather, threshold concepts stand more or less alone and disconnected (from below, at least). A threshold concept cannot be transformative and irreversibly alter our perspective while also simply being the last domino in a chain of reasoning. We can and do make sense of threshold concepts as continuations or extensions of our thinking, but we do so by employing a few tricks and biases.

#### We Can Make Up Stories About Anything

In his book You Are Now Less Dumb, featuring captivating descriptions of 17 human fallacies and biases (and ways to try to overcome them), David McRaney alludes to one way in which we are practically hard-wired to misunderstand stand-alone concepts—the narrative bias:

When given the option, you prefer to give and receive information in narrative format. You prefer tales with the structure you’ve come to understand as the backbone of good storytelling. Three to five acts, an opening with the main character forced to face adversity, a turning point where that character chooses to embark on an adventure in an unfamiliar world, and a journey in which the character grows as a person and eventually wins against great odds thanks to that growth.

Thus, we need threshold concepts like complex numbers to be the middle part of some storyline. So we simply invent an educational universe in which we believe it is always possible to "motivate" this middle by writing some interesting beginning. In reality, this might be at best intellectually dishonest and at worst delusional. Given their characteristics, threshold concepts may only truly make sense as the beginnings of stories. Yet our very own narrative biases can make these concepts troublesome, because we search for ways in which they follow from what we know when those ways don't in fact exist. To master a threshold concept, one may need to take a leap across a ravine, not a stroll over a bridge. Louis C.K. put it well:

My mother was a math teacher, and she taught me that moment where you go, "I don't know what this is!" when you panic, that means you're about to figure it out. That means you've let go of what you know, and you're about to grab onto a new thing that you didn't know yet. So, I'm there [for my kids] in those moments.

Another blockade that prevents us from embracing the very idea of threshold concepts is the implicit assumption that learning is continuous and linear—and that it always moves forward. You can see that this is related to the narrative bias discussed above. I'll simply quote Didau on this, as he deals with it at length in his book:

Progress is, if anything, halting, frustrating and surprising. Learning is better seen as integrative, transformative and reconstitutive—the linear metaphor in terms of movement from A to B is unhelpful. The learner doesn’t go anywhere, but develops a different relationship with what they know. Progress is just a metaphor. It doesn’t really describe objective reality; it provides a comforting fiction to conceal the absurdity of our lives.

Dealing Honestly with Threshold Concepts in Mathematics Education

In education, I think we could stand to be more comfortable recognizing concepts that simply have more outgoing arrows of influence than incoming arrows. These kinds of concepts do more powerful work in the world shedding light on other ideas and problems. Thus, it may be a far better use of our time to treat them as the first acts of our stories—and a waste of time to treat them as objects of investigation in their own right. Lighthouses, after all, make a lot more sense when you look at what their light is shining on rather than directly at the light.

It can seem disquieting, to say the least, to think that it might be better to approach certain concepts by simply "living inside them" for a while—seeing out from the framework they provide rather than trying to "understand" them directly. But the point I'd like to press is that this discomfort may be a result of our bias to see 'understanding' from just one perspective—as a causal chain or story which has as one of its endings the concept we are interested in. Some understanding may not work that way. More importantly, there is no reason—other than bias or ideological blindness—to believe that understanding has to work that way.

Another reason threshold concepts may be so troublesome is that perhaps we misunderstand historical "progress" within a field of study in precisely the same way we misunderstand "progress" for students: as a linear, continuous, always-forward movement to higher planes. How, you might ask, can I expect students to simply come to terms with ideas when humanity certainly did not do this? But your confidence in how humanity "progressed" in any regard is likely informed by a narrative bias writ large. You simply wouldn't know if chance had a large role to play, because the historians involved in collating the story, along with all the major characters in said story, are biased against seeing a large role for chance. Marinating in ideas over time, making chaotic and insightful leaps here and there—such a description may be closer to the truth about human discovery than the tidy tales we are used to hearing.

## Tuesday, August 25, 2015

### We Can Do Persistence

Seven hundred twenty-three Finnish 13-year-olds from 17 different metropolitan public schools were each given, at the beginning of the school year, two self-report questionnaires designed to measure "students' beliefs about school mathematics, about themselves as mathematics learners, their affective responses towards mathematics and their behavioural patterns in math classes."

The items on the questionnaires, rated from "strongly disagree" (-5) to "strongly agree" (+5), were used to create nine separate affective scales:

1. Self-efficacy
2. Low self-esteem
3. Enjoyment of maths
4. Liking of maths
5. Fear of mathematics
6. Test anxiety
7. Integration
8. Persistence
9. Preference for challenge

In addition, students were given a 26-problem math test "about numbers and different calculations, various spatial and word problems, and examination of patterns." This math test provided a tenth scale (a "performance" scale) on which students were measured. The results below show intercorrelations among the nine affective scales and the performance scale. (Researchers used a benchmark of 0.15 for significance.)

|          | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   |
|----------|------|------|------|------|------|------|------|------|
| 1. EFF   | .41  | .44  | –.45 | –.26 | .16  | .39  | .45  | .28  |
| 2. LEST  | –.37 | –.39 | .57  | .49  | –.02 | –.32 | –.43 | –.42 |
| 3. ENJ   |      | .78  | –.65 | –.22 | .25  | .51  | .61  | .18  |
| 4. LIK   |      |      | –.65 | –.27 | .23  | .49  | .71  | .26  |
| 5. FEAR  |      |      |      | .47  | –.11 | –.46 | –.64 | –.31 |
| 6. TANX  |      |      |      |      | .15  | –.08 | –.31 | –.27 |
| 7. INT   |      |      |      |      |      | .51  | .13  | .01  |
| 8. PERS  |      |      |      |      |      |      | .44  | .20  |
| 9. CHALL |      |      |      |      |      |      |      | .30  |

#### Few Strong Correlations for Affect and Initial Learning

The first row of the table shows that students' self-efficacy correlated positively with their self-reported enjoyment of math (Column 3, 0.41) and with their test scores (Column 10, 0.28). In contrast, and as expected, self-efficacy correlated negatively with subjects' fear of mathematics (Column 5, -0.45) and their test anxiety (Column 6, -0.26). Once you get your head around the table, you'll find that there's nothing really surprising there. Nearly all of the "positive" measures are correlated negatively with the "negative" measures, and vice versa.

But take a look at the correlation data for what is called "integration": the values in Column 7 and in Row 7. The authors describe integration as the "tendency to integrate new math knowledge with previous knowledge and experience." Out of the nine data points for integration, five show no significant correlation, using the researchers' own benchmark for statistical significance (0.15). And the correlation between integration and self-efficacy is just barely significant (0.16). The only other "insignificant" correlation shown is between test anxiety and persistence (–.08).

So integration contains five sixths of the insignificant results in the study. Indeed, if one were to use the absolute values of the correlations in the table to find a mean correlation for each scale, integration would be at the very bottom (0.174), while fear of mathematics would be at the top (0.478 . . .). And integration was the only scale which did not interact in a statistically significant way with test scores.
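Those means are easy to check. The sketch below is my own reconstruction, not the study's code: it lists the pairwise correlations transcribed from the table above (the EFF–LEST cell is omitted because the table doesn't show it) and averages their absolute values for any given scale:

```python
# Pairwise correlations transcribed from the intercorrelation table above.
# Abbreviations: EFF self-efficacy, LEST low self-esteem, ENJ enjoyment,
# LIK liking, FEAR fear, TANX test anxiety, INT integration,
# PERS persistence, CHALL preference for challenge, PERF test performance.
r = {
    ("EFF", "ENJ"): .41,   ("EFF", "LIK"): .44,   ("EFF", "FEAR"): -.45,
    ("EFF", "TANX"): -.26, ("EFF", "INT"): .16,   ("EFF", "PERS"): .39,
    ("EFF", "CHALL"): .45, ("EFF", "PERF"): .28,
    ("LEST", "ENJ"): -.37, ("LEST", "LIK"): -.39, ("LEST", "FEAR"): .57,
    ("LEST", "TANX"): .49, ("LEST", "INT"): -.02, ("LEST", "PERS"): -.32,
    ("LEST", "CHALL"): -.43, ("LEST", "PERF"): -.42,
    ("ENJ", "LIK"): .78,   ("ENJ", "FEAR"): -.65, ("ENJ", "TANX"): -.22,
    ("ENJ", "INT"): .25,   ("ENJ", "PERS"): .51,  ("ENJ", "CHALL"): .61,
    ("ENJ", "PERF"): .18,
    ("LIK", "FEAR"): -.65, ("LIK", "TANX"): -.27, ("LIK", "INT"): .23,
    ("LIK", "PERS"): .49,  ("LIK", "CHALL"): .71, ("LIK", "PERF"): .26,
    ("FEAR", "TANX"): .47, ("FEAR", "INT"): -.11, ("FEAR", "PERS"): -.46,
    ("FEAR", "CHALL"): -.64, ("FEAR", "PERF"): -.31,
    ("TANX", "INT"): .15,  ("TANX", "PERS"): -.08, ("TANX", "CHALL"): -.31,
    ("TANX", "PERF"): -.27,
    ("INT", "PERS"): .51,  ("INT", "CHALL"): .13, ("INT", "PERF"): .01,
    ("PERS", "CHALL"): .44, ("PERS", "PERF"): .20,
    ("CHALL", "PERF"): .30,
}

def mean_abs_r(scale):
    """Mean absolute correlation between `scale` and every other scale shown."""
    vals = [abs(v) for (a, b), v in r.items() if scale in (a, b)]
    return sum(vals) / len(vals)

print(round(mean_abs_r("INT"), 3))   # 0.174 -- bottom of the list
print(round(mean_abs_r("FEAR"), 3))  # 0.479 -- top of the list
```

Nine values enter each of those two averages, matching the nine data points per scale discussed above.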

Given that these data are correlational (and the raw data were taken only from self-reports), it is impossible to draw reliable conclusions about causality. And generalizing from 13-year-old Finnish students to all learners would be irresponsible. Yet it is interesting to note that the only factor that correlated moderately with integration in the study was persistence (0.51). Thus, if one could say anything, one might say that these results may provide yet another indication that integrating "new math knowledge with previous knowledge and experience" (some call this "learning") is not as interwoven with students' intrinsic personal and emotional qualities as we like to think—that it doesn't matter much whether they have low or high self-esteem, whether they fear mathematics, whether they have test anxiety, or whether they like to challenge themselves.

What seems to matter more is that they show up and keep trying. Luckily, of all the affective traits mentioned in the study, persistence is the one that we might be able to design learning environments for without the need to pretend that we have degrees in counseling psychology.

Malmivuori, M. (2006). Affect and self-regulation. Educational Studies in Mathematics, 63(2), 149–164. DOI: 10.1007/s10649-006-9022-8

## Sunday, August 16, 2015

### Book Review: Read It Over and Over

I'll admit that when I saw that a book called What If Everything You Knew About Education Was Wrong? was coming out, I was committed to buying it based on the title alone.

There's a bias that helps explain my impulse. It's called confirmation bias, which is, as the book explains, "the tendency to seek out only that which confirms what we already believe and to ignore that which contradicts these beliefs."

Now, this isn’t necessarily a deliberate or partisan avoidance of contrary evidence; it’s just a state of mind to which it’s almost impossible not to fall victim. Let’s imagine, just for a moment, that you think maths is boring. If you’re told that learning maths is pointless, because most people get by using the calculators on their phones, you’re likely to accept it without question. If, on the other hand, you’re shown a report detailing the need for maths in high status jobs and calling for compulsory maths education until the age of 18, you’re likely to find yourself questioning the quality of the jobs, the accuracy of the report’s findings and the author’s motives.

Thus, since I tend to believe that we've got very little figured out about education—that we have much more pruning of bad ideas and learning from our mistakes left to do in this field—I was compelled, in part by confirmation bias, to read this book, as I suspected that it would validate those beliefs. Naturally, I was not disappointed.

But the fact that confirmation bias was at work in my decision to read this book (and David Didau's blog, Learning Spy)—and is no doubt at work in others' decisions to not read him—does not make any of our respective beliefs wrong. Nor does it make them right. This is a point to which Didau returns often and in subtle ways throughout Part 1 and the rest of the book. The existence of our cognitive fallibilities tells us that we should nurture our and others' skepticism and doubt (and sarcasm!), both self-directed and 'other'-directed, recognizing that we, along with all of our fellow travelers, are riddled with truth-blocking biases (many of which, such as the availability bias, the halo effect, and the overconfidence bias, Didau outlines in Part 1):

Maybe it’s impossible for us always to head off poor decisions and flawed thinking; knowing is very different to doing. I’m just as prone to error as I ever was, but by learning about cognitive bias I’ve become much better at examining my thoughts, decisions and actions after the fact. At least by being aware of the biases to which we routinely fall prey, and our inherent need to justify our beliefs to ourselves, maybe we can stay open to the possibility that some of what we believe is likely to be wrong some of the time.

#### Learning Is Not Performance

Didau takes you into Part 2 and then Part 3 of his book with the hope that Part 1 has left you "feeling thoroughly tenderised." It is here where, to my reading, his main thesis is developed. So I'll be brief—and if not that, circumspect—in my commentary.

Having called out, in Part 1, all the ways we cannot rely solely on our own often-biased observations and thinking about education, the book's next two parts naturally draw heavily (but not laboriously) on the device humankind has invented to compensate for these weaknesses: the scientific method. These parts set what we talk about as learning against what scientific investigation says about it.

What are those differences? In particular, as Didau outlines here at his site (citing the work of Robert Coe), when we talk about "learning," what we are often really talking about is "performance," a proxy for learning. We behave, in our conversations and through our policies and pet ideologies, as though students are learning when they are:

• Busy, doing lots of (especially written) work.
• Engaged, interested, motivated.
• Getting feedback and explanations.
• Calm and under control.
. . . when in fact these operationalizations are only loosely tethered to what they are meant to describe. These proxies are what we talk about and debate about in public, rather than learning.

In contrast to these pedestrian notions of learning, the work of Coe, Nuthall, Sweller, and Bjork, among many others cited and discussed in the book, tells us that forgetting can be a powerful aspect of learning, that 'difficult' learning is often better than easy learning, and that motivation and engagement aren't all they're cracked up to be.

#### What If I'm Wrong?

I should close by listing one way in which What If Everything You Knew About Education Was Wrong? helped change my mind. It was on the subject of "desirable difficulties," a phrase that has come out of the work of Robert Bjork—work that, unfortunately, I did not know about until I came across Didau's writing (maybe messages like Bjork's should be the ones on the TED stage).

Prior to reading this work, I was inclined to resist the notion that learning had to be difficult. This was likely due, in part, to my own biases, but I also can't help but think that, to the extent that I ever encountered arguments drawn from this work in the past, they were so misunderstood or poorly defended by their proponents as to bear no relationship to Bjork's actual ideas.

At any rate, Bjork's work seems to make clear not that learning has to be difficult, but that some difficulties (discussed in the book) improve longer-term learning in many contexts. It is a responsible, serious, evidence-based perspective that eroded some of my practiced resistance. I think you'll be able to say the same about Didau's book.

## Sunday, July 12, 2015

### The Gricean Maxims

When we converse with one another, we implicitly obey a principle of cooperation, according to language philosopher Paul Grice's theory of conversational implicature.

This 'cooperative principle' has four maxims, which, although stated as commands, are intended as descriptions of specific rules that we follow—and expect others to follow—in conversation:

• quality: Be truthful.
• quantity: Don't say more or less than is required.
• relation: Be relevant.
• manner: Be clear and orderly.

I was drawn recently to these maxims (and to Grice's theory) because they rather closely resemble four principles of instructional explanation that I have been toying with off and on for over a decade now: precision, clarity, order, and cohesion.

In fact, there is a fairly snug one-to-one correspondence between our respective principles, a relationship which is encouraging to me precisely because it is coincidental. Here they are, in an order corresponding to the above:

• precision: Instruction should be accurate.
• cohesion: Group related ideas.
• clarity: Instruction should be understandable and present to its audience.
• order: Instruction should be sequenced appropriately.

Both sets of principles likely seem dumbfoundingly obvious, but that's the point. As principles (or maxims), they are footholds on the perimeters of complex ideas—in Grice's case, the implicit contexts that make up the study of pragmatics; in my case (insert obligatory note that I am not comparing myself with Paul Grice), the explicit "texts" that comprise the content of our teaching and learning.

#### The All-Consuming Clarity Principle

Frameworks like these can be more than just armchair abstractions; they are helpful scaffolds for thinking about the work we do. Understanding a topic up and down the curriculum, for example, can help us represent it more accurately in instruction. We can think about work in this area as related specifically to the precision principle and, in some sense, as separate from (though connected to) work in other areas, such as topic sequencing (order), explicitly building connections (cohesion), and motivation (clarity).

But principle frameworks can also lift us to some height above this work, where we can find new and useful perspectives. For instance, simply having these principles, plural, in front of us can help us see—I would like to persuade you to see—that "clarity," or in Grice's terminology, "relevance," is the only one we really talk about anymore, and that this is bizarre given that it's just one aspect of education.

The work of negotiating the accuracy, sequencing, and connectedness of instruction drawn from our shared knowledge has been largely outsourced to publishers and technology startups and Federal agencies, and goes mostly unquestioned by the "delivery agents" in the system, whose role is one of a go-between, tasked with trying to sell a "product" in the classroom to student "customers."

#### Let's Talk About Something Besides How to "Sell" Content

I would like to devote more of my time to thinking about explanation, with reference to the principles and possibly to Grice's maxims (in addition to other related theoretical frameworks). Perhaps a new notebook, similar to this one, is the best route. Maybe one of these days I'll even write that book that has been writing itself in my brain for the past 10 years or so.

I'm not quite sure how to begin and what structure I should use to help maintain momentum. But I'll noodle on it.

## Friday, June 26, 2015

### The Curse of the Novice

If you find yourself in proximity to discussions about education, you likely know something about the "curse of the expert." In one often-cited study, for example:

One group of participants "tapped" a well-known song on a table while another listened and tried to identify the song. Some "tappers" described a rich sensory experience in their minds as they tapped out the melody. Tappers on average estimated that 50% of listeners would identify the specific tune. In reality, only 2.5% of listeners could identify the song.

The curse made an appearance in that experiment when the tappers' intimate knowledge of the songs caused them to greatly overestimate how clear their tapped demonstrations were from the perspective of novices. And the basic message for education is that, for those in the know, it is all too easy to overestimate how well students are making sense of their instruction.

These findings don't suggest (I'd like to say "obviously") that expertise itself is something to be avoided or that one must take a vow of instructional silence upon obtaining it. Instead, the takeaway seems to be simply that all educators would do well to have some evidence of instructional clarity beyond their own experience and senses—because both of those things can fool you.

#### The Magic and Mystery of Expertise

Novices, on the other hand, can be cursed in an almost opposite way, one we rarely talk about. Rather than seeing the expert's performance or knowledge as much more obvious than it is—as the expert does—novices can interpret the behavior of an expert as being much more magical or mysterious than it could possibly be.

I'm stricken by this curse every 4 years as I watch the Olympics and wonder how those people possibly do what they do, or when I listen to anyone who moves through mathematics in anything but a plodding, hesitant way like I do, or even when I see things like this . . .

. . . and wonder how people think of this stuff.

Even if I don't know exactly how these experts do what they do, sufficient experience with the world should tell me that their processes, however inconceivable they seem, are describable, technical, and replicable. But, rarely satisfied with the boring truth or with simply not knowing, the curse of the novice compels us to project onto the expert hypotheses about their performance or knowledge—and how they came to acquire it—that are not realistic, that are hand-wavily vague (practice! conceptual understanding!), or that tend to confirm unjustified biases about learning and excellence.

And these hypotheses, in turn, inform how we frame our fundamental mission as educators.

#### Reverse the Curse

Collectively, educators are both experts and novices in their fields. It makes sense to be watchful for ways in which our knowledge undermines our connection to students, but also mindful of ways in which we romanticize or mythologize those attributes of expertise we want our students to eventually have.

Of course, I'm just a young grasshopper myself about all of this. But I would offer that being less than explicit about the skills and knowledge required to reach expertise strikes me as a perfect example of falling for the curse of the novice—we allow ourselves to be too much in awe of the expert, so all we have to teach is the awe. We deliberately muddy and mystify even the most straightforward of concepts, smugly satisfied that we have not only avoided the curse of the expert but also allowed students to glimpse for a moment the true wonderful magical spirit-world nature of, say, factoring a quadratic expression.

Experts are not better than us. They're just experts. Unless we have good reason to do otherwise, we shouldn't make those parts of expertise that are not a mystery to us into a mystery for learners. And those parts that are a mystery to us . . . we should work to solve those mysteries.


## Sunday, June 21, 2015

### 10,000

Early this morning, we hit a bit of a milestone at the MathEdK12 Community, which now has more than 10,000 members. That's pretty cool.

I said there that I would write a "gushing thank-you" to celebrate the moment, so let this be it. Thank you to every member and especially to those active regulars who have kept the place buzzing over time (I'm sure I've left off several): +Paul Hartzer, +Mike Aben, +Norman Simon Rodriguez, +Michelle Williams, +David Hallowell, +Lee MacArthur, +Hertz Furniture, +Aner Ben-Artzi, +LThMathematics, +Benjamin Leis, +Addition, +Amadeo Artacho, +Raymond Johnson, +Susan Russo, +John Philip Jones, +Colin Hegarty, +Juan Camarena, +Thom H Gibson, +hemang mehta, +Kyle Pearce, +Caryn Trautz, +Daniel Kaufmann, +John Fitsioris, and our moderator, +John Redden. . .

It's an international community with a lot of diversity of backgrounds, viewpoints, interests, and occupations, which is, again, pretty freakin' cool—and, really, vital for any community to be a strong community, especially in education.

#### What We Are, So Far

Our community is "a forum for all stakeholders--teachers, students, mathematicians, researchers, and laypersons. The only requirement is that you have an interest in mathematics education." Joining is free and, at this point, completely anonymous. Google+ notified me for about the first thousand whenever someone joined. Now I just see the number tick up. I have no idea who new members are.

To join, you can click on the link above (you need a free Google+ account) and then click on "Join Community." Nothing happens after that if you don't want it to. You can simply visit and read whenever you want, if that's all you want. Welcome.

The primary, baseline activity in the community is sharing information—articles, blog posts, research papers, lesson plans, apps, cartoons, job openings—that's it. Post and go. Discuss. We aggregate quality writing related to math ed on the Web.

And we need more people sharing what they read, write, or create. To do that, you can use the sharing box at the community's home page, or you can share to the community from just about anywhere. Read here for more details.

Thanks again to our members, and we look forward to seeing more folks sharing more stuff over there in the future.

## Sunday, June 14, 2015

### The Pre-Testing Effect (and Posterizing)

In their book Make It Stick, authors Peter Brown, Henry Roediger III, and Mark McDaniel have this to say as an introduction to the study we'll look at in this post (along with a few other studies about the benefits of generating solutions [emphasis mine]):

Wrestling with the question, you rack your brain for something that might give you an idea. You may get curious, even stumped or frustrated and acutely aware of the hole in your knowledge that needs filling. When you're then shown the solution, a light goes on. Unsuccessful attempts to solve a problem encourage deep processing of the answer when it is later supplied, creating fertile ground for its encoding, in a way that simply reading the answer cannot. It's better to solve a problem than to memorize a solution. It's better to attempt a solution and supply the incorrect answer than not to make the attempt.

Those last bits in bold are what the current study is about. (And, as you'll hopefully see, they are functionally untrue without a significant amount of qualification.) Following up on results stretching back to 1917 and including those demonstrated by Bjork and Kintsch, which all point to the benefits of testing in improving retention, researchers in the study summarized here looked at the question of whether and to what extent those benefits were present even when participants did poorly during testing.

#### The Basic Setup(s)

This study actually consisted of five separate experiments, each with one basic plan: a group of about 60 undergraduates was given a text to read (one they had never seen before) after being divided into two groups. In the "test and study" group, participants were given, before their reading, a 5-question pretest containing fill-in-the-blank or short-response items covering material addressed directly in the text. In the second, "extended study" group, participants were simply given extended study time to read. Finally, after reading, both groups were given a 10-question posttest, which, for the "test and study" group, contained 5 of the items they had seen in the pretest and 5 they had not. Participants in the "extended study" condition, of course, had not previously seen any of the 10 questions on the posttest.

#### Result(s)

The results for each of the experiments were essentially the same, so I'll write just a few notes about the first experiment. I encourage you to read the full paper at the link above.

In Experiment 1, recall was significantly better for those items that were pre-tested (and answered incorrectly) than for items that were only studied.

This is noteworthy, especially for those critics who often find themselves arguing against all manner of problematized activity in the classroom.

We should also note, however, that there was no significant benefit to recall for items that were not pre-tested. In other words, what we're looking at is more likely a specific effect for specific items, not a general effect of "challenge" or "struggle" before the reading. This is consistent with results across all 5 experiments. Indeed, in Experiment 5 one sees a significant positive effect for testing and failing even over trying to memorize the tested questions; yet here again the benefits were for the specific items pre-tested rather than for the items in general.

#### Now Watch How Fast I Can Turn All of This Work Into Crap

"Posterizing" a result like this one—boiling it down to a slogan fit for a motivational graphic—takes an already somewhat sensationalized statement (just because it's in a book doesn't mean the truth hasn't been sanded down a little) and makes it functionally false by generalizing it and removing the context.

Even if we ignore the fact that the samples in this study were all composed of undergraduates (who, as a group, are not as diverse as the general population) reading texts, the best we could do—and still be intellectually honest—would be something like "if you want to better retain the information you study, being tested on the specific items you want to retain before studying is likely better for you than simply studying longer."

Of course, that's hard to fit on a poster.

Richland, L., Kornell, N., & Kao, L. (2009). The pretesting effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology: Applied, 15(3), 243–257. DOI: 10.1037/a0016496

## Saturday, June 6, 2015

### Whited Sepulchres

A central—and remarkable—argument in Steven Pinker's recent work, The Better Angels of Our Nature, is that a decline in collective moralization may be a significant cause of the decline over time in human violence. In other words, less morality (or rather, "morality"), less violence:

The world has far too much morality, at least in the sense of activity of people's moral instincts . . . . the biggest categories of motives for homicide are moralistic. In the eyes of the perpetrator, of the murderer, it's capital punishment—killing someone who deserves to die, whether it's a spouse who's unfaithful or someone who dissed him in an argument over a parking space or cheated him in a deal. That's why people kill each other. . . .

The human moral sense does not consist of a desire to maximize well-being, to prevent people from harm. Rather, it is a hodgepodge of motives that include deference to a legitimate authority, conformity to social and community norms, and the safeguarding of a pure divine essence against contamination and defilement.

This idea helps me put some language around my discomfort with a lot of education discourse outside the policy and research levels. We moralize far too much about teaching and learning there. Or, rather, we moralize badly too often. Our "ought"s are not centered in the empirical, but in the ideal. Consider:

"Children, go get dressed for dinner. A family should look their best at mealtimes together" is moralizing. "Children, go get dressed for dinner. I have an important client coming over, and I want to impress her" is not. The reasoning attached to the second request is grounded in a real-world consequence: the speaker wants to impress a client, so he asks the children to get dressed for dinner. By contrast, the reasoning behind the first request is rooted in "conformity to social norms." The speaker wants the children to get dressed for dinner because doing so will bring them (and him) closer to an ideal he has in his head. Similarly, "Doctors are gentlemen, and gentlemen's hands are clean" is moralizing—an idealistic "ought" (in this case, an "ought not") untethered to reality.

In education, we hear that students shouldn't just sit in rows and listen to a teacher; that they should persevere and fail; that we should be less helpful; that students ought to create on their own, collaborate, and behave like real scientists and mathematicians do. To the extent that these are simply ideals for what students "ought" to be like, disconnected from evidence, they are moralizings: visions of a "pure divine essence"; pictures in our heads of self-reliant, creative, free, and mature students; pictures that are, however well-intentioned, divorced from reality. It seems to me that in many ways the reforms inspired by these moralizings simply succeed in making children pretend they are accomplished, so that the adults can feel good about themselves.

If, as the research Pinker references suggests, our moral instincts are not as well calibrated as we think they are for modern life, and the population of "ought"s in our community is not controlled by predatory "is"s delivered by scientific thinking, we should be, at the least, increasingly wary of educational moralizing rather than increasingly comfortable with it.


## Saturday, May 23, 2015

### A Meditation on Ratio

What do you think a ratio is? I emphasize "think" because I'd like you to be interested only in the fragmentary pictures conjured by the question—you want to be aware of what you think a ratio is, not what you think you know about ratios, which are two different thinks.

So, perhaps try now to answer the question, but don't censor your first "thinks" here. Pay close attention to what thoughts arise, but don't try to change them into anything else. And if very few thoughts arise, simply notice that too. Don't try to manufacture thoughts about this. The idea is to simply be aware of your initial impressions about a particular concept, not to judge what's coming.

Only after you have noticed what you have noticed about your thoughts about 'ratio' can you then set this bubble of incomplete thoughts, bits of pictures, and perhaps even some emotional reactions in front of you for criticism, editing, and analysis.

Good. So Now We Share Our Noticings.

I'd like to share with you one noticing of mine about the concept of ratio that has come out of something like this 'math meditation' described above, and it is my hope—and my sense—that you will be able to relate to it:

There is a 'twoness' about ratio that shouldn't be there.

That is, my impressions—my "first thinks"—about ratio are of two parts, two quantities, and I have to work a little to see that 'ratio' has a meaning and identity as a single object. Incidentally, by contrast, 'sum' and 'product' each have immediate meaning to me as individual things. One is the result of addition and the other is the result of multiplication. They are each single values, and the work involved is in the other direction: I have to work a little for 'sum' and a little more for 'product' to see these as being decomposed into two or more parts. This is as it should be. When I think mathematically, my primary mental access to these concepts should be as coherent units, not as collections of parts. Analogously, if my first access to 'cat' is "see ay tee" or "whiskers, claws, tail" and I have to work to identify 'cat' as a single thing, my cognition about this animal will be impaired—a deficit that will become more obvious the more complex my work with cat concepts becomes.

This seems to be the situation we're in with regard to multiplicative reasoning in particular in schooling, beginning possibly with the concept of ratio (but likely even "before" that with the concept of multiplication). The primary psychological relationship we allow students to have with ratio is one in which a ratio is two things rather than one. If you doubt this, perhaps you can imagine giving students (or even adults) a simple prompt to write 5 ratios. How many do you think would write a whole number (not written in fraction form) as one of their responses? I would expect close to none. But perhaps another good test is to watch the video here and notice whether something about it goes against your grain. That feeling—I would suggest—is likely the result of the collision of the two notions of ratio: the one we have primary access to, and the one that would allow for a more productive relationship with this concept.
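The 'oneness' of ratio can be made concrete in code: a ratio can be represented as a pair of numbers or as a single object. Python's `fractions.Fraction` (used here purely as an illustration of the conceptual point, not as anything endorsed above) treats a ratio as one value:

```python
from fractions import Fraction

# A ratio written as a pair of quantities...
parts = (6, 4)

# ...and the same ratio as a single mathematical object.
r = Fraction(6, 4)

print(r)                     # one value, automatically in lowest terms: 3/2
print(r == Fraction(3, 2))   # 6:4 and 3:2 are the *same* ratio
print(float(r))              # a ratio can be a single decimal number: 1.5
print(Fraction(10, 2))       # ...or a single whole number: 5
```

The last line is the "write 5 ratios" test in miniature: a whole number is a perfectly good ratio, even though our "first thinks" rarely produce one.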

I could be totally wrong, of course. You should just imagine that written on a sign and slung over every post here.

## Sunday, May 17, 2015

### Worked Examples for Algebra

The Common Core State Standards for Mathematics (CCSS-M) include not only a list of knowledge objectives at each grade level, such as 5.OA.1: "Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols", but also a list of 8 "mathematical practice" standards, among which are "1. Make sense of problems and persevere in solving them" and "3. Construct viable arguments and critique the reasoning of others."

I should quickly point out the obvious (which apparently needs saying): nowhere in the Practice Standards does it say "do gallery walks" or "have students collaborate on projects." There isn't just one correct interpretation of these standards, even though our collective practice tends to congeal around a few interpretations, making those appear to be the "correct" ones.

Which brings me to the lonely and unfortunately out-of-fashion worked example, the centerpiece of a 2013 study by Booth et al. designed to improve students' algebra performance.

Using Example Problems, Part I

In the first experiment of the study, students in the control condition were given their normal suite of guided practice problems (on the topic of solving two-step equations), while students in the three treatment conditions were also given either (a) correctly worked-out examples, (b) incorrectly worked-out examples, or (c) both correctly and incorrectly worked-out examples. In each of the three treatment conditions, students were not only shown an example problem, marked as correctly worked out or incorrectly worked out, but were also asked to explain "what was done in the example and why the strategy was either correct or incorrect."

This act of not only reading the worked examples but interacting with them by explaining what was happening and why is called self-explanation:

Explaining instructional material has been shown to improve learning by forcing students to make their new knowledge explicit (Chi, 2000; Roy & Chi, 2005). Logically, it then follows that asking students to explain examples could further improve their learning over having them simply study examples. Indeed, Renkl, Stark, Gruber, and Mandl (1998) found that including self-explanation prompts with examples of interest calculation problems fosters both near transfer of problem solving skills (i.e., solving the type of problem they practiced) and far transfer (i.e., solving problems that are related, but not isomorphic to those practiced (Haskell, 2001)).

The results from this experiment (on 116 Algebra I students across 9 classrooms) suggested that, unsurprisingly, all three treatment conditions were superior to the control on measures of conceptual knowledge and procedural transfer. However, none of the three examples-plus-explanations treatment groups performed significantly better than the others.

Using Example Problems, Part II

In order to further distinguish between the three treatment conditions in Experiment 1, researchers conducted a second experiment with a different sample of students, this time 8th grade Algebra I students. I encourage readers to look at the study for the methodological details, as I will only describe, in general, the results.

The strongest consistent result from this experiment came from the treatment group given a combination of correctly and incorrectly worked-out examples (along with prompts for explanation). Here, students showed significantly fewer encoding errors of conceptual features of the problems and significantly greater conceptual knowledge of "the meaning of the equals sign, negative signs, and like terms"—features identified as critical for success in algebra from prior research.

The Best of Both Worlds

To me, worked examples with self-explanation combine the best of both worlds: (1) explicit teaching and (2) cognitive engagement. Both are not only supported by the research, as shown above, but are consistent with the CCSS-M Practice Standards. We should work to improve both of these aspects of education, not strengthen one by de-emphasizing the other.

Update: This favorably timed blog post throws some theoretical and philosophical weight onto the conclusion in my last paragraph above.


Booth, J., Lange, K., Koedinger, K., & Newton, K. (2013). Using example problems to improve student learning in algebra: Differentiating between correct and incorrect examples. Learning and Instruction, 25, 24–34. DOI: 10.1016/j.learninstruc.2012.11.002