With a research grant, Geddes asked tutors whom he had trained in metacognitive tutoring to record the issues they believed were causing students to struggle in their learning. Forty tutors, working with about 80 students in hour-long tutorials, logged 522 reports, using this template to record the learning problem:
Problem
Please identify the problem(s) which led the student to seek tutoring. (You can choose more than one option.)
Doesn't grasp the material in class
Experiencing difficulty seeing the relationship between what is covered in class and what is reflected on tests
Doesn't know how to use the textbook or doesn't use the textbook
Doesn't know how to take notes
Attempts to memorize material only
Student is overwhelmed by the volume of information they are required to learn
Doesn't grasp what the professor is talking about in class
Which led to these results overall:
Results from Leonard Geddes' research on the learning problems his tutors recorded encountering.
Now, there's a lot I don't know about this research -- how tutors determine which problem is at play, for example. So I hope later posts by Geddes get at what the distinction is between "Doesn't grasp the material in class" and "Doesn't grasp what the professor is talking about during class." The first post reports that the categories were derived from years of tutoring reports and documents, but I'd like to hear a bit more about the process for deriving them, especially where there seems to be some overlap -- is poor note-taking a cause of not grasping material? Or does not being able to grasp material make it hard to take good notes?
Note added 8/1/2014: Geddes' second post is in fact getting at some of the questions above. I especially appreciate the views into actual tutoring sessions.
Geddes defines metacognitive tutoring, writing:
Whereas traditional tutoring focuses on a particularly challenging subject area, and supplemental instruction addresses specific challenging courses, metacognitive tutoring focuses on students’ interaction with content, in general, across domains and academic tasks. We like to call it listening with a “third ear.” Metacognitive tutors address the immediate cognitive problems their students are experiencing while also remaining open to underlying metacognitive conditions that may be contributing to students’ academic problems.
I hope too that there's insight into a tutoring session, perhaps with some record of the discussion to illustrate more fully what metacognitive tutoring is in practice.
But all those and other questions aside, I'm engaged by the results above and like the idea of attempting to map learning problems, to excavate them and address them with students. So tutoring isn't about studying content alone, but about studying, with each student, their own learning process and skills.
My role with Macmillan Higher Education Publishing involves the study of teaching and learning, and I wonder, looking at this, why textbooks score relatively low as an issue, yet grasping material in class ranks higher. Isn't a textbook a means of delivering course material? If tutors report that students know how to use a textbook, but that they still aren't grasping course material, is there something textbooks can do more effectively?
When I started at Macmillan, it was with a company called Bedford/St. Martin's*, whose co-founder, Chuck Christensen, said to me in my job interview that we were not in the textbook business, but rather the pedagogical tools business. And so as textbooks evolve with digital technologies to become more obviously pedagogical tools -- where learning analytics, engagement analytics, adaptive learning, personalized learning, and other possibilities emerge -- will there be a way to make our course materials such that students can grasp them more fully?
Is it possible to learn from the kind of metacognitive research Geddes and others are doing to build metacognitive aids for students into pedagogical tools?
I think so. Formative assessment that measures not just learning but also correlates learning with engagement, with prompts and questions to help students see if they're studying wisely, using their time well, taking good notes, and so on. Tools that invite written reflection -- that prompt note-taking while reading, that prompt active study planning (not just delivering links to recommended content after an assessment), that offer a learning journal or the ability to form study teams. That is, I think it would be a mistake to simply make things that push and pull students, that force them onto a path. Instead, we can make things that give students a formative look at where they are and where they need to go to meet course goals, and then choices to follow -- suggestions that students have to choose among.
Without that action -- student agency and choice -- metacognition means so much less.
And, given how important coaching can be in learning, we can make it so that students can let tutors see into the system, so that tutors can advise them on those choices.
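To make that a little more concrete, here's a minimal sketch in Python of what such a formative check-in might look like. Everything in it -- the field names, the thresholds, the suggested next steps -- is hypothetical, not a description of any existing Macmillan product; it's only meant to show the shape of the idea: pair a score with engagement signals, prompt reflection, offer choices rather than a forced path, and let the student decide whether a tutor sees it.

```python
# Hypothetical sketch of a formative "check-in," not an existing product.
from dataclasses import dataclass

@dataclass
class CheckIn:
    quiz_score: float        # 0..1 on a short formative quiz
    minutes_reading: int     # engagement signal from the e-text
    notes_taken: int         # count of notes/annotations made
    share_with_tutor: bool = False   # student agency: visibility is opt-in

    def prompts(self) -> list[str]:
        # Reflection prompts, chosen from the score and engagement signals.
        out = ["In a sentence or two, how did you prepare for this quiz?"]
        if self.quiz_score < 0.7 and self.minutes_reading < 30:
            out.append("Your reading time was light this week. What got in the way?")
        if self.notes_taken == 0:
            out.append("You didn't take notes on the reading. Want to try a note-taking module?")
        return out

    def suggestions(self) -> list[str]:
        # Options the student chooses among, not a forced path.
        return [
            "Schedule two 30-minute reading blocks before the next quiz",
            "Start a learning journal entry on this chapter",
            "Form a study team with two classmates",
        ]

checkin = CheckIn(quiz_score=0.55, minutes_reading=20, notes_taken=0)
for prompt in checkin.prompts():
    print(prompt)
```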
__
* In December, Macmillan reorganized its educational companies. Bedford/St. Martin's went from being an independent business, with its own president, marketing department, production, promotions, and other publishing infrastructure, to an imprint under Macmillan Higher Education.
2 comments:
Hi Nick,
I appreciated your thoughts and wanted to understand more about your reflections regarding formative assessment that led into engaged learning through the use of a variety of prompts (last bigger paragraph).
Specifically, what would these formative assessments look like? How would they be built in? You did a good job of explaining what the prompts would look like and how the students would benefit (i.e. prompting the students to reflect, to associate, to give the material further personal meaning, etc.), but assessment is key in this metacognitive tutoring model. I also think that corrective feedback is key as well and plays directly into assessment.
I'm not sure in every case. But imagine an issue that Geddes' project identified, say "Doesn't Know How to Take Notes." Suppose that's identified as an issue for the student. So a unit might work like this: a learner is given a list of note-taking situations -- maybe hearing a lecture, reading an article one has to summarize and respond to, exploring a spreadsheet for correlations in the data.
The module for, say, notes on a lecture might begin with a self-assessment/reflection that asks students to recall prior lecture note-taking experiences -- how they did, what they remember being a challenge (keeping up, reading their notes accurately later, etc.).
And they might be asked to choose one area to focus on as they prepare to take notes on some lectures that will be presented. There might be several recorded lectures, of different lengths and topics, and the student chooses one. To simulate a live lecture setting, there might be no pause or back button on the recording; it will play through.
The student can take notes and then, depending upon the technology, submit them for review by an automated writing evaluation tool that can parse the notes for accuracy and give feedback. Or they might submit their notes and get feedback from another student who compares the submitted notes to expert notes. Or the notes might get posted and a tutor might weigh in.
But there's feedback somehow, and a chance to try again. As the student gets better at note-taking, they might be given longer lectures to hear.
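Here's a rough sketch, in Python, of how a module like that might hang together. The names are made up, and a crude key-term-coverage score stands in for a real automated writing evaluation tool; it's only meant to show the loop of choose a lecture, take notes, get feedback, try a longer one.

```python
# Hypothetical sketch of a note-taking practice module, not Geddes' system.
from dataclasses import dataclass, field

@dataclass
class Lecture:
    title: str
    minutes: int             # length; longer lectures unlock as skill grows
    expert_notes: set[str]   # key terms an expert's notes would capture

@dataclass
class NoteTakingModule:
    lectures: list[Lecture]
    history: list[float] = field(default_factory=list)

    def next_lecture(self) -> Lecture:
        # Start short; offer longer lectures once recent scores improve.
        level = sum(1 for score in self.history[-3:] if score >= 0.7)
        ordered = sorted(self.lectures, key=lambda lec: lec.minutes)
        return ordered[min(level, len(ordered) - 1)]

    def give_feedback(self, lecture: Lecture, student_notes: str) -> str:
        # Crude stand-in for automated writing evaluation:
        # coverage = share of expert key terms that appear in the notes.
        words = set(student_notes.lower().split())
        covered = lecture.expert_notes & words
        score = len(covered) / len(lecture.expert_notes)
        self.history.append(score)
        missed = ", ".join(sorted(lecture.expert_notes - covered)) or "nothing major"
        return f"You captured {score:.0%} of the key points. Revisit: {missed}."

# Usage: one practice cycle on the shortest lecture.
module = NoteTakingModule([
    Lecture("Intro to photosynthesis", 10, {"chlorophyll", "light", "glucose"}),
    Lecture("Cell respiration in depth", 30, {"mitochondria", "atp", "krebs"}),
])
lecture = module.next_lecture()
print(module.give_feedback(lecture, "Plants use light and chlorophyll to make sugar"))
```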
For reading, they might be given interesting material to read, using Flesch-Kincaid or other reading analysis tools to mark the complexity of the text. A student who says they follow basketball might get a local feature story about a player, and they might take a comprehension test and do well. But as they read more complex pieces, their comprehension might fall off, suggesting a tipping point where more practice might help.
The software might say something like, "The reading level you begin to struggle at is grade 11, a reading level typical of many college textbooks. Based on your score, we recommend that you allow an extra half hour of reading for every 10 pages of assigned reading. Keep a dictionary handy, and given the comprehension questions you got wrong, a good technique for you might be to write a summary after every major section as a note-taking technique.
Then, form a reading study group with classmates and use your summaries to discuss the text and its key ideas and concepts."
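For what it's worth, here's roughly how that analysis could be sketched in Python. The Flesch-Kincaid grade-level formula is the standard one; the syllable counter, the 70% comprehension threshold, and the sample scores are simplifications I'm assuming for illustration.

```python
# Sketch: estimate a passage's Flesch-Kincaid grade level, then find the
# level where a student's comprehension scores start to slip.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of consecutive vowels (at least one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["text"]
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

def struggle_level(results, threshold=0.7):
    # results maps a passage's grade level -> comprehension score (0..1);
    # return the lowest level where the score slips below the threshold.
    for grade in sorted(results):
        if results[grade] < threshold:
            return grade
    return None

# Usage: comprehension holds up through grade 9 texts, slips around grade 11.
scores = {6.2: 0.95, 9.1: 0.85, 11.4: 0.60, 13.0: 0.50}
print(struggle_level(scores))   # -> 11.4, roughly the level of many college textbooks
print(round(fk_grade("The quick brown fox jumps over the lazy dog."), 1))
```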
Meanwhile, if the student does that for a few weeks, they might come back into the software and try another comprehension test, and they might see that their skills have grown. And the formative feedback on the follow-up test might set new recommendations and reading goals.
Reflection on their progress -- using Likert-scale/survey-style questions and written reflections in a reading journal, which might be discussed with fellow students working on reading or with an academic coach -- would also help.
Anyway, those are some thoughts.