Thursday, August 07, 2014

Notes on Software and Teaching Jobs in Higher Ed After Skimming the Pew report on AI, Robotics, and the Future of Jobs

Pew canvassed 1896 people by sending an " 'opt in' invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet." 

52% think technology will lead to new and better jobs overall, as it historically has done, but 48% worry that technology is replacing jobs and not creating enough new ones.
Oremus framed it this way:
Replacing manual labor with machines on farms and in factories was one thing, the worriers say. Those machines were dumb and highly specialized, requiring humans to oversee them at every stage. But the 21st century is witnessing the rise of far smarter machines that can perform tasks previously thought to be immune to automation.

Today’s software can answer your calls, organize your calendar, sell you shoes, recommend your next movie, and target you with advertisements. Tomorrow’s software will diagnose your diseases, write your news stories, and even drive your car. When even high-skill “knowledge workers” are at risk of being replaced by machines, what human jobs will be left? Politics, perhaps—and, of course, entrepreneurship and management. The rich will get richer, in other words, and the rest of us will be left behind.

All of which has brought John Maynard Keynes’ concept of “technological unemployment” back into the academic discourse, some 80 years after he coined the phrase.
Both the Oremus piece, which includes links to other sources, and, more importantly, the Pew report itself would do well in a writing course exploring technology and culture, or technology and economics. That nearly 1,900 experts are divided, nearly evenly, on the future of jobs makes this the kind of topic that will not have easy answers.
And remember, this use of technology is getting into education too, with the push for self-paced (lone learner) platforms that seemingly seek to be student-proof or teacher-proof by using adaptive tools, automated assessment, interstitial quizzing, and other means to funnel students over a prescribed learning path until they come out the other end having satisfied the learning gauntlet. Pearson's Propero is a good example of this kind of learning technology; it's being offered to colleges as a way to have students take a course and earn college credit, all with no instructor. It's software designed to replace a person with an MA or PhD who would otherwise teach the course.

So it's not just future jobs that are at stake, but current jobs too. There's a balance lost, especially in learning, when machines take over decision making. Software with fixed paths, even if adaptive and adjusting the content to what has not yet been mastered (as measured by what the software can measure), really doesn't teach learners to learn.

Deep learning requires that learners get advice from fellow learners and instructors, reflect on their prior learning choices and next options, and then make decisions about their own learning, good and bad ones, and learn from those decisions something about the subject of the course as well as about how they learn.

But I don't want to say all software is bad. Adaptive learning, automated writing assessment tools, data on engagement, and other information gleaned from learner actions and learner choices can do great things for students and teachers. When designed to support a learning community, defined broadly as students and teachers talking to and learning with and from one another, educational software can provide evidence, show patterns, and recommend learning strategies that students can discuss with classmates, teachers, advisers, and academic coaches. Learning, especially learning to read, write, and think critically, is hard and sometimes messy work. Software can supply an ordered view into that messy process. Learning comes from understanding what those views mean, making choices about what is working and what is not, and then making changes, experimenting, and reassessing over and again.
Good technology -- software like Eli from MSU, MARCA from UGA's Calliope Initiative, MyReviewer from USF, TTU's RaiderWriter, or LightSide's project to use AWE as a basis for student/teacher discussion instead of teacher replacement -- can give teachers and students useful evidence of learning, or of engagement, that becomes the basis for reflection, planning, conferencing, talking to peers, and revising. Seeing what is working and what is not helps the writer and learner make a choice about what to work on next and how to go about it; it provides a matrix for transformative assessment.
And that's what's useful about the Pew report and its divide. It teases out possibilities as well as perils.
Also, as an aside, the Pew report's page is interesting for a cool social networking feature. As you read the sampling of comments some of the 1896 people made, you'll see Twitter's bird icon embedded in the text. It marks excerpts from the findings that Pew has tweeted and links to the tweet where each was used. A kind of cool rhetorical move worth its own discussion.

Friday, August 01, 2014

Getting Ready to Have no Office

Macmillan Science and Education, where I work, is consolidating offices. New York offices at 33 Irving Place and 41 Madison Ave., along with one or two other locations, will move into one building in Manhattan. Our company leaders herald the new office layout -- an open design -- as a campus, where private offices and/or personal cubicles give way to rows of computer terminals, scattered couches and chairs, maybe some tables in nooks and crannies, or small, fish-bowled conference rooms for meetings.

I work in Boston, and once leases expire, the plan for Boston, we hear, will be as it is in New York and was in London: a move to a campus-based office model. In anticipation of that move, though it may be years away, I removed from my current office anything personal, because campus offices eliminate the personal. I also removed work-related materials -- manuscript hard copy, notes from meetings, scholarly books I own and consult, books we publish, printouts, whiteboards, and other workplace desiderata.

So now my office looks like this:

My office as it now stands. Once the whiskey is finished, the glasses will be packed in that white box on the low left shelf and the box will go home with me. The laptop is on a spare chair as a way to create a standing desk. 

Of course, when MSE sets up its campus offices, it will supply plants, color, furniture, and other amenities that, like a new library or student union on a college campus, will look contemporary and vibrant, designed no doubt for cross-colleague communication and serendipitous discoveries of new ideas. That's a key pitch for the campus version of the open office. And for an education publishing company, operating on a campus metaphor resonates with the mission to foster research and learning.

In the campus office, employees share space, much the way that on a college campus the library, learning center or student union each provides open seating for students to use on their visits to those places. Visually, based on glimpses from similar MSE offices in London pictured in a company video, the offices resemble a fairly well appointed college student union or library. Here, take a look:
A shared computer area in an open office, where I believe any open terminal can be used by any person who needs it. Much like a computer lab on a college campus.

A conference room, a fixture of traditional offices as well. So nothing new, except that these are now often designed with glass walls.

Little nooks where colleagues can work in semi-private in small groups or alone. 

Tables placed wherever there's room, borrowing from library design.
If you've been to a college campus in the U.S., you know that the images above show the kinds of spaces colleges provide to students and faculty: open, with no assigned or named seating; workstations may be present, but laptops abound; students cluster where they will; classrooms, conference rooms, study rooms, and nooks for two or three supplement the commons areas.

If you've been to college campuses, have walked through areas like these, you'll have seen energy: students in conversation, small study groups at work in a conference room, fingers flying on keyboards, and the immobile work of quiet readers in lone concentration. The campus office hunts for that vibe. Whether it can capture it, I'm not so sure.

For when students and faculty go to those public spaces, they carry in with them only what they can carry out. They do not bring plants, nor photos of family or friends, nor art from their walls, nor mementos, nor cork boards or whiteboards, nor more books than can fit in a backpack, nor anything else that they might leave out or hang or use or keep in their faculty offices or dorm rooms or apartments off campus. They cannot carry those things because there's nowhere to leave them. Their use of the space is temporary, impermanent, and random.

The campus work spaces that look like the images seen above are not, like a publishing office, used 8, 9, 10, 11, 12 hours a day, five days a week. On a college campus, unlike an office borrowing the design, open work stations, open seating, shared conference rooms and small group nooks are all temporary and interstitial spaces, way stations used on the way from offices/dorms/apartments to classes. Or, if in the library, used for short bursts, relative to where most time is spent.  (Though when deep into research that requires months of long days, the library can feel like home.)

Intermittent work in open, shared, and public spaces differs from working in an office with a door one can shut for privacy and concentration, where there's a desk one can leave work out on for more efficient resumption each morning, and where small personal accoutrements -- art, plants, favorite pens, coffee cups or teapots -- make coming in a little more pleasant. And so it's no surprise that some of the research (summarized here by Julie Beck in The Atlantic and here by Maria Konnikova in The New Yorker) on open offices shows that productivity falls, workers become more stressed, and the office becomes a place employees are happy not to go to when they can arrange to work elsewhere.

I'm among those who will not do well in an open office, not if I need to spend all my time there. As amenable and contemporary as their design may be, with color, furniture, and use of space, open offices are by design more impersonal, and thus, despite their colors and comforts, colder places to work. By design a worker has no permanent place, no private place. I've worked in factories and other jobs where I've had no place but maybe a locker to hang and store a coat. And in all those places, workers were easily replaced, some less so than others, but replaceable still. Now, publishing isn't factory work. The offices MSE proposes differ mightily from the noise, grime, and grind of machinery I experienced in my factory jobs.

These new office models -- where a place to sit is make-do, a place to talk quietly or privately is makeshift, and where there's no room for the personal installation of art, photos, awards, degrees, books, papers, plants, toys, and yes, maybe a bottle of whiskey -- reduce the sense of who people are. And by that reduction, they reduce the sense that people belong. Right now, when I go through our current offices, even when a colleague's not at her desk, I can glimpse who she is: what her children and grandchildren are into from the photos on display, the kind of work she does from what stays on her desk, the cultural references she knows from her art and knickknacks, how she thinks and plans from the way she plots out her whiteboards, and so much more. I can know colleagues better as people with lives outside the office and as colleagues with skills and talents inside the office.

But in the new office design, if people are not there that day, not in, there will be no trace of them. No sense of them. And in that lack of trace, these new office designs make it feel to me like people who work there are more transient and replaceable than before. Because when folks are out of the office they are as gone from view as passengers in an airport lounge whose flights have departed.

In a review of Cubed: A Secret History of the Workplace by Nikil Saval, Jenny Diski describes vividly an open office failure:
Jay Chiat decided that office politics were a bar to inspirational thinking. He hired Frank Gehry to design his ‘deterritorialised’ agency offices in Venice, California in 1986. ‘Everyone would be given a cellular phone and a laptop computer when they came in. And they would work wherever they wanted.’ Personal items, pictures or plants had to be put in lockers. There were no other private spaces. There were ‘Tilt-A-Whirl domed cars … taken from a defunct amusement park ride, for two people to have private conferences. They became the only place where people could take private phone calls.’ One employee pulled a toy wagon around to keep her stuff together. It rapidly turned into a disaster. People got to work and had no idea where they were to go. There were too many people and not enough chairs. People just stopped going to work. In more formal work situations too, the idea of the individual workstation, an office or a personal desk, began to disappear and designers created fluid spaces where people wandered to settle here and there in specialised spaces. For some reason homelessness was deemed to be the answer to a smooth operation.
So I see the open campus office coming. Yet as resplendent as they may come to be, and as nice as they might be to visit, I cannot help but also see them as soulless and empty. Which is why I've decorated my current office accordingly and only go into it when I have a face-to-face meeting. Otherwise, I work from home, or, when I travel to attend conferences or visit colleges, I work from airports, from hotels, and yes, from the campus libraries and student unions my offices back home will soon come to imitate.




Friday, July 25, 2014

What Can Metacognitive Tutoring Research Teach Textbook Publishers?

Leonard Geddes, Associate Dean of Co-Curricular Programs and Director of the Lohr Learning Commons at Lenoir-Rhyne University, began a series of LearnWell Project posts on 7/23/2014 with a first post titled "A Metacognitive Peer Tutoring Model: Linking Thinking, Learning and Performance in a Peer Tutoring Program."

With a research grant, Geddes asked tutors whom he had trained in metacognitive tutoring to record the issues they believed were causing students to struggle in their learning. Forty tutors working with about 80 students in hour-long tutorials logged 522 reports, using this template to record the learning problem:

Problem

Please identify the problem(s) which led the student to seek tutoring. (You can choose more than one option.)

Doesn't grasp the material in class
Experiencing difficulty seeing the relationship between what is covered in class and what is reflected on tests
Doesn't know how to use the textbook or doesn't use the textbook
Doesn't know how to take notes
Attempts to memorize material only
Student is overwhelmed by the volume of information they are required to learn
Doesn't grasp what the professor is talking about in class.

Which led to these results overall:

Results of Lenoir-Rhyne Tutors' Observations of Student Learning Problems
Results from Leonard Geddes's research on the learning problems his tutors recorded encountering. Click on the image to see it more fully.




Now, there's a lot I don't know about this research -- how tutors determine which problem is at play, for example. So I hope later posts by Geddes get at the distinction between "Doesn't grasp the material in class" and "Doesn't grasp what the professor is talking about in class." The first post reports that the categories were chosen from years of tutoring reports and documents, but it would help to hear a bit more about the process for deriving the categories, especially where there seems to be some overlap: is poor note-taking a cause of not grasping material? Or does not being able to grasp make it hard to take good notes?

Note added 8/1/2014: Geddes's second post does in fact get at some of the questions above. I especially appreciate the views into actual tutoring sessions.

Geddes defines metacognitive tutoring, writing:
Whereas traditional tutoring focuses on a particularly challenging subject area, and supplemental instruction addresses specific challenging courses, metacognitive tutoring focuses on students’ interaction with content, in general, across domains and academic tasks. We like to call it listening with a “third ear.” Metacognitive tutors address the immediate cognitive problems their students are experiencing while also remaining open to underlying metacognitive conditions that may be contributing to students’ academic problems.

I hope too that there's insight into a tutoring session, perhaps with some record of the discussion to illustrate more fully what metacognitive tutoring is in practice.

But all those and other questions aside, I'm engaged by the results above and like the idea of attempting to map learning problems, to excavate them and address them with students. So tutoring isn't about studying content alone, but studying with each student their own learning process and skills.

My role with Macmillan Higher Education Publishing involves the study of teaching and learning, and I wonder, looking at this, why textbooks score relatively low as an issue, yet grasping material in class ranks higher. Isn't a textbook a means of delivering course material? If tutors report that students know how to use a textbook, but that they still aren't grasping course materials, is there something textbooks can do more effectively?

When I started at Macmillan, it was with a company called Bedford/St. Martin's*, whose co-founder, Chuck Christensen, said to me in my job interview that we were not in the textbook business, but rather the pedagogical-tools business. And so as textbooks evolve with digital technologies to become more obviously pedagogical tools, where learning analytics, engagement analytics, adaptive learning, personalized learning, and other possibilities emerge, will there be a way to make our course materials such that students can grasp them more fully?

Is it possible to learn from the kind of metacognitive research Geddes and others are doing to build into pedagogical tools metacognitive aides for students?

I think so. Imagine formative assessment that measures not just learning but also correlates learning with engagement, with prompts and questions to help students see if they're studying wisely, using their time well, taking good notes, and so on. Imagine tools that invite written reflection -- that prompt note-taking while reading, that prompt active study planning (not just delivering links to recommended content after an assessment), that offer a learning journal or the ability to form study teams. That is, I think it would be a mistake to simply make things that push and pull students, that force them onto a path. Instead, we can make things that give students a formative look at where they are and where they need to go to meet course goals, and then choices to follow, suggestions that students have to choose among.
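To make the correlation idea concrete, here's a minimal sketch, not any product's actual implementation: pair each student's engagement measure (the field names below, like minutes of active reading, are hypothetical) with a quiz score and compute a Pearson correlation, the kind of evidence a formative tool could surface for students and teachers to discuss.

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0  # no variation in one measure: correlation undefined
    return cov / (sx * sy)

# Hypothetical per-student records: minutes of active reading vs. quiz score.
records = [
    {"student": "A", "minutes_reading": 20, "quiz_score": 62},
    {"student": "B", "minutes_reading": 45, "quiz_score": 78},
    {"student": "C", "minutes_reading": 60, "quiz_score": 85},
    {"student": "D", "minutes_reading": 90, "quiz_score": 91},
]
r = pearson([d["minutes_reading"] for d in records],
            [d["quiz_score"] for d in records])
print(f"engagement/score correlation: {r:.2f}")
```

The number itself isn't the point; what matters, per the paragraph above, is that the student sees the pattern and chooses what to do with it.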

Without that action -- student agency and choice -- metacognition means so much less.

And, given how important coaching can be in learning, we can make it so that students can let tutors see into the system, so that tutors can advise them on their choices.

__
* In December, Macmillan reorganized its educational companies. Bedford/St. Martin's went from being an independent business, with its own president, marketing department, production, promotions, and other publishing infrastructure to an imprint under Macmillan Higher Education.

Tuesday, July 22, 2014

Goodnight Dull Assignments and Hello Goodnight Moon

Would I assign Margaret Wise Brown's Goodnight Moon in a college writing course?
I think I would, were I teaching at the moment, after reading "What Writers Can Learn From 'Goodnight Moon'" by Aimee Bender in the New York Times (http://opinionator.blogs.nytimes.com/2014/07/19/what-writers-can-learn-from-good-night-moon/).
Bender describes getting several copies of the book upon the birth of her twins and settling in to read it for the first time. She then says,
The babies listened in their sleepy baby way, and as the pages turned, I felt a growing excitement — a literary excitement. Not what I expected from this moment. But I was struck and stunned, as I have been before, by a classic sneaking up on me and, in an instant, earning yet again another fan.
It also seemed to me to be an immediately useful writing tool.

“Goodnight Moon” does two things right away: It sets up a world and then it subverts its own rules even as it follows them. It works like a sonata of sorts, but, like a good version of the form, it does not follow a wholly predictable structure. Many children’s books do, particularly for this age, as kids love repetition and the books supply it. They often end as we expect, with a circling back to the start, and a fun twist. This is satisfying but it can be forgettable. Kids — people — also love depth and surprise, and “Goodnight Moon” offers both. Here’s what I think it does that is so radical and illuminating for writers of all kinds, poets and fiction writers and more.
Her piece goes on to a reading of Goodnight Moon that explores and celebrates her observation.

I'm attracted to this kind of thing in a writing course because Bender's essay is assignable; for many students who've read the book or have had it read to them, there will be the warmth of recollection, and for those who haven't read the book, it's readable.

I can see using the Bender essay and Wise Brown book to discuss form, to invite even the most novice and unsure writers to experiment with voice, to play with style, to think about the idea of simply finding a way to surprise their readers a bit.

Also, the idea makes sense as more and more states require their public colleges and community colleges to eliminate or reduce reliance on developmental reading, writing, and math courses. For example, in Florida, students who graduate high school can go right into a first-year writing course even if a writing placement test indicates they would be better off in a developmental writing course. A few years ago, in fact, they would have had no choice but to take that developmental course first, possibly two or even three developmental courses. That's ended, and so many professors are seeing in their first-year courses writers with more varied ability, some stronger, some weaker. In other places, there are accelerated learning programs (see http://alp-deved.org/), progressive approaches to helping developmental writers stay on track and do well in a first-year writing course alongside students who did not require a developmental course.


To make these more varied-ability writing courses work, college writing teachers need to learn about differentiated teaching practices of the kind that elementary educators are trained to apply. A lot of college writing teachers already arrive at differentiated practices on their own, but many will struggle to become comfortable with the approach.

Assigning first Goodnight Moon, discussing it, and then assigning Bender's essay carries with it a native differentiated element. It starts with a simply elegant book to read, a picture book, but one that is being read by writers, following Bender, to learn about writing. So while the book is a children's book, it's also a classic, great literature, and it teaches. The reading can kick off a range of possible discussions -- remembering other books read, or for those who did not grow up as readers or being read to, other stories heard or watched as children.  This would borrow from literacy narrative assignments.

The turn to Bender's essay would, for some weaker readers, be less troubling because they'll have already read and discussed the book Bender's writing about. That is, it's a different experience reading a review or analysis of a work of art one's already seen and thought about. So placing Bender's essay in a digital setting where writers can share notes and comments as collaborating readers, including inviting them to read the comments archived at the New York Times site, makes reading communal.

And then because of Bender's points about how writing can surprise, float above its forms, subvert its own rules, the first year course can invite writers of all abilities to play with the rules and conventions of standard edited English. Even the seemingly most inept writer, the student who cannot get a subject or verb to agree, a tense to hold, a paragraph to cohere, can play with words and learn to find some joy in the experiments that come with trying to give their readers both "depth and surprise."




Friday, June 27, 2014

Understanding Teaching with Mike Rose

Note: This is a post originally written for an internal Macmillan Education blog on December 7, 2013

More and more the work we do in higher education publishing necessarily and inevitably includes professional development for the teachers who will use our books and media. The business rationale for this is simple: if teachers understand the value and support and insight our stuff brings to them and their students, they'll use it more, require it, depend upon it, and students will then be required to log into our software and services, required to do some of their learning with us, required to buy, open, study from our books. Essential books and software equals profit. Simple. But even more important our mission as educational publishers is to foster better teaching and learning because the world requires good teaching and a humane and educated people.  By all of us understanding teaching and learning, and then applying that understanding on campus from the first sales call to ongoing training and professional development for adopting professors, we enact the better natures of books and media. We become what we promise professors can be.

Mike Rose, who has three books, two as co-editor, in Bedford/St. Martin's professional resource series -- An Open Language is the one solo authored and is a collection of his essays --  has the first of three posts in the Washington Post Online that looks at the complexities of teaching here: http://t.co/btzId4s2sq. He's posting as a guest blogger in Valerie Strauss's The Answer Sheet, a blog that covers education.  Here's how Strauss sets up Mike's work:
Here is a thoughtful piece on the essence of teaching and the kind of teacher education programs we really need from Mike Rose, who is on the faculty of the UCLA Graduate School of Education and Information Studies and author of “Back to School: Why Everyone Deserves a Second Chance at Education.”  This is longer than your average blog post but well worth the time, and is the first of three pieces on teacher education by Rose.
Those who know Mike won't be surprised by the quality and kindness of thought. Here is but one excerpt that resonates for me, in the main because it simply reminds me of my wife, Barbara, a brilliant fourth-grade teacher whose job teaching fourth grade is so much more complicated, by all kinds of factors and needs, than my occasional job teaching college writing courses.
Teaching done well is complex intellectual work, and this is so in the primary grades as well as Advanced Placement physics. Teaching begins with knowledge: of subject matter, of instructional materials and technologies, of cognitive and social development. But it’s not just that teachers know things. Teaching is using knowledge to foster the growth of others. This takes us to the heart of what teaching is, and why defining it primarily as a craft, or a knowledge profession, or any other stock category is inadequate. I’m not sure there is any other work quite like it.

The teacher sets out to explain what a protein or metaphor is, or how to balance the terms in an algebraic equation, or the sociological dynamics of prejudice, but to do so needs to be thinking about how to explain these things: what illustrations, what analogies, what alternative explanations when the first one fails? This instruction is done not only to convey particular knowledge about metaphors or algebraic equations, but also to get students to understand and think about these topics. This involves hefty cognitive activity, as any parent knows from his or her experiences of explaining things to kids, but the teacher is doing it with a room full of young people—which brings a significant performative dimension to the task.

Thus teaching is a deeply social and emotional activity. You have to know your students and be able to read them quickly, and from that reading make decisions to slow down or speed up, stay with a point or return to it later, connect one student’s comment to another's. Simultaneously, you are assessing on the fly Susie’s silence, Pedro’s slump, Janelle’s uncharacteristic aggressiveness. Students are, to varying degrees, also learning from each other, learning all kinds of things, from how to carry oneself to how to multiply mixed numbers. How teachers draw on this dynamic interaction varies depending on their personal style, the way they organize their rooms, and so on—but it is an ever-present part of the work they do.
The full piece by Mike, which goes on to look at the role of college teacher education programs and the balance of classroom preparation for teaching against on-the-job experience, reminds those of us who do professional development that when we work with educators, we are teachers and those educators are students. And what matters so much in our work then is following good pedagogy, applying the things our books suggest. So a workshop might open with a prompt or question, asking people to write for a few minutes on it before discussion -- a simple writing-to-learn activity that helps focus the mind on the topic, gives people a private place to think though they're in a public space, assures that there will be things to say when one asks for responses, and helps support learner-to-learner discussion. But what we also need to remember is how people learn, the role of anxiety and aspiration. When we do workshops, our rooms are full of teachers who will have, to borrow from Mike, silences, slumps, aggressiveness, and more.

To understand professional development then, to understand training, we need to understand that we are educators and that what we are doing is as important and complex and demanding and necessary as the work the teachers we are supporting will go forth and do in their classrooms.

Thursday, June 26, 2014

Draft --> Writing Software with Analytics for Writers

Another item originally written for folks at Macmillan Education.

Traci Gardner, who contributes to Bits, a Bedford/St. Martin's blog for teachers at http://bedfordstmartins.com/bits, tweeted a link to a ProfHacker entry by Konrad Lawson about Draft, a tool for collaborative writing, at "http://chronicle.com/blogs/profhacker/draft-for-collaborative-writing/49427."  Lawson observes,
Draft is designed for drafting and collaborative writing of text. It is not a blogging platform or a live editing environment like Google Docs, and the documents created within Draft are not designed to have their final home there. As it functions now, you write, collaborate, edit, and then export or directly publish the documents to your cloud hosting service such as Dropbox, Evernote, or Google Drive, to a social platform such as Blogger, WordPress, Tumblr, and Twitter or, if you use the Chrome or Firefox additions, directly back into a text box in another browser tab. Get a quick overview of its features here: https://draftin.com/features .

I want to draw attention here to the analytics features (https://draftin.com/features#analytics), especially this idea, described by Draft's creator, Nathan Kontny:
One mistake I keep seeing people make, when they publish their writing, is that they don't pay enough attention to attributes that might affect how much traction that writing will get.  They'll publish 2000 word posts, when their audience would prefer 500. Or they publish on Friday night, when no one might be paying attention and Monday morning might be a better idea. I wanted to make this type of analysis a lot easier to understand, and help people, including myself, learn what makes our writing get more attention than other writing.

This is the kind of thing -- finding ways to give writers data they can use to make decisions about their writing -- that really allows the writer to adapt; it gives them agency. That's distinct from the kind of adaptive learning that pushes a learner toward one desired, preset end, where the software varies content and activities based on performance until the learner reaches that end -- there, the software is adapting, not the learner.

Note the kinds of things Kontny's trying to help with -- audience awareness, the ability for writers to see differences in drafts. A lot of online writing tools (http://grammarly.com, for example) can give statistics on word count, or in the case of Grammarly, the number of sentence-level errors made. And there's some use in that sometimes, but giving writers information on what in their writing finds an audience, when it's best to post, what the reading level of their writing is (Draft uses the Flesch reading level), and other information lets a writer see things over time. This intranet does some of that, by the way; it reports (click the "Content" tab and then filter to see your own content) which of your contributions have been viewed and how many times, commented on, liked, replied to. That's standard stuff in social networks and should be standard stuff in the learning spaces we make for students. What would up the value on that would be aggregation for totals of all those items, and a view that shows trends over time.
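For a sense of what a readability metric like Draft's computes, here's a minimal sketch of the standard Flesch reading-ease formula. The syllable counter is a rough vowel-group heuristic of my own, not what Draft actually runs:

```python
import re

def flesch_reading_ease(text):
    """Flesch reading ease: higher scores mean easier-to-read prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Rough heuristic: count runs of vowels, with a floor of one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))
```

Short words in short sentences score high; polysyllabic prose in long sentences drives the score down, which is why the number gives a writer a quick, comparable signal across drafts.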

The other feature of Draft -- the ability for a writer to compare drafts side by side, whether one's own writing or writing being co-edited by fellow writers -- is powerful as well. Teachers struggle to get writers to address global revision, what is sometimes called higher-order revision. Students tend to focus on tweaking sentences here or there, and a tool like Grammarly or Word's spell checker/grammar checker draws attention to surface-level issues. But a tool like Draft shows changes. That's the kind of thing we need in the writing tools we make for students and teachers. There needs to be a drafting tool that supports version control, to encourage writers to save as or upload drafts. And a tool that not only lets writers compare drafts, but that highlights changes. And not only highlighting changes: it should also tell writers what percentage of the draft changed -- how many new words were added, how many new sentences were created, new paragraphs, how many deletions were made. The idea would be a tool that can distinguish larger revision from minor editing. And then something that aggregates all that, so over assignments and drafts, a writer sees trends.
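A sketch of the kind of revision summary I'm describing -- my own illustration using Python's difflib, not a feature of Draft:

```python
import difflib

def revision_stats(old_draft, new_draft):
    """Summarize how much changed between two drafts, word by word."""
    old_words = old_draft.split()
    new_words = new_draft.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    added = deleted = 0
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "insert"):
            added += j2 - j1        # words present only in the new draft
        if tag in ("replace", "delete"):
            deleted += i2 - i1      # words dropped from the old draft
    # ratio() measures similarity, so 1 - ratio() approximates change.
    pct_changed = 100 * (1 - matcher.ratio())
    return {"words_added": added, "words_deleted": deleted,
            "percent_changed": round(pct_changed, 1)}
```

A teacher or peer reviewer could glance at the percent-changed figure to tell a substantive revision from a round of sentence tweaking.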

Giving writers and writing teachers these views -- evidence-based insights into writing and revision processes -- gives them tools for articulating change and describing growth. A teacher can ask for a new draft and can tell the writer that she wants to see at least a 25% increase in new ideas. Peer reviewers can return to see which of their feedback ideas led to bigger revisions by the writer, which would help reviewers see the value of their work and why it matters.

A Look at the Automated Analysis of Constructed Responses, Or Why Jared is Thin

Note: Like the post on Art Graesser's work, this entry comes from notes taken at a meeting with innovative professors doing great research on using automated response to writing -- in this case, writing to learn. One of the privileges of being in publishing is meeting professors with great ideas.


So: Jared, the Subway guy, lost a lot of weight. Where did the weight go?

That's the kind of question that requires biology students NOT simply to recall key concepts but to explain a process, and unlike a multiple choice question, it doesn't let the student look at the choices for hints. It asks them to construct a response that answers the question. So, where did Jared's weight go? I won't tell you the answer, but it was the illustrative question used by the presenters to explore "How Can Automated Assessment of Constructed Responses (AACR) Provide Automatic Evaluation of Written Formative Assessment in LaunchPad?"

The presenters were (with descriptions pulled from http://create4stem.msu.edu/project/aacr):
Mark Urban-Lurain, Associate Professor and Associate Director of the Center for Engineering Education in the College of Engineering, Michigan State University, directs the technology development and implementation of AACR.

John Merrill, Director of the Biological Sciences Program, Michigan State U, led the development of the core biology curriculum, and provides disciplinary expertise in the biology portions of the work plan, coordinating with faculty who teach introductory biology courses to implement the materials in their courses.

John began by stating the essential problem: multiple choice questions are not adequate measures of what students know; writing -- in this project, short answers of one to 40 words or so -- better reveals student understanding (or misunderstanding). However, in large lecture courses, sometimes with four or five hundred students, even just skimming, let alone reading, sorting, scoring, and getting a composite understanding of what students know, doesn't work. So John and Mark embarked on a project to have students write, and then use machine analysis not only to score the writing but to sort it into categories, and from those categories reveal where students have misconceptions or misunderstandings. That gives instructors two things: a better gauge of student learning and the means to see broad trends, and thus to adjust their teaching toward what students need clarification and help with. Currently AACR is an instructor tool: instructors download questions that AACR can score, sort, and report on; deliver the questions to students; extract the answers as a spreadsheet; and upload that spreadsheet to AACR, where the AACR team runs the answers and delivers a report back to the instructor. Cumbersome, yes, but the team has won a $6 million NSF grant to both improve how AACR works and -- this is key to the mission -- create a means for faculty development, with advice provided to and from faculty communities of users on what kinds of changes to make to their teaching in response to the data AACR provides.

Attending the presentation also were James Morriss, first listed author of Biology: How Life Works (http://www.macmillanhighered.com/Catalog/product/biologyhowlifeworks-firstedition-morris), and Melissa Michael, the lead assessment author on the book. Both concurred on the power of writing to foster learning and to reveal, better than multiple choice questions, what students know and do not. James told a story about how he sometimes uses both a multiple choice question and a short answer question on the same test, and the results are wildly different: students get more right on the MC questions, but their written responses show that they don't really know the subject matter, or at the very least cannot articulate it on their own. An MC question might have language that triggers recall of lecture notes or textbook language, but when asked to write, and thus required to produce that language on their own, things fall apart.

So, as to writing. Note the acronym AACR and the phrase "constructed responses." While we might use the term open-ended responses, from the surveys and assessments we're used to designing, Merrill and Urban-Lurain used "constructed responses" to emphasize two things. First, students have to 'construct' a response, but the questions used are not meant to be open ended, as in a survey with things like "other" or "tell us more"; instead the questions seek specific responses to key concepts in the course. Second, when fully considered as a concept in learning, a constructed response might not be text only (though AACR is for written text only) but can include the use of images, artifacts, data, tables, and so on.

Earlier I posted on the meeting we had w/ Art Graesser and his use of Latent Semantic Analysis. The AACR project doesn't use that approach. Instead, it analyzes responses by doing a word analysis -- looking for the presence of key words, mostly nouns, in student answers -- and matching the prevalence of those words against prior student answers. The machine is trained to score student writing and to categorize answers according to core ideas in -- in this demo -- biology, so that teachers can see which students are using language that correlates with understanding, which are not, and where the misunderstandings lie. The software is built on a program called SPSS (http://www.ibm.com/software/analytics/spss/), statistical analytic and predictive software now owned by IBM. They chose it in part because it's designed for non-specialists to use but is also robust enough for their purposes. They applied NodeXL, a program that creates associational graphs from Excel data, to produce concept clusters and association graphs (so a word used a lot has a bigger ball, and a word it appears with a lot in student answers has a thicker line to that word's ball; go to http://www.nodexlgraphgallery.org/Pages/Default.aspx to see NodeXL graph images to get a sense). The images you see at NodeXL are more complex than an example from a single question in AACR, whose data is easier to read from the graph, but John Merrill noted that even so, and even for science professors, teaching instructors how to read the graph, understand what it means about student learning, and then come up with a response in their teaching to address what they see is necessary.
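To illustrate the core idea of keyword-based categorization -- the real AACR pipeline is trained in SPSS against thousands of prior answers, so the category names and keyword lists below are hypothetical stand-ins of my own:

```python
import re
from collections import Counter

# Hypothetical keyword lists; AACR derives its categories from
# statistical analysis of large corpora of prior student answers.
CATEGORIES = {
    "cellular": {"metabolism", "metabolic", "co2", "respiration", "calorie"},
    "physiological": {"sweat", "urine", "feces", "excretion"},
}

def categorize(answer):
    """Return every category whose keywords appear in an answer."""
    words = set(re.findall(r"[a-z0-9]+", answer.lower()))
    return [cat for cat, keys in CATEGORIES.items() if words & keys]

def category_report(answers):
    """Tally what percentage of answers fall into each category."""
    counts = Counter(cat for a in answers for cat in categorize(a))
    return {cat: round(100 * n / len(answers), 1)
            for cat, n in counts.items()}
```

The instructor-facing report is essentially this tally at scale: a distribution showing where the class's answers cluster, concept by concept.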

Here's my understanding of how things worked:

SPSS was fed WordNet, an open source dictionary funded by the National Science Foundation (http://wordnet.princeton.edu/). WordNet links and associates words not just by meaning, like a thesaurus, but by concepts and varied meanings -- a richer lexical matrix. Merrill and Urban-Lurain also added terms and meanings particular to biology, so that the occasional specialized language, or specialized meanings of common language (such as 'mean' in statistics), that students might use in their answers was accommodated.

As student answers were added, SPSS produced a report that pulled and analyzed key words -- nouns -- in student answers and made a first pass at suggesting categories/concepts for those words. Merrill and his team of biology professors then corrected and fine-tuned the categories and which words should be associated with them. Categories were then associated with concepts key to the course. So for example, on where Jared's weight went, 37.3% of answers pointed to cellular causes (metabolic rate, calorie burn, and weight leaving as CO2) and another 37% to physiological actions (sweat, urine, feces, and other means of departure). Guess which is right -- or rather, where the student answers should be clustering. With the results, at a glance and computed more quickly and more accurately than a professor could manage by reading, categorizing, and counting answers by category, a teacher sees whether students have the key concepts down, and with greater accuracy even than an easier-to-auto-score multiple choice question. Very powerful information.

Questions were drafted and student responses were uploaded. In the sample question we studied, 374 student responses were gathered and two things happened: human readers applied a rubric and scored those responses, and then the answers, categories, and concepts were tweaked so that SPSS could give a predictive score -- note the word predictive -- that says, essentially: based on the vocabulary we see, the software predicts that a human reader, reading the full answer, would give it score X. Over time that prediction matched human scorers in the 83% or higher range for high and low scores (of the three levels the humans used) but matched only 43% on mid-range scores (where humans also show the widest variance).
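Those match rates come from comparisons along these lines -- this is my own sketch of the arithmetic, not the team's code -- tallying machine/human agreement at each of the three score levels:

```python
def agreement_by_level(machine, human, levels=("low", "mid", "high")):
    """Percent of answers at each human-assigned level where the
    machine's predicted score matched the human score."""
    out = {}
    for level in levels:
        pairs = [(m, h) for m, h in zip(machine, human) if h == level]
        if pairs:  # skip levels with no human-scored examples
            out[level] = 100 * sum(m == h for m, h in pairs) / len(pairs)
    return out
```

Broken out this way, a high overall agreement number can't hide the mid-range weakness -- the same band where trained human raters disagree with each other most.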

The labor intensity is in question authoring and adding vocabulary (though both would subside if more instructors used the program and added material -- one goal of the NSF grant, a kind of crowd sourcing to get more questions in). The labor also comes in establishing predictive outcomes from SPSS that match the scores of normed human graders (humans trained to apply a rubric consistently, so readers give the same, or nearly the same [it depends on the rubric], score on a given sample of student constructed response). It took 374 items scored by humans, for example, to account for the range of responses and lexical variation of student work. That's a lot of norming for one question. Multiply that by just five questions per chapter to go with a book, and one can see a tremendously labor intensive process.

But that said, consider the pedagogical benefits and outcomes possible, and the ability to perhaps adapt the machine not only to score and give a report to an instructor, but also to give adaptive information directly back to students. Imagine a LearningCurve made of "constructed responses" instead of just multiple choice questions, and you can see where this might go.

Right now the technology is young, despite ten years of research, and the NSF grant is only six months or so (out of five years) in. So there's time to see where this goes and what experiments the biology team can try. My own concern is for the humanities, where the revenues from our textbooks, which are the point of sale for pedagogy, are significantly lower, and so the labor to build the questions that the current methodology uses would be something we couldn't afford. But boy, they got a lot right, and it'll be cool to see where this goes and whether biology and other science books in MHE can do experiments.

Art Graesser and MITSC


A colleague at Macmillan Education arranged a meeting in New York for editors to meet with Art Graesser, a psychology professor at the University of Memphis who researches and designs in the Memphis Intelligent Tutoring Systems Center (MITSC), part of the Advanced Distributed Learning Center for Intelligent Tutoring Systems Research and Development (ADL-CITSRD), a government partnership located in the FedEx Institute of Technology (FIT; and yes, there will be an acronym test at the end of this post -- it is 50% of your grade). The purpose of the meeting was to learn about different approaches to automated writing assessment.

On the way to discussing automated assessment of writing, Art described some other projects from MITSC:

AutoTutor (http://www.memphis.edu/mitsc/capabilities/team-memphis-projects/autotutor/index.php), where, as a student works through a self-paced tutorial, two "agents" -- software coded to track what the student is doing (or not doing) -- are triggered by student actions in the tutorial. So a student might make a mistake in identifying a key idea, and the first agent might trigger a text or audio message asking the student a question. The second agent might comment on the first agent's question or on the student's response, creating a kind of learning dialog among the two agents and the student about the item under study. The Turing Test becomes Turing Tutors. Now, if this sounds bizarre, wait: the research from Art and his team shows that students who study in the tutoring software do slightly better on shallow knowledge (recall, definition, summary) of content than students who only read, and do significantly better on deeper knowledge (reasoning, synthesis, and communication). The acts of dialog, of drawing student attention to thinking in new ways, of answering or at least considering the questions the agents pose, lead to deeper learning.

That's not surprising on the face of it, but what's powerful is the creation of software that helps a lone learner come to the kind of deeper engagement necessary for deeper learning.

And that's the nub of Art's work -- deeper learning through deeper engagement via dialog and writing.

Art and his colleagues at Memphis are also doing research with the Center for the Study of Adult Literacy (CSAL) at Georgia State University, which, like MITSC, has won grants from the Institute of Education Sciences (IES), a federal initiative that studies learning sciences. I'm linking to both CSAL and IES because they're sites worth visiting, doing a lot of useful research that we can draw on for validation and direction of editorial initiatives.

On writing, one of the first things you'll want to check out is the work MITSC has been doing with cohesion metrics of written text. To quote from the project, it involves the creation of a tool that generates "Automated Cohesion and Coherence Scores to Predict Text Readability and Facilitate Comprehension," affectionately dubbed -- and this will be on the test, so pay attention -- Coh-Metrix. To put it in simpler terms, Coh-Metrix measures readability, only in ways more sophisticated than Lexile, Gunning-Fog, and, perhaps best known (because it's built into MS Word), Flesch-Kincaid. You can get a good explanation of what Coh-Metrix measures here -- http://cohmetrix.memphis.edu/cohmetrixpr/cohmetrix3.html -- but what you might most want to do is go to their Text Easability Assessor (TEA) -- http://tea.cohmetrix.com/ -- create an account, and have a TEA party* with some prose of your own.

So. On to auto assessing writing. In the discussion, Art described three broad ways to auto assess text:
  1. Compare the text to an ideal and score it for how close it comes -- We can do this crudely already with one-word answers, for example: a student writes a word into an answer, and if it matches the word we've designated as correct (correctly spelled, since our limited engine does a full comparison), the student gets full points for the question. The software used in automated assessment allows answers more sophisticated than a single, correctly written word and more nuanced scoring than right or wrong.
  2. Map the text to a cluster of answers. That is, instead of comparing to a single ideal, a range of responses -- A, B, C, and D answers -- might be available, and student submissions are compared for features that match somewhere in the cluster. So if the writing has features associated with an A -- vocabulary, length, and other measures -- the writing is scored an A, and so on.
  3. C-Rater level (C-Rater is an ETS tool that we see in use in writing courses as Criterion). Here, the software is trained on prompts and a corpus of sample student writing in response to those prompts. The prompts are designed so that submissions will fall into the range of samples given (so a bit of 1 and 2 above happens), but in addition to using that corpus as the basis of the analysis, C-Rater also uses Latent Semantic Analysis, a means of analyzing the submitted text in more sophisticated ways.
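For a sense of how an LSA space works, here's a toy sketch -- my own, in plain NumPy; real engines like the one behind C-Rater are far more sophisticated. Build a term-document matrix from a corpus, reduce it with a truncated SVD, and score a new answer by cosine similarity in the reduced space:

```python
import numpy as np

def lsa_space(documents, k=2):
    """Build a tiny LSA space: term-document counts reduced via SVD."""
    vocab = sorted({w for d in documents for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    td = np.zeros((len(vocab), len(documents)))
    for j, doc in enumerate(documents):
        for w in doc.lower().split():
            td[index[w], j] += 1
    # Keep only the k strongest latent dimensions (truncated SVD).
    u, s, _ = np.linalg.svd(td, full_matrices=False)
    return vocab, u[:, :k] * s[:k]

def embed(text, vocab, term_vectors):
    """Project new text into the space by summing its term vectors."""
    return sum((term_vectors[vocab.index(w)]
                for w in text.lower().split() if w in vocab),
               np.zeros(term_vectors.shape[1]))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

The payoff is that two answers can score as similar even when they share few exact words, because the SVD places terms that co-occur across the corpus near one another -- the "latent" in Latent Semantic Analysis.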

The psychology team and the biology team have done experiments with MITSC using the Latent Semantic Analysis (LSA) engine MITSC has designed. The process went something like this:

1. A textbook was turned over to MITSC as .txt files.
2. Those files were scanned and described for the latent semantic features using the engine.
3. Wikipedia's entries on psychology were also scanned, to extend the corpus and provide a richer semantic matrix. This creates an LSA space: a corpus of writing that student answers to short-answer questions are analyzed and scored against.
4. In an experiment, six questions were used, and student answers were auto-scored and compared to human scores. Now, this process had some significant steps I won't get into, and the answers were short -- 20 to 100 words or so. But the results were interesting in two ways:
A. The scores on the first three test questions, the ones used to train the machine to score, matched human raters.
B. The machine was able to score the second three questions accurately, without being trained on them. That is, the algorithm designed on the first three test questions carried over and worked on the second set of three.

The potential here is tremendous -- the ability to deploy, in all texts, questions that draw on the text being read, scored with accuracy across a range of answers. Imagine this in a system where questions occur as students read, engaging them at key points to help them pause. And imagine the data that might result, where a student sees not only how they did but what the class averages are. Imagine a tutoring agent stepping in from time to time as answers come in, helping students learn more deeply based on those answers. That is, with a well designed automated assessment tool, one doesn't have to give a score alone; the score could trigger a dialogic tutoring agent, or suggest a study group of fellow students to chat with, or trigger a message to an on-campus tutor.

It's not the score so much as what is done with the score to foster further learning. The key is that the score comes from writing, an act that research shows deepens learning more fully than just reading, just highlighting, or just multiple choice answering (even in LC).

My understanding is that these questions were designed for convergent thinking (see the graphic here -- Faultless Facilitation – Leveraging De Bono’s Six Thinking Hats -- on that) and not divergent.

There's more, but I'm out of time.

*I apologize

in praise of typo, errata, and error

http://www.theatlantic.com/technology/archive/2014/06/a-corrected-history-of-the-typo/373396/

The URL leads to a piece by Adrienne LaFrance,  "A Corrected History of the Typo" with the subtitle notation, "In the beginning, print was not about perfection; it was a space for collaboration."

LaFrance interviews "Adam Smyth, an English literature fellow at the University of Oxford who specializes in the instability of early modern texts," and he walks her through early printing, the role of errata, and how the relationship of early writers, printers, and readers coalesced around the expected excavation and discussion of error. LaFrance then steers the discussion with Smyth to how error is treated online, where it often is erased in corrections (though wikis, most notably, keep versions). Together they observe, and this is just a sample:
Errata lists in the early days of printed books, then, were themselves a sort of early comment section—the place where revisions were made and ideas were exchanged. They were "confessional spaces" and "emblems of a new culture of accuracy," but errata lists were also a way of seeing books as a collaboration between reader and writer, rather than just the one-way broadcasting of a set of ideas. Which means that print, in its infancy, didn't actually lead to "better, more accurate texts," but to "the dissemination of blunders," Smyth says. It is in this way that the dawn of book printing sounds a bit like where we find ourselves today on the Internet—a fluid and collaborative space for ideas that sometimes seems to be equal parts information-rich and error-riddled. The difference in early print, though, is that errors "were not hidden away." And while screengrabs capture some evaporated Internet writing for posterity, much of what's published today simply disappears or changes with all the imperceptibility of a distant keystroke. 
As a writing teacher, I'd assign this to my students. It's useful for exploring collaboration, for thinking "about tolerating, rather than eliminating, reasonable mistakes," about the social life of information, about the transition into new technologies and how the new starts out by aping what came before it. 

But on the idea of "reasonable mistakes," it would be especially useful for weaker, less confident writers whose fear of error might make it harder to draft, and as well for a discussion of peer review and workshopping, where it might be possible to create a practice of not calling error out for correction but using it as an occasion for discussion and exploration. 

Too, the other lesson to be drawn from this is one wikis teach as well -- reminding writers to use "save as" and other techniques for storing drafts, preserving versions of work with errors intact. The goal for most writers, on most occasions, will remain to publish (or in a course, to turn in a final draft) with as little error as possible, all the way to no errors at all. But the point is, as Paul Krebs notes in another piece I'd assign writers, that error will happen, is in fact necessary for writing to proceed, for thinking to improve, for learning to happen.

Error is a good thing, and to hide and treat it as a shame, to shame writers who make error, is to shame writing.

Thursday, May 15, 2014

How UT Austin Helps Students Graduate

Apologies for quoting heavily, but there's a lot in this article -- "Who Gets to Graduate" by Paul Tough at www.nytimes.com/2014/05/18/magazine/who-gets-to-graduate.html -- that applies to teaching and to professional development for teachers, whether those teachers are classroom instructors, TAs, or tutors.
When you look at the national statistics on college graduation rates, there are two big trends that stand out right away. The first is that there are a whole lot of students who make it to college — who show up on campus and enroll in classes — but never get their degrees. More than 40 percent of American students who start at four-year colleges haven’t earned a degree after six years. If you include community-college students in the tabulation, the dropout rate is more than half, worse than any other country except Hungary.
The second trend is that whether a student graduates or not seems to depend today almost entirely on just one factor — how much money his or her parents make. To put it in blunt terms: Rich kids graduate; poor and working-class kids don’t. Or to put it more statistically: About a quarter of college freshmen born into the bottom half of the income distribution will manage to collect a bachelor’s degree by age 24, while almost 90 percent of freshmen born into families in the top income quartile will go on to finish their degree.
When you read about those gaps, you might assume that they mostly have to do with ability. Rich kids do better on the SAT, so of course they do better in college. But ability turns out to be a relatively minor factor behind this divide. If you compare college students with the same standardized-test scores who come from different family backgrounds, you find that their educational outcomes reflect their parents’ income, not their test scores. Take students like Vanessa, who do moderately well on standardized tests — scoring between 1,000 and 1,200 out of 1,600 on the SAT. If those students come from families in the top-income quartile, they have a 2 in 3 chance of graduating with a four-year degree. If they come from families in the bottom quartile, they have just a 1 in 6 chance of making it to graduation.

The full article begins with a profile of University of Texas, Austin student Vanessa Brewer, the challenges she faces and the choices she makes, putting a face and story on the kind of data above, and then shifts into a profile of UT's David Laude, a chemistry professor by degree who is now a senior vice provost focusing on improving graduation rates.
In 1999, at the beginning of the fall semester, Laude combed through the records of every student in his freshman chemistry class and identified about 50 who possessed at least two of the “adversity indicators” common among students who failed the course in the past: low SATs, low family income, less-educated parents. He invited them all to apply to a new program, which he would later give the august-sounding name the Texas Interdisciplinary Plan, or TIP. Students in TIP were placed in their own, smaller section of Chemistry 301, taught by Laude. But rather than dumb down the curriculum for them, Laude insisted that they master exactly the same challenging material as the students in his larger section. In fact, he scheduled his two sections back to back. “I taught my 500-student chemistry class, and then I walked upstairs and I taught this 50-student chemistry class,” Laude explained. “Identical material, identical lectures, identical tests — but a 200-point difference in average SAT scores between the two sections.”
Laude was hopeful that the small classes would make a difference, but he recognized that small classes alone wouldn’t overcome that 200-point SAT gap. “We weren’t naïve enough to think they were just going to show up and start getting A’s, unless we overwhelmed them with the kind of support that would make it possible for them to be successful,” he said. So he supplemented his lectures with a variety of strategies: He offered TIP students two hours each week of extra instruction; he assigned them advisers who kept in close contact with them and intervened if the students ran into trouble or fell behind; he found upperclassmen to work with the TIP students one on one, as peer mentors. And he did everything he could, both in his lectures and outside the classroom, to convey to the TIP students a new sense of identity: They weren’t subpar students who needed help; they were part of a community of high-achieving scholars.
Tough goes on to profile one more person in some detail, David Yeager, a UT assistant professor who does work on the psychology of education and who studied how best to help students through their anxiety about college, especially students like Vanessa Brewer.

Before he arrived at U.T. in the winter of 2012, Yeager worked as a graduate student in the psychology department at Stanford, during an era when that department had become a hotbed of new thinking on the psychology of education. Leading researchers like Carol Dweck, Claude Steele and Hazel Markus were using experimental methods to delve into the experience of students from early childhood all the way through college. To the extent that the Stanford researchers shared a unifying vision, it was the belief that students were often blocked from living up to their potential by the presence of certain fears and anxieties and doubts about their ability. These feelings were especially virulent at moments of educational transition — like the freshman year of high school or the freshman year of college. And they seemed to be particularly debilitating among members of groups that felt themselves to be under some special threat or scrutiny: women in engineering programs, first-generation college students, African-Americans in the Ivy League.

The negative thoughts took different forms in each individual, of course, but they mostly gathered around two ideas. One set of thoughts was about belonging. Students in transition often experienced profound doubts about whether they really belonged — or could ever belong — in their new institution. The other was connected to ability. Many students believed in what Carol Dweck had named an entity theory of intelligence — that intelligence was a fixed quality that was impossible to improve through practice or study. And so when they experienced cues that might suggest that they weren’t smart or academically able — a bad grade on a test, for instance — they would often interpret those as a sign that they could never succeed. Doubts about belonging and doubts about ability often fed on each other, and together they created a sense of helplessness. That helplessness dissuaded students from taking any steps to change things. Why study if I can’t get smarter? Why go out and meet new friends if no one will want to talk to me anyway? Before long, the nagging doubts became self-fulfilling prophecies.
[ . . . ]

Yeager began working with a professor of social psychology named Greg Walton, who had identified principles that seemed to govern which messages, and which methods of delivering those messages, were most persuasive to students. For instance, messages worked better if they appealed to social norms; when college students are informed that most students don’t take part in binge drinking, they’re less likely to binge-drink themselves. Messages were also more effective if they were delivered in a way that allowed the recipients a sense of autonomy. If you march all the high-school juniors into the auditorium and force them to watch a play about tolerance and inclusion, they’re less likely to take the message to heart than if they feel as if they are independently seeking it out. And positive messages are more effectively absorbed when they are experienced through what Walton called “self-persuasion”: if students watch a video or read an essay with a particular message and then write their own essay or make their own video to persuade future students, they internalize the message more deeply.

The piece goes on to explore Laude's creation of a "University Leadership Network, a new scholarship program that aims to develop not academic skills but leadership skills. In order to be selected for U.L.N., incoming freshmen must not only fall below the 40-percent cutoff on the Dashboard; they must also have what the financial-aid office calls unmet financial need. In practice, this means that students in U.L.N. are almost all from families with incomes below the national median."

There's a lot of useful detail and insight in the piece; it's well worth reading in full.

Sunday, July 21, 2013

Blogging via Moleskine: the Pedagogy of Transcription


Writing in Slate, blogger Justin Peters describes his blog-post writing process:
I’ve always enjoyed writing things by hand, but I didn’t formalize the process until I started blogging daily for Slate. Almost every morning, before the day starts and I start drowning in emails, I go to a coffee shop with a pen and a small Moleskine notebook. There, I try to conceive and write drafts of two separate posts before 10:30 a.m. Then, it’s back to my apartment, where I shed my pants, transcribe, and refine what I’ve written. (One of the nice things about writing my posts by hand is that it allows for a built-in revision process.) 
I can write in my notebook anywhere and everywhere. I will frequently bring it with me and dash off a rough draft while in transit—waiting for the subway, sitting on a bus. This is very convenient, as it allows me to be productive on the go without having to own a smartphone. (My current cellphone is at least 10 years old … but that’s a story for another time.)
This reminds me of something I saw close to thirty years ago -- I forget where, maybe 60 Minutes? -- a profile of Woody Allen in which he described how he'd write short pieces for the New Yorker in longhand on a legal pad before typing them for submission. But more to the point, Peters's observation on revision recalls this bit from Craig Fehrman:
In the last 30 years, however, technology has shifted again, and our ideas about writing and revising are changing along with it. Today, most of us compose directly on our computers. Instead of generating physical page after physical page, which we can then reread and reorder, we now create a living document that, increasingly, is not printed at all until it becomes a final, published product. While this makes self-editing easier, Sullivan thinks it may paradoxically make wholesale revision, the kind that leads to radically rethinking our work, more difficult.
Fehrman's piece draws much from the work of Hannah Sullivan and her recently published The Work of Revision (http://www.hup.harvard.edu/catalog.php?isbn=9780674073128). Sullivan's main focus is on how technologies -- cheap paper, typewriters for faster drafting -- made possible the kind of revision that literary writers (Hemingway, Woolf, Pound, Eliot, and other modernists she studies) did. But what's key is that the act of switching -- or not switching -- composing media, from pen-and-paper notebook draft to online draft (or not), alters how writers revise.

In teaching writing, then, there's a virtue to be found and built on in urging writers to use varied writing tools, everything from audio notes recorded via a phone, to a series of tweets, to notes on napkins, to drafts in notebooks, to writing in discussion boards, to drafts in a word processor, or entries in a blog, and so on. Moving one's thinking on an idea from writing space to writing space, from writing tool to writing tool, from writing occasion to writing occasion lets an idea both age and blossom in the way wine does. With any luck, no idea would be served before its time (and place and purpose and audience).

Wednesday, July 10, 2013

On the Death of My Wife's Cat

Ebony, beloved cat of Barbara Crowley-Carbone, died, after 20 years of devotedly sleeping on her head most nights, in his owner's lap on Sunday, July 7. He is buried in our backyard, in a place that will become a small flowerbed, likely perennial bulbs that will bloom early each spring. Ebony was predeceased by a brother, Bogart, who died of kidney failure in 2004, and a sister, Simba, the runt of their litter, who died at two weeks old, found tightly curled -- and very stiff -- on a sun-streamed bedroom pillow. All three cats were as dark as Ebony's name, with small triangular Siamese faces, and, if you looked closely, tiger stripes of a deeper black.

Ebony lived to, and died from, old age, and his final months came with weaker vision, less spring, confusion that left him wailing on occasion, and, towards the very end, it seemed, the search for a quiet place to let go of living. And so we'd find him in places he never went before: a very tight spot behind a bookcase, behind a book bag under an old school desk, asleep on top of a basket of keys that sits atop the cabinet where we store coffee and tea, and, for some reason only after a bath, in the litter box. But still, his favorite place to sit was on my wife; if she sat on the couch to grade, he'd walk over her legs, across the math-paper pile on her lap, and climb to her neck, and settle, with his head under her ear, forelegs over her shoulder, and rump on her breast.

Combined with summer heat and a new-to-her cat allergy, the location wasn't the best of places, but as often as not Barbara would take a Zyrtec, shift the papers to her left and his rump a bit to the right, adjust her bra, and let him be, knowing he had little time left.  And on the not-times, when things were too hot or itchy for her, he'd wander over and sit on me. I don't like cats on me. The rule of thumb for pets in our house is that they don't belong to me. I don't feed them, hold them, groom them, pick up after them. I'd occasionally flick a string at the cat to get him to jump, or throw a toy his way, but I was just as likely to toss a pillow to get him off the bed, or a dish towel to get him down from the table. Still, as he got weaker and weirder, it got harder to push him away, and so in the last few weeks he'd come to me, searching for the same location. I'd tuck a couch pillow on my lap, lacking as I do the perch he found with my wife, so he could climb to where he wanted to be. It made reading or writing a trick, and so when he came my way, I'd often switch to a gin-and-tonic and Netflix moment as a bribe to myself to be kind to him in his dotage dolorous.


Ebony, in his dotage, allowed on the coffee table so he can look at the skinks.

Still, for all the odd crying jags, the more frequent search for new quiet spaces, the increased lap time, there were moments when Ebony would be, or try to be, himself: his appetite stayed good until the last day or so, when he switched to water; he'd pop his head up at the window when a rabbit stopped outside of it to eat the clover that grows in our yard; if one of my daughters trailed a cat toy, he'd follow after it (though they had to play the game in slow motion); and he'd come to the table for scraps, sniffing and beseeching for a nibble of pork, chicken, fish, or popcorn if we had a fresh-popped batch.

My kids got in the habit of sneaking him food because when Bogart, his brother, was alive, if Ebony didn't finish his breakfast before Bogart had finished his own, Bogart would Bogart Ebony's food, shoving him out of the dish with a head bump and a look. So Bogie was plump, Ebony thin, and my daughters, in empathy, got him hooked on scraps. And that habit held on to the end, an end marked by the things that come with getting old, punctuated by habits from a life lived. He carried on, doing what he could on his own when he could, crying honestly when he was lonely and confused, seeking time with those he was about to leave behind, and, despite occasionally doing things that would be embarrassing -- sleeping in a litter box when wet of fur -- seeming never ashamed of who he was and how his days were spent.

Strange to say, for a pet that wasn't mine, whom I grudgingly acknowledged, and certainly didn't love, I do miss his presence. I think it's maybe the same way George Bailey discovers, in It's a Wonderful Life, that he's happy to see even the exasperating broken newel on his staircase. Don't get me wrong, I'm glad to be one pet less, and that much closer to no litter dust or stench, no pet-food odor, no scratch marks, no dander, and all kinds of wonderful no-mores to come (after the remaining cat moves out with my daughter, if she can ever afford to move out). And so, as happy as I am to have cat-things diminished, Ebony will be missed. Maybe because when he begged for a bite, the kids had a running joke they'd make, or when he did one of those dumb things cats do, like slide in panic on a newly polished floor, my wife would laugh a certain way because it was him, and that laugh, that one special one for that occasion, won't be heard again.

Thursday, June 13, 2013

Students: We'll Give You Our Print Textbooks When You Pry Them From Our Cold, Dead Hands

Digital Book World did a $45 webcast yesterday on e-textbook trends. Happily, they provided a free summary today, because the news isn't new; instead, it confirms what we already knew.
While publishers are increasingly creating and selling digital materials and students increasingly have the devices on which to consume that content, only 3% of students last semester used a digital textbook as their primary course material (for a specific course). That’s down from 4% for the fall semester.

Overwhelmingly, students prefer print, according to the survey of 1,540 undergraduate college students at both four-year and two-year institutions of higher education.

When asked why, about half “prefer the look and feel of print;” nearly half say they like to highlight and take notes in the textbooks; and a third cite that they can’t re-sell digital textbooks.
So it goes.

Carl Kulo, U.S. director of Bowker Market Research, who presented the research, predicts e-textbooks will take off in the standard prediction range of 3 - 5 years, the same takeoff window that's been predicted for the past 15 - 20 years. Hey, don't laugh: predict this often enough and it's bound to be right.

But digital sales are increasing -- "Despite the stagnation of digital textbook adoption, some publishers are reporting that a significant portion of their revenue is 'digital'" -- with Pearson reporting that 50 percent of its revenue is digital.

So let's ask this: if e-textbooks aren't selling, what is? Homework systems? Tools like e-portfolios? Course design and faculty development services delivered digitally?

What purpose does a textbook serve? To dump information, advice, activities, assignments, guidelines into a single device -- for years a print device bound by covers and held by string and glue -- so that an instructor could direct students to read, do, discuss, remember, write about, lab about, and even learn about what the book covered.

Students are right -- simply taking that print thing, dumping it into a PDF, and delivering it on a screen does suck compared to having the print book. It's like the difference between drinking a fresh gin and tonic on a sunny afternoon on the back porch overlooking a quiet beach and drinking a gin and tonic that was poured a few days ago and delivered flat and diluted for your drinking pleasure in a room with no a.c., where the view is of the brick wall across from your stuck-shut window. Sure, it's still a gin and tonic, but it doesn't look or taste the same, and it isn't improved by being in a hot room with no breeze and no view.

Now we do PDF-based e-books because they're cheap and fast, make more money than they lose for being cheap and fast, and allow us to tell the market we have lower-priced options and books that can be read on an iPhone (even though Bowker finds most students still prefer laptops), among other reasons.

E-textbooks will take off -- are taking off, in fact -- when and where the word 'book' means something different from what the print thing is. Where the purposes and contents and activities once in print live a native digital life, with a digital rationale, where things are unbound from covers and rebound to learning purposes and needs.

Tuesday, May 28, 2013

Unsolicited Solicitations Solutions

From: Carbone, Nick
Sent: Tuesday, May 28, 2013 2:40 PM
To: 'Breanna Colin'
Subject: RE: Publishing Industry Prospects


I am an F level executive at a D level subsidiary of a B level conglomerate.  I make no decisions and being where I am, only know by rumor the names of people, not how to contact them were I ever to dare to, who could make a decision about purchasing a database such as yours. We generally try not to know these things as the news from others in the business is usually glum, depressing us about our prospects, or glad, depressing us even more than bad news because they’re invariably doing better.

If I happen to be in your database, it's probably best, for your product's reputation, that I be removed. If you've ever ordered ice-cream for dessert at a nice restaurant and found a used band-aid in the scoop after your second or third spoonful, you'll know what having me in your database will be like for your customers.

Nick Carbone
Director of Digital Teaching and Learning
ncarbone AT bedfordstmartins DOT GOES HERE com
phone: (six one seven) two seven five – one eight seven two

_____________________________________________________________________________________________________________________

From: B*** C*** 
Sent: Tuesday, May 28, 2013 2:34 PM
To: Carbone, Nick
Subject: Publishing Industry Prospects
Importance: High


Hi,

I hope you are the right person to discuss about Printing & Publishing Database which include complete contact details including verified email addresses of Owners, Presidents, C-Level Executives & all other Decision Makers from:

  • News Paper Publishing Companies
  • Paper Manufacturing Companies
  • Commercial Printing Companies
  • Periodical Publishing Companies
  • Book Printing Companies
  • Signage Printing Companies
  • Miscellaneous Publishing Companies
  • Marketing & Advertising Companies
  • Many more Companies
  • And also other industries as per your target specifics

Please let me know your thoughts with geography, so that I can send you the costs of the lists and we could assist you in a much better way.

Also, we provide the database of key professionals from various businesses as per the target criteria. These are recently updated contacts and you can be amongst the quickest to reach out to these professionals

Each record in the list comes with complete contact information such as Contact name, title, company name, mailing address, phone, fax, email, employees, annual sales, SIC code, industry and website beneficial for multi-channel marketing purposes.

If the above listed are not your prospects, please let me know your target market and I will go ahead and email the number of records that we will be able to provide you.
  • We also provide Data Appending services (Append missing information like Email address, Telephone numbers etc to you in house database).
  • We also provide De-duping Services (All you need to do is send just the partial information from your existing file(may be just the company names or the email domains)& we usually run a de-dupe process through our data centre with our master database to determine only the net new contacts (the contacts apart from your existing ones).

Please review and let me know your thoughts.

Awaiting your response.

Regards,
B**** C*****
Business Development Executive
Ph no : ###-###-####

Note :To Unsubscribe mention the same in the subject line