So, you want stars, do you? Reviews by readers of your books, by diners
at your restaurant, by sleepers at your bed and breakfast, by
subscribers who view the movies you stream. Stars, likes, micro-reviews,
and more. They matter. "The Harvard
Business School’s Michael Luca has found, for example, that a one-star
uptick in a Yelp review can lead to a nine percent improvement in
revenues for independently owned restaurants. Other studies have shown a
similar impact for independent hotels—and for books," notes Tom
Vanderbilt, writing in The Wilson Quarterly (http://www.wilsonquarterly.com/article.cfm?aid=2292).
But as Vanderbilt goes on to say, the quality of the written reviews
can be paltry, and which reviewers get higher rankings has more to do
with reviewing early than with reviewing well; also, as an average
number of stars emerges, reviews that deviate from that average, even
if well-reasoned, get less respect.
My use of online reviews follows a split -- I'll skim for a technology
purchase, to buy a tool, or to hire an electrician or carpenter -- but
never for things like restaurants, hotels, music, books, films, and
others whose use is immediate enjoyment in consuming, watching,
reading, hearing, or tasting the thing bought.
Vanderbilt's piece goes on to focus on the quality of written reviews,
the stuff that justifies the stars. Most of what he finds in written
reviews isn't useful, because as so many opinions accumulate, it's hard
to know how to weigh the picture as a whole, especially in a review
system where stars seem generously given for often vague reasons. Who's
right -- the person who decrees the food the best ever and most
authentic, or the person who decries it as wretched, with poor service
to boot? I don't want to wade through that thicket.
What interests me as an editor, and as someone who occasionally writes
on things I've read a lot about or in areas where I've done a lot of
work, often for audiences who know my prior work and my name, is how
amateur reviewers often try to signal their authority. Vanderbilt
again: "If the Internet was supposed to wrest criticism from elites, a
good deal of the reviewing energy on Yelp (and other sites) is
precisely an effort to establish one’s bona fides. In the reviews for a
new seafood restaurant in my neighborhood, a number of the writers tout
themselves as “New Englanders,” thus implying that they implicitly know
of what they speak."
What professional reviews offer -- restaurant, movie, book, travel --
is a consistent view and voice. I don't need all critics to agree on a
movie before I plunk down dollars for a ticket, or all notices of a
restaurant to be raves. I just need a few critics whose views I
understand and can trust to be consistent. Even if I come to disagree
with their tastes, if I know their tastes, then I know what will work
for my own.
You don't get that kind of consistency in Yelpland, where the wisdom of
crowds seems nothing more than the aggregation of exhaustion.
But curiously, as more ratings trickle in, a study by business professors David Godes and Jose Silva has found, the average rating begins to decline. “The more reviews there are,” Godes and Silva suggest, “the lower the quality of the information available”; later reviewers tend to be either less serious or less disposed to like the book, or to respond to other reviewers rather than to the book itself. While one might think a five-star review would summon more passion than a four-star review, one study found that four-star reviews were, on average, longer.

So the crowds aren't so much wise as lazy, maybe? Another interesting tidbit Vanderbilt exposes: when people judge others' reviews, they tend to approve critiques of objects more readily than critiques of art. Put another way, a critique that resembles fact more than opinion will get more likes and favorites:
What consumers make of reviewers is also a fertile field of study. A team of Cornell University and Google researchers, for example, found that a review’s “helpfulness” rating falls as the review’s star rating deviates from that of the average review—as if it were being punished for straying. As the team noted, defining “helpfulness” is itself tricky: Did the review help people make a purchase, or were they rewarding it for conforming with what others were saying? There are a number of feedback effects: Early reviews tend to draw more helpfulness votes, simply because they’ve appeared online longer. The more votes a review has, the more its “default authority,” and the more votes it tends to attract.
As Temple University’s Susan Mudambi and David Schuff found, people tend to rate longer reviews for “search goods”—such as cameras or printers—more positively than those for “experience goods.” A strong negative review for a camera might reflect some discrete product failure (pictures were blurry), but a strong negative review for a book might simply be another person’s taste getting in the way.