No, There Weren’t Only 8 Bad Albums in the Last 4 Years

Rest assured, the negative music review is alive and well

    Music fans used to read a lot more album reviews, probably because listening to music used to cost a lot more money. Even a single album was an investment, and reviews were meant to help you make good investments. Today, you can stream 20 different albums in a single day and it’ll cost you nothing but time.

    So readers are reading fewer reviews, and in response, many publications are cutting back on the staffs that write them. But as the genre shrinks, is it also getting nicer?

    On Sunday, Neil Shah at the Wall Street Journal asked, “What happened to the negative music review?” Shah talks to many industry professionals who all agree that failing grades are getting rarer and rarer. But Shah goes further and highlights some evidence that suggests negative reviews have all but disappeared.

    The source is Metacritic.com, a review aggregator. The site collects reviews from multiple publications, converts each one into a grade on a scale of 0-100, and averages the scores. The result is called a Metascore.

    As Shah reports, according to Metacritic.com, critics have only given what the website defines as a “Generally Unfavorable” review — a Metascore of less than 40 out of 100 — eight times in the past four and a half years. Critics, according to this analysis, pretty much liked everything.

    I respect Shah as a journalist and admire him for asking interesting questions. But this analysis is flawed, and there are better ways to understand the state of modern criticism. And no, there weren’t only eight bad records in the last four years.

    The Metascore is not an objective rating system. First of all, some publications, such as The New York Times, don’t include grades or stars alongside their reviews. A Metacritic employee is tasked with reading those reviews and subjectively assigning each one a score from 0-100.

    Of course, Metacritic doesn’t include every single review in the world, because that would be impossible. Instead, a list of tracked publications is “curated.” Consequence of Sound is on the list, as it happens. According to Metacritic’s own numbers, the average CoS review is 5.2 points lower than the overall average (again, on a scale of 0-100). In contrast, Entertainment Weekly is 5.4 points higher — that is, 5.4 points more positive than average.

    Not all reviewers are equal, according to Metacritic. As the website puts it, they “assign more importance, or weight, to some critics and publications than others, based on their quality and overall stature.” So never mind that a ‘B’ from Entertainment Weekly doesn’t necessarily mean the same thing as a ‘B’ from CoS or Pitchfork or whichever outlet you prefer. One of those two ‘B’s carries more weight in the Metascore than the other, based on a subjective judgment of “quality and stature.” And that weighting is not public.
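
    To see how much weighting can matter, here is a minimal sketch of a weighted average of normalized review scores. To be clear, the scores and weights below are invented for illustration; Metacritic does not publish its actual conversion tables or weights.

```python
# Illustrative sketch only: Metacritic's real normalization and weights are not
# public, so every score and weight below is hypothetical.

def metascore(reviews):
    """Weighted average of already-normalized 0-100 review scores."""
    total_weight = sum(weight for _, weight in reviews)
    return round(sum(score * weight for score, weight in reviews) / total_weight)

# Hypothetical reviews: (normalized score, editorial weight)
reviews = [
    (83, 1.5),  # an outlet granted more "quality and stature"
    (58, 1.0),  # an outlet granted less
    (66, 1.0),
]

print(metascore(reviews))                                # weighted average: 71
print(round(sum(s for s, _ in reviews) / len(reviews)))  # unweighted average: 69
```

    Even with identical reviews, in other words, the final number shifts depending on which outlets the aggregator decides to trust more.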

    Furthermore, we know that ratings websites can be miscalibrated or biased, either purposefully or by accident. The data journalism site FiveThirtyEight did a great breakdown of ratings fraud at Fandango. But Metacritic doesn’t have to be fudging the numbers in order for their ratings to be miscalibrated. In fact, I think they’re simply mislabelled.

    According to Metacritic, the average music review score is a 67, which is labelled a “Generally Favorable Review.” A Metascore of 61 — a below-average score — is also labelled “Generally Favorable.” A score of 40, which is well below average, is labelled a “Mixed or Average Review.” That is why Metacritic has rated only eight albums “Generally Unfavorable” in a four-and-a-half-year span. The label “average” has almost nothing to do with the actual average score.
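
    To make the mismatch concrete, here is a small sketch that applies the cutoffs described above (61 and 40) to a handful of scores. The cutoffs come from the article’s description of Metacritic’s bands; the rest is just arithmetic.

```python
# Label bands as described above: 61+ is "Generally Favorable", 40-60 is
# "Mixed or Average", below 40 is "Generally Unfavorable".
# This is a relabeling exercise, not Metacritic's code.

def label(metascore):
    if metascore >= 61:
        return "Generally Favorable"
    if metascore >= 40:
        return "Mixed or Average"
    return "Generally Unfavorable"

AVERAGE_METASCORE = 67  # the site-wide average cited above

for score in (67, 61, 40, 39):
    gap = score - AVERAGE_METASCORE
    print(f"{score} ({gap:+d} vs. the actual average): {label(score)}")

# A 61 sits six points below the real average and still reads "Generally Favorable";
# an album has to land 28 points below average before it is called unfavorable.
```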

    But even if Metacritic were calibrated correctly, high scores aren’t necessarily a problem. At the end of the day, you should expect professional musicians to be competent.

    When you were a senior in high school, you probably had very few classmates who were getting ‘F’s. That’s because most of the students likely to get an ‘F’ never made it to senior year. This is called survivorship bias. And if your school claims that its students graduate with an average GPA of, say, 3.0, that number is much higher than it would have been if the dropouts were included.
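
    The arithmetic is easy to see with made-up numbers. In the toy example below every GPA is invented, but the mechanism is the point: drop the lowest performers from the sample and the average climbs on its own.

```python
# Toy illustration of survivorship bias; all GPAs are invented.

graduates = [3.8, 3.4, 3.1, 2.9, 2.8]   # students who made it to senior year
dropouts = [1.9, 1.6, 1.2]              # students who left before graduating

def average(grades):
    return round(sum(grades) / len(grades), 2)

print(average(graduates))             # 3.2  -> the GPA the school reports
print(average(graduates + dropouts))  # 2.59 -> the GPA if no one had been weeded out
```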

    The same goes for music. The kind of artist capable of releasing a genuine, consensus ‘F’ rarely gets the chance to make another album. Most albums are at least competent, because incompetent people get weeded out. This should not be a surprise.

    Having said that, there is such a thing as grade inflation, and many experts seem to agree that it’s happening in music reviews. Even if that is true, why is it a problem?

    There are eras of baseball history where a .250 hitter could be a Hall of Famer, and there are eras where that same stat gets you a bus ride to the minor leagues. Averages change. That’s not good or bad; it’s just something to be aware of. We are currently in a more positive review environment. Perhaps, in a few years, scathing take-downs will become the norm (the Internet isn’t far away from that already), and today’s “B” will become tomorrow’s “C.”

    Finally, let’s not forget that these grades and scores are nothing but made-up numbers, and the whole obsession with them is absurd. Art is not sport; there’s no such thing as a champion album, a best of the best. The value of reviews, insofar as they have value, is not that they rate music; it’s that they help us understand it. A good review can make new songs part of a broader conversation that stretches back to primitive dances and campfire songs. Music is ancient, part of what it means to be human. It’s important that the conversation continue. If the old review is dying, then critics had better start figuring out what comes next.
