Everyone pretends they’re real…
That is until a radio station like Entercom’s The Rock 98.9 gets embarrassingly low ratings, then out come the excuses alongside the specific demographic and hourly breakdowns to explain away the bad news.
Welcome to the world of radio ratings!
It’s how the ad sales game is played and it’s evolved into high art.
Radio sales jockeys and ad agency media buyers have become masterful and convincing in their explanations and presentations to clients and members of the media.
Yet the dirty little secret – the flawed, unexplained “science” behind those ratings – is just that: flawed, inexact and highly questionable.
However, it’s the only game in town at present.
The inherent problems with radio ratings include ridiculously small sample sizes, questionable “technology” and zero oversight of – or insight into – the exact methodology being employed.
In other words, smoke and mirrors – Wizard of Oz stuff.
Check out what media experts have been saying about these ratings for years:
In “The Unfortunate Farce of Radio Ratings,” Mark Ramsey writes:
“I just delivered a research project for a broadcaster in a relatively small market. The study contained the opinions of 600 people. Now this market, like virtually all markets, has its radio usage measured by Nielsen – in this case, by diaries.
“Do you know how long it takes Nielsen to recruit a sample in this market as large as the sample in my research project? Two years. That’s right. The sample sizes in markets like this one – and markets like yours – are almost laughably small. In fact, we would all laugh if our direct incentive was to do anything but cry.”
Because in the wide world of polls and surveys, size matters.
One KC radio professional told me he’s heard Nielsen puts out 800 people meters for the entire KC metro to guesstimate what two million locals might be listening to.
Think about it: that’s to cover everything from midtown Kansas City, Missouri, to KCK, Blue Springs, Olathe, Independence, Shawnee, Overland Park, Grandview and Belton – the list goes on.
Thus the 12-plus ratings are derived from that microscopic sampling.
Now break that down into categories, like women 18-34 or men 25-54, and imagine how few folks – from who knows what part of town – are being cited as representing the listening habits of hundreds of thousands of people.
Yet radio sales execs have no quibble presenting these sliced and diced demographic breakdowns as if they were scientific fact.
In one study there were 147 women respondents between the ages of 18 and 34.
“Now if you take those 147 voters and divide them by age and sex and ethnicity and then spread out their behaviors over many day parts, dozens of stations, and, potentially, dozens of online streams, you have data which is militantly opposed to accuracy and is, rather, an illusion,” Ramsey adds. “And not a very good one, at that. In too many cases, you’re better off believing in the historical accuracy of Game of Thrones than in the accuracy of your ratings. But it’s not just an illusion, it’s a dangerous one.”
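Ramsey’s point can be put in rough numbers. Using standard survey math – the 95 percent margin of error for an estimated proportion, and assuming a simple random sample, which Nielsen’s panels are not guaranteed to be – a hypothetical station with a 5 share looks like this at different sample sizes (the 5-share figure is an illustration, not a number from the article):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from a
    simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 5-share station (p = 0.05) estimated from the
# 147 women 18-34 in the study Ramsey cites:
moe_147 = margin_of_error(0.05, 147)
print(f"n=147: 5.0 share +/- {moe_147 * 100:.1f} points")

# The same station estimated from an 800-meter metro-wide panel:
moe_800 = margin_of_error(0.05, 800)
print(f"n=800: 5.0 share +/- {moe_800 * 100:.1f} points")
```

Even before slicing by daypart or ethnicity, a 5 share measured by 147 people carries a swing of several ratings points in either direction – and real panels, with uneven recruitment and heavy-listener dropouts, behave worse than this idealized calculation suggests.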
Consumer technology journalist Phil Baker and his wife agreed to carry Nielsen’s people meters late last year and were told they’d be paid up to $50 a month for doing so.
“A few days later, we each received a package that contained the device we’d carry with us,” Baker writes. “It was surprisingly bulky and archaic looking. In fact, it looked similar to one of the old SkyTel pagers from the mid 90s with its belt clip and small display. It came with a charging cradle and AC adapter, where it was to be put back each night to charge and to communicate the day’s results to Nielsen using its built-in cell modem…
“We both took our responsibilities seriously…Our focus was to be sure we used the units as much as possible…But after a few days, my wife started getting calls and emails from Nielsen reminding her to wear the device more often. She often didn’t wear the pager, just placed it next to her on the couch when she watched TV. She had no belt to clip it on or large pockets to carry it in. But that created a problem: after 30 minutes of detecting no motion, its light would start blinking, requiring her to move it to keep it engaged.
“Clearly, with its large belt clip, the device was not made for her nor for most women to conveniently carry. She surmised the product was likely designed by a man with little thought given to making it more convenient for women to use. I had to agree and wondered whether this might even bias the ratings…We finally decided participating was just too complicated and time consuming: the charging of the device, carrying it everywhere we went from waking up to going to bed, the incessant emails and calls, and signing off and on when going out of town.”
Baker’s insight into Nielsen’s methodology was telling.
“What was so surprising to me was, in this era where there are so many technically innovative products being developed, the Nielsen solution was so archaic and technically deficient, particularly when it impacts the shows we watch. You have to wonder how accurate their ratings are when their measuring methodology requires so much effort.
“And you would think Nielsen could be sampling a much larger and more diverse population using crowdsourcing through the use of an app on a smartphone, rather than using such primitive hardware. It would know exactly where we were at all times, could communicate the information to Nielsen’s cloud in real time, and we’d be more likely to have the phone with us all the time. Clearly this is an opportunity for another company to do a much better job… Based on this experience, when I now look at Nielsen ratings, I have a lot less confidence in their numbers.”
Late last month, radio trade pub Inside Radio called Nielsen’s ratings into question.
“(Nielsen’s) sample sizes have long been seen as inadequate…” Inside Radio wrote. “Broadcasters complain that a significant change in listening in one household, or the removal of a heavy-listening household, can have a major impact on their ratings. Larger sample sizes would reduce the inordinate impact individual households can have on the ratings and improve sample quality. ‘We need bigger samples,’ says one broadcast researcher. ‘There are so few people in the sample that listen to a lot of radio’ that when one or two heavy listening households are removed, ‘the ratings go haywire,’ the exec says.”
Inside Radio’s bottom line:
“Nielsen is under pressure to improve its local rating services—in both TV and radio—and small sample sizes are an issue for both industries.”
There you have it.
A for-profit business supplying questionable, clandestine marketing data to three for-profit industries – radio, television and ad agencies – by employing archaic methodology and ridiculously small sample sizes with little to no oversight or transparency. All so stations’ rookie ad salesmen and naive media buyers can cash in on what’s left of a dying industry – terrestrial radio.
All of that said, don’t think the handwriting isn’t on the wall.
“Because the more your clients understand about the intricacies of the ratings system, the more likely they are to be appalled – particularly in the presence of precise metrics from online radio players like Pandora and Spotify and digital natives like Google and Facebook,” Ramsey concludes. “When I can go through Facebook’s ad creation process and arrive at a specific number of consumers who will be impacted by my messaging with no estimates or random guesses required, what is the long term effect of this on attitudes about media measurement?”
And this just in…
Johnny Dare‘s station The Rock 98.9’s 12-plus Nielsen numbers were up .6 of a ratings point for October.