If you want to work out what the people are thinking, one thing is for sure: you can’t just go out and ask them.

The failures of political polling over recent elections have taught us that opinion surveys can no longer be trusted. If you were betting on the winner, you would have been better off putting your money on the predicted losers.

This was a $5.2 million lesson for betting company Sportsbet when it pre-emptively paid out Bill Shorten backers, two days early, because seven out of every ten wagers supported a Labor win in May. Labor lost; the gamblers got it wrong.

And it is not just the polling and betting companies that have lost credibility as truth-telling tools. Science is having its own crisis over the quality of peer-reviewed research.

Just one sleuth, John Carlisle (an anaesthetist in the UK with time on his hands), has discovered problems in clinical research, leading to the retraction and correction of hundreds of papers because of misconduct and mistakes.

The world of commerce is no better at ensuring that decisions are backed by valid, scientific research. Too often, companies employ consultants who design feedback surveys to tell clients what they want to hear, or employers hire people based on personality questionnaires of dubious provenance.

Things are further complicated by poor survey questions, untruthful answers, failures of memory and survey fatigue (36 per cent of employees report receiving surveys regularly, three or more times per year).

Why bother?

All of this may give rise to the notion that asking people for their opinion is an utter waste of time. However, that is not the conclusion drawn by Adrian Barnett, president of the Statistical Society of Australia and professor at the Queensland University of Technology.

Barnett, who studies the value of health and medical research, says people should view all surveys with a healthy scepticism, but there is no substitute for a survey with a good representative sample.

“I do think there is a problem, yes, but it is potentially overblown, or overstated,” he says.

“We know that, in theory, we can find out what the whole population is feeling by taking just a small sample and extrapolating up. We know it works and it’s a brilliant, cheap way of finding out all sorts of things about the country and about your customers,” he says.

However, it is getting harder to get that representative sample. As people have replaced their landlines with mobile phones, researchers can no longer rely on the telephone book to source an adequate spread of interviewees. And, even if they make contact via a mobile phone, people are now reluctant to answer calls from unknown numbers in case they are scammers, charities … or market researchers.

“[Also] on controversial topics, it can be extremely challenging to get people to talk to you,” Barnett says.

You need the right people

Reluctance to participate is one of the problems identified in political polling. In a post-election blog, private pollster Raphaella Crosby described the issue: “You can have a great, balanced, geographically distributed panel such as ours or YouGov’s – but it was very difficult to get conservatives to respond in the last three weeks.

“I presume phone pollsters had the same issue – the Coalition voters just hang up the phone, in the same way they ignored our emails. All surveys and polls are opt-in; you simply can’t make people who think their party is going to lose do a survey to say they’re voting for a loser.”

The Pew Research Center reports that response rates to telephone surveys in the US are down to 6 per cent.

The polling industry is conducting an inquiry into election polling methods, which include a combination of calling landlines and mobile phones, robo-dialling and internet surveys. Each of these channels can introduce biases, and then there can be errors of analysis and a tendency towards “groupthink”.

However, Barnett says the same problems do not necessarily hamper market research.

Market research does not usually require the large pool of participants (up to 1,600 is common in pre-election polls) needed to narrow the margin of error. A business can question a small number of customers and get a clear indication of preferences, he says.
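
To give a rough sense of why pre-election samples are that size (an illustrative sketch, not part of Barnett’s remarks, assuming a simple random sample, a 50/50 split and the usual 95 per cent confidence level), the margin of error shrinks only with the square root of the sample size:

```python
import math

def margin_of_error(sample_size: int, z: float = 1.96) -> float:
    """Worst-case (50/50 split) margin of error for a simple random sample,
    at roughly 95 per cent confidence (z ≈ 1.96)."""
    return z * math.sqrt(0.25 / sample_size)

# The sample must quadruple for the margin of error to halve:
for n in (100, 400, 1600):
    print(f"n = {n}: about ±{margin_of_error(n) * 100:.1f} percentage points")
# roughly ±10, ±5 and ±2.5 points for n = 100, 400 and 1,600 respectively
```

That best-case arithmetic only holds if the sample is genuinely random and representative, which, as Barnett notes, is the harder part in practice.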

Identifying a representative sample of customers is also much easier than selecting a random sample of voters to represent an entire population.

Putting employees to the test

When it comes to business, the use of engagement surveys presents an interesting case. Businesses spend billions of dollars every year trying to increase employee engagement, yet the engagement survey statistics show little benefit.

According to polling company Gallup, a mere 14 per cent of Australians are engaged in their work, “showing up every day with enthusiasm and the motivation to be highly productive”. This is down from 24 per cent six years earlier.

Jon Williams has 30 years of experience to back up a jaundiced view of the way employee surveys are used. Co-founder of management consultancy Fifth Frame, Williams was previously PwC’s global leader of its people and organisation practice, managing principal at Gallup in Australia, and managing director of Hewitt Associates (Aon Hewitt).

“Clearly, engaged places are better places to be and, if we are going to work on that stuff, we are going to create better workplaces. But can it actually be linked to success? Does it really drive more successful companies? I think you would struggle to really prove that.”

Williams says people fail to understand that correlation is not causation. A company may put a lot of effort into engagement and also be very successful, but that success may, in fact, stem from other factors such as its place in the market, timing, dynamic leadership or the economy.

“It is a false attribution because we love the idea of certainty and predictability,” he says.

This same desire to codify success also drives the use of personality testing in recruitment – which appears to have done little to rid workplaces of bullies, psychopaths and frauds.

“Business just wants something that looks like a shiny tool with a brand name on it that they can assume, or pretend, is efficient,” he says.

“People like [the tests] because they give the appearance of rigour. Very few of those tools have any predictive reliability at all.”

Williams says the only two personality measures that have any correlation with job success are intelligence and conscientiousness.

Meanwhile, many organisations still ask their employees to undertake an assessment with the Myers-Briggs Type Indicator – a personality test that divides people into 16 personality types and is based on the work of a mother-daughter team who had no training in psychology or testing, but a devotion to the theories of Swiss psychiatrist Carl Jung.

“Myers Briggs has the same predictive validity as horoscopes. Horoscopes are great for starting a conversation about who you are as a person. Pretending it is scientific, or noble, is obviously stupid,” says Williams.

What can we do better?

Williams is not advocating that organisations stop asking employees what they think. He says, instead, that they should reassess how they regard that information.

“Don’t religiously follow one tool and think that is the source of all knowledge. Use different tools at different times. Then, don’t keep measuring the same thing for the next five years, because you’ve done it [already]. Go to a different tool, use multiple inputs. Just use all of them intelligently as interesting pieces of data.”

Barnett has his own suggestions to increase the reliability of surveys. Postal surveys may seem rather “old school” these days, but they allow researchers to engage a bit more and establish their bona fides with people who are (justifiably) suspicious of attempts to “pick their brains”.

Acknowledging that the interviewees’ time is valuable can help elicit honest, thoughtful responses. Something as simple as including a voucher for a free coffee or a chocolate can show that their time is valued.

Adding a personal touch, such as using a real stamp on the return envelope, will also encourage participation. “It shows you spent time reaching out to them,” Barnett says.

Citizens’ juries can also help make big decisions, using a panel of people to represent customers or a population.

Infrastructure Victoria used this technique in February when it set up a 38-member community panel to consider changing the way Victorians pay for the transport network.

“We know there are problems with [citizens’ juries], but the reason we have them is that you are being judged by your peers. If you can get a representative bunch of your customers, then I think that is an interesting idea,” he says.

“You really might not like what they say, or you may be surprised by what they say – but those surprising results can sometimes be the best in a way, because it may be something you have been missing for a long time.”

Questions you should ask

The American writer Mark Twain was well aware that numbers can be contrived to back any argument.

“Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: ‘There are three kinds of lies: lies, damned lies, and statistics’,” he wrote.

The observation could be made of many of the survey results found online. Many have been produced as a marketing exercise by companies and reproduced by others who want numbers to add some weight to their arguments.

However, rather than accepting at face value survey claims that 30 per cent of the population thinks this or that, some basic checks can help determine whether the research is valid.

A good place to start is the Australian Press Council’s guide for editors dealing with previously unpublished opinion poll results, which says reports should include:

  1. The name of the organisation that carried out the poll
  2. The identity of any sponsor or funder
  3. The exact wording of the questions asked
  4. A definition of the population from which the sample was drawn
  5. The sample size and method of sampling
  6. The dates when the interviews were carried out
  7. How the interviews were carried out (in person, by telephone, by mail, online, or robocall)
  8. The margin of error

It is also worth asking where the participants were found and whether they are typical of the whole population of interest.

If only a small proportion of people responded, the survey may be biased towards people who have strong feelings about the subject.
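
As a toy illustration of that non-response bias (the population and response rates below are invented for illustration, not drawn from any survey discussed here):

```python
# Hypothetical population: exactly half of 10,000 people support a proposal,
# but supporters who feel strongly about it are far more likely to reply.
population = 10_000
supporters = population // 2                         # 5,000 genuinely in favour

strongly_for = int(supporters * 0.3)                 # assume 30% of supporters feel strongly
replies_for = strongly_for * 0.20 + (supporters - strongly_for) * 0.05
replies_against = (population - supporters) * 0.05   # everyone else replies at 5%

observed = replies_for / (replies_for + replies_against)
print(f"True support: 50%; the survey would report about {observed:.0%}")
# With these assumed response rates, the survey overstates support at roughly 66%.
```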

If the subjects were paid, this might affect their answers.

In the UK, a public information campaign, Ask For Evidence, encourages people to request for themselves the evidence behind news stories, marketing claims and policies.

This article was originally written for The Ethics Alliance.