For many British train passengers, there’s a sizeable disconnect between their own perceptions of UK rail services and the assessment from more official sources. While regular customers continue to complain about soaring fares, sweat-box carriages and overcrowded platforms, wider statistics suggest that the performance of train operating companies is on a steady upward curve.
Such was the case with the latest National Passenger Survey (NPS), a twice-yearly survey of British rail passengers undertaken by UK public transport watchdog Passenger Focus. The most recent NPS, which gathered the views of more than 30,000 passengers across the UK and was published at the end of January, revealed record overall passenger satisfaction levels of 85 per cent, with no train operating company (TOC) scoring less than 80 per cent.
Less than a month later, however, the NPS’s mostly sunny conclusions were met with a dissenting voice – one that seemed a better fit with the outlook of Britain’s beleaguered commuters. The second annual train satisfaction survey by consumer watchdog Which?, published in mid-February, made grimmer reading for the industry. The survey of around 7,500 regular rail users found that more than half of the UK’s TOCs had a customer satisfaction rating of 50 per cent or lower, and despite regular fare increases, a mere 22 per cent of those surveyed felt that services were improving.
At first glance, it seems an impossible scenario – two passenger satisfaction surveys, published within weeks of each other, recording wildly opposing satisfaction levels for the same group of passengers. If we assume that both surveys can’t be right, there’s an obvious question: is the British public satisfied with its rail services or not?
On closer inspection, that assumption about the impossibility of both surveys being correct might be premature. These surveys vary drastically in their scope, objectives and, most importantly, methodology – a point that might have confused casual observers, since neither survey’s methodology is made completely clear in its accompanying press release.
By far the biggest difference between the surveys is the scope of questions asked of passengers. The Which? survey asked passengers to rate their train journeys over the last 12 months, based on two key criteria – overall satisfaction with the brand and the likelihood that the respondent would recommend the brand to a friend. The responses to these questions fed into an overall customer score.
The NPS, meanwhile, limited the scope of its questions to the journey that respondents had taken on the day they were surveyed. According to Passenger Focus, this method allows its survey to build a much more specific picture of the UK rail landscape, down to individual routes and times. Limiting questions to that day’s journey also helps combat the generally accepted principle that bad experiences have a greater effect on a customer’s perception of a brand than positive ones.
It would be easy to accuse Passenger Focus of tailoring the reach of its questions to paint a rosier picture of UK rail services, especially given its origin as a government-created organisation. After all, under the NPS a respondent could rate a single good journey positively, even if their last 10 trips had been a disaster. But it’s impossible to deny the advantages of the NPS’s methodology in supporting its goal of improving customer service through focussed feedback.
The perceived contradiction between these two surveys is ultimately a red herring. The surveys were conducted with different methods, and the idea that passengers could be happy with individual journeys yet dissatisfied with TOC performance over a longer stretch of time is not, in itself, a contradiction.
The Which? survey’s value lies in recording the long-term impressions of daily commuters and other regular rail users, many of whom clearly feel overcharged and underserved by their operators. The NPS survey, meanwhile, provides an important service by identifying problem areas with an accuracy and specificity that is arguably unmatched anywhere in the world.
But casual media observers and the general rail-going public are likely to take away little more than the top-line statistics (a BBC news report pointed out the contradiction between the surveys but didn’t attempt to explain it), and confusion is the unsurprising result of accepting complex information at face value. Perhaps the responsibility lies with organisations like Passenger Focus and Which? to ensure that the information they present is placed within the proper context, both in the reports themselves and in the press releases that bring them to the public’s attention.
Read the full feature here: https://www.railway-technology.com/features/featureuk-rail-passenger-satisfaction-british-public/