How Comic Sans got useful

Martha Gill's Irrational Animals column.

Whenever I want to impress someone at a party, I let them know I’m distantly related to Eric Gill. There’s always a pause as it sinks in. You know, Eric Gill. Eric Gill, for God’s sake – yes, the Eric Gill! They’re usually too polite to make a big deal of it, but to make sure they feel comfortable around me, I often end up doing most of the talking from then on in.

Well, he invented the typeface Gill Sans. It’s a sans-serif font and a British font – indeed, it would be hard to find a more British font. Its clean lines permeate the railways, the BBC, Penguin Books and the Church of England, and it has meshed itself with the establishment so deeply that it was a surprise to everyone to discover, in the late '80s, that its inventor once shagged his dog.

Yes. This font has a dark, dark history. So dark, in fact, that on unearthing it last year, Digital Arts magazine announced an immediate boycott, along with every typeface Gill ever molested (Perpetua, Joanna), in a piece titled “Art versus Evil”.

Digital Arts, I apologise for him. And perhaps you are right to leave this beautiful, clear-cut lettering out of your publication – but not necessarily for the reasons you think.

A recent paper by Daniel M. Oppenheimer entitled, pleasingly, “Fortune favours the Bold (and the italicised)” delivered a blow to lovely fonts everywhere by demonstrating that we absorb information better when it is a little hard to read. It seems our eyes just skim over Times New Roman and Helvetica, but stick when we reach a smudged, cramped line of type, finally ready to engage.

The researchers took classroom material and altered the fonts, switching from Helvetica and Arial to Monotype Corsiva, Comic Sans Italicised and Haettenschweiler. Each teacher already taught the same class in two sections: one section was taught using the “fluent” texts, the other the “disfluent”. After several weeks, the researchers put the students through some tests. They found that those taught using the dirtier fonts retained information significantly better.

To the experimenters this was a challenge to one of teaching’s basic assumptions - that when learning is easier, it’s better. Rather, adding a few superficial difficulties to the reading experience makes pupils more likely to engage with the text. This ties in with other studies of “disfluency”, which show that a slightly challenging delivery can make people process information more carefully.

Difficult by design

The results are counterintuitive, and not only for the world of teaching. Neuroscientists expanding on the study note that digital technology rests on the same assumption - that the easier and more fluent our access to information, the better. But perhaps our oversensitive brains demand a strategy with a little more nuance.

The novelist Jonathan Franzen touched on the problem recently when he said that e-books make for a less fulfilling reading experience. He associates this with the permanence of books (“A screen always feels like we could delete that, change that, move it around”), but perhaps the feeling is also something to do with the uncanny ease of moving the text into view. Words presented to us with the effortlessness and clarity of motorway signs demand shallow engagement. A screen’s familiar form presents no mental barrier between an advert for Starbucks and lines from Shakespeare.

Perhaps, then, we should take our cues from Gill’s life, if not his works, and seek out our information in unfamiliar and dog-eared forms.


Martha Gill writes the weekly Irrational Animals column. You can follow her on Twitter here: @Martha_Gill.

This article first appeared in the 18 June 2012 issue of the New Statesman, Drones: video game warfare


A quote-by-quote analysis of how little Jeremy Hunt understands technology

Can social media giants really implement the health secretary’s sexting suggestions? 

In today’s “Did we do something wrong? No, it was social media” news, Health Secretary Jeremy Hunt has argued that technology companies need to do more to prevent sexting and cyber-bullying.

Hunt, whose job it is to help reduce the teenage suicide rate, argued that the onus for doing so should fall on social media companies such as Facebook and Twitter.

Giving evidence to the Commons Health Committee on suicide prevention, Hunt said: “I think social media companies need to step up to the plate and show us how they can be the solution to the issue of mental ill health amongst teenagers, and not the cause of the problem.”

Pause for screaming and/or tearing out of hair.

Don’t worry though; Hunt wasn’t simply trying to pass the buck, despite the committee suggesting he direct more resources to suicide prevention, as he offered extremely well-thought-out technological solutions that are in no way inferior to providing better sex education for children. Here’s a quote-by-quote analysis of just how technologically savvy Hunt is.

***

“I just ask myself the simple question as to why it is that you can’t prevent the texting of sexually explicit images by people under the age of 18…”

Here’s Hunt asking himself a question that he should be asking the actual experts, which is in no way a waste of anybody’s time at all.

“… If that’s a lock that parents choose to put on a mobile phone contract…”

A lock! But of course. But what should we lock, Jeremy? Should teenagers’ phones come with a ban on all social media apps, and, for good measure, a block on the use of the camera app itself? It’s hard to see how this would lead to the use of dubious applications that have significantly less security than giants such as Facebook and Snapchat. Well done.

“Because there is technology that can identify sexually explicit pictures and prevent it being transmitted.”

Erm, is there? Image recognition technology does exist, but it’s incredibly complex and expensive, and companies often rely on other information (such as URLs, tags, and hashes) to filter out and identify explicit images. In addition, social media sites like Facebook rely on their users to click the button that identifies an image as an abuse of their guidelines, and then have a human team that look through reported images. The technology is simply unable to identify individual and unique images that teenagers take of their own bodies, and the idea of a human team tackling the job is preposterous. 
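As a rough illustration of the hash-matching approach described above, here is a minimal Python sketch (the hash list is invented for the example): it can only catch a file that is byte-for-byte identical to one already known and flagged, which is precisely why it cannot recognise a brand-new photo a teenager has just taken.

```python
import hashlib

# Hypothetical database of hashes of already-known explicit images.
# Real platforms share curated lists like this; these entries are made up.
KNOWN_EXPLICIT_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_explicit(image_bytes: bytes) -> bool:
    """Flag the file only if this exact sequence of bytes has been seen before."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_EXPLICIT_HASHES

# A freshly taken photo hashes to a value nobody has ever recorded,
# so the check waves it through regardless of what the picture shows.
print(is_known_explicit(b"a brand-new photo, never uploaded anywhere"))  # False
```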

But suppose the technology did exist that could flawlessly scan a picture for fleshy bits and bobs? As a tool to prevent sexting, it would still be deeply flawed. What if two teens were trying to message one another Titian’s Venus for an art or history class? In September, Facebook itself was forced to U-turn after removing the historical “napalm girl” photo from the site.

As for the second part of Jezza’s suggestion, if you can’t identify it, you can’t block it. Facebook Messenger already blocks you from sending pornographic links, but this again relies on analysis of the URLs rather than the content within them. Other messaging services, such as WhatsApp, offer end-to-end encryption (E2EE), meaning – most likely to Hunt’s chagrin – the messages sent on them are neither stored nor easily accessed by the government.
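To make the URL-versus-content distinction concrete, here is a toy sketch of that kind of link filtering (the blocked domain is invented): the check looks only at where a link points, so a message with no link at all sails through no matter what is attached to it.

```python
import re

# Invented blocklist; real services maintain far larger, constantly updated lists.
BLOCKED_DOMAINS = {"example-porn-site.test"}

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def message_allowed(text: str) -> bool:
    """Reject a message only if it links to a blocklisted domain."""
    return not any(
        domain.lower() in BLOCKED_DOMAINS for domain in URL_PATTERN.findall(text)
    )

print(message_allowed("look at http://example-porn-site.test/page"))  # False
print(message_allowed("explicit photo attached, no link anywhere"))   # True
```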

“I ask myself why we can’t identify cyberbullying when it happens on social media platforms by word pattern recognition, and then prevent it happening.”

Jeremy, Jeremy, Jeremy, Jeremy, can’t you spot your problem yet? You’ve got to stop asking yourself!

There is simply no algorithm yet intelligent enough to identify bullying language. Why? Because we call our best mate “dickhead” and our worst enemy “pal”. Human language and meaning are infinitely complex, and scanning for certain words would almost certainly lead to false positives. As Labour MP Thangam Debbonaire famously learned this year, even humans can’t always identify whether language is offensive, so what chance does an algorithm stand?
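A naive keyword scan of the sort Hunt seems to imagine, sketched below with an invented word list, shows the problem: it flags affectionate banter and waves through a genuinely menacing message that avoids the listed words.

```python
# Invented list of 'bullying' keywords, for illustration only.
BULLYING_KEYWORDS = {"dickhead", "idiot", "loser"}

def looks_like_bullying(message: str) -> bool:
    """Naive filter: flag any message containing a listed word."""
    return any(
        word.strip(".,!?") in BULLYING_KEYWORDS
        for word in message.lower().split()
    )

# False positive: friendly banter between mates gets flagged.
print(looks_like_bullying("alright dickhead, pub later?"))        # True
# False negative: genuinely hostile, but no listed word appears.
print(looks_like_bullying("everyone at school hates you, pal"))   # False
```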

(Side note: It is also amusing to imagine that Hunt could even begin to keep up with teenage slang in this scenario.)

Many also argue that because social media sites can remove copyrighted files efficiently, they should get better at removing abusive language. This is a flawed argument because it is easy to search for a specific file (copyright holders will often send social media giants hashed files which they can then search for on their databases) whereas (for the reasons outlined above) it is exceptionally difficult for algorithms to accurately identify the true meaning of language.

“I think there are a lot of things where social media companies could put options in their software that could reduce the risks associated with social media, and I do think that is something which they should actively pursue in a way that hasn’t happened to date.”

Leaving aside the fact that social media companies constantly come up with solutions for these problems, Hunt has left us with the burning question of whether any of this is even desirable at all.

Why should he prevent under-18s from sexting when the age of consent in the UK is 16? Where has this sudden moral panic about pornography come from? Are the government laying the ground for mass censorship? If two consenting teenagers want to send each other the aubergine emoji a couple of times a week, why should we stop them? Is it not up to parents, rather than the government, to monitor and supervise their children’s online activities? Would education, with all of this in mind, not be the better option? Won't somebody please think of the children?

“There is a lot of evidence that the technology industry, if they put their mind to it, can do really smart things.”

Alas, if only we could say the same for you, Mr Hunt.

Amelia Tait is a technology and digital culture writer at the New Statesman.