Let's not act like selfies and food pics are 21st century phenomena

No, taking a photo of your brunch isn't a "revolutionary" act. Taking a selfie isn't one, either. We've been doing them both for centuries.

Instagram held a press conference today to announce that it was adding a messaging service to its app. That's all. Messaging.

Kevin Systrom is the co-founder of Instagram, and his presentation contained some choice cuts of ludicrous Silicon Valley-speak. At one point he literally described the act of taking a photo of one’s brunch as “revolutionary”.

We can only wonder what he makes of a painting like this:

(Image: Wikimedia Commons)

That's Caravaggio's Still Life with Fruit (1601-1605), a painting of some brunch (or lunch, maybe breakfast). It's food, is the point. The art galleries of the world are filled with boring pictures of food - it's a topic that has sustained artists for centuries. There is nothing new about fixating on food. The animals on the walls of Bhimbetka and Chauvet might even count as food portraits.

Ancient human-like figures, like these, painted onto rock in the Cederberg region of South Africa, might even be selfies:

(Image: Wikimedia Commons)

That's a generous interpretation, I realise, but the self-portrait is one of the defining subjects of human art, throughout the world. There are 141 self-portraits in the National Gallery's collection, for example. It makes the response to Oxford Dictionaries' decision to name "selfie" its word of the year utterly baffling - there is nothing new about us documenting ourselves.

Think pieces that talked about the selfie's "screaming narcissism" that "sits at the excess of the ultimate theatricalising of the self" seem to treat something rather mundane as something that's - here's that word again - "revolutionary". Smartphones and digital cameras have made it easier to take photos of ourselves and our food. They've also made it easier to take pictures of landscapes, but you don't see that getting parodied or turned into a Time cover story about the self-obsession of a generation. The difference between now and the Renaissance is the barrier to entry for those who couldn't afford paint and canvas.

The question more worth asking here is this: why do we use new technologies in the same ways as our old ones? Why is it that we keep picturing the same things, again and again, but faster and faster? When is a technology amplifying something in our society, rather than actually changing it? And will every technology always end up, inevitably, a thing for porn?

It rather feels that focusing on the method, instead of the motive, misses the point a lot of the time.

Rembrandt pouting for a selfie, c.1630. (Image: Wikimedia Commons)

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


Back to the future – mankind’s new ideas that aren’t new at all

Rethink: The Surprising History of New Ideas by Steven Poole, reviewed.

When Steven Poole writes a book review, he likes to lie to himself. His only conscious decision is to jot down a few notes as the deadline approaches. There is no pressure to think deep thoughts, he tells himself, or to reach the required word count. Then invariably, in a few hours, he has written the entire review. This happens time and again. No matter how many times he convinces himself he is merely jotting and thinking, the result is a finished article.

Human beings are extraordinarily good at deceiving themselves and possibly never more so than when they think that they have had a new idea, as Poole makes clear in this fascinating compendium of new ideas that aren’t new at all. He digs deep into subjects as various as cosmology, economics, health care and bioethics to show that, as the writer of Ecclesiastes put it (long before Poole), “There is nothing new under the sun.” This is demonstrated in the re-emergence of ideas such as therapeutic psychedelic drugs, inherited traits that aren’t programmed into the genome, cognitive behavioural therapy, getting our protein from insects, and the multiverse.

Poole explores these propositions deftly enough, but they are not what interest him here. Rather, his subject is the way that we have seen them all before. He ties together what he concedes is a “highly selective snapshot of the looping evolution of ideas” with the observation that “any culture that thinks the past is irrelevant is one in which future invention threatens to stall”. Originality, he argues, is overrated.

The book might be something of a downer for those who like to gaze at “progress” with wide-eyed admiration. The starkest takeaway is that we are clearly hopeless at putting good ideas to work. In his discussion of artificial intelligence, for instance, Poole mentions the emerging idea of a universal basic income, which is likely to become a necessary innovation as robots take over many of the least demanding tasks of the human workforce. Yet he traces it back to 1796, when Thomas Paine first published his pamphlet Agrarian Justice.

Maybe this tells us something about the limits of the brain. It has always innovated, thought through its situations and created solutions. But those solutions can only be drawn from a limited pool of possibilities. Hence we get the same ideas occurring inside human skulls for millennia, and they are not always presented any better for the passing of time. Richard Dawkins and his ilk provide a salient example, as Poole points out: “Virtually none of the debating points in the great new atheism struggles of the 21st century . . . would have been unfamiliar to medieval monks, who by and large conducted the argument on a more sophisticated and humane level.”

So, perhaps we should start to ask ourselves why so many proposed solutions remain unimplemented after what seem to be thousand-year development programmes. It is only through such reflection on our own thinking that we will overcome our barriers to progress.

Sometimes the barriers are mere prejudice or self-interest. After the Second World War, Grace Hopper, a computer scientist in the US navy, created a language that allowed a computer to be programmed in English, French or German. “Her managers were aghast,” Poole writes. It was “an American computer built in blue-belt Pennsylvania” – so it simply had to be programmed in English. “Hopper had to promise management that from then on the program would only accept English input.”

It is worth noting that Hopper was also a victim of postwar sexism. In 1960 she and several other women participated in a project to create COBOL, the computing language. Critics said there was no way that such a “female-dominated process” could end in anything worthwhile. Those critics were wrong. By the turn of the century, 80 per cent of computer coding was written in COBOL. But this is another unlearned lesson. A survey in 2013 showed that women make up just 11 per cent of software developers. A swath of the population is missing from one of our most creative endeavours. And we are missing out on quality. Industry experiments show that women generally write better code. Unfortunately, the gatekeepers only accept it as better when they don’t know it was written by a woman.

Solving the technology industry’s gender problems will be a complex undertaking. Yet it is easy to resolve some long-standing difficulties. Take that old idea of providing a universal basic income. It appears to be a complex economic issue but experimental projects show that the answer can be as simple as giving money to the poor.

We know this because the non-profit organisation GiveDirectly has done it. It distributed a basic income to an entire community and the “innovation” has proved remarkably effective in providing the means for people to lift themselves out of poverty. Projects in Kenya, Brazil and Uganda have made the same discovery. As Poole notes, even the Economist, that “bastion of free-market economics”, was surprised and impressed. It said of the scheme: “Giving money directly to poor people works surprisingly well.” You can almost hear the exclamation “Who knew?” – and the slapping sound of history’s facepalm.

Michael Brooks’s books include “At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise” (Profile)

This article first appeared in the 21 July 2016 issue of the New Statesman, The English Revolt