31 July 2014

Facebook and OKCupid are the latest iterations in the architecture of psychological manipulation

When we find out our favourite websites have been experimenting on us, we instinctively recoil in disgust - but Facebook and OKCupid are only the most recent human-made environments where psychological manipulation is a key factor in how design choices are made.

A couple walks in a supermarket food aisle in Paris, France. Product placement in supermarkets is guided by intensive study of shopper habits. Photo: Getty Images

In 1935, 29-year-old Arthur Weever Melton was appointed head of the psychology department at the University of Missouri. Originally from Arkansas, he'd secured his undergraduate degree from Washington University in St Louis in 1928, and then completed his PhD at Yale in 1932. Later in life, Melton would go on to make valuable contributions to the study of human learning, forgetfulness, cognition and perception, as part of a distinguished career as a behavioural psychologist. After Pearl Harbor, the tests the United States Air Force used to select fighter pilots were derived from Melton's research. In the years after the war he became editor of the Journal of Experimental Psychology, was elected to the US National Academy of Sciences in 1969, and received the Gold Medal Award of the American Psychological Foundation in 1976, two years before his death.

But it's the ambitious young Melton of 1935, about to publish his first major paper, who we can thank for the belief that people instinctively turn right when they enter a supermarket. Melton's PhD supervisor at Yale, Edward Robinson, had been working with the American Association of Museums (now the American Alliance of Museums) since 1925 to investigate how people actually behaved when visiting an exhibition.
Curators would lay out galleries in a certain way, only to see people amble around in an order they hadn't anticipated; some might not even make it to the end. Were visitors tired of walking, the curators wondered, or mentally fatigued? How educational is a diorama, anyway? What information should a plaque contain? How should paintings be hung relative to each other? The AAM turned to Robinson, and then also Melton, to produce the first major body of research into the psychological effects of museum architecture and design.

Melton's first monograph - Problems of Installation in Museums of Art - came in 1935 after three years of watching visitors to the Philadelphia Museum of Art. Some of his findings may sound obvious (or even glib) to us now, but this was a period when curators were moving away from the "labyrinthine" stuffiness of the traditional museum - picture the permanent collections at Tate Britain, or vast palaces like the Hermitage - and thinking about exhibition spaces in new ways. Melton's observations were vital in informing this new style.

Back in 1926, Robinson had made the case to the AAM that it wasn't enough to study museums as they were - instead, "one or two institutions should be willing to modify their exhibits and labels for experimental purposes". In this vein, Melton used slight differences between the wings of the museum to compare curatorial decisions, just as a scientist would control variables during a lab experiment. He found that the old, crowded way of hanging paintings diluted visitor attention; far better was to space them out as a single, orderly row along a wall, of no more than 18 at a time. The closer a visitor was to an exit, the less attention was paid to objects on display, and he also documented the disruption to normal visitor flowpaths through the museum when a particularly popular exhibition was on.
And, of course, Melton found that 75 per cent of visitors, given the choice of turning left or right, will choose right. (It didn't just apply to the entrance hall, either - it applied to each room, gallery or corridor.) He appeared to have found evidence for an unconscious, innate preference, one that people were unaware they held. The art critic Amos McMahon, reviewing Problems, wrote: "This is an important report which every museum director, curator and architect will wish to study carefully. Its points are probably of even greater value for owners of department stores and others whose fortunes depend more immediately on the customers' looking and buying."

It's also, in a roundabout way, of great interest to Facebook and OKCupid. Bear with me.

Melton's 75 per cent stat has a strange legacy. Within the fields of architecture, urban design and "visitor studies", the debate over whether people truly have an instinctive desire to turn right at intersections continues with regard to the design of museums, zoos, parks and so on. Many more recent studies have found the same rightward bias that Melton found, but plenty of others have found that it disappears - or reverses - in environments where physical cues are controlled for. Yet there's one group that embraces the right-turn factoid more than any other: retail companies, and in particular the people who design malls and supermarkets.

This shouldn't surprise us. Malls are fake public spaces, a kind of privatised high street with a language that emerged in its modern form after the Second World War on the outskirts of cities and towns across the United States, Europe and beyond. Owners would fiddle with the factors they thought would influence how much time and money shoppers would spend - lighting, seating, store locations, even smells - and, relatively quickly, a folk vernacular transformed into a serious field of academic study.
Modern malls and town centre high streets are both retail destinations in the same way an F-15 and the Spirit of St Louis are both aeroplanes. Perhaps even more dedicated to the modelling and manipulation of visitor behaviour, though, are supermarkets, where each aisle is tuned for maximum customer interest. This 2002 piece in the Harvard Business Review by Eric Bonabeau breathlessly details how Sainsbury's gathers data:

Camera studies have found, for example, that the average time a customer spends on buying milk is five seconds, versus 90 seconds for selecting a bottle of wine. In the agent-based model, each shopper has a different list of items (based on real data collected from the bar code readers at the cash registers in the Sainsbury's stores). As the virtual people make their way through the aisles and choose their goods, the software tracks the customer densities throughout the store as well as the wait times at the checkout counters. Different layouts (such as relocating the frozen foods department) can be tested easily to judge their impact on store congestion. Of course, enhancing the efficiency of shopping is not the only criterion. Store managers often want to separate high-traffic areas (the meat and baked goods sections, for example) to encourage impulse buying as shoppers travel between them. Sometimes "hot spots" (areas of congestion) are desirable locations for sale items or free samples. Furthermore, responding to customer psychology is important. A supermarket might want to place its produce section near the entrance, for instance, to impress customers with the freshness of its vegetables and fruits.

Bonabeau also cites the bestselling Why We Buy by business consultant Paco Underhill - "a Sherlock Holmes for retailers", in the words of one reviewer - which documents the results of thousands upon thousands of hours of studying shoppers.
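To make the agent-based idea concrete, here is a deliberately crude sketch of the kind of simulation the HBR piece describes. Everything in it - the aisle names, the 50 per cent chance a shopper wants any given aisle, the notion that foot traffic past an aisle is the quantity worth counting - is my illustrative assumption, not Sainsbury's actual model.

```python
import random
from collections import Counter

# Aisles listed in the store's physical order, entrance first.
# These names and the layout are invented for illustration.
AISLES = ["produce", "bakery", "meat", "dairy", "frozen", "wine"]

def simulate(num_shoppers, layout, seed=0):
    """Walk each shopper through the aisles in physical order and
    count how many shoppers pass through each aisle on the way to
    the items on their list - buying or merely walking past."""
    rng = random.Random(seed)
    traffic = Counter()
    for _ in range(num_shoppers):
        # Each shopper wants a random subset of the aisles.
        wants = {a for a in layout if rng.random() < 0.5}
        if not wants:
            continue
        # They walk as far as their furthest wanted aisle.
        furthest = max(layout.index(a) for a in wants)
        for aisle in layout[:furthest + 1]:
            traffic[aisle] += 1
    return traffic

# The what-if the article mentions: compare the baseline layout
# against one with frozen foods relocated towards the entrance.
base = simulate(1000, AISLES)
alt = simulate(1000, ["frozen"] + [a for a in AISLES if a != "frozen"])
```

Even in a toy like this, aisles nearer the entrance absorb more passing traffic than those at the back - which is exactly the lever the real model lets managers pull when deciding where impulse items or free samples should go.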
According to Underhill, 65 per cent of men who try on a pair of jeans will buy them, compared to 25 per cent of women; signs advertising in-store deals are more effective a third of the way between the entrance and the first aisle, rather than immediately next to the entrance door; and most arriving customers will, obviously, instinctively turn right. Insights like these can, it is alleged, increase sales by as much as 20 per cent. (And this was 12 years ago - the models are even more sophisticated now.)

All of which is to demonstrate that the manipulation of a man-made environment can have psychological effects of which we, the shoppers, are completely unaware. This helps us understand our discomfort at being experimented on by sites like Facebook and OKCupid.

To recap: in June, a study into "emotional contagion" was published. The scientists involved - some of whom worked for Facebook - were interested in how mood transfers through social groups. Could seeing a sad status update make other people sad too? What about happy updates? To test this, nearly 700,000 users spent a week with their news feeds showing either more negative or more positive updates than those of a control group, with the result being that, yes, there appears to be a (just about) measurable influence. Facebook had deliberately made some of its users depressed.

The one major exception to the wave of revulsion directed towards Facebook after this came from an unexpected source - OKCupid, a dating site. Co-founder Christian Rudder, in a blog post this week entitled "We Experiment On Human Beings!", wrote:

I'm the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn't really know what it's doing. Neither does any other website. It's not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better.
Experiments are how you sort all this out. We noticed recently that people didn't like it when Facebook "experimented" with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you're the subject of hundreds of experiments at any given time, on every site. That's how websites work.

He described three such experiments, the third of which comes closest to "Facebook made us depressed" territory. Like almost every dating site, OKCupid matches up users based on a score of how similar their interests are - 34 per cent compatible, 76 per cent, 99 per cent, and so on. The idea is that the score draws the user into clicking on the profile of someone else, and, if they like what they see, messaging them. OKCupid's developers were curious whether their algorithm for matching people up was actually any good, though. Maybe users were messaging each other because they were told they'd like someone, not because they actually liked the content of their profile. The easiest way to test this was to swap the scores around - someone who'd normally appear as a ten per cent match for one user would appear to them as a 90 per cent match, and vice versa. The results were pretty conclusive: the messaging rate didn't change much. Users were still messaging "good matches" at the same rate even though OKCupid's algorithm, behind the scenes, thought they should hate each other. "The mere myth of compatibility works just as well as the truth," wrote Rudder.

The typical response to both of these "experiments" has been to frame them as an issue of individual consent versus technological innovation. That's what I did in my own piece, for example, pointing towards Silicon Valley's fondness for A/B testing - the rapid test-implement-test-implement model that works so well on the web, and which can (and has) been used for everything from choosing the colour of buttons to electioneering.
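The mechanics of an A/B test are unglamorous. A minimal sketch, assuming nothing about any real site's infrastructure - the function names, the "button-colour" experiment and the user ids are all invented for illustration:

```python
import hashlib
from collections import Counter

def bucket(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to variant A or B, so the
    same user always sees the same version of the experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

def conversion_rate(conversions: int, exposures: int) -> float:
    """The metric being compared: what fraction of exposed users
    did the thing the site owner wanted them to do."""
    return conversions / exposures if exposures else 0.0

# Expose 10,000 simulated users; hashing splits them roughly evenly.
groups = Counter(bucket(f"user{i}", "button-colour") for i in range(10_000))
```

Whichever variant ends up with the higher conversion rate is kept, the loser is discarded, and the next experiment begins - on the next paragraph's terms, the partition-and-compare loop running continuously.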
Websites can partition off a subsection of users, split them into two groups, and present them with two slightly different versions of the same thing. Whichever works best - whichever option makes users do more of what the website owner wants them to do - can be rolled out sitewide, and the next stage of refinement can begin. In this context, Rudder's absolutely correct to say that "experiments are how you sort all this out" - it's the methodology of the web, and everyone does do it.

Yet, of course, there's also a qualitative difference between what OKCupid did and what Facebook did: the former's experiment was done with no ill intent (unless you count "being cocky bastards" as such), with the worst-case scenario being that users might end up messaging someone they'll hate; the latter's experiment was done explicitly to influence user psychology, with a theoretical - though still debatable - possibility that it might actually work. The debate has centred on whether clicking "accept" on Facebook's terms and conditions means that you or I also accept that we might be experimented on in this way, and whether the most important factor at play is that Facebook intended to abuse its power and harm its users, regardless of its ability to do so.

This isn't a specifically new problem - after two decades of web use, we're familiar with the idea that many websites are free on the condition that their owners can exploit our data to sell - or let other people sell - stuff to us. The temptation, then, is to defend experiments on site users as a marketing tactic that is fundamental and necessary to running an online business, no different to coming up with an ad campaign or rebranding a product; and there's a temptation, too, to see the ethical violation as something that happens when users give away more than they thought they had as part of the bargain, like signing up for a new bank account and getting bombarded with annoying junk mail.
That is, it's wrong because we only said we'd like to be exploited up to a certain limit. Maybe this framing is mistaken, and our disquiet makes more sense if we realise that websites are a kind of architecture for human interaction, just like a museum - or a mall. Just as it's false to see what happens online and in "real life" as an oppositional binary, it's a mistake to ignore the fact that we're augmenting our experience of reality with technology that is often as carefully designed to exploit us as any supermarket.

Facebook's key business goal, after all, is to convince its users that it's all the web they need. Get your news from the news feed, hang out with your friends, play some games, maybe buy some products from our verified partners. Just don't go splashing in the fountain or racing trolleys. Cameras are watching, and security will deal with troublemakers.

It's not impossible to conceive of Nike, for example, wanting to run an experiment on people who own Fuelbands - would reporting the number of calories they've burned as ten per cent lower than the true number motivate them to exercise more? What about 15 per cent? Or 20 per cent? Would they stop using it altogether, frustrated at the lack of progress? Would reporting a higher number of calories burned make users lazier? The benefit of a more quantified self for the individual feels like poor value compared to the behaviour models that Nike would gain, and in turn use to sell more to us, more effectively.

Google's driverless cars might, some believe, supplant the role of public transport in our cities and towns. Do we want a company whose business model is based on tracking people also taking charge of moving us around? Facebook's news feed algorithm can kill 40 per cent of the traffic another website receives - how does it damage democracy that the revenue model of online media is dependent on the algorithms of a single company?
The growth of the web from a place where people message each other into a conglomeration of technologies with an active business interest in watching and recording how society works, on a massive scale, is reminiscent of the growth of behavioural modelling from a niche interest into a valuable business practice. Looking back on Melton's original work, it's notable that he was commissioned by a non-profit group of museums that wanted to help ordinary people enjoy museums more. Yet that motive, noble or otherwise, is irrelevant to the legacy of that research, and to the cultivated, commercialised descendants of the Philadelphia Museum of Art: supermarkets, shopping malls and, yes, the social web. Arthur Melton, it seems, was in the business of A/B testing gallery design.

Our disgust at being experimented on by Facebook or OKCupid, then, is possibly the same type we feel whenever we hear that another public square has been privatised, or a high street has been replaced by an out-of-town retail park, or a democratically accountable state body's responsibilities have been outsourced to a private company immune to freedom of information requests. (And at least those things started out as public - the web we've embraced, the one that we're treating as our new town square, was never that.) Websites are built environments, not magazines, and we are beckoned inside with a smile and a promise of pleasure, for a price.

It's not enough to congratulate the social web for giving individuals a louder voice without also pointing out who owns the bullhorn, and who controls the direction it points. Our presence gives these sites their value and their utility, and their forms change to reflect both how we use them and how those watching us want us to act. Do we really want to talk about moving democracy online, to a web dominated by a site that claims to be able to influence elections?
More than anything else, this has echoes of another example of the manipulation of crowds: Haussmann's redevelopment of Paris, with wide, straight boulevards that were more conducive to military marches - and harder for the people to barricade.

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.