Students follow a lesson in a biology laboratory at the Roma Tre university (Photo credit: Tizani/AFP/Getty Images)

Scientists criticise new “open access” journal which limits research-sharing with copyright

Restrictive copyright licenses and expensive submission fees have led a significant number of scientists to criticise Science Advances, a new journal due to launch next year, for failing to live up to its open access principles.

One hundred and fifteen scientists have signed an open letter to the American Association for the Advancement of Science (AAAS), one of the world's most prestigious scientific societies and publisher of the journal Science, expressing concerns over the launch of a new scientific journal, Science Advances. The AAAS describes Science Advances as open access - meaning the research it publishes is freely available online to the public - but the scientists who have signed the open letter say they are "deeply concerned" by the specifics of its model, claiming it could stifle the sharing of scientific knowledge.

The journal, expected to debut in 2015, asks scientists for up to $5,500 (roughly £3,300) to publish their research. Although most open access journals support themselves by charging a similar article processing fee, Science Advances adds a $1,500 surcharge for articles more than ten pages long. Leading open access journals, such as PeerJ, the BMC series and Plos One, have no such surcharges. Studies in Science Advances will also be published under a Creative Commons license which prohibits reuse by any commercial entity - a restriction which, critics argue, means the journal is not truly open access.

Jon Tennant, an Earth scientist from Imperial College London and the person who initiated the open letter, said via email:

The $1,500 surcharge for going over ten pages is ridiculous. In the digital age it's completely unjustifiable. This might have made sense if Science Advances were a print journal, but it's online only."

The 115 open access advocates argue that page surcharges will hold back academic research: they may encourage researchers to omit important details of their studies, cutting papers short to squeeze under the ten-page limit. Although an AAAS spokesperson describes the prices as “competitive with comparable open-access journals”, critics haven't been convinced.

The licensing issue is also controversial, as the use of a non-commercial license such as Creative Commons BY-NC fails to meet the standards set out by the Budapest Open Access Initiative. Creative Commons licenses work by turning copyright legislation - which usually restricts the re-use of creative work - against itself, explicitly releasing work under a license which states that certain kinds of remixing and sharing are allowed. However, the non-commercial CC license chosen by the AAAS is not accepted by funders such as Research Councils UK and the Wellcome Trust, as it isn't seen as compatible with the principles of open access.

Open access should mean the unrestricted, immediate, online availability of scientific research papers. It allows people around the world, including those who work outside academic institutions, to read and share scientific literature without paywalls, and to reuse papers freely without fear of copyright claims. "There is little evidence that non-commercial restrictions provide a benefit to the progress of scholarly research, yet they have significant negative impact, limiting the ability to reuse material for educational purposes and advocacy," the open letter argues. Under CC BY-NC, work published in Science Advances could not be used by Wikipedia, newspapers or scholarly publishers without permission or payment, for example. The journal will offer scientists the choice of a license without these restrictions, but anyone opting for this more open option will have to pay a further $1,000 (£602).

On 28 August, the AAAS appeared to respond to the open letter through Paul Jump of the Times Higher Education magazine, amid surprise within the scientific community that the organisation had appointed open access sceptic Kent Anderson as the journal's publisher. However, the New Statesman was later informed by Tennant that Science Advances' editor-in-chief, Marcia McNutt, had told him that a newly created FAQ page on the AAAS site was in fact the formal response to the open letter. Tennant wrote:

The response in the form of an FAQ that does not acknowledge the open letter, or address any of the concerns or recommendations we raised in the letter, is breathtakingly rude and dismissive of the community the AAAS purport to serve."

Scientific knowledge is communicated and distributed more effectively when there are no restrictions. Many studies have shown that research papers made available through open access journals are cited more often than those in toll-access journals. Open access increases the chances of scientific research being discovered, which can lead to new collaborations and the generation of potentially life-changing scientific insights.

"The AAAS should be a shining beacon within the academic world for progression of science," Tennant explains. “If this is their best shot at that, it's an absolute disaster at the start on all levels. What publishers need to remember is that the academic community is not here to serve them - it is the other way around."

(Update: This piece originally stated that all CC licenses have copyleft provisions when only the CC Share-Alike license does, and has been corrected.)

Flickr: Alex Brown

The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance.

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity.” It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than on the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI had only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found “black” names unpleasant, and American software used to predict future criminals rated black people as higher risk. And who can forget Microsoft’s ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users’ speech – was taken offline after 16 hours because it began spouting sexist and racist messages.

We could sit here and debate whether an AI can truly be considered racist, but it wouldn’t change the outcome of events. These algorithms and machines aren’t explicitly programmed to be racist – and their designers usually aren’t prejudiced themselves – but that doesn’t change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will become. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the “racism”. Pretty much no one is deliberately designing their AI to be racist – Google’s chief social architect, Yonatan Zunger, responded quickly to the “Gorillas” incident, tweeting “This is 100% Not OK.” But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. “The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was ‘possible’ that because of that their algorithm was biased,” says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”
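Chace’s point about training data is simple to demonstrate. The sketch below is a hypothetical illustration on synthetic data, written in Python with scikit-learn; it is not code from any of the systems mentioned, and its names and numbers are made up (borrowing the 6,000 figure quoted above). It trains a single classifier on 6,000 examples drawn from one group and just 60 from another, then measures its accuracy on each group separately:

```python
# Illustrative sketch only: a classifier trained on data dominated by
# one group tends to perform worse on the under-represented group.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # n samples with two features centred on `shift`; the true label
    # depends on where a point sits relative to its own group's centre.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Training data: 6,000 samples from group A, only 60 from group B.
X_a, y_a = make_group(6000, shift=0.0)
X_b, y_b = make_group(60, shift=3.0)
model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Balanced held-out sets expose the gap the skewed training set baked in.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"accuracy on {name}: {model.score(X_test, y_test):.2f}")
```

The exact figures will vary, but the pattern is stable: high accuracy on the well-represented group, and little better than a coin flip on the group the training data barely covers – the model has simply learned the majority group’s decision boundary and applied it to everyone.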

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and another thing for them to decide who lives and dies when it comes to war. “Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict," says Chace. "This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually ‘phone home’ before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position.”

Whether or not this vision of the future comes to fruition in 10, 20, or 100 years, it is important to prepare for it – and other possibilities. Despite constant advances in technology, there is still no such thing as a "conscious" robot that thinks and feels as humans do. An AI cannot, in itself, be racist. To solve this problem, then, it is humans that need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.