
The problem with Instagram’s new child “age estimation” software

Age verification AI raises questions about reliability, bias and privacy.

By Sarah Manavis

If Instagram’s last year has been defined by anything, it’s the mounting evidence suggesting the platform has a negative impact on children. Despite the presumed connection between the popularity of Instagram face filters and the rise of cosmetic surgery, and widespread assumptions that social media increases insecurities in young people, Instagram had always been able to claim that it had no awareness of any adverse impacts on children’s mental health. But since the so-called Facebook Papers were leaked in autumn 2021, including an internal study that suggested Instagram, which is owned by Facebook, was having a harmful effect on teenage girls’ body image, the company has been in damage control mode.

The changes over the past nine months have included the introduction of several new controls for parents and the cancellation of an ill-timed plan to create a children’s version of the app. And now, the company has added what appears to be its most robust technological intervention to improve safeguarding for teens: facial age estimation. 

One concern about Instagram’s effects on young people was the ease with which any child with access to a smartphone could create an account. Although the app says users must be at least 13 to join, children much younger than that could get around the limit simply by lying about their age. Instagram had developed an AI that combed through the app to try to find children who were 12 or younger, but it did not appear to be especially reliable.

This time, rather than developing its own AI, Instagram has worked with Yoti, a company based in London that “specialises in privacy-preserving ways to verify age”. This new AI, Instagram says, will scan a video selfie provided by the user (in lieu of an ID, if the user can’t provide one), assess their age and pass that information to Instagram; both companies will then delete the data. Anyone in the US who attempts to change their date of birth on Instagram from under 18 to 18 or over will be required to verify their age.

“Understanding someone’s age online is a complex, industry-wide challenge,” Instagram said in its announcement. “Many people, such as teens, don’t always have access to the forms of ID that make age verification clear and simple. As an industry, we have to explore novel ways to approach the dilemma of verifying someone’s age when they don’t have an ID.”


While this appears to be a genuine technological innovation, the new system seems likely to prove ineffective while treading into murky ethical territory – making it little more than a PR exercise, with few benefits for children or parents.

In terms of reliability, the white paper on Yoti’s website gives some indication of the margin for error. The company says the AI has a True Positive Rate of 99.65 per cent for estimating that 13-17-year-olds are under 23. This may seem very accurate – but there is a mean absolute error of 1.53 years for 13-19-year-olds, meaning the technology is less reliable when working with smaller margins. An average error of around a year and a half is enough to blur the line between a 12-year-old and a 13-year-old, or between a 17-year-old and an adult – precisely the distinctions the system is meant to make.

There are ethical issues, too. AI systems have repeatedly been shown to exhibit racial bias, largely because of the data they are trained on. Yoti has said that gender and skin-tone bias is “minimised”, but to what extent it can guard against this remains to be seen. Though facial age estimation software is different to facial recognition software, and Yoti states that users are not individually identified, privacy is also a concern. Yoti and Instagram have said that they will delete the identification videos immediately after analysis is complete, and Yoti has said that it will comply with the EU’s General Data Protection Regulation, which should improve the safety of children’s data. Yet this is not a fool-proof system for keeping children’s images private, and it provides no guarantee against data breaches.

Julie Dawson, chief policy and regulatory officer at Yoti, said: “Countries around the world are rightly bringing in Age Related Design Codes to ensure experiences are appropriate for young people. Our facial age estimation technology offers secure, privacy-preserving solutions. Built in accordance with the ‘privacy by design’ principle in the UK GDPR, our technology has been independently audited by the ICO, KPMG, the German age regulatory bodies the KJM and the FSM, the BBFC’s appointed auditor NCC Group and the Age Check Certification Scheme. No individual can be identified by the AI model and we immediately delete all images of users after their age has been estimated. The technology does not recognise anyone.”  

Children’s safety on social media is a real issue – one that stretches far beyond Instagram – and it is rightly gaining the mainstream attention it has long deserved. But we also have to ask: is keeping children off Instagram really the most effective way to protect their safety and mental health? Or is the real issue what happens once they are old enough to have an account? By offering only slick, highly technical solutions that sound robust but will inevitably prove inadequate to the task at hand, social media platforms show how uninterested they are in solving these problems, which they hope will eventually just quietly go away.


This article was updated on 12 July 2022 to reflect that Yoti is age-estimation software rather than face-recognition software, and to add a response from Yoti’s chief policy and regulatory officer, Julie Dawson.

