
25 November 2013 (updated 26 September 2015, 10:31am)

Applying lessons from self-driving cars to self-driving wheelchairs

Technologies like 3D scanning can be used to help wheelchair operators move more independently.

By Ian Steadman

Self-driving cars are going to be A Big Deal, but advances in autonomy aren’t restricted to that one class of vehicle. Wheelchair users could also end up benefiting from progress in robotics.

In this case, we’re not talking about electric wheelchairs that can be programmed with a destination just like a car, as nice as that would be. Instead, this is about the subtle improvements to mobility that can be found by integrating autonomous technology into wheelchairs, for those who find accurate control difficult.

I’ve just become aware of “Robotic Adaptation to Humans Adapting to Robots” (or Radhar), an EU-funded initiative that, since 2010, has been “building a system that can develop better wheelchairs for children suffering from multiple sclerosis, cerebral palsy or a variety of other syndromes, such as autism and hereditary muscle diseases.”

The project finished development in September, and the idea is simple: people whose disabilities prevent them from using manual wheelchairs are also likely to find it tricky to steer the big, battery-powered electric ones. That could mean banging into walls, knocking things over, and even causing injury.

That said, a completely autonomous wheelchair isn’t particularly useful – it does little to give people a sense of freedom and control in their lives. A better approach is to smooth out jerky journeys: the chair estimates where its owner wants to go and makes subtle alterations to the route, leaving the user feeling absolutely in control rather than being ferried around.
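To make that idea of “subtle alterations” concrete, here is a minimal sketch of the kind of shared-control blending the paragraph describes: the chair mixes the user’s joystick command with a planner’s suggested correction, weighted heavily towards the user so they still feel in charge. The names and the fixed blending weight are illustrative assumptions for this example, not details of Radhar’s actual software.

```python
# Minimal shared-control sketch (illustrative only; not Radhar's actual code).
# The chair blends the user's joystick command with a planner's suggested
# correction, weighted heavily towards the user so they remain in control.

from dataclasses import dataclass

@dataclass
class Velocity:
    linear: float   # forward speed, m/s
    angular: float  # turn rate, rad/s

def blend(user: Velocity, planner: Velocity, user_weight: float = 0.8) -> Velocity:
    """Weighted blend of the user's input and the planner's correction.

    A user_weight close to 1.0 means the chair mostly obeys the joystick,
    with the planner only nudging the trajectory away from obstacles.
    """
    w = max(0.0, min(1.0, user_weight))
    return Velocity(
        linear=w * user.linear + (1 - w) * planner.linear,
        angular=w * user.angular + (1 - w) * planner.angular,
    )

# Example: the user pushes straight ahead, the planner suggests a slight
# left turn to line up with a doorway; the result is a gentle correction.
command = blend(Velocity(0.6, 0.0), Velocity(0.5, 0.4))
print(command)  # roughly Velocity(linear=0.58, angular=0.08)
```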


It does this using two Kinect cameras. One scans the environment ten times per second, building a basic map. The on-board computer uses this to spot obstacles, doorways, and other possible destinations that a user might want to head towards. Five times every second it generates a range of different routes that the wheelchair user could take – these are used to make sure that the wheelchair’s movement is smooth.
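As a rough illustration of how that two-rate loop might be structured – a map refreshed ten times a second, candidate routes reconsidered five times a second – here is a hedged sketch. Every function and constant name is invented for the example; the real Radhar software will differ.

```python
# Illustrative sketch of the two-rate loop described above: the map is
# refreshed ten times a second, candidate routes five times a second.
# Every name here is hypothetical; this is not the Radhar codebase.
import time

MAP_PERIOD = 1 / 10    # environment scan rate (10 Hz)
PLAN_PERIOD = 1 / 5    # route generation rate (5 Hz)

def update_map(grid):
    """Stand-in for folding a Kinect depth frame into an obstacle map."""
    return grid

def generate_routes(grid, joystick):
    """Stand-in for proposing a few smooth candidate trajectories."""
    return [joystick]  # trivially: just follow the user's command

def run(duration=1.0):
    grid, joystick = {}, (0.5, 0.0)   # dummy map and (speed, turn) command
    last_map = last_plan = 0.0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        now = time.monotonic()
        if now - last_map >= MAP_PERIOD:
            grid = update_map(grid)           # mark obstacles, doorways, free space
            last_map = now
        if now - last_plan >= PLAN_PERIOD:
            best = generate_routes(grid, joystick)[0]
            last_plan = now
            # A real chair would now steer along `best`; here we just note it.
        time.sleep(0.01)

run()
```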

One of Radhar’s videos shows trials run at a school in Belgium to see which wheelchair users might benefit from such a system.

A further clever bit is that the second camera watches the wheelchair user, on the assumption that the direction of their gaze correlates with their intended destination. It also recognises when part of their body is sticking out to the side, and warns of the danger of hitting something as a result. Think of it a little like the driver-assistance systems in many cars that kick in when the driver skids over a patch of ice or brakes suddenly.
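As a toy illustration of that second part – widening the collision check when a limb protrudes beyond the chair’s frame – here is a short sketch. The measurements, names and warning margin are made up for the example, not taken from Radhar.

```python
# Toy illustration of widening the collision check when a limb protrudes.
# Measurements and names are invented for this sketch.

CHAIR_HALF_WIDTH = 0.35  # metres from the chair's centreline to its edge

def clearance_needed(limb_offsets_m):
    """Return the half-width to use for collision checks.

    limb_offsets_m: how far (in metres) any tracked body part extends
    beyond the chair's centreline, as estimated from the user-facing camera.
    """
    widest = max(limb_offsets_m, default=0.0)
    return max(CHAIR_HALF_WIDTH, widest)

def should_warn(gap_to_obstacle_m, limb_offsets_m, margin_m=0.10):
    """Warn if the gap ahead is tighter than the (possibly widened) chair."""
    return gap_to_obstacle_m < clearance_needed(limb_offsets_m) + margin_m

# An elbow sticking out 0.45 m makes a 0.5 m gap too tight to pass safely.
print(should_warn(gap_to_obstacle_m=0.5, limb_offsets_m=[0.45]))  # True
```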

The idea is that this is a system you’d have installed in your wheelchair when you’re young, and you’d use it for the rest of your life. It would learn from you, but you would learn from it, too – and, to paraphrase Futurama, it will work best when it feels like it’s not doing anything at all.
