Posted on July 25, 2018 by staff

AI Briefing: Dance like no AI is watching


Google’s latest ‘learning experiment’ is one you might want to try in the privacy of your own home.

Using a webcam, the search giant’s latest app ‘Move Mirror’ tracks 17 points on your body to interpret the unique shapes you’re pulling.

Whether you rely on classics like the running man and ‘the robot’, or you’re a master ‘flosser’, doesn’t really matter.

The app will match your pose against its 80,000-strong photo archive to find someone else in the world who shares your moves.
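The matching step can be pictured as a nearest-neighbour search: each detected pose becomes a vector of 17 keypoint coordinates, and the app looks for the archive photo whose vector lies closest to yours. Below is a minimal, hypothetical sketch of that idea using cosine distance; the function names and the flat `(x, y)` vector layout are illustrative assumptions, not Move Mirror's actual implementation (which reportedly adds confidence weighting and a search tree for speed).

```python
import math

def normalize(pose):
    """L2-normalize a flat pose vector so matching ignores overall scale."""
    norm = math.sqrt(sum(v * v for v in pose))
    return [v / norm for v in pose] if norm else pose

def cosine_distance(a, b):
    """Distance between two L2-normalized pose vectors (0 = identical pose)."""
    dot = sum(x * y for x, y in zip(a, b))
    # For unit vectors, Euclidean distance = sqrt(2 * (1 - cosine similarity))
    return math.sqrt(max(0.0, 2.0 * (1.0 - dot)))

def best_match(query, archive):
    """Return the index of the archive pose closest to the query pose."""
    q = normalize(query)
    return min(range(len(archive)),
               key=lambda i: cosine_distance(q, normalize(archive[i])))
```

A real pipeline would fill `query` and each `archive` entry with the 34 numbers (17 keypoints times x and y) produced by a pose-estimation model such as PoseNet, then call `best_match` for every webcam frame.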

You can then export your performance, with the accompanying fellow dancers following your routine, as a video or GIF – or just keep it to yourself!

A sign of things to come

Short of finding a magic lamp, there’s no better way to have your wishes granted than with a voice assistant. The likes of Amazon’s Alexa and Apple’s Siri make interacting with technology feel natural and easy.

Digital marketers are currently hailing this tech as The Next Big Thing, but that promise rings hollow if you can’t hear or speak.

For those of us who sign, a voice assistant is the least accessible tech you could imagine.

Enter all-round tech genius Abhishek Singh, who has developed the incredible technology you see below: a sort of translator for Alexa which interprets sign language and shows its responses onscreen.

The fall guy

It’s a good job that we haven’t yet taught robots to feel embarrassment because they’re clumsier than a three-legged horse on a diving board.

It’s not their fault. We’ve had them try to walk before they know how to fall.

Kris Hauser is trying to fix that before they realise how silly they look.

“If a person gets pushed toward a wall or a rail, they’ll be able to use that surface to keep themselves upright with their hands,” said Hauser, associate professor of electrical and computer engineering and of mechanical engineering and materials science at Duke University in North Carolina.

“We want robots to be able to do the same thing.”

Microsoft improves its AI inclusivity

Global tech giant Microsoft has announced that its facial recognition technology has been improved to better recognise a wider set of skin colours.

Facial recognition technology has come under scrutiny for its less-than-perfect assessment of people with darker skin tones.

As the AI-powered technology is adopted into society, some fear that it could lead to unintentional bias.

“Collecting more data that captures the diversity of our world and being careful about how to measure performance are important steps toward mitigating these issues,” said Ece Kamar, a senior researcher in Microsoft’s research lab.

Robots enter racism debate

Humans transfer their racial biases onto robots, according to a new study.

Researchers at Monash University in Melbourne and Canterbury University in New Zealand wondered why most robots seem to be white and investigated whether people ascribed race to them – and whether this altered how they treated the robot.

They did this using the ‘shooter bias’ test, in which participants assume the role of a police officer, are shown images of people and must decide whether to shoot them or not. The original study showed participants of various backgrounds images of people who were white or black, and armed or unarmed. In this study, participants were instead shown robots of two different colours.

“If you ask anybody, ‘Are you racist?’ of course they will say no,” one of the study’s authors Dr Christoph Bartneck told RN Drive.

“What we observed is that the exact same bias observed with humans can also be observed with robots. People changed their behaviour towards brown robots in comparison to white robots.

“There’s no particular reason why robots all should be white. Racism in general is a big problem and robot developers have a responsibility not to make it any worse.”