Earlier this month we learned about the tragic murder of hitchBOT, a friendly hitchhiking robot.
Now researchers in Japan — always ahead of the curve when it comes to mechanoids — may have documented the type of human-on-bot aggression that led to hitchBOT's demise.
The scientists dropped off a polite robot in a Japanese shopping mall, fully anticipating random violence. Sure enough, gangs of children beat the bolts out of their mechanized friend.
As Kate Darling at IEEE Spectrum reports in not-so-unbiased language, the new study shows that:
[...] in certain situations, children ~~are actually horrible little brats~~ may not be as empathetic towards robots as we’d previously thought, with gangs of unsupervised tykes repeatedly punching, kicking, and shaking a robot in a Japanese mall.
But the scientists didn't stop at simply documenting the attacks; they turned the data into artificial intelligence (AI) that helps robots anticipate and avoid dangerous hordes of children.
The researchers chose a Robovie II as their bait bot. Robovie II is an assistive humanoid robot on wheels designed to help elderly people with tasks such as grocery shopping.
On nine of the 13 days, however, 28 children in total showed "serious abusive behaviors" toward the innocent, bug-eyed machine.
In a follow-up study, they used this data to build a computer model that helps robots predict the likelihood of being clobbered by kids — and get the hell out of there if things look bad.
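To get a feel for what such a model might do, here is a minimal, purely illustrative sketch in Python. The features, weights, and threshold below are assumptions for the sake of the example, not values from the actual study; the real system was built from the mall observation data.

```python
# Hypothetical, heavily simplified sketch of an abuse-avoidance policy.
# Feature names, weights, and the threshold are illustrative assumptions,
# NOT parameters from the published model.

from dataclasses import dataclass
from math import exp


@dataclass
class Surroundings:
    n_children: int             # children within interaction range
    n_adults: int               # adults within interaction range
    mean_child_distance: float  # average distance to those children, in meters


def abuse_probability(s: Surroundings) -> float:
    """Logistic estimate: more unsupervised, close-by children -> higher risk."""
    score = 0.9 * s.n_children - 1.2 * s.n_adults - 0.5 * s.mean_child_distance
    return 1.0 / (1.0 + exp(-score))


def plan_action(s: Surroundings, threshold: float = 0.5) -> str:
    """Flee when the estimated risk of abuse exceeds the threshold."""
    if abuse_probability(s) > threshold:
        return "retreat"   # e.g., move toward nearby adults or mall staff
    return "continue"


# A supervised child at a distance poses little estimated risk...
print(plan_action(Surroundings(n_children=1, n_adults=2, mean_child_distance=3.0)))
# ...while an unsupervised gang up close triggers the escape behavior.
print(plan_action(Surroundings(n_children=4, n_adults=0, mean_child_distance=1.0)))
```

The design intuition matches the study's finding: abuse came from groups of unsupervised children, so the presence of adults drives the risk estimate down while clusters of nearby kids drive it up.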
Capable helper bots of the future won't come cheap, especially not at first; by some estimates, an early model may cost about $50,000. So consumers and insurers will demand AI that's good enough to help automatons avoid damage, whether from wandering into oncoming traffic or being beaten up by packs of rogue children.
If there's one other thing we learned from the study, it's that young kids may possess frightening moral principles about robots.
Granted, the studies' sample sizes were small, and it's easy to skew a child's response to research questions (even if you're using tried-and-true techniques).
But it's more than a bit unnerving that nearly three-quarters of the 28 kids interviewed "perceived the robot as human-like," yet decided to abuse it anyway. And 35% of the kids who beat up the robot said they did so "for enjoyment."
The researchers go on to conclude:
From this finding, we speculate that, although one might consider that human-likeness might help moderating the abuse, humanlikeness is probably not that powerful way to moderate robot abuse. [...] [W]e face a question: whether the increase of human-likeness in a robot simply leads to the increase of children’s empathy for it, or favors its abuse from children with a lack of empathy for it
In other words, the more human a robot looks — without escaping the "uncanny valley" of robot creepiness — the more likely it may be to attract the tiny fisticuffs of children. If true, the implications could be profound, both in practical terms (protecting robots) and ethical ones (human morality).
Read about the full range of abuse dealt to mechanized helpers from the future at IEEE Spectrum, and see it for yourself in the video below.