pj_mukh 8 hours ago

Why does this article read like the robot actually got angry at a spectator? It did not; it does not have that capacity.

This was definitely a glaring safety issue, and the company should review all of its failure modes that show up in public, but an "emotional" response this was not.

  • glenstein 8 hours ago

    I understand the concern but I feel like it was on the right side of the line. The article says:

    >displayed aggressive behavior

    >swinging its arms in a manner described as aggressive and violent, similar to human behavior

    I can understand aggressive and violent as descriptions of behavior that don't necessarily (on charitable interpretation) imply an internal emotional state.

    • whamlastxmas 10 minutes ago

If I trip and fall and throw an arm out to catch myself instead of faceplanting, and my arm accidentally hits someone, that is not aggressive behavior. This is exactly what happens in the video: the robot lost its balance and was jerkily trying to correct itself. This article is terrible clickbait trying to stir controversy.

    • pj_mukh an hour ago

      Aggressive is totally the wrong word. Malfunction is the right word. So it's really waaay off the line.

  • TeMPOraL 8 hours ago

We could claim it was a self-preservation or collision-avoidance routine, perhaps mistakenly triggered by bad sensor input and/or a coding error. However, those are also the reasons a human could get "angry at the spectator" and display the same behavior.

Obviously there's a difference, but the similarities are uncanny.

  • digbybk 8 hours ago

    The AI emotion discussion is interesting, but at the end of the day, does it matter? The question is, is it safe? And if it's not safe, how unsafe is it?

    • janalsncm 5 hours ago

Right. We don't need to debate whether a loose tiger is angry, confused, or hungry in order to know that it's not safe around people. We can leave understanding the contents of its mind to the philosophers.

      What is very clear from the video is that the robots are an order of magnitude heavier and stronger than a person. That’s all you really need to know.

  • lurk2 8 hours ago

    It's important to note that a robot gone rogue is still a killer robot even if the robot doesn't hate you.

  • givemeethekeys 8 hours ago

    "As you can see from the smile on the robot's, let's call it face, your honor, this was clearly a friendly gesture."

  • DocTomoe 8 hours ago

That's the other side of anthropomorphic robots: they get anthropomorphic attributes associated with them. A 4-axis robot arm hitting a worker is machinery with bad safety settings and/or a worker who ignored them. A humanoid robot hitting a worker looks a lot like one worker hitting another.

Also take into account that these humanoid robots are specifically designed to integrate into spaces that robots did not previously occupy, which immediately means more potential contact between them and untrained personnel, even civilians.

I feel we are quickly approaching ST:TNG "The Measure of a Man" territory here: at what point does a machine stop being a machine and become a being? A strange, technological being, sure, but a being nonetheless. After all, there's a good argument to be made that we are essentially biological robots.

  • inglor_cz 8 hours ago

We anthropomorphize animals all the time; it won't be any different with robots.

If one wants to be picky, we can't be certain about other people's emotions either. A psychopath may not feel any anger when hitting you, or he might be feeling something very different from what a normal person would label "anger".

janalsncm 8 hours ago

The video is pretty unsettling, kind of showing how strong robots can be compared with humans. (We already knew this, but it’s good to remember.)

Reminds me a bit of the chess robot that broke a child’s finger: https://www.theguardian.com/sport/2022/jul/24/chess-robot-gr...

What annoyed me at the time was them describing the child as having broken some “rule” about waiting for the robot or something.

We should reject this framing. Robots need to be safe and reliable if we’re going to invite them into our homes.

  • Gracana 8 hours ago

    In an industrial setting, these robots would be placed behind interlocked barriers and you wouldn't be able to approach them unless they were de-energized. Collaborative robotics (where humans work in close proximity to unguarded robots) is becoming more common, but those robots are weak / not built with the power to carry their 50kg selves around, and they have other industry-accepted safeguards in place, like sensors to halt their motion if they crash into something.

  • scratchyone 8 hours ago

That article is absurd, describing the robot as completely safe while blaming the kid, saying "children need to be warned".

themanmaran 8 hours ago
  • pinkmuffinere 8 hours ago

    It is ridiculous to attribute intent to the motion, but I understand why people do — it really does give the impression of an aggressive, upset human. That’s unfortunate.

    • bumby 8 hours ago

There is an often unaddressed risk in robotics because there is a lack of theory-of-mind. We've evolved to intuit what other humans are thinking (based on words, body language, and other context), which helps us predict behavior and mitigate risk. Unfortunately we can't do the same with robots, so there is a potential for more latent risk (same as dealing with "crazy" humans, where our mental models fail to predict behavior).

IMO this means we won't be comfortable with robots in safety-critical applications until they are well, well beyond human capabilities. This is where I think the crowd that aims for "human-level performance" is wrong; society won't trust robots until they are much, much better than humans.

      • pinkmuffinere 7 hours ago

Ya, that makes sense to me; this is roughly how I feel about self-driving cars as well — I want very good proof that they outperform even the best drivers by a wide margin before I'll actually use one. I feel that my friends and I are better drivers than average, even though I know that's mathematically unlikely. So the self-driving needs to be _really_ good before it attracts me. I know this is irrational; what I feel does not obey rational rules.

    • themanmaran 8 hours ago

      Yea, likely this was some kind of trip + glitch that happened to look like an attack. But it really did have a "boxing" style movement.

I saw a video of the Unitree [1] robot doing a kung fu routine the other day. I imagine developers are constantly programming in pre-scripted moves, similar to all the Boston Dynamics demo videos; they're great for showing off movement. It's conceivable that someone could run the wrong demo routine. Imagine the Atlas robot doing its classic backflip in the middle of a crowd.

      [1] https://www.youtube.com/watch?v=iULi4-qz22I

    • bentcorner 8 hours ago

      Agree - it looks like low-quality robotics code, probably one-off written for this festival.

This article seems to try to ride the fears of AI and "bots are taking our jobs", but really this looks like plain old badly written software.

      Large machines operating near people should always have failsafes. Having handlers who are expected to drag the bot around IMO isn't enough.

    • janalsncm 8 hours ago

      Doing that unpredictably is almost worse. Functionally, the point of anthropomorphizing is to tell a story that makes things predictable. In other words, “unsafe if angry, safe otherwise”.

      But if you can’t tell if it’s “angry” then we have to assume it’s always unsafe. Of course this was always true.

    • givemeethekeys 8 hours ago

Many people with older brothers will recall times when the brother had no intention of actually hurting them. He was merely swinging his hands, moving closer... and closer.

    • randomfrogs 8 hours ago

      Don't anthropomorphize robots. They hate that.

  • usaphp 8 hours ago

It looks like the robot tripped on the barricade and rebalanced itself.

    • imhereforwifi 8 hours ago

That's what it looks like to me. It looked like it was trying to continue the handshake while the person was pulling back; the robot moved forward, stuttered, and tripped on the bottom of the barricade, causing it to lunge and try to stabilize itself.

  • datadrivenangel 8 hours ago

    Video of the robot punching someone in the crowd... Weird behavior.

wewewedxfgdf 8 hours ago

We need some sort of special police unit to liquidate renegade robots.

Those police officers need a catchy name.

  • daotoad 8 hours ago

    These heroic officers will have to deal with malefactors that want to rock and roll all night and party every day. This is why they will need an unlimited supply of waterfalls and sandwiches.

  • cholantesh 8 hours ago

    Liquidate sounds a tad aggressive, maybe 'retire'?

mattlondon 8 hours ago

Years ago when I was a student at uni I volunteered to take part in a research study with robots.

I went to a rented house near campus where they had a normal living room set up and sat me down on a dining chair in the room and handed me a box with a button on it.

"The robot will approach you. Just press the button when you feel like it is getting too close" they said.

They left the room so I was alone, and a few minutes later the wheeled robot entered the room and started slowly but deliberately to move towards me.

Let's just say the robot got too close.

I was sat there alone as the robot moved towards me. I was frantically mashing at the button but it did not stop until it actually collided with my feet and then stopped.

To this day I am not sure if it was meant to stop or not, or even if it was a robotics research project at all or actually a psychology research project.

In hindsight it was as terrifying as it sounds. Still, I got £5 for it.

  • bluSCALE4 8 hours ago

    Terrifying. I get the sensation playing VR games and have robots swing at me. Don't enjoy it at all.

  • DocTomoe 8 hours ago

5 quid? That was a psychology research project, good sir. I hope you got lab hours for it.

v9v 8 hours ago

Seems to me that it loses its balance and extends its arms in order to rebalance itself, similar to what's happening in their demo videos: https://www.youtube.com/watch?v=GtPs_ygfaEA?t=24 I've worked with their robot dogs before, and they kick their legs really fast when they sense they are falling over.
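That fall-recovery behavior can be pictured as a simple reflex controller. The sketch below is purely illustrative (the threshold, gain, and function names are invented, not Unitree's code): when estimated tilt exceeds a trip point, the controller commands a large, fast correction — which is exactly the kind of motion that can read as "aggressive" to a bystander.

```python
# Toy sketch of a fall-recovery reflex. All numbers are assumptions.
TILT_THRESHOLD_DEG = 15.0  # assumed trip point for the recovery reflex

def recovery_action(tilt_deg):
    """Return a corrective command proportional to how far we're tipping."""
    if abs(tilt_deg) < TILT_THRESHOLD_DEG:
        return 0.0  # balanced enough; no reflex fires
    # Fast correction opposite the lean. An unbounded gain like this is
    # what makes the resulting arm/leg motion look sudden and violent.
    return -2.0 * tilt_deg

print(recovery_action(5.0))   # -> 0.0 (stable, no action)
print(recovery_action(25.0))  # -> -50.0 (large, fast correction)
```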

bicepjai 5 hours ago

>> The manufacturer, Unitree Robotics, attributed the incident to a "program setting or sensor error." Despite this explanation, the event has heightened ethical and safety concerns regarding the use of robots in public venues. The local community is calling for urgent measures to ensure that robots' actions align with social norms, emphasizing the need for regulatory and legal frameworks to govern robot-human interactions.

I did not think I would see this in my lifetime after watching the Animatrix.

moribvndvs 8 hours ago

Tangentially, this is the sort of thing that generally bothers me most about AI. Well, second most thing. The first is it being abused by humans to do terrible things. The second is it being built and maintained by humans, where a thing can easily malfunction in ways the people building and maintaining them can’t comprehend, predict, or prevent, especially when it’s built by organizations with a “move fast and break things” mentality and a willingness to cut corners for profit. The torrents of half-broken tech we are already drowning in don’t exactly inspire confidence.

thallavajhula 8 hours ago

This was one of my major concerns when Elon announced Tesla Optimus. There's a real need for government regulation on the bot specs. I blogged about this a while ago.

Something like:

1. They shouldn’t be able to overpower a young adult. They should be weak.

2. They should be short.

3. They should have very limited battery power and should require human intervention to charge them.

4. They should have limited memory.

  • glitchcrab 7 hours ago

    I don't disagree with you, but at the same time if you cripple the robot too much then it has no value - no sane company would develop a product which nobody would want. That's commercial suicide.

    • thallavajhula 6 hours ago

True. I guess there has to be a balance to get to a sweet spot.

Havoc 8 hours ago

There is a video of it floating around. It's a sudden forward movement that certainly looks alarming, but I wouldn't call it "unexpectedly displayed aggressive behavior".

More like it's hardcoded to do something (maintain balance or whatever) without limits on how fast it can move to achieve the goal.

i.e. bad safety controls rather than malice
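The safety control Havoc is describing could be as simple as a velocity clamp between the balance controller and the motors. This is a minimal sketch under assumed names and limits (nothing here is from Unitree's actual stack): however aggressive the commanded correction, the speed that reaches the actuators is capped.

```python
# Minimal sketch: cap commanded joint speeds before they reach the motors.
MAX_JOINT_SPEED = 0.5  # rad/s; an illustrative cap for operating near people

def clamp_speed(commanded_speeds, limit=MAX_JOINT_SPEED):
    """Clamp each commanded joint speed into [-limit, +limit]."""
    return [max(-limit, min(limit, v)) for v in commanded_speeds]

# An aggressive balance correction gets capped before it can become a lunge:
print(clamp_speed([3.2, -4.1, 0.2]))  # -> [0.5, -0.5, 0.2]
```

The trade-off, of course, is that a hard cap also limits how quickly the robot can catch itself from a fall, so real systems have to balance recovery performance against bystander safety.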

cryptoz 8 hours ago

Reminds me of that time in Russia at the chess tournament, where they repurposed some industrial robot to play chess and it crushed a kid’s hand.

Also reminds me of when Uber got kicked out of California for testing self-driving cars, so they moved to Arizona and promptly killed a woman.

I guess it’s not surprising that safety is taking a back seat in robotics development everywhere in the world. It’s a mad race for profits of untold scale. But it would be so great if the companies that win would be the companies that don’t fumble on human safety, taking perhaps a slower approach but one that kills/maims fewer people.

  • inglor_cz 8 hours ago

For self-driving cars, safety is probably not taking a back seat; otherwise there wouldn't be much profit in them. We just have an unrealistic expectation that they can and must be 100 percent harmless, an expectation we would never extend to human drivers.

    The vast majority of previous transport tech, including horses and mules, was way more gory and dangerous than self-driving cars are.

    This includes quite recent developments. How many people died on a Segway?

    • throwanem 8 hours ago

      One? Heselden's the only one to come immediately to mind. That the owner of the company should drive its product off a fatally high cliff is embarrassing, but still quite literally n=1.

baq 8 hours ago

Mr Asimov would like a word.

i5heu 8 hours ago

What incredible clickbait. There is no "Incident" here, nothing "Raising Alarm" or "Shocking" about this.

"Robot in Tianjin stumbles." There, I fixed the title.

  • lcnPylGDnU4H9OF 8 hours ago

    Assuming it's true that it just stumbled and the "incident" part is a nothingburger, the headline as written is still literally accurate. It's also still worth discussing: was the robot appropriately cautious of its surroundings when it performed that maneuver? Perhaps it could not have been aware enough to avoid all potential damage and that could be another discussion.

    • i5heu 8 hours ago

At what point does a "thing that moves by itself," like a railway crossing or a thing that keeps itself upright, become a robot that needs to determine whether an action can lead to physical harm to something else or to itself?

I do not think this thing can be cautious, because it is a remote-controlled car with two legs; everything the "AI" part is doing boils down to keeping balance and locating the position it is hardcoded to grab.

Or in other words, there is an operator, as with an RC car or a real car.

limaoscarjuliet 8 hours ago

So it begins.

(Sorry, could not stop myself :-)

robomartin 7 hours ago

Hopefully this starts a discussion/trend toward failure-tolerant robotics. As we have seen with commercial aircraft, relying on a single sensor (or not being able to tolerate the failure of a single sensor) can spell trouble and even tragedy.

Having been involved in failure tolerant design for mechanical, electronic and software systems, I think I can say that this is an aspect of engineering that is well understood by those working in industries that require it.

Generalizing --perhaps unfairly-- I imagine that most engineers working on this class of robot have had little, if any, exposure to failure-tolerant design. Such designs cost more and require more design attention, analysis, and testing. However, as robots of many forms interact with humans, this type of resiliency will become critically important.

A practical home or warehouse robot that can lift and manipulate useful weights (say, 20 or 30 kg) will have enough power to seriously hurt someone. If a single sensor failure, disconnection, or error can launch it into uncontrolled behavior, the outcome could be terrible.
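One classic pattern from the failure-tolerant industries robomartin mentions is triple modular redundancy: read three independent sensors and act on the voted (median) value, so no single failed or disconnected sensor can drive the output. A minimal sketch, with invented names; real implementations add range/plausibility checks and fault reporting on the outvoted channel.

```python
# Sketch of 2-out-of-3 sensor voting: the median of three independent
# readings masks one arbitrarily wrong sensor. Illustrative only.

def vote(readings):
    """Return the median of exactly three sensor readings."""
    if len(readings) != 3:
        raise ValueError("triple redundancy needs exactly three readings")
    return sorted(readings)[1]

# One sensor fails high (e.g. a disconnected channel railing to a huge
# value); the voted reading stays sane:
print(vote([0.98, 1.02, 250.0]))  # -> 1.02
```

With a single sensor, that same 250.0 reading would have gone straight into the control loop — which is exactly the "uncontrolled behavior" failure mode described above.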

hooverd 7 hours ago

You should treat robots like you treat horses or lawnmowers. They're all surprisingly deadly.

yapyap 8 hours ago

safety regulations are written in blood