Collected by: Lilly Rody
“These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices,” the researchers said.
Instead, the team focused on error-related potentials, or ErrPs, which are generated when the human brain notices what it regards as a mistake.
As the robot indicates which choice it plans to make, the system uses ErrPs, read via machine-learning algorithms, to determine whether the human agrees with the decision.
The robot, picking up the reading from the machine-learning algorithms, could then correct its behaviour.
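The loop described above — robot proposes a choice, the ErrP reading says whether the human disagrees, robot corrects itself — can be sketched in Python. This is purely an illustrative assumption of how such a feedback loop might be wired together; the function names (`classify_errp`, `sorting_task`) and the thresholding stand-in for the team's trained classifier are hypothetical, not the CSAIL system's actual code.

```python
def classify_errp(eeg_window):
    """Stand-in ErrP detector: returns True if the EEG window looks
    like an error-related potential (i.e. the human disagrees).
    A real system would run a trained machine-learning model on EEG
    features; here we simply threshold a mock signal average."""
    return sum(eeg_window) / len(eeg_window) > 0.5

def sorting_task(robot_choice, eeg_window):
    """One round of the object-sorting task: if an ErrP is detected
    after the robot indicates its choice, the robot flips to the
    other bin; otherwise it keeps its original choice."""
    if classify_errp(eeg_window):
        # ErrP detected -> the human's brain flagged this as a mistake.
        return "right_bin" if robot_choice == "left_bin" else "left_bin"
    return robot_choice
```

The point of the sketch is that the human does nothing explicit: the correction is driven entirely by the passively recorded brain signal fed back into the robot's decision.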
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing.”
This robot can read your mind and do what you’re thinking
The robot can tell when it’s made a mistake in an object-sorting task by detecting a change in the signals.
Scientists are developing a robot that does what we’re thinking.
She said: “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word.
“A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”
It gets all embarrassed and blushes when it’s wrong, but then corrects its actions based on the person mentally agreeing or disagreeing with it.
In other words, the operator can warn the robot when it’s doing something wrong without consciously thinking about it.
The CSAIL team says that the ErrP signals are extremely faint, so the feedback loop needed some tweaking to get the right results.
The signal is sent automatically to the robot and it takes on the burden of learning instead of the human.
It’s one thing to order a robot to do this or that, but it’s another thing entirely to get it to do it right.
Direct control from brain to machine may do away with the problems of a mechanical interface or teaching a robot to respond to voice commands, but by itself it isn’t enough.