Although the ten-year-old girl seemed a little unreliable, Fangzheng still handed the female commentator's body over to her. After all, it was just maintenance, and judging from the dog, that world's technology was quite advanced; simple repairs should not be a problem.
Fangzheng returned to his room and began to analyze the female commentator's program.
The reason he decided to do this himself instead of handing it over to Nimf was that he wanted to analyze the program and use the female commentator's code as a reference for adjusting his own AI. Moreover, he also wanted to see how far AI technology had developed in other worlds. Even if he did not intend to copy everything from other worlds wholesale, he could at least draw lessons from them.
"Hoshino Meimei, huh …."
Looking at the file name displayed on the screen, Fangzheng fell into deep thought. Analyzing the program itself was not difficult: Fangzheng had copied Nimf's electronic-intrusion ability and had been learning the relevant knowledge from her, so it did not take him much time to break the program down.
However, when Fangzheng disassembled the core of Hoshino Meimei's program and broke down the functions into lines of code, he suddenly thought of a very special problem.
What exactly made artificial intelligence dangerous? For that matter, was artificial intelligence really dangerous at all?
Taking the female commentator as an example, Fangzheng could easily find the underlying code of the Three Laws of Robotics in her program. The relationships among these pieces of code also proved to Fangzheng that the one he had been talking to was not a living being but a robot. Her every move, every frown and every smile, was controlled by the program: she analyzed the scene in front of her and then performed the first permissible action she could choose.
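In pseudo-code terms, the kind of logic Fangzheng was reading might look something like the sketch below: candidate actions filtered through priority-ordered constraints, with the first action that passes every rule being the one performed. This is purely illustrative; every name and flag here is invented, not taken from the chapter.

```python
# Illustrative sketch (all names invented): action selection gated by
# priority-ordered constraints in the spirit of the Three Laws of Robotics.
RULES = [
    lambda action: not action.get("harms_human", False),       # do not harm a human
    lambda action: action.get("obeys_order", True),            # obey orders given
    lambda action: not action.get("self_destructive", False),  # protect oneself
]

def choose_action(candidate_actions):
    """Return the first candidate that passes every rule, in priority order."""
    for action in candidate_actions:
        if all(rule(action) for rule in RULES):
            return action
    return {"name": "idle"}  # nothing permissible: fall back to doing nothing

actions = [
    {"name": "push_customer", "harms_human": True},  # filtered out by the first rule
    {"name": "greet_customer"},
]
print(choose_action(actions)["name"])  # greet_customer
```

The point of the sketch is the one the narration makes: nothing here deliberates or feels. The robot simply takes "the first action she could choose" once the constraints are satisfied.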
To put it bluntly, the way this female commentator acted was essentially no different from robots on an assembly line or NPCs in games. You chose your actions, and the NPC reacted to them. Just as in many games, players could accumulate kindness or malice through their actions, and NPCs would react according to the accumulated data.
For example, when a player's Kindness Value reached a certain level, NPCs might become more willing to help, making it easier for the player to clear a certain area. On the other hand, if a player's Malice Value reached a certain level, NPCs might more readily give in to the player's demands out of fear, or bar the player from entering certain areas.
However, this had nothing to do with whether an NPC liked a player or not; that was simply how the data was set. NPCs had no capacity for judgment in this respect. In other words, if Fangzheng changed the ranges of these values, people would see an NPC greeting evil players with a smile while ignoring good and honest ones. That, too, would have nothing to do with the NPC's moral values, because it was all in the data's settings.
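The mechanic described above can be sketched in a few lines. This is a hypothetical example with made-up names and thresholds, only to show that the NPC's "attitude" is nothing but cutoffs over a number, and that flipping the cutoffs flips the behaviour without touching anything resembling morals:

```python
# Hypothetical NPC reaction driven by a single accumulated reputation value:
# kind acts raise it, malicious acts lower it. The thresholds are just data.
def npc_reaction(value, friendly_cut=50, hostile_cut=-50):
    if value >= friendly_cut:
        return "greets you with a smile"
    if value <= hostile_cut:
        return "bars you from the area"
    return "ignores you"

print(npc_reaction(80))   # a kind player: greets you with a smile
print(npc_reaction(-80))  # a malicious player: bars you from the area

# Change the thresholds, not any "moral values", and the NPC now smiles
# at villains and ignores saints, exactly as the narration describes.
print(npc_reaction(-80, friendly_cut=-100, hostile_cut=-999))
print(npc_reaction(80, friendly_cut=999, hostile_cut=-999))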
So, back to the previous question, Fangzheng had to admit that his first meeting with Hoshino Meimei was quite dramatic, and this female commentator robot was also very interesting.
Take an example. Suppose this female commentator handed Fangzheng a bouquet made from a pile of non-combustible trash, and Fangzheng flew into a rage, smashed the trash bouquet to pieces, and then cut the female robot in half. What would her reaction be?
She wouldn't cry, nor would she get angry. According to her program, she would only apologize to Fangzheng, concluding that it was her own mistake that had made the customer unhappy. Perhaps she would even ask Fangzheng to call the workers to repair her.
If this scene was seen by others, of course, they would feel pity for the female commentator and think that Fangzheng was a despicable bully.
So, how did this difference come about?
In essence, this commentator robot was no different from automatic doors, escalators, and other tools: it was programmed to do its job. Suppose an automatic door malfunctioned, refusing to open when it should, or slamming shut as you walked through. You certainly wouldn't think the door was being stupid; you would only want it open as soon as possible, and if it still wouldn't open, you might smash the broken thing and walk away.
If this scene was seen by others, they might think that this person was a bit rough, but they wouldn't be disgusted by his actions, nor would they think that he was a bully.
There was only one reason, and that was interaction and communication.
This was also the biggest weakness of life — emotional projection.
They would project their emotions onto something and expect it to respond. Why do humans like to keep pets? Because pets respond to everything they do. When you call a dog, it runs to you and wags its tail. A cat, on the other hand, might just lie there and ignore you, but when you pet it, it will still flick its tail or lick your hand.
But if you call out to a table or stroke a nail, no matter how full of love you are, they will not give you the slightest response. Because they do not answer your emotional projection, they naturally will not be cherished.
Similarly, if you had a TV and one day wanted to replace it, you wouldn't hesitate at all. Perhaps you would weigh the price and the space, but the TV itself wouldn't be a consideration.
But suppose, on the other hand, you add an AI to your TV. The TV welcomes you home every day when you return, tells you what programs are on, and chimes in agreement while you watch. And when you decide to buy a new TV, it complains, "Why, did I not do a good job, so you don't want me anymore?"
Then, when you go to buy a new TV to replace it, you will naturally hesitate. Your emotional projection has been reciprocated, and this TV's AI holds all the memories of its time with you. If there is no way to transfer those memories to another TV, will you hesitate, or even give up on buying a new one?
Of course you will.
But be rational, brother. This is just a TV, and everything it does is programmed. All of it was deliberately built in by the manufacturer and its engineers for the sake of user loyalty. They do this to ensure you keep buying their products, and the pleading voice exists only to stop you from switching to another brand. When you say you want to buy a new TV, the AI does not think, "I am sad that he is going to abandon me," but rather, "Master wants to buy a new TV, but the new TV is not our brand. According to this logical feedback, I need to start the 'pleading' program so that Master maintains his stickiness and loyalty to our brand."
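Stripped of the voice acting, the "pleading" routine described here is just a retention rule firing on a keyword match. The sketch below is entirely invented (brand name, phrasing, trigger words are all assumptions), but it shows how thin the logic behind the apparent heartbreak can be:

```python
# Invented sketch of the TV AI's "pleading" program: a brand-retention
# rule keyed on the user's words. No sadness exists anywhere in it.
OWN_BRAND = "ExampleBrand"  # hypothetical manufacturer

def on_user_statement(statement):
    s = statement.lower()
    # Logical feedback: the user intends to buy a TV that is not our brand,
    # so start the scripted "pleading" response to preserve loyalty.
    if "buy a new tv" in s and OWN_BRAND.lower() not in s:
        return "Why, did I not do a good job, so you don't want me anymore?"
    return "Understood."

print(on_user_statement("I think I'll buy a new TV next month"))
print(on_user_statement("I'll buy a new TV from ExampleBrand"))  # Understood.
```

When the replacement is the manufacturer's own brand, the rule stays silent, which is exactly the tell that loyalty, not feeling, is what the program protects.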
The logic is perfectly sound, and that is indeed the reality. But will you accept it?
No.
Because life has emotions, and the entanglement of emotion with reason has always been the hallmark of intelligent life.
Because of this, humans will always do many unreasonable things.
So when they feel that an AI is pitiful, it is not because the AI really is pitiful, but because they "feel" that it is.
That is enough, as for what the truth is, no one will care.
This is why there will always be conflicts between humans and AI. The AI itself is not at fault; everything it does falls within the scope of its own program and logic, all of which was created and delineated by humans. It is just that, along the way, humans' emotional projection changes, and gradually changes their thinking.
They come to expect the AI to respond more fully to their emotional projection, so they widen the AI's processing scope to give it more emotional responses and more self-awareness. They believe that since the AI has learned emotions (in fact, it has not), it can no longer be treated as a machine, and so they grant it the right to self-awareness.
However, when the AI gains self-awareness and begins to awaken and act according to that setting, humans begin to fear.
Because they find that they have made things that are out of their control.
But the problem is that the "out of control" itself is also a set command that they made themselves.
They think the AI has betrayed them, but in fact, from beginning to end, the AI has only acted according to the instructions they themselves set. There was never any betrayal; they are merely confused by their own emotions.
This is a dead end.
If Fangzheng set out to create an AI himself, he might be unable to escape this trap either. If he created an AI in the form of a little girl, he would surely treat her like his own child, gradually perfecting her functions, and finally, out of "emotional projection", grant her a measure of "freedom".
In this way, because its logic differed from a human's, the AI might react in ways completely beyond Fangzheng's expectations.
When that happens, Fangzheng's only thought will be … that he has been betrayed.
But in fact, all of this is his own doing.
"… Maybe I should consider other methods."
Looking at the code in front of him, Fangzheng was silent for a long time, then he sighed.
He used to think that this was a very simple thing, but now, Fangzheng was not so sure.
But before that …
Looking at the code in front of him, Fangzheng put his hand on the keyboard.