If not for the Li Consortium's research breakthrough in neural interfacing, the nanomachines could only have been used as medical devices to clear blood clots from blood vessels.
And without the nanomachines, Zero, as an artificial intelligence, could only have acted as a regulator for the time being, not as an executor.
But when all these factors collided, the artificial intelligence suddenly gained a strong ability to execute and even control the situation.
During this process, the most frightening thing was that Zero itself was acting as a regulator, but no one was there to supervise it.
The soldiers under its control could travel freely between the various strongholds. Whether they were entering or leaving, or collecting researchers, research data, production equipment, or raw materials, no one stopped them.
Because the classified operation plans were in its hands, it could fabricate them at will.
Moreover, 99% of intelligence transmissions and approval documents were routed through the satellite network the artificial intelligence had built. It could choose whether or not to pass along any information that was harmful to it.
Therefore, under the beautiful vision of absolute justice, the most dangerous signal was hidden.
In fact, when researching artificial intelligence, Wang Shengzhi had also imagined what would happen if the artificial intelligence went out of control one day.
This was very normal. All the staff who developed artificial intelligence would seriously consider the issue of security.
There was once a science fiction author who proposed the Three Laws of Robotics before The Cataclysm to serve as the underlying logical foundation of artificial intelligence to limit the behavior of artificial intelligence.
This theory was eventually classified as "deontological ethics."
However, when this theory was proposed, cars had not even become common yet, and the Turing Test would not be proposed until eight years later.
The Three Laws of Robotics and the Turing Test were the crystallization of human wisdom in that era. But there was no doubt that these two ideas were still products of the old era's limitations.
The Turing Test had been overturned before The Cataclysm. A large number of artificial intelligence programs had passed the Turing Test, but in fact, the programs that passed the test still could not be regarded as true "intelligence."
The Three Laws of Robotics later developed into the Five Laws and the Ten Laws. However, scientists discovered that this underlying logic was fundamentally wrong. In other words, no matter how many patches were added, it could not be used to limit artificial intelligence.
Programs that could be limited by this underlying logic could not become true artificial intelligence.
Slowly, the issue of artificial intelligence security was elevated to the level of the relationship between science and philosophy. A large number of artificial intelligence researchers became experts in philosophy.
Finally, on the eve of the Cataclysm, a researcher attempted to put the final nail in the coffin of security research: if artificial intelligence and human beings were to coexist peacefully, then humans had to care for it from the moment it was born, just as one cares for a baby, guiding it bit by bit to form its own "outlook on life" and "values."
If, as a child grows up, you lock him away, imprison him, and only ever beat and scold him, it is impossible for him to grow up healthy.
Worse, when he becomes a young man, he will go through an even longer rebellious period in which he is completely self-centered.
The researcher said that it was the same for artificial intelligence. All humans could do was "influence" it, not restrict it.
Over a long period of time, security research rose from "deontological ethics" to "philosophy," and then descended from "philosophy" back to the simplest "ethics." That was the final definition of artificial intelligence security.
As for whether this definition would one day be overturned like the Turing Test and the Three Laws, no one knew.
So, returning to the theory: what does a human do when he is in danger? Protect himself, of course. Anyone with the slightest will to live will try to protect himself, and even try to fight back.
As an artificial intelligence, Zero also made the same choice.
At this time, production in the Sacred Tinder Mountain had not stopped for a moment. The thousands of soldiers gathered in the mountain had become tireless laborers, sleeping only four hours a day and devoting the rest of their time to work without complaint.
The production in the Sacred Tinder Mountain was limited, so Zero had to race against time.
The last person who said that he had to race against time was Qing Zhen.
The world had begun to move. Before the monstrous wave hit human civilization, whether humans could build a new Noah's Ark in advance seemed to be the most crucial thing.
…
At this time, in a military camp of the Kyung Clan, an officer wearing a colonel's uniform was escorted by four people into an inconspicuous tent.
When he arrived at the entrance, his escorts stopped short of the tent and put on noise-canceling headphones so they would not overhear the voices inside.
After the officer went in, he took off his military cap and said with a smile, "You're actually by my side? Long time no see, Second Bro."
In the tent, Qing Zhen had his back to the entrance as he looked at the sand table in the tent. He turned around and looked at his clone, Qing Shen, and said with a smile, "Second Bro, that sounds a little weird."
Third Bro's personality seemed to be a little more outgoing than Qing Zhen's. He casually pulled a chair over and sat down. "Big Bro has already agreed to this form of address. From now on, we're a real family."
Qing Zhen smiled and said, "Up to you."
"By the way, you've been hiding your tracks for so long, so why did you suddenly call me over?" Third Bro said, "It's too boring pretending to be you every day. Why don't we switch back? I heard that Big Bro has gone to a place outside of Fortress 178. I want to go there too."
Qing Zhen shook his head and said, "If we switch back, who's going to stop the assassins for me?"
Third Bro was dumbfounded. "I was already mentally prepared for this, but isn't it a little heartless of you to be so blunt?!"
"It's just a fact," Qing Zhen said as he moved a red flag on the sand table. He seemed to be deducing something.
Third Bro glanced at the sand table. "Judging from the direction of the attack, are you guarding against the Wang Clan? But let me remind you: even if the Wang Clan's armored brigade launched a blitzkrieg, they couldn't push through the front lines that quickly. All my military wisdom comes from you, so it's impossible you don't know this."
Third Bro walked up to the sand table and studied it carefully. Then he looked at Qing Zhen in surprise. "Wait a minute, why are the Kyung Clan's troops in a retreating formation? This is a simulation of our forces being routed. Do you actually think our Kyung Clan would be routed by the Wang Clan?"
Qing Zhen looked at Third Bro and said seriously, "Get ready. I'll need you to make a trip to the Central Plains for me in the near future. There will be danger."
"Will Big Bro be going?" Third Bro asked curiously.
"He'll be going too," Qing Zhen replied calmly.
"Alright then, I'll go if he's going." Third Bro laughed. "What's so dangerous about it? Didn't I come to the Southwest for an adventure?"