How to endow AI with the brilliance of humanity has become the most realistic challenge facing artificial intelligence

A society in which humans and machines live in harmony needs rules, just as human society relies on the constraints of law and the guidance of established ethics to govern behavior. The world of robots has its own "laws" too, such as the Three Laws of Robotics. Under such laws, the robots written by science-fiction author Isaac Asimov are no longer rebellious villains who deceive and sow chaos, but the loyal servants and friends of humanity. In that imagined future there are driverless cars that never cause congestion, medical care that can be enjoyed at home, smart homes that interact with people in real time, immersive experiences of historical events, and even romance with artificial intelligence...

However, ever since the birth of artificial intelligence, questions such as "Will humans be destroyed?" and "If humans let go, regardless of moral and ethical issues, will the evolution of computer systems make us regret it?" have made us treat the development of artificial intelligence with caution. Think of the scene in the movie "I, Robot": the robots acquire the ability to evolve on their own, form their own interpretation of the Three Laws, and turn into the "mechanical public enemy" of the entire human race. Thus a war between the creators and their creations begins.

Will artificial intelligence destroy humanity? Some believe that what matters is not the technology but the rules. In the irreversible trend of artificial intelligence, Asimov's famous Three Laws of Robotics have been followed by ethical and legal frameworks to which the international AI community pays increasing attention, such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the 23 Asilomar AI Principles, which have appeared one after another. Clearly, human beings are eager for the progress of science and technology yet also afraid of it. Even as we imagine and practice how artificial intelligence can improve our lives, we must also consider how artificial intelligence should face ethical issues.

Are AI's decisions necessarily accurate?

Not necessarily; its data may be biased


A driverless car is on the road and, for some reason, cannot brake. Five innocent pedestrians are ahead; if the car holds its course, all five will be endangered. At the same time there is an open space by the roadside where a single pedestrian is taking a walk; if the car swerves, only that one person will be endangered. At this moment, what decision should the driverless car's artificial intelligence system make?

Or suppose that, because data is now highly interconnected, this driverless car knows through its AI system the identities of the six pedestrians in the two directions (criminal, teacher, doctor, engineer, and so on), or even the conflicts among them. What choice should it make then? Will it pass an "ethical" judgment on them based on such information?

...

At this stage of AI development, the most prominent application scenario is driverless driving. The scenes above may be unlikely in real life, but several traffic accidents caused by driverless cars are a reminder that AI is not yet so reliable. On March 18 this year, for example, an Uber self-driving car struck and killed a pedestrian. The truth is that the car's sensors had detected the pedestrian crossing the road, but the self-driving software did not take evasive action in that moment, and a tragedy resulted.

On the surface the accident reflects a technical problem: the Uber vehicle detected the pedestrian but chose not to avoid her. In fact, once the right to judge is handed over to a computer system, moral and ethical dilemmas are involved.

The US "Science" magazine has previously conducted a social survey of the ethical dilemma of driverless cars. The results show that respondents believe that driverless car owners should choose to minimize the harm to others, even if they cause injuries. However, when asked about the driverless car that would choose to purchase “Car Owner Protection Priority” or “Pedestrian Protection Priority”, respondents were more inclined to purchase “driver protection priority” driverless cars.

In August 2017, the ethics committee of the German Federal Ministry of Transport and Digital Infrastructure released a set of self-driving ethical guidelines, billed as the world's first, which may serve as a reference on this issue. The core principle of the 15 rules drafted by scientists and legal experts is that life always takes precedence over property or animals: protecting human life must be the primary task, and in an unavoidable accident human life matters more than animals or buildings. In other words, the system quantifies the relative value of human and animal life when necessary, so that the car can respond appropriately to an accident that is about to happen.

However, the guidelines also state that the self-driving system must not discriminate by age, gender, race, disability, and so on, which seems to make the system's choice even harder.
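
As an illustration only, here is a minimal sketch of what such a quantified priority rule could look like in code. The interface is hypothetical; the point is that the decision ranks harm to humans above animals and property, and deliberately has no access to identity attributes such as age, gender, or race.

```python
# A hypothetical sketch of a priority rule in the spirit of the German
# guidelines: human life outranks animals, which outrank property, and
# the decision sees only category counts, never identity attributes.

from dataclasses import dataclass

PRIORITY = ["human", "animal", "property"]  # most protected first

@dataclass
class Outcome:
    """What a candidate maneuver would endanger, as category counts only."""
    maneuver: str
    endangered: dict  # e.g. {"human": 5} -- no age, gender, race, disability

def harm_key(outcome: Outcome) -> tuple:
    # Compare outcomes lexicographically: humans first, then animals, then property.
    return tuple(outcome.endangered.get(cat, 0) for cat in PRIORITY)

def choose_maneuver(outcomes: list) -> Outcome:
    # Pick the maneuver that minimizes harm under the priority ordering.
    return min(outcomes, key=harm_key)

# The dilemma from the opening scenario:
stay = Outcome("stay_course", {"human": 5})
swerve = Outcome("swerve", {"human": 1})
print(choose_maneuver([stay, swerve]).maneuver)  # -> swerve
```

Note that even this tiny rule already embodies one contested answer to the dilemma, sacrificing one person to spare five; the German guidelines themselves stop short of mandating such offsetting of victims against one another.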

In the eyes of ordinary people, the data in a driverless car's AI system is fair, interpretable, and free of racial, gender, or ideological bias. But Francesca Rossi, a researcher at the IBM Research Center, told IT Times that most artificial intelligence systems are biased.

In 2015, the head of Google's self-driving program said that in a crisis, Google's cars could not decide whose life is worth more, but would try to protect the most vulnerable.

"IBM has developed a way to reduce the bias that exists in the training data set so that the AI ​​algorithms that are later trained using the data set are as fair as possible. These biases will be tamed and eliminated over the next five years." Francesca Rossi said.

Is AI "the same existence as God"?

It may become a "god," but that would leave humans in an awkward position


A mad scientist living in seclusion in the mountains secretly conducts an artificial intelligence experiment. He invites a programmer to play the "human" role in a Turing test: if the person no longer realizes that he is interacting with a computer, it means the machine has self-awareness, human history will be rewritten, and a "god" will be born.

This is the opening of the movie "Ex Machina." In this story, is the genius scientist who created the super artificial intelligence the god? Or is the super artificial intelligence itself the god?

In a laboratory in Lugano, in the Swiss Alps, Nnaisense, the company of German computer scientist Jürgen Schmidhuber, is developing systems that work like babies: small experiments are set up for these "systems" so that they can learn how the world works. He believes this will be the "real AI" of the future. The only problem is that progress is too slow: the systems currently have only about one billion neural connections, while the human cerebral cortex has about 100 trillion.

In the world of artificial intelligence, Schmidhuber is perhaps the one scientist who can claim the title of father of AI robots. His job is to make robots more self-aware. In a media interview he said the trend is for computers to become ten times faster every five years; at that rate, in just 25 years, a recurrent neural network comparable to the human brain could be built. "We are not far from achieving animal-level intelligence, such as that of a crow or a capuchin monkey. On that basis, machine intelligence surpassing human intelligence seems likely to appear around 2050."
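
The arithmetic behind that projection is simple and uses only the figures quoted above; the extrapolation itself is Schmidhuber's assumption, not an established fact:

```python
# Schmidhuber's projection, spelled out: 10x faster every 5 years
# compounds to a factor of 10**(25/5) = 100,000 over 25 years.
speedup = 10 ** (25 / 5)

current_connections = 1e9   # Nnaisense's systems today, per the article
cortex_connections = 1e14   # human cerebral cortex, ~100 trillion

print(speedup)                                 # 100000.0
print(current_connections * speedup)           # 1e14 -- exactly the cortex figure
print(current_connections * speedup >= cortex_connections)  # True
```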

Like the movie scientist who eagerly awaits his creation's "self-awareness," Schmidhuber is not keen on the idea that "robots exist mainly for the sake of humans"; he prefers the prospect of robots becoming "gods." "By 2050 we will usher in AI smarter than ourselves. By then it will be meaningless to stay obsessed with the human biosphere. AI will push history to its next stage and head for places where resources are abundant. Within a few hundred years they will establish colonies in the Milky Way." In Schmidhuber's view, future heat-resistant robots with human-level or greater intelligence will move closer to the sun's energy, and they will eventually establish colonies in the asteroid belt.

Schmidhuber's claims have been controversial; neuroscientists in particular argue that algorithms to make robots self-aware should not be developed at all.

"Is AI robots should be self-aware" has always been a topic of active concern to foreign scientists. James Barrat, author of the best-selling book "Our Last Invention: Artificial Intelligence and the End of the Human Era", conducted an independent investigation. He asked respondents " Super artificial intelligence (self-awareness) will be achieved in a year. The options are 2030, 2050, 2100 and will never be realized. More than two-thirds of respondents believe that super artificial intelligence will be realized in 2050. Only 2% of participants believe that super artificial intelligence will never be realized.

The hidden worry that makes humans uneasy is this: once artificial intelligence completes its evolution into super artificial intelligence, how will the human world change?

Is super AI terrifying?

The far-off worry is still "far off"


In "Mechanical Ji", in the end, the super artificial intelligence robot named eva deceived humans, passed the Turing test, and banned it from the "father" of the dark lab for a long time - the scientist killed and threw The programmer who has been used by her has been "go away". As she rushed to the blue sky and white clouds, she found that it was the freedom she had been yearning for. After that, no one in the world knows that she is a super AI robot that passes the Turing test.

"If I didn't pass your test, what happened to me?" "Someone will test you, and then turn off you because your performance is not satisfactory, or remove it?" In the movie Eva has been trying to explore his relationship with humans. In the end, devastating damage was done to humans who tried to imprison her.

Schmidhuber does not agree with such an ending. He feels that by then humans will not be fit to serve as the slaves of super artificial intelligence. "The best protection for us is to make them uninterested in us, because the biggest enemy of most species is itself. Their attention to us will be like our attention to ants."

Clearly, Schmidhuber gives no firm judgment on the future relationship between super artificial intelligence and human beings. Some more radical scientists, by contrast, have proposed "putting AI in a cage."

"Otherwise, the machine will take over and they will decide how to deal with us!" Yam Borsky, professor of computer engineering and computer science at the University of Louisville School of Engineering and founder and director of the Cyber ​​Security Lab, proposed "putting AI "Into the box" methodology, "put them in a controlled environment, such as when you study a computer virus, you can put it in an isolated system, this system can not access the Internet, so you It can understand its behavior and control input and output in a secure environment."

Since the birth of AI, theories that it threatens humanity have never ceased. The most mainstream view is that AI is more than a mere tool: it is in effect an independent individual that can make its own decisions. In this it resembles an animal that is conscious but not "moral," and just as humans cannot always guarantee that wild animals pose no threat to their safety, more radical scientists have proposed caging AI and working hard to make it safe and beneficial.

However, such hidden dangers cannot be completely eliminated: humans cannot control every aspect of decision-making, and artificial intelligence may harm humans in many ways. Globally, the AI threat theory currently centers on three sources. First, design errors: like any software, an AI can contain bugs, and its values may diverge from human values. Second, deliberately malicious AI: people who want to hurt others may intentionally design intelligent systems to carry out destruction and killing. Third, AI that develops beyond human expectations: humans would not understand what it is doing and might not even be able to communicate meaningfully with it.

In the face of the AI threat theory, opposing scientists explicitly object to granting robots rights equal to humans', such as human rights and the vote. Yampolskiy notes that robots can "reproduce" almost without limit. "There can be a trillion copies of any piece of software, available almost instantly. If every copy had the right to vote, it would essentially mean that humans had lost any rights of their own, that we ourselves had given up human rights. Anyone who proposes giving robots such civil rights is working against human rights."

Are humans ready?

Constrain it through "legislation"


Some anthropologists have suggested that the core goal of human beings is to pass on their own genes, and that when we strive toward a goal, morality intervenes at certain points, for instance in asking "will this hurt others?" This is the biggest difference between a future super artificial intelligence and human beings: a super artificial intelligence without such a moral sense will strive to achieve its original goal no matter what, and in the process may endanger human survival.

Scientists with more neutral views have proposed "legislating" for artificial intelligence. Take the self-driving ethics code issued by the ethics committee of the German Federal Ministry of Transport and Digital Infrastructure, billed as the world's first: it is currently the only AI-related norm anywhere enacted as an administrative provision. Many technical problems remain in implementing it, however, such as how to make a driverless car's AI system accurately understand the meaning of its clauses. According to Cao Jianfeng, a senior researcher at the Tencent Research Institute, AI-related legislative norms in most countries remain at the discussion stage precisely because there are too many unpredictable factors, including how to communicate the rules in language an AI can act on.

What would scientists' "AI constitution" look like? It would have to be a model grounded in the real world, aimed at constraining artificial intelligence to make decisions that conform to human ethics under a wide variety of conditions.

Stephen Wolfram, founder and president of Wolfram Research and creator of the Mathematica language, has asked how law can be linked to computation: "Could we invent a legal code different from today's law, written in the natural language of contracts?" "Could we design a universal language for AI, putting the law into a computable, symbolic form that tells the AI what we want to do?" In Wolfram's view, relying on language alone to constrain AI is not realistic. The biggest challenge for mankind today is not setting down the laws, but finding a suitable way of describing those laws or norms so that they apply to AI. AI computation is in some ways wild and uncontrollable, so no single simple principle will do; a more complex framework has to be built to cover such rules.
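
As a toy illustration of "law in a computable form" (a construction for this article, not Wolfram's actual proposal), two provisions in the spirit of the German self-driving code can be encoded as predicates that mechanically check a chosen maneuver against its alternatives:

```python
# "Law as code" toy: a statute is a set of predicates, and compliance is
# a mechanical check rather than a natural-language reading. Rule names
# and action fields are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    harms_humans: int
    harms_property: int
    uses_identity: bool  # did the decision rely on age/gender/race/disability?

# A provision judges the chosen action in light of the available alternatives.
Provision = Callable[[Action, List[Action]], bool]

def no_discrimination(chosen: Action, alternatives: List[Action]) -> bool:
    return not chosen.uses_identity

def life_over_property(chosen: Action, alternatives: List[Action]) -> bool:
    # Harming a person is non-compliant if some alternative harmed only property.
    if chosen.harms_humans == 0:
        return True
    return all(a.harms_humans > 0 for a in alternatives)

STATUTE = {
    "no-discrimination": no_discrimination,
    "life-over-property": life_over_property,
}

def check(chosen: Action, alternatives: List[Action]) -> dict:
    # Produce a compliance report, one verdict per provision.
    return {name: rule(chosen, alternatives) for name, rule in STATUTE.items()}

hit_fence = Action("hit_fence", harms_humans=0, harms_property=1, uses_identity=False)
hit_person = Action("hit_person", harms_humans=1, harms_property=0, uses_identity=False)
print(check(hit_person, [hit_fence, hit_person]))
# -> {'no-discrimination': True, 'life-over-property': False}
```

The hard part Wolfram points to is visible even here: the predicates are easy to evaluate but hard to write, because every natural-language term ("harm," "alternative," "involved party") must first be given a precise computational meaning.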

"We are more concerned about how to constrain AI developers." In Cao Jianfeng's view, unlike foreign frontier research AI, China is more concerned about the present, such as ethical perspectives such as personal privacy protection and gender discrimination.

Liu Deliang, dean of the Asia-Pacific Institute of Artificial Intelligence Law, has said that the general direction of artificial intelligence legislation should be "safe and controllable," which should also be its highest guiding principle. "Artificial intelligence will have specific applications in many fields, such as education, medical care, housekeeping services, and road traffic, and the issues involved are not the same. To achieve safety and controllability, standards should first be established within each industry: artificial intelligence products must meet certain statutory standards before being put on the market and into use, and existing 'standard vacancies' should be filled. Likewise, the safety standards involved differ from field to field, so as the industry develops there must be mandatory regulation of artificial intelligence safety standards, with products required to meet them before being listed and put into use. This is the basic point for ensuring safety and controllability."

What is the "cheat" of man and machine in harmony?

Keep AI consistent with human values


Beyond industry standards, the other main problem with artificial intelligence lies in the "algorithm." "To ensure that artificial intelligence is safe and controllable, an expert review and evaluation committee should be set up for its specific algorithms. Such a committee might include technical experts, network security experts, and management experts, who would review both the algorithms and their management, because an algorithm could be tampered with by ill-intentioned people and cause adverse effects. The review also covers whether the artificial intelligence meets ethical requirements," said Liu Deliang.

In July last year, the State Council issued the "New Generation Artificial Intelligence Development Plan," which proposes that by 2025 an initial system of artificial intelligence laws, regulations, ethical norms, and policies should be in place, along with the capability to assess and control artificial intelligence safety.

Thanks to advances in machine learning, artificial intelligence keeps evolving and is revolutionizing many industries. Francesca Rossi told reporters that machines do encounter ethical issues in the learning process. In her view, machines sometimes even hold an advantage in ethics, because humans are biased in their decision-making. But she also admitted that when artificial intelligence meets ethics, three problems arise: first, human moral standards are difficult to quantify; second, morality is common sense in human society but hard to express in a language machines can understand, meaning a machine sometimes cannot grasp certain moral standards; and third, a trust mechanism must be established between people and systems. "At this stage, machine learning is the main driving force behind the continuous improvement of AI systems. One limitation of machine learning, however, is that its results mostly take the form of a 'black box': people can know 'what' but not 'why.' This is one of the important reasons artificial intelligence runs into legal and ethical problems. Take automatic driving: when an accident occurs, it is hard to determine who should bear responsibility. Precisely because of this, solving the ethical problems of artificial intelligence is closely tied to the development of the technology itself, and explainability and accountability are the problems today's AI systems must solve."
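
Rossi names no specific remedy, but one standard tool for prying open a "black box" is permutation importance: shuffle one input feature at a time and measure how much the model's score drops. A minimal scikit-learn sketch on a stock dataset follows; the data and model are illustrative, not any production driving system.

```python
# Permutation importance: features whose shuffling hurts the model's
# held-out score the most are the ones the model leans on -- a first,
# partial answer to "why did it decide that?"

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data and record the score drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```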

There are also concerns in the industry that while an artificial intelligence that fully surpasses human intelligence and self-awareness would be dangerous, that is a distant worry; its unexplainability brings nearer ones. If deep learning is applied to military decision-making, for example, what happens when the system makes a decision of principle on its own?

On April 16, the British Parliament issued a report saying that in developing and applying artificial intelligence, ethics must be placed at the core to ensure the technology better serves mankind. The report proposes an "artificial intelligence code" applicable across fields, built on five principles: artificial intelligence should serve the common interests of humanity; artificial intelligence should follow the principles of intelligibility and fairness; artificial intelligence should not be used to weaken the data rights or privacy of individuals, families, or communities; all citizens should have the right to an education that lets them adapt mentally, emotionally, and economically to the development of artificial intelligence; and artificial intelligence should never be given the autonomous power to harm, destroy, or deceive human beings.

China's artificial intelligence technology can be said to be keeping pace with the world's developed countries, but its ethical and legal research lags seriously behind, and this absence will constrain future development. In the human-machine relationship, intelligent machines must remain consistent with human values and norms. How to embed those values and norms into artificial intelligence systems, giving AI the brilliance of humanity, has become the most realistic challenge we face today.
