Security & Risk of Artificial Intelligence


Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential to help civilization flourish. In this article you'll learn the fundamentals of artificial intelligence (AI), how it is applied to real-world problems, and the risks that come with it.

What is Artificial Intelligence (AI)?

AI is developing rapidly: it already powers everything from Siri to self-driving cars, web search, face recognition, industrial robots, and missile guidance. While science fiction often portrays AI as robots with human-like characteristics, AI encompasses anything from Google's search algorithms to IBM's Watson to autonomous weapons.

AI today is known as narrow AI (or weak AI) because it is designed to perform a single narrow task: only facial recognition, only internet searches, or only driving a car. The long-term goal of many researchers, however, is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Why Research AI Safety?

In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. One short-term challenge is preventing a devastating arms race in lethal autonomous weapons. In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing better AI systems is itself a cognitive task, so such a system could keep improving itself, potentially triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eliminate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history.
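As a purely illustrative sketch of why self-improvement can compound, here is a toy Python model; the 20% per-generation gain and the "capability" units are arbitrary assumptions, not estimates from the literature.

```python
# Toy model of recursive self-improvement (purely illustrative;
# the 20% per-generation gain and "capability" units are arbitrary).

capability = 1.0            # 1.0 = human level (hypothetical unit)
gain_per_generation = 1.2   # each generation designs a slightly better successor

for generation in range(25):
    capability *= gain_per_generation

print(f"capability after 25 generations: {capability:.1f}x human level")
# -> ~95.4x: small repeated design improvements compound quickly
```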

Some experts question whether strong AI will ever be achieved, while others insist that the creation of a superintelligence is guaranteed to be beneficial. Both of these possibilities are real, but so is the potential for an artificial intelligence system to deliberately or inadvertently cause great harm. We believe that research today will help us better prepare for and prevent such potentially negative consequences in the future.

Threats of Artificial Intelligence

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect it to become intentionally hostile or benevolent. Instead, there are two scenarios that show how AI might become a risk.

1: The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off", so humans could plausibly lose control of such a situation.

2: The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with our own, which is strikingly difficult. If you ask an obedient intelligent car to get you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we have a problem. A key goal of AI safety research is to never place humanity in such a position.
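To make the airport example concrete, here is a minimal, purely illustrative Python sketch of objective misspecification; the plans, numbers, and penalty weight are all hypothetical. An optimizer given only the literal goal ("fast") picks a plan humans would reject, while adding the omitted preference term changes its choice.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    minutes: float      # travel time: the stated objective
    discomfort: float   # unstated costs (safety, legality, comfort); 0 = fine

plans = [
    Plan("drive normally",           minutes=45, discomfort=0.0),
    Plan("speed through red lights", minutes=20, discomfort=9.0),
]

def misspecified_score(p: Plan) -> float:
    # Only what we literally asked for: "get me there fast".
    return -p.minutes

def aligned_score(p: Plan, weight: float = 10.0) -> float:
    # Speed plus a penalty for the preferences we forgot to state.
    return -p.minutes - weight * p.discomfort

print(max(plans, key=misspecified_score).name)  # -> speed through red lights
print(max(plans, key=aligned_score).name)       # -> drive normally
```

The point is not the code but the pattern: whatever term the objective omits, a sufficiently strong optimizer is free to sacrifice.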

Why the Recent Interest in AI Safety?  

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI. While human-level AI was long thought to be centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it could happen before 2060. Because AI has the potential to become more intelligent than any human, we have no sure way of predicting how it will behave. We can't use past technological developments as much of a basis, because we've never created anything with the ability to outsmart us. Humans control the planet now not because we are the fastest, strongest, or biggest, but because we are the smartest.

Top Myths about Advanced AI

A fascinating conversation is taking place about the future of artificial intelligence and what it will mean for humanity. There are controversies where the world's leading experts disagree, such as AI's future impact on the job market; if and when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. To help us focus on the interesting controversies rather than the misunderstandings, here are some common myths to clear up.

Timeline Myths

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? One popular myth is that we know we'll get superhuman AI this century. However, history is full of technological over-hyping: where are the fusion power plants and flying cars we were promised we'd have by now? A popular counter-myth is that we know we won't get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can't say with confidence that the probability is zero this century.

For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 (less than 24 hours before Szilard's invention of the nuclear chain reaction) that nuclear energy was 'moonshine', and in 1956 Astronomer Royal Richard Woolley called interplanetary travel 'utter bilge'. The most extreme form of this myth is that superhuman AI will never arrive because it is physically impossible.

Controversy Myths

Another common misconception is that the only people harboring concerns about AI are luddites who don't know much about it. When Stuart Russell, author of the standard AI textbook, mentioned this during his talk, the audience laughed. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don't need to be convinced that the risks are high, merely that they are non-negligible. The media have made the AI safety debate seem more controversial than it really is. As a result, two people who only know about each other's positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates's position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent.

Myths Related to Risks of Superhuman AI

The fear of machines turning evil is another red herring: the real worry isn't malevolence but competence. Humans don't hate ants, yet we are more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill in the way, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants. A related misconception is that machines can't have goals. Machines obviously can have goals in the narrow sense of exhibiting goal-oriented behavior. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious.
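As a purely illustrative sketch of goals "in the narrow sense", the toy Python loop below (all numbers arbitrary) steers a state toward a target by gradient descent. It reliably exhibits goal-oriented behavior, homing in on its target, with no consciousness or intent involved.

```python
# Toy "goal in the narrow sense": gradient descent on (x - target)^2
# steers x toward target with no awareness involved. Numbers arbitrary.

target = 7.0
x = 0.0             # initial state
learning_rate = 0.1

for step in range(100):
    x -= learning_rate * 2 * (x - target)   # gradient of (x - target)^2

print(round(x, 3))  # -> 7.0: the system reliably homes in on its "goal"
```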

Finally, the robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that as long as we remain the smartest beings on our planet, we may also be able to remain in control.
