
Hritik Panchasara
Professor Stout
RHET 1302
1 November 2017

Are the Misconceptions Surrounding Artificial Intelligence Hampering Its Own Progress?

Are careers like financial analysis or telemarketing necessary for us humans to labor at? Could the Greek mythological figures be simulated and brought to life using a form of superintelligence? Our technology is on a trajectory of such magnitude that it could shape our future for the better. The program JARVIS from the movie Iron Man is a highly advanced artificial intelligence that managed everything related to technology for the protagonist. That something inorganic can be of such high value speaks to the future of our own technological race. Artificial intelligence is defined as a subfield of computer science wherein computers perform tasks for humans that we would normally think of as intelligent or challenging.

Envision a future where computers and machines carry out our daily human tasks with ease and solve complex issues without any human input. The ability to invent intelligent machines has fascinated humans since ancient times. Researchers are creating systems and programs that mimic human thoughts and try to do the things humans do, but is it here that they got it wrong? Humans have always been good at defining problems but not solving them. Machines, on the other hand, are polar opposites: their computational power helps them solve almost any problem, but not define one. It goes to show how these two aspects are interdependent, and why we look forward to the invention of superintelligence. But issues like creationism and negative typecasting beg the question of whether the misconceptions surrounding superintelligence are hampering its own progress. A few scholars, like Pei Wang, focus on the dynamics of a working model and the inaccuracies in it, while scholars like Yoav Yigael question the emulation of human-like characteristics and abilities in machines.

This research paper will focus on the various incorrect approaches towards harnessing this technology, the consequences being derived from them, and the solutions that could be pursued.

One of the main issues surrounding artificial intelligence is that global leaders have an illusion of what it is supposed to be. They constantly try to emulate human beings in machines, when that has not been the goal of the technology since its inception. Take the wheel as an example. The wheel was meant to augment the human capacity for transportation, and it successfully paved the way for countless other inventions. In the same way, artificial intelligence was meant to augment our cognizance and help us function better; to solve the problems that we could only define.

The most common trend is the creation of humanoids like Hanson Robotics’ Sophia, an amalgamation of artificial intelligence and cutting-edge analytical software tuned for peak performance as a “question answering” machine; it is, more than anything, an “andro-humanoid” robot. Elsewhere, IBM’s drive to replicate human nature has not only been unsuccessful but has also become a financial burden on the company. IBM simply tried too hard to push Watson into everything, from recipes to health care, and the result has been five straight years of declining revenue.

Hence, this points to a misappropriation of resources: research is fed into pointless products and avenues for what is a broadly versatile technology. Artificial intelligence used to be a problem-solving machine into which commands were entered through a parameters box: human programmers would painstakingly handcraft knowledge items that were then compiled into expert systems. These systems were brittle and could not be scaled.

Since then, a quantum leap has changed the field of artificial intelligence. It pioneered the idea of superintelligence, but somewhere along the way that idea has been grossly misunderstood. Machine learning is what has revolutionised how we make and train AI. Where knowledge items and structures were once pre-defined by manual programming, machine learning enables us to produce algorithms that learn from unprocessed perceptual data. This process can be likened to how human infants learn.
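As a rough illustration of this shift, consider the minimal sketch below (my own, in Python; scikit-learn and its bundled digits dataset are assumptions chosen for brevity, not tools discussed in this paper). No knowledge items are handcrafted; the model infers its own rules from raw pixel intensities.

```python
# A minimal sketch of learning from raw perceptual data rather than
# hand-coded rules. scikit-learn's digits dataset stands in for "raw"
# input; nothing about digit shapes is programmed by hand.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# The model learns its own features from unprocessed pixel values.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```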

Is it possible for us to take a system of interconnected, co-dependent devices and process their data in meaningful ways, to pre-empt their shortcomings and avoid errors? Yes. Is it possible for us to build a machine so adept in our languages that we can converse with it as we do with each other? Yes. Can we build into our computers a sense of vision that enables them to recognize objects? Yes. Can we build a system that can learn from its errors? Yes. Can we build systems that have a theory of mind? This can be done using neural nets. Can we build systems that have a moral and ethical foundation? This we are still learning. AI is still miles from having the same potent, pan-domain ability to study and plan as a human being. Humans hold a neurological advantage here, one whose power we do not yet know how to replicate in machines.

Ever since AI’s inception, one question has been asked: is it something to fear? Every advancement in technology draws some apprehension upon itself. The invention of the television was criticised by people who complained it would make the working class procrastinate and grow dull. On the creation of e-mail, society grieved that the personal touch and formality of a letter would be lost.

When the Internet became pervasive, it was argued that we would lose our ability to memorize. There is truth in all these claims, but it is also these very technologies that define modern life: we have come to take information exchange for granted no matter the medium, and this in turn has expanded the human experience in substantial ways. The film “2001: A Space Odyssey” by Stanley Kubrick personifies all the stimuli we have come to associate with AI in one of its central characters, HAL 9000: a sentient computer programmed to assist the Discovery spacecraft on its voyage from Earth to Jupiter.

It was a flawed character in that it chose to value its mission objective over human life. Even though HAL’s character is rooted in fiction, it voices mankind’s fear of being subdued by a being of superior intelligence who is apathetic to our humanity. The AI that researchers and scientists are trying to make today is very much along the lines of HAL, but without its single-minded, nuance-free pursuit of its objective. This is a hard engineering problem; to quote Alan Turing, “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

To build a safe AI, we need to emulate in machines how humans think. This is a task that seems beyond impossible, but it can be broken down into three simple axioms. The first axiom is altruism: the AI’s only goal is to maximize the realization of our objectives and of our values. Values here do not mean values that are distinctly intrinsic, extrinsic, or purely moral and emotional, but a complex mixture of all of the above, as we humans are not binary when it comes to our moral compasses. This actually violates Asimov’s law stating that a robot must preserve its own existence; under this axiom, preserving its existence is no longer its priority whatsoever. The second axiom is humility.

It states that the AI does not know what our human values are: it must maximize them while remaining uncertain about what exactly they are. This ambiguity about our values is our advantage here, as it helps us avoid the problem of the single-minded pursuit of an objective, like HAL’s. In order to be of use, the AI has to have a rough impression of what we want, and it acquires this information predominantly by observing our choices.

The question, then, is what happens if the AI is uncertain about its objective? It reasons differently. It considers the scenario where we could turn it off, but only if it is doing something wrong. The AI does not know what “wrong” is, yet it reasons that it does not want to do it; in this scenario we can see the first two axioms in action.

Hence it should let the human turn it off. Statistically, you can estimate the incentive the AI has to permit us to turn it off, and it is directly proportional to the degree of uncertainty about its objective. When the AI is turned off, the third axiom comes into play: it infers something about the objectives it should be pursuing, because it infers that what it did was not right. We are factually better off with an AI designed this way than with an AI built any other way. The scenario above depicts what humans endeavour to accomplish with human-compatible AI.
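The off-switch reasoning above can be made concrete with a toy calculation (my own sketch, not a published model; the numbers and the Gaussian belief are invented for illustration). A machine unsure of the human payoff U of its action gains by deferring to a human who can switch it off whenever U is negative, and the gain grows with its uncertainty:

```python
# Toy model of the off-switch argument: the robot's incentive to let a
# human turn it off grows with its uncertainty about its objective.
import numpy as np

rng = np.random.default_rng(0)

def incentive_to_defer(sigma, n=100_000):
    """Expected gain from deferring, for a belief U ~ Normal(0.5, sigma)."""
    U = rng.normal(0.5, sigma, n)        # robot's belief about human payoff
    act_now = U.mean()                   # acting immediately yields U
    defer = np.maximum(U, 0).mean()      # the human stops it whenever U < 0
    return defer - act_now

for sigma in (0.1, 0.5, 1.0, 2.0):
    print(f"uncertainty {sigma:.1f} -> gain from allowing shutdown: "
          f"{incentive_to_defer(sigma):.3f}")
```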

This third axiom draws apprehension from the scientific community, because humans behave badly. A lot of human behaviour is not only displeasing but also wrong, which raises the fear that any AI based on human values will corrupt itself in the same way humans have. What one must remember is that just because the maker behaves poorly does not mean the creation will mimic that behaviour. The fundamental goal of these axioms is to provide nuance for why humans do what they do and make the choices that they make; the final goal is to allow the AI to predict, for any person, the outcome of their actions and choices as accurately as possible. The bigger problem now is how we feed all our values and morals, and the nuances associated with them, into an AI that is essentially an inference engine at this point. Doing this the old-school way, by manually defining every knowledge item, would be impractical. Instead, we could leverage the power of AI itself.

We know that it is already capable of processing raw perceptual data at blinding speeds, so we can use its intelligence to help us help it learn what we value, and its incentive system can be fashioned so that it is incentivised to pursue our ethics, or to perform actions it calculates we would approve of, using the three axioms stated above. In this way we tackle the difficult problem of value-loading an AI with the resources of an AI.
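As a concrete, if simplified, picture of learning values from observed choices, here is a small sketch (my own illustration; this paper commits to no particular algorithm, and the features, weights, and logistic choice model below are invented for the example). Hidden value weights are recovered purely from watching which of two options a person picks:

```python
# Recovering a person's hidden value weights from observed choices alone.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])      # the human's hidden values

# Observe 500 pairwise choices between randomly generated options.
A = rng.normal(size=(500, 3))
B = rng.normal(size=(500, 3))
chose_A = (A @ true_w > B @ true_w).astype(float)

# Fit weights by gradient ascent on a logistic model of the choices.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(A - B) @ w))   # P(person prefers option A)
    w += 0.05 * (A - B).T @ (chose_A - p) / len(A)

print("recovered direction:", np.round(w / np.linalg.norm(w), 2))
print("true direction:     ", np.round(true_w / np.linalg.norm(true_w), 2))
```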

It is possible to build such an artificial intelligence, and since it will embody some of our morals, the fear people have of an AI of this capacity is baffling. In the real world, constructing a cognitive system is fundamentally different from programming an outdated software-intensive system. We do not need to program them; programmers teach them. To teach an AI how to play chess, we have it play the game a thousand times, and in the process we also teach it how to discern a good game from a bad one. If we want to create an AI medical assistant, we will teach it endocrinology while simultaneously infusing it with all the complications in a person that could lead to the underlying symptoms. In technical terms, this labeled experience is called ground truth. In programming these AIs, we are therefore teaching them a sense of our morals, and in such cases humanity must trust such an AI as much as, if not more than, an equally well-trained human.
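To illustrate what “ground truth” means in practice, here is a small sketch (hypothetical; the chess features below are invented stand-ins, and scikit-learn is an assumed convenience): past games are labeled good or bad, and the system learns to score new games from those labels alone.

```python
# "Ground truth" in miniature: labeled past games teach a model to
# judge new ones. The three features are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Each finished game is summarised by a few numeric features
# (say, material balance, mobility, king safety).
games = rng.normal(size=(1000, 3))
# The ground-truth labels: a human (or the final result) marks each
# game as good (True) or bad (False).
labels = games @ np.array([1.0, 0.6, 0.8]) + rng.normal(0, 0.5, 1000) > 0

scorer = LogisticRegression().fit(games, labels)
new_game = rng.normal(size=(1, 3))
print("estimated chance this is a good game:",
      round(scorer.predict_proba(new_game)[0, 1], 2))
```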

In his book Superintelligence, the academic Nick Bostrom reasons that AI could not only be dangerous to humanity, but that its very existence might one day spell an existential crisis for all of humanity. Dr. Bostrom’s primary dispute with AI is that such cognitive systems learn on digital time scales, which means that soon after their inception they will have inferred and deciphered all of human literature. This alludes to their ravenous hunger for information, and there might come a day when such a system ascertains that the objectives set for it by humanity no longer align with its own objectives and goals.

Dr. Bostrom is held in high regard by figures of immense stature such as Elon Musk and Stephen Hawking. With all due respect to these academics and philosophers, I feel that their assumptions about AI are erroneous to an extent. Consider the example of HAL above: it was only a hazard to the Discovery crew insofar as it was in command of every feature of the Discovery spacecraft.

This is where Dr. Bostrom falters, in assuming that the AI would have to have control over all of our world. The popular stereotype of Skynet from “The Terminator” is a prime example of such a scenario: in the movie, a superintelligence eventually took command of mankind by turning all machines against humanity.

However, we must remember that our goal with AI was never to build AIs that could control and harness the weather, direct and manipulate the tides, or command us whimsical and disordered humans. Furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And if the three principles stated above are used as guidelines in the formulation of this omnipotent AI, then not only do we not fear this AI, we cherish it, for it is built in our image, with our values and morals. We cannot protect ourselves from all random acts of violence; humans are unpredictable, and the truth is that some of us are extremists. But I do not think an AI could ever be a weapon that a non-governmental tertiary party could get its hands on, and for such parties to manufacture an AI is even more far-fetched, as the mobilization of resources and brainpower alone would raise enough red flags for the authorities of the world to stop any devious ploy to overthrow world order in its tracks.

Artificial intelligence is heading in multiple directions, and there is a lack of a centralised effort to develop and advance this science towards a neutral goal.

Moreover, humans anthropomorphize machines, and this leads them to believe that the flaws of the maker will be heightened in the flaws of the creation. There are still obscure problems concerning the neural cortex of any AI: how do we make it conscious, and what is consciousness? Questions like these need to be answered before we march onwards on our quest for an omnipotent AI. Furthermore, the decision theory for an AI is still in its infancy, so we have some way to go before we figure that out. These problems seem far too advanced and complex to tackle now, but the truth is that the research is already underway, and sooner rather than later we will witness the ushering in of the era of machine intelligence.

Works Cited

Wang, Pei. “Three Fundamental Misconceptions of Artificial Intelligence.” Taylor and Francis Online, 13 August 2007, https://pdfs.semanticscholar.org/1772/a04f8e5db77d69c8dd083761c1469f93ac2d.pdf. Accessed 13 November 2017.

Yigael, Yoav. “Fundamental Issues in Artificial Intelligence.” Taylor and Francis Online, 7 November 2011, https://www.researchgate.net/profile/Yoav_Yigael/publication/239793309_Fundamental_Issues_in_Artificial_Intelligence/links/5757cad208ae5c6549042e77/Fundamental-Issues-in-Artificial-Intelligence.pdf. Accessed 13 November 2017.

Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” New York: Oxford University Press, 2008, https://intelligence.org/files/AIPosNegFactor.pdf. Accessed 14 November 2017.

Hammond, Kristian. Practical Artificial Intelligence for Dummies. John Wiley & Sons, Inc., 2015. Accessed 14 November 2017.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. Accessed 9 December 2017.