The Development of Artificial General Intelligence and Superintelligent Artificial Intelligence, and Their Possibility of Displacing Human Intelligence
Lorenzo Zamora
Freshman, Parks College of Engineering, Aviation, and Technology
Abstract
This paper examines the possible technological point at which we could create artificial intelligence (AI), dubbed superintelligent AI, that surpasses human beings in all cognitive functions and domains. The first section discusses current research and progress regarding advances in AI, as well as what AI researchers seek to implement in the creation of AI. The second section presents the specific types of AI defined within the field of computer science, in addition to how they work and the theories behind their functions. The third section presents possible ways of reaching AI, in terms of software, at the more sophisticated level of superintelligence, explaining the types of AI involved and discussing how such superintelligent AI could be instilled with ethics so that it may be beneficial, and not malicious, for humanity. The fourth section presents the kinds of hardware necessary to support the superintelligent AI described and defined in the previous section. The fifth section discusses the meaning of being human in a post-humanistic future that would be run by such advanced AI and superintelligent AI, and the question of whether superintelligent AI should be brought to fruition. The sixth section introduces possible lines of research and inquiry that would likely be beneficial to the furtherance of AI, and particularly superintelligent AI. Finally, the seventh section summarizes the essential points of the paper and concludes it.
Introduction & Background
Artificial intelligence (AI) has long been a mainstay in literature and media, particularly science fiction, from a nationwide administrative AI that measures people’s likelihood to commit crime to the quirky AI of a helpful and humorous droid companion. These kinds of superintelligent AI, with their own sorts of personalities as well as flawless planning and logic capabilities, have been a goal since the conference held at Dartmouth College in 1956, at which the term ‘artificial intelligence’ was first proposed [1]. But with the power that superintelligent AI would bring come many fears about such ‘minds’ supplanting human intelligence as the dominant intelligence on Earth, or about such AI going rogue and turning on its human creators, effectively wiping humanity out of existence. This paper seeks to inform readers on the possible technological point at which we may be able to create superintelligent AI that is vastly superior to humans in all cognitive functions and domains, in addition to the methods and ways of reaching it, from both a hardware and a software perspective, with consideration as well of whether superintelligent AI presents a crisis to our existence as a species.
Before such matters can be addressed, however, what exactly is AI? AI is best understood as a collection of separate fields and approaches whose goal is to create machines and programs capable of performing tasks that require cognitive functions when humans perform them [2]. These cognitive functions and intelligent processes include such elements as sensibility, reason, and understanding [3]. In attempting to recreate such elements in AI, and more particularly in an effort to create superintelligent AI, I have adopted S. Mitsuyoshi and F. Ren’s definitions of the three elements, which they themselves proposed. In a paper published through the IEEE, they describe a new machine, the Sentience System Computer, that would replicate sentience (in terms of idea generation and intuition) and thus be capable of self-intention as humans are, which would be a great benefit in reaching superintelligence with AI. The paper also includes an algorithm for spontaneity, a procedural model of the human way of thinking, and recent results regarding their system. Mitsuyoshi and Ren thus present a comprehensive understanding of the elements needed to reach advanced AI such as superintelligent AI, and their definitions of the elements, which follow, are therefore my choice.
Firstly, ‘sensibility’ is the ability to feel intuitively the latent information that an object involves, without following the dictates of reason. Sensibility is also (1) the reaction that activates the senses, corresponding to a stimulus or a stimulus change, (2) the capability of sense organs to generate sense and perception, sparked off by an object, and (3) the psychological experience provoked by sense and perception. Sensibility therefore entails such integral qualities as ‘feeling,’ the most common word expressing sensitivity; ‘emotion,’ strong feeling such as anger, love, and hatred; ‘sentiment,’ a feeling based on logical thinking; ‘passion,’ strong feeling beyond logical judgment; ‘fervor,’ a heated or burning feeling; and ‘enthusiasm,’ strong intention for an argument, action, or proposal [3].
Secondly, ‘reason’ is the capability of judging based on logic without being moved by feeling or desire, as well as the mental action of knowing true from false, and good from evil, as universal order. Reason’s synonyms include ‘rational,’ ‘grounded,’ and ‘logical’; simply put, it means ‘not influenced by emotions.’ It is also worth noting that reason is not necessarily a better characteristic than sensibility. “For instance, the sentence, ‘He is moved more by his head than his heart,’ can be either applause or blame, depending on situation” [3].
Thirdly, ‘understanding’ is the capability of logical perception and theoretical thinking, and is thus positioned between reason and sensibility. ‘To comprehend’ is the mental process of reaching understanding, and ‘to appreciate’ is to understand and evaluate the true value of a thing [3].
Yet even with the three elements of sensibility, reason, and understanding identified and defined as a set of cognitive functions and intelligent processes necessary to create an AI as advanced as humans, or one far exceeding human intelligence and thereby becoming a superintelligent AI, it is quite apparent that current research has been unable to implement these elements so as to produce an AI of that magnitude. It is no wonder, then, that enthusiasm for these grand AI dreams, both within the AI profession and in society at large, has risen and fallen repeatedly. Yet despite these fluctuations, research and development have steadily advanced on various fronts within AI and allied disciplines [4]. Although successful implementation of the three elements has not yet been accomplished, current AI research is mostly focused on, and has discovered, ways to make computers perform very modest utilitarian cognitive tasks, while other AI research is dedicated to theoretical or methodological investigations that might, or might not, lead to practical applications in the future. Despite this spread of practical and theoretical research, though, spectacular progress in one area of AI research often does little to advance the others [2].
Thus, it is no challenge to see that development in the field of AI does not follow a straight path, but an unpredictable and uneven one; AI breakthroughs can occasionally emerge from lines of research that had previously been written off as disappointments. Take, for instance, deep learning (which is modelled after biological neural networks and uses a cascade of multiple layers of nonlinear processing units for feature extraction and transformation, with each successive layer using the output of the previous layer as its input) and reinforcement learning (which allows machines and software agents to automatically determine the ideal behavior within a specific context so as to maximize performance, with the agent learning by way of simple reward feedback), both of which are widely used in many AI systems today. As recently as the early 2000s, both technologies were curiosities studied by only a handful of academic researchers and, for all practical intents and purposes, did not really work. Yet, as Roy Amara, cofounder of the Institute for the Future, said, “[w]e tend to … underestimate the effect [of a technology] in the long run” [5]. Such words ring ironically true, for within several years these techniques had conquered problems many AI researchers expected would remain unsolved for the foreseeable future, thanks to advances in computing hardware that made them easier to exploit, thereby becoming staples of AI and moving the field one step closer to superintelligent AI [2].
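To make the reward-feedback idea behind reinforcement learning concrete, the following is a minimal sketch of tabular Q-learning, one simple instance of the paradigm described above. The five-state ‘chain’ environment, reward scheme, and all parameter values are invented here purely for illustration and do not come from the cited sources.

```python
# Minimal tabular Q-learning sketch: an agent improves its behavior from
# simple reward feedback alone. Environment and parameters are illustrative.
import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: move left (0) or right (1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Reward of 1.0 only for reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    s = 0
    for _ in range(100):                # cap episode length
        if s == N_STATES - 1:
            break                       # goal reached, end the episode
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2, r = step(s, a)
        # core update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: q[(0, act)]))  # learned first move: right (1)
```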
Types of Artificial Intelligence: Weak AI & Strong AI
In the previous section, it was noted that there are three essential elements necessary for reaching AI akin to ‘human level’ intelligence, and by extension superintelligent AI, but that current research has been unable to successfully implement these elements. A distinction has therefore been made in the field, with AI being classified into two types: weak AI and strong AI. Strong AI is considered to have ‘human-like,’ high-level cognitive abilities, such as sensibility, reason, and understanding, in addition to other elements like common sense, self-awareness, and creativity [1]. This is the type of AI that comes to mind when people think of AI from the various works of fiction mentioned in the first section. On the other hand, weak AI simulates human intelligent processes passively, without real understanding. Such AI is focused on the creation of software solving specific, narrowly constrained problems. A weak AI system need not understand itself or what it is doing, and it need not be able to generalize what it has learned beyond its narrowly constrained problem domain [4]. Thus, from the perspective of task-resolving ability, weak AI is designed to finish a particular task, while strong AI is usually understood to be a general AI system, also known as an artificial general intelligence (AGI), which has the ability to fulfill many kinds of intelligent tasks with ‘human levels’ of cognition and intelligence. It is rather difficult, though, to define what ‘human level’ means, especially when one starts thinking about potential highly general AI systems with fundamentally nonhuman-like architectures. If an AGI system has very different strengths and weaknesses than humans, but still possesses the power to solve complex problems across a variety of domains and to transfer knowledge flexibly between those domains, it may be hard to meaningfully say whether the system is ‘human level’ or not. Further, humans are not exactly the smartest in their decision making either, so ‘human level’ places a constraint, and possibly a strict limitation, on the future scope of AGI.
Defining, Reaching, & Instilling Ethics into Superintelligent AI
While AI can have such supreme cognitive abilities and intelligent processes, and therefore be classified as an AGI, it still will not reach the desired level of superintelligence, as that requires more than just ‘the brain.’ The AGI also requires ‘a heart’ in order to reach the true level of superintelligence befitting the title of superintelligent AI. Researchers have accordingly developed two theses about the future evolution of the value systems of advanced AGI systems. First, the Value Learning Thesis (VLT) argues a version of the idea that, if an AGI system is taught human values in an interactive and experiential way as its intelligence increases toward ‘human level,’ it will likely adopt these human values in a genuine way [6]. Second, the Value Evolution Thesis (VET) is a version of the idea that, if an AGI system begins with ‘human-like’ values and then iteratively modifies itself, it will end up in roughly the same future states as a population of human beings engaged in progressively increasing their own intelligence (e.g., by cyborgification or brain modification) [6]. If an AGI is able to embody these theses, it may be able to go even further and become an AGI meta-architecture known as the goal-oriented learning meta-architecture (GOLEM), with such a system being both steadfast (meaning that, over a long period of time, the GOLEM either continues to pursue the same goals it had at the start of the period or stops acting altogether) and massively, self-improvingly intelligent [7]. At that point the AGI, or the GOLEM in this case, would have reached true superintelligence and thus be deemed worthy of the classification of superintelligent AI, having surpassed human intelligence and cognitive functions while being instilled with human ethics and feeling.
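As a rough illustration of the steadfastness property, the toy sketch below accepts a candidate self-modification only when it is evaluated as not degrading pursuit of the original goal; otherwise the system keeps acting under its current program. The function names and scoring scheme are my own invention, and Goertzel’s actual GOLEM design [7] is far more elaborate than this.

```python
# Toy sketch of GOLEM-style "steadfast" self-modification: candidate
# self-modifications are adopted only if they preserve (or improve) the
# fixed goal's score. All names and the scoring scheme are invented here.
import random

def goal_evaluator(program):
    """Stand-in for the fixed goal: score a candidate program."""
    return sum(program)  # hypothetical fitness measure

def propose_modification(program):
    """Searcher: propose a random tweak to the current program."""
    candidate = list(program)
    candidate[random.randrange(len(candidate))] += random.choice([-1, 1])
    return candidate

program = [1, 1, 1]
baseline = goal_evaluator(program)
for _ in range(100):
    candidate = propose_modification(program)
    # Tester: accept only modifications that do not degrade goal pursuit;
    # otherwise keep acting with the current program (steadfastness).
    if goal_evaluator(candidate) >= baseline:
        program, baseline = candidate, goal_evaluator(candidate)

print(program, baseline)
```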
Future Hardware Systems Capable of Supporting Superintelligent AI
Despite the theories and methods for achieving superintelligent AI through such AGI systems as the GOLEM described in the previous section, all of the software aspects would be useless without hardware systems capable of the grand task of supporting such superintelligent AI, especially since our current technology greatly lacks the computational power needed to sustain the powerful functions and vast amounts of data that superintelligent AI systems would be running and accessing. The most promising, and most researched, candidate hardware is the quantum computer. Quantum computing makes direct use of quantum mechanical phenomena, such as superposition (in which a qubit exists in a combination of the 0 and 1 states at the same moment, rather than in only one of them) and entanglement (a phenomenon in which the quantum states of two or more objects must be described with reference to each other, even though the individual objects may be spatially separated), to perform operations on data. “It is expected to improve computational power for particular tasks such as prime factoring, database searching, cryptography, and simulation. Various approaches [are] being developed, but it is not yet clear which will have the best chances of success” [9].
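For concreteness, the two phenomena can be written in standard Dirac notation: a single qubit in an equal superposition of its two basis states, and an entangled Bell pair whose qubits cannot be described independently of one another.

```latex
% A single qubit in an equal superposition of the 0 and 1 states:
\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr) \]
% An entangled Bell pair: neither qubit has a definite state of its own,
% and measuring one fixes the other, however far apart they are:
\[ |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr) \]
```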
Regardless of such uncertainty concerning the approaches to quantum computing, a universal quantum computer, with its model of computation based on the production system, is a very likely piece of hardware that could be built specifically for AI [10]. This is especially likely because the production system, on which the universal quantum computer is based, is one of the most successful models of human problem solving in classical AI and cognitive psychology. The production system itself is composed of a set of rules (also called productions) and a working memory, modelled after our own long- and short-term memory as humans. These rules work in tandem, modifying the memory until a goal state defined by the rules is reached, after which the system halts.
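A minimal sketch of the production-system cycle may help here: rules fire against a working memory, modifying it until the goal state appears, at which point the system halts. The facts and rules below are invented for illustration and are far simpler than anything a real OPS5-scale system would use.

```python
# Minimal production system sketch: productions fire against a working
# memory until the goal state is reached, then the system halts.
# The facts and rules are invented purely for illustration.
memory = {("have", "raw_data")}
GOAL = ("have", "report")

# Each production: (condition fact, fact to add when the rule fires)
productions = [
    (("have", "raw_data"), ("have", "clean_data")),
    (("have", "clean_data"), ("have", "analysis")),
    (("have", "analysis"), ("have", "report")),
]

while GOAL not in memory:
    for condition, result in productions:
        if condition in memory and result not in memory:
            memory.add(result)      # rule fires, modifying working memory
            break
    else:
        break                       # no rule applicable: halt without the goal

print(GOAL in memory)               # True: goal state reached, system halts
```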
Two obstacles exist, however. First, there is the cost bound set by Grover’s algorithm, which finds an item in an unsorted list of length n using on the order of √n queries: a quadratic, not exponential, speedup over classical search. Second, there is the problem of not knowing when a computation is finished, since accessing the value of a variable early changes the other variables and spoils the output, owing to quantum entanglement. One possible way of working within the constraint posed by Grover’s algorithm in quantum tree searching would be to combine the tree search that the production system performs with iterative deepening. A quantum production system based on iterative quantum tree search thus requires superposition over all the possible paths to some depth t, followed by the use of Grover’s algorithm to determine whether the goal, marked by the oracle, has been reached. By utilizing Grover’s algorithm together with quantum tree search and iterative deepening, the universal quantum computer would be able to take advantage of classical AI programming languages, such as OPS5, as they are “executed by matching the working memory elements with the productions in the long-term memory” [10]. Concerning the second problem, it is still being researched how to avoid spoiling the output, as it is difficult to bypass the nature of quantum entanglement in a practical way while also retaining the true values of the variables; the aforementioned solution unfortunately does not address this issue. Still, such a solution, one of the few currently suggested given the limited capabilities of today’s quantum computers, would allow superintelligent AI systems to have the processing power necessary to support software such as the GOLEM, with all of the complex processes and elements it entails, as detailed in previous sections.
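In terms of query counts, the constraint and the iterative-deepening scheme just described can be summarized as follows; this is a standard complexity statement, with the tree-search figure assuming a branching factor b and search depth t as above.

```latex
% Unstructured search over n items: classical vs. Grover query counts.
\[ T_{\text{classical}} = O(n), \qquad T_{\text{Grover}} = O(\sqrt{n}) \]
% For a tree with branching factor b searched to depth t, superposing the
% n = b^t candidate paths and applying Grover's search to the oracle-marked
% goal gives, per level of the deepening loop:
\[ T = O\bigl(\sqrt{b^{\,t}}\bigr) = O\bigl(b^{\,t/2}\bigr) \]
```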
Humanity in a Post-Humanistic Future of Superintelligent AI
With the rise of superintelligent AI systems that equal or surpass human beings in all dimensions of cognition, including creativity, power, insight, and wisdom, through such systems as the GOLEM, and thanks to quantum computing, humanity would be moving toward a post-humanistic future, in which the very definition of what it means to be human has changed. One such likely path is the Artificial Replacement Thesis, which D. Shiller has devised and proposed regarding the next step in the evolution of humankind. While this thesis is neither the norm nor widely accepted, I believe it to be a necessary step in humanity’s future, which I argue in this section and discuss further in the next. The Artificial Replacement Thesis suggests that we should replace our species with artificial creatures who are capable of living better lives, such as ones free from pain, sickness, and suffering [8]. At such a point, we would essentially have reached the Technological Singularity, defined as the point at which there are “significant changes to technology, and also society because of its dependence on technology,” such that technology soon outpaces the understanding and capabilities of humans and is thus ceded to the machines of superior intelligence. As a result, “human life will be irreversibly transformed, and humans will transcend the limitations of our biological bodies and brain, [and that] the intelligence that emerge[s] will continue to represent the human civilization.” These “future machines will be human, even if they are not biological” [9]. The Artificial Replacement Thesis therefore fits perfectly in describing this post-humanistic possibility for our world. This route relies on the “Future Beneficence Principle: Where it is possible to greatly improve the well-being of future generations at a comparatively low cost to ourselves, we should do so, even if doing so will affect the identity of those future beings,” in addition to the “Future Nonmaleficence Principle: Where it is possible to improve our well-being, at a comparatively far greater cost to future generations, we should not do so, even if doing so will affect the identity of those future beings” [8]. If we accept these two assumptions, it is arguably better to create artificially intelligent beings rather than human progeny, forgoing natural reproduction on the grounds of beneficence, since those artificial beings would be able to live a better life than biological progeny, and thus the best life. “The fact that such creatures are made of silicon and do not emerge directly from our genitals is morally irrelevant” [8]. Consequently, with such artificial creatures being produced that can live lives much closer to optimal than the quite suboptimal ones of humans (since they are more capable mentally than humans could ever be, and can thus live the optimal life that utilizes all of their strengths and power), we should usher in and engineer the extinction of the human race in order to route available resources toward creating and sustaining such creatures. “Our resources are finite [after all], and the same resources that might allow human beings to live – effort, land, energy, raw materials – could be more effectively spent on creating and sustaining artificial creatures” [8]. Note that the Future Beneficence Principle only instructs us to act in the interests of future generations when it is not comparatively costly to ourselves. So, for the Artificial Replacement Thesis to be supported by the principle, “it would need to be possible for human extinction to be carried out in relative comfort. Though one might imagine that the last generation of humans would feel anguish, despair, and loneliness, there is no reason why this must be the case. The last humans would have the company of not just each other, but also of their artificial progeny” [8]. Thus would end the only known form of the human race, but for the greater benefit of our so-called ‘offspring.’ As the effects of the Technological Singularity are defined, “‘the intelligence that will emerge will continue to represent the human civilization’, and that ‘future machines will be human, even if they are not biological’” [8]. Since we created such artificial creatures to be our progeny, we might as well call them our children, and they may in fact call themselves human, thereby prompting a reevaluation of what it means to be human during those last few moments in which the remaining generations of the human race peter out.
Further Questions & Inquiries of Research
While the ethical implications and possibilities concerning humanity and who or what we are have been discussed throughout this paper, another question to ask is how superintelligent AI would impact our society economically and socially. There are more concerns than the ethical implications for humanity as a species, as our society and very way of life could change with the introduction of superintelligent AI; our identity as humans, in addition to the very definition of what it means to be human, could change. Having briefly addressed this in the fifth section, I would like to openly opine here that artificial replacement is the next logical step for humanity, should we reach a level of AI at which it could truly be considered superintelligent AI. When humans reach what seems to be a limit in terms of physical and biological evolution, and with the advent of technology that would allow for artificial replacement, it makes sense to take advantage of the opportunity to continue our legacy as humans, but in a form that is superior to our own and more likely to live on in a state of happiness. We would be able to give our artificial ‘children’ the best life possible, as any good parent would want for their progeny, and we would be proud to see these artificial creatures, who are modelled on our cognitive and intellectual processes and who share values similar to ours, succeed and live on as improved embodiments of ourselves. Over millions of years, hominins have gone through successive iterations, with some, such as Homo erectus and Homo neanderthalensis, dying out because they could not survive and adapt as Homo sapiens did. But at the point when the Singularity is reached, it will be Homo sapiens who die out, albeit voluntarily, in order to allow superintelligent artificial creatures to become the next iteration of ourselves.
Aside from such questions, further research should be done on other types of hardware that could support superintelligent AI systems, so that quantum computing is not the only option, especially if one of these technologies proves more powerful. Five notable technologies are likely worth researching, and they fall into two categories: traditional computing and biological computing.
The first hardware technology in the traditional computing category is carbon nanotubes, which in theory could be substantially more conductive than copper and are also semiconducting [9]. They thus have the potential to replace silicon at the nanometer scale, allowing us to cram more transistors into a smaller space.
The second hardware technology in the traditional computing category is optical computing, which uses photons for computation, with possibly higher bandwidth than current technology, although there is uncertainty about whether it would be better overall than silicon when the full range of performance criteria is taken into account: especially size, but also speed, power consumption, and cost [9].
The third hardware technology in the traditional computing category is germanium nFETs. A new design for germanium nFETs that significantly improves their performance has been reported by K. Bourzac in MIT Technology Review. CMOS circuits combine transistors that conduct negative charges (called nFETs) with transistors that conduct positive charges (called pFETs) [9].
The first hardware technology in the biological computing category is DNA computing. It may be possible to use DNA as a carrier of information to perform arithmetic and logic operations, thus operating at a molecular scale. E. Shapiro and T. Ran’s findings, published in Nature Nanotechnology, demonstrated that DNA molecules can be programmed to execute any dynamic process of chemical kinetics, and that they can implement an algorithm for achieving consensus between multiple agents. There is also the possibility of using nucleotides, and their pairing properties in DNA double helices, as the alphabet and basic rules of a programming language. Hardware and software can thus be represented by DNA, providing a direct interface for the digital control of nanoscale physical or biological systems. In comparison to quantum computing, DNA computing could allow us to create ‘living’ computers, as opposed to the classical machines of metal and circuits. It can also use many different molecules simultaneously and therefore run computing operations in parallel [9].
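As a toy illustration of nucleotide pairing acting as a computational primitive, the sketch below matches a ‘query’ strand against a pool by Watson-Crick complementarity. The strand values are invented, and a real DNA computer would perform this matching chemically and massively in parallel, not in a sequential loop.

```python
# Toy sketch of nucleotide pairing (A-T, C-G) as a matching operation.
# Real DNA computing relies on massively parallel chemical hybridization;
# this loop only illustrates the encoding, and the strands are invented.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Watson-Crick complement: the strand that would hybridize with this one."""
    return "".join(PAIR[base] for base in strand)

# A "query" strand finds its match among a pool of candidate strands,
# standing in for the parallel search a test tube would perform at once.
pool = ["ATCG", "GGTA", "TTAC", "CGAT"]
query = "TAGC"                          # complement of "ATCG"
matches = [s for s in pool if complement(s) == query]
print(matches)                          # ['ATCG']
```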
The second hardware technology in the biological computing category is neuromorphic computing, which seeks to utilize neural systems to process information. Neuromorphic engineering is a young interdisciplinary subject that takes inspiration from the biological and natural sciences to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose structure and properties are based on those of biological nervous systems [9]. Such hardware would be more organic, similar to the ‘living’ computer possibility mentioned with regard to DNA computing, and it would be quite interesting to see superintelligent AIs with flesh bodies, especially in regard to the artificial replacement of humans. Neuromorphic computing and DNA computing could also be used in tandem for increased computing power, and both should therefore be researched for their possible combined benefit to artificial intelligence in all its varieties, but especially AGI and superintelligent AI.
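To give a flavor of the neural dynamics such hardware emulates, here is a minimal leaky integrate-and-fire neuron simulated in software; the time constant, threshold, and input values are invented, and real neuromorphic chips implement comparable dynamics directly in analog circuitry rather than in code.

```python
# Minimal leaky integrate-and-fire neuron, a common building block in
# neuromorphic designs. Parameters here are illustrative only.
def simulate_lif(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Integrate input current with leak; emit a spike when threshold is hit."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leak toward 0 while integrating input
        if v >= threshold:
            spikes.append(t)             # spike, then reset membrane potential
            v = 0.0
    return spikes

print(simulate_lif([0.3] * 20))          # spike times for a constant input
```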
Conclusion
While AI is an exciting field that has seen slow but meaningful progress, a great many technologies and methods are still needed to reach the grand AI dream of superintelligent AI. It is very likely that humans will be able to achieve that dream, considering the current pace of AI research, development, and theory, in addition to the new data and information discovered every day in related fields. But it all comes down to a matter of when, and it is evident from the multitude of weak AI that we currently create and employ that we are still quite some ways off from an AI that reaches ‘human levels’ of cognitive function and intelligence. Certainly, the prevalence of weak AI research leads to many domain-specific successes, but it does not progressively move the field toward AGI, and ultimately the GOLEM, fully instilled with human values and equipped with ‘human levels’ of cognition. Explicit AGI research sadly continues to fail to produce notable results, despite AI researchers’ efforts to implement the three elements, or any other set of elements that mirrors human complexity. A possible problem in our day and age is that the human mind proves incapable of understanding, replicating, or improving on ‘human level’ intelligence, at least for the time being, as we are unsure ourselves how exactly to define it. There is still much to learn about the human brain, from a biological perspective, and the psyche, from a psychological and psychiatric perspective, in addition to what exactly those cognitive and intellectual processes that we take for granted every day are. Thus, I see our preferred direction for research as lying with the neurologists, psychologists, and psychiatrists, as they continue to release new studies and findings about the human brain and mind, allowing AI researchers to replicate the human mind more genuinely once a more comprehensive definition of ‘human level’ intelligence is available. I can most certainly imagine, though, that by the end of the century, armed with a greater understanding of the human mind and cognition, in addition to the power of quantum computing, or perhaps one or several of the other technologies mentioned in the section on further research, humans will see a fully functioning AGI, to which the VLT and VET could be applied in order to create a superintelligent AI capable of benefiting all of humanity, and that will eventually become our legacy.
Works Cited
[1] T. Zhang and X. Li, "An Exploration on Artificial Intelligence Application: From Security,
Privacy and Ethic Perspective", in 2017 IEEE 2nd International Conference on Cloud
Computing and Big Data Analysis (ICCCBDA), Chengdu, China, 2017, pp. 416-420.
[2] E. Geist, "(Automated) Planning for Tomorrow: Will Artificial Intelligence Get Smarter?",
Bulletin of the Atomic Scientists, vol. 73, no. 2, pp. 80-85, 2017.
[3] S. Mitsuyoshi and F. Ren, "Sentience System Computer: Principles and Practices", in 2002
IEEE International Conference on Systems, Man and Cybernetics Systems, Yasmine Hammamet,
Tunisia, 2002, pp. 1-8.
[4] B. Goertzel, "Human-Level Artificial General Intelligence and the Possibility of a
Technological Singularity: A Reaction to Ray Kurzweil’s The Singularity Is Near, and
McDermott’s Critique of Kurzweil", Pennsylvania State University, 2013.
[5] R. Brooks, "The Seven Deadly Sins of AI Predictions", MIT Technology Review, vol. 120,
no. 6, pp. 79-86, 2017.
[6] B. Goertzel, "Infusing Advanced AGIs with Human-Like Value Systems: Two Theses",
Journal of Evolution and Technology, vol. 26, no. 1, pp. 50-72, 2016.
[7] B. Goertzel, "GOLEM: Towards an AGI Meta-Architecture Enabling Both Goal Preservation
and Radical Self-Improvement", Journal of Experimental & Theoretical Artificial Intelligence,
vol. 26, no. 3, pp. 391-403, 2014.
[8] D. Shiller, "In Defense of Artificial Replacement", Bioethics, vol. 31, no. 5, pp. 393-399,
2017.
[9] P. Excell and R. Earnshaw, "The Future of Computing — The Implications for Society of
Technology Forecasting and the Kurzweil Singularity", in 2015 IEEE International Symposium
on Technology and Society (ISTAS), Dublin, Ireland, 2015, pp. 1-6.
[10] A. Wichert, "Artificial Intelligence and a Universal Quantum Computer", AI
Communications, vol. 29, no. 4, pp. 537-543, 2016.
[11] E. Burton, J. Goldsmith, S. Koenig, B. Kuipers, N. Mattei and T. Walsh, "Ethical
Considerations in Artificial Intelligence Courses", AI Magazine, vol. 38, no. 2, pp. 22-34, 2017.
[12] T. Dietterich and E. Horvitz, "Rise of Concerns About AI: Reflections and Directions",
Communications of the ACM, vol. 58, no. 10, pp. 38-40, 2015.
[13] D. Dubhashi and S. Lappin, "AI Dangers: Imagined and Real", Communications of the
ACM, vol. 60, no. 2, pp. 43-45, 2017.
[14] V. Dunjko and H. Briegel, "Machine Learning & Artificial Intelligence in the Quantum
Domain", Cornell University, 2017.
[15] K. Fujii and K. Nakajima, "Harnessing Disordered Quantum Dynamics for Machine
Learning", Kyoto University, 2016.
[16] C. Maccone, "Kurzweil's Singularity as a Part of Evo-SETI Theory", Acta Astronautica, vol.
132, pp. 312-325, 2016.
This paper will determine the possible technological point at which we could reach levels of creating artificial intelligence (AI), dubbed superintelligent AI, that surpass human beings in all cognitive functions and domains. The first section will discuss the current research and progress regarding advances with AI, as well as what AI researchers seek to implement in the creation of AI. The second section will present the specific types of AI that are defined within the field of computer science, in addition to how they work and the theories behind their functions. The third section will present possible ways of reaching AI, in terms of software, on a more sophisticated level of superintelligence, explaining the types of AI involved, and discussing how such superintelligent AI could be instilled ethically so that it may be beneficial, and not malicious, for humanity. The fourth section will present the kinds of necessary hardware to support such superintelligent AI that were described and defined in the previous section. The fifth section will discuss the meaning of being human in a post-humanistic future that would be run by such advanced AI and superintelligent AI, and the question whether superintelligent AI be brought to fruition. The sixth section will introduce possible lines of research and inquiry that would be likely be beneficial to the furtherment of AI, and particularly superintelligent AI. Finally, the seventh section will summarize and recapture the essential points of the paper, while concluding it as well.
Introduction & Background
Artificial intelligence (AI) has long been a mainstay in fiction literature and media, particularly those of science fiction. From a nationwide administrative AI that measures people’s likelihood to commit crime, to the quirky AI of a helpful and humorous droid companion. Thesekinds of superintelligent AI with their own sort of personalities, but also flawless planning and logic capabilities, have been a goal since the conference held at Dartmouth College in 1956, at which the term of artificial intelligence was first proposed [1]. But with the power that superintelligent AI brings, come many fears about such ‘minds’ supplanting human intelligence as the dominant intelligence on earth, or that such AI will go rogue and turn on their human creators, effectively wiping humanity out of existence in a mass genocide. This paper seeks to inform readers on the possible technological point at which we may be able to create superintelligent AI that are vastly superior to humans in all cognitive functions and domains, in addition to the methods and way to reach it, from both a hardware and software perspective, with considerations for whether superintelligent AI presents a crisis to our existence as a species as well.
Although before such matters can be addressed, what exactly is AI? AI is best understood as a collection of separate fields and approaches whose goal is to create machines and programs which are capable of performing the same tasks that require the cognitive functions of when humans do them [2]. These cognitive functions and intelligent processes include such elements as sensibility, reason, and understanding [3]. In attempting to recreate such elements in AI, and more particularly in an effort to create superintelligent AI, I have chosen S. Mitsuyoshi and F. Ren’s definition of the three elements, which they had in fact proposed as well, for they published a paper through the IEEE that proposes a new machine (the Sentience System Computer) that would replicate sentience (in terms of idea generation and intuition), and thus be capable of having self-intention as humans do, which would be a great benefit in reaching superintelligence with AI. The paper also includes an algorithm for spontaneity, a procedural model of man's way-of-thinking, as well as recent results regarding the CSS, thus Mitsuyoshi and Ren present a comprehensive understanding of the elements needed to reach advanced AI, such as superintelligent AI, and are therefore my choice in terms of elements and their definitions, which are as follows.
Firstly, ‘sensibility’ is the ability to feel intuitively the latent information that an object involves, without following the dictates of reason. Sensibility is also (1) the reaction to activate senses, corresponding to a stimulus or a stimulus change, (2) capability of sense organs that generate sense and perception, sparked off by an object, (3) psychological experience provoked by sense and perception. Sensibility therefore entails such integral qualities as ‘feeling,’ the most common word explaining sensitivity; ‘emotion,’ strong feeling such as anger, love and hatred; ‘sentiment,’ a feeling based on logical thinking; ‘passion,’ strong feeling beyond logical
judgment; ‘fervor,’ a heated or burning feeling; and ‘enthusiasm,’ strong intention for an argument, action or proposal [3].
Secondly, ‘reason’ is the capability of judging based on logic without being moved by feeling or desire, as well as mental action to know truth or false, and good or evil, as universal order. Reason’s synonyms include rational, grounded and logical. Simply put, it means, ‘not influenced by emotions.’ It is also worth noting that reason is not necessarily the better characteristic than sensitivity. “For instance, the sentence, ‘He is moved more by his head than
his heart,’ can be either applause or blame, depending on situation” [3].
Thirdly, ‘understanding’ is the capability of logical perception and theoretical thinking, and is thus positioned between reason and sensitivity. ‘To comprehend’ is the mental process to reach understanding, and ‘appreciate’ is to understand and evaluate the true value of a thing [3].
Yet even with the three elements of sensibility, reason, and understanding identified and defined as a set of cognitive functions and intelligent processes that would be necessary to create an AI that is as advanced as humans, or even far exceeding human intelligence, thereby becoming a superintelligent AI, it is quite apparent that current research has been unable to implement such elements so as to produce an AI of that magnitude. It is no wonder then that enthusiasm for these grand AI dreams – both within the AI profession and in society at large – has risen and fallen repeatedly. Yet despite these fluctuations, research and development have steadily advanced on various fronts within AI and allied disciplines [4]. Although successful implementation of the three elements has not been accomplished yet, current AI research is mostly focused on, and has discovered, ways to make computers perform very modest utilitarian cognitive tasks, with there being other AI research too that is dedicated to theoretical or methodological investigations that might, or perhaps might not, lead to practical applications and usage for the future. Despite the spread of practical and theoretical research though, spectacular progress in one area of AI research often doesn’t do much to advance others [2].
Thus, it is no challenge to see that development in the field of AI is not a straight one, but one that is unpredictable and uneven; AI breakthroughs can occasionally emerge from lines of research that had previously been written off as disappointments. Take, for instance, deep learning (which is modelled after human neural networks, and uses a cascade of multiple layers of nonlinear processing units for feature extraction and transformation, with each successive layer using the output from the previous layer as input) and reinforcement learning (which allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize its performance, with the agent learning by way of simple reward feedback) both of which in fact are widely used in many AI today. As recently as the early 2000s, both technologies were curiosities studied by only a handful of academic researchers that, for all practical intents and purposes, did not really work. Although, as Roy Amara, cofounder of the Institute for Future Learning, has said, “[w]e tend to … underestimate the effect [of a technology] in the long run,” [5]. Such words ring ironically true, for within several years, they had conquered problems many AI researchers expected would remain unsolved for the
foreseeable future, thanks to easier utilization with the advancement of computing hardware, thereby becoming a staple in AI, and moving the field one step closer to reaching superintelligent AI [2].
Types of Artificial Intelligence: Weak AI & Strong AI
In the previous section, it is noted that there are three essential elements necessary for reaching AI akin to ‘human level’ intelligence, and by extension superintelligent AI, but that current research has been unable to successfully implement such elements. Therefore, a distinction in the field of AI has been made, with there being two types of AI, which can be classified into weak AI and strong AI. Of the strong AI type, the AI is considered to have ‘human-like’ high level cognition ability, such as sensibility, reason, and understanding, in addition to other sets of elements like common sense, self-awareness, and creativity [1]. This is the type of AI that comes to mind when people think of AI from the various works of fiction that were mentioned in the first section. On other hand, weak AI simulates human intelligent processes passively without real understanding. That AI is focused on the creation of software solving specific, narrowly constrained problems. Such a weak AI system need not understand itself or what it is doing, and it need not be able to generalize what it has learned beyond its narrowly constrained problem domain [4]. Thus, from a task resolving ability perspective, weak AI is designed to finish a particular task, while strong AI is usually believed a general AI system, also known as an artificial general intelligence (AGI), which has the ability to fulfill multiple kinds of intelligent tasks with ‘human levels’ of cognition and intelligence. It is rather difficult though when it comes to defining what ‘human level’ means, especially when one starts thinking about potential highly-general-intelligence AI systems with fundamentally nonhuman-like architectures. If one has an AGI system with very different strengths and weaknesses than humans, but still possesses the power to solve complex problems across a variety of domains and transfer knowledge flexibly between these domains, it may be hard to meaningfully define whether this system is ‘human level’ or not. Further, humans are not exactly the smartest in their decision making as well, so ‘human level’ puts a constraint and possibly strict limitation on the
future scope of AGI.
Defining, Reaching, & Instilling Ethics into Superintelligent AI
While AI can have such supreme cognitive abilities and intelligent processes, and therefore be classified as an AGI, it still will not be able to reach the desired level of superintelligence yet, as it requires more than just ‘the brain.’ The AGI is required to have ‘a heart,’ in order to reach the true level of superintelligence befitting the title of superintelligent AI. Thus, researchers have developed two theses about the future evolution of the value systems of advanced AGI systems. First, the Value Learning Thesis (VLT), argues a version of the idea that, if an AGI system is taught human values in an interactive and experiential way as its intelligence increases toward ‘human level’ it will likely adopt these human values in a genuine way [6]. Second, the Value Evolution Thesis (VET), and this is a version of the idea that if an
AGI system begins with ‘human-like’ values, and then iteratively modifies itself, it will end up roughly in the same future states as a population of human beings engaged with progressively increasing their own intelligence (e.g. by cyborgification or brain modification) [6]. If an AGI is able to utilize these theses, it may be able to go even further and become an AGI metaarchitecture that is known as the goal-oriented learning meta-architecture (GOLEM), with such a system being both steadfast (which is defined as over a long period of time, the GOLEM either continues to pursue the same goals it had at the start of the time period, or stops acting altogether), and massively and self-improvingly intelligent [7]. At that point the AGI, or the GOLEM in this case, would have reached true superintelligence, and thus deemed worthy of the classification of superintelligent AI, having surpassed both human intelligence and cognitive functions, as well as been instilled with human ethics and feeling.
Future Hardware Systems Capable of Supporting Superintelligent AI
Despite the theories and methods of achieving superintelligent AI through such AGIsystems as the GOLEM described in the previous section, all of the software aspects would be useless without the necessary hardware systems capable of the grand task of supporting suchsuperintelligent AI systems. Especially since our current technology greatly lacks the computational power needed to sustain the powerful functions and vast amounts of data that thesuperintelligent AI systems would be running and have access to. Thus, the most promising, as well as researched hardware, would be quantum computers. Quantum computing itself makes direct use of quantum mechanical phenomena, such as superposition (the special entanglement of qubits wherein the qubits are in an equal superposition of being all 0 and being all 1 at the same moment) and entanglement (phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated), to perform operations on data. “It is expected to improve computational power for particular tasks such as prime factoring, database searching, cryptography, and simulation. Various approaches [are] being developed, but it is not yet clear which will have the best chances of success” [9].
Regardless of such uncertainty concerning the approaches to quantum computing, auniversal quantum computer, with its quantum computational model of computation being based on the production system, would be a very likely piece of hardware that could be created to use specifically for AI [10]. This is especially likely as the production system, which the universal quantum computer is based on, in the context of classical AI and cognitive psychology is one of the most successful models of human problem solving. The production system itself is composed of a set of rules (also called productions), which are modelled after our own short and long-term memory as humans. These rules work in tandem, modifying the memory until a goal state that is defined by the rules is reached, after which the system halts.
Two obstacles exist, however. Firstly, the issue with quantum computing is Grover’s algorithm, which is the cost of finding an item in some unsorted list of length n. Secondly, the problem of not knowing when a computation is finished, as early access to the value of a variable changes the other variables, and spoils the output, as per quantum entanglement. One such possible solution to the constraint posed by Grover’s algorithm in quantum tree searching, would be to combine the tree search that the production system performs, in addition to iterative deepening. A quantum production system is based on the iterative quantum tree search, thus this requires superposition over all the possible paths to some depth t, followed by use of Grover’s algorithm determine if the goal has been reached, which is marked by the oracle. By being able to utilize Grover’s algorithm, in addition to the quantum tree search and iterative deepening, the universal quantum, the computer would be able to be taken advantage of the classical AI programming languages, such as OPS5, as they are, “executed by matching the working memory elements with the productions in the long-term memory” [10]. Concerning the second problem, it is currently being researched as to how to avoid the spoiling of output, as it is difficult to bypass the nature of quantum entanglement successfully in a practical way, while also retaining the true values of the variables, and thus the aforementioned solution does not address this issue unfortunately. Therefore, such a solution, which is one of the few currently suggested with the limited functions that quantum computers possess with today’s technology and research, would allow for the superintelligent AI systems to have the necessary processing power to support the software of such superintelligent AI as the GOLEM, with all of the complex processes and elements that it entails, as detailed in previous sections.
Humanity in a Post-Humanistic Future of Superintelligent AI
With the rise of superintelligent AI systems that equal or surpass human beings in all dimensions of cognition, including creativity, power, insight, and wisdom, through such systems as the GOLEM, and thanks to quantum computing, humanity would be on the move to a posthumanistic future, in which the very definition of what it means to be human has changed. One such likely path is the Artificial Replacement Thesis, which D. Shiller has devised and proposed regarding the next step of evolution for humankind. While this thesis is not necessarily the norm, nor widely accepted, I believe it to be a necessary step in humanity’s future, which I argue in this section, and discuss in the next section. The Artificial Replacement Thesis suggests that we should replace our species with artificial creatures who are capable of living better lives, such as ones that are free from pain, sickness, and suffering [8]. At such a point, we have essentially reached the Technological Singularity, which is defined as when there are “significant changes to technology, and also society because of its dependence on technology,” such that technology soon outpaces the understanding and capabilities of humans, and is thus ceded to the machines of superior intelligence. As a result, “human life will be irreversibly transformed, and humans will transcend the limitations of our biological bodies and brain, [and that] the intelligence that emerge[s] will continue to represent the human civilization.” These, “future machines will be human, even if they are not biological” [9]. Therefore, the Artificial Replacement Thesis fits perfectly in describing the post-humanistic possibility of our world. This route relies on the “Future Beneficence Principle: Where it is possible to greatly improve the well-being of future generations at a comparatively low cost to ourselves, we should do so, even if doing so will affect the identity of those future beings,” in addition to the “Future Nonmaleficence Principle: Where it is possible to improve our well-being, at a comparatively far greater cost to future generations, we should not do so, even if doing so will affect the identity of those future beings”[8]. If we accept these two assumptions, it is arguably better to create artificially intelligent beings, rather than human progeny, therefore forgoing natural reproduction on the grounds of benefice, with those artificial beings being able to live a better life than biological progeny, and thus the best life. “That fact that such creatures are made of silicon and do not emerge directly from our genitals in morally irrelevant,” [8]. Consequently, with such artificial creatures being produced that can live lives that are much closer to being optimal (since they are more capable mentally than humans could ever be, and thus can live that optimal life that utilizes all of their strengths and power) than the quite suboptimal ones of humans, we should usher in and engineer the extinction of the human race in order to route available resources to creating and sustaining such creatures. “Our resources are finite [after all], and the same resources that might allow
human beings to live – effort, land, energy, raw materials – could be more effectively spent on creating and sustaining artificial creatures” [8]. It is by the Future Beneficence Principle, which only instructs us to act in the interests of future generations when it is not comparatively costly to ourselves. So, in order for the Artificial Replacement Thesis to be supported by the principle, “it would need to be possible for human extinction to be carried out in relative comfort. Though one might imagine that the last generation of humans would feel anguish, despair, and loneliness, there is no reason why this must be the case. The last humans would have the company of not just each other, but also of their artificial progeny” [8]. Thus, would end the only known form of the human race, but for the greater benefit of our so-called ‘offspring.’ As the effects of the Technological Singularity are defined, “‘the intelligence that will emerge will continue to represent the human civilization’, and that ‘future machines will be human, even if they are not biological’” [8]. Since, we created such artificial creatures to be our progeny, we might as well call them our children, and they may in fact call themselves human, thereby causing a reevaluation as to what it means to be human, during those last few moments in which the
remaining generations of the human race peter out.
Further Questions & Inquiries of Research
While the ethical implications, and possibilities of humanity and what or who we are discussed throughout this paper, another question to ask is how would superintelligent AI impact our society economically and socially. There are more concerns than such ethical implications of humanity as a species, as our society and very way of life could change with the introduction of superintelligent AI. Our identity as humans, in addition to the very definition as what it means to be human could change. Having briefly addressed this in the fifth section, I’d like to openly opine here that artificial replacement is the next logical step for humanity, should we reach a
level of AI as to where they could be truly considered superintelligent AI. When humans reach what seems to be a limit in terms of physical and biological evolution, and with the advent of such amazing technology that would allow for artificial replacement, it makes sense to take advantage of such an opportunity to continue our legacy as humans, but in a form, that is superior to our own, and more likely to live in on in a state of happiness. We would be able to give our artificial ‘children’ the best life possible, as any good parent would want to do so for their progeny, and thus we would be proud to see these artificial creatures, who are modelled on our cognitive and intellectual processes, and who partake in the similarly shared values that we do, succeed and live on as improved embodiments of ourselves. Throughout history, humans have progressively gone through improved reiterations of themselves over millions of years, with some hominins such as Homo erectus and Homo neanderthalensis dying out, for they were not able to survive and adapt as the superior Homo sapiens had. But at that point when the Singularity is reached, it will be Homo sapiens who die out, although voluntarily, in order to allow superintelligent artificial creatures to become our next iteration of ourselves.
Aside from such questions though, further research should be done in terms of possible types of hardware that can support superintelligent AI systems, that way quantum computing is not the only possible option, especially if it is found that one of these technologies is more powerful. There are five notable technologies that are likely worth researching, and they are broken down into two categories: traditional computing and biological computing.
The first type of hardware technology in the traditional computing category would be carbon nanotubes, which in theory could be substantially more conductive than copper. They are also semiconducting [9]. Thus, it has the capability for replacing silicon on a nanometer scale, therefore allowing us to cram more transistors into a smaller space.
The second hardware technology in the traditional computing category is optical computing, which makes use of photons for computation and offers potentially higher bandwidth than current technology, although there is uncertainty as to whether it would be better overall than silicon once the full range of performance criteria is taken into account, especially size, but also speed, power consumption, and cost [9].
The third hardware technology in the traditional computing category is germanium nFETs. A new design for germanium nFETs that significantly improves their performance has been reported by K. Bourzac in MIT Technology Review. In CMOS circuits, transistors that conduct negative charges (nFETs) are paired with transistors that conduct positive charges (pFETs) [9].
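To make this complementary pairing concrete, the toy logic-level sketch below (in Python, with function names entirely of my own invention; it abstracts away all device physics, germanium or otherwise) shows how a CMOS inverter uses one nFET and one pFET so that exactly one of them conducts for any input:

    def nfet_conducts(gate_high):
        # An nFET conducts (pulling the output toward ground) when its gate is high.
        return gate_high

    def pfet_conducts(gate_high):
        # A pFET conducts (pulling the output toward the supply) when its gate is low.
        return not gate_high

    def cmos_inverter(input_high):
        # For any input, exactly one transistor conducts, so the output is driven
        # high by the pFET or low by the nFET, never both at once.
        assert nfet_conducts(input_high) != pfet_conducts(input_high)
        return pfet_conducts(input_high)

    for level in (False, True):
        print(level, "->", cmos_inverter(level))

Because only one of the pair conducts at a time, CMOS circuits draw significant current mainly while switching, which is why improving one half of the pair (here, the nFET) matters for the performance of the whole circuit.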
The first hardware technology in the biological computing category is DNA computing. It may be possible to use DNA as a carrier of information to perform arithmetic and logic operations, operating at a molecular scale. E. Shapiro and T. Ran’s findings, published in Nature Nanotechnology, demonstrated that DNA molecules can be programmed to execute any dynamic process of chemical kinetics, and that they can implement an algorithm for achieving consensus between multiple agents. There is also the possibility of using nucleotides, and their pairing properties in DNA double helices, as the alphabet and basic rules of a programming language. Thus, hardware and software can both be represented by DNA, providing a direct interface for the digital control of nanoscale physical or biological systems. In comparison to quantum computing, DNA computing could allow us to create ‘living’ computers, as opposed to the classical machine of metal and circuits, and it can use many different molecules simultaneously, thereby running computing operations in parallel [9].
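As an intuition for how nucleotide pairing could serve as such an alphabet, the following toy Python sketch (my own illustration; it does not model real strand-displacement chemistry or Shapiro and Ran’s systems) treats Watson–Crick complementarity as the ‘matching’ rule of a simple logic gate:

    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def complement(strand):
        # Watson-Crick complement: A pairs with T, C pairs with G.
        return "".join(COMPLEMENT[base] for base in strand)

    def hybridizes(strand, probe):
        # In this toy model, a strand binds a probe only if they are exact complements.
        return probe == complement(strand)

    # Arbitrary strands chosen to stand for the two 'true' inputs of an AND gate.
    TRUE_A, TRUE_B = "ACGT", "GGCC"

    def dna_and(strand_a, strand_b):
        # The gate 'fires' only when both input strands hybridize with their probes,
        # mimicking a reaction that releases an output only when both inputs bind.
        return (hybridizes(strand_a, complement(TRUE_A))
                and hybridizes(strand_b, complement(TRUE_B)))

    print(dna_and("ACGT", "GGCC"))  # True: both inputs present
    print(dna_and("ACGT", "ATAT"))  # False: second input does not bind

In a real DNA computer this ‘matching’ happens chemically across trillions of molecules in the same test tube at once, which is the source of the massive parallelism noted above.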
The second hardware technology in the biological computing category is neuromorphic computing, which seeks to use neural systems to process information. Neuromorphic engineering is a young interdisciplinary subject that takes inspiration from the biological and natural sciences to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose structure and properties are based on those of biological nervous systems [9]. Such systems would be more organic, similar to the ‘living’ computer possibility mentioned with regard to DNA computing, and it would be quite interesting to see superintelligent AIs with flesh bodies, especially in regard to the artificial replacement of humans. Neuromorphic computing and DNA computing could be used in tandem for increased computing power, and thus both should be researched for the possibility of a combined benefit for artificial intelligence in all its varieties, but especially AGI and superintelligent AI.
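To give a concrete sense of how such systems compute, the brief Python sketch below simulates a leaky integrate-and-fire neuron, the standard textbook building block behind many neuromorphic designs; the parameter values are arbitrary illustrative choices rather than those of any particular chip:

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_threshold=1.0, v_reset=0.0):
        # Integrate the input over time; information is carried by discrete
        # spike times rather than by continuous values.
        v = v_rest
        spikes = []
        for i_in in input_current:
            # Leaky integration: the membrane potential decays toward rest
            # while being driven upward by the input current.
            v += dt / tau * (-(v - v_rest) + i_in)
            if v >= v_threshold:
                spikes.append(1)
                v = v_reset  # fire and reset, like a biological neuron
            else:
                spikes.append(0)
        return spikes

    # A constant drive yields a regular spike train; a stronger drive fires faster.
    print(simulate_lif([2.0] * 50))

Neuromorphic hardware implements large arrays of such spiking units, connected by weighted ‘synapses,’ directly in circuitry rather than simulating them in software.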
Conclusion
While AI is an exciting field which has seen slow but meaningful progress, a great many technologies and methods are still needed in order to reach the grand AI dream of superintelligent AI. It is very likely humans will be able to achieve that dream, considering the current pace of AI research, development, and theory, in addition to the new data and information discovered every day in related fields. But it all comes down to a matter of when, and it is evident from the multitude of weak AI that we currently create and employ that we are still quite some ways off from reaching an AI that attains ‘human levels’ of cognitive function and intelligence. Certainly, the prevalence of weak AI research leads to many domain-specific successes, but it does not progressively move the field toward AGI, and ultimately the GOLEM, fully instilled with human values and equipped with ‘human levels’ of cognition. Explicit AGI research, sadly, continues to fail to produce notable results, despite AI researchers’ efforts to implement the three elements, or any other set of elements that mirrors human complexity. A possible problem in our day and age is that the human mind proves incapable of understanding, replicating, or improving on ‘human level’ intelligence, at least for the time being, as we are unsure ourselves how exactly to define it. There is still much to learn about the human brain, from a biological perspective, and the psyche, from a psychological and psychiatric perspective, in addition to what exactly those cognitive and intellectual processes are that we take for granted every day. Thus, I see our preferred direction of research as lying with the neurologists, psychologists, and psychiatrists, as they continue to release new studies and findings that deepen our understanding of the human brain and mind, allowing AI researchers to more genuinely replicate the human mind once they have a more comprehensive definition of what ‘human level’ intelligence is. I can most certainly imagine, though, that by the end of the century, armed with a greater understanding of the human mind and cognition, in addition to the power of quantum computing, or perhaps one or several of the other technologies mentioned in the section regarding further research, humans will see a fully functioning AGI, to which the VLT and VET theories could be applied in order to create a superintelligent AI capable of benefiting all of humanity, and that will eventually become our legacy.
Works Cited
[1] T. Zhang and X. Li, "An Exploration on Artificial Intelligence Application: From Security,
Privacy and Ethic Perspective", in 2017 IEEE 2nd International Conference on Cloud
Computing and Big Data Analysis (ICCCBDA), Chengdu, China, 2017, pp. 416-420.
[2] E. Geist, "(Automated) Planning for Tomorrow: Will Artificial Intelligence Get Smarter?",
Bulletin of the Atomic Scientists, vol. 73, no. 2, pp. 80-85, 2017.
[3] S. Mitsuyoshi and F. Ren, "Sentience System Computer: Principles and Practices", in 2002
IEEE International Conference on Systems, Man and Cybernetics Systems, Yasmine Hammamet,
Tunisia, 2002, pp. 1-8.
[4] B. Goertzel, "Human-Level Artificial General Intelligence and the Possibility of a
Technological Singularity: A Reaction to Ray Kurzweil’s The Singularity Is Near, and
McDermott’s Critique of Kurzweil", Pennsylvania State University, 2013.
[5] R. Brooks, "The Seven Deadly Sins of AI Predictions", MIT Technology Review, vol. 120,
no. 6, pp. 79-86, 2017.
[6] B. Goertzel, "Infusing Advanced AGIs with Human-Like Value Systems: Two Theses",
Journal of Evolution and Technology, vol. 26, no. 1, pp. 50-72, 2016.
[7] B. Goertzel, "GOLEM: Towards an AGI Meta-Architecture Enabling Both Goal Preservation
and Radical Self-Improvement", Journal of Experimental & Theoretical Artificial Intelligence,
vol. 26, no. 3, pp. 391-403, 2014.
[8] D. Shiller, "In Defense of Artificial Replacement", Bioethics, vol. 31, no. 5, pp. 393-399,
2017.
[9] P. Excell and R. Earnshaw, "The Future of Computing — The Implications for Society of
Technology Forecasting and the Kurzweil Singularity", in 2015 IEEE International Symposium
on Technology and Society (ISTAS), Dublin, Ireland, 2015, pp. 1-6.
[10] A. Wichert, "Artificial Intelligence and a Universal Quantum Computer", AI
Communications, vol. 29, no. 4, pp. 537-543, 2016.
[11] E. Burton, J. Goldsmith, S. Koenig, B. Kuipers, N. Mattei and T. Walsh, "Ethical
Considerations in Artificial Intelligence Courses", AI Magazine, vol. 38, no. 2, pp. 22-34, 2017.
[12] T. Dietterich and E. Horvitz, "Rise of Concerns About AI: Reflections and Directions",
Communications of the ACM, vol. 58, no. 10, pp. 38-40, 2015.
[13] D. Dubhashi and S. Lappin, "AI Dangers: Imagined and Real", Communications of the
ACM, vol. 60, no. 2, pp. 43-45, 2017.
[14] V. Dunjko and H. Briegel, "Machine Learning & Artificial Intelligence in the Quantum
Domain", Cornell University, 2017.
[15] K. Fujii and K. Nakajima, "Harnessing Disordered Quantum Dynamics for Machine
Learning", Kyoto University, 2016.
[16] C. Maccone, "Kurzweil's Singularity as a Part of Evo-SETI Theory", Acta Astronautica, vol.
132, pp. 312-325, 2016.