A Solution to the Multi-Agent Value Alignment Problem

Abstract

AI Safety researchers attempting to align the values of highly capable intelligent systems with those of humanity face a number of challenges, including personal value extraction, multi-agent value merger, and finally in-silico encoding. State-of-the-art research in value alignment shows difficulties in every stage of this process, but the merger of incompatible preferences is a particularly difficult challenge to overcome. In this paper we assume that the value extraction problem will be solved and propose a possible way to implement an AI solution which optimally aligns with the individual preferences of each user. We conclude by analyzing the benefits and limitations of the proposed approach.

Keywords: AI Safety, Alternate Reality, Simulation, Value Alignment Problem, VR

1. Introduction to the Multi-Agent Value Alignment Problem

Since the birth of the field of Artificial Intelligence (AI), researchers have worked on creating ever more capable machines, but with recent success in multiple subdomains of AI [1–7], the safety and security of such systems and of predicted future superintelligences [8, 9] has become paramount [10, 11]. While many diverse safety mechanisms are being investigated [12, 13], the ultimate goal is an AI aligned with the goals, values and preferences of its users, which is likely to include all of humanity.

The AI value alignment problem [14] can be decomposed into three sub-problems, namely: personal value extraction from individual persons, combination of such personal preferences in a way which is acceptable to all, and finally production of an intelligent system which implements the combined values of humanity. A number of approaches for extracting values [15–17] from people have been investigated, including inverse reinforcement learning [18, 19], brain scanning [20], value learning from literature [21], and understanding of human cognitive limitations [22]. Assessment of the potential for success of particular value extraction techniques is beyond the scope of this paper, and we simply assume that one of the current methods, their combination, or some future approach will allow us to accurately learn the values of given people. Likewise, we will not directly address how, once learned, such values can be represented/encoded in computer systems for storage and processing. These assumptions free us from having to worry about safety problems with misaligned AIs, such as perverse instantiation or wireheading [23], among many others [24].

The second step in the process requires an algorithm for value aggregation from some, and perhaps even all, people to assure that the developed AI is beneficial to humanity as a whole. Some have suggested that the interests of future people [25], potential people [26], and of non-human animals and other sentient beings likewise be included in our "Coherent Extrapolated Volition" (CEV) [27], which we would like a superintelligent AI to eventually implement. However, work done by moral philosophers over hundreds of years indicates that our moral preferences are not only difficult to distill in a coherent manner (the anti-codifiability thesis) [28], they are also likely impossible to merge without sacrificing the interests of some people [29, 30]; we can call this the hard problem of value alignment. Results from research into multivariate optimization and voting-based preference aggregation support similar conclusions [31–33].
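To see the aggregation difficulty concretely, here is a minimal Python sketch of Condorcet's paradox, one of the voting-theoretic results alluded to above [31–33]; the three agents and their preference orderings are hypothetical, chosen only to produce the classic cycle.

```python
# Minimal illustration of Condorcet's paradox: three hypothetical
# agents with rankings (best to worst) over outcomes A, B and C.
profiles = {
    "agent_1": ["A", "B", "C"],
    "agent_2": ["B", "C", "A"],
    "agent_3": ["C", "A", "B"],
}

def prefers(ranking, x, y):
    """True if this ranking places outcome x above outcome y."""
    return ranking.index(x) < ranking.index(y)

def majority_prefers(x, y):
    """True if a strict majority of agents ranks x above y."""
    votes = sum(prefers(r, x, y) for r in profiles.values())
    return votes > len(profiles) / 2

# Pairwise majority vote yields a cycle (A > B, B > C, C > A), so no
# aggregate ranking can satisfy all three majorities simultaneously.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```

Every pairwise contest here is won two-to-one, yet the collective preference is cyclic; scaling the same difficulty to billions of agents over high-dimensional value spaces is what makes the aggregation step so hard.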
Perhaps we should stop trying to make a "one size fits all" approach to the optimization of the universe work and instead look at the potential for delivering an experience customized to individual users. The superintelligent systems we are hoping to one day create, with the goal of improving the lives of all, may work best if instead they strive to optimize their alignment with the individual lives of each and every one of us, while giving us all the freedom to be ourselves without infringing on the preferences of other sentient [34, 35] beings. Such a system, due to its lower overall complexity, should also be easier to design, implement and safeguard.

2. Individual Simulated Universes

It has been suggested that the future will permit the design [36] and instantiation of high-fidelity simulated universes [37–41] for research and entertainment ([42], chapter 5) purposes, as well as for testing advanced AIs [43–46]. Existing work and recent breakthroughs in virtual reality, augmented reality, inter-reality, haptics, and artificial consciousness, combined with the tremendous popularity of multiplayer virtual worlds such as Second Life [47–49] or Ultima Online [50], provide encouraging evidence for the plausibility of realistic simulation technology. We can foresee, in a not so distant future, a point at which the visual and audio fidelity of the simulations, as well as that for all other senses [51], becomes so high that it will not be possible to distinguish whether you are in a base reality or in a simulated world, a state frequently referred to as hyperreality [52, 53]. In principle, it should be possible to improve the local fidelity (measurable by the agent) of the simulated reality to levels beyond base reality, for example to the point of more precise measurements being possible with special instrumentation. This would effectively reverse the resolution relationship between the two realities, making the base reality less believable on a local scale. A variant of the Total Turing Test [54, 55], which we shall call the Universal Turing Test (UTT), could be administered, in which the user tries to determine if the current environment is synthetic or not [56], even if it is complex enough to include the whole universe, all other beings (as philosophical zombies [57]/Non-Player Characters (NPCs)) and AIs. Once the UTT is consistently passed, we will know that hyperreality is upon us.
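One way to make the UTT operational is as a statistical indistinguishability test, in the spirit of Turing-indistinguishability [54]. The following Python sketch is only an illustration under assumed interfaces: the judge function and environment handles are hypothetical, and "passing" is defined here as discrimination accuracy statistically indistinguishable from chance.

```python
import random

def administer_utt(judge, environments, trials=1000):
    """Run repeated trials in which a judge inspects a randomly chosen
    environment and guesses whether it is 'synthetic' or 'base'."""
    correct = 0
    for _ in range(trials):
        truth = random.choice(["synthetic", "base"])
        guess = judge(environments[truth])  # hypothetical interface
        correct += (guess == truth)
    accuracy = correct / trials
    # Normal approximation to the binomial: at chance (p = 0.5) the
    # standard error is 0.5 / sqrt(trials); a judge within ~1.96
    # standard errors of 0.5 is indistinguishable from guessing.
    margin = 1.96 * 0.5 / trials ** 0.5
    return accuracy, abs(accuracy - 0.5) <= margin

# A judge with no discriminating signal performs at chance, so the
# simulated environment passes this version of the UTT.
envs = {"synthetic": object(), "base": object()}
accuracy, passed = administer_utt(
    lambda env: random.choice(["synthetic", "base"]), envs)
print(f"accuracy={accuracy:.3f}, UTT passed: {passed}")
```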
Consequently, we suggest that instead of trying to agree on convergent, universal, diverse, mutually beneficial, equalizing, representative, unbiased, timeless, acceptable-to-all, etc. moral/ethical norms and values, predicated on compromise [58], we look at an obvious alternative. Specifically, we suggest that superintelligent AIs should be implemented to act as personalized simulations, Individual Simulated Universes (ISUs), representing customized, synthetically generated [7, 59] mega-environments in a "universe per person" multiverse framework, which optimally and dynamically adjust to align their values and preferences with the Personal CEV [60] of the sentient agents calling such universes "home". Aaronson describes the general idea as "… an infinite number of sentient beings living in simulated paradises of their own choosing, racking up an infinite amount of utility. If such a being wants challenge and adventure, then challenge and adventure is what it gets; if nonstop sex, then nonstop sex; if a proof of P≠NP, then a proof of P≠NP. (Or the being could choose all three: it's utopia, after all!)" [61].

Bostrom estimates that our galactic supercluster has enough energy to support trillions of such efficiently [62] simulated universes [63]. Features of related phenomena have been described in the literature as [64]: dematerialization [65], ephemeralization [66], time-space compression [67], miniaturization [68], densification [69], virtualization [70], digitization [71], and simulation [72]. Faggella talks about the opportunities presented in the virtual world over what is possible in the present reality [73]: "… 'freedom' could only extend so far in a real world as to border on impinging on the 'freedom' of others. Complete freedom would imply control over one's environment and free choice to do what one would choose with it. It seems easy to understand how this might imply the threatening of the freedom of others in the same physical world. … Not to mention, the physical world has many impinging qualities that would hinder any semblance of complete freedom. Matter has qualities, light has qualities, and physical bodies (no matter how enhanced) will always have limitations. If you'd like to change an aspect of our character or emotional experience, for example, we'd have to potentially tinker with brain chemicals … . In a virtual reality, we are potentially presented not only with the freedom to extend beyond physical limitations (to transport to different times or places, to live within self-created fantasy worlds, to eliminate death and any physical risk), we would also be granted freedom from impinging on or affecting others — and so allow for their full freedom in a separate virtual reality as well. … For this reason, it seems to make sense that … we might encounter a Bostrom-like 'Singleton' to rule the physical world, and a great sea of individual consciousnesses in the virtual world. The 'Singleton' could keep our computational substrates safe from harm and eliminate competition or danger in the physical world, while our virtual 'selves' would be capable of expressing and exploring the epitome of freedom on our own terms in a limitless virtual world of our own creation."

This means that an ISU can be anything a user truly wishes it to be, including dangerous, adversarial, competitive, and challenging at all levels of user competence, like levels in a well-designed video game. It will let a user be anything they want to be, including a malevolent actor [74, 75], a privileged person (like a king) or the exact opposite (a slave), or perhaps just a selfish user in an altruistic universe. A personalized universe doesn't have to be fair, just, or free of perceived suffering and pain [76]. It could be just a sequence of temporary fantasies, and hopefully what happens in your personalized universe stays in your personalized universe. The ISU's goal is to cater to the world's smallest minority and its preferences: you [77, 78]! Moreover, the good news is that we know that we are not going to run out of Fun [79] even if we live much longer lives [80].

If an agent controlling the environment is not well aligned with the particular individual for whom the environment is created (during early stages of development of this technology), it may be necessary to use precise language to express what the user wants. The now defunct Open-Source Wish Project (OSWP) [81] attempted to formulate in a precise and safe form such common wishes as: immortality, happiness, omniscience, being rich, having true love, omnipotence, etc. [23].
For example, the latest version of the properly formed request for immortality was formalized as follows: "I wish to live in the locations of my choice, in a physically healthy, uninjured, and apparently normal version of my current body containing my current mental state, a body which will heal from all injuries at a rate three sigmas faster than the average given the medical technology available to me, and which will be protected from any diseases, injuries or illnesses causing disability, pain, or degraded functionality or any sense, organ, or bodily function for more than ten days consecutively or fifteen days in any year; at any time I may rejuvenate my body to a younger age, by saying a phrase matching this pattern five times without interruption, and with conscious intent: 'I wish to be age,' followed by a number between one and two hundred, followed by 'years old,' at which point the pattern ends — after saying a phrase matching that pattern, my body will revert to an age matching the number of years I started and I will commence to age normally from that stage, with all of my memories intact; at any time I may die, by saying five times without interruption, and with conscious intent, 'I wish to be dead'; the terms 'year' and 'day' in this wish shall be interpreted as the ISO standard definitions of the Earth year and day as of 2006" [81]. Of course, this is still far from foolproof and is likely to lead to some undesirable situations, which could be avoided by the development of a well-aligned system.
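To illustrate what such "precise language" might eventually look like in machine-readable form, here is a hedged sketch that encodes the quoted wish's constraints as a data structure; the class and every field name are our own hypothetical choices, not part of the OSWP formulation.

```python
from dataclasses import dataclass

@dataclass
class ImmortalityWishSpec:
    """Hypothetical structured encoding of OSWP Wish For Immortality 1.1."""
    healing_rate_sigmas: float = 3.0          # healing speed above average
    max_disability_days_consecutive: int = 10
    max_disability_days_per_year: int = 15
    rejuvenation_phrase: str = "I wish to be age {n} years old"
    required_repetitions: int = 5             # spoken without interruption
    min_target_age: int = 1
    max_target_age: int = 200
    death_phrase: str = "I wish to be dead"
    time_standard: str = "ISO Earth year/day definitions as of 2006"

    def rejuvenation_is_valid(self, target_age: int, repetitions: int) -> bool:
        """Check a rejuvenation request against the wish's constraints."""
        return (self.min_target_age <= target_age <= self.max_target_age
                and repetitions >= self.required_repetitions)

spec = ImmortalityWishSpec()
print(spec.rejuvenation_is_valid(target_age=25, repetitions=5))   # True
print(spec.rejuvenation_is_valid(target_age=250, repetitions=5))  # False
```

Even in this toy form, ambiguities surface immediately (what counts as "interruption"? whose average healing rate?), which is exactly the brittleness a well-aligned system would be expected to resolve on the user's behalf.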
3. Benefits and Shortcomings of Personalized Universes

ISUs can be implemented in a number of ways: either by having perfect emulations of agents reside in the simulated universe, or by having current biological agents experience fully realistic simulated environments (while robotic systems take care of their bodies' biological needs); see Faggella's review of possible variants of virtual reality [82]. Both options have certain desirable properties. For example, software versions of users are much easier to modify, reset to earlier memory states [83], upgrade and backup [84, 85], while biological agents are likely to have stronger identity continuity [86]. Emulations can also be taken as snapshots from different points in a person's life and set to exist in their own independent simulations, multiplying possible experiences [34] for the subset of agents derived from that particular individual. In both virtual and uploaded scenarios, it is probably desirable for the user to "forget" that they are not in the base reality via some technological means, with the goal of avoiding solipsism syndrome [1].

Our proposal doesn't just allow us to bypass having to find a difficult-to-compute approximation to a likely impossible-to-solve problem of multi-agent value aggregation; it also provides for a much better "customer experience", free of compromise on even the small details which may be important to a given individual. Additionally, virtual existence makes it possible to have an "undo button" for actions/experiences a user might regret, something not always possible in the world of physical reality. Last, but not least, any existential risks related to a particular AI's failure are limited to the simulated universe and its virtual inhabitants, not to humanity and all life forms.

Of course, like any AI safety mechanism, ours has certain weaknesses, which will have to be explicitly addressed. Those include having to withstand agents with extreme preferences, who may wish to prevent others from exercising their self-determination and may attempt to hack and sabotage ISUs or even base reality (which should be easier to secure, with most agents and their complex preferences out of the way). Another area of concern is the set of problems arising from a superintelligence serving as an "operating system" for the base reality and allocating non-conflicting resources to the ISUs. Finally, we should study how the philosophical question of living in a "fake" world versus a "real" world, even if it is not possible to distinguish between them by any means, impacts human psychology and well-being.

It is also important to figure out a metric for measuring the user-relative quality of the simulation experience, not just from a fidelity point of view but also in terms of the user's overall satisfaction with how their values, goals and preferences are being serviced; such metrics are notoriously hard to design and easy to abuse [87]. Potential ideas may include user feedback both from within the simulation and from outside while observing a recording of themselves in the simulation, feedback after trying other simulations and potentially all other simulations, and peer review from other conscious agents both from outside and from within the same environment.
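As a purely illustrative sketch of how those feedback channels might be combined, assuming hypothetical channel names, weights, and normalized ratings in [0, 1]:

```python
from statistics import mean

# Hypothetical feedback channels for scoring one user's ISU; the
# channel names and weights below are illustrative assumptions only.
WEIGHTS = {
    "in_simulation_feedback": 0.4,       # reported from inside the ISU
    "recording_review": 0.3,             # user reviews a recording from outside
    "cross_simulation_comparison": 0.2,  # after sampling other ISUs
    "peer_review": 0.1,                  # other conscious agents' assessments
}

def satisfaction_score(feedback: dict) -> float:
    """Aggregate normalized (0..1) channel ratings into a single metric.
    Missing channels are skipped and the weights renormalized, so no
    single channel is ever the sole basis of the score."""
    total_weight = 0.0
    weighted = 0.0
    for channel, weight in WEIGHTS.items():
        ratings = feedback.get(channel)
        if ratings:
            weighted += weight * mean(ratings)
            total_weight += weight
    return weighted / total_weight if total_weight else 0.0

print(satisfaction_score({
    "in_simulation_feedback": [0.9, 0.8],
    "recording_review": [0.6],
    "peer_review": [0.7, 0.75],
}))
```

Any such fixed weighting is itself gameable in the Goodhart sense [87], which is why the text above recommends multiple independent channels rather than trusting any single score.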
It is possible to let users "play" in others' universes, perhaps as other characters, and to allow them to discover and integrate new values to which their universe will dynamically adapt. It may also be possible for two or more agents to decide to cohabit the same universe by coming to accept a mutually satisfying set of values, but of course their individual alignment with the environment would be reduced, and so it is important to provide them with a "divorce" option. We are assuming a well-aligned AI, which will not attempt to directly hack the agent to game the feedback score, but out of caution, we do not recommend evolutionary competition [88–90] between ISUs, as that can lead to adversarial behaviors between superintelligent agents which even the base-reality superintelligence would not be able to resolve.

4. Conclusions

In this exploratory paper, we advocated a solution to the hardest of the three subproblems of multi-agent value alignment, specifically value aggregation. Our "in the box" solution suggests replacing the one-size-fits-all model of value satisfaction with a customized and highly optimized approach, which is strictly superior for all possible agents not valuing a decrease in the quality of value alignment for other agents. Some existing evidence from cosmology may be seen as suggesting that perhaps this approach is not so novel, and in fact has already been implemented by earlier civilizations, and that this universe is already a part of a multiverse [91, 92] generated by intelligence [93]. While some significant concerns with the philosophical [94], social [95] and security [96, 97] problems associated with personalized universes remain, particularly with regard to securing base reality, the proposal has a number of previously described advantages. Such advantages are likely to make it attractive to many users, or to at least be integrated as a part of a more complex hybrid solution scheme. The decisions made by users of personal universes are also a goldmine of valuable data, both for the assessment of agents and for improving overall AI alignment [98]. We will leave proposals for assuring the safety and security of the cyberinfrastructure running personalized universes for future work. The main point of this paper is that a personal universe is a place where virtually everyone can be happy.

Acknowledgments

The author is grateful to Elon Musk and the Future of Life Institute and to Jaan Tallinn and Effective Altruism Ventures for partially funding his work on AI Safety. Special thank you goes to all NPCs in this universe.

About the author

Roman V. Yampolskiy, roman.yampolskiy@louisville.edu, @romanyam

References

1. Silver, D., et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 2018. 362(6419): p. 1140–1144.
2. Silver, D., et al. Mastering the game of Go without human knowledge. Nature, 2017. 550(7676): p. 354.
3. Mnih, V., et al. Human-level control through deep reinforcement learning. Nature, 2015. 518(7540): p. 529.
4. High, R. The era of cognitive systems: An inside look at IBM Watson and how it works. IBM Corporation, Redbooks, 2012.
5. Moravčík, M., et al. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 2017. 356(6337): p. 508–513.
6. Krizhevsky, A., I. Sutskever, and G.E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
7. Goodfellow, I., et al. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
8. Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
9. Yampolskiy, R.V. AI-complete CAPTCHAs as zero knowledge proofs of access to an artificially intelligent system. ISRN Artificial Intelligence, 2012.
10. Yampolskiy, R.V. Artificial Superintelligence: A Futuristic Approach. Chapman and Hall/CRC, 2015.
11. Yampolskiy, R.V. Artificial Intelligence Safety and Security. CRC Press, 2018.
12. Sotala, K. and R.V. Yampolskiy. Responses to catastrophic AGI risk: a survey. Physica Scripta, 2014. 90(1): p. 018001.
13. Everitt, T., G. Lea, and M. Hutter. AGI Safety Literature Review. arXiv preprint arXiv:1805.01109, 2018.
14. Soares, N. and B. Fallenstein. Aligning superintelligence with human interests: A technical research agenda. Machine Intelligence Research Institute (MIRI) technical report, 2014.
15. Dignum, V. Responsible Artificial Intelligence: Designing AI for Human Values. ITU Journal: ICT Discoveries, 2017.
16. Evans, O., A. Stuhlmüller, and N.D. Goodman. Learning the Preferences of Ignorant, Inconsistent Agents. In AAAI, 2016.
17. Kim, T.W., T. Donaldson, and J. Hooker. Mimetic vs Anchored Value Alignment in Artificial Intelligence. arXiv preprint arXiv:1810.11116, 2018.
18. Ng, A.Y. and S.J. Russell. Algorithms for inverse reinforcement learning. In ICML, 2000.
19. Abbeel, P. and A.Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, ACM, 2004.
20. Sarma, G.P., N.J. Hay, and A. Safron. AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values. In International Conference on Computer Safety, Reliability, and Security, Springer, 2018.
21. Riedl, M.O. and B. Harrison. Using Stories to Teach Human Values to Artificial Agents. In AAAI Workshop: AI, Ethics, and Society, 2016.
22. Trazzi, M. and R.V. Yampolskiy. Building Safer AGI by introducing Artificial Stupidity. arXiv preprint arXiv:1808.03644, 2018.
23. Yampolskiy, R.V. Utility function security in artificially intelligent agents. Journal of Experimental & Theoretical Artificial Intelligence, 2014. 26(3): p. 373–389.
24. Yampolskiy, R.V. Taxonomy of Pathways to Dangerous Artificial Intelligence. In AAAI Workshop: AI, Ethics, and Society, 2016.
25. Mulgan, T. Future People: A Moderate Consequentialist Account of Our Obligations to Future Generations. OUP Catalogue, 2008.
26. Warren, M.A. Do potential people have moral rights? Canadian Journal of Philosophy, 1977. 7(2): p. 275–289.
27. Yudkowsky, E. Coherent Extrapolated Volition. Singularity Institute for Artificial Intelligence, 2004.
28. Purves, D., R. Jenkins, and B.J. Strawser. Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory and Moral Practice, 2015. 18(4): p. 851–872.
29. Yampolskiy, R.V. Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In Philosophy and Theory of Artificial Intelligence, Springer Berlin Heidelberg, 2013. p. 389–396.
30. Sobel, D. Full information accounts of well-being. Ethics, 1994. 104(4): p. 784–810.
31. Arrow, K.J. A difficulty in the concept of social welfare. Journal of Political Economy, 1950. 58(4): p. 328–346.
32. Arrow, K.J. Social Choice and Individual Values. Vol. 12. Yale University Press, 2012.
33. Gehrlein, W.V. Condorcet's paradox and the likelihood of its occurrence: different perspectives on balanced preferences. Theory and Decision, 2002. 52(2): p. 171–199.
34. Yampolskiy, R.V. Detecting Qualia in Natural and Artificial Agents. arXiv preprint arXiv:1712.04020, 2017.
35. Raoult, A. and R. Yampolskiy. Reviewing Tests for Machine Consciousness. 2015. Available at: https://www.researchgate.net/publication/284859013_DRAFT_Reviewing_Tests_for_Machine_Consciousness
36. Knight, W. AI software can dream up an entire digital world from a simple sketch. December 3, 2018. Available at: https://www.technologyreview.com/s/612503/ai-software-can-dream-up-an-entire-digital-world-from-a-simple-sketch
37. Bostrom, N. Are we living in a computer simulation? The Philosophical Quarterly, 2003. 53(211): p. 243–255.
38. Yampolskiy, R.V. Future Jobs — The Universe Designer. Circus Street, 2017. Available at: https://www.circusstreet.com/blog/future-jobs-the-universe-designer
39. Yampolskiy, R. Job ad: universe designers. In Stories from 2045, C. Chase, Editor. p. 50–53.
40. Chalmers, D.J. The virtual and the real. Disputatio, 2017. 9(46): p. 309–352.
41. Putnam, H. Brain in a Vat. In Reason, Truth and History, 1981. p. 1–21.
42. Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017.
43. Armstrong, S., A. Sandberg, and N. Bostrom. Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 2012. 22(4): p. 299–324.
44. Yampolskiy, R. Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 2012. 19(1–2): p. 1–2.
45. Babcock, J., J. Kramár, and R.V. Yampolskiy. Guidelines for Artificial Intelligence Containment. arXiv preprint arXiv:1707.08476, 2017.
46. Babcock, J., J. Kramár, and R. Yampolskiy. The AGI containment problem. In Artificial General Intelligence, Springer, 2016. p. 53–63.
47. Boulos, M.N.K., L. Hetherington, and S. Wheeler. Second Life: an overview of the potential of 3-D virtual worlds in medical and health education. Health Information & Libraries Journal, 2007. 24(4): p. 233–245.
48. Yampolskiy, R.V. and M.L. Gavrilova. Artimetrics: biometrics for artificial entities. IEEE Robotics & Automation Magazine, 2012. 19(4): p. 48–58.
49. Yampolskiy, R.V., B. Klare, and A.K. Jain. Face recognition in the virtual world: recognizing avatar faces. In 11th International Conference on Machine Learning and Applications (ICMLA), IEEE, 2012.
50. Simpson, Z.B. The in-game economics of Ultima Online. In Computer Game Developer's Conference, San Jose, CA, 2000.
51. Bushell, W.C. and M. Seaberg. Experiments Suggest Humans Can Directly Observe the Quantum. Psychology Today, December 5, 2018. Available at: https://www.psychologytoday.com/us/blog/sensorium/201812/experiments-suggest-humans-can-directly-observe-the-quantum
52. Baudrillard, J. Simulacra and Simulation. University of Michigan Press, 1994.
53. Eco, U. Travels in Hyperreality: Essays. Houghton Mifflin Harcourt, 1990.
54. Harnad, S. The Turing Test is not a trick: Turing indistinguishability is a scientific criterion. ACM SIGART Bulletin, 1992. 3(4): p. 9–10.
55. Schweizer, P. The truly total Turing test. Minds and Machines, 1998. 8(2): p. 263–272.
56. Yampolskiy, R.V. On the origin of synthetic life: attribution of output to a particular algorithm. Physica Scripta, 2016. 92(1): p. 013002.
57. Chalmers, D.J. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996.
58. Bostrom, N. Moral uncertainty — towards a solution? Overcoming Bias, 2009. Available at: http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html
59. Faggella, D. Programmatically Generated Everything (PGE). August 27, 2018. Available at: https://danfaggella.com/programmatically-generated-everything-pge/
60. Muehlhauser, L. and C. Williamson. Ideal Advisor Theories and Personal CEV. Machine Intelligence Research Institute, 2013.
61. Visions of a Better World. Scientific American, December 19, 2018. Available at: https://blogs.scientificamerican.com/cross-check/visions-of-a-better-world
62. Yampolskiy, R.V. Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence. Journal of Discrete Mathematical Sciences & Cryptography, 2013. 16(4–5): p. 259–277.
63. Bostrom, N. Astronomical waste: The opportunity cost of delayed technological development. Utilitas, 2003. 15(3): p. 308–314.
64. Smart, J.M. The transcension hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI. Acta Astronautica, 2012. 78: p. 55–68.
65. Wernick, I.K., et al. Materialization and dematerialization: measures and trends. Daedalus, 1996: p. 171–198.
66. Fuller, R.B. Synergetics: Explorations in the Geometry of Thinking. Estate of R. Buckminster Fuller, 1982.
67. Harvey, D. The Condition of Postmodernity. Vol. 14. Blackwell, Oxford, 1989.
68. Feynman, R. and D. Gilbert. Miniaturization. Reinhold, New York, 1961: p. 282–296.
69. Leskovec, J., J. Kleinberg, and C. Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, ACM, 2005.
70. Lévy, P. and R. Bononno. Becoming Virtual: Reality in the Digital Age. Da Capo Press, 1998.
71. Negroponte, N., et al. Being digital. Computers in Physics, 1997. 11(3): p. 261–262.
72. Chalmers, D. The Matrix as metaphysics. In Science Fiction and Philosophy: From Time Travel to Superintelligence, 2003.
73. Faggella, D. Transhuman Possibilities and the "Epitome of Freedom". May 14, 2013. Available at: https://danfaggella.com/transhuman-possibilities-and-the-epitome-of-freedom/
74. Pistono, F. and R.V. Yampolskiy. Unethical Research: How to Create a Malevolent Artificial Intelligence. arXiv preprint arXiv:1605.02817, 2016.
75. Brundage, M., et al. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228, 2018.
76. Pearce, D. Hedonistic Imperative. David Pearce, 1995.
77. Rand, A. The Ayn Rand Lexicon: Objectivism from A to Z. Vol. 4. Penguin, 1988.
78. Rand, A. The Virtue of Selfishness. Penguin, 1964.
79. Ziesche, S. and R.V. Yampolskiy. Artificial Fun: Mapping Minds to the Space of Fun. arXiv preprint arXiv:1606.07092, 2016.
80. Kurzweil, R. and T. Grossman. Fantastic Voyage: Live Long Enough to Live Forever. Rodale, 2005.
81. Anonymous. Wish For Immortality 1.1. The Open-Source Wish Project, 2006. Available at: http://www.homeonthestrange.com/phpBB2/viewforum.php?f=4
82. Faggella, D. The Transhuman Transition — Lotus Eaters vs World Eaters. May 28, 2018. Available at: https://danfaggella.com/the-transhuman-transition-lotus-eaters-vs-world-eaters/
83. Lebens, S. and T. Goldschmidt. The Promise of a New Past. Michigan Publishing, University of Michigan Library, Ann Arbor, MI, 2017.
84. Hanson, R. The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press, 2016.
85. Feygin, Y.B., K. Morris, and R.V. Yampolskiy. Uploading Brain into Computer: Whom to Upload First? arXiv preprint arXiv:1811.03009, 2018.
86. Parfit, D. Reasons and Persons. OUP Oxford, 1984.
87. Manheim, D. and S. Garrabrant. Categorizing Variants of Goodhart's Law. arXiv preprint arXiv:1803.04585, 2018.
88. Lehman, J., J. Clune, and D. Misevic. The Surprising Creativity of Digital Evolution. In Artificial Life Conference Proceedings, MIT Press, 2018.
89. Lowrance, C.J., O. Abdelwahab, and R.V. Yampolskiy. Evolution of a Metaheuristic for Aggregating Wisdom from Artificial Crowds. In Portuguese Conference on Artificial Intelligence, Springer, 2015.
90. Yampolskiy, R.V., L. Ashby, and L. Hassan. Wisdom of artificial crowds — a metaheuristic algorithm for optimization. Journal of Intelligent Learning Systems and Applications, 2012. 4(02): p. 98.
91. Carr, B. Universe or Multiverse? Cambridge University Press, 2007.
92. Vilenkin, A. and M. Tegmark. The case for parallel universes. Scientific American, 2011. Retrieved from: http://www.scientificamerican.com/article/multiverse-the-case-for-parallel-universe
93. Gardner, J.N. Biocosm: The New Scientific Theory of Evolution: Intelligent Life is the Architect of the Universe. Inner Ocean Publishing, 2003.
94. Vallentyne, P. Robert Nozick: Anarchy, State, and Utopia. In Central Works of Philosophy v5, Routledge, 2014. p. 108–125.
95. Turchin, A. Wireheading as a Possible Contributor to Civilizational Decline. 2018. Available at: https://philpapers.org/rec/TURWAA
96. Faggella, D. Substrate Monopoly — The Future of Power in a Virtual and Intelligent World. August 17, 2018. Available at: https://danfaggella.com/substrate-monopoly/
97. Faggella, D. Digitized and Digested. July 15, 2018. Available at: https://danfaggella.com/digitized-and-digested/
98. Zhavoronkov, A. Is Life A Recursive Video Game? Forbes, December 12, 2018. Available at: https://www.forbes.com/sites/cognitiveworld/2018/12/12/is-life-a-recursive-video-game

Notes

[1] https://en.wikipedia.org/wiki/Solipsism_syndrome

Credits

Image by Javier Alamo from Pixabay.