Too Long; Didn't Read
The field of AI Safety is still in the process of identifying its challenges and limitations. We prove that it is impossible to predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. Unpredictability of AI is one of many impossibility results in AI Safety, also known as Unknowability or Cognitive Uncontainability. The degree of unpredictability can be formally measured via the theory of Bayesian surprise, which quantifies the difference between the posterior and prior beliefs of the predicting agent.
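As a minimal sketch of the measurement idea (not taken from the paper): Bayesian surprise is commonly formalized as the KL divergence from the predictor's prior to its posterior over the system's possible actions. The three-action distributions below are hypothetical, purely for illustration.

```python
import math

def bayesian_surprise(prior, posterior):
    """KL divergence D(posterior || prior) in bits: how much observing
    the system's behavior shifted the predictor's beliefs."""
    return sum(q * math.log2(q / p) for p, q in zip(prior, posterior) if q > 0)

# Hypothetical predictor's prior over three candidate actions of the system.
prior = [0.7, 0.2, 0.1]
# Beliefs after observing the system act: the least expected action occurred.
posterior = [0.05, 0.05, 0.9]

print(round(bayesian_surprise(prior, posterior), 3))  # large surprise
print(bayesian_surprise(prior, prior))                # 0.0: no belief change
```

When the observed behavior matches the prior exactly, the surprise is zero; the more the posterior diverges from the prior, the larger the measured unpredictability.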