Too Long; Didn't Read
“<em>The robots are coming! The robots are coming!</em>” Clichéd sci-fi movie lines aside, the ethics of artificial intelligence is a topic worthy of debate. Whilst I have argued previously that AI will hugely benefit humanity, questions and fears remain amongst many, even those who advocate for greater AI development: Will AI evolve into some <em>Skynet</em>-style entity, launch the world’s nuclear weapons and destroy all of humanity? Where should we draw the line in developing AI and other robotic, computational or machine entities? Will androids like the replicants of the <em>Blade Runner</em> series (how good was that sequel, am I right?) or the synths of <em>Fallout 4</em> become a reality? And if so, what does it mean to be human when, as the (<em>spoilers</em>) rather badass female replicant in <em>Blade Runner 2049</em> tells Ryan Gosling, the machines we create become “<em>more human than human</em>”?