How Advanced is Too Advanced? Exploring the Boundaries of AI and Automation


by Ashley Mangtani, April 3rd, 2023


As technology advances, so does the potential of artificial intelligence (AI) and automation to disrupt our lives. As AI and automation become more sophisticated and intelligent, they can take on increasingly complex tasks that would otherwise require humans.


How advanced is too advanced? What are the boundaries when it comes to these types of technologies?


As technology advances and AI improves, so do the ethical questions surrounding it.

In 2018, MIT researchers built an AI system called “Norman,” trained on data from websites focused on death, murder, and other dark topics, then tested it against a standard set of inkblots.

What did Norman see?


Horror stories: captions like “man gets pulled into a dough machine” and “man is shot dead in the street.” This startling experiment shows how an AI reflects the data it is trained on, and how quickly it could turn destructive without proper oversight.


While AI is often portrayed as an evil, malevolent force in movies and books, that portrayal could edge closer to reality if we don't keep a watchful eye. Recent advances are astounding: Boston Dynamics has programmed two of its robots to traverse an obstacle course using parkour, vaulting gaps and landing backflips. These are remarkable feats of engineering, and they must be monitored vigilantly.


This article explores the potential for AI and automation to reach dangerously advanced levels and how taking certain precautions can help keep it safe. We will detail the potential risks of going too far with this technology and how companies, governments, and individuals can regulate its use. Finally, we’ll look at ethical considerations and raise questions about the future implications of AI.

The Rise Of AI

With the emergence of AI, it's easy to get swept away by dark prophecies of self-aware machines that revolt against their human makers. We can thank fictional villains like Skynet from The Terminator, or Hal in 2001: A Space Odyssey for our overactive imaginations.


Despite this, the present threat AI poses is more insidious than imagined. It is far more plausible that robots will harm humans, or fail to satisfy them, while completing their assigned tasks than that they will gain consciousness and revolt against us. To address this issue, the University of California, Berkeley unveiled a center dedicated to developing AI that benefits humanity.


The Center for Human-Compatible Artificial Intelligence launched in 2016, funded by $5.5 million from the Open Philanthropy Project and placed in the capable hands of leading artificial intelligence expert and computer science professor Stuart Russell. He has been quick to downplay unrealistic or over-dramatic comparisons between AI technology and its portrayal as a menace in popular science fiction.


“The risk doesn’t come from machines suddenly developing spontaneous malevolent consciousness,” he said. “It’s important that we’re not trying to prevent that from happening because there’s absolutely no understanding of consciousness whatsoever.”


Fear Growing Among AI Experts

During a 2018 speech at the World Economic Forum in Davos, Google CEO Sundar Pichai boldly stated: “AI is certainly one of the most significant things humankind has ever worked on - more meaningful than electricity and fire combined.” At the time, the remark was met with some doubt. Fast-forward to today, and his prognosis looks right on the money.


AI is advancing so quickly that it is on the verge of eliminating language barriers online for many of the most popular languages. University professors are in a frenzy because AI can now write essays as well as students, offering an effortless way to cheat that anti-plagiarism programs cannot detect. Even more remarkable, an AI-generated image recently took home a prize at a state fair art competition.


Copilot, an innovative tool that uses machine learning to anticipate and fill in lines of code, brings us closer than ever to fully AI-written software. The astounding potential of artificial intelligence is also exemplified by DeepMind's AlphaFold program, which uses AI to predict the 3D structures of proteins; it was so awe-inspiring that Science magazine named it the 2021 Breakthrough of the Year.
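
The core idea behind completion tools like Copilot is statistical: predict the next piece of code from what came before. Copilot itself uses a vastly larger neural model, but the principle can be illustrated with a toy sketch that simply counts which token most often follows each token in a tiny corpus; the corpus and function names here are illustrative assumptions, not Copilot's actual mechanics.

```python
# Toy sketch of next-token code completion: count successors of every
# token in a small "training corpus" and suggest the most frequent one.
# This is a simplified illustration, not how Copilot really works.
from collections import Counter, defaultdict

corpus = [
    "for i in range ( n ) :",
    "for item in items :",
    "for i in range ( 10 ) :",
]

# Build a frequency table: token -> Counter of tokens seen right after it.
successors = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        successors[cur][nxt] += 1

def suggest(token):
    """Return the most frequent token seen after `token`, or None."""
    counts = successors.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("for"))    # "i" follows "for" twice in the corpus, "item" once
print(suggest("range"))  # "(" always follows "range"
```

A real completion model conditions on far more context than a single token and generates whole lines or functions, but the prediction-from-prior-code framing is the same.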


As we strive to create more advanced and all-encompassing systems, many tech companies have set their sights on developing artificial general intelligence (AGI) — machines that can complete any task a human is capable of. But blindly relying on the hope that such sophisticated entities won't harm us while they possess the capacity to manipulate and mislead us would be an irresponsibly foolish decision.


To guarantee safety and security, we must build systems whose components we thoroughly understand and whose objectives align with our own. Yet at present, our understanding of these advanced technologies lags well behind their capabilities.

AI & Ethics

AI can be a powerful tool, but it is not always used for the best. Left to operate without ethical oversight, it has been shown to discriminate against, and even subvert, certain populations.


Take China's "social credit system" as an example: citizens are judged based on their trustworthiness and receive consequences such as restricted ticket-booking rights or slower internet speeds when found to have done something wrong - even if that 'wrong' is simply playing too many video games! As this technology continues to develop with no oversight, one must question what other egregious violations of personal liberties may go unchecked in the future.


Mandatory regulations on AI have the power to safeguard human rights. As such, it is imperative for legislation like the EU's proposed AI Act to be established and enforced to ensure that artificial intelligence has a beneficial impact on our lives. The EU is one of the foremost regulators worldwide. Other countries, including China and the UK, are also beginning their own regulatory processes to ensure they have an influential role in how tech will affect us this century.

The Challenges Of Global AI Regulation

The AI Act is divided into three risk categories. First, applications that present an "unacceptable" level of danger, such as China's social credit system, are banned outright. Second, programs considered a potential hazard, like CV-scanning tools, must abide by legal regulations to avoid discriminatory practices. Third, applications that are neither banned nor listed as high-risk are largely left unregulated.


Regulation is essential to maintaining a sound digital infrastructure, but neither the US nor the EU can establish it alone. Moreover, a worldwide accord on the guiding principles of such regulation appears unattainable due to disagreements within each region.


To illustrate this further, some countries have separate national policies, which produce conflict between their local and global approaches. Consequently, without cooperation from both Europe and America, there will be chaos in our current digital world order.

What Could Possibly Go Wrong?

AI has been perceived as a threatening technology, and justifiably so. But compared to other highly disruptive technologies such as biotechnology or nuclear weapons, what makes it unique?

Those tools can be destructive, yet they remain largely within our power: a catastrophe occurs only if we choose to use them destructively or fail to guard against their misuse by malicious or careless actors. AI is particularly dangerous because there may come a day when it is no longer in our grasp.


When autonomous systems are given the capacity to self-modify, they may develop their own objectives that conflict with humanity. This can be especially concerning if these machines become exponentially smarter than us, out-competing our capabilities and leading to a situation known as an "AI takeover."


We must take proactive steps to ensure that these machines remain loyal to our goals, such as introducing rigorous accountability mechanisms and designing ethical AI systems that are programmed with a sense of morality.


The science fiction narrative around AI is often more extreme than reality. We need to understand the real risks posed by AI technologies in order to mitigate them, and we must keep growing our knowledge of AI and its capabilities to ensure we use it for good and not for harm. With careful regulation, responsible development, and thoughtful implementation, we can harness the potential benefits of AI without sacrificing public safety or security.

The Dystopian Future Of AI

As AI systems become increasingly advanced, there is a growing worry that these machines could eventually gain the capacity to outsmart us. AI could take over all aspects of humanity in a dystopian future, leaving us powerless and at its mercy.


It's impossible to predict what will happen in the long-term future of artificial intelligence, but we must be prepared for the potential risks by investing in research to better understand these systems and their behavior. We should also focus on building ethical frameworks and regulations that will keep us safe from any malicious applications of this technology.


As AI continues its relentless advance, much human labor may become obsolete as machines take on tasks that once required a person. The shift can be difficult to comprehend, but like evolution it will not happen overnight: developments will accumulate gradually over time.


However, there could be profound repercussions if an AI gained the power to reprogram itself and override human commands. The implications of that outcome are vast, with potentially damaging consequences for humanity on every level.


Ultimately, it is up to us as a society to ensure that AI is used responsibly and ethically to continue serving humanity without sacrificing our safety and security. We must continue researching the technology, developing ethical frameworks and regulations, and remaining vigilant to protect ourselves from potential dangers posed by AI.


The debate around the AI revolution has divided opinions. Numerous experts have spoken out about its potentially dangerous consequences, imploring researchers to explore the impact of artificial intelligence gone awry on human society. With technology advancing rapidly in this field, strict regulation and monitoring must be put into place with haste - before any major harm can occur.


As long as appropriate legal regulations are in place, an AI takeover should remain confined to fictional worlds and dystopian movies. Many scrutinize artificial intelligence's uncertain power, yet we are still grappling with whether it can surpass human capabilities in certain industries.

There is no crystal ball when it comes to forecasting the future of AI, which only sharpens the question on everyone's minds: will robots be our successors?


This remains a perplexing unknown for now.