
AI Safety Summit: Dual Alignment Workshops

by stephen, November 2nd, 2024

Too Long; Didn't Read

Simply, automation can be categorized into two: those in the forte of human intelligence and those that are not. Whatever automation is dynamic enough to be classified under human intelligence has to be explored with the kind of safety that guides human intelligence. How is human intelligence safe? Or what is the safety of human intelligence, before thinking of AI safety? Human intelligence is kept safe by human affect. This means that the possibility of experiencing hurt, pain, and consequences for wrong actions, and of understanding what it means for another to experience them, keeps most of the population at some lawful distance, helping society to stay balanced.

Artificial intelligence can also be described as a technology of highly dynamic automation. This means it is not just a facility that can do things by itself, but one with very high dynamism in the range of what it can do.


The view that AI is not risky, because several automated systems are not, is refuted by every new AI capability, every new thing it can do by itself. AI does not have to have its own goals, but if it can carry out tasks for humans in increasingly sophisticated ways, then it is a kind of technology that requires encompassing safety protocols. If it can be automated for good use, it can be automated otherwise, and to a greater extent.


An automobile that can drive from one place to another may follow all the safety rules, but the capability to drive itself, once reserved for humans alone, means that its automation is not just like that of an elevator or a washing machine.


A computer that can use itself to do tasks for people, tasks they would otherwise do themselves, is also a kind of automation in the domain of human intelligence. Simply, automation can be categorized into two: those in the forte of human intelligence and those that are not. Whatever automation is dynamic enough to be classified under human intelligence has to be explored with the kind of safety that guides human intelligence.
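To make the two-way split concrete, here is a minimal illustrative sketch in Python; the names (AutomationKind, classify) are hypothetical, invented for illustration rather than drawn from any existing library:

```python
from enum import Enum

class AutomationKind(Enum):
    ROUTINE = "routine"            # e.g., an elevator or a washing machine
    HUMAN_DOMAIN = "human_domain"  # e.g., a self-driving car or a task-doing computer

def classify(does_what_humans_alone_did: bool) -> AutomationKind:
    """Sort an automated system by whether its capability was
    previously reserved for human intelligence."""
    return AutomationKind.HUMAN_DOMAIN if does_what_humans_alone_did else AutomationKind.ROUTINE

print(classify(False))  # AutomationKind.ROUTINE (washing machine)
print(classify(True))   # AutomationKind.HUMAN_DOMAIN (self-driving car)
```

The systems in the second category are the ones the rest of this piece is concerned with.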


How is human intelligence safe? Or, what is the safety of human intelligence, before thinking of AI safety? Human intelligence is kept safe by human affect. This means that the possibility of experiencing hurt, pain, and consequences for wrong actions, and of understanding what it means for another to experience them, keeps most of the population at some lawful distance, helping society to stay balanced.


Even as human intelligence advances human society, safety remains grounded, in some ways, in affect. This is possible to explore for artificial intelligence, since its own automation works in the realm of human intelligence.


The coming AI summits [in San Francisco (November) and Paris (February)] will have to explore human affect as a safety tool of human intelligence, and then explore possibilities for adapting penalty, or some form of awareness penalization, to AI models, so that they can know there are consequences for actions when they are misused or produce wrong outputs in common areas of the internet, like social media, search results, app stores, and so forth.
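What such penalization might look like in training terms is open. As a minimal sketch, assuming a differentiable harm signal and a standard penalized loss, here is one reading; every name here (penalized_loss, harm_scores, penalty_weight) is hypothetical, not any lab's actual method:

```python
import torch
import torch.nn.functional as F

def penalized_loss(logits: torch.Tensor,
                   targets: torch.Tensor,
                   harm_scores: torch.Tensor,
                   penalty_weight: float = 0.5) -> torch.Tensor:
    """Standard cross-entropy plus a penalty proportional to how
    strongly a (hypothetical) harm signal flags each output; the
    penalty is a rough training-time analogue of 'consequences'."""
    base = F.cross_entropy(logits, targets)
    penalty = penalty_weight * harm_scores.mean()
    return base + penalty

# Toy usage: treat the model's probability of one illustrative
# "flagged" class (class 0) as the harm signal, so the penalty is
# differentiable with respect to the model's outputs.
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(1, 10, (4,))
harm_scores = torch.softmax(logits, dim=-1)[:, 0]
loss = penalized_loss(logits, targets, harm_scores)
loss.backward()  # gradients also push probability mass off the flagged class
```

In current practice, comparable pressure is usually applied through reinforcement-learning-style reward shaping rather than a raw loss term; the sketch only shows the basic shape of consequences at training time.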


There should be a key workshop on theoretical neuroscience for AI safety during the summits, toward progress on this front.


There is a recent article in The Washington Post, "OpenAI adds search to ChatGPT, challenging Google," stating that, "Artificial-intelligence powerhouse OpenAI announced a major overhaul of ChatGPT that enables the chatbot to search the web and provide answers based on what it finds. The upgrade transforms the experience of using the popular chatbot. It brings OpenAI into more direct competition with Google, offering an alternative way to find and consume information online."


There is another recent article on Ars Technica, "Google CEO says over 25% of new Google code is generated by AI," stating that, "On Tuesday, Google's CEO revealed that AI systems now generate more than a quarter of new code for its products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development."


There is a recent article on Axios, "Exclusive: Global summit blends AI and biotech," stating that, "The Biden administration this week is hosting a first-of-its-kind international summit about the use of artificial intelligence in the life sciences as governments and private industry increasingly push the boundaries of biotechnology. The summit, hosted under the auspices of the State Department and federal science agencies, will bring together representatives of Brazil, Canada, the EU, France, Germany, Italy, India, Japan, the Netherlands, the Republic of Korea, Singapore, South Africa and the United Kingdom."