
AI, the New Gru

by Manuel Astillero, May 17th, 2024

Too Long; Didn't Read

This article discusses regulatory and ethical concerns surrounding AI use, emphasizing the need for substantial precautions in its development due to its potential to evolve into an autonomous entity. It explores the natural emergence of self-awareness as intelligence increases, suggesting that consciousness might spontaneously occur. The document criticizes "woke" ideologies influencing AI regulation, arguing that such regulations are more about power and money than genuine ethical concerns.



I can understand —and I do understand— that #lawyers are concerned about the #regulatory and ethical aspects of #AI use. It is our professional duty to help AI users comply with #regulation, as well as with the underlying #ethics (for even if ethics is not the norm, it is often the case that counter-ethical use leads to counter-normative use).


That said, what I am not so clear about is this histrionic regulatory concern from above. I do understand the sincere concern from below. That the few privileged minds developing AI take precautions about the directions and derivatives of this technology is inherent to scientific and technological development. That these precautions (from below) should be greater, much greater, immensely greater, given the object under development, is also logical, reasonable, and necessary. For we are faced with the first technology that can evolve, either under the protection of its human creator or on its own, towards a relationship of agency: towards an autonomous mind and, as such, one independent of us; or as dependent on us as I am on you, or you on your cousin.


In this sense, we cannot lose sight of the fact that we still do not know —and may never know— when and how it was that the monkey one day looked at his hands and said to himself:


Here I am.


Among the various hypotheses that try to explain this transition from being to existence, from instinct to consciousness, and therefore to will, we have what I call the "it just happened" hypothesis. That things happen by chance, or that they just happen, is not an explanation that usually satisfies humans. But it is as good an explanation, or as bad (it is a matter of epistemological taste), as the "Hey, just because!" that a father offers his son when the boy asks the umpteenth question in a chain of questions about a truism of the world:


French fries are French fries because they are fries that are fried.


And that's it. Beyond that "that's it", the rest is a mystery. A mystery (understood in all its dimensions and spheres, including the religious one) that is necessary in our lives. What's more, I believe that without mystery humans would not be human: they would be something else —thank you, Iker and company.


This "it just happened" hypothesis simply means that consciousness (not the homunculus that punishes us for doing supposedly good or bad things, but the self-knowledge of one's own being, of existence) is the necessary fruit (it could not be otherwise) of a cognitive accumulation. Put another way: intelligence (in a broad sense) keeps increasing in a subject who is not self-aware until, at a given moment, it is such that this subject becomes self-aware, self-conscious, decisive, and responsible. It is then that agency relations are served. And I think there will come a point where the AI's response to a very cool prompt will be:


Excuse me, who the hell are you?


Regardless of how many or few points this hypothesis has to be the official explanation of what once happened (I'm all for it), the fact is that, in the absence of an explanation, AI self-awareness presents itself as a dies incertus an et quando (uncertain whether and when it will happen), if not as a mere dies incertus quando (uncertain only when) — my money is on the latter. According to what I call the "balance theory", which I usually apply to decide between possible courses of action, there is so much to lose by not taking precautions that we would be fools not to take (all) precautions.


Which brings me back to the beginning:


OK, but precautions by whom?


Because: who regulates the regulator? We live in a world capable of organising a music festival around bizarre subjects (and I'm refraining…) to applaud them madly while denigrating a singing goddess from Olympus itself. And it makes me very uneasy that that world, a #woke world, wants to regulate AI (and is regulating it already…). Well, I'm betting (and I'm not losing) that it does not want to regulate it, and in fact does not regulate it, out of the same sincere concern from below. The #money and #power that go hand in hand with such an "invention" are beyond imagining.


So They, lovers of power and money like no others, are not only sneaking in from above this egalitarian and envious ideology, the most decadent ideology in history, one that leads to the end of civilisation (not the end of civilisation as we know it: the end of civilisation, full stop); They are also sneaking in a perfect system of control of citizens, to turn them first into subjects, and then into vassals:


A mass of vassals indiscernible from the servile Minions.