
AI Isn’t the Problem, Big Tech Is

by The Anti-Economist, November 17th, 2023

Too Long; Didn't Read

Examining the history of the big tech giants and their poor data security: what are the actual risks of the proliferation of AI? They seem to have a lot more to do with how large corporations use our data than with the AI itself.



Is artificial intelligence as scary as we have been led to believe? Or is the real worry the big tech giants, with their track record of unethical data use, who are going to use AI the way they have been using the rest of our data: to line their pockets as much as they possibly can?


Even those with the most rudimentary understanding of artificial intelligence know that its strength lies in its ability to make sense of data: if you want a language model or AI system to be smarter, or trained for a specific purpose, data is the key ingredient. This is precisely where AI and Big Tech begin their crossover, as the tech giants unsurprisingly hold the largest reserves of cloud data that can be used to train and develop AI models.


Shortly after the first version of ChatGPT was released in late 2022, Google, Microsoft, and Amazon all invested billions and forged close relationships with the most advanced AI development companies of our time.


Even within such deals, employees often find themselves in ethical dilemmas over the use of AI in big tech. Dario Amodei left OpenAI, seemingly due to safety and ethical concerns over Microsoft's involvement. Shortly after, he founded Anthropic, only to turn to the other evil stepsisters, taking around $1.25 billion in investment from Amazon and $2 billion from Google.


Given the turbulent past (and present) of Big Tech companies when it comes to privacy ethics, and their outpouring of support for artificial intelligence, it's reasonable at this point to worry that the problem lies not in the development of AI, but in the privacy failures we are all too familiar with.


Examining the relationship between tech giants, privacy concerns, the capabilities of AI language models, and government regulation, it's crucial to consider the risks of such a powerful technology when it is wielded by entities with malicious intentions.



The AI Revolution

Large Language Models (LLMs), or artificial intelligence as most people know it, are massive combinations of algorithms that, together, can act autonomously and generate results based on the information they have been trained on.
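

To make that concrete, here is a minimal toy sketch of the underlying idea: a model counts patterns in training text, then "generates" output by recalling those patterns when prompted. Real LLMs are neural networks trained on vastly more data, over tokens rather than whole words; the training sentence and word-level approach below are a simplified illustration of my own, not how any production model works.

```python
import random
from collections import defaultdict

# Toy "training data" (invented for illustration).
training_text = "the model learns the patterns in the data it has seen"
words = training_text.split()

# "Training": record which word tends to follow which.
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Prompting": start from a word and repeatedly emit a plausible next word.
random.seed(1)
word, output = "the", ["the"]
for _ in range(6):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

The output is nothing the model "knows"; it is a recombination of patterns in what it was fed, which is exactly why the contents of the training data matter so much.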


AI is not as new a tool as most people think; AI systems already permeate our everyday lives. From the maps in our cars, to social media ads, to our Netflix recommendations, all of these use AI to learn our routines and habits and to guess what we are likely to engage with next.


Used correctly, artificial intelligence has the power to transform how we interact with technology in our daily lives. Not only could it make our lives more convenient, it could also transform accessibility for those who can't otherwise interact with the world as easily as most. Those who are visually impaired, for example, could use AI to narrate the world around them for better navigation.


AI is already being used to streamline processes in e-commerce, healthcare technology, finance, agriculture, and education, making people's jobs easier. Automating the routine tasks that make up a big part of many people's jobs means we don't have to spend so much time on the mundane and can focus on the areas where human ingenuity is paramount.


For example, AI makes it easier for me to pick the better commute route to work today, given the construction on Broadway and the traffic stop downtown. That buys me an extra ten minutes of sleep and time to make a coffee before I leave, which makes my day much better and lets me be more productive at work.


Something important to remember is that, unlike conventional tools whose behavior is explicitly programmed, AI is trained on large amounts of information, learning patterns it can then recall when we feed it certain prompts.


This is largely where the relationship between AI development and big tech begins: Google, Amazon, and Microsoft hold some of the largest stores of human data (probably including yours), which they can leverage, and are leveraging, to train their AI models.


It's a little alarming that the companies that have proved least trustworthy with our data are the ones leading the charge to develop even smarter AI.


Needless to say, it seems like a recipe for disaster. And we, the consumers, have the most to lose.


The Dark Side of the Artificial Moon

Many of us in the tech world are cautiously optimistic about what generative AI tools hold for the future, and we continue to see inspiring, high-potential innovation built on AI technology.


Don't get me wrong: AI can be a very good thing for our world, and it is already being used to create crucial technology that helps many people in their everyday lives. But the concerns and reservations most people have about AI must be adequately addressed before such powerful technology is woven into our daily lives.


As profit-centered as they are, the Big Tech giants have a responsibility to respect the data privacy of their consumers (a responsibility they have continually disregarded). With AI in particular, which is trained on data, it becomes paramount to be conscious of exactly what kind of data is used to train the language models. Some people don't actually want their Facebook photos, Instagram stories, location history, financial records, and so on used to train an AI model, and even more so when the data is as sensitive as medical, biometric, financial, or location information.


Is AI Hysteria Misplaced?

We have definitely seen an uptick in hysteria about AI's capabilities, with worries about it stealing jobs, gaining sentience, and eventually overtaking the human race. Realistically speaking, though, what we should be scared of is Google using our data against us to maximize profit. This is something they have BEEN doing, but the potential to leverage our data against us becomes far more serious once we are dealing with AI that understands and records every aspect of your life and is accessible to the wrong people.


Risks of AI Models Being Trained on Sensitive/Personal Data

There have already been incidents where data collected by Amazon Alexa was reportedly handed to law enforcement without a warrant and used against people in criminal investigations.


There have also been multiple incidents where personal data was used to further political agendas and potentially spread misinformation, along with a multitude of data breaches in which the data collected by big tech companies fell into the wrong hands, giving criminals access to the personal information of millions of people.


Using personal data to train AI models opens the door to privacy violations through unauthorized access during data input and exchange. Over the lengthy training process, data can be passed back and forth many times, and if it is not handled with scrutiny and care, people end up with unregulated access to personal information. When such complex information is handled in such large volumes, it is not implausible that a lapse in security measures will lead to a breach and the unauthorized release of sensitive information.


Training AI on a diverse dataset matters because of the very real, previously observed tendency of AI models to pick up biases and discriminate based on the training they have received. Take facial recognition, for example: a model might be trained to detect who in a given store is stealing, fed security footage from a three-year period. If the people who appear in that footage are predominantly of a particular race, the model may begin to predict that anyone outside that particular pool of people is more likely to steal. If the training data is not diverse and representative, the AI model will struggle to accurately generalize its learning to a diverse population.
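

A tiny simulation shows how this happens. In the sketch below (all numbers and group labels are invented for illustration), two groups have the exact same true rate of an outcome, but because one group is barely represented in the training data, the model's learned estimate for it is noisy and can swing far from the truth:

```python
import random

random.seed(0)

# Simulated training examples: group "A" is heavily overrepresented.
# Both groups have the SAME true rate of the outcome (5%).
def make_examples(group, n, true_rate):
    return [(group, random.random() < true_rate) for _ in range(n)]

train = make_examples("A", 950, 0.05) + make_examples("B", 50, 0.05)

# "Training": the model memorizes the outcome rate it saw per group.
counts, positives = {}, {}
for group, label in train:
    counts[group] = counts.get(group, 0) + 1
    positives[group] = positives.get(group, 0) + label

for group in sorted(counts):
    rate = positives[group] / counts[group]
    print(f"group {group}: {counts[group]:4d} examples, learned rate {rate:.3f}")
```

With only 50 examples, group B's learned rate is dominated by statistical noise; a system that acts on it will systematically misjudge the underrepresented group even though nothing about the group itself differs.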


If AI models are trained on only a narrow set of demographics, there is a significant risk that the bias in the data becomes bias in the language model itself. The issue is relatively simple: if certain groups have far more personal data available than others, how do we prevent bias within the AI? Left unaddressed, it can produce exclusionary outcomes for communities underrepresented in the dataset.


There is also the question of consent and transparency: is it actually disclosed to users that their data is being collected to train AI models? In an age when we are constantly bombarded with information and often experience choice paralysis, 91% of people do not read through the terms and conditions when signing up for an application, which raises the question of whether consumers really know what they are agreeing to with regard to their data rights. This plays a huge role in eroding user trust, with studies already finding shockingly low confidence: 42% of people report low or no trust in big tech companies.


It is a very real threat that even the big tech giants are susceptible to cyber-attacks leading to massive data breaches, as recent history has thoroughly demonstrated, and such risks are only heightened as ever-larger amounts of data are collected and transmitted to train AI. Tech giants like Facebook have a tense history of cyber attacks and leaks of user data, leaving consumers to wonder whether it is wise to introduce yet another point of vulnerability when the giants can't even handle what is already on their plates.


Another potential problem we could see in the coming years, as AI technology proliferates, is the re-identification of anonymized data, which could once again expose sensitive information and put individuals at risk.
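

The mechanics are simple enough to sketch in a few lines. In this hypothetical example (all names, records, and fields are invented), a dataset released with names stripped out can be joined back to a public record using the quasi-identifiers both datasets share, such as ZIP code, birth date, and sex:

```python
# A released dataset with names removed but quasi-identifiers kept.
anonymized_medical = [
    {"zip": "02138", "birth": "1962-07-31", "sex": "F", "diagnosis": "..."},
    {"zip": "10001", "birth": "1990-01-15", "sex": "M", "diagnosis": "..."},
]

# A public dataset (e.g., a voter roll) sharing the same quasi-identifiers.
public_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth": "1962-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "10001", "birth": "1990-01-15", "sex": "M"},
]

QUASI_IDS = ("zip", "birth", "sex")

# Index the public data by its quasi-identifier tuple, then join.
index = {tuple(p[k] for k in QUASI_IDS): p["name"] for p in public_roll}

for record in anonymized_medical:
    key = tuple(record[k] for k in QUASI_IDS)
    if key in index:
        print(f"{index[key]} re-identified: diagnosis={record['diagnosis']}")
```

This is why stripping obvious identifiers is not, on its own, anonymization; the more of our data that circulates through AI training pipelines, the more side datasets exist to join against.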


Closing Thoughts

It makes total sense that a lot of people have reservations about artificial intelligence and what it could do to the labor force (although resisting it likely wouldn't change anything anyway). But in our natural human instinct to be skeptical of anything new, we forget that the problem might have been here all along.


It's not entirely the fault of the average worker; these multi-billion dollar corporations have probably put a considerable amount of effort into making sure we don't hate them as much as we should. Still, it is quite interesting to see people demonize the tool that strikes and not the hand that wields it. Whether or not you understand AI, you probably know about the data protection failings of these tech giants, so it shouldn't be a surprise when, in the next five years, we see controversy around AI being used unethically to track your every step and make money off of it.



Lead image by Adi Goldstein on Unsplash