
How LLMs Have Transformed Working with Computers

by Thomas Cherickal, November 10th, 2023

Too Long; Didn't Read

How did you perform machine learning in the past? And how do LLMs like GPT-4 do it now? Astounding information, especially for the computer-savvy!



All images used in this article were created by the author with the Bing Image Creator.


The advent of Large Language Models (LLMs) like OpenAI's GPT-3 marks a paradigm shift in the world of computing and artificial intelligence.


These models are revolutionizing the way we approach complex tasks, from machine learning and computer vision to more routine programming tasks like parsing.


The beauty of LLMs lies in their ability to understand and generate human language, enabling us to interact with them using plain English.


This has completely changed how we work with computers. Suddenly, everyone is a creative genius.
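
To make that concrete, here is a minimal sketch of what talking to a computer in plain English looks like in code. It assumes the pre-1.0 `openai` Python package (current as of this writing) and an `OPENAI_API_KEY` environment variable; treat it as an illustration, not the only way in.

# A minimal sketch: ask an LLM a plain-English question
# (assumes the pre-1.0 `openai` package, which reads OPENAI_API_KEY
# from the environment automatically)
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Explain what a Transformer is in two sentences."}],
)
print(response.choices[0].message.content)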


In programming, especially for specific use cases, established libraries like PyTorch, TensorFlow, and JAX will no longer be reached for directly to solve simple tasks; instead, they will serve as tools for building ever more powerful Transformers.


To understand what a Transformer is and the potential of this technology, you can read the article below.

Here’s what I mean


The concept is so simple that a 3-year-old child could do it. And the child would be more creative than most of us.


A kid so smart that he's getting white hair!


Here are ten common Artificial Intelligence/Machine Learning use cases, each contrasting the long, complex, multi-stage programs of three-month projects with the corresponding LLM prompts, written in seconds!



Sentiment Analysis:

# Import necessary libraries
import tweepy
from textblob import TextBlob

# Twitter API credentials
consumer_key = 'YOUR_CONSUMER_KEY'
consumer_secret = 'YOUR_CONSUMER_SECRET'
access_token = 'YOUR_ACCESS_TOKEN'
access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'

# Authenticate with the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Define the search term
# (Tweepy v4 renamed `search` to `search_tweets`; the old `since` date
# operator is no longer supported by the standard search API)
search_words = "#climatechange"

# Collect tweets
tweets = tweepy.Cursor(api.search_tweets,
                       q=search_words,
                       lang="en").items(1000)

# Create a function to get the subjectivity
def get_subjectivity(text):
    return TextBlob(text).sentiment.subjectivity

# Create a function to get the polarity
def get_polarity(text):
    return TextBlob(text).sentiment.polarity

# Create a function to classify a polarity score as negative, neutral, or positive
def get_analysis(score):
    if score < 0:
        return 'Negative'
    elif score == 0:
        return 'Neutral'
    else:
        return 'Positive'

# Collect the tweets with their sentiment and subjectivity
data = [[tweet.text,
         get_analysis(get_polarity(tweet.text)),
         get_subjectivity(tweet.text)] for tweet in tweets]

# Print the tweets, their sentiment, and their subjectivity
for text, sentiment, subjectivity in data:
    print(f'{text}: {sentiment} (subjectivity: {subjectivity:.2f})')


This program first authenticates with the Twitter API using your credentials. It then defines a search term and collects tweets that match it. It defines three functions to get the subjectivity, the polarity, and the sentiment classification of a text, applies them to the collected tweets, and prints each tweet with its sentiment and subjectivity.


LLM Prompt Equivalent:

"What is the sentiment of the text 'OpenAI's GPT-4 is amazing!'?"

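Every "LLM Prompt Equivalent" in this article can be wired into an application the same way, so I'll show the pattern just once, as a hedged sketch using the same assumed pre-1.0 `openai` client as above; the whole multi-library pipeline collapses into a single call.

# The entire sentiment "pipeline", reduced to one prompt
# (same assumed pre-1.0 `openai` client as in the earlier sketch)
import openai

prompt = "What is the sentiment of the text 'OpenAI's GPT-4 is amazing!'?"
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g., "Positive"
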

My sis and I used to play Need For Speed Most Wanted 1 and NFS: MW 2 on a single system. This configuration would have been way better!




Image Recognition:

# Import necessary libraries
import numpy as np
from keras.applications.vgg16 import VGG16
# Note: in newer Keras/TensorFlow versions, load_img and img_to_array live in
# keras.utils; keras.preprocessing works for the versions this was written for
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions

# Load the VGG16 model
model = VGG16(weights='imagenet')

# Define a function to load and preprocess the image
def load_and_process_image(image_path):
    img = image.load_img(image_path, target_size=(224, 224))
    img_array = image.img_to_array(img)
    expanded_img_array = np.expand_dims(img_array, axis=0)
    preprocessed_img = preprocess_input(expanded_img_array)
    return preprocessed_img

# Define a function to make predictions
def predict_image(model, processed_image):
    predictions = model.predict(processed_image)
    label = decode_predictions(predictions)
    return label

# Define a function to print the predictions
def print_predictions(label):
    print('Top predictions of the image are:')
    for prediction_id in range(len(label[0])):
        print(f'{label[0][prediction_id][1]}: {round(100*label[0][prediction_id][2], 2)}%')

# Define a list of image paths for testing
image_paths = ['image1.jpg', 'image2.jpg', 'image3.jpg', 'image4.jpg', 'image5.jpg']

# Loop over all images and make predictions
for image_path in image_paths:
    print(f'Processing image: {image_path}')
    processed_image = load_and_process_image(image_path)
    label = predict_image(model, processed_image)
    print_predictions(label)
    print('---------------------------------------')


This program is designed to classify multiple images. It first loads the VGG16 model, and then defines three functions: one to load and preprocess the image, one to make predictions using the model, and one to print the predictions. It then defines a list of image paths and loops over this list, making predictions for each image and printing the results.


LLM Prompt Equivalent:

"What objects are in the image 'elephant.jpg'?"


A Ghost of the Engine of the Ferrari F50 color-coded by function! Yes, the engine is not alive, but what about a robot? Do we need to debate robot rights now?



Chatbots:


# Import necessary libraries
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
from flask import Flask, render_template, request

# Create a chatbot
chatbot = ChatBot('Example Bot')

# Create a trainer for the chatbot
trainer = ChatterBotCorpusTrainer(chatbot)

# Train the chatbot on the English corpus
trainer.train("chatterbot.corpus.english")

# Define a function to get a response from the chatbot
def get_response(user_input):
    return chatbot.get_response(user_input)

# Create a Flask web application
app = Flask(__name__)

# Define the home page
# (assumes a chat page exists at templates/index.html that calls the /get route)
@app.route("/")
def home():
    return render_template("index.html")

# Define the route to get the chatbot's response
@app.route("/get")
def get_bot_response():
    user_input = request.args.get('msg')
    return str(get_response(user_input))

# Run the web application
if __name__ == "__main__":
    app.run()




LLM Prompt Equivalent:

"Generate a response to the message 'Hello, how are you?'"


I wonder what a regular human being with a face like that would look like. Beauty is, of course, skin deep, but sometimes, that's deep enough!


Text Summarization:

# Import necessary libraries
import requests
from bs4 import BeautifulSoup
# Note: gensim.summarization was removed in Gensim 4.0;
# this import requires gensim<4.0 (e.g., pip install "gensim<4")
from gensim.summarization import summarize

# Fetch the content from a webpage
url = 'https://en.wikipedia.org/wiki/Artificial_intelligence'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Extract the text from the webpage
text = ''
for paragraph in soup.find_all('p'):
    text += paragraph.text

# Preprocess the text
text = text.replace('\n', ' ').replace('\r', '').strip()

# Summarize the text
summary = summarize(text)

# Print the summary
print(summary)


This program first fetches the content from a webpage, in this case, the Wikipedia page for Artificial Intelligence. It then extracts the text from the webpage and preprocesses it by replacing newline and carriage return characters with spaces and removing leading and trailing whitespace. It then summarizes the text using the summarize function from the Gensim library and prints the summary.
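
If the default summary is too long or too short, the same Gensim function accepts length controls; a quick hedged example (valid for gensim<4.0, where summarize lives):

# Keep roughly 10% of the original sentences...
summary_short = summarize(text, ratio=0.1)

# ...or cap the summary at about 100 words instead
summary_capped = summarize(text, word_count=100)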


LLM Prompt Equivalent:

"Can you summarize this text?"


And this, ladies and gentlemen, is the text summarization of the Starfield: 3000 RPG computer game! You see, all the letters are inscribed in stealth mode; you only detect technology, cars, and planes. To learn more, play the game!


Language Translation:

Creating a language translation model from scratch is a complex task that involves deep learning and a large amount of data. It's typically done using sequence-to-sequence (seq2seq) models, a type of model that's ideal for many tasks including language translation.


Here's a simplified example of how you might start to approach this task using Python and the Keras library. It sketches the model definition and the training call, but leaves out data preparation, evaluation, and deployment, as each of these would require a significant amount of code and computational resources.


# Import necessary libraries
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

# Parameters for our model
batch_size = 64
epochs = 100
latent_dim = 256
num_samples = 10000

# Placeholder for input data
input_texts = []
target_texts = []

# Placeholder for character data
input_characters = set()
target_characters = set()

# Load your data here
# This could be a parallel corpus of English and French sentences
# For example, the English-French sentence pairs from http://www.manythings.org/anki/

# Vectorize your data here. Vectorization should produce:
#   num_encoder_tokens / num_decoder_tokens: the input and target vocabulary sizes
#   encoder_input_data / decoder_input_data / decoder_target_data:
#     one-hot encoded 3D arrays of shape (num_samples, max_seq_length, num_tokens)
num_encoder_tokens = len(input_characters)   # populated while loading the data
num_decoder_tokens = len(target_characters)  # populated while loading the data

# Define an input sequence and process it
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)

# We discard `encoder_outputs` and only keep the states
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state
decoder_inputs = Input(shape=(None, num_decoder_tokens))

# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile & run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# Note that `decoder_target_data` needs to be one-hot encoded,
# rather than sequences of integers like `decoder_input_data`!
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)


This is a very simplified example and doesn't include many important steps such as data preprocessing, model evaluation, or the deployment phase. Also, it assumes you have a parallel corpus of English and French sentences.
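
For completeness, here is a hedged sketch of the inference step, i.e., actually translating a sentence once the model above is trained. It follows the standard Keras character-level seq2seq recipe and assumes `target_token_index` and `reverse_target_index` lookups were built during vectorization, with '\t' and '\n' used as the start and end tokens.

# --- Inference sketch (after training) ---
# Build standalone encoder and decoder models that reuse the trained layers
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs_inf, state_h_inf, state_c_inf = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_outputs_inf = decoder_dense(decoder_outputs_inf)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs_inf, state_h_inf, state_c_inf])

# Greedy decoding: start from the start token and feed each predicted
# character back in until the end token (or a length limit) appears
def decode_sequence(input_seq):
    states_value = encoder_model.predict(input_seq)
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    target_seq[0, 0, target_token_index['\t']] = 1.0  # '\t' = start token
    decoded_sentence = ''
    while True:
        output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
        sampled_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_index[sampled_index]
        if sampled_char == '\n' or len(decoded_sentence) > 100:
            break  # '\n' = end token; 100 chars is an arbitrary safety limit
        decoded_sentence += sampled_char
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_index] = 1.0
        states_value = [h, c]
    return decoded_sentence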


LLM Prompt Equivalent:

"Translate 'Hello' to French."


Yes, all earth globes levitate mysteriously with the Apple iPad when bought with euros. Somebody explain this picture to me!



Speech Recognition:

Creating a speech recognition system from scratch is a complex task that involves deep learning and a large amount of data. It's typically done using Recurrent Neural Networks (RNNs), a type of model that's ideal for sequence data like audio.


Here's a simplified example of how you might start to approach this task using Python and the Keras library. It sketches the model definition and the training call, but leaves out data preparation, evaluation, and deployment, as each of these would require a significant amount of code and computational resources.


# Import necessary libraries
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

# Parameters for our model
batch_size = 64
epochs = 100
latent_dim = 256
num_samples = 10000

# Placeholder for input data
input_audios = []
target_texts = []

# Placeholder for character data
input_audios_characters = set()
target_characters = set()

# Load your data here
# This could be a parallel corpus of audio and transcript

# Vectorize your data here. For audio, the encoder input is a sequence of
# acoustic feature frames (e.g., MFCC vectors) rather than one-hot characters.
# Vectorization should produce num_encoder_tokens (the feature vector size),
# num_decoder_tokens (the transcript vocabulary size), and the
# encoder_input_data / decoder_input_data / decoder_target_data arrays.
num_encoder_tokens = 13                      # illustrative: 13 MFCCs per frame
num_decoder_tokens = len(target_characters)  # populated while loading the data

# Define an input sequence and process it
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)

# We discard `encoder_outputs` and only keep the states
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state
decoder_inputs = Input(shape=(None, num_decoder_tokens))

# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile & run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# Note that `decoder_target_data` needs to be one-hot encoded,
# rather than sequences of integers like `decoder_input_data`!
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)


LLM Prompt Equivalent:

"What is being said in this audio?"


There were 4 candidate pics to choose from - I went for tech-flavoured, but one of the images reminded me of Kim Kardashian! But let's not talk about that with kids around.


Anomaly Detection:

Creating an anomaly detection system from scratch is a complex task that involves machine learning and a large amount of data. It's typically done using algorithms like Isolation Forest, One-Class SVM, or Autoencoders.


Here's a simplified example of how you might start to approach this task using Python and the Scikit-learn library. It uses illustrative synthetic data in place of a real dataset, and leaves out evaluation and deployment, as each would require a significant amount of code and computational resources.


# Import necessary libraries
from sklearn.ensemble import IsolationForest
import numpy as np
import pandas as pd

# Parameters for our model
outliers_fraction = 0.01
num_samples = 10000

# Load your data here
# This could be a dataset of network traffic, financial transactions, etc.
# As an illustrative placeholder, we generate mostly-normal random data
# with a small fraction of injected outliers
rng = np.random.RandomState(42)
normal_points = 0.3 * rng.randn(num_samples, 2)
outlier_points = rng.uniform(low=-4, high=4,
                             size=(int(num_samples * outliers_fraction), 2))
input_data = np.concatenate([normal_points, outlier_points])

# Convert your data to a pandas DataFrame
df = pd.DataFrame(input_data)

# Define the model
model = IsolationForest(contamination=outliers_fraction)
model.fit(df)

# Predict the anomalies in the data
df['anomaly'] = model.predict(df)

# Print the anomaly prediction for each data point
for index, row in df.iterrows():
    print(f'Data point {index}: {"Anomaly" if row["anomaly"] == -1 else "Normal"}')


LLM Prompt Equivalent:

"Are there any anomalies in this data?"


Anomaly - no - just a butterfly image! A normal, common, butterfly image. Note: No connections to the Butterfly Effect metaphor in Chaos Theory!


Recommendation Systems:

Creating a recommendation system from scratch is a complex task that involves machine learning and a large amount of data. It typically uses techniques like collaborative filtering and matrix factorization (for example, SVD).


Here's a simplified example of how you might start to approach this task using Python and the Surprise library. It uses the built-in MovieLens 100k benchmark dataset and leaves out data preparation and deployment, as each of these would require a significant amount of code and computational resources.


# Import necessary libraries
from surprise import SVD
from surprise import Dataset
from surprise.model_selection import cross_validate

# Load the movielens-100k dataset
data = Dataset.load_builtin('ml-100k')

# Use the SVD algorithm
algo = SVD()

# Run 5-fold cross-validation and print results
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)

# Train the algorithm on the trainset
trainset = data.build_full_trainset()
algo.fit(trainset)

# Predict the rating that user 196 would give item 302
userid = str(196)
itemid = str(302)
actual_rating = 4
print(algo.predict(userid, itemid, r_ui=actual_rating))

This is a very simplified example and doesn't include many important steps such as hyperparameter tuning or the deployment phase. It also relies on the built-in MovieLens 100k dataset rather than your own ratings data.


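Going one step further, here is a hedged sketch of how you might turn those point predictions into actual recommendations: estimate a rating for every item the user hasn't rated and keep the top N. The trainset lookups below follow Surprise's inner/raw ID conventions; treat this as an illustration rather than the one true recipe.

# Recommend the top-N items a user has not rated yet,
# ranked by the model's estimated rating
def top_n_recommendations(algo, trainset, raw_user_id, n=10):
    inner_uid = trainset.to_inner_uid(raw_user_id)
    already_rated = {inner_iid for (inner_iid, _) in trainset.ur[inner_uid]}
    scored = []
    for inner_iid in trainset.all_items():
        if inner_iid in already_rated:
            continue  # skip items the user has already rated
        raw_item_id = trainset.to_raw_iid(inner_iid)
        scored.append((raw_item_id, algo.predict(raw_user_id, raw_item_id).est))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:n]

# Example: top 10 recommendations for user 196
print(top_n_recommendations(algo, trainset, str(196), n=10))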


LLM Prompt Equivalent:

"What items would this user like?"


I recommend AGI for Ultron's use cases (Avengers 2). Maybe Ultron could become like Vision if he fell in love with Scarlet Witch!



Predictive Analytics:


# Import necessary libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd

# Load your data here
# This could be a dataset of features and labels
data = pd.read_csv('data.csv')

# Split the data into features and labels
X = data.drop('label', axis=1)
y = data['label']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the model
model = RandomForestClassifier(n_estimators=100)

# Train the model
model.fit(X_train, y_train)

# Make predictions on the testing set
y_pred = model.predict(X_test)

# Print the accuracy of the model
print("Accuracy:", accuracy_score(y_test, y_pred))


This program first loads a dataset of features and labels, then splits this data into a training set and a testing set. It defines a RandomForestClassifier model, trains this model on the training set, and makes predictions on the testing set. It then prints the accuracy of the model.
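
Once trained, using the model on fresh data is a one-liner. A small hedged example, where `new_data` is a hypothetical DataFrame with the same feature columns as `X`:

# Predict labels and class probabilities for unseen rows
# (`new_data` is a hypothetical DataFrame with the same columns as X)
new_predictions = model.predict(new_data)
new_probabilities = model.predict_proba(new_data)
print(new_predictions[:5], new_probabilities[:5])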


LLM Prompt Equivalent:

"What will happen next based on this data?"


Now why doesn't my office computer system look like that? Maybe The Matrix Reloaded, Zion Control?


Natural Language Processing:

# Import necessary libraries
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from gensim import corpora, models
import string
import os

# Download the required NLTK resources (a one-time setup step)
nltk.download('punkt')
nltk.download('stopwords')

# Load your data here
# This could be a directory of text documents
directory = 'documents/'

# Placeholder for the documents
documents = []

# Read the documents
for filename in os.listdir(directory):
    if filename.endswith('.txt'):
        with open(os.path.join(directory, filename), 'r') as file:
            documents.append(file.read())

# Tokenize the text
tokenized_text = [word_tokenize(doc) for doc in documents]

# Remove punctuation
table = str.maketrans('', '', string.punctuation)
tokenized_text = [[word.translate(table) for word in doc] for doc in tokenized_text]

# Convert to lower case
tokenized_text = [[word.lower() for word in doc] for doc in tokenized_text]

# Remove non-alphabetic tokens
tokenized_text = [[word for word in doc if word.isalpha()] for doc in tokenized_text]

# Remove stop words
stop_words = set(stopwords.words('english'))
tokenized_text = [[word for word in doc if not word in stop_words] for doc in tokenized_text]

# Perform stemming
porter = PorterStemmer()
tokenized_text = [[porter.stem(word) for word in doc] for doc in tokenized_text]

# Create a dictionary representation of the documents
dictionary = corpora.Dictionary(tokenized_text)

# Convert the list of documents (corpus) into Document-Term Matrix using dictionary prepared above
doc_term_matrix = [dictionary.doc2bow(doc) for doc in tokenized_text]

# Create an object for LDA model
Lda = models.LdaModel

# Train LDA model on the document term matrix
ldamodel = Lda(doc_term_matrix, num_topics=3, id2word = dictionary, passes=50)

# Print the topics
print(ldamodel.print_topics(num_topics=3, num_words=3))


This program first loads a directory of text documents, and then tokenizes each document into individual words. It removes punctuation, converts to lowercase, removes non-alphabetic tokens, removes stop words, and performs stemming.


It then creates a dictionary representation of the documents and converts this into a document-term matrix. It creates an LDA model and trains this model on the document-term matrix. It then prints the topics identified by the model.
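
As a follow-up, here is a hedged sketch of how the trained model could score a new, unseen document, reusing the same preprocessing steps (`new_doc` is an illustrative string, not part of the original corpus):

# Infer the topic distribution of a new document with the trained LDA model
new_doc = "Artificial intelligence is transforming modern software."
new_tokens = [porter.stem(word.lower()) for word in word_tokenize(new_doc)
              if word.isalpha() and word.lower() not in stop_words]
new_bow = dictionary.doc2bow(new_tokens)
print(ldamodel.get_document_topics(new_bow))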


LLM Prompt Equivalent:

"What is the meaning of this text?"


Translate any language to any language? How about C++ to Greek? Of course, for some people, C++ is Greek and Latin!


As you can see, the Python code for each task involves several steps and requires a certain level of technical expertise. On the other hand, the LLM prompts are simple English sentences that anyone can understand and use.


This is the power of LLMs: they democratize access to technology of all kinds, making it usable by everyone, not just those with technical skills.



Conclusion

I said earlier that Llama 2 marked the start of a new era in human existence.


Now I can confidently say,


The World Has Changed.



It will never again be the same.


Engineering and Specialized Work Can Now Be Done Using Plain English!


The technology landscape is evolving faster and faster.


What will be the skills that won’t be replaced by robots or automated?


There is a fundamental misunderstanding here.


We assume we will be replaced by our creations when they become sentient.


That is still an open question, but now I can confidently say:


Artificial intelligence will not replace us but complement us. It will transform the human race forever.



Don’t think of it as a risk to your career.


Think of it as an addition to your skill set and an improvement in your scientific, technological, and artistic capacity.


Adapt, Don’t Give Up!


Learn to use LLMs.


And be happy.


Because it’s so easy that a child can do it!


To hold infinity in the palm of your hand. Eternity - that's a different question!



To learn more about Transformers, you can check out this article:



Beloved readers, if you've reached the end, pat yourself on the back; it's an achievement! The next article will be on Quantum Computing and GPT-4. Stay cool - get into AI right now! (My advice for anyone reading this article.)