How LLMs Have Transformed Working with Computers

Written by thomascherickal | Published 2023/11/10
Tech Story Tags: llms | large-language-models | natural-language-commands | writing-prompts | sentiment-analysis-with-llm | image-recognition-with-llm | speech-recognition-ai | ai-anomaly-detection

TL;DR: How did you perform machine learning in the past? And how do LLMs like GPT-4 do it now? Astounding information, especially for the computer-savvy!

All images used in this article were created by the author with the Bing Image Creator.

The advent of Large Language Models (LLMs) like OpenAI's GPT-3 is a paradigm shift in the world of computing and artificial intelligence.

These models are revolutionizing the way we approach complex tasks, from machine learning and computer vision to more routine programming tasks like parsing.

The beauty of LLMs lies in their ability to understand and generate human language, enabling us to interact with them using plain English.

This has completely changed how we work with computers. Suddenly, everyone can work like a creative genius.

In programming, especially for specific use cases, established libraries like PyTorch, TensorFlow, and JAX will no longer be needed for simple tasks; instead, they will serve as tools for building ever more powerful Transformers.

To understand what a Transformer is and the potential of this technology, you can read the article below.

https://hackernoon.com/the-dawn-of-the-transformer-neural-networks?embedable=true

Here’s what I mean:

The concept is so simple that a 3-year-old child could do it. And the child would be more creative than most of us.

Here are ten common Artificial Intelligence/Machine Learning use cases, contrasting the long, complex, multi-stage programs of three-month projects with the corresponding LLM prompts, written in seconds!


Sentiment Analysis:

# Import necessary libraries
import tweepy
from textblob import TextBlob

# Twitter API credentials
consumer_key = 'YOUR_CONSUMER_KEY'
consumer_secret = 'YOUR_CONSUMER_SECRET'
access_token = 'YOUR_ACCESS_TOKEN'
access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'

# Authenticate with the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Define the search term and the date_since date
search_words = "#climatechange"
date_since = "2021-11-01"

# Collect tweets
# (Tweepy v4+ renamed api.search to api.search_tweets; the date filter is
#  applied with Twitter's "since:" search operator inside the query string)
tweets = tweepy.Cursor(api.search_tweets,
              q=f"{search_words} since:{date_since}",
              lang="en").items(1000)

# Create a function to get the subjectivity
def getSubjectivity(text):
   return TextBlob(text).sentiment.subjectivity

# Create a function to get the polarity
def getPolarity(text):
   return TextBlob(text).sentiment.polarity

# Create a function to compute the negative, neutral and positive analysis
def getAnalysis(score):
  if score < 0:
    return 'Negative'
  elif score == 0:
    return 'Neutral'
  else:
    return 'Positive'

# Collect the tweets with their subjectivity and sentiment
data = [[tweet.text, getSubjectivity(tweet.text), getAnalysis(getPolarity(tweet.text))]
        for tweet in tweets]

# Print the tweets with their subjectivity and sentiment
for text, subjectivity, sentiment in data:
    print(f'{text} (subjectivity: {subjectivity:.2f}): {sentiment}')

This program first authenticates with the Twitter API using your credentials. It then defines a search term and a start date and collects tweets that match these criteria. It defines three functions to get the subjectivity, polarity, and sentiment of a text, applies them to the collected tweets, and prints the results.

LLM Prompt Equivalent:

"What is the sentiment of the text 'OpenAI's GPT-4 is amazing!'?"


Image Recognition:

# Import necessary libraries
import numpy as np
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image  # in newer Keras versions these helpers live in keras.utils
from keras.applications.vgg16 import preprocess_input, decode_predictions

# Load the VGG16 model
model = VGG16(weights='imagenet')

# Define a function to load and preprocess the image
def load_and_process_image(image_path):
    img = image.load_img(image_path, target_size=(224, 224))
    img_array = image.img_to_array(img)
    expanded_img_array = np.expand_dims(img_array, axis=0)
    preprocessed_img = preprocess_input(expanded_img_array)
    return preprocessed_img

# Define a function to make predictions
def predict_image(model, processed_image):
    predictions = model.predict(processed_image)
    label = decode_predictions(predictions)
    return label

# Define a function to print the predictions
def print_predictions(label):
    print('Top predictions of the image are:')
    for prediction_id in range(len(label[0])):
        print(f'{label[0][prediction_id][1]}: {round(100*label[0][prediction_id][2], 2)}%')

# Define a list of image paths for testing
image_paths = ['image1.jpg', 'image2.jpg', 'image3.jpg', 'image4.jpg', 'image5.jpg']

# Loop over all images and make predictions
for image_path in image_paths:
    print(f'Processing image: {image_path}')
    processed_image = load_and_process_image(image_path)
    label = predict_image(model, processed_image)
    print_predictions(label)
    print('---------------------------------------')

This program is designed to classify multiple images. It first loads the VGG16 model, and then defines three functions: one to load and preprocess the image, one to make predictions using the model, and one to print the predictions. It then defines a list of image paths and loops over this list, making predictions for each image and printing the results.

LLM Prompt Equivalent:

"What objects are in the image 'elephant.jpg'?"


Chatbots:


# Import necessary libraries
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
from flask import Flask, render_template, request

# Create a chatbot
chatbot = ChatBot('Example Bot')

# Create a trainer for the chatbot
trainer = ChatterBotCorpusTrainer(chatbot)

# Train the chatbot on the English corpus
trainer.train("chatterbot.corpus.english")

# Define a function to get a response from the chatbot
def get_response(user_input):
    return chatbot.get_response(user_input)

# Create a Flask web application
app = Flask(__name__)

# Define the home page
@app.route("/")
def home():
    return render_template("index.html")

# Define the route to get the chatbot's response
@app.route("/get")
def get_bot_response():
    user_input = request.args.get('msg')
    return str(get_response(user_input))

# Run the web application
if __name__ == "__main__":
    app.run()

LLM Prompt Equivalent:

"Generate a response to the message 'Hello, how are you?'"


Text Summarization:

# Import necessary libraries
import requests
from bs4 import BeautifulSoup
from gensim.summarization import summarize  # requires gensim < 4.0 (this module was removed in 4.x)

# Fetch the content from a webpage
url = 'https://en.wikipedia.org/wiki/Artificial_intelligence'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Extract the text from the webpage
text = ''
for paragraph in soup.find_all('p'):
    text += paragraph.text

# Preprocess the text
text = text.replace('\n', ' ').replace('\r', '').strip()

# Summarize the text
summary = summarize(text)

# Print the summary
print(summary)

This program first fetches the content from a webpage, in this case, the Wikipedia page for Artificial Intelligence. It extracts the text from the page, replaces newline characters with spaces, removes carriage returns, and strips leading and trailing whitespace. It then summarizes the text using the summarize function from the Gensim library and prints the summary.

LLM Prompt Equivalent:

"Can you summarize this text?"


Language Translation:

Creating a language translation model from scratch is a complex task that involves deep learning and a large amount of data. It's typically done using sequence-to-sequence (seq2seq) models, a type of model that's ideal for many tasks including language translation.

Here's a simplified example of how you might start to approach this task using Python and the Keras library. This example doesn't include the training, testing, and deployment phases, as each of these would require a significant amount of code and computational resources.

# Import necessary libraries
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

# Parameters for our model
batch_size = 64
epochs = 100
latent_dim = 256
num_samples = 10000

# Placeholder for input data
input_texts = []
target_texts = []

# Placeholder for character data
input_characters = set()
target_characters = set()

# Load your data here
# This could be a parallel corpus of English and French sentences
# For example, the English-French sentence pairs from http://www.manythings.org/anki/

# Vectorize your data
# (this step would produce num_encoder_tokens, num_decoder_tokens,
#  encoder_input_data, decoder_input_data and decoder_target_data,
#  which are used below but left undefined in this sketch)

# Define an input sequence and process it
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)

# We discard `encoder_outputs` and only keep the states
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state
decoder_inputs = Input(shape=(None, num_decoder_tokens))

# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile & run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# Note that `decoder_target_data` needs to be one-hot encoded,
# rather than sequences of integers like `decoder_input_data`!
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)

This is a very simplified example and doesn't include many important steps such as data preprocessing, model evaluation, or the deployment phase. Also, it assumes you have a parallel corpus of English and French sentences.

LLM Prompt Equivalent:

"Translate 'Hello' to French."


Speech Recognition:

Creating a speech recognition system from scratch is a complex task that involves deep learning and a large amount of data. It's typically done using Recurrent Neural Networks (RNNs), a type of model that's ideal for sequence data like audio.

Here's a simplified example of how you might start to approach this task using Python and the Keras library. This example doesn't include the training, testing, and deployment phases, as each of these would require a significant amount of code and computational resources.

# Import necessary libraries
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

# Parameters for our model
batch_size = 64
epochs = 100
latent_dim = 256
num_samples = 10000

# Placeholder for input data
input_audios = []
target_texts = []

# Placeholder for character data
input_audios_characters = set()
target_characters = set()

# Load your data here
# This could be a parallel corpus of audio and transcript

# Vectorize your data
# (for audio this typically means converting waveforms into feature frames
#  such as spectrograms or MFCCs; the step would produce num_encoder_tokens,
#  num_decoder_tokens, encoder_input_data, decoder_input_data and
#  decoder_target_data, which are used below but left undefined in this sketch)

# Define an input sequence and process it
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)

# We discard `encoder_outputs` and only keep the states
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state
decoder_inputs = Input(shape=(None, num_decoder_tokens))

# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile & run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# Note that `decoder_target_data` needs to be one-hot encoded,
# rather than sequences of integers like `decoder_input_data`!
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)

LLM Prompt Equivalent:

"What is being said in this audio?"


Anomaly Detection:

Creating an anomaly detection system from scratch is a complex task that involves machine learning and a large amount of data. It's typically done using algorithms like Isolation Forest, One-Class SVM, or Autoencoders.

Here's a simplified example of how you might start to approach this task using Python and the Scikit-learn library. This example doesn't include the training, testing, and deployment phases, as each would require a significant amount of code and computational resources.

# Import necessary libraries
from sklearn.ensemble import IsolationForest
import numpy as np
import pandas as pd

# Parameters for our model
outliers_fraction = 0.01
num_samples = 10000

# Placeholder for input data
input_data = []

# Load your data here
# This could be a dataset of network traffic, financial transactions, etc.

# Convert your data to a pandas DataFrame
df = pd.DataFrame(input_data)

# Define the model
model = IsolationForest(contamination=outliers_fraction)
model.fit(df)

# Predict the anomalies in the data
df['anomaly'] = model.predict(df)

# Print the anomaly prediction for each data point
for index, row in df.iterrows():
    print(f'Data point {index}: {"Anomaly" if row["anomaly"] == -1 else "Normal"}')

LLM Prompt Equivalent:

"Are there any anomalies in this data?"


Recommendation Systems:

Creating a recommendation system from scratch is a complex task that involves machine learning and a large amount of data. It's typically done using collaborative filtering techniques such as matrix factorization (for example, SVD).

Here's a simplified example of how you might approach this task using Python and the Surprise library, which cross-validates and trains an SVD model on the built-in MovieLens 100k dataset.

# Import necessary libraries
from surprise import SVD
from surprise import Dataset
from surprise.model_selection import cross_validate

# Load the movielens-100k dataset
data = Dataset.load_builtin('ml-100k')

# Use the SVD algorithm
algo = SVD()

# Run 5-fold cross-validation and print results
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)

# Train the algorithm on the trainset
trainset = data.build_full_trainset()
algo.fit(trainset)

# Predict a certain item
userid = str(196)
itemid = str(302)
actual_rating = 4
print(algo.predict(userid, itemid, actual_rating))

This is a very simplified example and doesn't include many important steps such as data preprocessing, thorough evaluation, or the deployment phase. It also relies on the built-in MovieLens 100k ratings dataset; a real recommendation system would need your own user-item interaction data.

LLM Prompt Equivalent:

"What items would this user like?”


Predictive Analytics:

# Import necessary libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd

# Load your data here
# This could be a dataset of features and labels
data = pd.read_csv('data.csv')

# Split the data into features and labels
X = data.drop('label', axis=1)
y = data['label']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the model
model = RandomForestClassifier(n_estimators=100)

# Train the model
model.fit(X_train, y_train)

# Make predictions on the testing set
y_pred = model.predict(X_test)

# Print the accuracy of the model
print("Accuracy:", accuracy_score(y_test, y_pred))

This program first loads a dataset of features and labels, then splits this data into a training set and a testing set. It defines a RandomForestClassifier model, trains this model on the training set, and makes predictions on the testing set. It then prints the accuracy of the model.

LLM Prompt Equivalent:

"What will happen next based on this data?"
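
You can also flip the relationship and have the LLM write the traditional code for you, which is arguably the bigger shift for day-to-day work. A minimal sketch, assuming the OpenAI Python SDK (v1.x); 'data.csv' and the model name are examples, and the generated script should of course be reviewed before it is run.

# Minimal sketch: ask an LLM to generate the predictive-analytics script itself.
# Assumes the OpenAI Python SDK (v1.x); model name and file name are examples.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Python script that loads 'data.csv' with pandas, trains a "
    "scikit-learn RandomForestClassifier to predict the 'label' column, "
    "and prints the test-set accuracy."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Review the generated code before running it
print(response.choices[0].message.content)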


Natural Language Processing:

# Import necessary libraries
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from gensim import corpora, models
import string
import os

# Load your data here
# This could be a directory of text documents
directory = 'documents/'

# Placeholder for the documents
documents = []

# Read the documents
for filename in os.listdir(directory):
    if filename.endswith('.txt'):
        with open(os.path.join(directory, filename), 'r') as file:
            documents.append(file.read())

# Tokenize the text
tokenized_text = [word_tokenize(doc) for doc in documents]

# Remove punctuation
table = str.maketrans('', '', string.punctuation)
tokenized_text = [[word.translate(table) for word in doc] for doc in tokenized_text]

# Convert to lower case
tokenized_text = [[word.lower() for word in doc] for doc in tokenized_text]

# Remove non-alphabetic tokens
tokenized_text = [[word for word in doc if word.isalpha()] for doc in tokenized_text]

# Remove stop words
stop_words = set(stopwords.words('english'))
tokenized_text = [[word for word in doc if not word in stop_words] for doc in tokenized_text]

# Perform stemming
porter = PorterStemmer()
tokenized_text = [[porter.stem(word) for word in doc] for doc in tokenized_text]

# Create a dictionary representation of the documents
dictionary = corpora.Dictionary(tokenized_text)

# Convert the list of documents (corpus) into Document-Term Matrix using dictionary prepared above
doc_term_matrix = [dictionary.doc2bow(doc) for doc in tokenized_text]

# Create an object for LDA model
Lda = models.LdaModel

# Train LDA model on the document term matrix
ldamodel = Lda(doc_term_matrix, num_topics=3, id2word = dictionary, passes=50)

# Print the topics
print(ldamodel.print_topics(num_topics=3, num_words=3))

This program first loads a directory of text documents, and then tokenizes each document into individual words. It removes punctuation, converts to lowercase, removes non-alphabetic tokens, removes stop words, and performs stemming.

It then creates a dictionary representation of the documents and converts this into a document-term matrix. It creates an LDA model and trains this model on the document-term matrix. It then prints the topics identified by the model.

LLM Prompt Equivalent:

"What is the meaning of this text?"


As you can see, the Python code for each task involves several steps and requires a certain level of technical expertise. On the other hand, the LLM prompts are simple English sentences that anyone can understand and use.

This is the power of LLMs: they democratize access to advanced technology, making it usable by everyone, not just those with technical skills.

Conclusion

I said earlier that Llama 2 marked the start of a new era in human existence.

Now I can confidently say,

The World Has Changed.

It will never again be the same.

Engineering and Specialized Tasks Can Now Be Done in Plain English!


The technology landscape is evolving faster and faster.

Which skills won’t be replaced by robots or automation?

There is a fundamental misunderstanding here.

We assume we will be replaced by our creations when they become sentient.

That is still an open question, but now I can confidently say:

Artificial intelligence will not replace us but complement us. It will transform the human race forever.

Don’t think of it as a risk to your career.

Think of it as an addition to your skill set and an improvement in your scientific, technological, and artistic capacity.

Adapt, Don’t Give Up!

Learn to use LLMs.

And be happy.

Because it’s so easy that a child can do it!


To Learn Some More About Transformers, you can check out this article:

https://hackernoon.com/the-transformer-neural-network-tnn-is-much-much-bigger-than-even-agi?embedable=true


Beloved readers, if you’ve reached the end, pat yourself on the back - it’s an achievement! The next article will be on Quantum Computing and GPT-4! Stay cool, and get into AI right now! (That’s my advice for anyone reading this article.)


Written by thomascherickal | #1 Top Writer in ML/AI on HackerNoon. Full Stack MLOps TDD Python Dev.
Published by HackerNoon on 2023/11/10