first subscribe to my twitter, i tweet tech stuff
One more speedrun in the series, let's goooo
I just used a reinforcement learning (RL) meme on you:
What a bait it was.. omg..
So, why is it hard for meme-Kelvin to learn what RL is? Because he wants to learn the implementation of RL instead of understanding the Concept, THAT’S WHY meme-KELVIN!
When you learn a new thing, tool, or tech, you do NOT start with its implementations; you start with the ideas, concepts, and problems the tool solves!
Yo Kelvin, here you go: to understand Reinforcement Learning (RL), think about playing a video game where you earn points for making the right moves.
Reinforcement learning is like that: a program learns by making decisions and getting rewards or penalties based on its actions.
That was just an example of applying the START-WITH-IDEAS principle to Reinforcement Learning. But I promised to tell you how to learn the whole of ML in 52 seconds...
To learn ML in 52 seconds, you learn the ML Concepts, not the implementations. Then you google (or GPT) the implementations in PyTorch or whatever library you like; it's going to change next year anyway and that's fine. Ideas will stay the same for much longer, so go for ideas!
In supervised learning, a program is taught using examples with answers (called labeled data). This helps the program learn the connection between the examples and answers, so it can guess the answers for new examples it hasn't seen before.
Algorithms and problems to solve: predicting house prices (linear regression), deciding if a customer will buy a product (decision trees)
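When you do go googling, the implementation roughly looks like this. A minimal supervised learning sketch with scikit-learn; the house sizes and prices are made up just to show the idea of labeled data:

```python
# Toy supervised learning: predict house price from square meters (made-up numbers).
from sklearn.linear_model import LinearRegression

# Labeled data: examples (square meters) with answers (prices).
X = [[50], [80], [120], [200]]
y = [150_000, 240_000, 360_000, 600_000]

model = LinearRegression()
model.fit(X, y)                # learn the connection between examples and answers

print(model.predict([[100]]))  # guess the price of a house it hasn't seen
```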
Here, the program looks at examples without answers (unlabeled data) and tries to find patterns or groups within them. This can help with tasks like grouping similar items together or reducing the amount of information needed to describe the data.
Algorithms and problems to solve: grouping people with similar music tastes (k-means clustering), compressing images without losing too much information (principal component analysis)
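A tiny k-means sketch with scikit-learn; the listening hours are made up for illustration:

```python
# Toy unsupervised learning: group listeners by (hours of rock, hours of jazz) per week.
from sklearn.cluster import KMeans

# Unlabeled data: no answers, just examples (made-up numbers).
X = [[10, 0], [9, 1], [0, 8], [1, 9], [5, 5]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)

print(kmeans.labels_)           # which group each listener ended up in
print(kmeans.cluster_centers_)  # the "average taste" of each group
```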
In reinforcement learning, the program learns to make decisions by trying things out and getting feedback in the form of rewards or penalties. The goal is to make better decisions over time and solve problems more effectively.
Algorithms and problems to solve: teaching a robot to walk (Q-learning), training a program to play chess (policy gradient methods)
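A toy Q-learning sketch; the 5-cell corridor environment is made up for illustration, but the update rule is the real idea:

```python
# Tabular Q-learning on a made-up corridor: the agent starts in cell 0
# and earns a reward of +1 for reaching cell 4.
import random

n_states, n_actions = 5, 2             # actions: 0 = step left, 1 = step right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def pick_action(s):
    if random.random() < epsilon:      # sometimes try something random (explore)
        return random.randrange(n_actions)
    best = max(Q[s])                   # otherwise act greedily, breaking ties randomly
    return random.choice([a for a, q in enumerate(Q[s]) if q == best])

for episode in range(300):
    s = 0
    while s != 4:
        a = pick_action(s)
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        reward = 1.0 if s_next == 4 else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Learned policy: should print all 1s, i.e. "always go right".
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(4)])
```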
This is the process of choosing important information (features) from raw data to help the program learn better. Sometimes, this includes creating new features using expert knowledge and creativity.
Example: using the length and width of a leaf to help identify a plant species
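A made-up example of crafting a new feature from raw measurements:

```python
# Feature engineering sketch (made-up leaf measurements): turn raw length/width
# into a new feature (the aspect ratio) that makes species easier to tell apart.
leaves = [
    {"length_cm": 8.0, "width_cm": 2.0},   # long and narrow
    {"length_cm": 5.0, "width_cm": 4.5},   # short and round
]

for leaf in leaves:
    leaf["aspect_ratio"] = leaf["length_cm"] / leaf["width_cm"]  # new, hand-crafted feature

print(leaves)
```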
Checking how well a machine learning model is doing matters. Metrics like accuracy, precision, recall, F1-score, and mean squared error are often used to measure a model's performance.
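A quick sketch using scikit-learn's metrics on made-up predictions:

```python
# Evaluation sketch: compare a model's guesses to the correct answers.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_squared_error)

y_true = [1, 0, 1, 1, 0, 1]   # correct answers
y_pred = [1, 0, 0, 1, 0, 1]   # what the model guessed

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))

# Mean squared error is for regression (predicting numbers, not classes).
print("mse:", mean_squared_error([3.0, 2.5, 4.0], [2.8, 2.9, 4.2]))
```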
⏲️ 24 seconds left, you're doing good!
Let’s also cover Deep Learning!
This is really abstract. You may imagine an NN as brain-like neurons ordered in columns, pinging each other from left to right with different force (via connections): the force with which a neuron in each layer is pinged determines the next ping, and so the end result itself.
Algorithms and problems to solve: recognizing handwritten digits (feedforward neural networks), approximating functions from a handful of samples (radial basis function networks)
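A tiny feedforward network sketch in PyTorch; all layer sizes here are arbitrary:

```python
# A tiny feedforward neural network: 4 input features,
# one hidden "column" of 8 neurons, 3 output classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),   # connections from the inputs to the hidden layer
    nn.ReLU(),         # how strongly each hidden neuron "pings" forward
    nn.Linear(8, 3),   # connections from the hidden layer to the outputs
)

x = torch.randn(1, 4)  # one random example with 4 features
print(model(x))        # raw scores for the 3 classes
```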
When you make a mistake, you learn from it and try not to repeat it. Backpropagation is a way for a program to do the same thing.
It helps the program understand where it went wrong and get better at finding the right answers.
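In PyTorch this is basically one call; a minimal sketch with a made-up input and target:

```python
# Backpropagation sketch: compute a loss, then let autograd
# figure out how much each weight contributed to the mistake.
import torch
import torch.nn as nn

model = nn.Linear(2, 1)           # a single tiny layer
loss_fn = nn.MSELoss()

x = torch.tensor([[1.0, 2.0]])
target = torch.tensor([[5.0]])

loss = loss_fn(model(x), target)  # how wrong are we?
loss.backward()                   # backpropagation: gradients flow backward

print(model.weight.grad)          # "where it went wrong" for each weight
```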
CNNs are special neural networks that can understand grid-like data, like pictures.
They have layers that help them learn patterns and recognize parts of the picture, such as lines and shapes.
Algorithms and problems to solve: detecting faces in photos (LeNet-5), identifying different types of animals in images (AlexNet, VGG)
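A small CNN sketch in PyTorch, loosely in the LeNet spirit; the layer sizes and the 28x28 image size are arbitrary:

```python
# A small CNN for 28x28 grayscale images.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local patterns like lines and edges
    nn.ReLU(),
    nn.MaxPool2d(2),                             # shrink the grid, keep the strongest signals
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine edges into shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # classify into 10 categories
)

images = torch.randn(4, 1, 28, 28)               # a fake batch of 4 images
print(cnn(images).shape)                         # -> torch.Size([4, 10])
```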
RNNs are designed to work with data that comes in a sequence, like a series of numbers or words.
They can remember previous inputs and use that information to make better decisions.
Algorithms and problems to solve: predicting stock prices (Long Short-Term Memory, LSTM), generating text based on a given style (Gated Recurrent Units, GRU)
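An LSTM sketch in PyTorch; random numbers stand in for a real price series here:

```python
# An LSTM reads a sequence of 10 steps (1 feature each, e.g. a daily price)
# and predicts the next value from its memory of the sequence.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)

sequence = torch.randn(1, 10, 1)      # (batch, time steps, features)
outputs, (h, c) = lstm(sequence)      # the LSTM carries memory forward in h and c
prediction = head(outputs[:, -1, :])  # use the last step's memory to predict the next value

print(prediction)
```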
This is when a model that has already learned a lot is fine-tuned to work on a new task with limited data.
It helps the program learn faster and perform better because it already knows useful things from its previous learning.
Example: using a model trained on many dog breeds (like ResNet) to recognize specific types of cats
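A transfer learning sketch, assuming a recent torchvision is installed; the "5 cat breeds" target task is made up:

```python
# Take a ResNet pre-trained on ImageNet and swap its last layer for a new task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # already knows a lot about images

for param in model.parameters():
    param.requires_grad = False                 # freeze what it already learned

model.fc = nn.Linear(model.fc.in_features, 5)   # new final layer: 5 cat breeds, trained from scratch
# ...then train only model.fc on your small cat dataset.
```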
These techniques help the program avoid memorizing the training data too closely, which would cause overfitting: great results on examples it has seen, poor results on new ones.
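A regularization sketch in PyTorch using dropout and weight decay, two common techniques; the layer sizes and hyperparameters are arbitrary:

```python
# Two common ways to keep a model from just memorizing its training data.
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly silence half the neurons during training
    nn.Linear(64, 2),
)

# weight_decay penalizes large weights, another guard against overfitting
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```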
finished in: 0 min, 52 sec
This is basically it. Now go and train your agents, Kelvin, googling implementations for the algorithms and problems you want to solve!
To start using a tool, start with a problem the tool solves and learn what Concept(s) are used to solve it. Don't keep implementations in your head like "how to do something with a tool"; it's complicated, and it all won't fit in your head anyway.
Learn ideas, google implementations.
After googling an implementation 5~10 times you will memorize it. Then the technology becomes obsolete and you forget it. It's ok, it happens all the time.
Ideas are hard to forget; they stay in your memory for a long time.
Bye and see you in the next drag race
Wait akshually!
Think about following the twit bird if you want to see education systems become practice-first and beneficial <3
Anyway, you may follow me on twitter if you just liked the text and want more, or if you are addicted to fun on social networks
or don’t follow anybody and don’t listen to anyone! make your own way!
I actually want you to follow my twits, that was just a sales pitch.
Check my "Learn REACT in 43 seconds"