Your grandfather wears those comfy slipper-y shoes all day, every day, and they’re starting to get holes in the toe and a detached sole.
For his birthday, you’d love to surprise him with a new pair of those.
You jump on your favorite retail website and land in the search box, only to realize that your aging human brain doesn’t remember what they’re called.
Not slippers, not loafers…some Native American name.
If, after scratching your head, your best guess is still “soft shoes my grandpa wears”, all is not lost. Because if the search engine is on the ball, it will simply dig a little deeper, harnessing the power of data science to decipher what you’re actually looking for: your underlying search intent.
Search intent is a hot topic in today’s digital world. To provide an excellent user experience, website managers must know more than just which keywords people are entering. The expectation now is for search engines to fully comprehend what users want, even when those users can’t come close to articulating it.
That’s where “brainy” neural network logic comes in. Neural network–based search technology facilitates “understanding” of search intent and related themes to find exactly what people need. A neural network takes into account the context of words and connected ideas to improve search relevance. So when you search for something like “soft shoes my grandpa wears”, a savvy search engine will instantly read between the lines and show you all the moccasins your granddad could ever need.
Neural networks were conceived as mathematical models by Warren McCulloch and Walter Pitts, researchers working in Chicago, in 1943. This concept exists in two realms:
Nature: Animal brains, including humans’, have biological neural networks and feedback loops that can process input data from the senses — sights, sounds, and scents — plus learn from the surroundings.
Technology: Computer scientists have figured out how to replicate the model of neurons in natural neural networks. Artificial neural networks (ANNs) are the basis for artificial intelligence (AI) and machine learning, sitting at the center of many contemporary AI tools. These versions, complete with artificial neurons, allow computers to more effectively process an input dataset. As for the “intelligent” machines involved, the quality of their output is limited by the material they receive, but they do “think” in a way loosely modeled on how an animal does.
In naturally occurring networks, information is processed by neurons arranged in interconnected layers. An artificial neural net mirrors this structure: it is a series of nodes (computational units), organized in layers and connected through inputs and outputs.
Neural networks work their magic through three layers:

Input layer: where raw data, such as a search query or an image, enters the network.

Hidden layer: the middle layer (or layers) where the weighted computation happens.

Output layer: where the final result, such as a classification or prediction, is delivered.
The number of layers in a neural network is a clue to its classification. A basic neural network has two or three layers. One that has at least two hidden layers, which adds some complexity, is technically a deep neural network. A very large neural network is a deep-learning tool; by IBM’s definition, anything with more than three layers (including the input and output) qualifies as a deep-learning algorithm.
Artificial neural networks contain a number of weighted responses that trigger different reactions within the hidden layer. These responses dictate what gets produced as the outcome.
To do their jobs, neural networks (or neural nets, for short) need some training data: they’re “trained” by adjusting their weighting and testing the resulting outcomes. A neural network must have the right rules and weighted responses to do the particular job for which it’s intended. For example, the ChatGPT neural network contains nodes in its hidden layers that allow it to effectively process language.
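To make “adjusting weighting” a little more concrete, here’s a minimal Python/NumPy sketch (a toy illustration on a made-up dataset, nothing like the scale of ChatGPT) that trains a single artificial neuron by nudging its weights whenever its outcomes are wrong:

```python
import numpy as np

# Toy training data: two input features per example, binary target.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])  # behaves like a logical OR

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # weighted responses, initially random
bias = 0.0
learning_rate = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Training" = nudge the weights whenever the outcomes are wrong.
for epoch in range(5000):
    prediction = sigmoid(X @ weights + bias)            # current outcomes
    error = prediction - y                               # how wrong are we?
    weights -= learning_rate * (X.T @ error) / len(y)    # adjust weighting
    bias -= learning_rate * error.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # predictions approach [0, 1, 1, 1]
```

After enough passes, the weights settle into values that produce the right outcomes for the training data, which is essentially what “training” means at this tiny scale.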
There are multiple kinds of artificial neural networks. Some are generative and can “learn” as they work. Some simply focus on automating and improving tasks such as facial recognition or data analysis. Some are fantastic for image processing and automation; others excel at enhancing search and discovery. Each kind can also be tailored to the needs of a particular use case.
Here are six types of neural networks and their key characteristics:
Feed-forward neural networks are linear: they process information in one direction until an output is ready. This is the simplest form of neural network architecture. When used alone, as opposed to as part of a modular network, they don’t contain the feedback loops required to build artificial intelligence. This architecture is often used in classification software, such as a spam filter, where each step of the analysis brings you closer to the final result without any intermediate results needing to be reused downstream.
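Here’s a hedged sketch of that one-directional flow in Python/NumPy. The spam-filter features and the weights are invented for illustration; in a real system the weights would come from training:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical spam-filter features: [number of links, ALL-CAPS ratio, sender known?]
email_features = np.array([4.0, 0.8, 0.0])

# Pretend these weights were already learned during training.
rng = np.random.default_rng(1)
W_hidden = rng.normal(size=(3, 5))   # input layer -> hidden layer
W_output = rng.normal(size=(5, 1))   # hidden layer -> output layer

# Information flows strictly forward: input -> hidden -> output.
hidden_activations = relu(email_features @ W_hidden)
spam_score = sigmoid(hidden_activations @ W_output)

print("spam" if spam_score[0] > 0.5 else "not spam")
```

Notice that data only ever moves forward, from the input layer to the hidden layer to the output layer, with no step feeding back into an earlier one.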
In terms of complexity, recurrent neural networks (RNNs) pick up where feed-forward networks leave off, reprocessing their own outputs to generate more-accurate future outputs. This creates a feedback loop, a recurring process, in the hidden layer.
Because an RNN can create more-valuable insights by reusing its output (pushing data back through the network to contextualize and “understand” the information), it’s ideal for use with AI-powered tools like ChatGPT.
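Here’s a rough Python/NumPy sketch of that feedback loop. The word vectors and weights are random stand-ins, not a real language model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "sentence": each word is already encoded as a small vector.
words = [rng.normal(size=4) for _ in range(3)]

W_input = rng.normal(size=(4, 6))      # current word -> hidden layer
W_recurrent = rng.normal(size=(6, 6))  # previous hidden state -> hidden layer
hidden_state = np.zeros(6)             # the network's "memory"

# The feedback loop: each step reuses the previous step's output.
for word in words:
    hidden_state = np.tanh(word @ W_input + hidden_state @ W_recurrent)

print(hidden_state.round(2))  # a context-aware summary of the whole sequence
```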
Convolutional neural networks (CNNs) are often used for pattern recognition, image recognition, and image classification. This subset of machine learning categorizes data as it passes through multiple convolutional layers, breaking it down and recategorizing it based on importance. The resulting information can then be used to differentiate similar images from one another, spotting, for instance, critical pixel differences that the human eye might not see. One consideration: if a CNN isn’t set up optimally, it can deem some information “unimportant” and discard it.
That’s where a deconvolutional neural network comes in handy.
Deconvolutional neural networks (DNNs) work in the opposite way to CNNs: they start with “raw” data and work backward toward a usable image output. DNNs can be used to identify information that was discarded by or missed when using a CNN, and they can be used in conjunction with CNNs to self-test outcomes.
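To show what the convolutional side of this looks like, here’s a simple Python/NumPy sketch of a single convolution pass with a hand-picked edge-detection kernel. A real CNN learns its kernels from data, and a deconvolutional network would apply transposed versions of such kernels to work back toward image space:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image, producing a feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

# Tiny grayscale "image" with a bright vertical stripe.
image = np.zeros((6, 6))
image[:, 3] = 1.0

# A hand-picked vertical-edge kernel; a real CNN learns its kernels.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

print(convolve2d(image, kernel))  # strong responses where the stripe's edges are
```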
Modular neural networks are made up of multiple neural networks that work separately toward a common goal by using ensemble learning, a method that connects different models to pool results and test outcomes before presenting findings.
An organization can use this variation of network technology to improve the accuracy of its outcomes. For example, a finance company might use it to generate more-accurate stock-market predictions: the different networks produce different outcomes, and the pooled findings can then be used to peg averages and outliers.
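As a toy illustration of that pooling step (the numbers below are invented, not real market data or a real trading system), here’s how ensemble averaging might look in Python/NumPy:

```python
import numpy as np

# Pretend each row holds tomorrow's predicted price change for the same
# five stocks, produced by a separately trained network (made-up numbers).
model_predictions = np.array([
    [0.8, -0.2, 1.1, 0.3, -0.5],   # network A
    [0.6, -0.1, 0.9, 0.4, -0.7],   # network B
    [0.9, -0.3, 1.4, 0.2, -0.4],   # network C
])

# Ensemble learning, in miniature: pool the outcomes...
average = model_predictions.mean(axis=0)
spread = model_predictions.std(axis=0)

# ...then flag predictions the networks disagree on as less trustworthy.
for avg, dev in zip(average, spread):
    flag = "consensus" if dev < 0.2 else "outlier risk"
    print(f"predicted change {avg:+.2f}  ({flag})")
```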
Did you know that the T in ChatGPT and GPT-4 stands for transformer? The transformer, the element that makes generative AI so powerful, is a relatively new form of machine learning that utilizes modular neural networks. Transformers use something called a self-attention layer, along with feed-forward neural networks and RNNs, to focus on complex tasks such as language processing.
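Here’s a stripped-down Python/NumPy sketch of that self-attention idea. A real transformer uses separate learned matrices to build queries, keys, and values for every word; this version reuses the input directly to keep the sketch short:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors."""
    d = X.shape[1]
    # A real transformer would project X into learned queries, keys, and
    # values; here we reuse X itself for brevity.
    scores = X @ X.T / np.sqrt(d)                  # how much each word "attends" to the others
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ X                             # context-aware word representations

rng = np.random.default_rng(4)
sentence = rng.normal(size=(5, 8))      # five "words", eight features each
print(self_attention(sentence).shape)   # (5, 8): same words, now in context
```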
As in many industries, AI is steadily changing the fabric of real-world ecommerce search. Neural networks are being pressed into service to improve software’s comprehension of what consumers want. These tools can help identify relationships among people, content, and data, as well as connections between user interests and search queries (both current and past).
At Algolia, we harness machine learning and natural language processing (NLP) to get at the heart of search intent; then the best query results can be ranked and provided to your users.
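To give a flavor of how neural search can rank results by meaning rather than by exact keywords, here’s a hypothetical Python/NumPy sketch. This is not Algolia’s actual pipeline; the embeddings are random stand-ins for the vectors a trained language model would produce:

```python
import numpy as np

rng = np.random.default_rng(5)

# In a real system, a trained model would embed products and queries into
# meaning-aware vectors; random vectors stand in for them here.
catalog = {
    "moccasins": rng.normal(size=16),
    "running shoes": rng.normal(size=16),
    "wool socks": rng.normal(size=16),
}
query_embedding = rng.normal(size=16)  # e.g., "soft shoes my grandpa wears"

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank products by how close their meaning is to the query's meaning.
ranked = sorted(catalog,
                key=lambda name: cosine_similarity(query_embedding, catalog[name]),
                reverse=True)
print(ranked)
```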
Want to smarten your search engine with the advantages of neural search? Maybe improve your visual search (image recognition or computer vision skills), or perhaps optimize your voice search?
To assist your decision-making, let’s look at your needs and weigh the options. Get in touch with us today.