
Build a Personal Shopping Assistant Using Brain.js and Node.js

by JSDevJournal, August 23rd, 2023

Too Long; Didn't Read

Explore the world of personalized recommendations with Brain.js and Node.js. Uncover how they turn your preferences into curated shopping experiences.



Ever wanted to flex your coding skills and build an AI app that helps people? Now's your chance with my guide to a machine-learning outfit picker!


I'll show you how to use Brain.js and Node.js to train a neural network in JavaScript. Soon, you'll have a program that learns styles and picks awesome outfits.


The best part? Its suggestions keep getting better the more users it helps. By the end, you'll know how to build recommendation bots for any kind of shopping.


Join me and code your own AI stylist assistant. Then you can use your tech powers to help people find perfect outfits with ease. Let's get coding!

Setting Up Our Project

First things first, we need to initialize our Node.js project. Open up your terminal and run:

npm init

This will create a package.json file to track dependencies and scripts for our app.


Next, we'll install the dependencies we need:

npm install brain.js express mongoose

Brain.js is the artificial intelligence library we'll use for recommendations. Express will help build our web server and API. And Mongoose interfaces with our MongoDB database, where we'll store product and user data.


With our packages installed, create an app.js file that will kick off our server:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Shopping assistant is ready!');
});

app.listen(3000, () => {
  console.log('App listening on port 3000!');
});

Now, if you run node app.js and open http://localhost:3000 in your browser, you should see the message "Shopping assistant is ready!" Great work. On to the next step!

Creating Our Database Models

For our assistant to learn, it needs data - lots of products with details like categories, colors, sizes, etc. It also needs to understand our preferences as users. Let's design some MongoDB Schemas using Mongoose:

// product.model.js
const mongoose = require('mongoose');

const productSchema = new mongoose.Schema({
  name: String,
  image: String, 
  categories: [String],
  colors: [String],
  sizes: [String],
  // etc
});

module.exports = mongoose.model('Product', productSchema);


// user.model.js
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  name: String,
  likes: [{type: mongoose.Schema.Types.ObjectId, ref: 'Product'}],
  dislikes: [{type: mongoose.Schema.Types.ObjectId, ref: 'Product'}] 
});

module.exports = mongoose.model('User', userSchema);
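
One detail the snippets gloss over: Mongoose won't run any queries until a connection is open. Here's a minimal sketch, assuming a local MongoDB instance and a database named shopping-assistant (swap in your own connection string):

// db.js
const mongoose = require('mongoose');

// Assumed local connection string - replace with your own MongoDB URL
mongoose.connect('mongodb://localhost:27017/shopping-assistant')
  .then(() => console.log('Connected to MongoDB'))
  .catch(err => console.error('Connection failed:', err));

module.exports = mongoose;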

Now, we have the structure to store products, users, and their preferences! Onto filling our database with data...

Populating Our Database with Sample Data

To start training our recommendation model, we need some sample products and users stored in MongoDB. Let's populate it with some realistic data:

// seed.js
const mongoose = require('mongoose');
const Product = require('./product.model');
const User = require('./user.model');

// Sample products
const products = [
  {
    name: 'Blue shirt',
    image: 'shirt1.jpg',
    categories: ['tops', 'shirts'],
    colors: ['blue']
  },
  {
    name: 'Black pants',
    image: 'pants1.jpg',
    categories: ['bottoms', 'pants'],
    colors: ['black'],
    sizes: ['S', 'M', 'L']
  },
  // etc
];

async function seed() {
  await mongoose.connect('mongodb://localhost:27017/shopping-assistant');

  // Insert products first - insertMany returns documents with generated _ids
  const savedProducts = await Product.insertMany(products);

  // Sample users whose likes/dislikes reference the saved products by _id
  const users = [
    {
      name: 'John Doe',
      likes: [savedProducts[0]._id] // John likes the blue shirt
    },
    {
      name: 'Jane Doe',
      dislikes: [savedProducts[1]._id] // Jane dislikes the black pants
    }
  ];
  await User.insertMany(users);

  console.log('Database populated!');
  await mongoose.disconnect();
}

seed();

Now, when you run node seed.js, the sample products and users will be inserted into MongoDB. From here, our AI can start observing patterns in the data. Exciting!

Designing the Model Architecture

The first decision is how to structure our neural network. Brain.js ships recurrent architectures like the Long Short-Term Memory (LSTM) network, which shines when the input is a sequence, such as a user's browsing history over time.


In this guide, though, we'll encode each user's preferences as a fixed-length vector rather than a sequence, so a simple feedforward network (a multilayer perceptron) is the better fit, and it's what we'll train below.

Our model will take as input an encoding of a user's likes/dislikes together with a product's attributes. It will then output a score predicting how likely that user is to enjoy that product.

Encoding the Training Data

For the network to learn, we need to prepare training examples. We encode each user as a sparse binary vector representing their known preferences. Products are encoded the same way, as one-hot style vectors over their categories and colors.

The training data pairs these inputs with the true "liked" output for each user-product pair. This exposes the pattern we want the network to extract.


Here is an example of what realistic encoded training data for our recommendation model could look like:

// Encoded users
const users = {
  'user1': [1, 0, 1, 0, 0, 0, 0], // Likes category 1, color 2
  'user2': [0, 1, 0, 1, 1, 0, 0] // Likes category 2, colors 3 & 5  
};

// Encoded products  
const products = {
  'prod1': [1, 0, 1, 0, 0, 0], // Category 1, color 2
  'prod2': [1, 0, 0, 1, 0, 0], // Category 1, color 4
  'prod3': [0, 1, 0, 0, 1, 0] // Category 2, color 5
};

Some notes:


  • Users and products are encoded as sparse vectors
  • Values represent categories/colors from a one-hot encoding
  • Training data pairs these inputs with liked/disliked outputs
  • Real data would have thousands of examples
  • Values here are just for demonstration


The goal is to expose patterns between a user's preferences and whether they'd enjoy a product based on its attributes. This helps the network learn those predictive relationships from data.


Of course, encoding real customer data properly at scale involves more complexity. But this shows the general idea!
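
To make this concrete, here is a small hypothetical helper that assembles one training example from these encodings. The makeExample name and the concatenated-input/labeled-output shape are conventions I'm choosing for this guide, not part of Brain.js:

// Assemble one training example from a user and product encoding
function makeExample(userEncoding, productEncoding, liked) {
  return {
    input: [...userEncoding, ...productEncoding], // concatenated feature vector
    output: { likes: liked ? 1 : 0 }              // 1 = liked, 0 = disliked
  };
}

// e.g. user1 liked prod1 but not prod2
const examples = [
  makeExample(users['user1'], products['prod1'], true),
  makeExample(users['user1'], products['prod2'], false)
];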

Training Our Recommendation Model

It's time to create the brain of our shopping assistant - the neural network model that will power the recommendations. Using Brain.js, we can train a simple multilayer perceptron:

// model.js

const brain = require('brain.js');

// Encoded users/products from the previous section
// (assumed to be exported from an encodings.js module)
const { users, products } = require('./encodings');

// A feedforward network: fixed-length vector in, a single "likes" score out
const network = new brain.NeuralNetwork();

// Training data: each example pairs a user encoding concatenated with a
// product encoding, labeled 1 if the user liked that product, 0 if not
const trainingData = [
  {
    input: [...users['user1'], ...products['prod1']],
    output: { likes: 1 } // User 1 likes prod1
  },
  {
    input: [...users['user1'], ...products['prod2']],
    output: { likes: 0 } // User 1 does not like prod2
  },
  {
    input: [...users['user2'], ...products['prod3']],
    output: { likes: 1 } // User 2 likes prod3
  }
  // ... more data
];

network.train(trainingData);

module.exports = network;

Encoding product attributes and training on real user data is much more complex - but you get the idea! The key is to teach the network patterns of what users like based on product properties. Now it's ready to provide suggestions!
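
As a quick sanity check once training finishes, you can ask the network to score a single user-product pair:

const network = require('./model');
const { users, products } = require('./encodings'); // same assumed module as above

// Score how likely user1 is to enjoy prod2
const output = network.run([...users['user1'], ...products['prod2']]);
console.log(output.likes); // a number between 0 and 1 - higher means a better match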

Creating the Recommendation API

Finally, let's expose an API for our trained model to generate recommendations. We'll take in a user ID and return top matches.


Some key things it needs to do:


  1. Map each of the user's liked/disliked products to their unique IDs.
  2. Look up each product ID and extract its encoded feature vector (assigned during data preprocessing).
  3. Build a sparse vector representing the user by summing the feature vectors of their liked items and subtracting disliked ones.


For example, if a user liked:


  • Product 1 (with features [1,0,1])
  • Product 3 (with features [0,1,0])

And disliked:

  • Product 2 (with features [1,1,0])

Their encoded vector would be:

[1,0,1] + [0,1,0] - [1,1,0] = [0,0,1]


This exposes their preferences as a set of aggregate feature scores.


The encodeUser function essentially:


  1. Maps user-product relations to IDs
  2. Looks up product encodings
  3. Computes the user's aggregate preference vector


Here is a sample implementation of the encodeUser function:

// Sample product encodings, keyed by product ID
// (ObjectIds coerce to strings when used as object keys)
const productEncodings = {
  'prod1_id': [1, 0, 1],
  'prod2_id': [0, 1, 0],
  'prod3_id': [1, 1, 0]
};

function encodeUser(userDoc) {
  // Initialize a zero-filled vector the same length as a product encoding
  let encoding = new Array(3).fill(0);

  // Add the encoding of each liked product
  (userDoc.likes || []).forEach(productId => {
    const productEncoding = productEncodings[productId];
    encoding = encoding.map((value, i) => value + productEncoding[i]);
  });

  // Subtract the encoding of each disliked product
  (userDoc.dislikes || []).forEach(productId => {
    const productEncoding = productEncodings[productId];
    encoding = encoding.map((value, i) => value - productEncoding[i]);
  });

  return encoding;
}

module.exports = encodeUser;

The key steps are:


  1. Initialize a zero-filled vector as the user encoding
  2. Look up product encodings and add/subtract from the user vector
  3. Return the final aggregate encoding


This prepares a standardized input the model can use to generate recommendations based on a user's learned tastes.

// recommendations.js

const express = require('express');
const router = express.Router();
const User = require('./user.model');
const Product = require('./product.model');
const network = require('./model');
const encodeUser = require('./encodeUser'); // the encodeUser module above, saved as encodeUser.js
const { products: productEncodings } = require('./encodings'); // assumed map of product IDs to encoded vectors

router.get('/recommendations/:userId', async (req, res) => {
  const user = await User.findById(req.params.userId);

  // Encode user likes/dislikes as input
  const userEncoding = encodeUser(user);

  // Score every candidate product with the trained model
  // (dimensions must match what the network was trained on)
  const products = await Product.find();
  const scored = products.map(product => {
    const productEncoding = productEncodings[product._id.toString()];
    const { likes } = network.run([...userEncoding, ...productEncoding]);
    return { product, score: likes };
  });

  // Return the five highest-scoring products
  scored.sort((a, b) => b.score - a.score);
  res.json(scored.slice(0, 5).map(s => s.product));
});

// Mount in app.js with: app.use(require('./recommendations'))
module.exports = router;

And with that, we have a fully functional shopping assistant API powered by AI! Users can now get tailored suggestions based on their preferences.
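
With the server from earlier running on port 3000, you can try the endpoint like this, substituting a real user _id from your seeded database for the placeholder:

// Node 18+ ships a built-in fetch
fetch('http://localhost:3000/recommendations/<userId>')
  .then(res => res.json())
  .then(recommended => console.log(recommended));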

Training the Recommendation Engine in Real Time

Here are a few ways we could improve the recommendation engine so it trains in real time as users interact with the system:


Incremental Training

  • Instead of batch training, update the model weights after each user interaction.

  • This allows it to continuously learn from new implicit feedback data.


Online Learning

  • Maintain a pool of recently collected training examples

  • Periodically take a batch from the pool to update the model

  • As new data arrives, replace old examples in the pool
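
Here's a minimal sketch of that pool idea, assuming the feedforward network from earlier; the pool size and retraining cadence are arbitrary illustration values:

const network = require('./model');

// Fixed-size pool of recent training examples
const POOL_SIZE = 1000;
const pool = [];
let seen = 0;

function recordFeedback(example) {
  pool.push(example);
  if (pool.length > POOL_SIZE) pool.shift(); // evict the oldest example

  // Every 100 new examples, retrain on the current pool
  seen += 1;
  if (seen % 100 === 0) {
    network.train(pool, { iterations: 200 });
  }
}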


Transfer Learning

  • Initialize model weights from an existing pre-trained version

  • Freeze early layers and only fine-tune the top layers with new data

  • Speeds up learning compared to training from scratch
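
Brain.js doesn't expose per-layer freezing, but it can serialize and restore trained weights, which covers the "start from a pre-trained version" part. A rough sketch using its toJSON/fromJSON methods:

const fs = require('fs');
const brain = require('brain.js');

// Save the trained weights to disk
const trained = require('./model');
fs.writeFileSync('model.json', JSON.stringify(trained.toJSON()));

// Later: restore the pre-trained weights instead of training from scratch
const network = new brain.NeuralNetwork();
network.fromJSON(JSON.parse(fs.readFileSync('model.json', 'utf8')));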


Contextual Recommendations

  • In addition to long-term preferences, also consider short-term factors like:

      • Recently viewed products

      • Current search/browse session

      • External contextual cues like time and location


Incremental Encoding Scheme

  • Dynamically expand encoding vocabularies as new categories/items are encountered.

  • Retrain the model only on newly encoded examples


Parallel and Distributed Training

  • Use model averaging across multiple replica models trained asynchronously on shards of data.
  • Scales to massive datasets through data/model parallelism


The key is to continuously update the model in an online fashion as implicit feedback comes in, rather than periodic batch training. This helps the assistant progressively get smarter based on real-time user interactions.

Bottom Line

We've covered the full process of building a personalized product recommendation system using machine learning. With Brain.js and Node.js, we:


  • Designed realistic database schemas to represent real-world data
  • Seeded our MongoDB with sample products and users
  • Built a feedforward neural network suited to our encoded preference data
  • Prepared training examples by encoding users and products
  • Trained the model on pairs of user-product interactions
  • Built an API endpoint for the model to generate recommendations


Through this project, we learned how to:


  • Structure and populate databases for ML applications
  • Design neural networks for predictive tasks
  • Prepare and optimize training data for the model
  • Construct APIs to serve predictions to clients


By continuously improving the system through real-time data logging and incremental training updates, our personal shopper AI can become smarter over time based on actual user behaviors.


The techniques shown here are broadly applicable for building other types of personalized recommendation engines. With further refinement, this approach could power an end-to-end product discovery experience tailored specifically for each unique customer.


I hope this guide has equipped you to start developing your very own machine-learning systems. Please feel free to reach out if any part needs further explanation - and happy coding!

