
How to Richly Preprocess your IPFS NFT Images and Metadata

by Darlington Gospel October 4th, 2022

Too Long; Didn't Read

IPFS stands for the InterPlanetary File System; it is peer-to-peer and decentralized. Data stored on IPFS cannot easily be taken down, which makes it a near-perfect companion for blockchain applications that need to store media content.

The first step in building an NFT minting Dapp is to prepare the artwork. Without the artwork, your NFT project cannot be actualized.


Another reason for preprocessing your artworks is to generate metadata for each image. Without this metadata, your NFT cannot be sold on any of the big secondary markets, such as OpenSea.


For example, the image below is an NFT on the OpenSea marketplace bearing some metadata, which can also be seen below.

Metadata Images

Metadata Details and Traits


The above information including the artwork, its details, and traits can be seen on the IPFS image below.


IPFS Metadata


Setting up Project Dependencies

Open your terminal and navigate to your project directory, or create a project folder at a specific location. For example, running **mkdir preprocessor** in the terminal will create a folder called “preprocessor” in your current location.


Next, run the following commands on the terminal.


cd preprocessor
npm init -y
npm install sharp @faker-js/faker --save-dev


The above commands install both the sharp and faker libraries in your project. The faker library will help us generate random values, while the sharp library will help us process each image to a certain dimension, quality, and format.


Next, create a folder in your project called arts and another called outputs. Put all the images to be processed inside the “arts” folder.


With the above steps accomplished, open the project in VS Code. The project structure should look like the one below.


- preprocessor/
  - arts/
  - node_modules/
  - outputs/
  - package.json
  - package-lock.json


Great, let’s move on to coding the engine responsible for processing the images.

Prerequisites

You’ll need the following installed on your computer to complete this tutorial.

  • NodeJs
  • IPFS Desktop app
  • VS Code

Coding the Processing Engine

Create a file at the root of your project called **imagify.js** and paste in the following code.


const fs = require('fs')
const path = require('path')
const sharp = require('sharp')
const { faker } = require('@faker-js/faker')
const input = './arts'
const output = './outputs'

// Create the output subfolders up front; sharp and writeFileSync
// will fail if they do not exist.
fs.mkdirSync(`${output}/images`, { recursive: true })
fs.mkdirSync(`${output}/metadata`, { recursive: true })

let img_counter = 1
const imgSize = { width: 500, height: 500 }
const desired_ext = '.webp'
const base_url = 'https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/'
const attributes = {
  weapon: [
    'Stick',
    'Knife',
    'Blade',
    'Club',
    'Ax',
    'Sword',
    'Spear',
    'Gun',
    'Craft',
  ],
  environment: [
    'Space',
    'Sky',
    'Desert',
    'Forest',
    'Grassland',
    'Mountains',
    'Oceans',
    'Rainforest',
  ],
  rarity: Array.from(Array(10).keys()),
}

fs.readdirSync(input).forEach((file) => {
  const orginal_ext = path.extname(file)
  const orginal_file_name = path.basename(file).split('.')[0]

  if (['.jpg', '.jpeg', '.png', '.gif', '.webp'].includes(orginal_ext)) {
    const id = img_counter

    const metadata = {
      id,
      name: `Adulam NFT #${id}`,
      description:
        'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
      price: 1,
      image: base_url + id + desired_ext,
      demand: faker.datatype.number({ min: 10, max: 100 }),
      attributes: [
        {
          trait_type: 'Environment',
          value: attributes.environment.sort(() => 0.5 - Math.random())[0],
        },
        {
          trait_type: 'Weapon',
          value: attributes.weapon.sort(() => 0.5 - Math.random())[0],
        },
        {
          trait_type: 'Rarity',
          value: attributes.rarity.sort(() => 0.5 - Math.random())[0],
          max_value: 10,
        },
        {
          display_type: 'date',
          trait_type: 'Created',
          value: Date.now(),
        },
        {
          display_type: 'number',
          trait_type: 'generation',
          value: 1,
        },
      ],
    }

    if (fs.existsSync(`${input}/${orginal_file_name + orginal_ext}`)) {
      sharp(`${input}/${orginal_file_name + orginal_ext}`)
        .resize(imgSize.width, imgSize.height)
        .toFile(`${output}/images/${id + desired_ext}`, (err) => {
          if (err) console.error(err)
        })

      fs.writeFileSync(`${output}/metadata/${id}.json`, JSON.stringify(metadata), {
        encoding: 'utf-8',
        flag: 'w',
      })
    }
    console.log(metadata)
    img_counter++
  }
})


The following steps will help you understand how this metadata processing engine works.


Importing Essential Libraries

const fs = require('fs')
const path = require('path')
const sharp = require('sharp')
const { faker } = require('@faker-js/faker')
const input = './arts'
const output = './outputs'


fs stands for file system; it is a built-in module that ships with NodeJs. It is responsible for managing file reading and writing activities on your machine.


path is another Node module; it helps you navigate the directory structures on your machine. This will be instrumental in locating where our images are kept.


sharp is the module we use for processing each image, such as resizing it and converting it to a different file type.


We’ll use faker to generate random values, such as each token’s demand figure.


Lastly, the input variable contains the location where the images to be processed are located and the output points to the location where the processed images will be saved.


Defining Essential Variables

let img_counter = 1
const imgSize = { width: 500, height: 500 }
const desired_ext = '.webp'
const base_url = 'https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/'
const attributes = {
  weapon: [
    'Stick',
    'Knife',
    'Blade',
    'Club',
    'Ax',
    'Sword',
    'Spear',
    'Gun',
    'Craft',
  ],
  environment: [
    'Space',
    'Sky',
    'Desert',
    'Forest',
    'Grassland',
    'Mountains',
    'Oceans',
    'Rainforest',
  ],
  rarity: Array.from(Array(10).keys()),
}


The above code defines important variables that will be used in the course of generating our metadata.


  • **img_counter** numbers the images consistently with the current iteration.
  • **imgSize** defines the width and height of each image to be processed.
  • **desired_ext** is the file format you want your processed images to bear.
  • **base_url** specifies the location where the images will be stored on IPFS.
  • **attributes** holds the pools of trait values used in each image’s metadata.
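
As a quick sanity check, the rarity line in the variables above expands to the ten integers 0 through 9:

```javascript
// `Array.from(Array(10).keys())` materializes the iterator of array
// indices 0..9 into a real array, giving the ten possible rarity scores.
const rarity = Array.from(Array(10).keys())
console.log(rarity) // → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```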


Looping Through the Images

fs.readdirSync(input).forEach((file) => {
  const orginal_ext = path.extname(file)

  if (['.jpg', '.jpeg', '.png', '.gif', '.webp'].includes(orginal_ext)) {
    // Image and metadata tasks are performed here for each file...
  }
})


In the above block of code, we use the file system library (fs) to loop through all the files in the **input** location (arts). For each file, we make sure the engine only selects images with an approved extension.


Performing The Metadata Task

const id = img_counter
const metadata = {
  id,
  name: `Adulam NFT #${id}`,
  description:
    'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  price: 1,
  image: base_url + id + desired_ext,
  demand: faker.datatype.number({ min: 10, max: 100 }),
  attributes: [
    {
      trait_type: 'Environment',
      value: attributes.environment.sort(() => 0.5 - Math.random())[0],
    },
    {
      trait_type: 'Weapon',
      value: attributes.weapon.sort(() => 0.5 - Math.random())[0],
    },
    {
      trait_type: 'Rarity',
      value: attributes.rarity.sort(() => 0.5 - Math.random())[0],
      max_value: 10,
    },
    {
      display_type: 'date',
      trait_type: 'Created',
      value: Date.now(),
    },
    {
      display_type: 'number',
      trait_type: 'generation',
      value: 1,
    },
  ],
}


In the code block above, we supply a value for each metadata field. The environment, weapon, and rarity traits are picked randomly and dynamically on every run.
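
A note on the random selection: `attributes.environment.sort(() => 0.5 - Math.random())[0]` shuffles the whole array (with a known statistical bias) just to take one element, and it mutates the array in place. A minimal alternative, using a hypothetical helper name, picks a uniformly random element without touching the source array:

```javascript
// Hypothetical helper: uniformly random element, no mutation of `arr`.
function pickRandom(arr) {
  return arr[Math.floor(Math.random() * arr.length)]
}

const environments = ['Space', 'Sky', 'Desert', 'Forest']
console.log(pickRandom(environments)) // e.g. 'Desert'
```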


Performing The Image Transformation Task

if (fs.existsSync(`${input}/${orginal_file_name + orginal_ext}`)) {
  sharp(`${input}/${orginal_file_name + orginal_ext}`)
    .resize(imgSize.width, imgSize.height)
    .toFile(`${output}/images/${id + desired_ext}`, (err) => {
      if (err) console.error(err)
    })

  fs.writeFileSync(`${output}/metadata/${id}.json`, JSON.stringify(metadata), {
    encoding: 'utf-8',
    flag: 'w',
  })
}

console.log(metadata)
img_counter++


In the snippet above, we again use the file system module to locate each one of our artworks and resize it to our specified dimensions (500 x 500). We also name the output after the current iteration counter and give it our desired extension (webp).


Resizing the images and transforming them into webp dramatically reduces the size of the artworks.


For example, when I ran this image preprocessing engine over 99 images totaling 111MB, the output came to 62MB with the .png extension and an astonishing 4.5MB with the .webp extension. That huge size reduction accounts for a big leap in the load time of a minting website built with my images.


Lastly, the block of code above creates JSON metadata for each processed image, bearing a matching name and a URL pointing to the image’s location. This metadata is what we’ll deploy to IPFS after processing the images.


Now, run the command below to have your images transformed. Be sure you are in your project folder.


node imagify.js


At this point, we are done with our image engine. The outputs folder should have the following file structure as the result.


- outputs/
  - images
    - 1.webp
    - 2.webp
    - ......
  - metadata
    - 1.json
    - 2.json
    - ......

Uploading Images and Metadata to IPFS

Status screen of IPFS Desktop


IPFS stands for the InterPlanetary File System; it is peer-to-peer and decentralized. Data stored on IPFS cannot easily be taken down, which makes it a near-perfect companion for blockchain applications that need to store media content.


For the easiest and least confusing route, head to the IPFS Desktop app installation page and follow the instructions specified there.


After the installation is successful, open up the IPFS app and upload the images folder FIRST, and I repeat, FIRST, before the metadata folder.


A unique CID (Content Identifier) string will be generated as part of the folder name; see the image below.

CID

Now, copy the images folder’s CID as seen in the image above, and substitute it into your **imagify.js** code. See the code below.


const base_url = "https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/" //old string
const base_url = "https://ipfs.io/ipfs/QmY1zrFibpdHQ7qcqZqq7THsqTremZYepXNWR5Au3MF1ST/" //new string


Now, run **node imagify.js** again to write the accurate location of each image into your JSON metadata. See an example of the generated JSON metadata before and after the replacement of the CID.


You can watch this video to understand how I used these Images and metadata on a full NFT minting Project.

Before CID Replacement

{
  id: 97,
  name: 'Adulam NFT #97',
  description: 'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  price: 1,
  image: 'https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/97.webp',
  demand: '4',
  attributes: [
    { trait_type: 'Environment', value: 'Forest' },
    { trait_type: 'Weapon', value: 'Craft' },
    { trait_type: 'Rarity', value: 4, max_value: 10 },
    {
      display_type: 'date',
      trait_type: 'Created',
      value: 1664478034024
    },
    { display_type: 'number', trait_type: 'generation', value: 1 }
  ]
}


After CID Replacement

{
  id: 97,
  name: 'Adulam NFT #97',
  description: 'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  price: 1,
  image: 'https://ipfs.io/ipfs/QmY1zrFibpdHQ7qcqZqq7THsqTremZYepXNWR5Au3MF1ST/97.webp',
  demand: '7',
  attributes: [
    { trait_type: 'Environment', value: 'Mountains' },
    { trait_type: 'Weapon', value: 'Club' },
    { trait_type: 'Rarity', value: 2, max_value: 10 },
    {
      display_type: 'date',
      trait_type: 'Created',
      value: 1664478110287
    },
    { display_type: 'number', trait_type: 'generation', value: 1 }
  ]
}


Finally, as shown in the image below, upload the metadata folder to IPFS alongside the images folder.


Uploaded Folders


Fantastic, now let’s pin it on the web for the world to see. Currently, both folders sit only on your local IPFS node (your computer). For them to be accessible worldwide, you need to use a pinning service such as Pinata.

Pinning your folders to IPFS

First, head to the Pinata pin manager and sign up if you haven’t done so before. Then click on the account icon and select API Keys. See the image below.


Pinata API Key


On the keys creation page, click on create a new key and enter the name for the key. Observe the image below.


Creating a new key


Now copy the JWT key to your clipboard. This is what we’ll use to link our IPFS Desktop with our Pinata account. See the image below.


Copying Pinata JWT token


Next, open up your IPFS Desktop application, head to the settings tab, add a new service, select Pinata, and paste your JWT token into the space provided. Refer to the image below.


Setting up a Pinning Service


The last thing to do is to actually pin your folders to Pinata using the instructions below.


Head to the files tab, click on the triple-dot menu, and select set pinning. The dialog shown in the image below will pop up.


Select Pinata and apply; by doing so, your images folder will become accessible globally.

Confirming Global Image Accessibility

Head to this website, copy and paste your CID into the IPFS input field, and click on the cache button. This scans the publicly available IPFS gateways in search of your images. See the image below.


Public Gateway Cacher


The results in the image above show that many IPFS nodes now hold copies of the images folder, keeping it accessible globally even if you delete the original copy on your local node.


With the steps clearly outlined above, also pin the metadata folder to make it publicly available online.


Now you can use any of the links in the image above as the base URL for your ERC721 tokens. See the image below.


Your Images on the IPFS


And there you have it: that is how to prepare and upload your artworks to IPFS.

Conclusion

When building NFT projects, you will regularly need to preprocess and upload artworks to IPFS at batch scale.


Once you understand how to process images and push them to IPFS, you can start using the technique in your own web3 projects.


Till next time, keep crushing it!


See live demo usage on my NFT project on the Goerli testnet here. See Opensea’s location here. You can watch the video version on my YouTube Channel.


About the Author

I am a full-stack blockchain developer with 6+ years of experience in the software development industry.


By combining Software Development, writing, and teaching, I demonstrate how to build decentralized applications on EVM-compatible blockchain networks.


My stacks include JavaScript, React, Vue, Angular, Node, React Native, NextJs, Solidity, and more.



Subscribe to my YouTube channel to learn how to build a Web3 app from scratch. I also offer private and specialized classes for serious folks who want to learn one-on-one from a mentor.


Book your Web3 classes here.
