As we enter the era of AI, everything seems to be accelerating at an unprecedented speed. Generative this, generative that: data overload everywhere. I'm pretty sure you're already mentally fatigued from all the information, so let's cut to the chase, shall we?

What I wanna do:

YouTube Video → Bite-sized Information → Learning ✨ + Sharing on Social Media 🐦

… That's it …

In this AI race, everything is about speed. If we can leverage AI to accelerate our learning, why not? So, share this with your family, friends, or followers, or even with your audience if you're an influencer.

The Outcomes

Repo: https://github.com/WorkSmarter-lol/yt2tweets-cli/

Disclaimer: This tutorial only works for YouTube videos with English subtitles and is not applicable to YouTube Shorts.

The Process

0. Prerequisite Knowledge

Before getting started, it helps to have a basic understanding of how LangChain.js, prompt engineering, and OpenAI models work.

1. Prerequisite Tools

- OpenAI API key (we're using GPT-4o-mini)
- youtube-transcript (NPM package): extracts the transcript from a YouTube video
- Node.js
- Yarn / NPM
- LangChain.js

In my case, I'm using Yarn for development. To test the idea quickly, I built a quick MVP as a CLI.

2. Choosing a Name for the CLI

For quick reference and a catchy name, I called it "yt2tweets", which simply means "YouTube to Tweets".

3. Desired Output

```sh
$ yt2tweets "https://youtu.be/1-TZqOsVCNM"

# Result:
# Tweet 1: Introduction ... 🧵👇 (1/X)
# Tweet 2: ... 🧵 (2/X)
# Tweet 3: ... 🧵 (3/X)
# Tweet 4: ... 🧵 (4/X)
# Tweet 5: Conclusion ... 🧵 (5/X)
```

We need to provide the transcript as context to the AI model (GPT-4o-mini in our case) so it can understand the video and summarize it in the output format we specify.

4. Prompt Design

Here lies the secret sauce ✨ for making things work: prompt engineering is the core skill that gets the job done.

To customize how the CLI converts YouTube videos into Twitter/X threads, follow the simple 3-step setup I've defined. You can adjust the tone, length, and style to fit your needs. The prompt is built from three blocks:

Identity and purpose
Set the AI's role and goals with the Identity and Purpose block. Define its function and objectives so it generates content that aligns with your needs and desired outcomes.

Steps
Define the step-by-step actions for the AI to follow, ensuring a clear and structured approach for generating your content.

Output formats
Specify the formats in which the AI should deliver the content.

Here is the template:

```js
import { ChatPromptTemplate } from '@langchain/core/prompts';

const prompt = ChatPromptTemplate.fromMessages([
  {
    role: 'system',
    content: `
# IDENTITY AND PURPOSE

{identity}

# STEPS

{steps}

# OUTPUT INSTRUCTIONS

{formats}

# INPUT

INPUT: {input}
`,
  },
]);
```

The {input} placeholder is where I put the whole transcript for GPT to do the summarisation.

Reference for how I added my prompt can be found in the repository, with an example below.
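To give a feel for the three blocks, here is a hypothetical set of values for those template variables. This is an illustration only; the exact wording I use lives in the repository, so adjust it to your own taste:

```js
// Hypothetical values for illustration only; the actual prompt text lives in the repo.
const identity = `You are an expert content summarizer. You turn a YouTube video transcript
into an engaging Twitter/X thread that captures the key ideas of the video.`;

const steps = `1. Read the entire transcript and identify the main topic and key takeaways.
2. Draft a hook tweet that introduces the topic.
3. Write one tweet per key takeaway, in the order they appear in the video.
4. End with a short conclusion tweet.`;

const formats = `- Output 5 to 8 tweets, each under 280 characters.
- Label each tweet as "Tweet N:" and number the thread as (N/X).
- Use the 🧵 emoji in every tweet and 👇 in the first one.`;
```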
5. Putting It All Together

Finally, in order for it to run, you need to have @langchain/openai installed and your OpenAI API key ready. Once everything is sorted out, you can instantiate the model and start passing the prompt and the transcript to the AI for a response.

```js
import { ChatOpenAI } from '@langchain/openai';

// Instantiate the model
const llm = new ChatOpenAI({
  modelName: 'gpt-4o-mini',
  temperature: 0.7, // <-- feel free to adjust the temperature here
  apiKey,
});

// ...
// add prompts here
// ...

// Pipe the prompt into the model and pass in the template variables
const result = await prompt.pipe(llm).invoke({
  identity,
  steps,
  formats,
  input,
});

// Print the result
console.log(result?.content);
// Tweet 1: Introduction ... 🧵👇 (1/X)
// ...
```
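The `input` variable above is the raw transcript text. A minimal sketch of how it could be fetched with the youtube-transcript package might look like the following; the `getTranscript` helper name is my own, and I'm assuming the package's `fetchTranscript` API, so double-check against the full code in the repo:

```js
import { YoutubeTranscript } from 'youtube-transcript';

// Hypothetical helper: fetch the transcript segments and join them into one string
async function getTranscript(url) {
  const segments = await YoutubeTranscript.fetchTranscript(url);
  return segments.map(segment => segment.text).join(' ');
}

const input = await getTranscript('https://youtu.be/1-TZqOsVCNM');
```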
6. Wrap It Up as a CLI

For convenience, I wrapped the function in a CLI so it will be easy for me to use in the future. To achieve that, I used:

- Commander: enables the CLI via the NPM package bin
- Ora: elegant terminal spinner
- Chalk: terminal string styling

A snippet of the code is provided below (full code at the end):

```js
import { Command } from 'commander';
import chalk from 'chalk';
import ora from 'ora';

const spinner = ora('Loading...');

// Initialize the command line interface
const program = new Command();

// Command to convert a YouTube URL
program
  .argument('<url>')
  .description('Turn YouTube Videos into Twitter Threads with AI')
  .action(async url => {
    const apiKey = readApiKey(); // Read the saved API key
    // ...
    spinner.start();
    await convertYt2Tweets(url, apiKey);
    // ...
    spinner.succeed(chalk.green('Done!'));
  });

// Parse the arguments so the action actually runs
program.parse(process.argv);
```
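The snippet calls a `readApiKey()` helper that isn't shown here. A minimal sketch, assuming the key is persisted to a dotfile in the user's home directory (the actual storage mechanism in the repo may differ):

```js
import fs from 'fs';
import os from 'os';
import path from 'path';

// Hypothetical helper: read the OpenAI API key saved by an earlier setup step.
// The file location is an assumption for illustration.
function readApiKey() {
  const configPath = path.join(os.homedir(), '.yt2tweets');
  if (!fs.existsSync(configPath)) {
    throw new Error('No API key found. Please save your OpenAI API key first.');
  }
  return fs.readFileSync(configPath, 'utf8').trim();
}
```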
Repository Link (Full Code)

https://github.com/WorkSmarter-lol/yt2tweets-cli

Conclusion

Again, I hope this project helps you speed up your learning, digest YouTube content faster, and share it with your friends, family, and followers.

If you prefer a ready-made UI instead of the CLI, I built a user interface for the same project. You can find it here:

https://yt2tweets.worksmarter.lol