
How to Auto-Generate Interactive Coding Videos for Software Developers

by Ash Raji, November 25th, 2022

Too Long; Didn't Read

The code editor is a browser-based tool where users simply code as they normally would in editors like VSCode and Atom. In the background, it tracks and stores every action a user performs — writing in files, running commands, and making notes. With the click of a button, the editor generates a playback of the user’s actions, which they can share on the platform for others to view and interact with. The application is structured as two microservices: the main application and a language compilation server.


These past few months, I challenged myself to solve a problem that many software content creators encounter — wanting to create video tutorials without the hassle of video editing.


I released and deployed the project, which you can find here (try it on desktop for the best user experience).

Inspiration

There are two things software engineers constantly do — learn new programming concepts and explain their code to other people.


Stack Overflow is a developer’s best friend due to its convenience — you can view code snippets and apply them directly to your work. However, when a question is complex or esoteric, a few lines of code don’t provide enough detail.


Video content sites like YouTube are a bit better — they provide more context, and they’re engaging to watch. But it isn’t time-efficient for creators to make short content because editing videos takes too long. Therefore, they make longer videos about broader topics to reach wider audiences — a nightmare for more experienced developers.

Lightbulb Moment

The thought arose that there needs to be a tool that provides just the right amount of context for a concept: as detailed as an article, as engaging as a video, but without a huge time cost for the creator.


Feeling energized, I tasked myself with building such a tool — a browser-based IDE where users simply code as they normally would in editors like VSCode and Atom. In the background, it tracks and stores every action a user performs — writing in files, running commands, and making notes.


With the click of a button, the editor generates a playback of the user’s actions, which they can share on the platform for others to view and interact with.

Architecture


The application is structured as two microservices: the main application and a language compilation server.


I went with the classic MERN (MongoDB, Express, React, Node) stack for the main application. A non-relational database like MongoDB was perfect since the type of data stored varies significantly based on the layout of the IDE and the code a user writes. React is my bread and butter, and I am a huge fan of Material UI’s components.
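As a rough illustration of why a schemaless store fits, a post document can hold the IDE layout and the recorded actions side by side. The field names below are hypothetical, not the app’s actual schema; think of it as a minimal Mongoose sketch:

const mongoose = require("mongoose");

// Hypothetical post schema: layout and recorded actions are free-form,
// so Mixed types keep the documents flexible.
const postSchema = new mongoose.Schema({
    userId: String,
    title: String,
    windowGrid: [[mongoose.Schema.Types.Mixed]], // file/note/terminal windows
    actions: [mongoose.Schema.Types.Mixed],      // timestamped user input
    audioUrl: String,                            // S3 location of the narration
});

module.exports = mongoose.model("Post", postSchema);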


The code editor uses CodeMirror for rich syntax highlighting. It also features a notes section with multimedia support, built with Quill, and a terminal UI built with Xterm.js.
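As a sketch of how the terminal window could be wired to the backend (the socket URL and container element are assumptions, not the project’s actual values), Xterm.js forwards keystrokes to the compilation server and prints whatever comes back:

import { Terminal } from "xterm";

// Hypothetical terminal wiring: keystrokes go to the backend pty,
// its output is written back into the terminal UI.
const term = new Terminal({ cursorBlink: true });
term.open(document.getElementById("terminal")); // assumes a #terminal container

const socket = new WebSocket("wss://example.com/terminal"); // placeholder URL
socket.onmessage = (event) => term.write(event.data);
term.onData((data) => socket.send(data));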


The language compilation server consists of a Docker container running a Node server that exposes a pseudo-terminal interface using node-pty. The server is provisioned with a NixOS environment that pre-installs the packages required to compile various languages and frameworks. The IDE currently supports 14 of the most popular programming languages.


The two microservices are deployed as individual nodes in an AWS ECS cluster and communicate via WebSockets.
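A minimal sketch of what the compilation server’s core loop might look like, assuming a ws WebSocket server and a bash shell inside the container (neither is confirmed by the post):

const pty = require("node-pty");
const { WebSocketServer } = require("ws");

// Hypothetical pty bridge: each connection gets its own shell;
// keystrokes flow in over the socket and terminal output flows back out.
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws) => {
    const shell = pty.spawn("bash", [], {
        name: "xterm-color",
        cols: 80,
        rows: 30,
        cwd: process.env.HOME,
        env: process.env,
    });

    shell.onData((data) => ws.send(data)); // terminal output -> client
    ws.on("message", (msg) => shell.write(msg.toString())); // keystrokes -> shell
    ws.on("close", () => shell.kill());
});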


The Magic

The application’s core functionality is the ability to monitor a user’s actions and create a playback as an interactive video. This is accomplished using Redux, setTimeout(), and MediaRecorder.


Redux allows me to persist application state while passing information between React components. The main application consists of two reducers: canvas and playback.
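At the store level, that might look something like this (the reducer file paths are assumptions):

import { combineReducers, createStore } from "redux";
import canvasReducer from "./reducers/canvas"; // IDE layout + user input
import playbackReducer from "./reducers/playback"; // animation timing + audio metadata

// Hypothetical store setup: the two reducers described above, combined.
const rootReducer = combineReducers({
    canvas: canvasReducer,
    playback: playbackReducer,
});

export const store = createStore(rootReducer);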


The canvas reducer is responsible for storing two sets of data. The first is the layout of the IDE, called the windowGrid. This is a 2D array of objects where each object can be a file editor, notepad, or terminal type. These objects are rendered in the UI by mapping over the windowGrid and displaying them as corresponding React elements.


// The layout of the windows in the IDE.
this.windowGrid = [
        [new FileWindow(), new NoteWindow()],
        [new TerminalWindow()],
];

// Map a window object from the windowGrid to its corresponding React component.
const getWindow = (window) => {
        let component;
        switch (window.type) {
            case constants.NOTE:
                component = (
                    <NoteWindow
                        key={window.Id}
                        id={window.Id}
                        width={calculatedColumnWidthPX}
                    />
                );
                break;
            case constants.TERMINAL:
                component = (
                    <TerminalWindow
                        key={window.Id}
                        id={window.Id}
                        height={calculatedRowHeightPX}
                        width={calculatedColumnWidthPX}
                    />
                );
                break;
            case constants.FILE:
                component = (
                    <FileWindow
                        height={calculatedRowHeightPX}
                        key={window.Id}
                        id={window.Id}
                        width={calculatedColumnWidthPX}
                        fileTabWidth={fileTabWidth}
                    />
                );
                break;
            default:
                break;
        }

        return component;
};
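The surrounding render then maps over the grid and lays each row out; a hypothetical version of that loop:

// Hypothetical render loop over the 2D windowGrid.
return (
    <div className="canvas">
        {windowGrid.map((row, rowIndex) => (
            <div className="row" key={rowIndex}>
                {row.map((window) => getWindow(window))}
            </div>
        ))}
    </div>
);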


The second set of data stored by the canvas reducer is user input. I register custom input event listeners on each of the objects in the windowGrid so that an action is dispatched to update the Redux store whenever a user interacts with the IDE.


// Dispatch a Redux action whenever the file editor's contents change.
const fileEditorChangeHandler = (newValue) => {
    props.fileEditorContentChanged({
      windowId: props.id,
      value: newValue,
    });
};
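On the other side of that dispatch, the canvas reducer records the change along with a timestamp so it can be replayed later. The action type and state shape below are assumptions for illustration, not the project’s actual reducer:

// Hypothetical slice of the canvas reducer.
const initialState = { windowGrid: [], actions: [] };

const canvasReducer = (state = initialState, action) => {
    switch (action.type) {
        case "FILE_EDITOR_CONTENT_CHANGED":
            return {
                ...state,
                actions: [
                    ...state.actions,
                    { ...action.payload, timestamp: Date.now() }, // record when it happened
                ],
            };
        default:
            return state;
    }
};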


The playback reducer also keeps track of two sets of data. The first is the speed and position of text as it is rendered during playback. Text is displayed sequentially, in the order it was typed. This is accomplished by keeping a timestamped record of text input and updating the state of the React component inside a setTimeout() call.


const startAnimation = () => {
  if (retrievedDiff) {
    ANIMATION_WINDOW_MAP[props.id] = setTimeout(
        () => displayCharacter(),
         constants.SPEED_SETTING[props.currentSpeed].File
    );
  }
};
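The displayCharacter() callback isn’t shown in the post; a plausible sketch is that it applies the next recorded edit to the editor and schedules itself again until the recording runs out:

// Hypothetical playback step: apply one recorded edit, then re-arm the timer.
let cursor = 0; // index into the recorded sequence of edits

const displayCharacter = () => {
    const nextChange = retrievedDiff[cursor];
    if (!nextChange) {
        return; // nothing left to replay
    }

    appendToEditor(nextChange.value); // assumed helper that updates component state
    cursor += 1;

    ANIMATION_WINDOW_MAP[props.id] = setTimeout(
        () => displayCharacter(),
        constants.SPEED_SETTING[props.currentSpeed].File
    );
};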


The second set of information stored by the playback reducer is audio metadata. With the help of MediaRecorder, a user can overlay audio onto a post. Audio data is saved as chunks, stitched into a Blob object, and converted into an audio element when the IDE component mounts. I use AWS S3 to store the audio files.


// Set up audio capture with MediaRecorder and buffer data chunks as they arrive.
initRecorder = async () => {
        // Fallback assignment for older browsers that only expose prefixed getUserMedia.
        navigator.getUserMedia =
            navigator.getUserMedia ||
            navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia ||
            navigator.msGetUserMedia;

        if (navigator.mediaDevices) {
            // Ask the browser for microphone access.
            const s = await navigator.mediaDevices.getUserMedia({
                audio: true,
            });
            this.mediaRecorder = new MediaRecorder(s);
            this.chunks = [];
            // Buffer each chunk of recorded audio as it becomes available.
            this.mediaRecorder.ondataavailable = (e) => {
                if (e.data && e.data.size > 0) {
                    this.chunks.push(e.data);
                }
            };
            this.stream = s;
            // Release the stream reference once recording stops.
            this.mediaRecorder.addEventListener("stop", () => {
                this.setStream(null);
            });
        }
};

// Upload the recorded audio blob to S3 under the user's folder.
export const uploadAudioToS3 = async (
    postId,
    userId,
    blob,
    redirect = false
) => {
    s3.upload(
        {
            Bucket: "test",
            Body: blob,
            Key: `${userId}/${postId}.mp3`,
            ContentType: "audio/mp3",
        },
        function (err, data) {
            if (err) {
                console.log(err);
            } else if (redirect) {
                // reload and redirect to explore page.
                window.location.href = "/explore";
            }
        }
    );
};
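When a post with narration is loaded, the recorded chunks are stitched back into a playable element. A minimal sketch of that step, in the same style as the recorder above:

createAudioElement = () => {
        // Stitch the buffered chunks into a single Blob and wrap it in an <audio> element.
        const blob = new Blob(this.chunks, { type: "audio/mp3" });
        this.audioElement = new Audio(URL.createObjectURL(blob));
};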


Final Remarks



We’ve become accustomed to being told what a concept is and then being expected to apply that knowledge to our work. However, putting beginners in the perspective of an experienced programmer facilitates a bottom-up approach to teaching, where viewers learn how and why a concept works the way it does.


Why this approach has not been widely adopted can most likely be attributed to both inertia and unrealized potential. The “memorize and adapt” teaching style has been predominant for so long mainly because of its logistical simplicity. And since it has been just functional enough to produce some number of knowledgeable learners, there has not been a significant market push to fundamentally change the status quo.


However, in recent years, organizations like Khan Academy, along with a growing body of research in the area, have shown that a bottom-up approach to teaching is far more effective and reaps much greater long-term gains in education.


Want to connect? Join the Discord community here to provide feedback on the project and follow along with the development process.


Also published here.