These past few months, I challenged myself to solve a problem that many software content creators encounter — wanting to create video tutorials without the hassle of video editing.
I released and deployed the project, which you can find here.
There are two things software engineers constantly do — learn new programming concepts and explain their code to other people.
Stack Overflow is a developer’s best friend due to its convenience — you can view code snippets and apply them directly to your work. However, when a question is complex or esoteric, more detail is required than a few lines of code.
Video content sites like YouTube are a bit better — they provide more context and they’re engaging to watch. But making short content isn’t time-efficient for creators because editing videos takes so long. As a result, they make longer videos about broader topics to reach wider audiences — a nightmare for more experienced developers.
The thought arose that there needs to be a tool that provides just the right amount of context for a concept — one as detailed as an article and as engaging as a video, but without the huge time cost for the creator.
Feeling energized, I tasked myself with building such a thing — a browser-based IDE where users simply code as they normally would in editors like VSCode and Atom. In the background, it tracks and stores every action a user performs — writing in files, running commands, and making notes.
With the click of a button, the editor generates a playback of the user’s actions which they can share on the platform for others to view and interact with.
The application is structured as two micro-services: the main application and a language compilation server.
I went with the classic MERN (MongoDB, Express, React, Node.js) stack for the main application.
The code editor uses
The language compilation server encompasses a
The two micro-services are deployed as individual nodes in an
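To make the two-service split concrete, here is a hypothetical sketch of how the main application could hand code off to the compilation server. The route, host, port, and payload shape are all my own illustrations and are not taken from the project:

// Hypothetical Express route in the main application that forwards a
// user's code to the compilation micro-service and relays the result.
// Endpoint names, ports, and payload shape are illustrative.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/run", async (req, res) => {
  // Uses the global fetch available in Node 18+.
  const response = await fetch("http://compilation-server:4000/compile", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ language: req.body.language, code: req.body.code }),
  });
  res.json(await response.json());
});

app.listen(3000);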
The application’s core functionality is the ability to monitor a user’s actions and create a playback as an interactive video. This is accomplished using Redux.
Redux allows me to persist application state while passing information between React elements. The main application consists of two reducers: a canvas reducer and a playback reducer.
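Under some assumptions about naming (canvasReducer and playbackReducer are illustrative, not the project’s actual identifiers), the store wiring would look roughly like this:

// A minimal sketch of the store setup with the two reducers combined.
import { combineReducers, createStore } from "redux";
import canvasReducer from "./reducers/canvas";
import playbackReducer from "./reducers/playback";

const rootReducer = combineReducers({
  canvas: canvasReducer,     // IDE layout and user input
  playback: playbackReducer, // playback timing and audio metadata
});

export const store = createStore(rootReducer);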
The canvas reducer is responsible for storing two sets of data. The first is the layout of the IDE, called the windowGrid. This is a 2D array of objects where each object can be a file editor, notepad, or terminal type. These objects are rendered in the UI by mapping over the windowGrid and displaying the objects as corresponding React elements.
// The layout of the windows in the IDE.
this.windowGrid = [
  [new FileWindow(), new NoteWindow()],
  [new TerminalWindow()],
];
// Maps a window object from the windowGrid to its corresponding React element.
const getWindow = (window) => {
  let component;
  switch (window.type) {
    case constants.NOTE:
      component = (
        <NoteWindow
          key={window.Id}
          id={window.Id}
          width={calculatedColumnWidthPX}
        />
      );
      break;
    case constants.TERMINAL:
      component = (
        <TerminalWindow
          key={window.Id}
          id={window.Id}
          height={calculatedRowHeightPX}
          width={calculatedColumnWidthPX}
        />
      );
      break;
    case constants.FILE:
      component = (
        <FileWindow
          height={calculatedRowHeightPX}
          key={window.Id}
          id={window.Id}
          width={calculatedColumnWidthPX}
          fileTabWidth={fileTabWidth}
        />
      );
      break;
    default:
      break;
  }
  return component;
};
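Putting the two together, rendering reduces to a map over the windowGrid. The wrapping markup below is illustrative:

// Each row of the windowGrid becomes a row of React elements via getWindow().
const grid = this.windowGrid.map((row, rowIndex) => (
  <div className="window-row" key={rowIndex}>
    {row.map((window) => getWindow(window))}
  </div>
));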
The second set of data stored by the canvas reducer is user input. I register custom input event listeners on each of the objects in the windowGrid so that an action is dispatched to update the Redux store whenever a user interacts with the IDE.
// Dispatches an action whenever the user edits a file, recording which
// window changed and its new contents.
const fileEditorChangeHandler = (newValue) => {
  props.fileEditorContentChanged({
    windowId: props.id,
    value: newValue,
  });
};
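On the other side, the canvas reducer folds that action into the store. Here is a sketch of what the handler might look like; the action type string, payload-style action shape, and state shape are my assumptions, not taken from the project:

// Hypothetical canvas reducer case for file edits.
const canvasReducer = (state = { windows: {} }, action) => {
  switch (action.type) {
    case "FILE_EDITOR_CONTENT_CHANGED":
      return {
        ...state,
        windows: {
          ...state.windows,
          [action.payload.windowId]: {
            ...state.windows[action.payload.windowId],
            value: action.payload.value,
            // Timestamp each edit so playback can replay it in order.
            timestamp: Date.now(),
          },
        },
      };
    default:
      return state;
  }
};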
The playback reducer also keeps track of two sets of data. The first is the speed and position of text as it is rendered during playback. Text is displayed sequentially, exactly as it was typed. This is accomplished by keeping a timestamped record of text input and updating the state of the React component inside a setTimeout() call.
// Schedules the next character to be displayed, at an interval determined
// by the playback speed the viewer selected.
const startAnimation = () => {
  if (retrievedDiff) {
    ANIMATION_WINDOW_MAP[props.id] = setTimeout(
      () => displayCharacter(),
      constants.SPEED_SETTING[props.currentSpeed].File
    );
  }
};
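For context, displayCharacter() might look like the sketch below. It assumes retrievedDiff holds the recorded sequence of characters, setRenderedText is a useState setter, and a ref tracks the cursor; none of these names are from the project:

// Hypothetical sketch of displayCharacter(): reveal one recorded character,
// then re-schedule itself until the recorded diff is exhausted.
const charIndex = useRef(0); // survives re-renders

const displayCharacter = () => {
  if (charIndex.current >= retrievedDiff.length) return; // playback finished
  // Append the next character to the rendered text.
  setRenderedText((text) => text + retrievedDiff[charIndex.current]);
  charIndex.current += 1;
  // Re-schedule at the viewer's selected playback speed.
  ANIMATION_WINDOW_MAP[props.id] = setTimeout(
    () => displayCharacter(),
    constants.SPEED_SETTING[props.currentSpeed].File
  );
};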
The second set of information stored by the playback reducer is audio metadata. With the help of MediaRecorder, a user can overlay audio onto a post. Audio data is saved as chunks, stitched into a single blob, and uploaded to S3.
// Requests microphone access and wires up a MediaRecorder that collects
// audio data into chunks as it becomes available.
initRecorder = async () => {
  // Legacy fallbacks for older browsers.
  navigator.getUserMedia =
    navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia ||
    navigator.msGetUserMedia;

  if (navigator.mediaDevices) {
    const s = await navigator.mediaDevices.getUserMedia({
      audio: true,
    });
    this.mediaRecorder = new MediaRecorder(s);
    this.chunks = [];
    // Collect audio data chunks as the recorder produces them.
    this.mediaRecorder.ondataavailable = (e) => {
      if (e.data && e.data.size > 0) {
        this.chunks.push(e.data);
      }
    };
    this.stream = s;
    // Release the stream reference once recording stops.
    this.mediaRecorder.addEventListener("stop", () => {
      this.setStream(null);
    });
  }
};
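The final chunk is only flushed through ondataavailable after the recorder stops, so the blob has to be assembled inside the "stop" handler before being passed to the upload helper below. A minimal sketch of that hand-off, with the postId/userId plumbing assumed:

// Stitch the recorded chunks into a single blob once recording stops.
this.mediaRecorder.addEventListener("stop", () => {
  const blob = new Blob(this.chunks, { type: "audio/mp3" });
  uploadAudioToS3(this.postId, this.userId, blob, true);
});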
// Uploads the recorded audio blob to S3, keyed by user and post ID.
export const uploadAudioToS3 = async (
  postId,
  userId,
  blob,
  redirect = false
) => {
  s3.upload(
    {
      Bucket: "test",
      Body: blob,
      Key: `${userId}/${postId}.mp3`,
      ContentType: "audio/mp3",
    },
    function (err, data) {
      if (err) {
        console.log(err);
      } else if (redirect) {
        // Reload and redirect to the explore page.
        window.location.href = "/explore";
      }
    }
  );
};
We’ve become accustomed to being told what a concept is and then being expected to apply that knowledge to our work. However, putting beginners in the perspective of an experienced programmer facilitates a bottom-up approach to teaching, where viewers learn how and why a concept works the way it does.
Why this approach has not been widely adopted can most likely be attributed to both inertia and unrealized potential. The “memorize and adapt” teaching style has been predominant for so long mainly because of its logistical simplicity. And since it has been just functional enough to produce some number of knowledgeable learners, there has been no significant market push to fundamentally change the status quo.
However, in recent years, companies like
Want to Connect? Join the Discord community here to provide feedback on the project and follow along with the development process.