I recently did something that was rather surprising to me. I ended up writing a book without even realizing it.
In this post, I will discuss how my book/interactive course, Coding for Visual Learners, came to be, and what writing a book looks like as a programmer from a technical point of view. I will also talk a bit about creating online courses as it was the starting point of my writing journey. If you are short on time, you can skip to the third part of this article where I talk about what kind of tooling I ended up using for writing and publishing a book or check out the tl;dr below. But remember, sometimes the journey is more rewarding than the destination.
tl;dr: Small daily writing sessions add up to a book; pick tools that keep friction low, but don’t obsess over finding the “right” ones; keep an offline copy of your work; treat multiple output formats as an automation problem and script your way through it; and do things that don’t scale.
Let’s start with a proper introduction. My name is Engin and I am a software developer, an educator, and now an author. I have been teaching an Introduction to Python course at a college for more than two years now, and a couple of months ago I started creating online courses/video tutorials on the side as well. I usually find myself scripting the entire course before recording to get a more predictable and repeatable outcome. In other words, I am not great at improvisation. These scripts are usually very comprehensive and contain virtually every single word that I plan to say during recording. I published my first online course last year on Pluralsight, Automating the Web using PhantomJS and CasperJS. You should check it out if you haven’t done so yet. It’s okay, I will wait for you to come back.
This time I wanted to create another online course, but I wanted to do everything by myself to get a more end-to-end experience of course creation. I have been working for someone else my entire life and never really handled any transaction all by myself. I figured creating a course could be a good exercise in experimenting with the business side of things, as it would require me to handle marketing and distribution as well. In my preliminary research, I identified a market gap in visual ways of learning to code and chose to use the excellent JavaScript library p5.js to teach programming to beginners. As a self-taught programmer, I clearly remember the struggles that new programmers face. I set out to create the kind of course that would have helped me when I was starting out.
To write the scripts I read during recording, I have been using a service called Workflowy. It is essentially a list creator that lets you build deeply nested, collapsible lists. I found that this kind of interface maps well to how I structure my thinking: it allows me to create a new list item for every couple of sentences, which makes reorganizing the text as easy as dragging and dropping, while collapsing lists helps with hiding and managing complexity.
Workflowy Interface
Workflowy mapped really well to how I did my initial recording as well. I was recording only a couple of sentences at a time since it felt like recording continuously for a long time would increase the chance of errors in my speech. #ESL
Learning One: I later found out that this is not the greatest way to approach recording. This level of granularity increases your editing and recording time a lot. An audio engineer recommended that I record in batches (even trying to do the entire thing in one go) and edit/patch it at a later time.
Workflowy is a great tool, and I have been using it for many years for other productivity-related purposes as well. It makes writing and note-taking easy and friction-free; however, it is not great at showing code snippets, images, or anything other than plain text. A couple of months ago, I came across Notion and was super impressed by its functionality. Like Workflowy, it can create infinitely nested lists, but as far as organization/productivity tools go, it can also handle pretty much anything else (checklists, formatted code snippets, Markdown documents, etc.). I decided to migrate my work to Notion and continue writing there. Its polished interface made me a bit more proud of my work in progress and seeded the idea that this body of text could be more than what it was intended to be. With features like code highlighting and image importing, Notion made me see what my work could look like on paper (digital paper, that is). Aesthetics do matter.
At this point, I noticed I had written more than 150 pages. Using list items kept my approach very iterative, and their easily collapsible nature hid my own progress from me. I had written a book without even realizing it. So that’s how you write a book. One sentence at a time, my friend. One sentence at a time.
Learning Two: Little things add up. I was only writing a couple of talking points every day, but over time they gave me enough material to create a book. Having systems that allowed me to work in a consistent manner really helped me reach a remarkable end result. The choice of medium helped as well: lists kept things atomic, and their collapsible nature hid the growing body of work from me, preventing the anxiety that leads to self-sabotage.
This is where things started to get a bit technical, since I had a problem at hand. A book requires continuous text, whereas what I had was a huge bulleted list of sentences. I needed to keep both formats for different reasons: the list was helping me with my approach to recording online courses at the time, but it was problematic for producing a continuous body of text. Enter coding.
I had never really understood why programming might be needed for self-publishing a book. But it was starting to dawn on me.
Notion had an option to export my script as a Markdown file. I needed to convert this Markdown file, which was essentially a huge nested list, into a flat body of text. For this purpose, I used a Node.js Markdown library. That worked well, but the remaining problem was that I now had an HTML document containing one huge unordered list. Still a long haul from being usable.
At this point, I started to use another Node.js library called Cheerio to manipulate the resulting HTML into the desired form. Cheerio is a library that offers a jQuery-like syntax for manipulating hypertext on the server side. Even though Cheerio was helpful up to a point, it still didn’t get me all the way there.
As a result, I decided to add style hints to my text: a set of tags that guide Cheerio toward the correct manipulations. I would add a {{ h1 }} tag to indicate that a list item should become an h1 title, or a {{ + }} symbol in front of a sentence to indicate that the item should be merged with the previous one to form a paragraph.
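To make the hint processing concrete, here is a minimal, library-free sketch of the transformation. The real pipeline ran Cheerio over the exported HTML; in this sketch the list items are modeled as plain strings, and the function name is illustrative, not taken from the original scripts.

```javascript
// Flatten a list of sentences into HTML blocks, honoring the style hints:
// {{ h1 }} marks a chapter title, {{ + }} merges into the previous paragraph.
function flattenItems(items) {
  const blocks = [];
  items.forEach((item) => {
    if (item.startsWith('{{ h1 }}')) {
      // This list item is a chapter title.
      blocks.push(`<h1>${item.replace('{{ h1 }}', '').trim()}</h1>`);
    } else if (item.startsWith('{{ + }}') && blocks.length > 0 &&
               blocks[blocks.length - 1].endsWith('</p>')) {
      // Merge this sentence into the previous paragraph.
      const sentence = item.replace('{{ + }}', '').trim();
      blocks[blocks.length - 1] =
        blocks[blocks.length - 1].replace('</p>', ` ${sentence}</p>`);
    } else {
      blocks.push(`<p>${item}</p>`);
    }
  });
  return blocks.join('\n');
}

console.log(flattenItems([
  '{{ h1 }} Drawing Shapes',
  'p5.js makes drawing easy.',
  '{{ + }} A circle is a single call.',
]));
// → <h1>Drawing Shapes</h1>
// → <p>p5.js makes drawing easy. A circle is a single call.</p>
```

The same idea extends to any number of hints; each hint is just a prefix that tells the script which structural decision to make.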
Around this time, though, something terrible happened. Notion rolled out a set of poorly communicated updates that blocked me from working on my text for several consecutive days. #firstworldproblems At the time, Notion was a relatively new service, so their desktop application was not great at caching the online data either. I found myself unable to work during my productive hours because a cloud service was unavailable. This was a hard lesson in not putting too much faith in cloud services. Notion failed me several times over a couple of consecutive days, making me realize that I needed to find a new home for my writing.
Now, this is not to say Notion is a bad service. It is actually an amazing product. But I made the mistake of relying on it too heavily at an early stage of their product development, when things were still very much in flux and their offline application wasn’t great at syncing with the online platform.
Learning Three: You need a way of working offline in case things become inaccessible. You can’t assume the cloud will always be there.
I started to look for alternatives that could take over my workflow and quickly settled on StackEdit.io. It lets you write Markdown in the browser using the local cache, and it can sync with Google Drive easily as well. This is around the time I also realized that my previous way of doing recordings was very inefficient: recording (and writing) a script only a few sentences at a time is too granular and slows you down when it comes to editing. This prompted me to transfer all my writing into StackEdit.io as flat Markdown text, completely getting rid of the list structure.
Learning Four: There are no right or wrong tools when you are just starting out. The concepts of right, wrong, and best live in the realm of clickbait titles on the internet. If I had obsessed over my tooling from the get-go, I wouldn’t have been able to do any meaningful work. You will get things wrong, and that’s alright. It is part of the journey and the learning process.
At this point I had all my writing in StackEdit, and I got rid of all my previously added style hints since I could now leverage plain Markdown syntax. StackEdit automatically syncs with Google Drive, so I also had a way of fetching the files I was working on for offline processing. I finally had a reliable foundation to build a pipeline on. This is where coding came into play in full force.
I wanted to be able to create multiple outputs from my Markdown source file. I was already working on more in-depth sections of the book, which I started to consider offering as premium content. I wanted to create an ebook out of the material, but I also wanted to display the same material on my Jekyll website. This already implied four different variations of the source text: online-free, online-paid, offline-free, and offline-paid. Another difference was that the online version could use video (gif) files, whereas the same visuals needed to be static images in the offline version.
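For concreteness, the four variants can be modeled as a small configuration table. The names and flags below are illustrative, not taken from the original scripts; each entry would feed the template data used later in the pipeline.

```javascript
// Illustrative model of the four output variants of the source text.
const TARGETS = [
  { name: 'online-free',  online: true,  paid: false },
  { name: 'online-paid',  online: true,  paid: true  },
  { name: 'offline-free', online: false, paid: false },
  { name: 'offline-paid', online: false, paid: true  },
];

console.log(TARGETS.map((t) => t.name).join(', '));
// → online-free, online-paid, offline-free, offline-paid
```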
Now I was starting to understand how programming could help with my process. I had a single authoritative source and multiple output formats to manage, and this started to look more and more like an automation problem.
I started to build Node.js scripts to establish my workflow. I was already syncing my Google Drive to a folder on my disk, so I wrote a script that would fetch the desired files from that folder and place them in a staging area.
'use strict';

const fs = require('fs');

const SOURCE_DIR = '/Users/username/Google Drive';
const TARGET_DIR = './staging';

// Remove the existing target content synchronously.
let targetContent = fs.readdirSync(TARGET_DIR);
targetContent.forEach((file) => {
  let targetFile = `${TARGET_DIR}/${file}`;
  let isFile = fs.lstatSync(targetFile).isFile();
  if (isFile) {
    fs.unlinkSync(targetFile);
  }
});

// Copy the relevant source files into the staging area.
fs.readdir(SOURCE_DIR, (err, files) => {
  if (err) throw err;
  files.forEach((file) => {
    if (file.startsWith('p5js-') && !file.endsWith('.gdoc')) {
      fs.createReadStream(`${SOURCE_DIR}/${file}`)
        .pipe(fs.createWriteStream(`${TARGET_DIR}/${file}.md`));
    }
  });
});
I was hosting the images referenced inside the Markdown file on Imgur. I built yet another script that would download these images into the staging folder; I needed them locally to compile an ebook.
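The download script itself isn’t shown in the post, but its first half is easy to sketch: pull the image links out of the Markdown source, then stream each one to disk (for example with https.get piped into a write stream). The regex and function name below are assumptions, not the original code.

```javascript
// Extract every remote image URL referenced in a Markdown document.
// Each URL would then be fetched over HTTP into the staging folder.
function extractImageUrls(markdown) {
  const urls = [];
  const pattern = /!\[[^\]]*\]\((https?:\/\/[^)]+)\)/g;
  let match;
  while ((match = pattern.exec(markdown)) !== null) {
    urls.push(match[1]);
  }
  return urls;
}

console.log(extractImageUrls('Intro ![01-01](http://i.imgur.com/abc.jpg) text'));
// → [ 'http://i.imgur.com/abc.jpg' ]
```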
Then I created another script that would pre-process the files in the staging area and move them to their corresponding folders, determined by the target destination. To render different content between the online and offline versions, as well as the paid and free versions, I decided to use the Nunjucks templating language. This solved two main use cases for me:
Using Nunjucks, I was able to conditionally render content based on the targeted output format. For example, a gif in my document might be represented like this:
{% if online %}
![01-01](http://i.imgur.com/<id>)
{% else %}
![01-01](images/01-01.jpg)
{% endif %}
With this if-else statement in place, I can set a variable inside my preprocessing script to decide which branch executes, and use this data to render the Nunjucks template.
const nj = require('nunjucks');
// `content` holds the Markdown source read from the staging area.
const data = { online: true };
let renderedContent = nj.renderString(content, data);
Nunjucks also allowed me to create variables. I could use a variable like `{{ item }}` in my Markdown text, which renders to a value of `book` or `course` depending on the destination I am targeting. (I ended up creating an interactive course from the book material, where I needed to refer to the material as a ‘course’; more on that a bit later.)
Using Nunjucks variables in Markdown.
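To make the substitution concrete, here is a tiny stand-in renderer that only handles {{ name }}-style variables. The real pipeline of course used Nunjucks itself; leaving unknown variables untouched, as this sketch does, is a simplification.

```javascript
// Minimal stand-in for Nunjucks variable substitution: replace every
// {{ name }} occurrence with the matching value from the data object.
function renderVariables(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (whole, name) =>
    Object.prototype.hasOwnProperty.call(data, name) ? data[name] : whole);
}

console.log(renderVariables('Welcome to the {{ item }}!', { item: 'book' }));
// → Welcome to the book!
console.log(renderVariables('Welcome to the {{ item }}!', { item: 'course' }));
// → Welcome to the course!
```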
Using the pre-processing script, I was also able to manipulate the front matter in the original Markdown file with a Node.js library called front-matter. This was needed because one of my target output formats was a .md file for Jekyll, and I wanted to automatically add some extra front matter attributes for Jekyll to parse.
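The post doesn’t show that step; a simplified, library-free sketch of the idea could look like the following. The real pipeline used the front-matter npm library, and `layout: post` is just a hypothetical attribute one might add for Jekyll.

```javascript
// Append extra attributes to a document's YAML front matter block.
// The block is assumed to be delimited by the usual `---` fences.
function addFrontMatter(markdown, extraLines) {
  const parts = markdown.split('---\n');
  // parts[0] is the text before the first fence (empty for valid front
  // matter), parts[1] is the YAML block, the rest is the document body.
  const yaml = parts[1].trimEnd();
  const body = parts.slice(2).join('---\n');
  return `---\n${yaml}\n${extraLines.join('\n')}\n---\n${body}`;
}

const source = '---\ntitle: Chapter One\n---\nBody text.';
console.log(addFrontMatter(source, ['layout: post']));
```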
This all might sound terribly over-engineered and unnecessary, and it very well might be. But one thing I was happy about in this process was that, even though I yielded to the developer inclination of tooling up in the face of the slightest problem, I didn’t get overly obsessed with building generic, scalable solutions. All this code I wrote is actually pretty ugly and embarrassing to share here, but the point is to move fast and build automated solutions in a way that doesn’t waste your time. The primary objective is not building systems to deliver your content; it is delivering the content, however possible. You should be doing things that don’t scale.
Learning Five: Definitely read Paul Graham’s essay on doing things that don’t scale. Things that are not efficient can give you massive leverage in the short run. If you get bogged down by scalability concerns when you are only starting out, you might miss out on opportunities for growth and on sources of motivation, such as the sense of delivering value to people, and that can impede or even ruin your progress.
One technical decision I regret is using Promises for my file system operations. I think I was trying to prove to myself that I was comfortable using them, but they were complete overkill for my circumstances, as I didn’t have any performance concerns. The excessive use of Promises started to take a mental toll at a time when I just wanted to move fast, since they are not as straightforward as synchronous operations would have been. Game developer Jonathan Blow has a great talk on optimizing for cognitive load when developing personal projects. Granted, my project is nowhere near the scale of anything he works on, but if you are building something that just needs to work, make sure it stays as simple and usable as possible. Don’t try to be clever, because most days you will be dumber than your smartest self.
I also created a post-processing file that I would run for specific purposes. For example, when sending the document to a copy editor (I worked with a freelancer on Fiverr), I didn’t need the code snippets in there, as they were ballooning my word count and hence affecting the pricing. Having an automated workflow allowed me to remove them easily. There was also an instance where I faced the opposite problem and needed only the code snippets. This was again solved quickly by using the post-processing file to selectively remove the target elements (anything that is not a code snippet) from the files.
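The exact mechanism in the original scripts isn’t shown; a regex-based version is one simple way to do the copy-editor variant, stripping fenced code blocks so only prose is sent out. The function name is illustrative.

```javascript
// Remove fenced code blocks from Markdown so only prose remains.
// The fence string is built at runtime so this example doesn't have to
// nest literal backtick fences inside its own code block.
const FENCE = String.fromCharCode(96).repeat(3); // i.e. a triple backtick
const CODE_BLOCK = new RegExp(FENCE + '[\\s\\S]*?' + FENCE, 'g');

function stripCodeBlocks(markdown) {
  // Drop the code blocks, then collapse the leftover blank-line runs.
  return markdown.replace(CODE_BLOCK, '').replace(/\n{3,}/g, '\n\n');
}

const doc = ['Intro.', FENCE + 'js', 'let x = 1;', FENCE, 'Outro.'].join('\n');
console.log(stripCodeBlocks(doc));
// → Intro.
// →
// → Outro.
```

The opposite filter (keeping only the code blocks) falls out of the same regex by collecting the matches instead of deleting them.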
I ended up publishing my work on three different platforms. I first published the book on Leanpub; having already established a pipeline, it was very easy to plug my work into their GitHub integration. After publishing on Leanpub, I exported an unbranded version of the book using their tools and placed it on the Amazon Kindle Store. Zapier has an amazing blog post on self-publishing platforms that is a must-read for anyone interested in this space.
The most amazing discovery for me was coming across Educative.io through a post on Hacker News. Educative.io is an online course creation site where you can build interactive courses using blocks that let you embed executable code snippets (among many other things) inside a document, resulting in something similar to Jupyter notebooks. Transferring my source text to their platform was a breeze, since it uses the Markdown format as well.
I am not claiming that my workflow was perfect. One big shortcoming was that gathering online feedback on my text from other people was pretty hard. For that use case, working in Google Docs would have been much more useful. But unfortunately, Google Docs doesn’t offer a great way of working with Markdown files.
Also, using Nunjucks templating in your source text introduces a little overhead: you can’t just copy-paste text, you need to process (compile) it first. But considering the efficiencies gained, I find this a reasonable tradeoff.
This summarizes my journey. If there is a final lesson to be derived here, I think it is not to be overly obsessed with tools and best practices from the get-go, and to just start creating things. It is the content and the product that really matter; all other concerns are secondary, at least initially. Don’t let yourself get slowed down by choices. You probably don’t know enough at the beginning to inform your decisions, so it is important to get started and iteratively adjust to your emerging needs.
Thank you for making it this far! Feel free to check out Coding for Visual Learners and feel free to reach out to me on Twitter or through my website with your comments, suggestions and questions.
Also thanks to Hoi-En and Leigh at Myplanet for reading drafts of this.