In this article, we’ll take a look at the File System core module, File Streams and some fs module alternatives.
Imagine that you just got a task that you have to complete using Node.js.
The problem you face seems to be easy, but instead of checking the official Node.js docs, you head over to Google or npm and search for a module that can do the job for you.
While this is totally okay, sometimes the core modules can easily do the trick for you.
In this new Mastering the Node.js Core Modules series you can learn what hidden/barely known features the core modules have, and how you can use them. We will also mention modules that extend their behaviors and are great additions to your daily development flow.
The fs module

File I/O is provided by simple wrappers around standard POSIX functions. To use the fs module, you have to require it with require('fs'). All of its methods have asynchronous and synchronous forms.
// the async api
const fs = require('fs')
fs.unlink('/tmp/hello', (err) => {
  if (err) {
    return console.log(err)
  }
  console.log('successfully deleted /tmp/hello')
})
You should always use the asynchronous API when developing production code, as it does not block the event loop, allowing you to build performant applications.
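To see that non-blocking behavior in action, here is a minimal sketch reusing the /tmp/hello example from above: the log statement placed after the call runs before the callback does.

```javascript
const fs = require('fs')

fs.unlink('/tmp/hello', (err) => {
  if (err) {
    return console.log(err)
  }
  console.log('successfully deleted /tmp/hello')
})

// this line is printed first - the event loop is free to keep working
// while the file operation runs in the background
console.log('unlink scheduled, moving on')
```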
// the sync api
const fs = require('fs')
try {
  fs.unlinkSync('/tmp/hello')
  console.log('successfully deleted /tmp/hello')
} catch (ex) {
  console.log(ex)
}
You should only use the synchronous API when building proof of concept applications, or small CLIs.
One of the things we see a lot is that developers barely take advantage of file streams.
Streams in Node.js are a powerful concept: with them you can keep the memory footprint of your applications small.
What are Node.js streams, anyway?
Streams are a first-class construct in Node.js for handling data. There are three main concepts to understand: readable streams, which are the source of the data; writable streams, which are its destination; and duplex/transform streams, which are both readable and writable and can modify the data as it passes through.
For more information check Substack’s Stream Handbook.
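As an illustration of those concepts, here is a minimal sketch that pipes a readable source through a transform stream into a writable destination (the input.txt and output.txt file names are made up for the example):

```javascript
const fs = require('fs')
const { Transform } = require('stream')

// a transform stream that upper-cases whatever flows through it
const upperCase = new Transform({
  transform (chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase())
  }
})

fs.createReadStream('input.txt')             // readable: the source
  .pipe(upperCase)                           // transform: modifies the data in flight
  .pipe(fs.createWriteStream('output.txt'))  // writable: the destination
```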
As the core fs module does not expose a feature to copy files, you can easily do it with streams:
// copy a file
const fs = require('fs')

const readableStream = fs.createReadStream('original.txt')
const writableStream = fs.createWriteStream('copy.txt')
readableStream.pipe(writableStream)
You could ask: why should I do this when it is just a cp command away?
The biggest advantage of using streams in this case is the ability to transform the files. For example, you can easily decompress a file like this:
const fs = require('fs')
const zlib = require('zlib')

fs.createReadStream('original.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('original.txt'))
fs.access
The purpose of the fs.access method is to check whether the user has permissions for the given file or path, something like this:
fs.access('/etc/passwd', fs.constants.R_OK | fs.constants.W_OK, (err) => {
  if (err) {
    return console.error('no access')
  }
  console.log('access for read/write')
})
Constants exposed for permission checking:
- fs.constants.F_OK - to check if the path is visible to the calling process,
- fs.constants.R_OK - to check if the path can be read by the process,
- fs.constants.W_OK - to check if the path can be written by the process,
- fs.constants.X_OK - to check if the path can be executed by the process.

However, please note that using **fs.access** to check for the accessibility of a file before calling **fs.open**, **fs.readFile** or **fs.writeFile** is not recommended.
The reason is simple: doing so introduces a race condition. Between your check and the actual file operation, another process may have already changed the file.
Instead, you should open the file directly, and handle error cases there.
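A minimal sketch of that recommended pattern, reusing the /etc/passwd path from the example above, could look like this:

```javascript
const fs = require('fs')

fs.open('/etc/passwd', 'r+', (err, fd) => {
  if (err) {
    // handle the error here, e.g. ENOENT (missing file) or EACCES (no permission)
    return console.error('cannot open file:', err.code)
  }

  // work with the file descriptor (fd) here...

  fs.close(fd, (closeErr) => {
    if (closeErr) {
      console.error(closeErr)
    }
  })
})
```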
fs.watch
With the fs.watch method, you can listen for changes to a file or a directory.
However, the **fs.watch** API is not 100% consistent across platforms, and on some systems, it is not available at all:
Note that the recursive option is only supported on OS X and Windows, but not on Linux.
Also, the filename argument of the watch callback is not always provided (it is only supported on Linux and Windows), so you should prepare a fallback for the case when it is undefined:
fs.watch('some/path', (eventType, filename) => {
  if (!filename) {
    // filename is missing, handle it gracefully
  }
})
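If you are on a platform that supports it, watching a whole directory tree is just the recursive option away (a sketch; as noted above, this does not work on Linux):

```javascript
const fs = require('fs')

// watch the directory and everything below it (OS X and Windows only)
fs.watch('some/path', { recursive: true }, (eventType, filename) => {
  console.log(eventType, filename)
})
```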
fs modules from npm

There are some very useful modules maintained by the community which extend the functionality of the fs module.
graceful-fs
graceful-fs is a drop-in replacement for the core fs module, with some improvements:
- it queues up open and readdir calls, and retries them once something closes if there is an EMFILE error from too many file descriptors,
- it ignores EINVAL and EPERM errors in chown, fchown or lchown if the user isn't root,
- lchmod and lchown become noops, if not available,
- it retries reading a file if the read results in an EAGAIN error.

You can start using it just like the core fs module, or alternatively by patching the global module.
// use as a standalone module
const fs = require('graceful-fs')
// patching the global one
const originalFs = require('fs')
const gracefulFs = require('graceful-fs')

gracefulFs.gracefulify(originalFs)
mock-fs
The mock-fs module allows Node's built-in fs module to be backed temporarily by an in-memory, mock file system. This lets you run tests against a set of mock files or directories.
Getting started with the module is as easy as:
const mock = require('mock-fs')
const fs = require('fs')

mock({
  'path/to/fake/dir': {
    'some-file.txt': 'file content here',
    'empty-dir': {}
  },
  'path/to/some.png': new Buffer([8, 6, 7, 5, 3, 0, 9])
})

fs.exists('path/to/fake/dir', function (exists) {
  console.log(exists) // will output true
})
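Once a test has finished, you should put the real file system back; for example, in a Mocha suite this is typically done in an afterEach hook (a sketch, the hook comes from Mocha, not from mock-fs):

```javascript
const mock = require('mock-fs')

afterEach(() => {
  // restore the original fs bindings after every test
  mock.restore()
})
```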
lockfile
File locking is a way to restrict access to a file by allowing only one process access at any specific time. This can prevent race condition scenarios.
Adding lockfiles using the [lockfile](https://github.com/npm/lockfile) module is straightforward:
const lockFile = require('lockfile')
lockFile.lock('some-file.lock', function (err) {
  // if the err happens, then it failed to acquire a lock.
  // if there was not an error, then the file was created,
  // and won't be deleted until we unlock it.

  // then, some time later, do:
  lockFile.unlock('some-file.lock', function (err) {

  })
})
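If another process might already hold the lock, you can tell lockfile to wait and retry instead of failing immediately; here is a sketch using the module's wait, retries and stale options (the values are arbitrary):

```javascript
const lockFile = require('lockfile')

const opts = {
  wait: 10000,   // wait up to 10 seconds for the lock to become available
  retries: 3,    // retry acquiring it a few times
  stale: 60000   // consider locks older than a minute stale
}

lockFile.lock('some-file.lock', opts, function (err) {
  if (err) {
    return console.error('could not acquire the lock', err)
  }

  // ... do the work that needs exclusive access ...

  lockFile.unlock('some-file.lock', function (err) {
    if (err) {
      console.error('failed to release the lock', err)
    }
  })
})
```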
I hope this was a useful explanation of the Node.js file system and its possibilities.
If you have any questions about the topic, please let me know in the comments section below.
Originally published at blog.risingstack.com on May 2, 2017.