In this article, we'll take a look at the File System core module, File Streams, and some fs module alternatives.

Imagine that you just got a task which you have to complete using Node.js. The problem you face seems easy, but instead of checking the official Node.js docs, you head over to Google or npm and search for a module that can do the job for you. While this is totally okay, sometimes the core modules could easily do the trick for you.

In this new series, Mastering the Node.js Core Modules, you can learn what hidden or barely known features the core modules have and how you can use them. We will also mention modules that extend their behavior and are great additions to your daily development flow.

## The Node.js fs module

To use the fs module you have to require it with `require('fs')`. All of its methods have asynchronous and synchronous forms. File I/O is provided by simple wrappers around standard POSIX functions.

### The asynchronous API

```javascript
// the async api
const fs = require('fs')

fs.unlink('/tmp/hello', (err) => {
  if (err) {
    return console.log(err)
  }
  console.log('successfully deleted /tmp/hello')
})
```

You should always use the asynchronous API when developing production code, as it won't block the event loop, so you can build performant applications.

### The synchronous API

```javascript
// the sync api
const fs = require('fs')

try {
  fs.unlinkSync('/tmp/hello')
  console.log('successfully deleted /tmp/hello')
} catch (ex) {
  console.log(ex)
}
```

You should only use the synchronous API when building proof-of-concept applications or small CLI tools.

## Node.js File Streams

One of the things we see a lot is that developers barely take advantage of file streams. Streams in Node.js are a powerful concept: with them, you can keep the memory footprint of your applications small.

What are Node.js streams, anyway? Streams are a first-class construct in Node.js for handling data. There are three main concepts to understand:

- source: the object your data comes from,
- pipeline: where your data passes through (you can filter or modify it here),
- sink: where your data ends up.

For more information, check out Substack's Stream Handbook.

As the core fs module does not expose a feature to copy files, you can easily do it with streams:

```javascript
// copy a file
const fs = require('fs')

const readableStream = fs.createReadStream('original.txt')
const writableStream = fs.createWriteStream('copy.txt')

readableStream.pipe(writableStream)
```

You could ask: why should I do it this way when it is just a `cp` command away? The biggest advantage of using streams in this case is the ability to transform the files on the way. For example, you could easily decompress a file like this:

```javascript
const fs = require('fs')
const zlib = require('zlib')

fs.createReadStream('original.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('original.txt'))
```

## When not to use fs.access

The purpose of the `fs.access` method is to check whether the user has permissions for the given file or path, something like this:

```javascript
fs.access('/etc/passwd', fs.constants.R_OK | fs.constants.W_OK, (err) => {
  if (err) {
    return console.error('no access')
  }
  console.log('access for read/write')
})
```

The constants exposed for permission checking are:

- `fs.constants.F_OK`: checks if the path is visible to the calling process,
- `fs.constants.R_OK`: checks if the path can be read by the process,
- `fs.constants.W_OK`: checks if the path can be written by the process,
- `fs.constants.X_OK`: checks if the path can be executed by the process.

However, please note that using `fs.access` to check for the accessibility of a file before calling `fs.open`, `fs.readFile`, or `fs.writeFile` is not recommended. The reason is simple: doing so introduces a race condition. Between your check and the actual file operation, another process may have already changed the file. Instead, you should open the file directly and handle error cases there.
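As a minimal sketch of this pattern (reusing the `/tmp/hello` path from the earlier examples), you open the file directly and deal with a missing or unreadable file in the error branch of the callback:

```javascript
const fs = require('fs')

// open the file directly instead of probing it with fs.access first
fs.open('/tmp/hello', 'r', (err, fd) => {
  if (err) {
    // the file may not exist or may not be readable: handle it here
    return console.error('could not open the file', err)
  }

  // ... read from the file descriptor here ...

  fs.close(fd, (err) => {
    if (err) {
      console.error('could not close the file', err)
    }
  })
})
```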
## Caveats about fs.watch

With the `fs.watch` method, you can listen for changes on a file or a directory.

However, the `fs.watch` API is not 100% consistent across platforms, and on some systems it is not available at all:

- on Linux systems, it uses inotify,
- on BSD systems, it uses kqueue,
- on OS X, it uses kqueue for files and FSEvents for directories,
- on SunOS systems (including Solaris and SmartOS), it uses event ports,
- on Windows systems, it depends on ReadDirectoryChangesW.

Note that the recursive option is only supported on OS X and Windows, but not on Linux.

Also, the filename argument in the watch callback is not always provided (it is only supported on Linux and Windows), so you should prepare a fallback for the cases when it is undefined:

```javascript
fs.watch('some/path', (eventType, filename) => {
  if (!filename) {
    // filename is missing, handle it gracefully
  }
})
```

## Useful fs modules from npm

There are some very useful modules maintained by the community which extend the functionality of the fs module.

### graceful-fs

graceful-fs is a drop-in replacement for the core fs module, with some improvements:

- it queues up open and readdir calls and retries them once something closes, if there is an EMFILE error from too many open file descriptors,
- it ignores EINVAL and EPERM errors in chown, fchown, or lchown if the user isn't root,
- it makes lchmod and lchown become no-ops if they are not available,
- it retries reading a file if read results in an EAGAIN error.

You can start using it just like the core fs module, or alternatively by patching the global fs module:

```javascript
// use as a standalone module
const fs = require('graceful-fs')

// patching the global one
const originalFs = require('fs')
const gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(originalFs)
```

### mock-fs

The mock-fs module allows Node's built-in fs module to be backed temporarily by an in-memory, mock file system. This lets you run tests against a set of mock files and directories.

Getting started with the module is as easy as:

```javascript
const mock = require('mock-fs')
const fs = require('fs')

mock({
  'path/to/fake/dir': {
    'some-file.txt': 'file content here',
    'empty-dir': {}
  },
  'path/to/some.png': Buffer.from([8, 6, 7, 5, 3, 0, 9])
})

fs.exists('path/to/fake/dir', function (exists) {
  console.log(exists) // will output true
})
```

### lockfile

File locking is a way to restrict access to a file by allowing only one process to access it at any specific time; with it, you can prevent race-condition scenarios. Adding lockfiles using the [lockfile](https://github.com/npm/lockfile) module is straightforward:

```javascript
const lockFile = require('lockfile')

lockFile.lock('some-file.lock', function (err) {
  // if err happens, then the lock could not be acquired.
  // if there was no error, then the lock file was created,
  // and it won't be deleted until we unlock it.

  // then, some time later, do:
  lockFile.unlock('some-file.lock', function (err) { })
})
```
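In practice you usually also want to bound how long a process waits for a lock, and decide what happens to locks left behind by crashed processes. The lockfile README documents options such as `retries`, `retryWait`, and `stale` for this; the sketch below assumes those option names, and the concrete values are only illustrative:

```javascript
const lockFile = require('lockfile')

// assumption: option names as documented in the lockfile README
const opts = {
  retries: 5,      // retry acquiring the lock a few times
  retryWait: 100,  // wait 100 ms between retries
  stale: 10000     // consider locks older than 10 s abandoned
}

lockFile.lock('some-file.lock', opts, (err) => {
  if (err) {
    return console.error('could not acquire the lock', err)
  }

  // ... do the exclusive work here ...

  lockFile.unlock('some-file.lock', (err) => {
    if (err) {
      console.error('could not release the lock', err)
    }
  })
})
```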
## Conclusion

I hope this was a useful explanation of the Node.js file system and its possibilities. If you have any questions about the topic, please let me know in the comments section below.

Originally published at blog.risingstack.com on May 2, 2017.