Node.js Best Practices

This article was originally published on Codementor.io

Node.js has become one of the most popular platforms over the last couple of years. It sure is easy to get started on those Node.js projects, but once you get beyond the basic Hello World app, knowing how to best structure your code and how to deal with errors can sometimes become a nightmare (as with most languages and frameworks).

And unfortunately, that nightmare makes all the difference between a rock solid production application and a launch disaster.

With that said, let’s take a look at a few Node.js best practices that will keep you safe from the most common traps.

1. Start all projects with npm init

Most people are familiar with NPM as a way to install dependencies, but it is so much more than this. First, I highly recommend creating a new project using npm init, like so:

$ mkdir my-new-project
$ cd my-new-project
$ npm init

This will create a new package.json for you, which lets you add a bunch of metadata so that others working on the project end up with the same setup as you.

For example, I usually open the package.json and add a specific version of Node.js I plan to run on, by adding:

"engines": {
"node": "6.2.0"
}

Read more: How to Use JSON files in Node.js


2. Set up .npmrc

If you’ve used npm before, you may have come across the --save flag, which updates package.json with the dependency you just installed. When other developers clone the project, they can be sure to have the right dependencies because of this. Unfortunately, remembering to add the flag can be a problem.

In addition, NPM adds a leading caret ^ to all versions. Consequently, when someone runs npm install, they may get different versions of the modules than what you have. While updating modules is always a good practice, having a team of developers all running against slightly different versions of dependencies can lead to differences in behaviour or availability of APIs.

Therefore, it’s a good idea to have everyone on the same version. To make this easier for everyone, the .npmrc file has some useful properties that make sure npm install always updates package.json and enforces the version of every installed dependency to be an exact match.

Simply run the following lines in your terminal:

$ npm config set save=true
$ npm config set save-exact=true
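
These commands write to your user-level ~/.npmrc, which is just a plain key=value file. After running them, it should contain something like this:

save=true
save-exact=true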

Now when you run npm install, you can be sure the dependency is saved and will be locked down to the version you installed.

3. Add scripts to your package.json

If there’s one thing all applications need, it’s a launch script. Knowing which file to call first and with what arguments can be an epic adventure of discovery on some projects. Good thing NPM has a standard way to start all node applications.

Simply add a scripts object to your package.json with a start key. Its value should be the command to launch your app. For example:

"scripts": {
"start": "node myapp.js"
}

As soon as someone runs npm start, NPM will run node myapp.js with node_modules/.bin added to your $PATH, so the executables of your locally installed dependencies are available. This means you can avoid having to do global installs of NPM modules.

There are a couple of other script hooks worth knowing:

"scripts": {
"postinstall": "bower install && grunt build",
"start": "node myapp.js",
"test": "node ./node_modules/jasmine/bin/jasmine.js"
}

The postinstall script runs after npm install has finished. There’s also preinstall if you need to run something before the NPM dependencies are installed.

The test script is run when someone runs npm test. This is a nice simple way for someone to be able to run your tests without figuring out if you’ve chosen to use Jasmine, Mocha, Selenium, etc.

You can add your own custom scripts here, too. They can then be run using npm run-script {name} — a simple way for you to give your team a central set of launch scripts.
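
For example, a hypothetical lint script can sit alongside the others and be run with npm run-script lint (or the shorter npm run lint). Because node_modules/.bin is on the $PATH, the locally installed jshint binary is found automatically:

"scripts": {
  "start": "node myapp.js",
  "lint": "jshint ."
}

$ npm run lint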

4. Use environment variables

Configuration management is always a big topic in any language. How do you decouple your code from the databases, services, etc. that it has to use during development, QA, and production?

The recommended way in Node.js is to use environment variables and to look up the values from process.env in your code. For example, to figure out which environment you’re running on, check the NODE_ENV environment variable:

console.log("Running in :"  + process.env.NODE_ENV);

This is now a standard variable name used across most cloud-hosting vendors.

If you need to load further configuration, you can use a module like nconf (https://github.com/indexzero/nconf).
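
As a minimal sketch (the config.json file and the database:host key are just illustrative assumptions), nconf lets command-line arguments override environment variables, which in turn override a config file:

var nconf = require("nconf");

// Order matters: argv overrides env, which overrides the file.
nconf.argv()
  .env()
  .file({ file: "config.json" });

var dbHost = nconf.get("database:host");
console.log("Connecting to database at " + dbHost);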

5. Use a style guide

I know we’ve all had those moments where we open a file from another project, or from a different developer, for the first time and then spend the next hour reformatting the braces onto different lines, changing spaces to tabs, and vice versa. The problem here is a mixture of opinionated developers and no team or company standard style guide.

It’s far easier to understand code in a codebase if it’s all written in a consistent style. It also reduces the cognitive overhead of deciding whether you should be writing with tabs or spaces. If the style is dictated (and enforced using JSHint or JSCS), then all of a sudden the codebase becomes a lot more manageable.

You don’t have to come up with your own rules, either; sometimes it’s better to pick one of the well-known published JavaScript style guides and follow it.

Just pick one and stick with it!
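
If you go with JSHint for enforcement, a minimal .jshintrc committed to the repo is enough to get everyone on the same rules. The options below are only an illustrative starting point, not a recommended ruleset:

{
  "esversion": 6,
  "node": true,
  "undef": true,
  "unused": true
}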

6. Embrace async

I’m sure you’ve heard all the hype about promises, and maybe even a little about generators (ES2015) and async / await (ES2017). The key idea behind all these techniques is making your code async.

The problem with synchronous functions in JavaScript is that they block any other code from running until they complete. Synchronous code does make the flow of your application logic easy to follow, though, and async structures like promises bring back much of that readability while keeping your code non-blocking.

So first, I highly recommend running your app (during development only) with the --trace-sync-io flag. This will print a warning and stack trace whenever your application uses a synchronous API.

There are plenty of great articles about how to use promises, generators and async / await, so I don’t need to duplicate other great work that’s already available.
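
Still, here’s a small sketch to give you the flavour (it assumes Node 8+ for util.promisify and async / await support, and a hypothetical config.json):

const fs = require("fs");
const { promisify } = require("util"); // available from Node 8
const readFile = promisify(fs.readFile);

async function loadConfig() {
  // await suspends this function without blocking the event loop
  const raw = await readFile("config.json", "utf8");
  return JSON.parse(raw);
}

loadConfig()
  .then(function(config) { console.log("Loaded config:", config); })
  .catch(function(err) { console.error("Could not load config:", err); });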

7. Handle errors

Having an error bring down your entire app in production is never a great experience. Good exception management is important for any app, and the best way to deal with errors is to use the async structures above. For example, promises provide a .catch() handler that propagates all errors in a chain to a single place where they can be dealt with cleanly.

Let’s say you have a chain of promises, any one of which could suddenly fail. You can easily handle the error like so:

doSomething()
  .then(doNextStage)
  .then(recordTheWorkSoFar)
  .then(updateAnyInterestedParties)
  .then(tidyUp)
  .catch(errorHandler);

In the example above, it doesn’t matter which of the earlier functions fails: any error will end up in the errorHandler.

8. Ensure your app automatically restarts

Okay, so you followed the best practice to handle errors. Unfortunately, some error from a dependency still, somehow, brought down your app 🙁

This is where it’s important to use a process manager to make sure the app recovers gracefully from a runtime error. The other scenario where you need it to restart is if the entire server you’re running on goes down. In that situation, you want minimal downtime and for your application to restart as soon as the server is alive again!

I’d recommend using Keymetrics’ PM2 (http://pm2.keymetrics.io/) to manage your process.


First, install it as a global module:

$ npm install pm2 -g

Then to launch your process, you should run:

$ pm2 start myApp.js

To handle restarting after the server itself reboots, you can follow the PM2 startup guide for your platform (the pm2 startup command will generate the appropriate init script for your system).

9. Cluster your app to improve performance and reliability

By default Node.js is run in a single process. Ideally, you want one process for each CPU core so that you can distribute the work load across all the cores. This improves scalability of web apps processing HTTP requests and performance in general. In addition to this, if one worker crashes, the others are still available to handle requests.

One of the other benefits of using a process manager like PM2 is that it supports clustering out of the box:

To start up multiple instances of your app for each core on a machine, you’d simply run:

$ pm2 start myApp.js -i max

One thing to bear in mind is that each process is standalone: processes don’t share memory or resources. Each process will open its own connections to databases, for example, so always keep that in mind as you code. A useful tool for sharing session state across processes is Redis, an in-memory datastore that all the processes can access quickly.
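
PM2 handles all of this for you, but if you’re curious, here’s roughly what a process manager does under the hood using Node’s built-in cluster module (a sketch for illustration, not something you need when running under PM2):

var cluster = require("cluster");
var http = require("http");
var os = require("os");

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  os.cpus().forEach(function() {
    cluster.fork();
  });

  // Restart a worker if it dies.
  cluster.on("exit", function(worker) {
    console.log("Worker " + worker.process.pid + " died, starting a new one");
    cluster.fork();
  });
} else {
  // Each worker runs its own server; the master distributes incoming connections.
  http.createServer(function(req, res) {
    res.end("Handled by process " + process.pid + "\n");
  }).listen(3000);
}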

10. Require all your dependencies up front

I’ve seen many developers write code like this:

app.get("/my-service", function(request, response) {
var datastore = require("myDataStoreDep")(someConfig);

datastore.get(req.query.someKey)
// etc, ...
});

The problem with the code above is that the first time someone makes a request to /my-service, the code will load all the files required by myDataStoreDep, any of which could throw an exception. Additionally, when the configuration is passed in, there could be an error at that point as well, which could bring down the entire process. Finally, we don’t know how long that synchronous setup of a resource will take, and while it runs we are essentially blocking all other requests from being handled!

So you should always load and configure all your dependencies upfront. That way, you’ll know at startup if there is a problem, not three or four hours after your app has gone live in production!
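
Here’s a minimal sketch of the upfront approach, reusing the hypothetical myDataStoreDep and someConfig from the example above:

var express = require("express");
var app = express();

// Load and configure dependencies once, at startup.
// If anything here throws or takes a long time, you find out immediately,
// not on the first request in production.
var datastore = require("myDataStoreDep")(someConfig);

app.get("/my-service", function(request, response) {
  datastore.get(request.query.someKey)
  // etc, ...
});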

11. Use a logging library to increase error visibility

console.log is great, but it has limits in a production application. Sifting through thousands of lines of logs to find the cause of a bug (which I guarantee you will have to do at some point) is painful!

A mature logging library can help with this. First, it lets you set a level for each log message: debug, info, warning, or error. Second, it typically lets you log to different files or even to a remote datastore.

In my applications, for example, I typically log to https://www.loggly.com/. Loggly allows me to quickly search all my log messages using patterns. In addition, it can alert me if a threshold is reached — for example, if my web application starts returning 500 SERVER ERROR messages to my users for a period longer than 30 seconds, Loggly can send me a message and I can figure out what’s going on.


So what library should you use? Again, this is a matter of opinion. I personally like to use winston (https://github.com/winstonjs/winston).
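
As a minimal sketch (assuming a recent version of winston; the exact API has changed between major versions, and the file name is just an example), setting up a logger with console and file transports looks roughly like this:

var winston = require("winston");

// Log everything at "info" level and above to the console and to a file.
var logger = winston.createLogger({
  level: "info",
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: "app.log" })
  ]
});

logger.info("Server started on port 3000");
logger.error("Something went wrong looking up a user", { userId: 42 });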

12. Use Helmet if you’re writing a web app

If you’re writing a web application, there are a lot of common best practices that you should follow to secure your application:

  • XSS protection
  • Preventing clickjacking using X-Frame-Options
  • Enforcing all connections to be HTTPS
  • Setting a Content-Security-Policy header
  • Disabling the X-Powered-By header so attackers can’t narrow down their attacks to specific software

Instead of remembering to configure all these headers, Helmet will set them all to sensible defaults for you, and allow you to tweak the ones that you need.

It’s incredibly simple to set up on an Express.js application:

$ npm install helmet

And then in your code when setting up Express add:

var helmet = require('helmet');
app.use(helmet());
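
And when you do need to tweak one of the protections, Helmet accepts per-middleware options. As a hedged sketch (option names can vary between Helmet versions), denying framing entirely instead of the default same-origin policy looks something like this:

var helmet = require('helmet');

app.use(helmet({
  // illustrative option: disallow all framing rather than allowing same-origin frames
  frameguard: { action: "deny" }
}));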

13. Monitor your applications

Getting notified when something goes wrong is critical for production applications. You don’t want to check your Twitter feed and see thousands of angry users telling you your servers are down or your app is broken, and that it has been for the last few hours. Having something that monitors your app and alerts you to critical issues or abnormal behaviour is important.

We already discussed PM2 for process management. In addition, its developers, Keymetrics.io, run a process-monitoring SaaS with PM2 integration baked in. It’s very simple to enable, and they have a free plan which is a great starting point for a lot of developers. Once you’ve signed up for Keymetrics, you can simply run:

$ pm2 interact [public_key] [private_key] [machine_name]

This will start sending memory and CPU usage data, plus exception reports, to the Keymetrics servers, where you can view them from the dashboard. You can also view the latency of your HTTP requests, or set up events for when problems occur (for example, timeouts to downstream dependencies).

In addition, Loggly (which we mentioned earlier) also provides monitoring based on logs. The two tools in combination give you a way to react quickly to problems before they get out of hand.

14. Test your code

Yeah, yeah, yeah – I know I should be testing. TDD and all that jazz!

Seriously though, testing will save your ass on many occasions. Like forming any new habit, it’s painful to start and keep up the momentum, and it can feel like it slows down your development. But I can speak from experience: once the first few production issues hit a project with no tests, you’ll wish you had written them in the first place.

No matter what stage you are on a project, it’s never too late to introduce testing. My advice is start small, start simple. I’d also highly recommend writing a test for every bug that gets reported. That way you know:

  • How to reproduce the bug (make sure your test fails first!)
  • That the bug is fixed (make sure your test passes after you fix the issue)
  • That the bug will never occur again (make sure you run your tests on every new deployment)

There are a lot of testing libraries. I personally stick with Jasmine because I’ve used it for a long time now, but Mocha, Chai, or any other library is great too. If you’re writing a web application, I’d also highly recommend Supertest to black-box test your web endpoints.
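
As a quick sketch of what that can look like, here’s a Jasmine spec using Supertest. The ../app path and the exported Express app are assumptions for illustration, not part of any particular project layout:

var request = require("supertest");
var app = require("../app"); // hypothetical path to your exported Express app

describe("GET /my-service", function() {
  it("responds with 200", function(done) {
    request(app)
      .get("/my-service?someKey=abc")
      .expect(200)
      .end(function(err) {
        if (err) {
          done.fail(err);
        } else {
          done();
        }
      });
  });
});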


Interested in unit testing? Read the first part of our Unit Testing and TDD in Node.js series


Wrapping up

And with that, ladies and gentlemen, those are my nominees for the “top 14 best practices” of Node.js.

If you would like to nominate an additional Node.js best practice, please do so in the comments. Let’s save the world of Node.js projects together! 🙂


Author: Matt Goldspink

I'm a web developer based in the UK. I'm currently UI Architect at Vlocity, Inc., developing UIs on the Salesforce platform using a mix of Web Components, Angular.js, and Lightning.
