Tag Archives: Node.js

JWT implementation with Refresh Token in Node.js example | MongoDB

In a previous post, we learned how to build Token-based Authentication & Authorization with Node.js, JWT and MongoDB. This tutorial continues that work by adding a JWT Refresh Token to the Node.js Express application. You will learn how to expire the JWT, then renew the Access Token with a Refresh Token.

Related Posts:
– Node.js, Express & MongoDb: Build a CRUD Rest Api example
– How to upload/store images in MongoDB using Node.js, Express & Multer
– Using MySQL/PostgreSQL instead: JWT Refresh Token implementation in Node.js example

Associations:
– MongoDB One-to-One relationship with Mongoose example
– MongoDB One-to-Many Relationship tutorial with Mongoose examples
– MongoDB Many-to-Many Relationship with Mongoose examples

The code in this post builds on a previous article that you should read first:
Node.js + MongoDB: User Authentication & Authorization with JWT

Overview of JWT Refresh Token with Node.js example

We already have a Node.js Express & MongoDB application in which:

  • Users can sign up for a new account, or log in with a username & password.
  • Based on the User’s role (admin, moderator, user), we authorize the User to access resources.

With APIs:

Methods   Urls                Actions
POST      /api/auth/signup    signup new account
POST      /api/auth/signin    login an account
GET       /api/test/all       retrieve public content
GET       /api/test/user      access User’s content
GET       /api/test/mod       access Moderator’s content
GET       /api/test/admin     access Admin’s content

For more details, please visit this post.

We’re gonna add Token Refresh to this Node.js & JWT Project.
The final result can be described with the following requests/responses:

– Send a /signin request; the response contains a refreshToken.

– Access a resource successfully with the accessToken.

– When the accessToken expires, the user cannot use it anymore.

– Send a /refreshtoken request; the response contains a new accessToken.

– Access the resource successfully with the new accessToken.

– Send an expired Refresh Token: the request is rejected.

– Send a Refresh Token that doesn’t exist in the database: the request is rejected.

To test this flow from a client, see the Axios Interceptors tutorial with Refresh Token example, or use a React, Vue, or Angular client.

Flow for JWT Refresh Token implementation

The diagram below shows the flow we use to implement the authentication process with Access Token and Refresh Token.

[Diagram: JWT Refresh Token flow]

– A valid JWT must be added to the HTTP header whenever the Client accesses protected resources.
– A refreshToken is provided at the time the user signs in.

How to Expire JWT Token in Node.js

The Refresh Token has a different value and expiration time from the Access Token.
We usually configure the Refresh Token’s expiration time to be longer than the Access Token’s.

Open config/auth.config.js:

module.exports = {
  secret: "bezkoder-secret-key",
  jwtExpiration: 3600,           // 1 hour
  jwtRefreshExpiration: 86400,   // 24 hours

  /* for test */
  // jwtExpiration: 60,          // 1 minute
  // jwtRefreshExpiration: 120,  // 2 minutes
};

Update the middlewares/authJwt.js file to catch TokenExpiredError in the verifyToken() function.

const jwt = require("jsonwebtoken");
const config = require("../config/auth.config");
const db = require("../models");
...
const { TokenExpiredError } = jwt;

const catchError = (err, res) => {
  if (err instanceof TokenExpiredError) {
    return res.status(401).send({ message: "Unauthorized! Access Token was expired!" });
  }

  return res.status(401).send({ message: "Unauthorized!" });
}

const verifyToken = (req, res, next) => {
  let token = req.headers["x-access-token"];

  if (!token) {
    return res.status(403).send({ message: "No token provided!" });
  }

  jwt.verify(token, config.secret, (err, decoded) => {
    if (err) {
      return catchError(err, res);
    }
    req.userId = decoded.id;
    next();
  });
};

Create Refresh Token Model

This Mongoose model has a one-to-one relationship with the User model. It contains an expiryDate field whose value is set by adding the config.jwtRefreshExpiration value above to the current time.

There are 2 static methods:

  • createToken: uses the uuid library to create a random token and saves a new object into the MongoDB database
  • verifyExpiration: compares expiryDate with the current date/time to check for expiration

const mongoose = require("mongoose");
const config = require("../config/auth.config");
const { v4: uuidv4 } = require('uuid');

const RefreshTokenSchema = new mongoose.Schema({
  token: String,
  user: {
    type: mongoose.Schema.Types.ObjectId,
    ref: "User",
  },
  expiryDate: Date,
});

RefreshTokenSchema.statics.createToken = async function (user) {
  let expiredAt = new Date();

  expiredAt.setSeconds(
    expiredAt.getSeconds() + config.jwtRefreshExpiration
  );

  let _token = uuidv4();

  let _object = new this({
    token: _token,
    user: user._id,
    expiryDate: expiredAt.getTime(),
  });

  console.log(_object);

  let refreshToken = await _object.save();

  return refreshToken.token;
};

RefreshTokenSchema.statics.verifyExpiration = (token) => {
  return token.expiryDate.getTime() < new Date().getTime();
}

const RefreshToken = mongoose.model("RefreshToken", RefreshTokenSchema);

module.exports = RefreshToken;

Don’t forget to export this model in models/index.js:

const mongoose = require('mongoose');
mongoose.Promise = global.Promise;

const db = {};

db.mongoose = mongoose;

db.user = require("./user.model");
db.role = require("./role.model");
db.refreshToken = require("./refreshToken.model");

db.ROLES = ["user", "admin", "moderator"];

module.exports = db;

Node.js Express Rest API for JWT Refresh Token

Let’s update the payloads for our Rest APIs:
– Requests:

  • Refresh Token Request: { refreshToken }

– Responses:

  • Signin Response: { accessToken, refreshToken, id, username, email, roles }
  • Message Response: { message }
  • RefreshToken Response: { accessToken (new), refreshToken }

In the Auth Controller, we:

  • update the method for /signin endpoint with Refresh Token
  • expose the POST API for creating new Access Token from received Refresh Token

controllers/auth.controller.js

const config = require("../config/auth.config");
const db = require("../models");
const { user: User, role: Role, refreshToken: RefreshToken } = db;

const jwt = require("jsonwebtoken");
const bcrypt = require("bcryptjs");

...
exports.signin = (req, res) => {
  User.findOne({
    username: req.body.username,
  })
    .populate("roles", "-__v")
    .exec(async (err, user) => {
      if (err) {
        res.status(500).send({ message: err });
        return;
      }

      if (!user) {
        return res.status(404).send({ message: "User Not found." });
      }

      let passwordIsValid = bcrypt.compareSync(
        req.body.password,
        user.password
      );

      if (!passwordIsValid) {
        return res.status(401).send({
          accessToken: null,
          message: "Invalid Password!",
        });
      }

      let token = jwt.sign({ id: user.id }, config.secret, {
        expiresIn: config.jwtExpiration,
      });

      let refreshToken = await RefreshToken.createToken(user);

      let authorities = [];

      for (let i = 0; i < user.roles.length; i++) {
        authorities.push("ROLE_" + user.roles[i].name.toUpperCase());
      }
      res.status(200).send({
        id: user._id,
        username: user.username,
        email: user.email,
        roles: authorities,
        accessToken: token,
        refreshToken: refreshToken,
      });
    });
};

exports.refreshToken = async (req, res) => {
  const { refreshToken: requestToken } = req.body;

  if (requestToken == null) {
    return res.status(403).json({ message: "Refresh Token is required!" });
  }

  try {
    let refreshToken = await RefreshToken.findOne({ token: requestToken });

    if (!refreshToken) {
      res.status(403).json({ message: "Refresh token is not in database!" });
      return;
    }

    if (RefreshToken.verifyExpiration(refreshToken)) {
      RefreshToken.findByIdAndRemove(refreshToken._id, { useFindAndModify: false }).exec();
      
      res.status(403).json({
        message: "Refresh token was expired. Please make a new signin request",
      });
      return;
    }

    let newAccessToken = jwt.sign({ id: refreshToken.user._id }, config.secret, {
      expiresIn: config.jwtExpiration,
    });

    return res.status(200).json({
      accessToken: newAccessToken,
      refreshToken: refreshToken.token,
    });
  } catch (err) {
    return res.status(500).send({ message: err });
  }
};

In the refreshToken() function:

  • Firstly, we get the Refresh Token from the request data
  • Next, we look up the RefreshToken object { id, user, token, expiryDate } by its raw token value, using the RefreshToken model
  • We verify whether the token is expired based on the expiryDate field. If the Refresh Token was expired, remove it from the MongoDB database and return a message
  • Then we use the user._id field of the RefreshToken object as a parameter to generate a new Access Token with the jsonwebtoken library
  • Return { accessToken (new), refreshToken } if everything is done
  • Or else, send an error message

Define Route for JWT Refresh Token API

Finally, we need to set up the route so the server knows how to respond at this endpoint.
In routes/auth.routes.js, add one line of code:

...
const controller = require("../controllers/auth.controller");

module.exports = function(app) {
  ...
  app.post("/api/auth/refreshtoken", controller.refreshToken);
};
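
To sanity-check the new endpoint without a full front-end client, a short script like the one below can exercise the whole flow. This is only a sketch, not part of the tutorial: it assumes the server listens on http://localhost:8080, that a test account already exists, and that axios is installed.

// smoke-test-refresh.js (illustrative only)
const axios = require("axios");
const API = "http://localhost:8080/api/auth"; // assumed base URL

async function run() {
  // 1. Sign in to obtain both tokens
  const { data: signin } = await axios.post(`${API}/signin`, {
    username: "testuser", // hypothetical existing account
    password: "12345678",
  });
  console.log("accessToken:", signin.accessToken);

  // 2. Trade the refresh token for a new access token
  //    (normally done after the access token expires)
  const { data: refreshed } = await axios.post(`${API}/refreshtoken`, {
    refreshToken: signin.refreshToken,
  });
  console.log("new accessToken:", refreshed.accessToken);
}

run().catch((err) => console.error(err.response ? err.response.data : err.message));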

Conclusion

Today we’ve learned how to implement a JWT Refresh Token in a Node.js example using an Express Rest API and MongoDB. You also know how to expire the JWT and renew the Access Token.

The code in this post builds on a previous article that you should read first:
Node.js + MongoDB: User Authentication & Authorization with JWT

If you want to use MySQL/PostgreSQL instead, please visit:
JWT Refresh Token implementation in Node.js example

You can test this Rest API with:
– Axios Client: Axios Interceptors tutorial with Refresh Token example
– React, Vue, or Angular clients

Happy learning! See you again.

Further Reading

Fullstack CRUD application:
– MEVN: Vue.js + Node.js + Express + MongoDB example
– MEAN:
Angular 8 + Node.js + Express + MongoDB example
Angular 10 + Node.js + Express + MongoDB example
Angular 11 + Node.js + Express + MongoDB example
Angular 12 + Node.js + Express + MongoDB example
– MERN: React + Node.js + Express + MongoDB example

Source Code

You can find the complete source code for this tutorial on Github.

from: JWT implementation with Refresh Token in Node.js example | MongoDB – BezKoder

Teach Yourself Node.js in 10 Steps

Let’s start by talking about modularity, to grasp the differences between Node and code in the browser. I’ve prepared a special something you can use as you read this article: you can find every example in this article on GitHub, nicely packed for you to start playing right away.

Modularity in Node

Node implements CommonJS Modules/1.1, which allow you to keep files self-contained. You can learn all about Node Modules from their increasingly useful documentation.

Modules can expose an API through the module.exports convention.

// modules/math.js

var api = {
    sum: function(a, b){
        return a + b;
    }
};

module.exports = api;

Note the variable api won’t make its way to the global object. You can learn why from the docs. Globals work differently in Node. The top-level scope of a module is local to that module, but you can still access a few globals on your own, such as process and console. Setting up your own globals on the global object is discouraged.

Consequently, module isn’t a global, but rather a local variable, private to the module we are currently working on.

Modules can be referenced using the require function. You can provide a package name (more on that later), or a physical path relative to the file you are invoking require from.

// modules/app.js

var math = require('./math.js');
var result = math.sum(1, 2);

console.log('I can\'t believe the result is:', result, '!');

Both files are assumed to be in the same directory. The math variable will now equal the value of module.exports in math.js.

What does console.log do? It’s actually just syntactic sugar for process.stdout.write, and it will append a new line \n at the end. This is deliberately done all over Node to help ease your on-boarding onto the platform by leveraging the conventions and objects you are already used to from your experience in writing client-side JavaScript.
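
To see this for yourself, the two lines below print the same thing. A minimal illustration (the file name is mine, not from the companion repository); note that console.log also formats multiple arguments, which a raw stdout write doesn’t.

// modules/log.js

process.stdout.write('Hello Node\n'); // roughly what console.log does under the hood
console.log('Hello Node');            // same output, newline appended for you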

Sidebar. You might want to read the actual console API documentation.

Note that requiring a file multiple times in the same process will only execute the code in the module once. So if you were to require the app.js module several times, it would still be executed a single time. As a result, the output would only be buffered once.

// modules/several.js

require('./app.js');
require('./app.js');
require('./app.js');

Those are modules all right, but where’s the asynchronicity Node is supposedly so popular for?

Asynchronous Convention

Node is an event-based language, and most of the code written for Node follows a really simple convention that helps modules look inspiringly similar to each other, as far as coding conventions go.

Our math module would probably look more like this if we wanted to play nice with the Node community at large.

// async/math.js

var api = {
    sum: function(a, b, done){
        process.nextTick(function(){
            done(null, a + b);
        });
    }
};

module.exports = api;

process.nextTick is kind of hard to wrap our heads around at first, but let’s just imagine it’s setTimeout(fn, 0), which we might have used while trying hacky fixes in the browser.

I’ve used process.nextTick to turn an otherwise synchronous function into an asynchronous one. When we are done processing, we pass the result as the second parameter of the done callback. The first parameter should always be err; if an error occurs, we pass it as the first parameter rather than throwing an exception. If no error occurs, we are fine passing any falsy value.

Consuming this module is still really easy.

// async/app.js

var math = require('./math.js');

math.sum(1, 2, function(err, result){
    if(err){
        throw err;
    }
    
    console.log('I can\'t believe the result is:', result, '!');
});

We are now waiting reactively for the sum function to let us know when it’s done. This is the most basic of asynchronous examples in Node.js. Note how we changed modes and use throw here; this is fine as long as we are on a synchronous code path. Throwing errors should always have the end result of process termination, so keep that in mind when dealing with these situations. This is acceptable for our console application; in a web application we would probably prefer to just return an HTTP status code 500, internal server error, for the current request.

You’ll also have to consider the option of bubbling errors through multiple asynchronous calls. This, for example, might not be the best error-handling approach:

// async/wrong.js

var math = require('./math.js');

math.sum(1, 2, function(err, result){
    if(err){
        throw err;
    }

    math.sum(result, 3, function(err, result){
        if(err){
            throw err;
        }

        console.log('I can\'t believe the result is:', result, '!');
    });
});

A more sensible approach might be to avoid throwing errors all over the place, but handle those in a centralized location.

// async/better.js

var math = require('./math.js');

math.sum(1, 2, function(err, result){
    if(err){
        return then(err);
    }

    math.sum(result, 3, function(err, result){
        if(err){
            return then(err);
        }

        then(err, result);
    });
});

function then(err, result){
    if(err){
        throw err;
    }

    console.log('I can\'t believe the result is:', result, '!');
}

This is however, getting pretty verbose. Let me skip to a module for a bit, and then come back and explain what’s going on. We’re going to use the control flow module called async, to improve the readability of our code.

// async/right.js

var async = require('async');
var math = require('./math.js');

async.waterfall([
    function(next){
        math.sum(1, 2, next);
    },
    function(result, next){
        math.sum(result, 3, next);
    }
], then);

function then(err, result){
    if(err){
        throw err;
    }

    console.log('I can\'t believe the result is:', result, '!');
}

That’s a little better. Since we’ve been following the right conventions, we can use async, which lets us get rid of all those pesky if(err) statements and flatten our callback hell while we’re at it. waterfall‘s API is pretty simple: we give it an array of functions, and these will be called in series; when our first math.sum completes, it will invoke the next callback with the (null, 3) arguments. If a function passes a truthy value as the first argument to its callback, this short-circuits the waterfall, immediately jumping to the then function with the error argument still in the first position. If no error occurs, the next function in the sequence is executed, passing any resulting arguments to it (in this case, just the 3).

This is the recommended way of doing things because it flattens the structure of our code, turning our codebase into something more readable, while at the same time following the same conventions and using the same API that is used everywhere else. You must check out the async module and its comprehensive API, toy with it for a while.

That’s great and all, but where did async come from? It sure as hell isn’t part of Node.

I’m glad you asked.

Node Packaged Modules

npm is a small treasure that comes bundled with Node, and helps you manage dependencies in your projects. There is a huge repository you can search, and most people include installation instructions in their GitHub repositories. Ultimately, npm is a CLI (command-line interface) tool.

If you have been following the instructions of the learn-nodejs repository I provided, then you already have dependencies installed in your project folder. If not, just run the following command in your terminal.

$ npm install

That’s it, now you have everything you need. How does that work? Some weird magic? No, just package.json. This file helps us define the dependencies in our project. When you ran npm install in your terminal, all it did was install the dependencies listed in the package.json file.

{
  "name": "learn-nodejs",
  "description": "Simple NodeJS Application Examples",
  "homepage": "https://github.com/ponyfoo/learn-nodejs",
  "author": {
    "name": "Nicolas Bevacqua",
    "email": "nicolasbevacqua@gmail.com",
    "url": "http://www.ponyfoo.com"
  },
  "version": "0.0.1",
  "repository": {
    "type": "git",
    "url": "https://github.com/ponyfoo/learn-nodejs.gitt"
  },
  "dependencies": {
    "async": "~0.2.9"
  }
}

I rarely add dependencies manually to this definition file, in the case of async, for example, all I did was run the following command:

$ npm install async --save

That’s it. async has been added to the dependencies object. Installing a module basically just fetches it and adds it to a node_modules folder, which you should always exclude in your .gitignore settings.

If you are interested in developing your own npm module, you’ll be shocked to learn how simple that is.

Before we jump into building a decent application, let’s look at one of the most powerful constructs in Node.

Events API

Yes! Of course, I was talking about the event emitter API. What are events? Well, the documentation explains it like this:

Many objects in Node emit events: a net.Server emits an event each time a peer connects to it, a fs.readStream emits an event when the file is opened. All objects which emit events are instances of events.EventEmitter. You can access this module by doing: require('events');

Functions can then be attached to objects, to be executed when an event is emitted. These functions are called listeners. Inside a listener function, this refers to the EventEmitter that the listener was attached to.

Let’s write our own event emitter and explain a few things along the way. Then, we’ll see how it can be used.

// events/implementation.js

var util = require('util');
var EventEmitter = require('events').EventEmitter;

function Heartbeat(interval){
    EventEmitter.call(this);

    var emitter = this;
    var beats = 0;

    setInterval(function(){
        emitter.emit('beat', ++beats);
    }, interval);
}

util.inherits(Heartbeat, EventEmitter);

module.exports = Heartbeat;

Don’t laugh, that’s the best I could come up with. Here I’m simply creating a constructor function for my custom EventEmitter implementation. I’m using util.inherits, as it’s the recommended way of performing prototypal inheritance in Node applications.

Whenever our emitter .emits an event, all subscribers to that event will be notified, and receive the arguments which were provided when the event was emitted.

Remember what I mentioned about leveraging the API knowledge you already have from writing client-side JavaScript? setInterval is one of those cases.

Fine, how do we use our newly born event emitter? It’s simple, really:

// events/usage.js

var Heartbeat = require('./implementation.js');
var a = new Heartbeat(400);
var b = new Heartbeat(1000);

a.on('beat', function(beats){
    console.log('Heart A beat n times:', beats);
});

b.on('beat', function(beats){
    console.log('Heart B beat n times:', beats);
});

I’m not even sure I need to explain this, but whenever the emitter invokes .emit, every listener for that event, added by .on, will have its callback triggered. This seemingly innocent API powers a lot of what Node does.

One last thing, read this quote from the documentation:

When an EventEmitter instance experiences an error, the typical action is to emit an 'error' event. Error events are treated as a special case in node. If there is no listener for it, then the default action is to print a stack trace and exit the program.

What this means is that if there is no .on('error', fn) listener, and your emitter emits an 'error' event, then your application will die a tragic death.
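
A one-line guard is enough to stay alive. Here’s a minimal sketch (not from the companion repository; the error is emitted manually just to make the point):

// events/guarded.js

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.on('error', function(err){
    console.log('caught:', err.message); // without this listener, the process dies
});

emitter.emit('error', new Error('a tragic death, averted'));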

HTTP Server

Enough blabbering, here is an HTTP server in Node.

// http/server.js

var http = require('http');

http.createServer(function(req, res) {
    res.writeHead(200);
    res.end('Hello Node');
    console.log('I think I\'ve heard something!');
}).listen(8000);

console.log('Listening!');

That wasn’t so amusing; it was very simple and self-describing, though! Let’s try something different, serving an HTML file from disk.

// http/html.js

var http = require('http');
var fs = require('fs');
var path = require('path');
var index = path.resolve(__dirname, './index.html');

http.createServer(function(req, res) {
    var stream = fs.createReadStream(index);

    stream.on('open', function(){
        res.writeHead(200, { 'Content-Type': 'text/html' });
    });

    stream.on('error', function(){
        res.writeHead(404);
        res.end();
    });

    stream.pipe(res);
}).listen(8000);

console.log('Listening!');

Couple of things. First of all, __dirname is a special local variable that contains the absolute path to the directory for our currently executing module. We just learned what events are, the fs.createReadStream method will provide us with an event emitter we can use to stream data to the response. The file will be piped straight into a chunked response, this can be achieved using the readable.pipe method. If the file isn’t found, the stream will emit an 'error' event; we can take advantage of that and respond with a 404 status code instead.

This is, however, a very convoluted thing to do to just serve a file. Enter Express.

Express Application Framework

Express is built on Connect, which expands on Node’s HTTP server. There’s also Socket.IO for implementing web socket communications, but I won’t be getting into realtime for now.

Connect just provides middleware, a nice abstraction over what the native HTTP module offers. Express builds on that, adding a lot of awesome features, and making your life more bearable. Here is a small sample application built on Express:

// http/express.js

var express = require('express');
var app = express();

app.get('/', function(req, res){
    res.send('hello world');
});

app.listen(8000);

The API is incredibly self-documenting, I wish more projects had an API as clean as Express does.

Enough already. You are mean for laughing at all of my stupid examples. You know what else is mean?

MongoDB, ExpressJS, AngularJS and NodeJS

The MEAN Stack is not a hipster thing, as delusional people try to assert with no real reasoning behind their empty statements. The MEAN stack is a very real thing. Here’s a slideshare for you to look at.

The glaring benefit of using a stack such as this is the ease with which you can transfer objects through your application without having to resort to different interfaces, data presentation alternatives, and programming languages. You can really get away with just using JavaScript everywhere.

from: https://ponyfoo.com/articles/teach-yourself-nodejs-in-10-steps

Node.js Best Practices

Welcome! 3 Things You Ought To Know First

1. You are reading dozens of the best Node.js articles – this repository is a summary and curation of the top-ranked content on Node.js best practices, as well as content written here by collaborators

2. It is the largest compilation, and it is growing every week – currently, more than 80 best practices, style guides, and architectural tips are presented. New issues and pull requests are created every day to keep this live book updated. We’d love to see you contributing here, whether that is fixing code mistakes, helping with translations, or suggesting brilliant new ideas. See our writing guidelines here

3. Best practices have additional info – most bullets include a Read More link that expands on the practice with code examples, quotes from selected blogs, and more information

Table of Contents

  1. Project Structure Practices (5)
  2. Error Handling Practices (12)
  3. Code Style Practices (12)
  4. Testing And Overall Quality Practices (13)
  5. Going To Production Practices (19)
  6. Security Practices (25)
  7. Performance Practices (2) (Work In Progress)
  8. Docker Practices (15)

 

1. Project Structure Practices

✔ 1.1 Structure your solution by components

TL;DR: The worst large applications pitfall is maintaining a huge code base with hundreds of dependencies – such a monolith slows down developers as they try to incorporate new features. Instead, partition your code into components, each gets its folder or a dedicated codebase, and ensure that each unit is kept small and simple. Visit ‘Read More’ below to see examples of correct project structure

Otherwise: When developers who code new features struggle to realize the impact of their change and fear to break other dependent components – deployments become slower and riskier. It’s also considered harder to scale-out when all the business units are not separated

Read More: structure by components

 

✔ 1.2 Layer your components, keep the web layer within its boundaries

TL;DR: Each component should contain ‘layers’ – a dedicated object for the web, logic, and data access code. This not only draws a clean separation of concerns but also significantly eases mocking and testing the system. Though this is a very common pattern, API developers tend to mix layers by passing the web layer objects (e.g. Express req, res) to business logic and data layers – this makes your application dependent on and accessible only by specific web frameworks

Otherwise: App that mixes web objects with other layers cannot be accessed by testing code, CRON jobs, triggers from message queues, etc

Read More: layer your app

 

✔ 1.3 Wrap common utilities as npm packages

TL;DR: In a large app that constitutes a large codebase, cross-cutting-concern utilities like a logger, encryption and alike, should be wrapped by your code and exposed as private npm packages. This allows sharing them among multiple codebases and projects

Otherwise: You’ll have to invent your deployment and the dependency wheel

Read More: Structure by feature

 

✔ 1.4 Separate Express ‘app’ and ‘server’

TL;DR: Avoid the nasty habit of defining the entire Express app in a single huge file – separate your ‘Express’ definition to at least two files: the API declaration (app.js) and the networking concerns (WWW). For even better structure, locate your API declaration within components

Otherwise: Your API will be accessible for testing via HTTP calls only (slower and much harder to generate coverage reports). It probably won’t be a big pleasure to maintain hundreds of lines of code in a single file

Read More: separate Express ‘app’ and ‘server’
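
A minimal sketch of the split (file names follow the common convention; this is not an excerpt from the linked examples):

// app.js – API declaration only, no networking
const express = require('express');
const app = express();
app.get('/health', (req, res) => res.send('ok'));
module.exports = app;

// server.js – networking concerns only
const app = require('./app');
app.listen(process.env.PORT || 3000);

Tests can now require('./app') directly (e.g. with supertest) instead of issuing real HTTP calls against a running server.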

 

✔ 1.5 Use environment aware, secure and hierarchical config

TL;DR: A perfect and flawless configuration setup should ensure (a) keys can be read from file AND from environment variable (b) secrets are kept outside committed code (c) config is hierarchical for easier findability. There are a few packages that can help tick most of those boxes like rc, nconf, config, and convict.

Otherwise: Failing to satisfy any of the config requirements will simply bog down the development or DevOps team. Probably both

Read More: configuration best practices

 


2. Error Handling Practices

✔ 2.1 Use Async-Await or promises for async error handling

TL;DR: Handling async errors in callback style is probably the fastest way to hell (a.k.a the pyramid of doom). The best gift you can give to your code is using a reputable promise library or async-await instead which enables a much more compact and familiar code syntax like try-catch

Otherwise: Node.js callback style, function(err, response), is a promising way to un-maintainable code due to the mix of error handling with casual code, excessive nesting, and awkward coding patterns

Read More: avoiding callbacks
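
For illustration, the same route logic in both styles; getUser and getOrders are hypothetical helpers (callback-flavored in the first snippet, promise-returning in the second):

// Callback style – error checks interleaved with the logic, nesting grows
function sendOrders(req, res, next) {
  getUser(req.params.id, (err, user) => {
    if (err) return next(err);
    getOrders(user, (err, orders) => {
      if (err) return next(err);
      res.send(orders);
    });
  });
}

// Async-await – one flat, familiar try/catch path
async function sendOrders(req, res, next) {
  try {
    const user = await getUser(req.params.id);
    const orders = await getOrders(user);
    res.send(orders);
  } catch (err) {
    next(err);
  }
}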

 

✔ 2.2 Use only the built-in Error object

TL;DR: Many throw errors as a string or as some custom type – this complicates the error handling logic and the interoperability between modules. Whether you reject a promise, throw an exception or emit an error – using only the built-in Error object (or an object that extends the built-in Error object) will increase uniformity and prevent loss of information. There is a no-throw-literal ESLint rule that strictly checks that (although it has some limitations which can be solved when using TypeScript and setting the @typescript-eslint/no-throw-literal rule)

Otherwise: When invoking some component, being uncertain which type of errors come in return – it makes proper error handling much harder. Even worse, using custom types to describe errors might lead to loss of critical error information like the stack trace!

Read More: using the built-in error object
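
A condensed sketch in the spirit of the linked examples:

// Avoid – a thrown string has no stack trace and fails `err instanceof Error` checks
throw 'product not found';

// Do – the built-in Error keeps the stack and interoperates with every module
throw new Error('product not found');

// Do – extend Error when app-specific fields are needed
class AppError extends Error {
  constructor(name, httpCode, description) {
    super(description);
    this.name = name;
    this.httpCode = httpCode;
  }
}
throw new AppError('ResourceNotFound', 404, 'product not found');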

 

✔ 2.3 Distinguish operational vs programmer errors

TL;DR: Operational errors (e.g. API received an invalid input) refer to known cases where the error impact is fully understood and can be handled thoughtfully. On the other hand, programmer error (e.g. trying to read an undefined variable) refers to unknown code failures that dictate to gracefully restart the application

Otherwise: You may always restart the application when an error appears, but why let ~5000 online users down because of a minor, predicted, operational error? the opposite is also not ideal – keeping the application up when an unknown issue (programmer error) occurred might lead to an unpredicted behavior. Differentiating the two allows acting tactfully and applying a balanced approach based on the given context

Read More: operational vs programmer error

 

✔ 2.4 Handle errors centrally, not within a middleware

TL;DR: Error handling logic such as mail to admin and logging should be encapsulated in a dedicated and centralized object that all endpoints (e.g. Express middleware, cron jobs, unit-testing) call when an error comes in

Otherwise: Not handling errors within a single place will lead to code duplication and probably to improperly handled errors

Read More: handling errors in a centralized place
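
A minimal sketch of the idea; the module layout and the logger are placeholders, not the repository’s exact code:

// errorHandler.js – the single place that decides what an error means
module.exports.handleError = async (err) => {
  logger.error(err);                 // hypothetical mature logger (see 2.7)
  return err.isOperational === true; // tell the caller whether it's safe to keep running
};

// anywhere an error surfaces (Express middleware, cron job, queue consumer):
const { handleError } = require('./errorHandler');

app.use(async (err, req, res, next) => {
  const keepRunning = await handleError(err);
  res.status(err.httpCode || 500).send({ message: err.message });
  if (!keepRunning) process.exit(1); // programmer error: let PM2/Docker restart us
});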

 

✔ 2.5 Document API errors using Swagger or GraphQL

TL;DR: Let your API callers know which errors might come in return so they can handle these thoughtfully without crashing. For RESTful APIs, this is usually done with documentation frameworks like Swagger. If you’re using GraphQL, you can utilize your schema and comments as well.

Otherwise: An API client might decide to crash and restart only because it received back an error it couldn’t understand. Note: the caller of your API might be you (very typical in a microservice environment)

Read More: documenting API errors in Swagger or GraphQL

 

✔ 2.6 Exit the process gracefully when a stranger comes to town

TL;DR: When an unknown error occurs (a developer error, see best practice 2.3) – there is uncertainty about the application healthiness. Common practice suggests restarting the process carefully using a process management tool like Forever or PM2

Otherwise: When an unfamiliar exception occurs, some object might be in a faulty state (e.g. an event emitter which is used globally and not firing events anymore due to some internal failure) and all future requests might fail or behave crazily

Read More: shutting the process

 

✔ 2.7 Use a mature logger to increase error visibility

TL;DR: A set of mature logging tools like Pino or Log4js, will speed-up error discovery and understanding. So forget about console.log

Otherwise: Skimming through console.logs or manually through messy text file without querying tools or a decent log viewer might keep you busy at work until late

Read More: using a mature logger

 

✔ 2.8 Test error flows using your favorite test framework

TL;DR: Whether professional automated QA or plain manual developer testing – Ensure that your code not only satisfies positive scenarios but also handles and returns the right errors. Testing frameworks like Mocha & Chai can handle this easily (see code examples within the “Gist popup”)

Otherwise: Without testing, whether automatically or manually, you can’t rely on your code to return the right errors. Without meaningful errors – there’s no error handling

Read More: testing error flows

 

✔ 2.9 Discover errors and downtime using APM products

TL;DR: Monitoring and performance products (a.k.a APM) proactively gauge your codebase or API so they can automagically highlight errors, crashes, and slow parts that you were missing

Otherwise: You might spend great effort on measuring API performance and downtimes, but you’ll probably never be aware which are your slowest code parts under real-world scenarios and how these affect the UX

Read More: using APM products

 

✔ 2.10 Catch unhandled promise rejections

TL;DR: Any exception thrown within a promise will get swallowed and discarded unless a developer remembers to explicitly handle it. Even if your code is subscribed to process.uncaughtException! Overcome this by registering to the event process.unhandledRejection

Otherwise: Your errors will get swallowed and leave no trace. Nothing to worry about

Read More: catching unhandled promise rejection
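
A sketch along the lines of the linked example; logger and isOperationalError are placeholders (see practices 2.7 and 2.3):

process.on('unhandledRejection', (reason, promise) => {
  // a rejection nobody handled: rethrow so it reaches the uncaughtException handler
  throw reason;
});

process.on('uncaughtException', (error) => {
  logger.error(error);              // hypothetical logger
  if (!isOperationalError(error)) { // hypothetical classifier
    process.exit(1);
  }
});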

 

✔ 2.11 Fail fast, validate arguments using a dedicated library

TL;DR: Assert API input to avoid nasty bugs that are much harder to track later. The validation code is usually tedious unless you are using a very cool helper library like ajv and Joi

Otherwise: Consider this – your function expects a numeric argument “Discount” which the caller forgets to pass, later on, your code checks if Discount!=0 (amount of allowed discount is greater than zero), then it will allow the user to enjoy a discount. OMG, what a nasty bug. Can you see it?

Read More: failing fast
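
A small sketch with Joi, echoing the discount example above (the schema shape is illustrative):

const Joi = require('joi');

const purchaseSchema = Joi.object({
  discount: Joi.number().min(0).required(), // fail fast if missing or negative
});

function applyPurchase(input) {
  const { error, value } = purchaseSchema.validate(input);
  if (error) throw new Error(`Invalid purchase payload: ${error.message}`);
  // safe to proceed: value.discount is guaranteed to be a number >= 0
  return value;
}

applyPurchase({}); // throws immediately: "discount" is required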

 

✔ 2.12 Always await promises before returning to avoid a partial stacktrace

TL;DR: Always do return await when returning a promise to benefit full error stacktrace. If a function returns a promise, that function must be declared as async function and explicitly await the promise before returning it

Otherwise: The function that returns a promise without awaiting won’t appear in the stacktrace. Such missing frames would probably complicate the understanding of the flow that leads to the error, especially if the cause of the abnormal behavior is inside of the missing function

Read More: returning promises
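
For example (fetchUser is a hypothetical promise-returning helper):

// Avoid – the promise is returned un-awaited, so getUser's frame is already
// gone from the stack by the time the rejection surfaces
async function getUser(id) {
  return fetchUser(id);
}

// Do – awaiting before returning keeps getUser in the error's stacktrace
async function getUser(id) {
  return await fetchUser(id);
}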

 


3. Code Style Practices

✔ 3.1 Use ESLint

TL;DR: ESLint is the de-facto standard for checking possible code errors and fixing code style, not only to identify nitty-gritty spacing issues but also to detect serious code anti-patterns like developers throwing errors without classification. Though ESLint can automatically fix code styles, other tools like prettier and beautify are more powerful in formatting the fix and work in conjunction with ESLint

Otherwise: Developers will focus on tedious spacing and line-width concerns and time might be wasted overthinking the project’s code style

Read More: Using ESLint and Prettier

 

✔ 3.2 Node.js specific plugins

TL;DR: On top of ESLint standard rules that cover vanilla JavaScript, add Node.js specific plugins like eslint-plugin-node, eslint-plugin-mocha and eslint-plugin-node-security

Otherwise: Many faulty Node.js code patterns might escape under the radar. For example, developers might require(variableAsPath) files with a variable given as a path which allows attackers to execute any JS script. Node.js linters can detect such patterns and complain early

 

✔ 3.3 Start a Codeblock’s Curly Braces on the Same Line

TL;DR: The opening curly braces of a code block should be on the same line as the opening statement

Code Example

// Do
function someFunction() {
  // code block
}

// Avoid
function someFunction()
{
  // code block
}

Otherwise: Deferring from this best practice might lead to unexpected results, as seen in the StackOverflow thread below:

Read more: “Why do results vary based on curly brace placement?” (StackOverflow)

 

✔ 3.4 Separate your statements properly

No matter if you use semicolons or not to separate your statements, knowing the common pitfalls of improper linebreaks or automatic semicolon insertion, will help you to eliminate regular syntax errors.

TL;DR: Use ESLint to gain awareness about separation concerns. Prettier or Standardjs can automatically resolve these issues.

Otherwise: As seen in the previous section, JavaScript’s interpreter automatically adds a semicolon at the end of a statement if there isn’t one, or considers a statement as not ended where it should, which might lead to some undesired results. You can use assignments and avoid using immediately invoked function expressions to prevent most of the unexpected errors.

Code example

// Do
function doThing() {
    // ...
}

doThing()

// Do

const items = [1, 2, 3]
items.forEach(console.log)

// Avoid — throws exception
const m = new Map()
const a = [1,2,3]
[...m.values()].forEach(console.log)
> [...m.values()].forEach(console.log)
>  ^^^
> SyntaxError: Unexpected token ...

// Avoid — throws exception
const count = 2 // it tries to run 2(), but 2 is not a function
(function doSomething() {
  // do something amazing
}())
// put a semicolon before the immediate invoked function, after the const definition, save the return value of the anonymous function to a variable or avoid IIFEs altogether

Read more: “Semi ESLint rule” | Read more: “No unexpected multiline ESLint rule”

 

✔ 3.5 Name your functions

TL;DR: Name all functions, including closures and callbacks. Avoid anonymous functions. This is especially useful when profiling a node app. Naming all functions will allow you to easily understand what you’re looking at when checking a memory snapshot

Otherwise: Debugging production issues using a core dump (memory snapshot) might become challenging as you notice significant memory consumption from anonymous functions

 

✔ 3.6 Use naming conventions for variables, constants, functions and classes

TL;DR: Use lowerCamelCase when naming constants, variables and functions and UpperCamelCase (capital first letter as well) when naming classes. This will help you to easily distinguish between plain variables/functions, and classes that require instantiation. Use descriptive names, but try to keep them short

Otherwise: JavaScript is the only language in the world that allows invoking a constructor (“Class”) directly without instantiating it first. Consequently, Classes and function-constructors are differentiated by starting with UpperCamelCase

3.6 Code Example

// for class name we use UpperCamelCase
class SomeClassExample {}

// for const names we use the const keyword and lowerCamelCase
const config = {
  key: "value",
};

// for variables and functions names we use lowerCamelCase
let someVariableExample = "value";
function doSomething() {}

 

✔ 3.7 Prefer const over let. Ditch the var

TL;DR: Using const means that once a variable is assigned, it cannot be reassigned. Preferring const will help you to not be tempted to use the same variable for different uses, and make your code clearer. If a variable needs to be reassigned, in a for loop, for example, use let to declare it. Another important aspect of let is that a variable declared using it is only available in the block scope in which it was defined. var is function scoped, not block-scoped, and shouldn’t be used in ES6 now that you have const and let at your disposal

Otherwise: Debugging becomes way more cumbersome when following a variable that frequently changes

Read more: JavaScript ES6+: var, let, or const?
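
A quick illustration:

const maxRetries = 3;  // single assignment – const documents that intent
// maxRetries = 4;     // would throw: TypeError: Assignment to constant variable

for (let i = 0; i < maxRetries; i++) {
  // `i` exists only inside this block
}
// console.log(i);     // would throw: ReferenceError – unlike `var`, `let` doesn't leak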

 

✔ 3.8 Require modules first, not inside functions

TL;DR: Require modules at the beginning of each file, before and outside of any functions. This simple best practice will not only help you easily and quickly tell the dependencies of a file right at the top but also avoids a couple of potential problems

Otherwise: Requires are run synchronously by Node.js. If they are called from within a function, it may block other requests from being handled at a more critical time. Also, if a required module or any of its dependencies throw an error and crash the server, it is best to find out about it as soon as possible, which might not be the case if that module is required from within a function

 

✔ 3.9 Require modules by folders, as opposed to the files directly

TL;DR: When developing a module/library in a folder, place an index.js file that exposes the module’s internals so every consumer will pass through it. This serves as an ‘interface’ to your module and eases future changes without breaking the contract

Otherwise: Changing the internal structure of files or the signature may break the interface with clients

3.9 Code example

// Do
module.exports.SMSProvider = require("./SMSProvider");
module.exports.SMSNumberResolver = require("./SMSNumberResolver");

// Avoid
module.exports.SMSProvider = require("./SMSProvider/SMSProvider.js");
module.exports.SMSNumberResolver = require("./SMSNumberResolver/SMSNumberResolver.js");

 

✔ 3.10 Use the === operator

TL;DR: Prefer the strict equality operator === over the weaker abstract equality operator ==. == will compare two variables after converting them to a common type. There is no type conversion in ===, and both variables must be of the same type to be equal

Otherwise: Unequal variables might return true when compared with the == operator

3.10 Code example

"" == "0"; // false
0 == ""; // true
0 == "0"; // true

false == "false"; // false
false == "0"; // true

false == undefined; // false
false == null; // false
null == undefined; // true

" \t\r\n " == 0; // true

All statements above will return false if used with ===

 

✔ 3.11 Use Async Await, avoid callbacks

TL;DR: Node 8 LTS now has full support for Async-await. This is a new way of dealing with asynchronous code which supersedes callbacks and promises. Async-await is non-blocking, and it makes asynchronous code look synchronous. The best gift you can give to your code is using async-await which provides a much more compact and familiar code syntax like try-catch

Otherwise: Handling async errors in callback style is probably the fastest way to hell – this style forces you to check errors all over, deal with awkward code nesting, and makes it difficult to reason about the code flow

Read more: Guide to async-await 1.0

 

✔ 3.12 Use arrow function expressions (=>)

TL;DR: Though it’s recommended to use async-await and avoid function parameters, when dealing with older APIs that accept promises or callbacks, arrow functions make the code structure more compact and keep the lexical context of the root function (i.e. this)

Otherwise: Longer code (in ES5 functions) is more prone to bugs and cumbersome to read

Read more: It’s Time to Embrace Arrow Functions

 


4. Testing And Overall Quality Practices

✔ 4.1 At the very least, write API (component) testing

TL;DR: Most projects just don’t have any automated testing due to short timetables or often the ‘testing project’ ran out of control and was abandoned. For that reason, prioritize and start with API testing which is the easiest way to write and provides more coverage than unit testing (you may even craft API tests without code using tools like Postman). Afterward, should you have more resources and time, continue with advanced test types like unit testing, DB testing, performance testing, etc

Otherwise: You may spend long days on writing unit tests to find out that you got only 20% system coverage

 

✔ 4.2 Include 3 parts in each test name

TL;DR: Make the test speak at the requirements level so it’s self-explanatory also to QA engineers and developers who are not familiar with the code internals. State in the test name what is being tested (unit under test), under what circumstances, and what is the expected result

Otherwise: A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?

Read More: Include 3 parts in each test name

 

✔ 4.3 Structure tests by the AAA pattern

TL;DR: Structure your tests with 3 well-separated sections: Arrange, Act & Assert (AAA). The first part includes the test setup, then the execution of the unit under test, and finally the assertion phase. Following this structure guarantees that the reader spends no brain CPU on understanding the test plan

Otherwise: Not only do you spend long daily hours on understanding the main code, but what should have been the simplest part of the day (testing) also stretches your brain

Read More: Structure tests by the AAA pattern

 

✔ 4.4 Detect code issues with a linter

TL;DR: Use a code linter to check the basic quality and detect anti-patterns early. Run it before any test and add it as a pre-commit git-hook to minimize the time needed to review and correct any issue. Also check Section 3 on Code Style Practices

Otherwise: You may let pass some anti-pattern and possible vulnerable code to your production environment.

 

✔ 4.5 Avoid global test fixtures and seeds, add data per-test

TL;DR: To prevent test coupling and easily reason about the test flow, each test should add and act on its own set of DB rows. Whenever a test needs to pull or assume the existence of some DB data – it must explicitly add that data and avoid mutating any other records

Otherwise: Consider a scenario where deployment is aborted due to failing tests, team is now going to spend precious investigation time that ends in a sad conclusion: the system works well, the tests however interfere with each other and break the build

Read More: Avoid global test fixtures

 

✔ 4.6 Constantly inspect for vulnerable dependencies

TL;DR: Even the most reputable dependencies such as Express have known vulnerabilities. This can get easily tamed using community and commercial tools such as npm audit and snyk.io that can be invoked from your CI on every build

Otherwise: Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats. Quite tedious

 

✔ 4.7 Tag your tests

TL;DR: Different tests must run on different scenarios: quick smoke, IO-less tests should run when a developer saves or commits a file, full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep 'sanity'

Otherwise: Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests

 

✔ 4.8 Check your test coverage, it helps to identify wrong test patterns

TL;DR: Code coverage tools like Istanbul/NYC are great for 3 reasons: they come for free (no effort is required to benefit from these reports), they help to identify a decrease in testing coverage, and last but not least they highlight testing mismatches: by looking at colored code coverage reports you may notice, for example, code areas that are never tested like catch clauses (meaning that tests only invoke the happy paths and not how the app behaves on errors). Set it to fail builds if the coverage falls under a certain threshold

Otherwise: There won’t be any automated metric telling you when a large portion of your code is not covered by testing

 

✔ 4.9 Inspect for outdated packages

TL;DR: Use your preferred tool (e.g. npm outdated or npm-check-updates) to detect installed outdated packages, inject this check into your CI pipeline and even make a build fail in a severe scenario. For example, a severe scenario might be when an installed package is 5 patch commits behind (e.g. local version is 1.3.1 and repository version is 1.3.8) or it is tagged as deprecated by its author – kill the build and prevent deploying this version

Otherwise: Your production will run packages that have been explicitly tagged by their author as risky

 

✔ 4.10 Use production-like environment for e2e testing

TL;DR: End to end (e2e) testing which includes live data used to be the weakest link of the CI process as it depends on multiple heavy services like DB. Use an environment which is as close to your real production environment as possible (e.g. by spinning up the full stack, database included, with docker-compose)

Otherwise: Without docker-compose, teams must maintain a testing DB for each testing environment including developers’ machines, keep all those DBs in sync so test results won’t vary across environments

 

✔ 4.11 Refactor regularly using static analysis tools

TL;DR: Using static analysis tools helps by giving objective ways to improve code quality and keeps your code maintainable. You can add static analysis tools to your CI build to fail when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity), and follow the history and progress of code issues. Two examples of tools you can use are Sonarqube (2,600+ stars) and Code Climate (1,500+ stars).

Otherwise: With poor code quality, bugs and performance will always be an issue that no shiny new library or state of the art features can fix

Read More: Refactoring!

 

✔ 4.12 Carefully choose your CI platform (Jenkins vs CircleCI vs Travis vs Rest of the world)

TL;DR: Your continuous integration platform (CICD) will host all the quality tools (e.g. test, lint) so it should come with a vibrant ecosystem of plugins. Jenkins used to be the default for many projects as it has the biggest community along with a very powerful platform at the price of a complex setup that demands a steep learning curve. Nowadays, it has become much easier to set up a CI solution using SaaS tools like CircleCI and others. These tools allow crafting a flexible CI pipeline without the burden of managing the whole infrastructure. Eventually, it’s a trade-off between robustness and speed – choose your side carefully

Otherwise: Choosing some niche vendor might get you blocked once you need some advanced customization. On the other hand, going with Jenkins might burn precious time on infrastructure setup

Read More: Choosing CI platform

✔ 4.13 Test your middlewares in isolation

TL;DR: When a middleware holds some immense logic that spans many requests, it is worth testing it in isolation without waking up the entire web framework. This can be easily achieved by stubbing and spying on the {req, res, next} objects

Otherwise: A bug in Express middleware === a bug in all or most requests

Read More: Test middlewares in isolation
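
A hand-rolled sketch of the idea, assuming a Jest-style runner; the middleware path and expected behavior are placeholders:

const ensureToken = require('./middleware/ensureToken'); // hypothetical unit under test

test('responds 403 and stops the chain when no token is present', () => {
  // stub only the pieces of {req, res, next} the middleware touches
  const req = { headers: {} };
  const res = {
    statusCode: null,
    status(code) { this.statusCode = code; return this; },
    send() { return this; },
  };
  let nextCalled = false;

  ensureToken(req, res, () => { nextCalled = true; });

  expect(res.statusCode).toBe(403);
  expect(nextCalled).toBe(false);
});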

 


5. Going To Production Practices

✔ 5.1. Monitoring

TL;DR: Monitoring is a game of finding out issues before customers do – obviously this should be assigned unprecedented importance. The market is overwhelmed with offers thus consider starting with defining the basic metrics you must follow (my suggestions inside), then go over additional fancy features and choose the solution that ticks all boxes. Click ‘The Gist’ below for an overview of the solutions

Otherwise: Failure === disappointed customers. Simple

Read More: Monitoring!

 

✔ 5.2. Increase transparency using smart logging

TL;DR: Logs can be a dumb warehouse of debug statements or the enabler of a beautiful dashboard that tells the story of your app. Plan your logging platform from day 1: how logs are collected, stored and analyzed to ensure that the desired information (e.g. error rate, following an entire transaction through services and servers, etc) can really be extracted

Otherwise: You end up with a black box that is hard to reason about, then you start re-writing all logging statements to add additional information

Read More: Increase transparency using smart logging

 

✔ 5.3. Delegate anything possible (e.g. gzip, SSL) to a reverse proxy

TL;DR: Node is awfully bad at doing CPU intensive tasks like gzipping, SSL termination, etc. You should use ‘real’ middleware services like nginx, HAproxy or cloud vendor services instead

Otherwise: Your poor single thread will stay busy doing infrastructural tasks instead of dealing with your application core and performance will degrade accordingly

Read More: Delegate anything possible (e.g. gzip, SSL) to a reverse proxy

 

✔ 5.4. Lock dependencies

TL;DR: Your code must be identical across all environments, but amazingly npm lets dependencies drift across environments by default – when you install packages at various environments it tries to fetch packages’ latest patch version. Overcome this by using npm config files, .npmrc, that tell each environment to save the exact (not the latest) version of each package. Alternatively, for finer grained control use npm shrinkwrap. *Update: as of NPM5, dependencies are locked by default. The new package manager in town, Yarn, also got us covered by default

Otherwise: QA will thoroughly test the code and approve a version that will behave differently in production. Even worse, different servers in the same production cluster might run different code

Read More: Lock dependencies
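
For the pre-NPM5 workflow the TL;DR describes, the .npmrc flag looks like this (shown for completeness; with NPM5+ the generated package-lock.json already does the heavy lifting):

# .npmrc – checked into the repo so every environment saves exact versions
save-exact=true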

 

✔ 5.5. Guard process uptime using the right tool

TL;DR: The process must go on and get restarted upon failures. For simple scenarios, process management tools like PM2 might be enough but in today’s ‘dockerized’ world, cluster management tools should be considered as well

Otherwise: Running dozens of instances without a clear strategy and too many tools together (cluster management, docker, PM2) might lead to DevOps chaos

Read More: Guard process uptime using the right tool

 

✔ 5.6. Utilize all CPU cores

TL;DR: At its basic form, a Node app runs on a single CPU core while all others are left idling. It’s your duty to replicate the Node process and utilize all CPUs – For small-medium apps you may use Node Cluster or PM2. For a larger app consider replicating the process using some Docker cluster (e.g. K8S, ECS) or deployment scripts that are based on Linux init system (e.g. systemd)

Otherwise: Your app will likely utilize only 25% of its available resources(!) or even less. Note that a typical server has 4 CPU cores or more, naive deployment of Node.js utilizes only 1 (even using PaaS services like AWS beanstalk!)

link Read More: Utilize all CPU cores

 

✔ 5.7. Create a ‘maintenance endpoint’

TL;DR: Expose a set of system-related information, like memory usage and REPL, etc in a secured API. Although it’s highly recommended to rely on standard and battle-tested tools, some valuable information and operations are easier done using code

Otherwise: You’ll find that you’re performing many “diagnostic deploys” – shipping code to production only to extract some information for diagnostic purposes

link Read More: Create a ‘maintenance endpoint’

 

✔ 5.8. Discover errors and downtime using APM products

TL;DR: Application monitoring and performance products (a.k.a. APM) proactively gauge codebase and API so they can auto-magically go beyond traditional monitoring and measure the overall user-experience across services and tiers. For example, some APM products can highlight a transaction that loads too slow on the end-user’s side while suggesting the root cause

Otherwise: You might spend great effort on measuring API performance and downtime, but you'll probably never know which are your slowest code parts under real-world scenarios and how they affect the UX

link Read More: Discover errors and downtime using APM products

 

✔ 5.9. Make your code production-ready

TL;DR: Code with the end in mind, plan for production from day 1. This sounds a bit vague so I’ve compiled a few development tips that are closely related to production maintenance (click Gist below)

Otherwise: A world champion IT/DevOps guy won’t save a system that is badly written

link Read More: Make your code production-ready

 

✔ 5.10. Measure and guard the memory usage

TL;DR: Node.js has controversial relationships with memory: the v8 engine has soft limits on memory usage (1.4GB) and there are known paths to leak memory in Node’s code – thus watching Node’s process memory is a must. In small apps, you may gauge memory periodically using shell commands but in medium-large apps consider baking your memory watch into a robust monitoring system

Otherwise: Your process memory might leak a hundred megabytes a day like how it happened at Walmart

link Read More: Measure and guard the memory usage

 

✔ 5.11. Get your frontend assets out of Node

TL;DR: Serve frontend content using dedicated middleware (nginx, S3, CDN) because Node performance really gets hurt when dealing with many static files due to its single-threaded model

Otherwise: Your single Node thread will be busy streaming hundreds of html/images/angular/react files instead of allocating all its resources for the task it was born for – serving dynamic content

link Read More: Get your frontend assets out of Node

 

✔ 5.12. Be stateless, kill your servers almost every day

TL;DR: Store any type of data (e.g. user sessions, cache, uploaded files) within external data stores. Consider ‘killing’ your servers periodically or use ‘serverless’ platform (e.g. AWS Lambda) that explicitly enforces a stateless behavior

Otherwise: Failure at a given server will result in application downtime instead of just killing a faulty machine. Moreover, scaling-out elasticity will get more challenging due to the reliance on a specific server

link Read More: Be stateless, kill your Servers almost every day

 

✔ 5.13. Use tools that automatically detect vulnerabilities

TL;DR: Even the most reputable dependencies such as Express have known vulnerabilities (from time to time) that can put a system at risk. This can be easily tamed using community and commercial tools that constantly check for vulnerabilities and warn (locally or at GitHub), some can even patch them immediately

Otherwise: Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats. Quite tedious

link Read More: Use tools that automatically detect vulnerabilities

 

✔ 5.14. Assign a transaction id to each log statement

Also known as correlation id / transit id / tracing id / request id / request context / etc.

TL;DR: Assign the same identifier, transaction-id: {some value}, to each log entry within a single request. Then when inspecting errors in logs, easily conclude what happened before and after. Until version 14 of Node, this was not easy to achieve due to Node's async nature, but since AsyncLocalStorage came to town, this became possible and easier than ever. See the code sketch below
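A minimal sketch using Node's built-in AsyncLocalStorage (the Express app and log helper are illustrative):

const express = require('express');
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const app = express();
const context = new AsyncLocalStorage();

// open a new store per request; everything async downstream can read it
app.use((req, res, next) => {
  context.run({ requestId: randomUUID() }, next);
});

// any logger in the codebase can now stamp the current request id
function log(message) {
  const store = context.getStore();
  console.log(`[${store ? store.requestId : '-'}] ${message}`);
}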

Otherwise: Looking at a production error log without the context – what happened before – makes it much harder and slower to reason about the issue

link Read More: Assign ‘TransactionId’ to each log statement

 

✔ 5.15. Set NODE_ENV=production

TL;DR: Set the environment variable NODE_ENV to ‘production’ or ‘development’ to flag whether production optimizations should get activated – many npm packages determine the current environment and optimize their code for production

Otherwise: Omitting this simple property might greatly degrade performance. For example, when using Express for server-side rendering omitting NODE_ENV makes it slower by a factor of three!

link Read More: Set NODE_ENV=production

 

✔ 5.16. Design automated, atomic and zero-downtime deployments

TL;DR: Research shows that teams who perform many deployments lower the probability of severe production issues. Fast and automated deployments that don’t require risky manual steps and service downtime significantly improve the deployment process. You should probably achieve this using Docker combined with CI tools as they became the industry standard for streamlined deployment

Otherwise: Long deployments -> production downtime & human-related error -> team unconfident in making deployment -> fewer deployments and features

 

✔ 5.17. Use an LTS release of Node.js

TL;DR: Ensure you are using an LTS version of Node.js to receive critical bug fixes, security updates and performance improvements

Otherwise: Newly discovered bugs or vulnerabilities could be used to exploit an application running in production, and your application may become unsupported by various modules and harder to maintain

link Read More: Use an LTS release of Node.js

 

✔ 5.18. Don’t route logs within the app

TL;DR: Log destinations should not be hard-coded by developers within the application code, but instead should be defined by the execution environment the application runs in. Developers should write logs to stdout using a logger utility and then let the execution environment (container, server, etc.) pipe the stdout stream to the appropriate destination (i.e. Splunk, Graylog, ElasticSearch, etc.).

Otherwise: Application handling log routing === hard to scale, loss of logs, poor separation of concerns

link Read More: Log Routing

 

✔ 5.19. Install your packages with npm ci

TL;DR: You have to be sure that production code uses the exact version of the packages you have tested it with. Run npm ci to strictly do a clean install of your dependencies matching package.json and package-lock.json. Using this command is recommended in automated environments such as continuous integration pipelines.

Otherwise: QA will thoroughly test the code and approve a version that will behave differently in production. Even worse, different servers in the same production cluster might run different code.

link Read More: Use npm ci

 

arrow_up Return to top

6. Security Best Practices


✔ 6.1. Embrace linter security rules

 

TL;DR: Make use of security-related linter plugins such as eslint-plugin-security to catch security vulnerabilities and issues as early as possible, preferably while they're being coded. This can help catch security weaknesses like using eval, invoking a child process or importing a module with a string literal (e.g. user input). Click ‘Read more’ below to see code examples that will get caught by a security linter

Otherwise: What could have been a straightforward security weakness during development becomes a major issue in production. Also, the project may not follow consistent code security practices, leading to vulnerabilities being introduced, or sensitive secrets committed into remote repositories

link Read More: Lint rules

 

✔ 6.2. Limit concurrent requests using a middleware

TL;DR: DOS attacks are very popular and relatively easy to conduct. Implement rate limiting using an external service such as cloud load balancers, cloud firewalls, nginx, rate-limiter-flexible package, or (for smaller and less critical apps) a rate-limiting middleware (e.g. express-rate-limit)
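For the smaller-app case, a minimal sketch with the express-rate-limit middleware (the window and limit values are illustrative):

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// at most 100 requests per IP per 15-minute window
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
});

app.use('/api/', limiter);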

Otherwise: An application could be subject to an attack resulting in a denial of service where real users receive a degraded or unavailable service.

link Read More: Implement rate limiting

 

✔ 6.3 Extract secrets from config files or use packages to encrypt them

 

TL;DR: Never store plain-text secrets in configuration files or source code. Instead, make use of secret-management systems like Vault products, Kubernetes/Docker Secrets, or using environment variables. As a last resort, secrets stored in source control must be encrypted and managed (rolling keys, expiring, auditing, etc). Make use of pre-commit/push hooks to prevent committing secrets accidentally

Otherwise: Source control, even for private repositories, can mistakenly be made public, at which point all secrets are exposed. Access to source control for an external party will inadvertently provide access to related systems (databases, apis, services, etc).

link Read More: Secret management

 

✔ 6.4. Prevent query injection vulnerabilities with ORM/ODM libraries

TL;DR: To prevent SQL/NoSQL injection and other malicious attacks, always make use of an ORM/ODM or a database library that escapes data or supports named or indexed parameterized queries, and takes care of validating user input for expected types. Never just use JavaScript template strings or string concatenation to inject values into queries as this opens your application to a wide spectrum of vulnerabilities. All the reputable Node.js data access libraries (e.g. Sequelize, Knex, mongoose) have built-in protection against injection attacks.

Otherwise: Unvalidated or unsanitized user input could lead to operator injection when working with MongoDB for NoSQL, and not using a proper sanitization system or ORM will easily allow SQL injection attacks, creating a giant vulnerability.

link Read More: Query injection prevention using ORM/ODM libraries

 

✔ 6.5. Collection of generic security best practices

TL;DR: This is a collection of security advice that is not related directly to Node.js – the Node implementation is not much different than any other language. Click read more to skim through.

link Read More: Common security best practices

 

✔ 6.6. Adjust the HTTP response headers for enhanced security

TL;DR: Your application should be using secure headers to prevent attackers from using common attacks like cross-site scripting (XSS), clickjacking and other malicious attacks. These can be configured easily using modules like helmet.
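For example, a sketch with helmet, which sets a sensible batch of headers in one call:

const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(helmet()); // sets security headers (X-Content-Type-Options, frameguard, etc.) on every response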

Otherwise: Attackers could perform direct attacks on your application’s users, leading to huge security vulnerabilities

link Read More: Using secure headers in your application

 

✔ 6.7. Constantly and automatically inspect for vulnerable dependencies

TL;DR: With the npm ecosystem it is common to have many dependencies for a project. Dependencies should always be kept in check as new vulnerabilities are found. Use tools like npm audit or snyk to track, monitor and patch vulnerable dependencies. Integrate these tools with your CI setup so you catch a vulnerable dependency before it makes it to production.

Otherwise: An attacker could detect your web framework and attack all its known vulnerabilities.

link Read More: Dependency security

 

✔ 6.8. Protect Users’ Passwords/Secrets using bcrypt or scrypt

TL;DR: Passwords or secrets (e.g. API keys) should be stored using a secure hash + salt function like bcrypt, scrypt, or worst case pbkdf2.
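A sketch with bcrypt (the cost factor is a common starting point, not a rule):

const bcrypt = require('bcrypt');

async function register(plainPassword) {
  // cost factor 12: raise it as hardware allows
  const hash = await bcrypt.hash(plainPassword, 12);
  return hash; // store the hash, never the plain password
}

async function login(plainPassword, storedHash) {
  return bcrypt.compare(plainPassword, storedHash); // resolves to true on match
}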

Otherwise: Passwords and secrets that are stored without using a secure function are vulnerable to brute forcing and dictionary attacks that will lead to their disclosure eventually.

link Read More: User Passwords

 

✔ 6.9. Escape HTML, JS and CSS output

TL;DR: Untrusted data that is sent down to the browser might get executed instead of just being displayed; this is commonly referred to as a cross-site-scripting (XSS) attack. Mitigate this by using dedicated libraries that explicitly mark the data as pure content that should never get executed (i.e. encoding, escaping)

Otherwise: An attacker might store malicious JavaScript code in your DB which will then be sent as-is to the poor clients

link Read More: Escape output

 

✔ 6.10. Validate incoming JSON schemas

 

TL;DR: Validate the incoming requests’ body payload and ensure it meets expectations, fail fast if it doesn’t. To avoid tedious validation coding within each route you may use lightweight JSON-based validation schemas such as jsonschema or joi
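A sketch with joi (the schema fields and route wiring are illustrative):

const Joi = require('joi');

const memberSchema = Joi.object({
  username: Joi.string().alphanum().min(3).max(30).required(),
  email: Joi.string().email().required(),
});

// fail fast in a middleware before any business logic runs
function validateMember(req, res, next) {
  const { error } = memberSchema.validate(req.body);
  if (error) return res.status(400).json({ message: error.message });
  next();
}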

Otherwise: Your generosity and permissive approach greatly increases the attack surface and encourages the attacker to try out many inputs until they find some combination to crash the application

link Read More: Validate incoming JSON schemas

 

✔ 6.11. Support blocklisting JWTs

TL;DR: When using JSON Web Tokens (for example, with Passport.js), by default there’s no mechanism to revoke access from issued tokens. Once you discover some malicious user activity, there’s no way to stop them from accessing the system as long as they hold a valid token. Mitigate this by implementing a blocklist of untrusted tokens that are validated on each request.

Otherwise: Expired, or misplaced tokens could be used maliciously by a third party to access an application and impersonate the owner of the token.

link Read More: Blocklist JSON Web Tokens

 

✔ 6.12. Prevent brute-force attacks against authorization

TL;DR: A simple and powerful technique is to limit authorization attempts using two metrics:

  1. The first is number of consecutive failed attempts by the same user unique ID/name and IP address.
  2. The second is number of failed attempts from an IP address over some long period of time. For example, block an IP address if it makes 100 failed attempts in one day.

Otherwise: An attacker can issue unlimited automated password attempts to gain access to privileged accounts on an application

link Read More: Login rate limiting

 

✔ 6.13. Run Node.js as non-root user

TL;DR: There is a common scenario where Node.js runs as a root user with unlimited permissions. For example, this is the default behaviour in Docker containers. It’s recommended to create a non-root user and either bake it into the Docker image (examples given below) or run the process on this user’s behalf by invoking the container with the flag “-u username”
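Baked into the image, this can be as small as the following sketch (the official node images already ship an unprivileged 'node' user):

FROM node:14.4.0-slim
# switch away from root before the app starts
USER node
CMD ["node", "server.js"]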

Otherwise: An attacker who manages to run a script on the server gets unlimited power over the local machine (e.g. change iptable and re-route traffic to his server)

link Read More: Run Node.js as non-root user

 

✔ 6.14. Limit payload size using a reverse-proxy or a middleware

 

TL;DR: The bigger the body payload is, the harder your single thread works in processing it. This is an opportunity for attackers to bring servers to their knees without a tremendous amount of requests (DOS/DDOS attacks). Mitigate this by limiting the body size of incoming requests on the edge (e.g. firewall, ELB) or by configuring express body parser to accept only small-size payloads
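For the express side, a one-line sketch (the 300kb limit is illustrative):

const express = require('express');
const app = express();

// reject JSON bodies larger than 300kb before they reach any handler
app.use(express.json({ limit: '300kb' }));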

Otherwise: Your application will have to deal with large requests, unable to process the other important work it has to accomplish, leading to performance implications and vulnerability towards DOS attacks

link Read More: Limit payload size

 

✔ 6.15. Avoid JavaScript eval statements

  

TL;DR: eval is evil as it allows executing custom JavaScript code during run time. This is not just a performance concern but also an important security concern due to malicious JavaScript code that may be sourced from user input. Another language feature that should be avoided is new Function constructor. setTimeout and setInterval should never be passed dynamic JavaScript code either.

Otherwise: Malicious JavaScript code finds a way into text passed into eval or other real-time evaluating JavaScript language functions, and will gain complete access to JavaScript permissions on the page. This vulnerability is often manifested as an XSS attack.

link Read More: Avoid JavaScript eval statements

 

✔ 6.16. Prevent evil RegEx from overloading your single thread execution

TL;DR: Regular Expressions, while being handy, pose a real threat to JavaScript applications at large, and the Node.js platform in particular. A user input for text to match might require an outstanding amount of CPU cycles to process. RegEx processing might be inefficient to an extent that a single request that validates 10 words can block the entire event loop for 6 seconds and set the CPU on fire. For that reason, prefer third-party validation packages like validator.js instead of writing your own Regex patterns, or make use of safe-regex to detect vulnerable regex patterns

Otherwise: Poorly written regexes could be susceptible to Regular Expression DoS attacks that will block the event loop completely. For example, the popular moment package was found vulnerable with malicious RegEx usage in November of 2017

link Read More: Prevent malicious RegEx

 

✔ 6.17. Avoid module loading using a variable

  

TL;DR: Avoid requiring/importing another file with a path that was given as parameter due to the concern that it could have originated from user input. This rule can be extended for accessing files in general (i.e. fs.readFile()) or other sensitive resource access with dynamic variables originating from user input. Eslint-plugin-security linter can catch such patterns and warn early enough

Otherwise: Malicious user input could find its way to a parameter that is used to require tampered files, for example, a previously uploaded file on the file system, or access already existing system files.

link Read More: Safe module loading

 

✔ 6.18. Run unsafe code in a sandbox

  

TL;DR: When tasked to run external code that is given at run-time (e.g. plugin), use any sort of ‘sandbox’ execution environment that isolates and guards the main code against the plugin. This can be achieved using a dedicated process (e.g. cluster.fork()), serverless environment or dedicated npm packages that act as a sandbox

Otherwise: A plugin can attack through an endless variety of options like infinite loops, memory overloading, and access to sensitive process environment variables

link Read More: Run unsafe code in a sandbox

 

✔ 6.19. Take extra care when working with child processes

  

TL;DR: Avoid using child processes when possible and validate and sanitize input to mitigate shell injection attacks if you still have to. Prefer using child_process.execFile which by definition will only execute a single command with a set of attributes and will not allow shell parameter expansion.
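A sketch of the safer variant (userSuppliedRepoUrl is a hypothetical, already-validated input):

const { execFile } = require('child_process');

// arguments are passed as an array – no shell, no parameter expansion
execFile('git', ['clone', userSuppliedRepoUrl, './repo'], (error, stdout) => {
  if (error) return console.error('command failed', error);
  console.log(stdout);
});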

Otherwise: Naive use of child processes could result in remote command execution or shell injection attacks due to malicious user input passed to an unsanitized system command.

link Read More: Be cautious when working with child processes

 

✔ 6.20. Hide error details from clients

TL;DR: An integrated express error handler hides the error details by default. However, great are the chances that you implement your own error handling logic with custom Error objects (considered by many as a best practice). If you do so, ensure not to return the entire Error object to the client, which might contain some sensitive application details

Otherwise: Sensitive application details such as server file paths, third party modules in use, and other internal workflows of the application which could be exploited by an attacker, could be leaked from information found in a stack trace

link Read More: Hide error details from client

 

✔ 6.21. Configure 2FA for npm or Yarn

TL;DR: Any step in the development chain should be protected with MFA (multi-factor authentication), npm/Yarn are a sweet opportunity for attackers who can get their hands on some developer’s password. Using developer credentials, attackers can inject malicious code into libraries that are widely installed across projects and services. Maybe even across the web if published in public. Enabling 2-factor-authentication in npm leaves almost zero chances for attackers to alter your package code.

Otherwise: Have you heard about the eslint developer whose password was hijacked?

 

✔ 6.22. Modify session middleware settings

TL;DR: Each web framework and technology has its known weaknesses - telling an attacker which web framework we use is a great help for them. Using the default settings for session middlewares can expose your app to module- and framework-specific hijacking attacks in a similar way to the X-Powered-By header. Try hiding anything that identifies and reveals your tech stack (E.g. Node.js, express)

Otherwise: Cookies could be sent over insecure connections, and an attacker might use session identification to identify the underlying framework of the web application, as well as module-specific vulnerabilities

link Read More: Cookie and session security

 

✔ 6.23. Avoid DOS attacks by explicitly setting when a process should crash

TL;DR: The Node process will crash when errors are not handled. Many best practices even recommend exiting even though an error was caught and got handled. Express, for example, will crash on any asynchronous error - unless you wrap routes with a catch clause. This opens a very sweet attack spot for attackers who recognize what input makes the process crash and repeatedly send the same request. There's no instant remedy for this, but a few techniques can mitigate the pain: alert with critical severity anytime a process crashes due to an unhandled error, validate the input and avoid crashing the process due to invalid user input, wrap all routes with a catch, and consider not crashing when an error originates within a request (as opposed to what happens globally)

Otherwise: This is just an educated guess: given many Node.js applications, if we try passing an empty JSON body to all POST requests - a handful of applications will crash. At that point, we can just repeat sending the same request to take down the applications with ease

 

✔ 6.24. Prevent unsafe redirects

TL;DR: Redirects that do not validate user input can enable attackers to launch phishing scams, steal user credentials, and perform other malicious actions.

Otherwise: If an attacker discovers that you are not validating external, user-supplied input, they may exploit this vulnerability by posting specially-crafted links on forums, social media, and other public places to get users to click it.

link Read More: Prevent unsafe redirects

 

✔ 6.25. Avoid publishing secrets to the npm registry

TL;DR: Precautions should be taken to avoid the risk of accidentally publishing secrets to public npm registries. An .npmignore file can be used to ignore specific files or folders, or the files array in package.json can act as an allow list.

Otherwise: Your project’s API keys, passwords or other secrets are open to be abused by anyone who comes across them, which may result in financial loss, impersonation, and other risks.

link Read More: Avoid publishing secrets

arrow_up Return to top

7. Draft: Performance Best Practices

Our contributors are working on this section. Would you like to join?

 

✔ 7.1. Don’t block the event loop

TL;DR: Avoid CPU intensive tasks as they will block the mostly single-threaded Event Loop and offload those to a dedicated thread, process or even a different technology based on the context.
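One way to offload such work is Node's built-in worker_threads module; a minimal sketch (heavy-task.js and its computation are illustrative):

// main.js – keep the event loop free, delegate the heavy work
const { Worker } = require('worker_threads');

function runHeavyTask(input) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./heavy-task.js', { workerData: input });
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

// heavy-task.js – runs on a separate thread
// const { parentPort, workerData } = require('worker_threads');
// parentPort.postMessage(expensiveComputation(workerData));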

Otherwise: As the Event Loop is blocked, Node.js will be unable to handle other requests, thus causing delays for concurrent users. 3000 users are waiting for a response, the content is ready to be served, but one single request blocks the server from dispatching the results back

link Read More: Do not block the event loop

 

✔ 7.2. Prefer native JS methods over user-land utils like Lodash

TL;DR: It’s often more penalising to use utility libraries like lodash and underscore over native methods, as it leads to unneeded dependencies and slower performance. Bear in mind that with the introduction of the new V8 engine alongside the new ES standards, native methods were improved in such a way that they’re now about 50% more performant than the utility libraries.

Otherwise: You’ll have to maintain less performant projects where you could have simply used what was already available, or dealt with a few more lines in exchange for a few more files.

link Read More: Native over user land utils

 

arrow_up Return to top

8. Docker Best Practices

medal_sports Many thanks to Bret Fisher from whom we learned many of the following practices

 

✔ 8.1 Use multi-stage builds for leaner and more secure Docker images

TL;DR: Use multi-stage build to copy only necessary production artifacts. A lot of build-time dependencies and files are not needed for running your application. With multi-stage builds these resources can be used during build while the runtime environment contains only what’s necessary. Multi-stage builds are an easy way to get rid of overweight and security threats.

Otherwise: Larger images will take longer to build and ship, build-only tools might contain vulnerabilities and secrets only meant for the build phase might be leaked.

Example Dockerfile for multi-stage builds

FROM node:14.4.0 AS build

COPY . .
RUN npm ci && npm run build


FROM node:14.4.0-slim

USER node
EXPOSE 8080

COPY --from=build /home/node/app/dist /home/node/app/package.json /home/node/app/package-lock.json ./
RUN npm ci --production

CMD [ "node", "dist/app.js" ]

link Read More: Use multi-stage builds

 

✔ 8.2. Bootstrap using node command, avoid npm start

TL;DR: use CMD ["node", "server.js"] to start your app, avoid using npm scripts, which don’t pass OS signals to the code. This prevents problems with child-processes, signal handling, graceful shutdown and having zombie processes.

Otherwise: When no signals are passed, your code will never be notified about shutdowns. Without this, it will lose its chance to close properly possibly losing current requests and/or data.

Read More: Bootstrap container using node command, avoid npm start

 

✔ 8.3. Let the Docker runtime handle replication and uptime

TL;DR: When using a Docker run time orchestrator (e.g., Kubernetes), invoke the Node.js process directly without intermediate process managers or custom code that replicate the process (e.g. PM2, Cluster module). The runtime platform has the highest amount of data and visibility for making placement decision – It knows best how many processes are needed, how to spread them and what to do in case of crashes

Otherwise: A container that keeps crashing due to lack of resources will get restarted indefinitely by the process manager. Should Kubernetes be aware of that, it could relocate it to a different, roomier instance

link Read More: Let the Docker orchestrator restart and replicate processes

 

✔ 8.4. Use .dockerignore to prevent leaking secrets

TL;DR: Include a .dockerignore file that filters out common secret files and development artifacts. By doing so, you might prevent secrets from leaking into the image. As a bonus the build time will significantly decrease. Also, ensure not to copy all files recursively rather explicitly choose what should be copied to Docker
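A starting-point sketch of such a .dockerignore (trim it to your own project):

# .dockerignore
.git
node_modules
npm-debug.log
.env
.npmrc
Dockerfile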

Otherwise: Common personal secret files like .env.aws and .npmrc will be shared with anybody with access to the image (e.g. Docker repository)

link Read More: Use .dockerignore

 

✔ 8.5. Clean-up dependencies before production

TL;DR: Although Dev-Dependencies are sometimes needed during the build and test life-cycle, eventually the image that is shipped to production should be minimal and clean from development dependencies. Doing so guarantees that only necessary code is shipped and the amount of potential attacks (i.e. attack surface) is minimized. When using multi-stage build (see dedicated bullet) this can be achieved by installing all dependencies first and finally running npm ci --production

Otherwise: Many of the infamous npm security breaches were found within development packages (e.g. eslint-scope)

link Read More: Remove development dependencies

 

✔ 8.6. Shutdown smartly and gracefully

TL;DR: Handle the process SIGTERM event and clean up all existing connections and resources. This should be done while still responding to ongoing requests. In Dockerized runtimes, shutting down containers is not a rare event but a frequent occurrence that happens as part of routine work. Achieving this demands some thoughtful code to orchestrate several moving parts: the load balancer, keep-alive connections, the HTTP server and other resources
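A minimal sketch of the idea (a bare http server; which resources you must drain is app-specific):

const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(3000);

process.on('SIGTERM', () => {
  console.log('SIGTERM received, draining in-flight requests');
  // stop accepting new connections; let ongoing responses finish
  server.close(() => {
    // close DB pools, queues and other resources here, then exit
    process.exit(0);
  });
  // safety net: force exit if draining hangs
  setTimeout(() => process.exit(1), 30000).unref();
});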

Otherwise: Dying immediately means not responding to thousands of disappointed users

link Read More: Graceful shutdown

 

✔ 8.7. Set memory limits using both Docker and v8

TL;DR: Always configure a memory limit using both Docker and the JavaScript runtime flags. The Docker limit is needed to make thoughtful container placement decisions; the v8 flag --max-old-space-size is needed to kick off the GC on time and prevent underutilization of memory. Practically, set v8's old space memory to be just a bit less than the container limit
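For example (the numbers are illustrative, not a recommendation):

# cap the container at 512MB...
docker run -m 512m my-node-app

# ...and, inside the image, leave v8 a little headroom below that cap
CMD ["node", "--max-old-space-size=450", "server.js"]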

Otherwise: The Docker definition is needed to perform thoughtful scaling decisions and prevent starving other citizens. Without also defining v8's limits, it will underutilize the container resources – without explicit instructions it crashes when utilizing ~50-60% of its host resources

link Read More: Set memory limits using Docker only

 

✔ 8.8. Plan for efficient caching

TL;DR: Rebuilding a whole docker image from cache can be nearly instantaneous if done correctly. The less frequently updated instructions should be at the top of your Dockerfile and the constantly changing ones (like app code) at the bottom.

Otherwise: Docker build will be very long and consume a lot of resources even when making tiny changes

link Read More: Leverage caching to reduce build times

 

✔ 8.9. Use explicit image reference, avoid latest tag

TL;DR: Specify an explicit image digest or versioned label, never refer to latest. Developers are often led to believe that specifying the latest tag will provide them with the most recent image in the repository however this is not the case. Using a digest guarantees that every instance of the service is running exactly the same code.

In addition, referring to an image tag means that the base image is subject to change, as image tags cannot be relied upon for a deterministic install. Instead, if a deterministic install is expected, a SHA256 digest can be used to reference an exact image.

Otherwise: A new version of a base image could be deployed into production with breaking changes, causing unintended application behaviour.

link Read More: Understand image tags and use the “latest” tag with caution

 

✔ 8.10. Prefer smaller Docker base images

TL;DR: Large images lead to higher exposure to vulnerabilities and increased resource consumption. Using leaner Docker images, such as Slim and Alpine Linux variants, mitigates this issue.

Otherwise: Building, pushing, and pulling images will take longer, unknown attack vectors can be used by malicious actors and more resources are consumed.

link Read More: Prefer smaller images

 

✔ 8.11. Clean-out build-time secrets, avoid secrets in args

TL;DR: Avoid secrets leaking from the Docker build environment. A Docker image is typically shared in multiple environments like CI and a registry that are not as sanitized as production. A typical example is an npm token which is usually passed to a dockerfile as an argument. This token stays within the image long after it is needed and allows the attacker indefinite access to a private npm registry. This can be avoided by copying a secret file like .npmrc and then removing it using multi-stage build (beware, build history should be deleted as well) or by using Docker BuildKit's secrets feature, which leaves zero traces

Otherwise: Everyone with access to the CI and docker registry will also get access to some precious organization secrets as a bonus

link Read More: Clean-out build-time secrets

 

✔ 8.12. Scan images for multi layers of vulnerabilities

TL;DR: Besides checking code dependencies for vulnerabilities, also scan the final image that is shipped to production. Docker image scanners check the code dependencies but also the OS binaries. This E2E security scan covers more ground and verifies that no bad guy injected bad things during the build. Consequently, it is recommended to run this as the last step before deployment. There are a handful of free and commercial scanners that also provide CI/CD plugins

Otherwise: Your code might be entirely free from vulnerabilities. However it might still get hacked due to vulnerable version of OS-level binaries (e.g. OpenSSL, TarBall) that are commonly being used by applications

link Read More: Scan the entire image before production

 

✔ 8.13 Clean NODE_MODULE cache

TL;DR: After installing dependencies in a container, remove the local cache. It doesn't make any sense to duplicate the dependencies for faster future installs since there won't be any further installs – a Docker image is immutable. With a single line of code, tens of MB (typically 10-50% of the image size) are shaved off

Otherwise: The image that will get shipped to production will weigh 30% more due to files that will never get used

link Read More: Clean NODE_MODULE cache

 

✔ 8.14. Generic Docker practices

TL;DR: This is a collection of Docker advice that is not related directly to Node.js – the Node implementation is not much different than any other language. Click read more to skim through.

link Read More: Generic Docker practices

 

✔ 8.15. Lint your Dockerfile

TL;DR: Linting your Dockerfile is an important step to identify issues in your Dockerfile which differ from best practices. By checking for potential flaws using a specialised Docker linter, performance and security improvements can be easily identified, saving countless hours of wasted time or security issues in production code.

Otherwise: Mistakenly the Dockerfile creator left Root as the production user, and also used an image from an unknown source repository. This could be avoided with just a simple linter.

link Read More: Lint your Dockerfile

from: https://github.com/goldbergyoni/nodebestpractices

Building Cross-Platform Desktop Apps with JS: From Principles to Practice

Introduction

I have been building client applications with Electron for a while, and the overall experience has been very good, though there were some pitfalls along the way. This article is a systematic summary of Electron, from how it works under the hood to how it is used in practice. (Long read ahead.)

All the sample code for this article lives in my GitHub repo electron-react; the article reads best alongside the code. electron-react can also serve as a scaffold project for the Electron + React + Mobx + Webpack stack.

1. Desktop Applications

Desktop applications are also known as GUI (Graphical User Interface) programs, although the two are not quite the same: a desktop application takes the general idea of a GUI and makes it concrete as a "desktop", turning the cold, wooden notion of a computer into something more human, lively and vivid.

The client programs we use on our computers are all desktop applications. In recent years the rise of the web and mobile has somewhat dimmed desktop software, but for certain everyday tasks and industry applications, desktop programs remain indispensable.

Traditionally, desktop applications are built in one of two ways:

1.1 Native development

Compile the language straight to an executable that calls system APIs directly and draws its own UI. This kind of development offers high runtime efficiency, but is generally slower to develop and more technically demanding, for example:

  • Windows applications in C++ / MFC
  • MAC applications in Objective-C

1.2 Managed platforms

Code is compiled once into an intermediate form, which a platform or virtual machine then loads and compiles again or interprets at run time. Runtime efficiency is below native compilation, but with platform optimization it is quite acceptable, and development is somewhat faster than native. For example:

  • C# / .NET Framework (Windows applications only)
  • Java / Swing

Neither of the above is friendly to front-end developers; they are areas front-end engineers rarely touch. But in this "big front-end" era, front-end developers are working their way into every field, and building clients with web technology has burst onto the scene.

1.3 Web development

Develop with web technology: a browser engine renders the UI, while Node.js provides server-side JavaScript and the ability to call system APIs. You can picture it as a web application wrapped in a client shell.

On the UI side, the web's powerful ecosystem brings endless possibilities, development and maintenance costs are relatively low, and front-end developers with web experience can pick it up easily.

This article focuses on one of these web-based client technologies: Electron.

2. Electron

Electron is an open-source library developed by GitHub for building cross-platform desktop applications with HTML, CSS and JavaScript. It does this by combining Chromium and Node.js into a single runtime and packaging the result as an application for Mac, Windows and Linux.

2.1 Reasons to develop with Electron:

  • 1. You develop with the web's rich ecosystem: low development cost, strong extensibility, fancier UIs
  • 2. Cross-platform: one codebase can be packaged into Windows, Linux and Mac builds, and compiles quickly
  • 3. You can extend an existing web application directly, adding capabilities the browser doesn't have
  • 4. You're a front-end developer, after all~

Of course, we should also face its drawbacks: performance is lower than native desktop applications, and the final packaged app is much larger than a native one.

2.2 Developer experience

Compatibility

Although you are still developing with web technology, you no longer have to worry about browser compatibility: you only care about the Chrome version matching your current Electron version, which is generally new enough to let you use the latest APIs and syntax, and you can upgrade the Chrome version manually. Likewise, you don't have to deal with style and code differences between browsers.

Node environment

This may be a feature many front-end developers once dreamed of: using Node.js's powerful APIs inside a web page. It means that from the page you can operate on files directly, call system APIs, even work with databases. And beyond the full Node API, you can use several hundred thousand extra npm modules.

Cross-origin requests

You can make network requests directly with Node's request module, which means you never have to be troubled by cross-origin restrictions again.

Strong extensibility

With node-ffi, applications gain powerful extensibility (covered in detail in a later section).

2.3 Who uses Electron

Plenty of applications on the market are already built with Electron, including familiar ones such as the VS Code client, the GitHub client, and the Atom client. I still remember the copy Xunlei used when it released Xunlei X 10.1 last year:

Starting with Xunlei X 10.1, we rewrote the entire Xunlei main UI on the Electron framework. With the new framework, Xunlei X fully supports 2K, 4K and other high-resolution displays, and text in the UI renders sharper and crisper. Technically, the new framework's UI drawing and event handling are more flexible and efficient than the old framework's, so the UI runs noticeably smoother than the old Xunlei. How big is the improvement exactly? Try it and see.

You can open VS Code and click Help → Toggle Developer Tools to inspect the VS Code client's UI.

3. How Electron Works

Electron combines Chromium, Node.js, and APIs for calling native operating-system functionality.

3.1 Chromium

Chromium is the open-source project Google started to develop the Chrome browser; Chromium is effectively Chrome's engineering or experimental edition. New features land in Chromium first and are applied to Chrome only after validation, so Chrome's features lag behind but are more stable.

Chromium gives Electron strong UI capabilities, letting you build interfaces without worrying about compatibility.

3.2 Node.js

Node.js is a platform that runs JavaScript on the server side; its event-driven, non-blocking I/O model keeps it lightweight and efficient.

Chromium alone has no ability to operate the native GUI. Electron integrates Node.js, which gives it access to low-level system APIs while it builds UIs; Node.js modules commonly used in development, such as path, fs and crypto, can be used directly in Electron.

3.3 System APIs

To support the native system GUI, Electron has built-in native application interfaces, supporting calls to system features such as system notifications and opening system folders.

In its development model, Electron separates calling system APIs from drawing the UI. Let's look at how Electron divides its processes.

3.4 The main process

Electron distinguishes two kinds of processes: the main process and renderer processes, each responsible for its own duties.

The process that runs the main script declared in package.json is the main process. An Electron app always has one and only one main process.

Responsibilities:

  • Create renderer processes (possibly several)
  • Control the application lifecycle (start and quit the app, and listen for app events)
  • Call low-level system features and native resources

Available APIs:

  • Node.js APIs
  • Electron's main-process APIs (including system features and additional Electron functionality)

3.5 Renderer processes

Since Electron uses Chromium to display web pages, Chromium's multi-process architecture is used as well: each web page in Electron runs in its own renderer process.

The main process creates pages using BrowserWindow instances. Each BrowserWindow instance runs its page in its own renderer process. When a BrowserWindow instance is destroyed, the corresponding renderer process is terminated too.

You can picture a renderer process as a browser window: many can exist, each independent of the others; unlike in a browser, though, it can call Node APIs.

Responsibilities:

  • Render the UI with HTML and CSS
  • Handle UI interaction with JavaScript

Available APIs:

  • DOM APIs
  • Node.js APIs
  • Electron's renderer-process APIs

4. Electron Basics

4.1 The Electron API

As mentioned in the sections above, the renderer and main processes each have their own callable Electron APIs. Every Electron API is assigned to a process type: many APIs can only be used in the main process, some only in renderer processes, and some in both.

You can obtain the Electron APIs as follows:

const { BrowserWindow, ... } = require('electron')

Some Electron APIs are used more often than others; later sections pick the most common of these modules and cover them in detail.

4.2 Using Node.js APIs

You can use the Node.js APIs in both Electron's main and renderer processes; every API available in Node.js can be used in Electron as well.

import {shell} from 'electron';
import os from 'os';

document.getElementById('btn').addEventListener('click', () => { 
  shell.showItemInFolder(os.homedir());
})

One very important note: native Node.js modules (i.e. modules that must be compiled from source before they can be used) need to be rebuilt against Electron before they can be used with it.

4.3 Inter-process communication

Although the main and renderer processes have different responsibilities, they still need to cooperate and communicate with each other.

For example: managing native GUI resources from a web page is dangerous and leaks resources easily, so calling native GUI-related APIs directly from web pages is not allowed. If a renderer process wants to perform a native GUI operation, it must talk to the main process and ask the main process to carry it out.

4.4 Renderer-to-main communication

ipcRenderer is an instance of EventEmitter. You can use its methods to send synchronous or asynchronous messages from the renderer process to the main process, and to receive replies from the main process.

Import ipcRenderer in the renderer process:

import { ipcRenderer } from 'electron';

Asynchronous send:

Sends an asynchronous message to the main process via the channel, along with arbitrary arguments.

Internally, the arguments are serialized as JSON, so functions and prototype chains on argument objects are not sent.

ipcRenderer.send('sync-render', 'an async message from the renderer process');

Synchronous send:

 const msg = ipcRenderer.sendSync('async-render', 'a sync message from the renderer process');

Note: sending a synchronous message blocks the entire renderer process until the main process responds.

Listening in the main process:

The ipcMain module is an instance of the EventEmitter class. Used in the main process, it handles asynchronous and synchronous messages sent from renderer processes (web pages). Messages sent from a renderer are delivered to this module.

ipcMain.on: listens on a channel; when a new message arrives, the listener is called as listener(event, args...).

  ipcMain.on('sync-render', (event, data) => {
    console.log(data);
  });

4.5 Main-to-renderer communication

In the main process you can send messages to a renderer process through a BrowserWindow's webContents, so before sending you must first find the BrowserWindow object of the target renderer process:

const mainWindow = BrowserWindow.fromId(global.mainId);
 mainWindow.webContents.send('main-msg', 'ConardLi')

Replying based on the message source:

In ipcMain's message callback, the sender property of the first argument, event, gives you the webContents object of the renderer process that sent the message, and you can use it directly to reply.

  ipcMain.on('sync-render', (event, data) => {
    console.log(data);
    event.sender.send('main-msg', 'The main process received the async message from the renderer!')
  });

Listening in the renderer:

ipcRenderer.on: listens on a channel; when a new message arrives, the listener is called as listener(event, args...).

ipcRenderer.on('main-msg', (event, msg) => {
    console.log(msg);
})

4.6 How the communication works

ipcMain and ipcRenderer are both instances of the EventEmitter class. EventEmitter is the foundation of Node.js events; it is exported by the events module in Node.js.

The core of EventEmitter is the encapsulation of event emission and event listening. It implements the interfaces an event model needs, including addListener, removeListener, emit and other utility methods. Like native JavaScript events, it follows the publish/subscribe (observer) pattern, using an internal _events list to record the registered event handlers.

The on and send methods we use on ipcMain and ipcRenderer for listening and sending are interfaces defined by EventEmitter.

4.7 remote

The remote module provides a simple way for the renderer process (web page) to communicate (IPC) with the main process. Using remote, you can call methods of main-process objects without explicitly sending inter-process messages, similar to Java's RMI.

import { remote } from 'electron';

remote.dialog.showErrorBox('A main-process-only dialog module', 'Invoked via remote')

In reality, though, when we call a remote object's method or function, or create a new object with a remote constructor, we are actually sending a synchronous inter-process message.

In the dialog example above, the dialog object we create in the renderer process does not actually exist in our renderer process; it merely makes the main process create a dialog object and returns the corresponding remote object to the renderer process.

4.8 Communication between renderer processes

Electron does not provide a way for renderer processes to communicate with each other directly; we can build a message relay station in the main process instead.

To communicate between renderers, a message is first sent to the main process; the relay in the main process receives it and dispatches it onward according to the target.
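A minimal sketch of such a relay in the main process (the 'relay' channel name and the window-id payload are conventions of this sketch, not Electron built-ins):

const { ipcMain, BrowserWindow } = require('electron');

// a renderer sends: ipcRenderer.send('relay', { targetId, channel, payload })
ipcMain.on('relay', (event, { targetId, channel, payload }) => {
  const target = BrowserWindow.fromId(targetId);
  if (target) target.webContents.send(channel, payload);
});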

4.9 Sharing data between renderer processes

The simplest way to share data between two renderer processes is to use the HTML5 APIs already implemented in browsers. Good candidates are the Storage APIs: localStorage, sessionStorage, or IndexedDB.

Just like in a browser, this kind of storage permanently persists part of your data in the application. Sometimes you don't need that kind of persistence and only want to share some data within the current lifetime of the application. In that case you can use Electron's IPC mechanism:

Store the data in a global variable in the main process, then access it from multiple renderer processes using the remote module.

Initialize the globals in the main process:

global.mainId = ...;
global.device = {...};
global.__dirname = __dirname;
global.myField = { name: 'ConardLi' };

Read them in a renderer process:

import { ipcRenderer, remote } from 'electron';

const { getGlobal } = remote;

const mainId = getGlobal('mainId')
const dirname = getGlobal('__dirname')
const deviceMac = getGlobal('device').mac;

Change them in a renderer process:

getGlobal('myField').name = 'code秘密花园';

Multiple renderer processes share the same main-process globals, which achieves the effect of sharing and passing data between renderers.

5. Windows

5.1 BrowserWindow

The main-process module BrowserWindow is used to create and control browser windows.

  mainWindow = new BrowserWindow({
    width: 1000,
    height: 800,
    // ...
  });
  mainWindow.loadURL('http://www.conardli.top/');

You can look up all of its constructor options here.

5.2 Frameless windows

A frameless window is a window without chrome, i.e. without the parts of a window, such as toolbars, that are not part of the web page itself.

In BrowserWindow's constructor options, setting frame to false specifies a frameless window. Hiding the toolbar creates two problems:

  • 1. The window control buttons (minimize, full screen, close) are hidden
  • 2. The window can no longer be dragged to move it

You can bring the toolbar buttons back via the titleBarStyle option: setting it to hidden returns a full-size content window with a hidden title bar, while the standard window control buttons remain in the top-left corner.

new BrowserWindow({
    width: 200,
    height: 200,
    titleBarStyle: 'hidden',
    frame: false
  });

5.3 Dragging windows

By default, a frameless window is not draggable. We can manually mark drag regions in the UI with the CSS property -webkit-app-region: drag.

In a frameless window, drag behavior may conflict with text selection; text selection can be disabled with -webkit-user-select: none;:

.header {
  -webkit-user-select: none;
  -webkit-app-region: drag;
}

Conversely, setting -webkit-app-region: no-drag inside a draggable region marks a specific area as non-draggable.

5.4 Transparent windows

Setting the transparent option to true also makes a frameless window transparent:

new BrowserWindow({
    transparent: true,
    frame: false
  });

5.5 Webview

Use the webview tag to embed "guest" content in an Electron app. The guest content is contained within the webview container; an embedding page within your app controls how the guest content is laid out and rendered.

Unlike an iframe, the webview runs in a separate process from your app. It doesn't have the same permissions as your web page, and all interactions between the app and the embedded content are asynchronous.

6. Dialogs

The dialog module provides APIs to show native system dialogs, such as the open-file dialog and alert boxes, letting web applications deliver the same user experience as native applications.

Note: dialog is a main-process module; to call it from a renderer process, use remote.

6.1 Error boxes

dialog.showErrorBox displays a modal dialog that shows an error message.

 remote.dialog.showErrorBox('Error', 'This is an error box!')

6.2 Message boxes

dialog.showMessageBox shows a system message box; it can be given one of several types: "none", "info", "error", "question" or "warning".

On Windows, "question" and "info" display the same icon unless you set one with the "icon" option. On macOS, "warning" and "error" display the same warning icon.

remote.dialog.showMessageBox({
  type: 'info',
  title: 'Info',
  message: 'This is a message box!',
  buttons: ['OK', 'Cancel']
}, (index) => {
  this.setState({ dialogMessage: `[You clicked ${index ? 'Cancel' : 'OK'}!!]` })
})

6.3 File dialogs

dialog.showOpenDialog is used to open files or select system directories.

remote.dialog.showOpenDialog({
  properties: ['openDirectory', 'openFile']
}, (data) => {
  this.setState({ filePath: `[Selected path: ${data[0]}] ` })
})

6.4 Notifications

Here I recommend simply using the HTML5 Notification API; it can only be used in renderer processes.

let options = {
  title: 'Notification title',
  body: 'I am a notification~~~',
}
let myNotification = new window.Notification(options.title, options)
myNotification.onclick = () => {
  this.setState({ message: '[You clicked the notification!!]' })
}

7. System

7.1 Getting system information

Through remote you can reach the main process's process object and read the version information of the current app:

  • process.versions.electron: the Electron version
  • process.versions.chrome: the Chrome version
  • process.versions.node: the Node version
  • process.versions.v8: the V8 version

Get the application's root directory:

remote.app.getAppPath()

Use Node's os module to get the current user's home directory:

os.homedir();

7.2 Copy and paste

Electron's clipboard module is available in both renderer and main processes; it performs copy and paste operations on the system clipboard.

Write to the clipboard as plain text:

clipboard.writeText(text[, type])

Read the clipboard content as plain text:

clipboard.readText([type])

7.3 Screenshots

desktopCapturer provides information about media sources that can be used to capture audio and video from the desktop. It can only be called in renderer processes.

The code below is an example that takes a screenshot of the screen and saves it:

  getImg = () => {
    this.setState({ imgMsg: 'Capturing screen...' })
    const thumbSize = this.determineScreenShotSize()
    let options = { types: ['screen'], thumbnailSize: thumbSize }
    desktopCapturer.getSources(options, (error, sources) => {
      if (error) return console.log(error)
      sources.forEach((source) => {
        if (source.name === 'Entire screen' || source.name === 'Screen 1') {
          const screenshotPath = path.join(os.tmpdir(), 'screenshot.png')
          fs.writeFile(screenshotPath, source.thumbnail.toPNG(), (error) => {
            if (error) return console.log(error)
            shell.openExternal(`file://${screenshotPath}`)
            this.setState({ imgMsg: `Screenshot saved to: ${screenshotPath}` })
          })
        }
      })
    })
  }

  determineScreenShotSize = () => {
    const screenSize = screen.getPrimaryDisplay().workAreaSize
    const maxDimension = Math.max(screenSize.width, screenSize.height)
    return {
      width: maxDimension * window.devicePixelRatio,
      height: maxDimension * window.devicePixelRatio
    }
  }

8. Menus

An application's menus give quick access to a feature without spending any of the client's UI real estate. Menus generally come in two kinds:

  • Application menu: sits at the top of the application and is available globally
  • Context menu: can be shown on any page you choose and invoked on demand, e.g. a right-click menu

Electron provides the Menu module for creating native application menus and context menus; it is a main-process module.

You can construct a menu object from a custom menu template with Menu's static method buildFromTemplate(template).

template is an array of MenuItems. Let's look at a few important MenuItem parameters:

  • label: the text the menu item displays
  • click: the event handler invoked after the item is clicked
  • role: a system-predefined menu action, e.g. copy, paste, minimize…
  • enabled: indicates whether the item is enabled; this property can be changed dynamically
  • submenu: a submenu, which is also an array of MenuItems

Recommendation: it is best to specify role for any menu item that matches a standard role, rather than trying to implement the behavior manually in a click function. The built-in role behavior gives the best native experience.

The following example is a simple menu template:

const template = [
  {
    label: 'File',
    submenu: [
      {
        label: 'New File',
        click: function () {
          dialog.showMessageBox({
            type: 'info',
            message: 'Hey!',
            detail: 'You clicked New File!',
          })
        }
      }
    ]
  },
  {
    label: 'Edit',
    submenu: [{
      label: 'Cut',
      role: 'cut'
    }, {
      label: 'Copy',
      role: 'copy'
    }, {
      label: 'Paste',
      role: 'paste'
    }]
  },
  {
    label: 'Minimize',
    role: 'minimize'
  }
]

8.1 Application menus

Menu's static method setApplicationMenu creates an application menu; on Windows and Linux, the menu is set as each window's top menu.

Note: this API must be called after the app module's ready event.

We can handle the menu differently across the application's lifecycle and across different operating systems:

app.on('ready', function () {
  const menu = Menu.buildFromTemplate(template)
  Menu.setApplicationMenu(menu)
})

app.on('browser-window-created', function () {
  let reopenMenuItem = findReopenMenuItem()
  if (reopenMenuItem) reopenMenuItem.enabled = false
})

app.on('window-all-closed', function () {
  let reopenMenuItem = findReopenMenuItem()
  if (reopenMenuItem) reopenMenuItem.enabled = true
})

if (process.platform === 'win32') {
  const helpMenu = template[template.length - 1].submenu
  addUpdateMenuItems(helpMenu, 0)
}

8.2 Context menus

Menu's instance method menu.popup pops up a custom context menu:

    let m = Menu.buildFromTemplate(template)
    document.getElementById('menuDemoContainer').addEventListener('contextmenu', (e) => {
      e.preventDefault()
      m.popup({ window: remote.getCurrentWindow() })
    })

8.3 Keyboard shortcuts

In menu options, we can specify an accelerator property to assign a shortcut to the action:

  {
    label: 'Minimize',
    accelerator: 'CmdOrCtrl+M',
    role: 'minimize'
  }

In addition, we can use globalShortcut to register global shortcuts.

    globalShortcut.register('CommandOrControl+N', () => {
      dialog.showMessageBox({
        type: 'info',
        message: 'Hey!',
        detail: 'You triggered a manually registered shortcut.',
      })
    })

CommandOrControl stands for the Command key on macOS and the Control key on Linux and Windows.

9. Printing

In many cases, printing in a program happens without the user being aware of it. And to control the printed content flexibly, you often have to develop against the API the printer vendor provides, which is tedious and fairly difficult. The first time I used Electron in real business was actually for its printing capability, so this section goes into more detail.

The printing APIs Electron provides give very flexible control over whether print settings are shown, and print content can be written in HTML. Electron offers two ways to print: sending directly to a printer, and printing to PDF.

And there are two kinds of objects that can invoke printing:

  • Through a window's webContents. This approach requires a separate print window, which can be hidden, but the communication involved is relatively complex.
  • Through a page's webview element. The webview can be hidden inside the calling page, and the communication involved is fairly simple.

Both of them provide the print and printToPDF methods.

9.1 Calling system printing

contents.print([options], [callback])

The print options offer only three simple settings:

  • silent: whether to print without showing the print settings (silent printing)
  • printBackground: whether to print the background
  • deviceName: the name of the printer device

First, configure the name of the printer to use, and before invoking printing, always check that the printer is available.

webContents' getPrinters method retrieves the list of printers currently configured on the device. Note that configured does not mean available; it only means a driver was installed on this device at some point.

The printer object returned by getPrinters: https://electronjs.org/docs/api/structures/printer-info

We only care about two fields here: name and status. A status of 0 means the printer is available.

print's second argument, callback, tells you whether the print job was dispatched, not whether printing has finished. So once the print job is sent, the callback fires and returns true; it cannot tell you whether printing actually succeeded.

    if (this.state.currentPrinter) {
      mainWindow.webContents.print({
        silent: silent, printBackground: true, deviceName: this.state.currentPrinter
      }, () => { })
    } else {
      remote.dialog.showErrorBox('Error', 'Please select a printer first!')
    }

9.2 Printing to PDF

printToPDF is used much like print, but while print has very few options, printToPDF extends them considerably. Digging through the source, I found many options not yet in the API docs, roughly thirty in all, including print margins, page headers and footers, and more.

contents.printToPDF(options, callback)

The callback function is invoked after printing fails or succeeds, giving you the failure information or a buffer containing the PDF data.

    const pdfPath = path.join(os.tmpdir(), 'webviewPrint.pdf');
    const webview = document.getElementById('printWebview');
    const renderHtml = 'Content temporarily injected into the webview...';
    webview.executeJavaScript('document.documentElement.innerHTML =`' + renderHtml + '`;');
    webview.printToPDF({}, (err, data) => {
      console.log(err, data);
      fs.writeFile(pdfPath, data, (error) => {
        if (error) throw error
        shell.openExternal(`file://${pdfPath}`)
        this.setState({ webviewPdfPath: pdfPath })
      });
    });

Printing in this example is done with a webview; calling executeJavaScript dynamically injects the print content into the webview.

9.3 Choosing between the two printing approaches

As mentioned above, printing can be invoked through either a webview or a webContents. Printing via webContents requires a print window first; this window should not be created every time you print, which would waste performance. Instead, create it when the program starts and set up its event listeners.

This requires careful communication with the process that initiates printing; the rough flow is as follows:

Clearly, the communication is quite cumbersome. Printing with a webview achieves the same effect, but the communication becomes simple, because a renderer process and its webview communicate without going through the main process:

  // in the page that embeds the webview:
  const webview = document.querySelector('webview')
  webview.addEventListener('ipc-message', (event) => {
    console.log(event.channel)
  })
  webview.send('ping')

  // in the guest page loaded inside the webview:
  const { ipcRenderer } = require('electron')
  ipcRenderer.on('ping', () => {
    ipcRenderer.sendToHost('pong')
  })

I previously wrote a demo specifically for Electron printing: electron-print-demo. Feel free to clone it and have a look.

9.4 Print utility wrappers

Below are a few utility functions wrapping common printing features.

/**
 * Get the list of system printers
 */
export function getPrinters() {
  let printers = [];
  try {
    const contents = remote.getCurrentWindow().webContents;
    printers = contents.getPrinters();
  } catch (e) {
    console.error('getPrintersError', e);
  }
  return printers;
}
/**
 * Get the system default printer
 */
export function getDefaultPrinter() {
  return getPrinters().find(element => element.isDefault);
}
/**
 * Check whether a given printer driver is installed
 */
export function checkDriver(driverName) {
  return getPrinters().find(element => (element.options["printer-make-and-model"] || '').includes(driverName));
}
/**
 * Get a printer object by printer name
 */
export function getPrinterByName(name) {
  return getPrinters().find(element => element.name === name);
}

10. Protecting the Program

10.1 Crashes

Crash monitoring is a protection every client program must have. When the program crashes, we generally want to do two things:

  • 1. Upload crash logs and raise alerts promptly
  • 2. Detect the crash and prompt the user to restart the program

Electron provides crashReporter to help us record crash logs; we can create a crash reporter with crashReporter.start:

const { crashReporter } = require('electron')
crashReporter.start({
  productName: 'YourName',
  companyName: 'YourCompany',
  submitURL: 'https://your-domain.com/url-to-submit',
  uploadToServer: true
})

When the program crashes, the crash logs are stored in a folder named 'YourName Crashes' under the temp directory. submitURL specifies the server your crash logs are uploaded to. Before starting the crash reporter, you can customize where these temp files are saved by calling app.setPath('temp', 'my/custom/temp'). You can also get the date and ID of the last crash report via crashReporter.getLastCrashReport().

We can listen for renderer-process crashes through webContents' crashed event, and testing shows some main-process crashes also trigger it. So we can branch the restart logic on whether the main window has been destroyed. Below is the whole crash-monitoring logic:

import { BrowserWindow, crashReporter, dialog } from 'electron';
// start recording process crashes
crashReporter.start({
  productName: 'electron-react',
  companyName: 'ConardLi',
  submitURL: 'http://xxx.com',  // endpoint for uploading crash logs
  uploadToServer: false
});
function reloadWindow(mainWin) {
  if (mainWin.isDestroyed()) {
    app.relaunch();
    app.exit(0);
  } else {
    // destroy the other windows
    BrowserWindow.getAllWindows().forEach((w) => {
      if (w.id !== mainWin.id) w.destroy();
    });
    const options = {
      type: 'info',
      title: 'Renderer process crashed',
      message: 'This process has crashed.',
      buttons: ['Reload', 'Close']
    }
    dialog.showMessageBox(options, (index) => {
      if (index === 0) mainWin.reload();
      else mainWin.close();
    })
  }
}
export default function () {
  const mainWindow = BrowserWindow.fromId(global.mainId);
  mainWindow.webContents.on('crashed', () => {
    const errorMessage = crashReporter.getLastCrashReport();
    console.error('The program crashed!', errorMessage); // the log can be uploaded separately
    reloadWindow(mainWindow);
  });
}

10.2 Minimizing to the tray

Sometimes we don't want clicking the close button to quit the program; instead, we minimize the program to the tray and perform the real quit from there.

First, listen for the window's close event, prevent the default behavior of the user's close action, and hide the window.

function checkQuit(mainWindow, event) {
  const options = {
    type: 'info',
    title: 'Confirm close',
    message: 'Minimize the program to the tray?',
    buttons: ['Confirm', 'Quit']
  };
  dialog.showMessageBox(options, index => {
    if (index === 0) {
      event.preventDefault();
      mainWindow.hide();
    } else {
      mainWindow = null;
      app.exit(0);
    }
  });
}
function handleQuit() {
  const mainWindow = BrowserWindow.fromId(global.mainId);
  mainWindow.on('close', event => {
    event.preventDefault();
    checkQuit(mainWindow, event);
  });
}

Now the program can no longer be found, and it isn't in the task tray either, so we must create the tray first and set up its event listeners.

On Windows, using an .ico file gives better results.

export default function createTray() {
  const mainWindow = BrowserWindow.fromId(global.mainId);
  const iconName = process.platform === 'win32' ? 'icon.ico' : 'icon.png'
  tray = new Tray(path.join(global.__dirname, iconName));
  const contextMenu = Menu.buildFromTemplate([
    {
      label: 'Show main window', click: () => {
        mainWindow.show();
        mainWindow.setSkipTaskbar(false);
      }
    },
    {
      label: 'Quit', click: () => {
        mainWindow.destroy();
        app.quit();
      }
    },
  ])
  tray.setToolTip('electron-react');
  tray.setContextMenu(contextMenu);
}

11. Extensibility

Very often, your application has to interact with external devices. Vendors usually provide SDKs for their hardware, and these SDKs are almost always written in C++. With Electron we cannot call C++ code directly, but node-ffi makes this possible.

node-ffi provides a powerful set of tools for calling dynamic-library interfaces with pure JavaScript in a Node.js environment. It can be used to build interface bindings for libraries without writing any C++ code.

Note that node-ffi cannot call C++ code directly: you need to compile the C++ code into a dynamic-link library first (a DLL on Windows, a dylib on Mac OS, an so on Linux). node-ffi is also limited in what it can load: it only handles C-style libraries.

Here is a simple example:

const ffi = require('ffi');
const ref = require('ref');
const SHORT_CODE = ref.refType('short');


const DLL = new ffi.Library('test.dll', {
    Test_CPP_Method: ['int', ['string',SHORT_CODE]], 
  })

testCppMethod(str: String, num: number): void {
  try {
    const result: any = DLL.Test_CPP_Method(str, num);
    return result;
  } catch (error) {
    console.log('Call failed~', error);
  }
}

this.testCppMethod('ConardLi', 123);

In the code above, we use ffi to wrap test.dll, a dynamic-link library generated from a C++ interface, and use ref for some type mappings.

When calling these mapped methods from JavaScript, using TypeScript to pin down the parameter types is recommended, because weakly typed JavaScript calling a strongly typed language's interfaces can carry unexpected risks.

With this capability, front-end engineers can make their mark in the IoT field too~

12. Environment Selection

Usually our application can run against several environments (production, beta, uat, moke, development, …), and different development environments may point at different backend endpoints or other configuration. We can build a simple environment switcher into the client to make development more efficient.

The concrete strategy is:

  • In development, go straight to an environment-selection page, read the chosen environment, and redirect accordingly
  • Keep an environment-selection entry in the menu so the environment can be switched during development
const envList = ["moke", "beta", "development", "production"];
exports.envList = envList;
const urlBeta = 'https://wwww.xxx-beta.com';
const urlDev = 'https://wwww.xxx-dev.com';
const urlProp = 'https://wwww.xxx-prop.com';
const urlMoke = 'https://wwww.xxx-moke.com';
const path = require('path');
const pkg = require(path.resolve(global.__dirname, 'package.json'));
const build = pkg['build-config'];
exports.handleEnv = {
  build,
  currentEnv: 'moke',
  setEnv: function (env) {
    this.currentEnv = env
  },
  getUrl: function () {
    console.log('env:', build.env);
    if (build.env === 'production' || this.currentEnv === 'production') {
      return urlProp;
    } else if (this.currentEnv === 'moke') {
      return urlMoke;
    } else if (this.currentEnv === 'development') {
      return urlDev;
    } else if (this.currentEnv === "beta") {
      return urlBeta;
    }
  },
  isDebugger: function () {
    return build.env === 'development'
  }
}

十三、打包

最后也是最重要的一步,将写好的代码打包成可运行的.app.exe可执行文件。

这里我把打包氛围两部分来做,渲染进程打包和主进程打包。

13.1 渲染进程打包和升级

一般情况下,我们的大部分业务逻辑代码是在渲染进程完成的,在大部分情况下我们仅仅需要对渲染进程进行更新和升级而不需要改动主进程代码,我们渲染进程的打包实际上和一般的web项目打包没有太大差别,使用webpack打包即可。

这里我说说渲染进程单独打包的好处:

打包完成的htmljs文件,我们一般要上传到我们的前端静态资源服务器下,然后告知服务端我们的渲染进程有代码更新,这里可以说成渲染进程单独的升级。

注意,和壳的升级不同,渲染进程的升级仅仅是静态资源服务器上htmljs文件的更新,而不需要重新下载更新客户端,这样我们每次启动程序的时候检测到离线包有更新,即可直接刷新读取最新版本的静态资源文件,即使在程序运行过程中要强制更新,我们的程序只需要强制刷新页面读取最新的静态资源即可,这样的升级对用户是非常友好的。

Note that once we configure things this way, packaging and upgrading of the renderer process and the main process are completely separated: the file loaded when the main window starts should no longer be a local file, but the packaged file hosted on the static-asset server.
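
As a hedged sketch of that forced-refresh idea (checkLatestVersion is a hypothetical helper that asks the asset server for the newest renderer version):

async function refreshIfOutdated(mainWindow, currentVersion) {
  const latest = await checkLatestVersion(); // hypothetical: e.g. GET /version
  if (latest !== currentVersion) {
    // Re-fetch index.html and its js bundles from the static-asset server
    mainWindow.webContents.reloadIgnoringCache();
  }
}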

To make development convenient, we can load different files locally and online:

import path from 'path';
import url from 'url';
// `build`, `mac` and `current` come from the project's configuration (see above)

function getVersion(mac, current) {
  // Fetch the latest renderer version for this device mac and current version
}

export default function () {
  if (build.env === 'production') {
    // Online: load the versioned bundle from the static-asset server
    const version = getVersion(mac, current);
    return 'https://www.xxxserver.html/electron-react/index_' + version + '.html';
  }
  // Local: load the environment-selection page from disk
  return url.format({
    protocol: 'file:',
    pathname: path.join(__dirname, 'env/environment.html'),
    slashes: true,
    query: { debugger: build.env === 'development' }
  });
}
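
A minimal sketch of how the entry-URL helper above might be consumed (the module name and window options are assumptions):

import { BrowserWindow } from 'electron';
import getEntryUrl from './entryUrl'; // the default export above

function createWindow() {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  // Remote, versioned bundle in production; local file during development
  win.loadURL(getEntryUrl());
  return win;
}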

The concrete webpack configuration is not reproduced here; you can find it in the scripts directory of my github project electron-react.

Note that in the development environment we can combine webpack's devServer with the electron command to start the app:

  // in webpack.config.js; spawn comes from Node's child_process module
  devServer: {
    contentBase: './assets/',
    historyApiFallback: true,
    hot: true,
    port: PORT,
    noInfo: false,
    stats: {
      colors: true,
    },
    setup() {
      // Once the dev server is up, launch Electron pointing at this project
      spawn(
        'electron',
        ['.'],
        {
          shell: true,
          stdio: 'inherit',
        }
      )
        .on('close', () => process.exit(0))
        .on('error', e => console.error(e));
    },
  },
  // ...

13.2 Packaging the Main Process

Packaging the main process means packaging the whole program into a runnable client application. There are two common solutions: electron-packager and electron-builder.

I find electron-packager's packaging configuration somewhat tedious, and it can only package the application directly into an executable.

I recommend electron-builder instead: it offers convenient protocol configuration and a built-in Auto Update, and a simple package.json configuration is enough to drive the whole packaging job, so the experience is very good. electron-builder can not only package the app into executables such as exe and app, but also into installer formats such as msi and dmg.

You can conveniently make all kinds of settings in package.json:

  "build": {
    "productName": "electron-react", // app中文名称
    "appId": "electron-react",// app标识
    "directories": { // 打包后输出的文件夹
      "buildResources": "resources",
      "output": "dist/"
    }
    "files": [ // 打包后依然保留的源文件
      "main_process/",
      "render_process/",
    ],
    "mac": { // mac打包配置
      "target": "dmg",
      "icon": "icon.ico"
    },
    "win": { // windows打包配置
      "target": "nsis",
      "icon": "icon.ico"
    },
    "dmg": { // dmg文件打包配置
      "artifactName": "electron_react.dmg",
      "contents": [
        {
          "type": "link",
          "path": "/Applications",
          "x": 410,
          "y": 150
        },
        {
          "type": "file",
          "x": 130,
          "y": 150
        }
      ]
    },
    "nsis": { // nsis文件打包配置
      "oneClick": false,
      "allowToChangeInstallationDirectory": true,
      "shortcutName": "electron-react"
    },
  }

When running the electron-builder packaging command, you can pass flags to choose the targets:

  --mac, -m, -o, --macos   package for macOS
  --linux, -l              package for Linux
  --win, -w, --windows     package for Windows
  --mwl                    package for macOS, Windows and Linux at once
  --x64                    x64 (64-bit installer)
  --ia32                   ia32 (32-bit installer)
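
These flags are usually wired into npm scripts so the whole team shares one command; a small sketch (the script names are assumptions):

  "scripts": {
    "dist:mac": "electron-builder --mac",
    "dist:win": "electron-builder --win --x64",
    "dist:all": "electron-builder --mwl"
  }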

For main-process updates you can use electron-builder's built-in Auto Update module. electron-react also implements a manual update module; for reasons of space it is not covered here, but if you are interested you can look at the update module under main on my github.
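
As a hedged sketch of the Auto Update flow, assuming electron-builder's companion electron-updater package:

const { autoUpdater } = require('electron-updater');

function setupAutoUpdate() {
  // Checks the publish server, downloads in the background, notifies when ready
  autoUpdater.checkForUpdatesAndNotify();
  autoUpdater.on('update-downloaded', () => {
    autoUpdater.quitAndInstall(); // in practice, prompt the user first
  });
}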

13.3 Packaging Optimization

An app packaged by electron-builder is much larger than a native client with the same functionality; even an empty app weighs in at over 100 MB. There are several reasons:

First, to achieve the cross-platform effect, every Electron application contains the entire V8 engine and Chromium kernel.

Second, the whole node_modules folder is packaged in; as everyone knows, an application's node_modules can be enormous, which is another reason packaged Electron apps are so large.

We cannot change the first point, but we can optimize the app size through the second: when packaging, Electron only bundles the dependencies listed in dependencies, not those in devDependencies, so we should keep dependencies as small as possible. Since we package the renderer process with webpack (as above), all of the renderer-process dependencies can be moved into devDependencies.

In addition, we can use the two-package.json structure for further optimization: put the dependencies used only during development in the package.json at the project root, and install platform-related or runtime dependencies in the app directory. See two-package-structure for details.
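
A typical layout for this two-package.json structure looks like the sketch below (directory names are illustrative):

project/
├── package.json        # devDependencies only: build and dev tooling
└── app/
    ├── package.json    # runtime dependencies only: shipped inside the app
    └── main.js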

References

  • https://electronjs.org/docs
  • http://jlord.us/essential-electron/
  • https://imweb.io/topic/5b9f500cc2ec8e6772f34d79
  • https://www.jianshu.com/p/1ece6fd7a80c
  • https://zhuanlan.zhihu.com/p/52991793

Source code for this project: https://github.com/ConardLi/electron-react

from: https://cloud.tencent.com/developer/article/1446636

Node.js Resource

Contents: Stack Overflow Documentation · Tutorials · Developer Sites · Videos · Screencasts · Books · Courses · Blogs · Podcasts · JavaScript resources · Node.js Modules · Other

First, learn the core concepts of Node.js. Then, you're going to want to see what the community has to offer: the gold standard for Node.js package management is NPM. Finally, you're going to want to know what some of the more popular packages are for various tasks:

Useful Tools for Every Project:

  • Underscore contains just about every core utility method you want.
  • Lo-Dash is a clone of Underscore that aims to be faster and more customizable, and it has quite a few functions that Underscore doesn't have. Certain versions can be used as drop-in replacements for Underscore.
  • TypeScript makes JavaScript considerably more bearable, while also keeping you out of trouble!
  • JSHint is a code-checking tool that'll save you loads of time finding stupid errors. Find a plugin for your text editor that will automatically run it on your code.

Unit Testing:

  • Mocha is a popular test framework.
  • Vows is a fantastic take on asynchronous testing, albeit somewhat stale.
  • Expresso is a more traditional unit testing framework.
  • node-unit is another relatively traditional unit testing framework.
  • AVA is a new test runner with Babel built-in and runs tests concurrently.

Web Frameworks:

  • Express.js is by far the most popular framework.
  • Koa is a new web framework designed by the team behind Express.js, which aims to be a smaller, more expressive, and more robust foundation for web applications and APIs.
  • Sails.js is the most popular MVC framework for Node.js and is based on Express. It is designed to emulate the familiar MVC pattern of frameworks like Ruby on Rails, but with support for the requirements of modern apps: data-driven APIs with a scalable, service-oriented architecture.
  • Meteor bundles together jQuery, Handlebars, Node.js, WebSocket, MongoDB, and DDP and promotes convention over configuration without being a Ruby on Rails clone.
  • Tower (deprecated) is an abstraction on top of Express.js that aims to be a Ruby on Rails clone.
  • Geddy is another take on web frameworks.
  • RailwayJS is a Ruby on Rails inspired MVC web framework.
  • Sleek.js is a simple web framework, built upon Express.js.
  • Hapi is a configuration-centric framework with built-in support for input validation, caching, authentication, etc.
  • Trails is a modern web application framework. It builds on the pedigree of Rails and Grails to accelerate development by adhering to a straightforward, convention-based, API-driven design philosophy.
  • Danf is a full-stack OOP framework providing many features for producing scalable, maintainable, testable and performant applications, and it lets you code the same way on both the server (Node.js) and client (browser) sides.
  • Derbyjs is a reactive full-stack JavaScript framework. It has been using patterns like reactive programming and isomorphic JavaScript for a long time.
  • Loopback.io is a powerful Node.js framework for creating APIs and easily connecting to backend data sources. It has an Angular.js SDK and provides SDKs for iOS and Android.

Networking:

  • Connect is the Rack or WSGI of the Node.js world.
  • Request is a very popular HTTP request library.
  • socket.io is handy for building WebSocket servers.

Command Line Interaction:

  • minimist does just command-line argument parsing, nothing more.
  • Yargs is a powerful library for parsing command-line arguments.
  • Commander.js is a complete solution for building single-use command-line applications.
  • Vorpal.js is a framework for building mature, immersive command-line applications.
  • Chalk makes your CLI output pretty.

Others:

  • Node lesson
  • Growth: 全栈增长工程师指南 (a guide for full-stack growth engineers)

refer: http://stackoverflow.com/questions/2353818/how-do-i-get-started-with-node-js