Tag Archives: MongoDB

Comparing DynamoDB and MongoDB

Quick Comparison Table

The table below summarizes DynamoDB's characteristics and limitations relative to MongoDB, category by category:

Freedom to Run Anywhere

  • Only available on AWS
  • No support for on-premises deployments
  • Locked in to a single cloud provider

Data Model

  • Limited key-value store with JSON support
  • Maximum 400 KB record size
  • Limited data type support (number, string, and binary only) increases application complexity

Queries

  • Key-value queries only
  • Primary key can have at most two attributes, limiting query flexibility
  • Analytic queries require replicating data to another AWS service, increasing cost and complexity

Indexes

  • Limited and complex to manage
  • Indexes are sized, billed, and provisioned separately from data
  • Hash or hash-range indexes only
  • Global secondary indexes (GSIs) are inconsistent with the underlying data, forcing applications to handle stale data
  • Local secondary indexes (LSIs) can be strongly consistent, but must be defined when a table is created
  • GSIs can only be declared on top-level item elements; sub-documents and arrays cannot be indexed, making complex queries impossible
  • Maximum of 20 GSIs and 5 LSIs per table

Data Integrity

  • Eventually consistent by default
  • Complex – stale data must be handled in the application
  • No data validation – it must be handled in the application
  • ACID transactions apply to table data only, not to indexes or backups
  • Maximum of 25 writes per transaction

Monitoring and Performance Tuning

  • Fewer than 20 metrics limit visibility into database behavior
  • No tools to visualize schema or recommend indexes

Backup

  • On-demand or continuous backups
  • No queryable backup; additional charge to restore backups; many configurations are not backed up and must be recreated manually

Pricing

  • Highly variable
  • Throughput-based pricing
  • A wide range of inputs may affect price; see Pricing and Commercial Considerations

What is DynamoDB?

DynamoDB is a proprietary NoSQL database service built by Amazon and offered as part of the Amazon Web Services (AWS) portfolio.

The name comes from Dynamo, a highly available key-value store developed in response to holiday outages on the Amazon e-commerce platform in 2004. Initially, however, few teams within Amazon adopted Dynamo due to its high operational complexity and the trade-offs that needed to be made between performance, reliability, query flexibility, and data consistency.

Around the same time, Amazon found that its developers enjoyed using SimpleDB, its primary NoSQL database service at the time which allowed users to offload database administration work. But SimpleDB, which is no longer being updated by Amazon, had severe limitations when it came to scale; its strict storage limitation of 10 GB and the limited number of operations it could support per second made it only viable for small workloads.

DynamoDB, which was launched as a database service on AWS in 2012, was built to address the limitations of both SimpleDB and Dynamo.

What is MongoDB?

MongoDB is an open, non-tabular database built by MongoDB, Inc. The company was established in 2007 by former executives and engineers from DoubleClick, which Google acquired and now uses as the backbone of its advertising products. The founders originally focused on building a platform as a service using entirely open source components, but when they struggled to find an existing database that could meet their requirements for building a service in the cloud, they began work on their own database system. After realizing the potential of the database software on its own, the team shifted their focus to what is now MongoDB. The company released MongoDB in 2009.

MongoDB was designed to create a technology foundation that enables development teams through:

  1. The document data model – presenting them the best way to work with data.
  2. A distributed systems design – allowing them to intelligently put data where they want it.
  3. A unified experience that gives them the freedom to run anywhere – allowing them to future-proof their work and eliminate vendor lock-in.

MongoDB stores data in flexible, JSON-like records called documents, meaning fields can vary from document to document and data structure can be changed over time. This model maps to objects in application code, making data easy to work with for developers. Related information is typically stored together for fast query access through the MongoDB query language. MongoDB uses dynamic schemas, allowing users to create records without first defining the structure, such as the fields or the types of their values. Users can change the structure of documents simply by adding new fields or deleting existing ones. This flexible data model makes it easy for developers to represent hierarchical relationships and other more complex structures. Documents in a collection need not have an identical set of fields and denormalization of data is common.
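To make the flexible-document idea concrete, here is a minimal sketch using plain JavaScript objects to stand in for documents in a single collection (the collection and field names are invented for illustration):

```javascript
// Two documents in the same hypothetical "users" collection; fields vary freely,
// and a later document can introduce nested structure without a schema change.
const users = [
  { name: "Ada", email: "ada@example.com" },
  {
    name: "Grace",
    email: "grace@example.com",
    addresses: [                              // this document adds a nested array
      { city: "Arlington", zip: "22201" },
    ],
  },
];

// Application code simply reads whichever fields are present.
for (const user of users) {
  const cities = (user.addresses || []).map((a) => a.city);
  console.log(user.name, cities);
}
```

Because documents map directly to objects in application code, no object-relational mapping layer is needed to bridge the two representations.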

In the summer of 2016, MongoDB Atlas, MongoDB's fully managed cloud database service, was announced. Atlas offers genuine MongoDB under the hood, allowing users to offload operational tasks and featuring built-in best practices for running the database with all the power and freedom developers are used to with MongoDB.

Terminology and Concepts

Many concepts in DynamoDB have close analogs in MongoDB. The table below outlines some of the common concepts across DynamoDB and MongoDB.

DynamoDB MongoDB
Table Collection
Item Document
Attribute Field
Secondary Index Secondary Index

Deployment Environments

MongoDB can be run anywhere – from a developer’s laptop to an on-premises data center to any of the public cloud platforms. As mentioned above, MongoDB is also available as a fully managed cloud database with MongoDB Atlas; this model is most similar to how DynamoDB is delivered.

In contrast, DynamoDB is a proprietary database only available on Amazon Web Services. While a downloadable version of the database is available for prototyping on a local machine, the database can only be run in production in AWS. Organizations looking into DynamoDB should consider the implications of building on a data layer that is locked in to a single cloud vendor.

Comparethemarket.com, the UK’s leading price comparison service, completed a transition from on-prem deployments with Microsoft SQL Server to AWS and MongoDB. When asked why they hadn’t selected DynamoDB, a company representative was quoted as saying “DynamoDB was eschewed to help avoid AWS vendor lock-in.”

Data Model

MongoDB stores data in a JSON-like format called BSON, which allows the database to support a wide spectrum of data types including dates, timestamps, 64-bit integers, & Decimal128. MongoDB documents can be up to 16 MB in size; with GridFS, even larger assets can be natively stored within the database.

Unlike some NoSQL databases that push enforcement of data quality controls back into the application code, MongoDB provides built-in schema validation. Users can enforce checks on document structure, data types, data ranges and the presence of mandatory fields. As a result, DBAs can apply data governance standards, while developers maintain the benefits of a flexible document model.

DynamoDB is a key-value store with added support for JSON to provide document-like data structures that better match with objects in application code. An item or record cannot exceed 400KB. Compared to MongoDB, DynamoDB has limited support for different data types. For example, it supports only one numeric type and does not support dates. As a result, developers must preserve data types on the client, which adds application complexity and reduces data re-use across different applications. DynamoDB does not have native data validation capabilities.
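To illustrate the point about preserving types on the client, here is a hypothetical sketch of the serialization bookkeeping an application must do because DynamoDB has no date type (the item shape and field names are invented):

```javascript
// Hypothetical sketch: DynamoDB has no native date type, so the client must
// serialize dates (e.g. to epoch milliseconds) on write and convert them back
// by hand on read. Every consuming application must repeat this logic.
function toItem(order) {
  return {
    orderId: order.orderId,
    createdAt: order.createdAt.getTime(),  // Date -> number
  };
}

function fromItem(item) {
  return {
    orderId: item.orderId,
    createdAt: new Date(item.createdAt),   // number -> Date, re-hydrated manually
  };
}

const original = { orderId: "o-1", createdAt: new Date("2020-01-01T00:00:00Z") };
const roundTripped = fromItem(toItem(original));
console.log(roundTripped.createdAt.toISOString()); // 2020-01-01T00:00:00.000Z
```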

Queries and Indexes

MongoDB's API enables developers to build applications that can query and analyze their data in multiple ways – by single keys, ranges, faceted search, graph traversals, JOINs and geospatial queries through to complex aggregations, returning responses in milliseconds. Complex queries are executed natively in the database without having to use additional analytics frameworks or tools. This helps users avoid the latency that comes from syncing data between operational and analytical engines.

MongoDB ensures fast access to data by any field with full support for secondary indexes. Indexes can be applied to any field in a document, down to individual values in arrays.

MongoDB supports multi-document transactions, making it the only database to combine the ACID guarantees of traditional relational databases; the speed, flexibility, and power of the document model; and the intelligent distributed systems design to scale-out and place data where you need it.

Multi-document transactions feel just like the transactions developers are familiar with from relational databases – multi-statement, similar syntax, and easy to add to any application. Through snapshot isolation, transactions provide a globally consistent view of data and enforce all-or-nothing execution. MongoDB allows reads and writes against the same documents and fields within the transaction. For example, users can check the status of an item before updating it. MongoDB best practices advise up to 1,000 operations in a single transaction. Learn more about MongoDB transactions here.

Supported indexing strategies such as compound, unique, array, partial, TTL, geospatial, sparse, hash, wildcard and text ensure optimal performance for multiple query patterns, data types, and application requirements. Indexes are strongly consistent with the underlying data.

DynamoDB supports key-value queries only. For queries requiring aggregations, graph traversals, or search, data must be copied into additional AWS technologies, such as Elastic MapReduce or Redshift, increasing latency, cost, and developer work. The database supports two types of indexes: Global secondary indexes (GSIs) and local secondary indexes (LSIs). Users can define up to 5 LSIs and 20 GSIs per table. Indexes can be defined as hash or hash-range indexes; more advanced indexing strategies are not supported.

GSIs, which are eventually consistent with the underlying data, do not support ad-hoc queries, and using them requires knowledge of data access patterns in advance. GSIs also cannot index any element below the top-level record structure, so sub-documents and arrays cannot be indexed. LSIs can be queried to return strongly consistent data, but must be defined when the table is created. They cannot be added to existing tables and they cannot be removed without dropping the table.

DynamoDB indexes are sized and provisioned separately from the underlying tables, which may result in unforeseen issues at runtime. The DynamoDB documentation explains,

“In order for a table write to succeed, the provisioned throughput settings for the table and all of its global secondary indexes must have enough write capacity to accommodate the write; otherwise, the write to the table will be throttled.”

DynamoDB also supports multi-record ACID transactions. Unlike MongoDB transactions, each DynamoDB transaction is limited to just 25 write operations; the same item also cannot be targeted with multiple operations as part of the same transaction. As a result, complex business logic may require multiple, independent transactions, which would add more code and overhead to the application, while also resulting in the possibility of more conflicts and transaction failures. Only base data in a DynamoDB table is transactional. Secondary indexes, backups, and streams are updated "eventually". This can lead to "silent data loss". Subsequent queries against indexes can return data that has not been updated from the base tables, breaking transactional semantics. Similarly, data restored from backups may not be transactionally consistent with the original table.
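Because each transaction is capped at 25 writes, an application batching more operations must split them into independent transactions, giving up atomicity across the whole batch. A hypothetical sketch of that splitting (the operation shape is invented; no AWS call is made here):

```javascript
// Hypothetical helper: split a list of write operations into transaction-sized
// chunks of at most 25, DynamoDB's per-transaction limit. Each chunk would have
// to commit independently, so the batch as a whole is no longer atomic.
function chunkWrites(writes, limit = 25) {
  const chunks = [];
  for (let i = 0; i < writes.length; i += limit) {
    chunks.push(writes.slice(i, i + limit));
  }
  return chunks;
}

const ops = Array.from({ length: 60 }, (_, i) => ({ Put: { Item: { id: i } } }));
const batches = chunkWrites(ops);
console.log(batches.length);    // 3 independent transactions (25 + 25 + 10)
console.log(batches[2].length); // 10
```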


Consistency

MongoDB is strongly consistent by default, as all reads and writes go to the primary in a MongoDB replica set, scaled across multiple partitions (shards). If desired, consistency requirements for read operations can be relaxed. Through secondary consistency controls, read queries can be routed only to secondary replicas that fall within acceptable consistency limits with the primary server.

DynamoDB is eventually consistent by default. Users can configure read operations to return only strongly consistent data, but this doubles the cost of the read (see Pricing and Commercial Considerations) and adds latency. There is also no way to guarantee read consistency when querying against DynamoDB’s global secondary indexes (GSIs); any operation performed against a GSI will be eventually consistent, returning potentially stale or deleted data, and therefore increasing application complexity.
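Opting into a strongly consistent read is an explicit per-request flag in the request parameters. A minimal sketch follows (the table and key names are invented; the actual call through the AWS SDK's DocumentClient is omitted, since it needs a live connection):

```javascript
// Request parameters for a strongly consistent GetItem call. Without the
// ConsistentRead flag, DynamoDB defaults to an eventually consistent read,
// which costs half as much but may return stale data.
const params = {
  TableName: "Orders",          // invented table name
  Key: { orderId: "o-1" },      // invented key
  ConsistentRead: true,         // doubles the read cost
};

// With the AWS SDK this would be passed to something like docClient.get(params).
console.log(params.ConsistentRead); // true
```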

Operational Maturity

MongoDB Atlas allows users to deploy, manage, and scale their MongoDB clusters using built in operational and security best practices, such as end-to-end encryption, network isolation, role-based access control, VPC peering, and more. Atlas deployments are guaranteed to be available and durable with distributed and auto-healing replica set members and continuous backups with point in time recovery to protect against data corruption. MongoDB Atlas is fully elastic with zero downtime configuration changes and auto-scaling both storage and compute capacity. Atlas also grants organizations deep insights into how their databases are performing with a comprehensive monitoring dashboard, a real-time performance panel, and customizable alerting.

For organizations that would prefer to run MongoDB on their own infrastructure, MongoDB, Inc. offers advanced operational tooling to handle the automation of the entire database lifecycle, comprehensive monitoring (tracking 100+ metrics that could impact performance), and continuous backup. Product packages like MongoDB Enterprise Advanced bundle operational tooling and visualization and performance optimization platforms with end-to-end security controls for applications managing sensitive data.

MongoDB’s deployment flexibility allows single clusters to span racks, data centers and continents. With replica sets supporting up to 50 members and geography-aware sharding across regions, administrators can provision clusters that support global deployments, with write local/read global access patterns and data locality. Using Atlas Global Clusters, developers can deploy fully managed “write anywhere” active-active clusters, allowing data to be localized to any region. With each region acting as primary for its own data, the risks of data loss and eventual consistency imposed by the multi-primary approach used by DynamoDB are eliminated, and customers can meet the data sovereignty demands of new privacy regulations. Finally, multi-cloud clusters enable users to provision clusters that span across AWS, Azure, and Google Cloud, giving maximum resilience and flexibility in terms of data distribution.

Offered only as a managed service on AWS, DynamoDB abstracts away its underlying partitioning and replication schemes. While provisioning is simple, other key operational tasks are lacking when compared to MongoDB:

  • Fewer than 20 database metrics are reported by AWS CloudWatch, which limits visibility into real-time database behavior
  • AWS CloudTrail can be used to create audit trails, but it only tracks a small subset of DDL (administrative) actions to the database, not all user access to individual tables or records
  • DynamoDB has limited tooling to allow developers and/or DBAs to optimize performance by visualizing schema or graphically profiling query performance
  • DynamoDB supports cross region replication with multi-primary global tables, however these add further application complexity and cost, with eventual consistency, risks of data loss due to write conflicts between regions, and no automatic client failover

Pricing & Commercial Considerations

In this section we will again compare DynamoDB with its closest analog from MongoDB, Inc., MongoDB Atlas.

DynamoDB's pricing model is based on throughput. Users pay for a certain capacity on a given table, and AWS automatically throttles any reads or writes that exceed that capacity.

This sounds simple in theory, but the reality is that correctly provisioning throughput and estimating pricing is far more nuanced.

Below is a list of all the factors that could impact the cost of running DynamoDB:

  • Size of the data set per month
  • Size of each object
  • Number of reads per second (pricing is based on “read capacity units”, which are equivalent to reading a 4KB object) and whether those reads need to be strongly consistent or eventually consistent (the former is twice as expensive)
    • If accessing a JSON object, the entire document must be retrieved, even if the application needs to read only a single element
  • Number of writes per second (pricing is based on “write capacity units”, which are the equivalent of writing a 1KB object)
  • Whether transactions will be used. Transactions double the cost of read and write operations
  • Whether clusters will be replicated across multiple regions. This increases write capacity costs by 50%.
  • Size and throughput requirements for each index created against the table
  • Costs for backup and restore. AWS offers on-demand and continuous backups – both are charged separately, at different rates for both the backup and restore operation
  • Data transferred by DynamoDB Streams per month
  • Data transfers both in and out of the database per month
  • Cross-regional data transfers, EC2 instances, and SQS queues needed for cross-regional deployments
  • The use of additional AWS services to address what is missing from DynamoDB’s limited key value query model
  • Use of on-demand or reserved instances
  • Number of metrics pushed into CloudWatch for monitoring
  • Number of events pushed into CloudTrail for database auditing

Two items from the list above deserve emphasis: indexes affect pricing, and strongly consistent reads are twice as expensive.
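The read/write capacity-unit arithmetic behind those points can be sketched as follows. These are hypothetical helper functions for estimation, not part of any AWS SDK:

```javascript
// 1 RCU sustains one strongly consistent read/sec of an item up to 4 KB;
// eventually consistent reads consume half. 1 WCU covers a 1 KB write/sec.
function readCapacityUnits(itemSizeKB, readsPerSec, stronglyConsistent) {
  const unitsPerRead = Math.ceil(itemSizeKB / 4);  // round up to 4 KB chunks
  const factor = stronglyConsistent ? 1 : 0.5;     // eventual consistency halves cost
  return unitsPerRead * readsPerSec * factor;
}

function writeCapacityUnits(itemSizeKB, writesPerSec) {
  const unitsPerWrite = Math.ceil(itemSizeKB);     // round up to 1 KB chunks
  return unitsPerWrite * writesPerSec;
}

// Example: a 10 KB item read and written 100 times per second.
console.log(readCapacityUnits(10, 100, true));   // 300 RCUs, strongly consistent
console.log(readCapacityUnits(10, 100, false));  // 150 RCUs, half the cost
console.log(writeCapacityUnits(10, 100));        // 1000 WCUs
```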

With DynamoDB, throughput pricing actually dictates the number of partitions, not total throughput. Since users don’t have precise control over partitioning, a saturated individual partition may force them to dramatically increase capacity by splitting partitions rather than scaling linearly. Very careful design of the data model is essential to ensure that provisioned throughput can be realized.

AWS has introduced the concept of Adaptive Capacity, which will automatically increase the available resources for a single partition when it becomes saturated, but it is not without limitations. Total read and write volume to a single partition cannot exceed 3,000 read capacity units and 1,000 write capacity units per second. The required throughput increase cannot exceed the total provisioned capacity for the table. Adaptive capacity doesn’t grant more resources so much as borrow resources from less-utilized partitions. And finally, DynamoDB may take up to 15 minutes to provision additional capacity.

For customers frustrated with capacity planning exercises for DynamoDB, AWS recently introduced DynamoDB On-Demand, which will allow the platform to automatically provision additional resources based on workload demand. On-demand is suitable for low-volume workloads with short spikes in demand. However, it can get expensive quickly — when the database’s utilization rate exceeds 14% of the equivalent provisioned capacity, DynamoDB On-Demand becomes more expensive than provisioning throughput.
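The 14% figure can be sanity-checked with a little arithmetic. The sketch below assumes illustrative prices (roughly the published US East list prices of $0.00065 per provisioned WCU-hour and $1.25 per million on-demand write request units); check current AWS pricing before relying on these numbers:

```javascript
// Hypothetical cost comparison; prices are assumptions, not current AWS rates.
const provisionedPerWcuHour = 0.00065;   // $ per provisioned WCU-hour (assumed)
const onDemandPerMillionWrites = 1.25;   // $ per million on-demand writes (assumed)

// Cost per hour of performing `utilization` * 1 write/sec on-demand
// (1 provisioned WCU sustains 1 write/sec at 100% utilization).
function onDemandCostPerHour(utilization) {
  const writesPerHour = 3600 * utilization;
  return (writesPerHour / 1e6) * onDemandPerMillionWrites;
}

// Utilization at which on-demand matches the cost of one provisioned WCU:
const breakEven = provisionedPerWcuHour / ((3600 / 1e6) * onDemandPerMillionWrites);
console.log((breakEven * 100).toFixed(1) + "%"); // 14.4% – on-demand costs more above this
```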

Compared to DynamoDB, pricing for MongoDB Atlas is relatively straightforward: users select just:

  • The instance size with enough RAM to accommodate the portion of your data (including indexes) that clients access most often
  • The number of replicas and shards that will make up the cluster
  • Whether to include fully managed backups
  • The region(s) the cluster needs to run in

Users can adjust any of these parameters on demand. The only additional charge is for data transfer costs.

When to use DynamoDB vs. MongoDB

DynamoDB may work for organizations that are:

  • Looking for a database to support relatively simple key-value workloads
  • Heavily invested in AWS with no plans to change their deployment environment in the future

For organizations that need their database to support a wider range of use cases with more deployment flexibility and no platform lock-in, MongoDB would likely be a better fit.

For example, biotechnology giant Thermo Fisher migrated from DynamoDB to MongoDB for their Instrument Connect IoT app, citing that while both databases were easy to deploy, MongoDB Atlas allowed for richer queries and much simpler schema evolution.

Want to Learn More?

MongoDB Atlas Best Practices

This guide describes the best practices to help you get the most out of the MongoDB Atlas service, including: schema design, capacity planning, security, and performance optimization.

MongoDB Atlas Security Controls

This document will provide you with an understanding of MongoDB Atlas’ Security Controls and Features as well as a view into how many of the underlying mechanisms work.

from:Comparing DynamoDB and MongoDB | MongoDB

JWT implementation with Refresh Token in Node.js example | MongoDB

In a previous post, we learned how to build Token-based Authentication & Authorization with Node.js, JWT and MongoDB. This tutorial continues by adding a JWT Refresh Token to the Node.js Express application. You will learn how to expire the JWT, then renew the Access Token with a Refresh Token.

Related Posts:
– Node.js, Express & MongoDb: Build a CRUD Rest Api example
– How to upload/store images in MongoDB using Node.js, Express & Multer
– Using MySQL/PostgreSQL instead: JWT Refresh Token implementation in Node.js example

– MongoDB One-to-One relationship with Mongoose example
– MongoDB One-to-Many Relationship tutorial with Mongoose examples
– MongoDB Many-to-Many Relationship with Mongoose examples

The code in this post is based on a previous article that you should read first:
Node.js + MongoDB: User Authentication & Authorization with JWT

Overview of JWT Refresh Token with Node.js example

We already have a Node.js Express & MongoDB application in that:

  • Users can sign up for a new account or log in with a username & password.
  • Based on the User’s role (admin, moderator, user), we authorize the User to access resources

With APIs:

Methods Urls Actions
POST /api/auth/signup signup new account
POST /api/auth/signin login an account
GET /api/test/all retrieve public content
GET /api/test/user access User’s content
GET /api/test/mod access Moderator’s content
GET /api/test/admin access Admin’s content

For more details, please visit this post.

We’re gonna add Token Refresh to this Node.js & JWT project.
The final result can be described with the following requests/responses:

– Send /signin request, return response with refreshToken.


– Access resource successfully with accessToken.


– When the accessToken is expired, user cannot use it anymore.


– Send /refreshtoken request, return response with new accessToken.


– Access resource successfully with new accessToken.


– Send an expired Refresh Token.


– Send a nonexistent Refresh Token.


– Axios Client to check this: Axios Interceptors tutorial with Refresh Token example

– Using React Client:

– Or Vue Client:

– Angular Client:

Flow for JWT Refresh Token implementation

The diagram shows the flow of how we implement the Authentication process with Access Token and Refresh Token.


– A valid JWT must be added to the HTTP header if the Client accesses protected resources.
– A refreshToken will be provided at the time the user signs in.

How to Expire JWT Token in Node.js

The Refresh Token has a different value and expiration time from the Access Token.
Typically, we configure the Refresh Token’s expiration time to be longer than the Access Token’s.

Open config/auth.config.js:

module.exports = {
  secret: "bezkoder-secret-key",
  jwtExpiration: 3600,           // 1 hour
  jwtRefreshExpiration: 86400,   // 24 hours

  /* for test */
  // jwtExpiration: 60,          // 1 minute
  // jwtRefreshExpiration: 120,  // 2 minutes
};

Update the middlewares/authJwt.js file to catch TokenExpiredError in the verifyToken() function.

const jwt = require("jsonwebtoken");
const config = require("../config/auth.config");
const db = require("../models");
const { TokenExpiredError } = jwt;

const catchError = (err, res) => {
  if (err instanceof TokenExpiredError) {
    return res.status(401).send({ message: "Unauthorized! Access Token was expired!" });
  }

  return res.status(401).send({ message: "Unauthorized!" });
};

const verifyToken = (req, res, next) => {
  let token = req.headers["x-access-token"];

  if (!token) {
    return res.status(403).send({ message: "No token provided!" });
  }

  jwt.verify(token, config.secret, (err, decoded) => {
    if (err) {
      return catchError(err, res);
    }
    req.userId = decoded.id;
    next();
  });
};

Create Refresh Token Model

This Mongoose model has a one-to-one relationship with the User model. It contains an expiryDate field whose value is set by adding the config.jwtRefreshExpiration value above to the current time.

There are two static methods:

  • createToken: uses the uuid library to create a random token and saves the new object into the MongoDB database
  • verifyExpiration: compares expiryDate with the current date/time to check for expiration

const mongoose = require("mongoose");
const config = require("../config/auth.config");
const { v4: uuidv4 } = require('uuid');

const RefreshTokenSchema = new mongoose.Schema({
  token: String,
  user: {
    type: mongoose.Schema.Types.ObjectId,
    ref: "User",
  },
  expiryDate: Date,
});

RefreshTokenSchema.statics.createToken = async function (user) {
  let expiredAt = new Date();

  expiredAt.setSeconds(
    expiredAt.getSeconds() + config.jwtRefreshExpiration
  );

  let _token = uuidv4();

  let _object = new this({
    token: _token,
    user: user._id,
    expiryDate: expiredAt.getTime(),
  });

  let refreshToken = await _object.save();

  return refreshToken.token;
};

RefreshTokenSchema.statics.verifyExpiration = (token) => {
  return token.expiryDate.getTime() < new Date().getTime();
};

const RefreshToken = mongoose.model("RefreshToken", RefreshTokenSchema);

module.exports = RefreshToken;
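To see the date arithmetic in isolation, here is a standalone sketch of the logic behind createToken's expiryDate and verifyExpiration, using plain objects instead of Mongoose documents:

```javascript
// Standalone sketch of the expiry logic, with a plain object standing in for a
// Mongoose document; no database is involved.
const jwtRefreshExpiration = 86400; // seconds, mirroring config/auth.config.js

function makeExpiryDate(now = new Date()) {
  const expiredAt = new Date(now);
  expiredAt.setSeconds(expiredAt.getSeconds() + jwtRefreshExpiration);
  return expiredAt;
}

const isExpired = (token, now = new Date()) =>
  token.expiryDate.getTime() < now.getTime();

const token = { expiryDate: makeExpiryDate() };
console.log(isExpired(token));                                          // false – freshly issued
console.log(isExpired(token, new Date(Date.now() + 2 * 86400 * 1000))); // true – past expiry
```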

Don’t forget to export this model in models/index.js:

const mongoose = require('mongoose');
mongoose.Promise = global.Promise;

const db = {};

db.mongoose = mongoose;

db.user = require("./user.model");
db.role = require("./role.model");
db.refreshToken = require("./refreshToken.model");

db.ROLES = ["user", "admin", "moderator"];

module.exports = db;

Node.js Express Rest API for JWT Refresh Token

Let’s update the payloads for our Rest APIs:
– Requests:

  • Refresh Token Request: { refreshToken }

– Responses:

  • Signin Response: { accessToken, refreshToken, id, username, email, roles }
  • Message Response: { message }
  • RefreshToken Response: { accessToken, refreshToken } with a new accessToken

In the Auth Controller, we:

  • update the method for the /signin endpoint to return a Refresh Token
  • expose a POST API for creating a new Access Token from a received Refresh Token


const config = require("../config/auth.config");
const db = require("../models");
const { user: User, role: Role, refreshToken: RefreshToken } = db;

const jwt = require("jsonwebtoken");
const bcrypt = require("bcryptjs");

exports.signin = (req, res) => {
  User.findOne({
    username: req.body.username,
  })
    .populate("roles", "-__v")
    .exec(async (err, user) => {
      if (err) {
        return res.status(500).send({ message: err });
      }

      if (!user) {
        return res.status(404).send({ message: "User Not found." });
      }

      let passwordIsValid = bcrypt.compareSync(
        req.body.password,
        user.password
      );

      if (!passwordIsValid) {
        return res.status(401).send({
          accessToken: null,
          message: "Invalid Password!",
        });
      }

      let token = jwt.sign({ id: user.id }, config.secret, {
        expiresIn: config.jwtExpiration,
      });

      let refreshToken = await RefreshToken.createToken(user);

      let authorities = [];

      for (let i = 0; i < user.roles.length; i++) {
        authorities.push("ROLE_" + user.roles[i].name.toUpperCase());
      }

      res.status(200).send({
        id: user._id,
        username: user.username,
        email: user.email,
        roles: authorities,
        accessToken: token,
        refreshToken: refreshToken,
      });
    });
};

exports.refreshToken = async (req, res) => {
  const { refreshToken: requestToken } = req.body;

  if (requestToken == null) {
    return res.status(403).json({ message: "Refresh Token is required!" });
  }

  try {
    let refreshToken = await RefreshToken.findOne({ token: requestToken });

    if (!refreshToken) {
      return res.status(403).json({ message: "Refresh token is not in database!" });
    }

    if (RefreshToken.verifyExpiration(refreshToken)) {
      RefreshToken.findByIdAndRemove(refreshToken._id, { useFindAndModify: false }).exec();

      return res.status(403).json({
        message: "Refresh token was expired. Please make a new signin request",
      });
    }

    let newAccessToken = jwt.sign({ id: refreshToken.user._id }, config.secret, {
      expiresIn: config.jwtExpiration,
    });

    return res.status(200).json({
      accessToken: newAccessToken,
      refreshToken: refreshToken.token,
    });
  } catch (err) {
    return res.status(500).send({ message: err });
  }
};
In the refreshToken() function:

  • First, we get the Refresh Token from the request data
  • Next, we get the RefreshToken object { id, user, token, expiryDate } from the raw token using the RefreshToken model
  • We verify the token (expired or not) based on the expiryDate field. If the Refresh Token has expired, we remove it from the MongoDB database and return a message
  • We then use the user._id field of the RefreshToken object as a parameter to generate a new Access Token using the jsonwebtoken library
  • Return { accessToken, refreshToken } with the new accessToken if everything succeeds
  • Otherwise, send an error message

Define Route for JWT Refresh Token API

Finally, we need to determine how the server will respond at the endpoint by setting up the routes.
In routes/auth.routes.js, add one line of code:

const controller = require("../controllers/auth.controller");

module.exports = function(app) {
  app.post("/api/auth/refreshtoken", controller.refreshToken);
};


Today we’ve learned how to implement a JWT Refresh Token in a Node.js example using an Express Rest API and MongoDB. You also know how to expire the JWT Token and renew the Access Token.

The code in this post is based on a previous article that you should read first:
Node.js + MongoDB: User Authentication & Authorization with JWT

If you want to use MySQL/PostgreSQL instead, please visit:
JWT Refresh Token implementation in Node.js example

You can test this Rest API with:
– Axios Client: Axios Interceptors tutorial with Refresh Token example
– React Client:

– Vue Client:

– Angular Client:

Happy learning! See you again.

Further Reading

Fullstack CRUD application:
– MEVN: Vue.js + Node.js + Express + MongoDB example
– Angular 8 + Node.js + Express + MongoDB example
– Angular 10 + Node.js + Express + MongoDB example
– Angular 11 + Node.js + Express + MongoDB example
– Angular 12 + Node.js + Express + MongoDB example
– MERN: React + Node.js + Express + MongoDB example

Source Code

You can find the complete source code for this tutorial on Github.

from:JWT implementation with Refresh Token in Node.js example | MongoDB – BezKoder

Mastering MEAN: The MEAN stack

In a 2002 book, David Weinberger described the web's rapidly growing content as "small pieces loosely joined." That metaphor stuck with me, because it is easy to think of the web as one giant technology stack. In reality, every website you visit is a unique combination of libraries, languages, and web frameworks.

The LAMP stack was one of the earliest prominent collections of open source web technologies: Linux® as the operating system, Apache as the web server, MySQL as the database, and Perl (or Python or PHP) as the programming language for generating HTML-based web pages. These technologies did not emerge in order to work together. They were independent projects, stitched together by generations of ambitious software engineers. Since then, we have witnessed an explosion of web stacks. Nearly every modern programming language seems to have a corresponding web framework (or two) that pre-assembles an assortment of technologies for quickly and easily creating a new website.

The MEAN stack is an emerging stack that has been gaining considerable attention and excitement in the web community: MongoDB, Express, AngularJS, and Node.js. The MEAN stack represents a thoroughly modern approach to web development: one language running at every layer of the application, from client to server to persistence. This series demonstrates the end-to-end development of a MEAN web project, and the development involved goes beyond simple syntax. This article draws on the author's hands-on experience to give you a practical introduction to the stack's component technologies, including installation and setup. See the Download section to get the sample code.


在使用开源软件构建专业网站领域时,MEAN(MongoDB、Express、AngularJS 和 Node.js)堆栈是对流行已久的                     LAMP 堆栈的一个新兴挑战者。MEAN 代表着架构与心理模型(mental model)方面的一次重大变迁:从关系数据库到                     NoSQL,以及从服务器端的模型-视图-控制器到客户端的单页面应用程序。本系列文章将介绍 MEAN                     堆栈技术如何互补,以及如何使用堆栈创建二十一世纪的、现代的全堆栈 JavaScript Web 应用程序。

“实际上,您访问的每个网站都是库、语言与 Web 框架的独特组合。”



MEAN is more than a simple rearrangement of initials and a technology upgrade. Changing the base platform from an operating system (Linux) to a JavaScript runtime (Node.js) makes the stack operating-system independent: Node.js runs as well on Windows® and OS X as it does on Linux.

Node.js also replaces Apache from the LAMP stack. But Node.js is far more than a simple web server. In fact, you don't deploy the finished application to a stand-alone web server; instead, the web server is included in the application and installed automatically as part of the MEAN stack. As a result, the deployment process is dramatically simplified, because the required web server version is explicitly defined right alongside the rest of the runtime dependencies.

More than just MEAN

Although this series focuses on the four major planets of the MEAN solar system, it also introduces some of the smaller (but by no means unimportant) satellite technologies in the MEAN stack:

Moving from a traditional database such as MySQL to NoSQL, and on to a schemaless, document-oriented persistence store such as MongoDB, represents a fundamental shift in persistence strategy. You'll spend less time writing SQL and more time writing map/reduce functions in JavaScript. You'll also eliminate whole swaths of transformation logic, because MongoDB speaks JavaScript Object Notation (JSON) natively. As a result, writing RESTful web services has never been easier.

But the biggest shift from LAMP to MEAN is the move from traditional server-side page generation to client-side single-page applications (SPAs). With Express you can still handle server-side routing and page generation, but the emphasis is now on client-side views, and that is where AngularJS comes in. This change involves more than shifting model-view-controller (MVC) artifacts from the server to the client. It is also the leap from a familiar synchronous mindset to one that is fundamentally event-driven and asynchronous. And perhaps most important, you move from a page-centric view of your application to a component-oriented one.

The MEAN stack isn't mobile-centric; AngularJS runs equally well on desktops, laptops, smartphones, tablets, and even smart TVs, but it doesn't treat mobile devices as second-class citizens. And testing is no longer an afterthought: with world-class testing frameworks such as MochaJS, JasmineJS, and KarmaJS, you can write deep, comprehensive test suites for your MEAN applications.

Ready to get MEAN?


Installing Node.js

You need Node.js installed to work with the sample applications in this series, so if you haven't installed it yet, do so now.

If you use a UNIX®-style operating system (Linux, Mac OS X, and so on), I recommend Node Version Manager (NVM). (Otherwise, click Install on the Node.js home page, download the installer for your operating system, and accept the default options.) With NVM you can easily download Node.js and switch among versions from the command line. This helps you move seamlessly from one version of Node.js to the next, just as I do from one client project to the next.

After NVM is installed, enter the nvm ls-remote command to see which Node.js versions are available for installation, as shown in Listing 1.

Listing 1. Listing available Node.js versions with NVM
$ nvm ls-remote



Enter the nvm ls command to show which Node.js versions are installed locally, along with the version that is currently in use.

As of this writing, the Node website recommends v0.10.28 as the latest stable release. Enter the nvm install v0.10.28 command to install it locally.

After Node.js is installed (whether via NVM or a platform-specific installer), enter the node --version command to confirm the version in use:

$ node --version



What is Node.js?

Node.js is a headless JavaScript runtime. It is literally the same JavaScript engine (named V8) that runs inside Google Chrome, but with Node.js you can run JavaScript from the command line instead of in a browser.


Get familiar with the developer tools in your browser of choice. I use Google Chrome throughout this series, but you can just as easily use Firefox, Safari, or even Internet Explorer:

  • In Google Chrome, click Tools > JavaScript Console.
  • In Firefox, click Tools > Web Developer > Browser Console.
  • In Safari, click Develop > Show Error Console. (If you don't see the Develop menu, click Show Develop menu in menu bar on the Advanced preferences page.)
  • In Internet Explorer, click Developer Tools > Script > Console.

I've had students scoff at the idea of running JavaScript from the command line: "What good is JavaScript without HTML to control?" JavaScript came into this world in the browser (Netscape Navigator 2.0), so those objectors can be forgiven for being shortsighted and naive.

In fact, the JavaScript language offers no native capabilities for document object model (DOM) manipulation or for making Ajax requests. Browsers provide the DOM API, which makes it easy to do that kind of work with JavaScript, but outside of the browser JavaScript lacks those capabilities.

Here's an example. Open a JavaScript console in your browser (see Accessing your browser's developer tools). Enter navigator.appName. After you get a response, enter navigator.appVersion. You get results similar to those in Figure 1.

Figure 1. Using the JavaScript navigator object in a web browser

Screenshot of using the navigator JavaScript object in a web browser

In Figure 1, Netscape is the response to navigator.appName, and the response to navigator.appVersion is the cryptic user-agent string that experienced web developers know and love (or hate). In Figure 1 (captured from Chrome on OS X), that string is 5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36

Now create a file named test.js. In the file, enter the same commands, wrapping each one in a console.log() call:

console.log(navigator.appName);
console.log(navigator.appVersion);


Save the file and enter node test.js to run it, as shown in Listing 2.

Listing 2. The navigator is not defined error in Node.js
$ node test.js 

ion (exports, require, module, __filename, __dirname) { console.log(navigator.
ReferenceError: navigator is not defined
    at Object.<anonymous> (/test.js:1:75)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:902:3

As you can see, navigator is available in the browser but not in Node.js. (Sorry to make your first Node.js script fail, but I want to be sure you're convinced that running JavaScript in a browser and running it in Node.js are two different things.)

According to the stack trace, the correct Module wasn't loaded. (Modules are another key difference between running JavaScript in a browser and running it in Node.js; you'll learn more about modules momentarily.) To get similar information from Node.js, change the contents of test.js to:

console.log(process.versions);

Enter node test.js again, and you see output similar to Listing 3.

Listing 3. Using the process module in Node.js
$ node test.js

{ http_parser: '1.0',
  node: '0.10.28',
  v8: '',
  ares: '1.9.0-DEV',
  uv: '0.10.27',
  zlib: '1.2.3',

  modules: '11',
  openssl: '1.0.1g' }

After successfully running your first script in Node.js, you're ready for the next major concept: modules.



In JavaScript you can create single-purpose functions, but, unlike in Java, Ruby, or Perl, there is no way to bundle multiple functions into a cohesive module or "package" that can be imported and exported. You can, of course, include arbitrary JavaScript source files by using <script> elements, but that time-honored approach falls short of true module declarations in two key ways.

First, any JavaScript included via a <script> element is loaded into the global namespace, whereas functions imported through a module are encapsulated in a locally named variable. Second, and more critically, modules let you explicitly declare their dependencies, which <script> elements cannot do. As a result, importing Module A also imports the dependent Modules B and C. Once applications grow complex, transitive dependency management quickly becomes a crucial requirement.


As its name suggests, the CommonJS project defines a common module format (along with other specifications for JavaScript outside the browser). Node.js is one of many unofficial CommonJS implementations. RingoJS (an application server similar to Node.js, running on the Rhino/Nashorn JavaScript runtime on the JDK) is based on CommonJS, as are the popular NoSQL persistence stores CouchDB and MongoDB.

Modules are an eagerly anticipated feature of the next major version of JavaScript (ECMAScript 6), but until that version is widely adopted, Node.js uses its own module flavor based on the CommonJS specification.

You include a CommonJS module in a script with the require keyword. For example, Listing 4 is a slightly modified version of the Hello World script on the Node.js home page. Create a file named example.js and copy the code in Listing 4 into it.

Listing 4. Hello World in Node.js
var http = require('http');
var port = 9090;

function responseHandler(req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.end('<html><body><h1>Hello World</h1></body></html>');
}

http.createServer(responseHandler).listen(port);
console.log('Server running at http://localhost:' + port + '/');

Enter the node example.js command to run your new web server, and then visit http://localhost:9090/ in a web browser.

Take a look at the first two lines of Listing 4. You've likely written a simple statement like var port = 9090; hundreds (or thousands) of times. That statement defines a variable named port and assigns the number 9090 to it. The first line (var http = require('http');) is what imports a CommonJS module: it pulls in the http module and assigns it to a local variable. All of the corresponding modules that http depends on are imported by the require statement as well.

The remaining lines of example.js:

  1. Create a new HTTP server.
  2. Supply a function to handle responses.
  3. Begin listening for incoming HTTP requests on the specified port.

So with just a few lines of JavaScript, you can create a simple web server in Node.js. As you'll see later in this series, Express extends this simple example to handle more sophisticated routing and to serve both static and dynamically generated resources.

The http module is one of the standard components of a Node.js installation. Other standard Node.js modules support file I/O, reading command-line input from users, handling low-level TCP and UDP requests, and much more. Visit the Modules section of the Node.js documentation for the full list of standard modules and to learn about their capabilities.



What is NPM?

NPM stands for Node Packaged Modules. To browse the list of more than 75,000 public third-party Node modules, visit the NPM website. On the site, search for the yo module. Figure 2 shows the search results.

Figure 2. Details of the yo module

Screenshot of NPM search results showing details of the yo module

The results page briefly describes the module (a CLI tool for scaffolding Yeoman projects) and shows how many times it was downloaded in the past day, week, and month; who wrote it; which other modules (if any) it depends on; and more. Most important, the results page gives you the command-line syntax for installing the module.

To get similar information about the yo module from the command line, enter the npm info yo command. (If you don't yet know a module's official name, enter npm search yo to find all modules with the string yo in their names.) The npm info command displays the contents of the module's package.json file.

Understanding package.json

Every Node.js module must be associated with a well-formed package.json file, so it's worth getting familiar with this file's contents. Listings 5, 6, and 7 show the package.json file for the yo module in three parts.

As Listing 5 shows, the first elements are typically name, description, and a JSON array of the available versions.

Listing 5. package.json, part 1
$ npm info yo

{ name: 'yo',
  description: 'CLI tool for scaffolding out Yeoman projects',
  'dist-tags': { latest: '1.1.2' },
  versions: [ '1.1.2' ],

To install the latest version of a module, enter the npm install package command. Enter npm install package@version to install a specific version.

As Listing 6 shows, next come the author, the maintainers, and the GitHub repository where you can go directly to the source.

Listing 6. package.json, part 2
author: 'Chrome Developer Relations',
repository:
 { type: 'git',
   url: 'git://github.com/yeoman/yo' },
homepage: 'http://yeoman.io',
keywords:
 [ 'front-end',
   'stack' ],

In this example, you can also see a link to the project home page and a JSON array of related keywords. Not every package.json file includes all of these fields, but users rarely complain about having too much metadata associated with a project.

Finally, Listing 7 lists the dependencies, each with an explicit version number. These version numbers follow the common pattern of major.minor.patch, known as SemVer (semantic versioning).

Listing 7. package.json, part 3
engines: { node: '>=0.8.0', npm: '>=1.2.10' },
dependencies:
 { 'yeoman-generator': '~0.16.0',
   nopt: '~2.1.1',
   lodash: '~2.4.1',
   'update-notifier': '~0.1.3',
   insight: '~0.3.0',
   'sudo-block': '~0.3.0',
   async: '~0.2.9',
   open: '0.0.4',
   chalk: '~0.4.0',
   findup: '~0.1.3',
   shelljs: '~0.2.6' },
peerDependencies:
 { 'grunt-cli': '~0.1.7',
   bower: '>=0.9.0' },
devDependencies:
 { grunt: '~0.4.2',
   mockery: '~1.4.0',
   'grunt-contrib-jshint': '~0.8.0',
   'grunt-contrib-watch': '~0.5.3',
   'grunt-mocha-test': '~0.8.1' },

This package.json file indicates that it must be installed on a Node.js instance that is version 0.8.0 or newer. If you try to npm install it on an unsupported version, the installation fails.

SemVer shortcut syntax

In Listing 7, notice that many of the dependency versions are prefixed with a tilde (~). The tilde is equivalent to specifying 1.0.x (also valid syntax): "the major version must be 1 and the minor version must be 0, but you may install the latest patch version you can find." The implied contract of SemVer is that a patch release never makes breaking API changes (typically it contains bug fixes for existing functionality), while a minor release may introduce additional functionality (such as new function calls) without breaking existing functionality.
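The tilde rule can be sketched in a few lines of JavaScript. Real tools use the semver NPM package for this; the helper below is a hypothetical simplification that exists only to make the rule concrete.

```javascript
// Hypothetical sketch of the "~major.minor.patch" rule: the major and
// minor versions are pinned, and any patch version >= the stated one is OK.
function satisfiesTilde(range, version) {
  const [maj, min, pat] = range.slice(1).split('.').map(Number);
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  return vMaj === maj && vMin === min && vPat >= pat;
}

console.log(satisfiesTilde('~2.4.1', '2.4.9')); // true: newer patch is acceptable
console.log(satisfiesTilde('~2.4.1', '2.5.0')); // false: minor version changed
```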

Beyond the platform requirements, this package.json file provides several dependency lists:

  • The dependencies section lists runtime dependencies.
  • The devDependencies section lists modules that are needed during development.
  • The peerDependencies section enables the author to define a "peer" relationship between projects. This capability is typically used to specify the relationship between a base project and its plugins, but in this case it identifies the two other projects (Grunt and Bower) that, along with Yo, make up the Yeoman project.

If you enter the npm install command without specifying a module name, npm goes to the package.json file in the current directory and installs all of the dependencies listed in the three sections I just discussed.

The next step toward a working MEAN stack is to install Yeoman and the corresponding Yeoman-MEAN generator.


Installing Yeoman

As a Java developer, I can't imagine starting a new project without a build system such as Ant or Maven. Similarly, Groovy and Grails developers reach for Gant (a Groovy implementation of Ant) or Gradle. These tools scaffold out a new directory structure, download dependencies on the fly, and prepare the project for distribution.

In a pure web-development environment, Yeoman fills this need. Yeoman is a collection of three Node.js tools: the pure JavaScript scaffolding tool Yo, Bower for managing client-side dependencies, and Grunt for preparing the project for distribution. As your analysis of Listing 7 showed, installing Yo also gets you its peers Grunt and Bower, thanks to the peerDependencies section of package.json.

Normally, entering the npm install yo --save command installs the yo module and updates the dependencies section of the package.json file. (npm install yo --save-dev updates the devDependencies section instead.) But these three peer Yeoman modules aren't project-specific; they're command-line utilities, not runtime dependencies. To install an NPM package globally, add the -g flag to the install command.

Install Yeoman on your system:

npm install -g yo

When the package installation finishes, enter the yo --version command to verify that it's up and running.

With Yeoman and the rest of the supporting infrastructure in place, you're ready to install the MEAN stack.


Installing MeanJS

You could install each part of the MEAN stack manually, but you'd need to proceed with care. Thankfully, Yeoman offers an easier installation path through its generators.

A Yeoman generator is an easy way to bootstrap a new web project. The generator provides the base package and all of its dependencies. It usually also includes a working build script along with all of its associated plugins. Often the generator includes a sample application as well, complete with tests.

The Yeoman team builds and maintains several "official" Yeoman generators, but the community-driven Yeoman generators (more than 800 of them) far outnumber the official ones.

Unsurprisingly, the community generator you'll use to bootstrap your first MEAN application is named MEAN.JS.

On the MEAN.JS home page, click the Yo Generator menu option, or go directly to the Generator page, part of which is shown in Figure 3.

Figure 3. The MEAN.JS Yeoman generator

Screenshot of the MEAN.JS Yeoman generator page

The instructions on that page tell you to install Yeoman first, which you've already done. The next step is to install the MEAN.JS generator globally:

npm install -g generator-meanjs

With the generator in place, you can create your first MEAN application. Create a directory named test, cd into it, and enter the yo meanjs command to generate the application. Answer the last two questions as shown in Listing 8. (You can supply your own answers to the first four questions.)

Listing 8. Using the MEAN.JS Yeoman generator
$ mkdir test
$ cd test
$ yo meanjs

     _-----_
    |       |
    |--(o)--|   .--------------------------.
   `---------´  |    Welcome to Yeoman,    |
    ( _´U`_ )   |   ladies and gentlemen!  |
    /___A___\   '__________________________'
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `

You're using the official MEAN.JS generator.
[?] What would you like to call your application? 
[?] How would you describe your application? 
Full-Stack JavaScript with MongoDB, Express, AngularJS, and Node.js
[?] How would you describe your application in comma separated key words?
MongoDB, Express, AngularJS, Node.js
[?] What is your company/author name? 
Scott Davis
[?] Would you like to generate the article example CRUD module? 
[?] Which AngularJS modules would you like to include? 
ngCookies, ngAnimate, ngTouch, ngSanitize

After you answer the final question, you'll see a flurry of activity as NPM downloads all of the server-side dependencies (including Express). When NPM finishes, Bower downloads all of the client-side dependencies (including AngularJS, Bootstrap, and jQuery).

At this point you have an EAN stack installed (Express, AngularJS, and Node.js); all that's missing is the M (MongoDB). If you enter the grunt command now to start the application without MongoDB installed, you'll see an error message like the one in Listing 9.

Listing 9. Trying to start MeanJS without MongoDB
        throw er; // Unhandled 'error' event
Error: failed to connect to [localhost:27017]
    at null.<anonymous> 

[nodemon] app crashed - waiting for file changes before starting...

If you see this error message when you start the application, press Ctrl+C to stop the application.

To use your new MEAN application, you need to install MongoDB.


Installing MongoDB

MongoDB is a NoSQL persistence store. It isn't written in JavaScript, and it isn't an NPM package; it must be installed separately to complete the MEAN stack installation.

Visit the MongoDB home page, download a platform-specific installer, and accept all of the default options as you install MongoDB.

When the installation completes, enter the mongod command to start the MongoDB daemon.

The MeanJS Yeoman generator already installed a MongoDB client module named Mongoose; you can check the package.json file to confirm this. I'll have much more to say about MongoDB and Mongoose in upcoming articles.

With MongoDB installed and running, you can finally run your MEAN application and see it in action.


Running the MEAN application

To start your newly minted MEAN application, make sure you're in the test directory you created before running the MeanJS Yeoman generator, and enter the grunt command. The output should look like Listing 10.

Listing 10. Starting the MEAN.JS application
$ grunt

Running "jshint:all" (jshint) task
>> 46 files lint free.

Running "csslint:all" (csslint) task
>> 2 files lint free.

Running "concurrent:default" (concurrent) task
Running "watch" task
Running "nodemon:dev" (nodemon) task
[nodemon] v1.0.20
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: app/views/**/*.* gruntfile.js server.js config/**/*.js app/**/*.js
[nodemon] starting `node --debug server.js`
debugger listening on port 5858

 NODE_ENV is not defined! Using default development environment

MEAN.JS application started on port 3000

The jshint and csslint modules (both installed by the generator) ensure that your source code is syntactically and stylistically correct. The nodemon package watches the file system for code changes and restarts the server automatically whenever it detects one; this is a tremendous productivity boost for developers during fast, frequent code-change cycles. (The nodemon package runs only during development; to pick up changes in production, you must redeploy the application and restart Node.js.)

As the console output suggests, visit http://localhost:3000 to exercise your new MEAN application.

Figure 4 shows the home page of the MEAN.JS sample application.

Figure 4. MEAN.JS sample application home page

Screenshot of the MEAN.JS home page

Click Signup in the menu bar to create a new user account. Fill in all of the fields on the Sign-up page (shown in Figure 5) and click Sign up. In later installments in this series, you'll enable OAuth login via Facebook, Twitter, and more.

Figure 5. MEAN.JS sample application Sign-up page

Screenshot of the MEAN.JS sample application's Sign-up page

Now that a set of user credentials is stored in your local MongoDB instance, you can begin writing new articles. Click the Articles menu option (visible only after you log in) and create some sample articles. Figure 6 shows the Articles page.

Figure 6. The MeanJS Articles page

Screenshot of the MeanJS Articles page

You've created your first MEAN application. Welcome to the club!


You accomplished quite a bit in this installment. You installed Node.js and wrote your first Node.js script. You learned about modules and used NPM to install several third-party modules. You installed Yeoman as a solid web-development platform comprising a scaffolding utility (Yo), a build script (Grunt), and a utility for managing client-side dependencies (Bower). You installed the MeanJS Yeoman generator and used it to create your first MEAN application. And you installed MongoDB along with the Node.js client library Mongoose. Finally, you ran your first MEAN application.

Next time, you'll walk through the source code of the sample application to see how all four planets of the MEAN solar system (MongoDB, Express, AngularJS, and Node.js) interact with one another.



Description    Name               Size
Sample code    wa-mean1src.zip    1.38MB




  • "Build a real-time polling application with Node.js, Express, AngularJS, and MongoDB" (developerWorks, June 2014): Dissects a MEAN development project deployed on IBM Bluemix™.
  • "Node.js for Java developers" (developerWorks, November 2011): Introduces Node.js and examines why its event-driven concurrency is sparking interest, even among die-hard Java developers.
  • "Getting started with Node.js" (developerWorks, January 2014): Watch this 9-minute demo for a quick introduction to Node.js and Express.
  • "MongoDB: A NoSQL database with (all the right) RDBMS moves" (developerWorks, September 2010): Learn about MongoDB's custom API, interactive shell, and support for RDBMS-style dynamic queries as well as quick and easy MapReduce calculations.
  • "Getting started with the JavaScript language" (developerWorks, April and August 2011): Learn the basics of JavaScript in this two-part article.
  • "JavaScript for Java developers" (developerWorks, April 2011): Examine why JavaScript is an important tool for the modern Java developer, and get started with JavaScript variables, types, functions, and classes.
  • "An introduction to LAMP technology" (developerWorks, May 2005): Compare MEAN with its predecessor stack.
  • Mastering Grails (developerWorks, 2008-2009): Read Scott Davis's series on Grails, the Groovy-based web development framework.
  • Visit the HTML5 topic area for more knowledge and trends related to HTML5.
  • developerWorks Web development zone: Expand your site-building skills with articles and tutorials that specialize in web technologies.
  • developerWorks Ajax resource center: A one-stop center for information about the Ajax programming model, including articles, tutorials, forums, blogs, wikis, and news.







Times stored in MongoDB are in standard UTC (+0:00), while China's time zone is UTC+8:00. To have the C# driver convert a stored UTC value to local time on deserialization, mark the DateTime property with a BsonDateTimeOptions attribute:




[BsonDateTimeOptions(Kind = DateTimeKind.Local)]
public DateTime EntryTime { get; set; }

This attribute requires a reference to MongoDB.Bson.dll:

using MongoDB.Bson.Serialization.Attributes;
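Whatever the driver, the underlying conversion is plain clock arithmetic: a BSON date is a UTC instant, and a UTC+8 wall-clock reading is eight hours ahead of it. A small Node.js sketch (the sample date is arbitrary):

```javascript
// Sketch: a BSON date is stored as a UTC instant; UTC+8 is 8 hours ahead.
const storedUtc = new Date(Date.UTC(2024, 0, 1, 0, 0, 0)); // as MongoDB stores it
const beijing = new Date(storedUtc.getTime() + 8 * 60 * 60 * 1000);
console.log(storedUtc.toISOString()); // 2024-01-01T00:00:00.000Z
console.log(beijing.toISOString());   // 2024-01-01T08:00:00.000Z (UTC+8 wall clock)
```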





The command-line and configuration-file interfaces provide MongoDB administrators with a large number of options and settings for controlling the operation of the database system. This document provides an overview of common configurations, along with examples of best-practice configurations for common use cases.

Although both interfaces provide access to the same collection of options and settings, this document primarily uses the configuration-file interface. If you run MongoDB using a control script or a package for your operating system, you likely already have a configuration file located at /etc/mongodb.conf. Confirm this by checking the contents of the /etc/init.d/mongod or /etc/rc.d/mongod script, to ensure that the control script starts mongod with the appropriate configuration file (see below).

To start a MongoDB instance using this configuration, issue a command in one of the following forms:

 mongod --config /etc/mongodb.conf
 mongod -f /etc/mongodb.conf

Modify the values in the /etc/mongodb.conf file on your system to control the configuration of your database instance.



 fork = true
 bind_ip = 127.0.0.1
 port = 27017
 quiet = true
 dbpath = /srv/mongodb
 logpath = /var/log/mongodb/mongod.log
 logappend = true
 journal = true


  • fork is true, which enables a daemon mode for mongod that detaches (i.e. "forks") the MongoDB process from the current session, letting you run the database as a conventional server.
  • bind_ip is 127.0.0.1, which forces the server to listen for requests only on the localhost IP. Bind only to secure interfaces that application-level systems can reach, with access controls provided by system network filtering (i.e. "firewall") systems.
  • port is 27017, the default MongoDB port for database instances. MongoDB can bind to any port. You can also use network filtering tools to control access.

    UNIX-like systems require superuser privileges to attach processes to ports lower than 1000.

  • quiet is true. This silences all but the most important entries in the output/log file. In normal operation, this is the best way to avoid log noise. In diagnostic or testing situations, set this value to false. You can change this setting at run time with setParameter.
  • dbpath is /srv/mongodb, which specifies where MongoDB stores its data files. /srv/mongodb and /var/lib/mongodb are both popular locations. The user account under which mongod runs needs read and write access to this directory.
  • logpath is /var/log/mongodb/mongod.log, where mongod writes its output. If you do not set this value, mongod writes all output to standard output (i.e. stdout).
  • logappend is true, which ensures that mongod does not overwrite an existing log file when the server starts.
  • journal is true, which enables journaling.

    Journaling ensures single-instance write durability. 64-bit builds of mongod enable journaling by default, so this setting may be redundant.



The following collection of configuration options is useful for limiting access to a mongod instance. Consider the following configuration:

 bind_ip = 127.0.0.1,10.8.0.10,192.168.4.24
 nounixsocket = true
 auth = true


  • bind_ip has three values: 127.0.0.1, the localhost interface; 10.8.0.10, a private IP address typically used for local networks and VPN interfaces; and 192.168.4.24, a private network interface typically used for local networks.

    Because production MongoDB instances need to be accessible from multiple database servers, it is important to bind MongoDB to multiple interfaces that are accessible from your application servers. At the same time, limit these interfaces to ones that are controlled and protected at the network layer.

  • nounixsocket is true, which disables the UNIX socket that is otherwise enabled by default. This limits access on the local system. It is desirable when running MongoDB on systems with shared access, but in most situations it has minimal impact.
  • auth is true, which enables the authentication system in MongoDB. When enabled, you need to connect over the localhost interface for the first time to create user credentials.





Replica-set configuration is straightforward: it requires only that replSet have a value that is consistent among all members of the set. Consider the following configuration:

 replSet = set0

Use descriptive replica-set names. Once configured, use the mongo shell to add hosts to the replica set.




 keyfile = /srv/mongodb/keyfile

New in version 1.8 for replica sets, and in version 1.9.1 for sharded replica sets.

Set keyFile to enable authentication and to specify a key file for replica-set members to use when authenticating to each other. The content of the key file is arbitrary, but it must be identical on all members of the replica set and on all mongos instances that connect to the set. The key file must be smaller than 1 KB, may contain only characters from the base64 set, and must not have group or "world" permissions on UNIX systems.




Finally, see the Replication index and the Replication Fundamentals document for more information on replication in MongoDB and on replica-set configuration in general.


Sharding requires a number of mongod instances with different configurations. The config servers store the cluster's metadata, while the cluster distributes data among one or more shard servers.



To set up one or three "config server" instances as normal mongod instances, add the following configuration options:

 configsvr = true
 bind_ip = 10.8.0.12
 port = 27001

This creates a config server running on the private IP address 10.8.0.12 on port 27001. Make sure there are no port conflicts and that the config server is reachable from all of your mongos and mongod instances.

To set up shards, configure two or more mongod instances using your base configuration and add the shardsvr setting:

 shardsvr = true

Finally, to establish the cluster, configure at least one mongos process with the following settings:

 configdb = 10.8.0.12:27001
 chunkSize = 64

You can specify multiple configdb instances by giving their hostnames and ports in the form of a comma-separated list. In general, avoid changing chunkSize from the default value of 64,[1] and make sure this setting is consistent among all mongos instances.

[1] The default chunk size of 64 MB strikes an ideal balance between the most even distribution of data (for which smaller chunk sizes are best) and minimal chunk migration (for which larger chunk sizes are optimal).




In many situations, running multiple instances of mongod on a single system is not recommended, but some types of deployments[2], as well as testing scenarios, may require more than one mongod on a single system. In these cases, use a base configuration for each instance, but adjust values such as the following:


 dbpath = /srv/mongodb/db0/
 pidfilepath = /srv/mongodb/db0.pid

The dbpath value controls the location of the mongod instance's data directory. Ensure that each database has a distinct and well-labeled data directory. The pidfilepath controls where mongod writes its pid file. Because this path is specific to each mongod instance, it is important that the file be unique and well labeled so that these processes are easy to start and stop.

Create additional control scripts, and/or adjust your existing MongoDB configuration and control scripts, as needed to control these processes.

[2] Single-tenant systems with SSDs or other high-performance disks may provide acceptable performance levels for multiple mongod instances. Additionally, you may find that multiple databases with small working sets can perform acceptably on a single system.


The following configuration options control various mongod behaviors for diagnostic purposes. The defaults below are tuned for general production purposes:

 slowms = 50
 profile = 3
 verbose = true
 diaglog = 3
 objcheck = true
 cpu = true


  • slowms configures the threshold above which the database profiler considers a query "slow." The default value is 100 milliseconds. Set a lower value if the database profiler is not returning useful results. See the Optimization wiki page for more information on optimizing operations in MongoDB.
  • profile sets the database profiler level. The profiler is not active by default because of its possible performance impact. Unless this setting is given a value, queries are not profiled.
  • verbose enables a verbose logging mode that modifies mongod output and increases logging to include a greater number of events. Use this option only if you are experiencing an issue that is not reflected in the normal logging level. If you require additional verbosity, consider the following options:

     v = true
     vv = true
     vvv = true
     vvvv = true
     vvvvv = true

    Each additional level of v adds additional verbosity to the logging. The verbose option is equivalent to v = true.

  • diaglog enables diagnostic logging. Level 3 logs all read and write options.
  • objcheck forces mongod to validate all requests from clients upon receipt. Use this option to ensure that invalid requests are not causing errors, particularly when running a database with untrusted clients. This option may affect database performance.
  • cpu forces mongod to report the percentage of the last interval spent in write lock. The interval is typically 4 seconds, and each output line in the log includes both the actual interval since the last report and the percentage of time spent in write lock.