
Closures

1. What is a closure?

The "official" definition is: a closure is an expression (usually a function) that has free variables together with an environment that binds those variables, so that the variables become part of the expression. Few people can make sense of that on a first reading, because it is phrased so academically. In plain terms it means: every function in JavaScript is a closure. In practice, though, the closures produced by nested functions are more powerful, and they are what we usually mean by "closure". Look at the following code:

function a() {
    var i = 0;
    function b() {
        alert(++i);
    }
    return b;
}
var c = a();
c();

This code has two notable features:

  1. Function b is nested inside function a;
  2. Function a returns function b.

The reference relationships are illustrated in the figure:

(figure: jsclosure)

After var c = a() executes, the variable c actually points to function b. b uses the variable i, so calling c() pops up an alert showing the value of i (1 the first time). This code creates a closure. Why? Because the variable c, which lives outside function a, references function b inside function a. In other words:

When an inner function b of function a is referenced by something outside of a, what we usually call a "closure" is created.

To put it more thoroughly: a "closure" arises when you define another function inside a constructor body as a method of the target object, and that method in turn references temporary variables in the enclosing function body. As long as the target object keeps that method alive during its lifetime, it indirectly keeps alive the values of the temporary variables that the constructor used at the time it ran. Even though the original constructor call has finished and the names of those temporary variables have gone out of scope, the method of the target object can still reach their values, and those values can only be accessed through that method. Even if the same constructor is called again, it only produces a new object and new methods; the new temporary variables hold new values, entirely independent of those from the previous call.

To understand closures more deeply, let us continue by exploring what closures do and what effects they have.

2. What do closures do?

In short, after a finishes executing and returns, the closure prevents JavaScript's garbage collector (GC) from reclaiming the resources that a occupies, because executing a's inner function b depends on the variables in a. This is a very plain description of what closures do, neither precise nor rigorous, but easy to understand; grasping closures is a gradual process. In the example above, because of the closure, i in a continues to exist after a returns, so every call to c() increments i and then alerts its value.

Now imagine a different situation: if a did not return function b, things would be completely different. After a finished, b would not have been handed to the outside world; it would only be referenced by a, and a in turn would only be referenced by b. Since a and b reference each other but are not referenced from outside, both would be garbage collected. (JavaScript's garbage collection mechanism is covered in more detail later.)

3. Closures under the microscope

To understand closures and the relationship between function a and its nested function b more deeply, we need to introduce a few more concepts: the execution context of a function, the activation object (call object), scope, and the scope chain. We will walk through these concepts using the life of function a, from definition to execution.

  1. When function a is defined, the JavaScript interpreter sets a's scope chain to the "environment" in which a is defined. If a is a global function, the scope chain contains only the window object.
  2. When function a is executed, a enters the corresponding execution context.
  3. While the execution context is being created, a first gets a scope property, i.e. a's scope, whose value is the scope chain from step 1. That is, a.scope = a's scope chain.
  4. The execution context then creates an activation object (call object). The activation object is also an object with properties, but it has no prototype and cannot be accessed directly from JavaScript code. Once created, the activation object is added to the front of a's scope chain. At this point a's scope chain contains two objects: a's activation object and the window object.
  5. Next, an arguments property is added to the activation object; it holds the arguments that were passed when a was called.
  6. Finally, all of a's named parameters and a reference to the inner function b are added to a's activation object. In this step the definition of function b is completed, so, as in step 3, function b's scope chain is set to the environment in which b was defined, namely a's scope.

At this point the whole journey of function a, from definition to execution, is complete. a then returns a reference to b and assigns it to c. Because b's scope chain includes a reference to a's activation object, b can access all of the variables and functions defined in a. Function b is referenced by c, and function b depends on function a, so function a is not garbage collected after it returns.

When function b executes, the same steps are followed. So at execution time, b's scope chain contains three objects: b's activation object, a's activation object, and the window object, as shown below:

http://www.felixwoo.com/wp-content/uploads/attachments/200712/11_110522_scopechain.jpg

As the figure shows, when a variable is accessed inside function b, the search order is:

  1. Search b's own activation object first; if the variable exists there, return it; otherwise continue searching a's activation object, and so on up the chain until it is found.
  2. If function b has a prototype object, then after searching its own activation object it searches its own prototype object before continuing. This is JavaScript's variable lookup mechanism.
  3. If nothing is found anywhere along the scope chain, undefined is returned.

To summarize, this section introduced two important terms: the definition of a function and its execution. As noted above, the scope of a function is determined when the function is defined, not when it is executed (see steps 1 and 3). The following code illustrates the point:

function f(x) {
    var g = function () { return x; }
    return g;
}
var h = f(1);
alert(h());

In this code the variable h points to the anonymous function inside f (returned through g).

  • Suppose the scope of function h were determined when alert(h()) executes. Then h's scope chain at that moment would be: h's activation object -> alert's activation object -> window object.
  • Suppose instead that the scope of function h is determined when it is defined; that is, the anonymous function that h points to had its scope fixed at definition time. Then at execution time h's scope chain is: h's activation object -> f's activation object -> window object.

If the first assumption were true, the output would be undefined; if the second is true, the output is 1.

Running the code proves the second assumption correct: the scope of a function really is determined when the function is defined.

4. Typical uses of closures

  1. Protecting the variables inside a function. In the opening example, i in function a can only be accessed through function b and is unreachable any other way, so i is kept safe.
  2. Keeping a variable in memory. In the same example, because of the closure, i in function a stays in memory, so every call to c() increments i by 1.
  3. Implementing private properties and private methods in JavaScript (inaccessible from the outside) by protecting variables this way. Recommended reading: http://javascript.crockford.com/private.html. Private properties and methods cannot be accessed outside the Constructor:
    function Constructor(...) {
        var that = this;
        var membername = value;
        function membername(...) {...}
    }

These three points are the most basic uses of closures, and many classic patterns derive from them.

5. JavaScript's garbage collection mechanism

In JavaScript, if an object is no longer referenced, it is reclaimed by the GC. If two objects reference each other and are no longer referenced by any third party, both of them are reclaimed as well. Function a is referenced by b, and b is referenced by c outside of a; that is why function a is not reclaimed after it executes.

6. Conclusion

Understanding closures is a necessary step on the road to becoming an advanced JavaScript programmer; only by understanding how they are interpreted and executed can you write safer and more elegant code. If you have any suggestions or questions about this article, feel free to leave a comment.

Private Members in JavaScript: http://javascript.crockford.com/private.html

Closures: concept, form, and application (闭包的概念、形式与应用): http://www.ibm.com/developerworks/cn/linux/l-cn-closure/

The beauty of closures (闭包之美): http://www.ituring.com.cn/article/1317

Understanding C# closures (理解C#闭包): http://www.cnblogs.com/jiejie_peng/p/3701070.html

Multithreading Demystified

Introduction

This article explains the concepts behind implementing multi-threaded applications in .NET through a working code example. It covers the following topics in brief:

  1. Concepts of threading
  2. How to implement multi-threading in .NET
  3. Concepts behind implementing Thread Safe applications
  4. Deadlocks

What is a Process?

A process is an Operating System context in which an executable runs. It is used to segregate virtual address space, threads, object handles (pointers to resources such as files), and environment variables. Processes have attributes such as base priority class and maximum memory consumption.

Meaning…

  1. A process is a memory slice that contains resources
  2. An isolated task performed by the Operating System
  3. An application that is being run
  4. A process owns one or more Operating System threads

Technically, on a 32-bit system a process is given a contiguous virtual address space of 4 GB. This memory is private to the process and cannot be accessed by other processes.

What is a Thread?

A thread is an instruction stream executing within a process. All threads execute within a process and a process can have multiple threads. All threads of a process use their process’ virtual address space. The thread is a unit of Operating System scheduling. The context of the thread is saved / restored as the Operating System switches execution between threads.


What is Multi-Threading?

Multi-threading is when a process has multiple threads active at the same time. This allows either the appearance of simultaneous thread execution (through time slicing) or actual simultaneous thread execution on hyper-threaded and multi-processor systems.

Multi-Threading – Why and Why Not

Why multi-thread:

  • To keep the UI responsive.
  • To improve performance (for example, concurrent operation of CPU bound and I/O bound activities).

Why not multi-thread:

  • Overhead can reduce actual performance.
  • Complicates code, increases design time, and raises the risk of bugs.

Thread Pool

The thread pool provides your application with a pool of worker threads that are managed by the system. The threads in the managed thread pool are background threads. A ThreadPool thread will not keep an application running after all foreground threads have exited. There is one thread pool per process. The thread pool has a default size of 25 threads per available processor. The number of threads in the pool can be changed by the SetMaxThreads method. Each thread uses the default stack size and runs at the default priority.
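
The paragraph above mentions SetMaxThreads without showing it in code. As a rough sketch (not part of the article's sample project; the cap of 50 is an arbitrary illustrative value, and it assumes using System.Threading), the pool can be inspected, capped, and handed a work item like this:

int workerThreads, ioThreads;
ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);
Console.WriteLine("Default maximum: {0} worker / {1} I/O threads", workerThreads, ioThreads);

ThreadPool.SetMaxThreads(50, ioThreads);   // hypothetical cap of 50 worker threads
ThreadPool.QueueUserWorkItem(state => Console.WriteLine("Running on a pool thread"));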

Threading in .NET

In .NET, threading is achieved by the following methods:

  1. Thread class
  2. Delegates
  3. Background Worker
  4. ThreadPool
  5. Task
  6. Parallel

In the sections below, we will see how threading can be implemented by each of these methods.

In a nutshell, multi-threading is a technique by which an application can be made to run multiple tasks concurrently, thereby utilizing the computing power of the processor and keeping the UI responsive. (The original article illustrates this with a block diagram.)

The code

The project is a simple WinForms application which demonstrates the use of threading in .NET by three methods:

  1. Delegates
  2. Thread class
  3. Background Worker

The application executes a heavy operation asynchronously so that the UI is not blocked. The same heavy operation is achieved by the above three ways to demonstrate their purpose.

The “Heavy” Operation

In the real world, a heavy operation can be anything from polling a database to streaming a media file. For this example, we have simulated a heavy operation by appending values to a string. Because strings are immutable, each append causes a new string to be created while the old one is discarded (this is handled by the CLR). Done a huge number of times, this can consume a lot of resources (which is why we normally use StringBuilder.Append instead). In the UI, set the up-down counter to specify the number of times the string is going to be appended.

We have a Utility class in the backend which has a LoadData() method. It also has a delegate with a signature matching that of LoadData().

class Utility
{
    public delegate string delLoadData(int number);
    public static delLoadData dLoadData;

    public Utility()
    {

    }

    public static string LoadData(int max)
    {
        string str = string.Empty;

        for (int i = 0; i < max; i++)
        {
            str += i.ToString();
        }

        return str;
    }
}

The Synchronous Call

When you click the “Get Data Sync” button, the operation is run in the same thread as that of the UI thread (blocking call). Hence, for the time the operation is running, the UI will remain unresponsive.

private void btnSync_Click(object sender, EventArgs e)
{
    this.Cursor = Cursors.WaitCursor;
    this.txtContents.Text = Utility.LoadData(upCount);
    this.Cursor = Cursors.Default;
}

The Asynchronous Call

Using Delegates (Asynchronous Programming Model)

If you choose the radio button “Delegates”, the LoadData() method is called asynchronously using delegates. We first initialize the delegate type delLoadData with the address of Utility.LoadData(). Then we call the BeginInvoke() method of the delegate. In the .NET world, any method named BeginXXX or EndXXX follows the asynchronous pattern. For example, delegate.Invoke() calls the method on the same thread, while delegate.BeginInvoke() calls the method on a separate thread.

The BeginInvoke() takes three arguments:

  1. Parameter to be passed to the Utility.LoadData() method
  2. Address of the callback method
  3. State of the object

Utility.dLoadData = new Utility.delLoadData(Utility.LoadData);
Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

The Callback

Once we spawn an operation in a thread, we have to know what is happening in that operation. In other words, we should be notified when it has completed its operation. There are three ways of knowing whether the operation has completed:

  1. Callback
  2. Polling
  3. Wait until done

In our project, we use a callback method to trap the finishing of the thread. This is nothing but the method whose name you passed while calling the BeginInvoke() method. It tells the thread to come back and invoke that method when it has finished doing what it was supposed to do.

Once a method is fired in a separate thread, you may or may not be interested in knowing what that method returns. If the method does not return anything, it is a "fire and forget" call. In that case, you would not be interested in the callback and would pass the callback parameter as null.

Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

In our case, we need a callback method and hence we have passed the name of our callback method, which is coincidentally CallBack().

private void CallBack(IAsyncResult asyncResult)
{
    string result = string.Empty;

    if (this.cancelled)
        result = "Operation Cancelled";
    else
        result = Utility.dLoadData.EndInvoke(asyncResult);

    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

The signature of a callback method is – void MethodName(IAsyncResult asyncResult).

The IAsyncResult contains the necessary information about the thread. The returned data can be trapped as follows:

result = Utility.dLoadData.EndInvoke(asyncResult);

The polling method (not used in this project) is like the following:

IAsyncResult r = Utility.dLoadData.BeginInvoke(upCount, CallBack, null);
while (!r.IsCompleted)
{
    //do work
}
result = Utility.dLoadData.EndInvoke(r);

The wait-until-done, as the name suggests, is to wait until the operation is completed.

IAsyncResult r = Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

//do work
result = Utility.dLoadData.EndInvoke(r); // blocks until the operation completes

Updating the UI

Now that we have trapped the end of the operation and retrieved the result that LoadData() returned, we need to update the UI with that result. But there is a problem. The text box that needs to be updated lives on the UI thread, while the result has been returned in the callback, which runs on the background thread that executed the operation. So the UI thread is different from the callback thread. In other words, the text box cannot be updated with the result as shown below:

this.txtContents.Text = text;

Executing this line in the callback method will result in a cross thread system exception. We have to form a bridge between the UI thread and the background thread to update the result in the textbox. That is done using the Invoke() or BeginInvoke() methods of the form.

I have defined a method which will update the UI:

private void UpdateUI(bool cancelled, string text)
{
    this.btnAsync.Enabled = true;
    this.btnCancel.Enabled = false;
    this.txtContents.Text = text;
}

Define a delegate to the above method:

private delegate void delUpdateUI(bool value, string text);
dUpdateUI = new delUpdateUI(UpdateUI);

Call the BeginInvoke() method of the form:

object[] args = { this.cancelled, result };
this.BeginInvoke(dUpdateUI, args);

One thing to be noted here is that once a thread is spawned using a delegate, it cannot be cancelled, suspended, or aborted. We have no control on that thread.

Using the Thread Class

The same operation can be achieved using the Thread class. The advantage is that the Thread class gives you more power over suspending and cancelling the operation. The Thread class resides in the namespace System.Threading.

We have a private method LoadData() which is a wrapper to our Utility.LoadData().

private void LoadData()
{
    string result = Utility.LoadData(upCount);
    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

The reason we have this wrapper is that Utility.LoadData() requires an argument, while the ThreadStart delegate we use to initialize the thread takes none.

doWork = new Thread(new ThreadStart(this.LoadData));
doWork.Start();

The ThreadStart delegate takes no parameters and returns void. In case we need to pass an argument, we have to use a ParameterizedThreadStart delegate instead. Unfortunately, the ParameterizedThreadStart delegate can take only an object as its parameter, so the argument has to be cast back to the expected type, as sketched below.

doWork = new Thread(new ParameterizedThreadStart(this.LoadData));
doWork.Start(parameter);
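
For completeness, here is a sketch of what the parameterized target could look like; this LoadData(object) overload is hypothetical (it is not in the article's project) and simply mirrors the existing wrapper, with the cast back from object added:

private void LoadData(object parameter)
{
    int count = (int)parameter;              // cast the object argument back to the expected type
    string result = Utility.LoadData(count);
    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}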

The Thread class gives you a lot of power over the thread, such as Suspend, Abort, Interrupt, ThreadState, and so on.
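
As a rough illustration (not from the sample project, and assuming the workload shown in the comment), that extra control looks roughly like this:

Thread worker = new Thread(() => Utility.LoadData(100000));  // assumed workload
worker.IsBackground = true;                // background thread: will not keep the application alive
worker.Start();

Console.WriteLine(worker.ThreadState);     // e.g. Running or WaitSleepJoin
worker.Abort();                            // requests termination by raising ThreadAbortException in the thread
worker.Join();                             // blocks the caller until the thread has finished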

Using BackgroundWorker

The BackgroundWorker is a control which helps to make threading simple. The main feature of the BackgroundWorker is that it can report progress asynchronously which can be used to update a status bar, keeping the UI updated about the progress of the operation in a visual way.

To do this, we need to set the following properties to true. These are false by default.

  • WorkerReportsProgress
  • WorkerSupportsCancel

The control has three main events: DoWork, ProgressChanged, and RunWorkerCompleted. We need to register these events during initialization:

this.bgCount.DoWork += new DoWorkEventHandler(bgCount_DoWork);
this.bgCount.ProgressChanged +=
     new ProgressChangedEventHandler(bgCount_ProgressChanged);
this.bgCount.RunWorkerCompleted +=
     new RunWorkerCompletedEventHandler(bgCount_RunWorkerCompleted);

The operation can be started by invoking the RunWorkerAsync() method as shown below:

this.bgCount.RunWorkerAsync();

Once this is invoked, the following method is invoked for processing the operation:

void bgCount_DoWork(object sender, DoWorkEventArgs e)
{
    string result = string.Empty;
    if (this.bgCount.CancellationPending)
    {
        e.Cancel = true;
        e.Result = "Operation Cancelled";
    }
    else
    {
        for (int i = 0; i < this.upCount; i++)
        {
            result += i.ToString();
            this.bgCount.ReportProgress((i * 100) / this.upCount); // multiply before dividing to avoid integer truncation
        }
        e.Result = result;
    }
}

The CancellationPending property can be checked to see if the operation has been cancelled. The operation can be cancelled by calling:

this.bgCount.CancelAsync();

The below line reports the percentage progress:

this.bgCount.ReportProgress((i * 100) / this.upCount);

Once this is called, the below method is invoked to update the UI:

void bgCount_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    if (this.bgCount.CancellationPending)
        this.txtContents.Text = "Cancelling....";
    else
        this.progressBar.Value = e.ProgressPercentage;
}

Finally, the bgCount_RunWorkerCompleted method is called to complete the operation:

void bgCount_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    this.btnAsync.Enabled = true;
    this.btnCancel.Enabled = false;
    this.txtContents.Text = e.Result.ToString();
}

Thread Pools

It is not recommended that programmers create as many threads as possible on their own. Creating threads is an expensive operation; there are overheads in terms of memory and computing. Also, the computer can only run one thread at a time per CPU core. So if there are multiple threads on a single-core system, the computer can only cater to one thread at a time. It does so by allocating time "slices" to each thread and working through the available threads in a round-robin manner (which also depends on their priority). This is called context switching, which is itself another overhead. So if we have too many threads practically doing nothing or sitting idle, we only accumulate overheads in terms of memory consumption, context switching, and so on, without any net gain. As developers we need to be cautious when creating threads and diligent about the number of existing threads we are working with.

Fortunately the CLR has a managed library that does this for us: the ThreadPool class. This class manages a number of threads in its pool and decides whether to create or destroy threads based on the application's needs. The thread pool has no threads to start with. As requests start queuing up, it starts creating threads. If we call SetMinThreads, the thread pool quickly provides that many threads as work items start queuing up. When the thread pool finds that threads have been idle or asleep for a long time, it retires them appropriately.

So thread pools are a great way to tap into the pool of background threads maintained by the runtime. The ThreadPool class allows us to queue a work item, which is then delegated to a background thread.

WaitCallback threadCallback = new WaitCallback(HeavyOperation);

for (int i = 0; i < 3; i++)
{
  System.Threading.ThreadPool.QueueUserWorkItem(threadCallback, i);
}

The heavy operation is defined as:

 

private static void HeavyOperation(object WorkItem)
{
  System.Threading.Thread.Sleep(5000);
  Console.WriteLine("Executed work Item {0}", (int)WorkItem);
}

 

Notice the signature of the WaitCallback delegate: it must take an object as its method parameter. This parameter is generally used to pass state information between threads.

Now that we know how to delegate work to a background thread using the ThreadPool, we must explore the callback techniques that go with it. We capture callbacks by using a WaitHandle. The WaitHandle class has two derived classes: AutoResetEvent and ManualResetEvent.

public static void Demo_ResetEvent()
{
  Server s = new Server();
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
     s.DoWork();                

   }));

   ((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).WaitOne();
    Console.WriteLine("Work complete signal received");
}

Here we have a Global class that maintains singleton instances of the wait handles (the Handles enum used as the key is not shown in the original article, so a minimal definition is assumed below).

// The Handles enum is not shown in the original article; a minimal definition is assumed:
public enum Handles { AutoResetEvent, ManualResetEvent }

public static class Global
{
  static WaitHandle w = null;
  static AutoResetEvent ae = new AutoResetEvent(false);
  static ManualResetEvent me = new ManualResetEvent(false);
  public static WaitHandle GetHandle(Handles Type)
  {
    switch (Type)
    {
      case Handles.ManualResetEvent:
         w = me;
         break;
      case Handles.AutoResetEvent:
         w = ae;
         break;
      default:
         break;
    }
    return w;
  }
}

The WaitOne method blocks execution until the wait handle is set from the background thread.

 

public void DoWork()
{
  Console.WriteLine("Work Starting ...");
  Thread.Sleep(5000);
  Console.WriteLine("Work Ended ...");
  ((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).Set();
}

 

The AutoResetEvent resets itself automatically after being set. It is analogous to a toll gate on an expressway where two or more lanes merge so that vehicles can only pass one at a time: when a vehicle approaches, the gate is set, allowing it to pass through, and then immediately resets automatically for the next vehicle.

The following example elaborates on the AutoResetEvent. Consider a server with a method DoWork(). This method is a heavy operation, and the application needs to update a log file after calling it. Several threads access this method asynchronously, so we must make sure that the log update is thread safe, that is, only available to one thread at a time.

public void DoWork(int threadID, int waitSignal)
{
  Thread.Sleep(waitSignal);
  Console.WriteLine("Work Complete by Thread : {0} @ {1}", threadID, DateTime.Now.ToString("hh:mm:ss"));
  ((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).Set();

}
public void UpdateLog(int threadID)
{
  if(((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).WaitOne(5000))
       Console.WriteLine("Update Log File by thread : {0} @ {1}", threadID, DateTime.Now.ToString("hh:mm:ss"));
  else
       Console.WriteLine("Time out");
}

We create two threads and delegate the DoWork() method to each simultaneously. Then we call UpdateLog(). The code in the update-log method waits for each thread to complete its respective task before updating.

public static void Demo_AutoResetEvent()
{
  Console.WriteLine("Demo Autoreset event...");
  Server s = new Server();

  Console.WriteLine("Start Thread 1..");
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
     s.DoWork(1, 4000);  

  }));            

  Console.WriteLine("Start Thread 2..");
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
     s.DoWork(2, 4000);                

  }));

  s.UpdateLog(1);
  s.UpdateLog(2);
}

The ManualResetEvent differs from the AutoResetEvent in that it stays signaled until we reset it manually; unlike the AutoResetEvent, it does not reset automatically. Consider a server which sends messages continuously in a background thread. The server runs a continuous loop awaiting the signal to send messages. When the handle is set, the server starts sending messages. When the wait handle is reset, the server stops, and the process can be repeated.

public void SendMessages(bool monitorSignal)
{
  int counter = 1;
  while (monitorSignal)
  {
     if (((ManualResetEvent)Global.GetHandle(Handles.ManualResetEvent)).WaitOne())
     {
        Console.WriteLine("Sending message {0}", counter);
        Thread.Sleep(3000);
        counter += 1;
     }
  }
}
public static void Demo_ManualResetEvent()
{
  Console.WriteLine("Demo ManualReset event...");
  Server s = new Server();
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
    s.SendMessages(true);
  }));

  Console.WriteLine("Press 1 to send messages");
  Console.WriteLine("Press 2 to stop messages");

  while (true)
  {
    int input = Convert.ToInt16(Console.ReadLine());                              

    switch (input)
    {
      case 1:
         Console.WriteLine("Starting to send message ...");
         ((ManualResetEvent)Global.GetHandle(Handles.ManualResetEvent)).Set();
         break;
      case 2:
         ((ManualResetEvent)Global.GetHandle(Handles.ManualResetEvent)).Reset();
         Console.WriteLine("Message Stopped ...");
         break;
      default:
         Console.WriteLine("Invalid Input");
         break;
    }
  }
}

The Task Class

.NET 4.0 came up with an extension of the ThreadPool in the form of the Task class. The concept remains pretty much the same, with the exception that we now have the power to cancel a task, wait on a task, and query the task's status from time to time to check its progress. Consider the following example, where we have three methods:

static void DoHeavyWork(CancellationToken ct)
{
    try
    {
        while (true)
        {
            ct.ThrowIfCancellationRequested();
            Console.WriteLine("Background thread working for task 3..");
            Thread.Sleep(2000);
            if (ct.IsCancellationRequested)
            {
                ct.ThrowIfCancellationRequested();
            }
        }
    }
    catch (OperationCanceledException ex)
    {
        Console.WriteLine("Exception : " + ex.Message);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Exception : " + ex.Message);
    }
}

static void DoHeavyWork(int n)
{
  Thread.Sleep(5000);
  Console.WriteLine("Operation complete for thread {0}", Thread.CurrentThread.ManagedThreadId);
}
static int DoHeavyWorkWithResult(int num)
{
  Thread.Sleep(5000);
  Console.WriteLine("Operation complete for thread {0}", Thread.CurrentThread.ManagedThreadId);
  return num;
}

We have three tasks designed to run these three methods. The first task completes without returning a result. The second task completes and returns a result, while the third is cancelled before completion.

            try
            {
                Console.WriteLine(DateTime.Now);
                CancellationTokenSource cts1 = new CancellationTokenSource();
                CancellationTokenSource cts2 = new CancellationTokenSource();
                CancellationTokenSource cts3 = new CancellationTokenSource();

                Task t1 = new Task((o) => DoHeavyWork(2), cts1.Token);

                Console.WriteLine("Starting Task 1");
                Console.WriteLine("Thread1 state {0}", t1.Status);
                t1.Start();

                Console.WriteLine("Starting Task 2");
                Task<int> t2 = Task<int>.Factory.StartNew((o) => DoHeavyWorkWithResult(2), cts2.Token);

                Console.WriteLine("Starting Task 3");
                Task t3 = new Task((o) => DoHeavyWork(cts3.Token), cts3);
                t3.Start();               

                Console.WriteLine("Thread1 state {0}", t1.Status);
                Console.WriteLine("Thread2 state {0}", t2.Status);
                Console.WriteLine("Thread3 state {0}", t3.Status);

                // wait for task 1 to be over
                t1.Wait();

                Console.WriteLine("Task 1 complete");

                Console.WriteLine("Thread1 state {0}", t1.Status);
                Console.WriteLine("Thread2 state {0}", t2.Status);
                Console.WriteLine("Thread3 state {0}", t3.Status);

                //cancel task 3
                Console.WriteLine("Task 3 is : {0} and cancelling...", t3.Status);
                cts3.Cancel();

                // wait for task 2 to be over
                t2.Wait();

                Console.WriteLine("Task 2 complete");

                Console.WriteLine("Thread1 state {0}", t1.Status);
                Console.WriteLine("Thread2 state {0}", t2.Status);
                Console.WriteLine("Thread3 state {0}", t3.Status);

                Console.WriteLine("Result {0}", t2.Result);
                Console.WriteLine(DateTime.Now);

                t3.Wait();

                Console.WriteLine("Task 3 complete");
                Console.WriteLine(DateTime.Now);
            }

            catch (Exception ex)
            {
                Console.WriteLine("Exception : " + ex.Message.ToString());
            }
            finally
            {
                Console.Read();
            }

Parallel Programming with .NET 4.0 (Time Slicing)

.NET 4.0 came with a cool feature: parallel processing. Most of the threading examples that we saw above were only about delegating bulk jobs to idle threads; the computer was still processing one thread at a time per core in a round-robin way. In a nutshell, we were not multitasking in the true sense of the word. That becomes possible with the Parallel class.

Consider an Employee class which has a heavy operation, ProcessEmployeeInformation:

class Employee
{
  public Employee(){}

  public int EmployeeID {get;set;}

  public void ProcessEmployeeInformation()
  {
    Thread.Sleep(5000);
    Console.WriteLine("Processed Information for Employee {0}",EmployeeID);
  }
}

We create 8 instances and fire parallel requests. On a 4 core processor, 4 of the requests will be processed simultaneously and the rest will be queued waiting for any thread to free up.

 List<Employee> empList = new List<Employee>()
 {
   new Employee(){EmployeeID=1},
   new Employee(){EmployeeID=2},
   new Employee(){EmployeeID=3},
   new Employee(){EmployeeID=4},
   new Employee(){EmployeeID=5},
   new Employee(){EmployeeID=6},
   new Employee(){EmployeeID=7},
   new Employee(){EmployeeID=8},
 };

 Console.WriteLine("Start Operation {0}", DateTime.Now);
 System.Threading.Tasks.Parallel.ForEach(empList, (e) => e.ProcessEmployeeInformation());

We can control or limit the number of concurrent tasks by using the MaxDegreeOfParallelism property. If it is set to -1, there is no limit.

System.Threading.Tasks.Parallel.For(0, 8, new ParallelOptions() { MaxDegreeOfParallelism = 4 }, (o) =>
       {
          Thread.Sleep(5000);
          Console.WriteLine("Thread ID - {0}", Thread.CurrentThread.ManagedThreadId);
        });

The problem with parallelism is that if we fire a set of requests, we have no guarantee that the responses will come back in the same order; the order in which the items get processed is non-deterministic. The AsOrdered operator (from PLINQ) helps us ensure exactly that: the inputs can be processed in any order, but the output is delivered in the original order.

Console.WriteLine("Start Operation {0}", DateTime.Now);
var q = from e in empList.AsParallel().AsOrdered()
        select new { ID = e.EmployeeID };

foreach (var item in q)
{
  Console.WriteLine(item.ID);
}
Console.WriteLine("End Operation {0}", DateTime.Now);

Web Applications

Threading in ASP.NET web applications can be achieved by sending an AJAX request from the client to the server. This lets the client request data from the server without blocking the UI. When the data is ready, the client is notified via a callback and only the part of the page concerned is updated, keeping the client agile and responsive. The most common way to achieve this is with ICallbackEventHandler. Refer to the project Demo.Threading.Web. I have the same interface as the Windows version, with a text box to enter a number and a text box to show the data. The Load Data button performs the previously discussed "heavy" operation.

<div>
    <asp:Label runat="server" >Enter Number</asp:Label>
    <input type="text" id="inputText" /><br /><br />
    <asp:TextBox ID="txtContentText" runat="server" TextMode="MultiLine" /><br /><br />
    <input type="button" id="LoadData" title="LoadData"
           onclick="LoadHeavyData()" value="LoadData" />
</div>

I have a JavaScript function LoadHeavyData() which is called on the click event of the button. This function calls the function CallServer with parameters.

<script type="text/ecmascript">
    function LoadHeavyData() {

        var lb = document.getElementById("inputText");
        CallServer(lb.value.toString(), "");
    }

    function ReceiveServerData(rValue) {
        document.getElementById("txtContentText").innerHTML = rValue;
    }
</script>

The CallServer function is registered with the server in the script that is defined at the page load event of the page:

protected void Page_Load(object sender, EventArgs e)
{
    String cbReference = Page.ClientScript.GetCallbackEventReference(this,
                         "arg", "ReceiveServerData", "context");

    String callbackScript;
    callbackScript = "function CallServer(arg, context)" +
                     "{ " + cbReference + ";}";

    Page.ClientScript.RegisterClientScriptBlock(this.GetType(),
                      "CallServer", callbackScript, true);
}

The above script defines and registers a CallServer function. On calling the CallServer function, the RaiseCallbackEvent method of ICallbackEventHandler is invoked. This method invokes the LoadData() method, which performs the heavy operation and returns the data.

public void RaiseCallbackEvent(string eventArgument)
{
    if (eventArgument!=null)
    {
        Result = this.LoadData(Convert.ToUInt16(eventArgument));
    }
}

private string LoadData(int num)
{
    // call Heavy data
    return Utility.LoadData(num);
}

Once LoadData() is executed, the GetCallbackResult() method of ICallbackEventHandler is executed, which returns the data:

public string GetCallbackResult()
{
    return Result;
}

Finally, the ReceiveServerData() function is called to update the UI. The ReceiveServerData function is registered as the callback for the CallServer() function in the page load event.

function ReceiveServerData(rValue) {
    document.getElementById("txtContentText").innerHTML = rValue;
}

WPF

Typically, WPF applications start with two threads:

  1. Rendering Thread – runs in the background handling low-level tasks.
  2. UI Thread – receives input, handles events, paints the screen, and runs application code.

Threading in WPF is achieved in the same way as in WinForms, with the exception that we use the Dispatcher object to bridge UI updates from a background thread. The UI thread queues work items inside an object called the Dispatcher. The Dispatcher selects work items on a priority basis and runs each one to completion. Every UI thread has one Dispatcher, and each Dispatcher executes items on exactly one thread. When expensive work is completed on a background thread and the UI needs to be updated with the result, we use the Dispatcher to queue the item into the task list of the UI thread.

Consider the following example, where we have a Grid split into two parts. In the first part we have a property called ViewModelProperty bound to the view model, and in the second part we have a bound collection, ViewModelCollection. We also have a button which updates these properties. To simulate "heavy work", we put the thread to sleep before updating the properties.

<DockPanel>
    <TextBlock Text="View Model Property: " DockPanel.Dock="Left"/>
    <TextBlock Text="{Binding ViewModelProperty}" DockPanel.Dock="Right"/>
</DockPanel>
<ListBox Grid.Row="1" ItemsSource="{Binding ViewModelCollection}"/>
<Button Grid.Row="2" Content="Change Property" Width="100" Command="{Binding ChangePropertyCommand}"/>

Here is the view model. Notice the method DoWork(), which is called via a background thread. As discussed, we have two properties: ViewModelProperty and ViewModelCollection. The property raises change notifications through the INotifyPropertyChanged pattern, the collection is an ObservableCollection (which implements INotifyCollectionChanged), and the view model itself inherits from DispatcherObject. The main purpose of this example is to show how a data change from a background thread is passed on to the UI. In the DoWork() method, the change to ViewModelProperty is handled automatically, but the addition to the collection is queued onto the UI thread from the background thread via the Dispatcher object. The key point to note is that while the WPF runtime takes care of property-changed notifications raised from a background thread, notifications from a change in a collection have to be marshalled by the programmer.

        public ViewModel()
        {
            ChangePropertyCommand = new MVVMCommand((o) => DoWork(), (o)=> DoWorkCanExecute());
            ViewModelCollection = new ObservableCollection<string>();
            ViewModelCollection.CollectionChanged +=
                new System.Collections.Specialized.NotifyCollectionChangedEventHandler(ViewModelCollection_CollectionChanged);
        }

        public ICommand ChangePropertyCommand { get; set; }

        private string viewModelProperty;
        public string ViewModelProperty
        {
            get { return viewModelProperty; }
            set
            {
                if (value!=viewModelProperty)
                {
                    viewModelProperty = value;
                    OnPropertyChanged("ViewModelProperty");
                }
            }
        }

        private ObservableCollection<string> viewModelCollection;
        public ObservableCollection<string> ViewModelCollection
        {
            get { return viewModelCollection; }
            set
            {
                if (value!= viewModelCollection)
                {
                    viewModelCollection = value;
                }
            }

        }

        public void DoWork()
        {
            ThreadPool.QueueUserWorkItem((o) =>
                {
                    Thread.Sleep(5000);
                    ViewModelProperty = "New VM Property";
                    Dispatcher.Invoke(DispatcherPriority.Background,
                        (SendOrPostCallback)delegate
                        {
                            ViewModelCollection.Add("New Collection Item");
                        },null);
                });
        }

        private bool DoWorkCanExecute()
        {
            return true;
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string PropertyName)
        {
            if (PropertyChanged!=null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(PropertyName));
            }
        }

    }

Thread Safety

A talk on threads is never complete without talking about thread safety. Consider a resource being used by multiple threads, that is, shared across threads with control passing between them. Without protection, the resource behaves in a non-deterministic way and the results go haywire. That is why we need to implement "thread safe" applications, so that a resource is only available to one thread at any point in time. The following are the ways of implementing thread safety in .NET:
  • Interlocked – The Interlocked class makes an operation atomic. For example, simple addition and subtraction are three-step operations inside the processor. When multiple threads access the same resource with these operations, the results can get confusing, because one thread can be preempted after executing the first two steps. Another thread can then execute all three steps. When the first thread resumes execution, it overwrites the value in the instance variable, and the effect of the operation performed by the second thread is lost. Hence we use the Interlocked class, which treats these operations as atomic, making them thread safe. E.g.: Increment, Decrement, Add, Read, Exchange, CompareExchange.
    System.Threading.Interlocked.Increment(ref counter); // counter is a shared int field
  • Monitor – The Monitor class is used to lock an object that might otherwise be vulnerable to the perils of multiple threads accessing it concurrently.
    if (Monitor.TryEnter(this, 300)) {
        try {
            // code protected by the Monitor here.
        }
        finally {
            Monitor.Exit(this);
        }
    }
    else {
        // Code if the attempt times out.
    }
  •  Locks – The lock statement is an enhanced version of the Monitor. In other words, it encapsulates the features of the Monitor without your having to exit explicitly, as is the case with the Monitor. The most popular example is the GetInstance() method of a Singleton class, which can be called by various modules concurrently. Thread safety is implemented by locking that block of code with an object, syncLock. Note that the object used to lock is similar to a real-world key to a lock: if two or more callers hold the key, they can each open the lock and access the underlying resource. Hence we need to make sure that the key (the object in this case) is never shared, so it is best to make the object a private member of the class.
    static object syncLock = new object();
    
    if (_instance == null)
    {
        lock (syncLock)
        {
            if (_instance == null)
            {
                _instance = new LoadBalancer();
            }
        }
    }
  • Reader-Writer Lock – The lock can be acquired by an unlimited number of concurrent readers, or exclusively by a single writer. This can provide better performance than a Monitor when most accesses are reads and writes are infrequent and of short duration. At any point in time, readers and writers queue up separately. When a writer thread has the lock, the readers queue up and wait for the writer to finish; when the readers have the lock, the writing threads queue up separately. Readers and writers alternate to get the job done. The code below explains this in detail. We have two methods, ReadFromCollection and WriteToCollection, to read from and write to a collection respectively. Note the use of the methods AcquireReaderLock and AcquireWriterLock; these methods hold the thread until the reader or writer lock is free.
    // Assumed declarations, not shown in the original excerpt:
    static ReaderWriterLock rwLock = new ReaderWriterLock();
    static List<int> myCollection = new List<int>();

    static void Main(string[] args)
            {
                // Thread 1 writing
                new Thread(new ThreadStart(() =>
                    {
                        WriteToCollection(new int[]{1,2,3});
    
                    })).Start();
    
                // Thread 2 Reading
                new Thread(new ThreadStart(() =>
                {
                    ReadFromCollection();
                })).Start();
    
                // Thread 3 Writing
                new Thread(new ThreadStart(() =>
                {
                    WriteToCollection(new int[] { 4, 5, 6 });
    
                })).Start();
    
                // Thread 4 Reading
                new Thread(new ThreadStart(() =>
                {
                    ReadFromCollection();
                })).Start();            
    
                Console.ReadLine();
            }
    
            static void ReadFromCollection()
            {
                rwLock.AcquireReaderLock(5000);
                try
                {
                    Console.WriteLine("Read Lock acquired by thread : {0}  @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
                    Console.Write("Collection : ");
                    foreach (int item in myCollection)
                    {
                        Console.Write(item + ", ");
                    }
                    Console.Write("\n");
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Exception : " + ex.Message);
                }
                finally
                {
                    Console.WriteLine("Read Lock released by thread : {0}  @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
                    rwLock.ReleaseReaderLock();
    
                }
            }
    
            static void WriteToCollection(int[] num)
            {
                rwLock.AcquireWriterLock(5000);
                try
                {
                    Console.WriteLine("Write Lock acquired by thread : {0}  @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
                    myCollection.AddRange(num);
                    Console.WriteLine("Written to collection ............: {0}", DateTime.Now.ToString("hh:mm:ss"));
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Exception : " + ex.Message);
                }
                finally
                {
                    Console.WriteLine("Write Lock released by thread : {0}  @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
                    rwLock.ReleaseWriterLock();
                }
            }
  • Mutex – A Mutex is used to share a lock across the operating system, between processes. A good example is detecting whether multiple instances of the same application are running concurrently; see the sketch below.
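
As a hedged sketch of that single-instance check (the mutex name is made up and this snippet is not part of the article's project), a named Mutex can be taken at startup and the createdNew flag tells us whether another instance already owns it:

bool createdNew;
using (Mutex singleInstance = new Mutex(true, @"Global\MyApp.SingleInstance", out createdNew))
{
    if (!createdNew)
    {
        Console.WriteLine("Another instance is already running.");
        return;
    }

    // ... run the application while the named mutex is held ...
    Console.ReadLine();
}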

There are other ways of implementing thread safety. Please refer to MSDN for further information.

Deadlock

A discussion on how to create a thread safe application can never be complete without touching on the concept of deadlocks. Let’s look at what that is.

A deadlock is a situation in which two or more threads each hold a lock that the other is waiting for, so each waits for the other to let go. Such a situation results in the operation being stuck indefinitely. Deadlocks can be avoided by careful programming. Example:
  • Thread A locks object A
  • Thread A locks object B
  • Thread B locks object B
  • Thread B locks object A

Thread A waits for Thread B to release object B, and Thread B waits for Thread A to release object A. Consider the example below, where we have a class DeadLock with two methods, OperationA and OperationB, that take nested locks on two objects. We will get a deadlock when we fire two threads running OperationA and OperationB simultaneously.

public class DeadLock
{
 static object lockA = new object();
 static object lockB = new object();

 public void OperationA()
 {
  lock (lockA)
  {
   Console.WriteLine("Thread {0} has locked Object A", Thread.CurrentThread.ManagedThreadId);
   lock (lockB)
   {
    Console.WriteLine("Thread {0} has locked Object B", Thread.CurrentThread.ManagedThreadId);
   }
   Console.WriteLine("Thread {0} has released Object B", Thread.CurrentThread.ManagedThreadId);
  }
  Console.WriteLine("Thread {0} has released Object A", Thread.CurrentThread.ManagedThreadId);
 }

 public void OperationB()
 {
  lock (lockB)
  {
   Console.WriteLine("Thread {0} has locked Object B", Thread.CurrentThread.ManagedThreadId);
   lock (lockA)
   {
    Console.WriteLine("Thread {0} has locked Object A", Thread.CurrentThread.ManagedThreadId);
   }
   Console.WriteLine("Thread {0} has released Object A", Thread.CurrentThread.ManagedThreadId);
  }
  Console.WriteLine("Thread {0} has released Object B", Thread.CurrentThread.ManagedThreadId);
 }
}

DeadLock deadLock = new DeadLock();

Thread tA = new Thread(new ThreadStart(deadLock.OperationA));
Thread tB = new Thread(new ThreadStart(deadLock.OperationB));

Console.WriteLine("Starting Thread A");
tA.Start();

Console.WriteLine("Starting Thread B");
tB.Start();

Worker Threads vs I/O Threads

The operating system has only one concept of threads, which it uses to run all processes. But the .NET CLR abstracts a layer for us in which we deal with two types of threads: worker threads and I/O threads. The method ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads) shows us the number of each of these threads available.

While coding, the heavy tasks in our applications should be classified into two categories: compute-bound and I/O-bound operations. A compute-bound operation is one where the CPU does heavy computation, such as running search results or complex algorithms. I/O-bound operations are those that use the system's I/O hardware or the network, for example reading and writing a file, fetching data from a database, or querying a remote web server. Compute-bound operations should be delegated to worker threads, and I/O-bound operations should be delegated to I/O threads.

When we queue items in the ThreadPool, we are delegating items to the worker threads. If we use a worker thread to perform an I/O-bound operation, the thread remains blocked while the device driver performs that operation, and a blocked thread is a wasted resource. If we use an I/O thread for the same task instead, the calling thread delegates the task to the device driver and returns to the thread pool; when the operation completes, a thread from the pool is notified and handles the task completion. The advantage is that threads remain unblocked and free to handle other tasks, because when an I/O operation is initiated the calling thread only hands the task to the part of the OS that manages the device drivers, so there is no reason for a thread to stay blocked until the task is completed. In the .NET class library, the Asynchronous Programming Model on certain types uses I/O threads, for example BeginRead() and EndRead() on the FileStream class. As a rule of thumb, all methods named BeginXXX and EndXXX fall into this category.
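
To make the distinction concrete, here is a rough sketch (the file name and buffer size are illustrative, and this is not part of the article's project) of an I/O-bound read issued through the asynchronous programming model; the calling thread hands the work to the device driver, and the callback later completes on an I/O thread instead of tying up a blocked worker thread:

// requires: using System; using System.IO;
FileStream fs = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
                               FileShare.Read, 4096, useAsync: true);
byte[] buffer = new byte[4096];

fs.BeginRead(buffer, 0, buffer.Length, ar =>
{
    int bytesRead = fs.EndRead(ar);   // completes on an I/O completion port thread
    Console.WriteLine("Read {0} bytes asynchronously", bytesRead);
    fs.Close();
}, null);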

Summary

“With great power comes great responsibility” – ThreadPool 

 

  1. No application should ever run heavy tasks on the UI thread. There is nothing uglier than a frozen UI. Threads should be created to manage the heavy work asynchronously, using thread pools whenever possible.
  2. The UI cannot be updated directly from a non-UI or background thread. Programmers need to delegate that work to the UI thread. This is done using the Invoke method of the form in WinForms, the Dispatcher in WPF, or handled automatically when using a BackgroundWorker.
  3. Threads are expensive resources and should be treated with respect. "The more the merrier" unfortunately does not apply.
  4. Problems in our application will not go away simply by assigning a task to another thread. There is no magic happening, and we need to carefully consider our design and purpose to maximize efficiency.
  5. Creating a thread with the Thread class should be done with caution. Wherever possible, a thread pool should be used. It is also not a good idea to fiddle with the priority of a thread, as it may stop other important threads from being executed.
  6. Setting the IsBackground property to false carelessly can have a catastrophic effect. Foreground threads will not let the application terminate until their task is complete. So if the user wants to exit an application and there is a task running in the background that has been marked as a foreground thread, the application won't terminate until that task is completed.
  7. Thread synchronization techniques should be carefully implemented when multiple threads share resources in an application. Deadlocks should be avoided through careful coding; nesting of locks should always be avoided, as it may result in deadlocks.
  8. Programmers should make sure they do not end up creating more threads than required. Idle threads only add overhead and may result in an "Out of Memory" exception.
  9. I/O operations should be delegated to I/O threads rather than worker threads.

 

From: http://www.codeproject.com/Articles/212377/Multithreading-Demystified
Chinese translation: http://www.cnblogs.com/lazycoding/archive/2013/02/06/2904918.html

 

Mastering MEAN: The MEAN stack

In a 2002 book, David Weinberger described the rapidly evolving content of the web as Small Pieces Loosely Joined. That metaphor stuck with me, because it is easy to assume that the web is one giant technology stack. In reality, every website you visit is a unique combination of libraries, languages, and web frameworks.

The LAMP stack was one of the early standout collections of open source web technologies: Linux® as the operating system, Apache as the web server, MySQL as the database, and Perl (or Python, or PHP) as the programming language for generating HTML-based web pages. These technologies were not created to work together. They are independent projects that one ambitious software engineer after another pieced together. Since then, we have witnessed an explosion of web stacks. Every modern programming language seems to have a corresponding web framework (or two) that pre-assembles an assortment of technologies so that a new website can be created quickly and easily.

The MEAN stack is an emerging stack that is attracting a great deal of attention and excitement in the web community: MongoDB, Express, AngularJS, and Node.js. The MEAN stack represents a thoroughly modern approach to web development: one language runs at every tier of the application, from the client to the server to the persistence layer. This series demonstrates the end-to-end development of a MEAN web project, going well beyond simple syntax. This article offers a hands-on introduction to the stack's component technologies, including installation and setup. See the Download section to get the sample code.

About this series

For building professional websites with open source software, the MEAN (MongoDB, Express, AngularJS, and Node.js) stack is an emerging challenger to the long-popular LAMP stack. MEAN represents a major shift in architecture and mental model: from relational databases to NoSQL, and from server-side model-view-controller to client-side single-page applications. This series introduces how the MEAN stack technologies complement one another and how to use the stack to create modern, twenty-first-century, full-stack JavaScript web applications.

"In reality, every website you visit is a unique combination of libraries, languages, and web frameworks."

 

From LAMP to MEAN

MEAN is more than a simple rearrangement of initials and a technology upgrade. Changing the base platform from an operating system (Linux) to a JavaScript runtime (Node.js) makes the stack operating-system independent: Node.js runs as well on Windows® and OS X as it does on Linux.

Node.js also replaces Apache in the LAMP stack. But Node.js is far more than a simple web server. In fact, you do not deploy the finished application to a standalone web server; instead, the web server is included in the application and installed automatically as part of the MEAN stack. As a result, the deployment process is dramatically simplified, because the required web server version is explicitly defined along with the rest of the runtime dependencies.

More than just MEAN

Although this series focuses on the four major planets of the MEAN solar system, it also introduces some of the smaller (but not unimportant) satellite technologies in the MEAN stack.

Moving from a traditional database such as MySQL to a NoSQL, schemaless, document-oriented persistence store such as MongoDB represents a fundamental shift in persistence strategy. You will spend less time writing SQL and more time writing map/reduce functions in JavaScript. You will also save huge amounts of translation logic, because MongoDB speaks JavaScript Object Notation (JSON) natively. As a result, writing RESTful web services has never been easier.

But the biggest shift from LAMP to MEAN is the move from traditional server-side page generation to a client-side single-page application (SPA). With Express you can still handle server-side routing and page generation, but the emphasis is now on client-side views, and that is what AngularJS provides. The change is not simply a matter of moving model-view-controller (MVC) artifacts from the server to the client. You are also making the leap from a familiar synchronous style of programming to one that is fundamentally event-driven and asynchronous in nature. Perhaps most important, you move from a page-centric view of your application to one that is component-oriented.

The MEAN stack is not mobile-centric; AngularJS runs equally well on desktops, laptops, smartphones, tablets, and even smart TVs, but it does not treat mobile devices as second-class citizens. And testing is no longer an afterthought: with world-class testing frameworks such as MochaJS, JasmineJS, and KarmaJS, you can write deep, comprehensive test suites for your MEAN applications.

Ready to get MEAN?



Installing Node.js

You need Node.js installed to work on the sample application in this series, so if you have not installed it yet, do so now.

If you use a UNIX®-style operating system (Linux, Mac OS X, and so on), I recommend the Node Version Manager (NVM). (Otherwise, click Install on the Node.js home page, download the installer for your operating system, and accept the default options.) With NVM you can easily download Node.js and switch among versions from the command line, which helps you move seamlessly from one version of Node.js to the next, just as I move from one client project to the next.

After NVM is installed, enter the command nvm ls-remote to see which Node.js versions are available to install, as shown in Listing 1.

Listing 1. Using NVM to list the available Node.js versions
$ nvm ls-remote

v0.10.20

v0.10.21
v0.10.22
v0.10.23
v0.10.24
v0.10.25
v0.10.26
v0.10.27
v0.10.28

The nvm ls command displays the Node.js versions that are already installed locally, along with the version currently in use.

As of this writing, the Node website recommends v0.10.28 as the latest stable version. Enter the nvm install v0.10.28 command to install it locally.

After Node.js is installed (whether through NVM or a platform-specific installer), you can enter the node --version command to confirm the version in use:

$ node --version

v0.10.28


What is Node.js?

Node.js is a headless JavaScript runtime. It is literally the same JavaScript engine (named V8) that runs inside Google Chrome, except that with Node.js you can run JavaScript from the command line instead of in a browser.

Accessing your browser's developer tools

Get familiar with the developer tools in your browser of choice. I use Google Chrome throughout this series, but you can use Firefox, Safari, or even Internet Explorer.

  • In Google Chrome, click Tools > JavaScript Console.
  • In Firefox, click Tools > Web Developer > Browser Console.
  • In Safari, click Develop > Show Error Console. (If you do not see the Develop menu, click Show Develop menu in menu bar on the Advanced preferences page.)
  • In Internet Explorer, click Developer Tools > Script > Console.

I have had students scoff at the idea of running JavaScript from the command line: "What good is JavaScript without HTML to manipulate?" JavaScript came into the world inside a browser (Netscape Navigator 2.0), so those naysayers can be forgiven for their short-sightedness and naivete.

In fact, the JavaScript language itself provides no native capability for Document Object Model (DOM) manipulation or for making Ajax requests. The browser supplies the DOM API, which makes it convenient to do this kind of work with JavaScript, but outside of the browser, JavaScript does not have these capabilities.

Here is an example. Open a JavaScript console in your browser (see Accessing your browser's developer tools). Type navigator.appName. After you get a response, type navigator.appVersion. The results will look something like Figure 1.

Figure 1. Using the JavaScript navigator object in a web browser

In Figure 1, Netscape is the response to navigator.appName, and the response to navigator.appVersion is the cryptic user-agent string that experienced web developers know, but love and hate in equal measure. In Figure 1 (captured from Chrome on OS X), that string is 5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36.

Now create a file named test.js. In the file, enter the same commands, wrapping each one in a console.log() call:

console.log(navigator.appName);
console.log(navigator.appVersion);

Save the file and run it by entering node test.js, as shown in Listing 2.

Listing 2. The navigator is not defined error in Node.js
$ node test.js 

/test.js:1
ion (exports, require, module, __filename, __dirname) { console.log(navigator.
                                                                    ^
ReferenceError: navigator is not defined
    at Object.<anonymous> (/test.js:1:75)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:902:3

正如您看到的那样,navigator 在浏览器中可用,但在 Node.js 中不可用。(不好意思,让您的第一个                 Node.js 脚本失败了,但我想确保让您相信,在浏览器中运行 JavaScript 与在 Node.js 中运行它是不同的。)

根据堆栈跟踪的情况,正确的 Module 没有得到加载。(Modules 是在浏览器中运行 JavaScript 与在                 Node.js 中运行它之间的另一主要区别。我们将立刻讲述 Modules 的更多相关内容。)为了从 Node.js 获得类似的信息,请将                 test.js 的内容修改为:

console.log(process.versions)
console.log(process.arch)
console.log(process.platform)

再次输入 node test.js,可以看到类似于清单 3 中的输出。

清单 3. 在 Node.js 中使用 process 模块
$ node test.js

{ http_parser: '1.0',
  node: '0.10.28',
  v8: '3.14.5.9',
  ares: '1.9.0-DEV',
  uv: '0.10.27',
  zlib: '1.2.3',

  modules: '11',
  openssl: '1.0.1g' }
x64
darwin

在 Node.js 中成功运行第一个脚本之后,我们将接触下一个主要概念:模块。



什么是模块?

可以在 JavaScript 中创建单一功能的函数,但与在 Java、Ruby 或 Perl 中不同,无法把多个函数打包成一个能够导入、导出的内聚模块或“包”。当然,使用 <script> 元素可以包含任意 JavaScript 源代码文件,但这种由来已久的方法在两个关键方面比不上真正的模块声明。

首先,使用 <script> 元素包含的任意 JavaScript                 将被加载到全局命名空间中。使用模块可以导入的函数被封装在一个局部命名的变量中。其次,同时更为关键的是,可以使用模块显式地声明依赖关系,而使用                 <script> 元素则做不到这一点。结果,导入 Module A 时也会同时导入依赖的 Modules B                 和 C。当应用程序变得复杂时,传递依赖关系管理很快将成为一种关键需求。

CommonJS

顾名思义,CommonJS 项目定义了一种通用的模块格式(包括其他浏览器之外的 JavaScript 规范)。Node.js 属于众多非官方的                     CommonJS 实现之一。RingoJS (类似于 Node.js 的一种应用服务器,运行在 JDK 上的 Rhino/Nashorn                    JavaScript 运行时之上) 基于 CommonJS,流行的 NoSQL 持久存储 CouchDB 和 MongoDB 也是如此。

模块是备受期待的下一个 JavaScript 主要版本 (ECMAScript 6) 中的功能,但在该版本被广泛采用之前,Node.js 使用的是它自己的、基于 CommonJS 规范的模块机制。
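
为了把这种模块机制说得更具体一点,下面给出一个极简的示意(greeter.js 与 main.js 都是本文之外假设的示例文件名),演示如何用 module.exports 导出、再用 require 导入自己的模块:

// greeter.js(假设的示例模块):通过 module.exports 导出一个函数
module.exports.hello = function (name) {
    return 'Hello, ' + name + '!';
};

// main.js:用相对路径 require('./greeter') 导入上面的模块
var greeter = require('./greeter');
console.log(greeter.hello('MEAN')); // 输出 Hello, MEAN!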

使用 require 关键字可以在脚本中包含 CommonJS 模块。例如,清单 4 是对 Node.js 主页上的                 Hello World 脚本稍微进行修改后的版本。创建一个名为 example.js 的文件,并将清单 4 中的代码复制到其中。

清单 4. Node.js 中的 Hello                World
var http = require('http');
var port = 9090;
http.createServer(responseHandler).listen(port);
console.log('Server running at http://127.0.0.1:' + port + '/');

function responseHandler(req, res){
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.end('<html><body><h1>Hello World</h1></body></html>');
}

输入 node example.js 命令运行新的 Web 服务器,然后在 Web 浏览器中访问 http://127.0.0.1:9090

看一看清单 4 中的头两行。您很可能写过几百次(甚至几千次)像 var port = 9090; 这样的简单语句:它定义了一个名为 port 的变量,并把数字 9090 赋给它。第一行 (var http = require('http');) 则用于导入一个 CommonJS 模块:它引入 http 模块并将其赋给一个局部变量,而 http 所依赖的所有模块也会被这条 require 语句一并导入。

example.js 后面的代码行:

  1. 创建一个新的 HTTP 服务器。
  2. 指定一个函数来处理响应。
  3. 开始监听指定端口上进入的 HTTP 请求。

这样,只用寥寥几行 JavaScript 代码,就在 Node.js 中创建了一个简单的 Web 服务器。在本系列随后的文章中您会看到,Express 会把这个简单的例子扩展为处理更复杂的路由,并同时提供静态与动态生成的资源。
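
这里可以先给出一个前瞻性的草图(并非本文的示例代码,假设已经用 npm install express 安装了 Express,具体写法以后文为准),大致感受一下 Express 如何接管路由:

// 一个最小的 Express 示意:假设已执行 npm install express
var express = require('express');
var app = express();

// 用路由声明代替手写的 responseHandler
app.get('/', function (req, res) {
    res.send('<html><body><h1>Hello World</h1></body></html>');
});

app.listen(9090, function () {
    console.log('Server running at http://127.0.0.1:9090/');
});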

http 模块是 Node.js 安装的标准组件之一。其他标准的 Node.js 模块还支持文件                 I/O,读取来自用户的命令行输入,处理底层的 TCP 和 UDP 请求等等。访问 Node.js 文档的 Modules                 部分,查看标准模块的完整列表并了解它们的功能。
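
以标准的 fs 模块为例,下面是一个最小的文件 I/O 示意(其中 test.txt 只是假设存在的文件名):

var fs = require('fs');

// 异步读取文件内容;test.txt 是假设存在的文件
fs.readFile('test.txt', 'utf8', function (err, data) {
    if (err) {
        return console.error(err);
    }
    console.log(data);
});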

尽管模块列表内容十分丰富,但与可用的第三方模块列表相比,仍然是小巫见大巫。要访问它们,您需要熟悉另一个命令行实用工具:NPM。



什么是 NPM?

NPM 是 Node Packaged Modules 的简写。要查看包含超过 75,000 个公用第三方 Node 模块的清单,请访问 NPM 网站。在网站上搜索 yo                 模块。图 2 显示了搜索结果。

图 2. yo 模块的详细情况

显示了 yo 模块的详细信息的 NPM 搜索结果的屏幕截图

结果页面简要介绍了该模块(搭建 Yeoman 项目的 CLI                 工具),并显示它在过去一天、一周和一月内被下载的次数、编写该模块的作者、它依赖于哪些其他的模块(如果存在)等内容。最重要的是,结果页面给出了安装该模块的命令行语法。

要从命令行获取关于 yo 模块的类似信息,请输入 npm info yo                 命令。(如果您还不知道模块的官方名称,可以输入 npm search yo 来搜索名称中包含字符串                 yo 的所有模块。)npm info 命令显示模块的 package.json 文件的内容。

了解 package.json

每个 Node.js 模块都必须关联一个格式良好的 package.json 文件,因此,熟悉此文件的内容是值得的。清单 5、清单 6 和清单 7 分三部分显示了                 yo 模块的 package.json 文件的内容。

如清单 5 中所示,最前面的元素通常是 name、description,以及一个列出可用 versions 的 JSON 数组。

清单 5. package.json,第 1                 部分
$ npm info yo

{ name: 'yo',
  description: 'CLI tool for scaffolding out Yeoman projects',
  'dist-tags': { latest: '1.1.2' },
  versions: 
   [ 
     '1.0.0',
     '1.1.0',
     '1.1.1',
     '1.1.2' ],

要安装一个模块的最新版本,请输入 npm install package 命令。输入                 npm install package@version 可以安装一个特定的版本。

如清单 6 中所示,接下来将显示作者、维护者和可以直接查找源文件的 GitHub 库。

清单 6. package.json,第 2                 部分
author: 'Chrome Developer Relations',
repository: 
 { type: 'git',
   url: 'git://github.com/yeoman/yo' },
homepage: 'http://yeoman.io',
keywords: 
 [ 'front-end',
   'development',
   'dev',
   'build',
   'web',
   'tool',
   'cli',
   'scaffold',
   'stack' ],

在这个例子中,还可以看到一个指向项目主页的链接和一个相关关键字的 JSON 数组。并非所有 package.json                 文件中都会出现所有这些字段,但用户很少会抱怨与一个项目相关的元数据太多。

最后,清单 7                 中列出了附有显式版本号的依赖关系。这些版本号符合主版本.次版本.补丁版本的常用模式,被称为                     SemVer(语义版本控制)。

清单 7. package.json,第 3                 部分
engines: { node: '>=0.8.0', npm: '>=1.2.10' },
dependencies: 
 { 'yeoman-generator': '~0.16.0',
   nopt: '~2.1.1',
   lodash: '~2.4.1',
   'update-notifier': '~0.1.3',
   insight: '~0.3.0',
   'sudo-block': '~0.3.0',
   async: '~0.2.9',
   open: '0.0.4',
   chalk: '~0.4.0',
   findup: '~0.1.3',
   shelljs: '~0.2.6' },
peerDependencies: 
 { 'grunt-cli': '~0.1.7',
   bower: '>=0.9.0' },
devDependencies: 
 { grunt: '~0.4.2',
   mockery: '~1.4.0',
   'grunt-contrib-jshint': '~0.8.0',
   'grunt-contrib-watch': '~0.5.3',
   'grunt-mocha-test': '~0.8.1' },

这个 package.json 文件表明,它必须安装在 0.8.0 或更高版本的 Node.js 实例上。如果试图使用                 npm install 命令安装一个不受支持的版本,那么安装将会失败。

SemVer 的快捷语法

清单 7 中,您会注意到,很多依赖关系的版本号中都有一个波浪号 (~)。以 ~1.0.0 为例,它相当于 1.0.x(这也是有效语法),意思是“主版本必须是 1,次版本必须是 0,补丁版本则取能找到的最新版”。SemVer 中的这种约定意味着:补丁版本绝不会对 API 做出重大修改(通常只是对现有功能的缺陷修复),而次版本会在不破坏现有功能的情况下引入额外的功能(比如新的函数调用)。

除了平台要求之外,这个 package.json 文件还提供几个依赖关系列表:

  • dependencies 部分列出了运行时的依赖关系。
  • devDependencies 部分列出了开发过程中需要的模块。
  • peerDependencies 部分支持作者定义项目之间的“对等”关系。这种功能通常用于指定基础项目与其插件之间的关系;在这个例子中,它指出了与 Yo 一起构成 Yeoman 项目的另外两个项目(Grunt 与 Bower)。

如果在不指定模块名的情况下输入 npm install 命令,那么 npm 会访问当前目录中的                 package.json 文件,并安装我刚刚讨论过的三部分内容中列出的所有依赖关系。

要装好一个能正常工作的 MEAN 堆栈,下一步是安装 Yeoman 与相应的 Yeoman-MEAN 生成器。



安装 Yeoman

作为一名 Java 开发人员,我无法想象在没有诸如 Ant 或 Maven 这样的编译系统的情况下如何启动一个新项目。类似地,Groovy 和                 Grails 开发人员依靠的是 Gant(Ant 的一种 Groovy 实现)或                 Gradle。这些工具可以搭建起一个新的目录结构,动态下载依赖关系,并准备好将项目发布。

在纯粹的 Web 开发环境中,Yeoman 可以满足这种需要。Yeoman 是三种 Node.js 工具的集合,包括用于搭建的纯 JavaScript                 工具 Yo,管理客户端依赖关系的 Bower,以及准备项目发布的 Grunt。通过分析 清单 7                 可以得出这样的结论:安装 Yo 时也会安装它对等的 Grunt 和 Bower,这要感谢 package.json 中的                 peerDependencies 部分。

通常,输入 npm install yo --save 命令可以安装 yo 模块并更新                 package.json 文件中的 dependencies                 部分。(npm install yo --save-dev 用于更新                 devDependencies 部分。)但这三个对等的 Yeoman                 模块算不上是特定于项目的模块,它们是命令行实用工具,而非运行时依赖关系。要全局安装一个 NPM 包,需要在 install                 命令后增加一个 -g 标志。

在系统上安装 Yeoman:

npm install -g yo

在完成包安装后,输入 yo --version 命令来验证它已经在运行中。

Yeoman 与基础架构的所有余下部分都准备就绪后,便可以开始安装 MEAN 堆栈了。



安装 MeanJS

您可以手动安装 MEAN 堆栈的每一部分,但需要十分小心。谢天谢地,Yeoman 通过其 generators(生成器)                提供了一种更轻松的安装方式。

Yeoman 生成器就是引导一个新 Web                 项目更轻松的方式。该生成器提供了基础包及其所有依赖关系。此外,它通常还会包含一个工作的编译脚本及其所有相关插件。通常,该生成器还包含一个示例应用程序,包括测试在内。

Yeoman 团队构建和维护了几个“官方的”Yeoman 生成器,而社区驱动的 Yeoman 生成器(超过 800 个)在数量上远远超过了官方生成器。

您将用于引导第一个 MEAN 应用程序的社区生成器被称为 MEAN.JS,这也在意料之中。

在 MEAN.JS 主页上,单击 Yo Generator 菜单选项或者直接访问 Generator 页面,图 3                 中显示了其中的一部分。

图 3. MEAN.JS Yeoman 生成器

MEAN.JS Yeoman 生成器页面的屏幕截图

该页面上的说明指出要首先安装 Yeoman,这一点您已经完成。下一步是全局安装 MEAN.JS 生成器:

npm install -g generator-meanjs

生成器准备就绪后,便可以开始创建您的第一个 MEAN 应用程序了。创建一个名为 test 的目录,使用 cd 命令进入它,然后输入 yo meanjs 命令生成应用程序。按照清单 8 中所示回答最后两个问题。(前四个问题您可以给出自己的答案。)

清单 8. 使用 MEAN.JS Yeoman                generator
$ mkdir test
$ cd test
$ yo meanjs

     _-----_
    |       |
    |--(o)--|   .--------------------------.
   `---------´  |    Welcome to Yeoman,    |
    ( _´U`_ )   |   ladies and gentlemen!  |
    /___A___\   '__________________________'
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `

You're using the official MEAN.JS generator.
[?] What would you like to call your application? 
Test
[?] How would you describe your application? 
Full-Stack JavaScript with MongoDB, Express, AngularJS, and Node.js
[?] How would you describe your application in comma separated key words?
MongoDB, Express, AngularJS, Node.js
[?] What is your company/author name? 
Scott Davis
[?] Would you like to generate the article example CRUD module? 
Yes
[?] Which AngularJS modules would you like to include? 
ngCookies, ngAnimate, ngTouch, ngSanitize

在回答最后一个问题后,您会看到一连串输出,这是 NPM 在下载所有服务器端的依赖关系(包括 Express)。NPM 完成后,Bower 会接着下载所有客户端的依赖关系(包括 AngularJS、Bootstrap 和 jQuery)。

至此,您已经安装了 EAN 堆栈(Express、AngularJS 和 Node.js) — 目前只缺少 M                (MongoDB)。如果现在输入 grunt 命令,在没有安装 MongoDB 的情况下启动应用程序,您会看到类似于清单                 9 中的一条错误消息。

清单 9. 试图在没有 MongoDB 的情况下启动                 MeanJS
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: failed to connect to [localhost:27017]
    at null.<anonymous> 
(/test/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:546:74)

[nodemon] app crashed - waiting for file changes before starting...

如果启动应用程序时看到这条错误消息,请按下 Ctrl+C 键停止应用程序。

为了使用新的 MEAN 应用程序,现在需要安装 MongoDB。



安装 MongoDB

MongoDB 是一种 NoSQL 持久性存储。它不是使用 JavaScript 编写的,也不是 NPM 包。必须单独安装它才能完成 MEAN                 堆栈的安装。

访问 MongoDB 主页,下载平台特定的安装程序,并在安装                 MongoDB 时接受所有默认选项。

安装完成时,输入 mongod 命令启动 MongoDB 守护程序。

MeanJS Yeoman 生成器已经安装了一个名为 Mongoose 的                 MongoDB 客户端模块,您可以检查 package.json 文件来确认这一点。我将在后续的文章中详细介绍 MongoDB 和                 Mongoose。

安装并运行 MongoDB 后,最终您可以运行您的 MEAN 应用程序并观察使用效果了。



运行 MEAN 应用程序

要启动新安装的 MEAN 应用程序,请确保您位于之前创建、并在其中运行过 MeanJS Yeoman 生成器的 test 目录中。输入 grunt 命令后,输出内容应该如清单 10 中所示。

清单 10. 启动 MEAN.JS                 应用程序
$ grunt

Running "jshint:all" (jshint) task
>> 46 files lint free.

Running "csslint:all" (csslint) task
>> 2 files lint free.

Running "concurrent:default" (concurrent) task
Running "watch" task
Waiting...
Running "nodemon:dev" (nodemon) task
[nodemon] v1.0.20
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: app/views/**/*.* gruntfile.js server.js config/**/*.js app/**/*.js
[nodemon] starting `node --debug server.js`
debugger listening on port 5858

 NODE_ENV is not defined! Using default development environment

MEAN.JS application started on port 3000

jshintcsslint                     模块(均由生成器进行安装)可以确保源代码在句法和语体上是正确的。nodemon                 包监控文件系统中的代码修改情况,并在检测到有的情况下自动重启服务器,当开发人员需要快速而频繁地修改代码基础时,这可以极大地提高他们的效率。(nodemon                 包只在开发阶段运行,要监测生产环境的变化,必须重新部署应用程序并重启 Node.js。)

按照控制台输出的提示,访问 http://localhost:3000                 并运行您的新 MEAN 应用程序。

图 4 显示了 MEAN.JS 示例应用程序的主页。

图 4. MEAN.JS 示例应用程序的主页

MEAN.JS 主页的屏幕截图

在菜单栏中单击 Signup 创建一个新的用户账号。现在填写 Sign-up 页面上的所有字段(如图 5                 中所示),然后单击 Sign up。在后续的指南中,您可以通过 Facebook、Twitter 等启用                 OAuth 登录

图 5. MEAN.JS 示例应用程序的 Sign-up 页面

MEAN.JS 示例应用程序的 Sign-up 页面的屏幕截图

现在,您的本地 MongoDB 实例中已经保存了一组用户凭证,您可以开始撰写新的文章了。单击 Articles 菜单选项(只有在登录之后才会显示),并创建一些示例文章。图 6 显示了 Articles 页面。

图 6. MeanJS 的文章页面

MeanJS 文章页面的屏幕截图

您已经创建了自己的第一个 MEAN 应用程序。欢迎加入!

结束语

在这篇指南中,您完成了相当多的内容:安装了 Node.js 并编写了第一个 Node.js 脚本;学习了模块并使用 NPM 安装了几个第三方模块;安装了 Yeoman,把它作为可靠的 Web 开发平台,其中包含一个搭建实用工具 (Yo)、一个编译脚本 (Grunt),以及一个管理客户端依赖关系的实用工具 (Bower);安装了 MeanJS Yeoman 生成器,并用它创建了第一个 MEAN 应用程序;安装了 MongoDB 与 Node.js 客户端库 Mongoose;最后运行了您的首个 MEAN 应用程序。

下一次,我们会详细了解示例应用程序的源代码,从而了解 MEAN 太阳系中的所有四颗行星 (MongoDB、Express、AngularJS 和                 Node.js)是如何相互作用的。



下载

样例代码:wa-mean1src.zip(1.38MB)

 

参考资料

学习

  • “使用 Node.js、Express、AngularJS 和 MongoDB 构建一个实时投票应用程序”(developerWorks,2014 年 6 月):剖析一个在 IBM Bluemix™ 上部署的 MEAN 开发项目。
  • “针对 Java 开发人员的 Node.js”(developerWorks,2011 年 11 月):介绍 Node.js 并分析其事件驱动的并发性为何能引发用户广泛兴趣,甚至在死硬派 Java 开发人员中也是如此。
  • “Node.js 起步”(developerWorks,2014 年 1 月):查看这个时长 9 分钟的演示,其中快速介绍了 Node.js 和 Express。
  • “MongoDB:一种具有(所有正确的)RDBMS 行为的 NoSQL 数据库”(developerWorks,2010 年 9 月):了解 MongoDB 的自定义 API、交互式 shell,以及对 RDBMS 样式的动态查询与快速简便的 MapReduce 计算的支持。
  • “开始使用 JavaScript 语言”(developerWorks,2011 年 4 月和 8 月):在这篇由两部分组成的文章中学习 JavaScript 的基础知识。
  • “针对 Java 开发人员的 JavaScript”(developerWorks,2011 年 4 月):分析 JavaScript 为何是现代 Java 开发人员的重要工具,并开始学习 JavaScript 变量、类型、函数和类。
  • “LAMP 技术简介”(developerWorks,2005 年 5 月):将 MEAN 与其前辈堆栈进行比较。
  • “Mastering Grails”(developerWorks,2008-2009 年):查阅 Scott Davis 撰写的关于 Grails(基于 Groovy 的 Web 开发框架)的系列文章。
  • 查看 HTML5 专题,了解更多和 HTML5 相关的知识和动向。
  • developerWorks Web development                专区:通过专门关于 Web 技术的文章和教程,扩展您在网站开发方面的技能。
  • developerWorks Ajax 资源中心:这是有关 Ajax 编程模型信息的一站式中心,包括很多文档、教程、论坛、blog、wiki 和新闻。任何 Ajax 的新信息都能在这里找到。


from:http://www.ibm.com/developerworks/cn/web/wa-mean1/index.html?ca=drs

函数式编程

当我们说起函数式编程来说,我们会看到如下函数式编程的长相:

  • 函数式编程的三大特性:
    • immutable data 不可变数据:像Clojure一样,默认上变量是不可变的,如果你要改变变量,你需要把变量copy出去修改。这样一来,可以让你的程序少很多Bug。因为,程序中的状态不好维护,在并发的时候更不好维护。(你可以试想一下如果你的程序有个复杂的状态,当以后别人改你代码的时候,是很容易出bug的,在并行中这样的问题就更多了)
    • first class functions(函数是第一等公民):这个技术可以让你的函数就像变量一样来使用。也就是说,你的函数可以像变量一样被创建、修改,并当成变量一样传递、返回,或是在函数中嵌套函数。这个有点像 Javascript 的 Prototype(参看Javascript的面向对象编程)。
    • 尾递归优化:我们知道递归的害处,那就是如果递归很深的话,stack受不了,并会导致性能大幅度下降。所以,我们使用尾递归优化技术——每次递归时都会重用stack,这样一来能够提升性能,当然,这需要语言或编译器的支持。Python就不支持。
  • 函数式编程的几个技术
    • map & reduce:这个技术不用多说了,函数式编程最常见的技术就是对一个集合做Map和Reduce操作。这比起过程式的语言来说,在代码上要更容易阅读。(传统过程式的语言需要使用for/while循环,然后在各种变量中把数据倒过来倒过去的)这个很像C++中的STL中的foreach,find_if,count_if之流的函数的玩法。
    • pipeline:这个技术的意思是,把函数实例成一个一个的action,然后,把一组action放到一个数组或是列表中,然后把数据传给这个action list,数据就像一个pipeline一样顺序地被各个函数所操作,最终得到我们想要的结果。
    • recursing 递归:递归最大的好处就是简化代码,它可以把一个复杂的问题用很简单的代码描述出来。注意:递归的精髓是描述问题,而这正是函数式编程的精髓。
    • currying:把一个函数的多个参数分解成多个函数,然后把函数多层封装起来,每层函数都返回一个函数去接收下一个参数。这样,可以简化函数的参数。在C++中,这个很像STL中的bind1st或是bind2nd。
    • higher order function 高阶函数:所谓高阶函数就是函数当参数,把传入的函数做一个封装,然后返回这个封装函数。现象上就是函数传进传出,就像面向对象里对象满天飞一样。

 

  • 还有函数式的一些好处
    • parallelization 并行:所谓并行的意思就是在并行环境下,各个线程之间不需要同步或互斥。
    • lazy evaluation 惰性求值:这个需要编译器的支持。表达式不是在它被绑定到变量之后就立即求值,而是在该值真正被取用的时候才求值。也就是说,对于 x := expression;(把一个表达式的结果赋值给一个变量)这样的语句,并不会立刻计算这个表达式并把结果放到 x 中;只有后面的表达式用到了 x 的值,才会触发求值,而后面表达式自身的求值同样可以被延迟,最终为了得到外界需要的结果,才去计算这棵不断增长的依赖树。
    • determinism 确定性:所谓确定性的意思就是像数学那样 f(x) = y ,这个函数无论在什么场景下,都会得到同样的结果,这个我们称之为函数的确定性。而不是像程序中的很多函数那样,同一个参数,却会在不同的场景下计算出不同的结果。所谓不同的场景的意思就是我们的函数会根据一些运行中的状态信息的不同而发生变化。

上面的那些东西太抽象了,还是让我们来循序渐进地看一些例子吧。

我们先用一个最简单的例子来说明一下什么是函数式编程。

先看一个非函数式的例子:

int cnt;
void increment(){
    cnt++;
}

那么,函数式的应该怎么写呢?

int increment(int cnt){
    return cnt+1;
}

你可能会觉得这个例子太普通了。是的,这个例子就是函数式编程的准则:不依赖于外部的数据,而且也不改变外部数据的值,而是返回一个新的值给你

我们再来看一个简单例子:

def inc(x):
    def incx(y):
        return x+y
    return incx

inc2 = inc(2)
inc5 = inc(5)

print inc2(5) # 输出 7
print inc5(5) # 输出 10

我们可以看到上面那个例子inc()函数返回了另一个函数incx(),于是我们可以用inc()函数来构造各种版本的inc函数,比如:inc2()和inc5()。这个技术其实就是上面所说的Currying技术。从这个技术上,你可能体会到函数式编程的理念:把函数当成变量来用,关注于描述问题而不是怎么实现,这样可以让代码更易读。
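
同样的 Currying 思路,用本文档的主语言 JavaScript 写出来大致是下面这个样子(仅作示意):

function inc(x) {
    return function (y) {   // 返回的内层函数捕获了参数 x
        return x + y;
    };
}

var inc2 = inc(2);
var inc5 = inc(5);
console.log(inc2(5)); // 输出 7
console.log(inc5(5)); // 输出 10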

Map & Reduce

在函数式编程中,我们不应该用循环迭代的方式,我们应该用更为高级的方法,如下所示的Python代码

name_len = map(len, ["hao", "chen", "coolshell"])
print name_len # 输出 [3, 4, 9]

你可以看到这样的代码很易读,因为,这样的代码是在描述要干什么,而不是怎么干

我们再来看一个Python代码的例子:

def toUpper(item):
    return item.upper()

upper_name = map(toUpper, ["hao", "chen", "coolshell"])
print upper_name # 输出 ['HAO', 'CHEN', 'COOLSHELL']

顺便说一下,上面的例子是不是和我们 STL 的 transform 有些像?

#include <iostream>
#include <algorithm>
#include <string>
using namespace std;

int main() {
  string s="hello";
  string out;
  transform(s.begin(), s.end(), back_inserter(out), ::toupper);
  cout << out << endl;
  // 输出:HELLO
}

在上面 Python 的那个例子中我们可以看到,我们定义了一个函数 toUpper,这个函数没有改变传进来的值,只是对传进来的值做个简单的操作,然后返回。接着,我们把它用在 map 函数中,就可以很清楚地描述出我们想要干什么,而不用去读一段写在循环里的实现代码,最后才发现原来是这个或那个意思。下面,我们看看描述实现方法的过程式编程是怎么玩的(看上去是不是不如函数式的清晰?):

upname = ['HAO', 'CHEN', 'COOLSHELL']
lowname = []

for i in range(len(upname)):
    lowname.append( upname[i].lower() )

对于map我们别忘了lambda表达式:你可以简单地理解为这是一个inline的匿名函数。下面的lambda表达式相当于:def func(x): return x*x

squares = map(lambda x: x * x, range(9))
print squares # 输出 [0, 1, 4, 9, 16, 25, 36, 49, 64]

我们再来看看reduce怎么玩?(下面的lambda表达式中有两个参数,也就是说每次从列表中取两个值,计算结果后把这个值再放回去,下面的表达式相当于:((((1+2)+3)+4)+5) )

print reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) # 输出 15

Python 中除了 map 和 reduce 外,还有一些别的如 filter, find, all, any 之类的函数做辅助(其它函数式的语言也有),可以让你的代码更简洁、更易读。我们再来看一个比较复杂的例子:

计算数组中正数的平均值
num = [2, -5, 9, 7, -2, 5, 3, 1, 0, -3, 8]
positive_num_cnt = 0
positive_num_sum = 0

for i in range(len(num)):
    if num[i] > 0:
        positive_num_cnt += 1
        positive_num_sum += num[i]

if positive_num_cnt > 0:
    average = positive_num_sum / positive_num_cnt

print average # 输出 5

如果用函数式编程,这个例子可以写成这样:

positive_num = filter(lambda x: x>0, num)
average = reduce(lambda x,y: x+y, positive_num) / len( positive_num )

C++11 的玩法:

#include <iostream>
#include <algorithm>
#include <iterator>
#include <numeric>
#include <vector>
using namespace std;

int main() {
  vector<int> num {2, -5, 9, 7, -2, 5, 3, 1, 0, -3, 8};
  vector<int> p_num;
  copy_if(num.begin(), num.end(), back_inserter(p_num), [](int i){ return (i>0);} );
  int average = accumulate(p_num.begin(), p_num.end(), 0) / p_num.size();
  cout << "average: " << average << endl;  // 输出:average: 5
}
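
作为补充(非原文内容),用 JavaScript 数组自带的 filter/reduce 也能写出等价的版本:

var num = [2, -5, 9, 7, -2, 5, 3, 1, 0, -3, 8];

// 先过滤出正数,再求和、取平均
var positive_num = num.filter(function (x) { return x > 0; });
var average = positive_num.reduce(function (x, y) { return x + y; }, 0) / positive_num.length;

console.log(average); // 输出 5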

我们可以看到,函数式编程有如下好处:

1)代码更简单了。
2)数据集,操作,返回值都放到了一起。
3)你在读代码的时候,没有了循环体,于是就可以少了些临时变量,以及变量倒来倒去的逻辑。
4)你的代码变成了在描述你要干什么,而不是怎么去干。

最后,我们来看一下Map/Reduce这样的函数是怎么来实现的(下面是Javascript代码)

map函数
var map = function (mappingFunction, list) {
  var result = [];
  forEach(list, function (item) {
    result.push(mappingFunction(item));
  });
  return result;
};
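
上面的实现依赖一个名为 forEach 的辅助遍历函数(原文假设它已存在)。下面补充一个用法示意,并给出一个最小的 forEach 假设实现:

// forEach 是上面 map 实现所依赖的辅助函数,这里给出一个最小的假设实现
function forEach(list, callback) {
    for (var i = 0; i < list.length; i++) {
        callback(list[i]);
    }
}

console.log(map(function (x) { return x * x; }, [1, 2, 3])); // 输出 [1, 4, 9]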

下面是 reduce 函数的 javascript 实现(谢谢 @下雨在家 修正了我原来的简单版本)

reduce函数
function reduce(actionFunction, list, initial){
    var accumulate;
    var temp;
    if(initial){
        accumulate = initial;
    }
    else{
        accumulate = list.shift();
    }
    temp = list.shift();
    while(temp){
        accumulate = actionFunction(accumulate, temp);
        temp = list.shift();
    }
    return accumulate;
}
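
补充一个用法示意(注意这个实现会用 shift 消耗掉传入的数组,并且在遇到假值元素时会提前停止):

console.log(reduce(function (a, b) { return a + b; }, [1, 2, 3, 4, 5]));     // 输出 15
console.log(reduce(function (a, b) { return a + b; }, [1, 2, 3, 4, 5], 10)); // 输出 25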

Declarative Programming vs Imperative Programming

前面提到过多次的函数式编程关注的是:describe what to do, rather than how to do it. 于是,我们把以前的过程式的编程范式叫做 Imperative Programming – 指令式编程,而把函数式的这种范式叫做 Declarative Programming – 声明式编程。

下面我们看一下相关的示例(本示例来自这篇文章 )。

比如,我们有 3 辆车比赛。简单起见,每一轮中每辆车都有 70% 的概率往前走一步,一共比 5 轮,我们打出每一轮后这 3 辆车的前行状态。

对于Imperative Programming来说,代码如下(Python):

from random import random

time = 5
car_positions = [1, 1, 1]

while time:
    # decrease time
    time -= 1

    print ''
    for i in range(len(car_positions)):
        # move car
        if random() > 0.3:
            car_positions[i] += 1

        # draw car
        print '-' * car_positions[i]

我们可以把这个两重循环变成一些函数模块,这样有利于我们更容易地阅读代码:

from random import random

def move_cars():
    for i, _ in enumerate(car_positions):
        if random() > 0.3:
            car_positions[i] += 1

def draw_car(car_position):
    print '-' * car_position

def run_step_of_race():
    global time
    time -= 1
    move_cars()

def draw():
    print ''
    for car_position in car_positions:
        draw_car(car_position)

time = 5
car_positions = [1, 1, 1]

while time:
    run_step_of_race()
    draw()

上面的代码,我们可以从主循环开始读,可以很清楚地看到程序的主干,因为我们把程序的逻辑分成了几个函数。这样一来,代码逻辑也被拆成了几个小碎片,于是我们读代码时要考虑的上下文就少了很多,阅读起来也更容易。不像第一个示例,如果没有注释和说明,你还是需要花些时间理解一下。而把代码逻辑封装成函数后,我们就相当于给每个相对独立的程序逻辑取了个名字,于是代码成了自解释的。

但是,你会发现,封装成函数后,这些函数都要依赖共享的变量来同步其状态。于是,我们在读代码的过程中,每当进入一个函数,一旦读到它访问了某个外部变量,就得马上去查看这个变量的上下文,然后还要在大脑里推演这个变量的状态,才能知道程序的真正逻辑。也就是说,这些函数之间必须知道其它函数是怎么修改它们之间的共享变量的,所以,这些函数是有状态的。

我们知道,有状态并不是一件很好的事情,无论是对代码重用,还是对代码的并行来说,都是有副作用的。因此,我们要想个方法把这些状态搞掉,于是出现了我们的 Functional Programming 的编程范式。下面,我们来看看函数式的方式应该怎么写?

from random import random

def move_cars(car_positions):
    return map(lambda x: x + 1 if random() > 0.3 else x,
               car_positions)

def output_car(car_position):
    return '-' * car_position

def run_step_of_race(state):
    return {'time': state['time'] - 1,
            'car_positions': move_cars(state['car_positions'])}

def draw(state):
    print ''
    print '\n'.join(map(output_car, state['car_positions']))

def race(state):
    draw(state)
    if state['time']:
        race(run_step_of_race(state))

race({'time': 5,
      'car_positions': [1, 1, 1]})

上面的代码依然把程序的逻辑分成了函数,不过这些函数都是functional的。因为它们有三个症状:

1)它们之间没有共享的变量。
2)函数间通过参数和返回值来传递数据。
3)在函数里没有临时变量。

我们还可以看到,for循环被递归取代了(见race函数)。递归是函数式编程中常用到的技术,正如前面所说的,递归的本质就是描述问题是什么。

Pipeline

pipeline 管道借鉴于 Unix Shell 的管道操作:把若干个命令串起来,前面命令的输出成为后面命令的输入,如此完成一个流式计算。(注:管道绝对是一个伟大的发明,它的设计哲学就是 KISS:让每个功能只做一件事,并把这件事做到极致。软件或程序的拼装因此变得更为简单和直观。这个设计理念影响非常深远,包括今天的 Web Service、云计算,以及大数据的流式计算等等。)

比如,我们如下的shell命令:

ps auwwx | awk '{print $2}' | sort -n | xargs echo

如果我们抽象成函数式的语言,就像下面这样:

xargs(  echo, sort(n, awk('print $2', ps(auwwx)))  )

也可以类似下面这个样子:

pids = for_each(result, [ps_auwwx, awk_p2, sort_n, xargs_echo])

好了,让我们来看看函数式编程的Pipeline怎么玩?

我们先来看一个如下的程序,这个程序的process()有三个步骤:

1)找出偶数。
2)乘以 3。
3)转成字符串返回。

def process(num):
    # filter out non-evens
    if num % 2 != 0:
        return
    num = num * 3
    num = 'The Number: %s' % num
    return num

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

for num in nums:
    print process(num)

# 输出:
# None
# The Number: 6
# None
# The Number: 12
# None
# The Number: 18
# None
# The Number: 24
# None
# The Number: 30

我们可以看到,输出的并不够完美,另外,代码阅读上如果没有注释,你也会比较晕。下面,我们来看看函数式的pipeline(第一种方式)应该怎么写?

def even_filter(nums):
    for num in nums:
        if num % 2 == 0:
            yield num

def multiply_by_three(nums):
    for num in nums:
        yield num * 3

def convert_to_string(nums):
    for num in nums:
        yield 'The Number: %s' % num

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
pipeline = convert_to_string(multiply_by_three(even_filter(nums)))

for num in pipeline:
    print num

# 输出:
# The Number: 6
# The Number: 12
# The Number: 18
# The Number: 24
# The Number: 30

我们动用了 Python 的关键字 yield,它的作用是返回一个 Generator(生成器)。yield 是一个类似 return 的关键字,只是包含它的函数返回的是一个生成器。所谓生成器,意思是 yield 返回的是一个可迭代对象,函数体并没有真正执行;只有当返回的迭代对象被真正迭代时,yield 函数才会真正地运行,运行到 yield 语句时就会停住,等待下一次迭代。(这是个比较诡异的关键字)这就是 lazy evaluation(惰性求值)。
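
顺带补充一点(非原文内容):JavaScript(ES6)的 generator,即 function*,也可以做同样的惰性求值式 pipeline,写法大致如下:

function* evenFilter(nums) {
    for (var num of nums) {
        if (num % 2 === 0) yield num;
    }
}
function* multiplyByThree(nums) {
    for (var num of nums) yield num * 3;
}
function* convertToString(nums) {
    for (var num of nums) yield 'The Number: ' + num;
}

var pipeline = convertToString(multiplyByThree(evenFilter([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])));
for (var s of pipeline) {
    console.log(s); // 依次输出 The Number: 6 / 12 / 18 / 24 / 30
}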

好了,根据前面的原则——“使用Map & Reduce,不要使用循环”,那我们用比较纯朴的Map & Reduce吧。

def even_filter(nums):
    return filter(lambda x: x%2==0, nums)

def multiply_by_three(nums):
    return map(lambda x: x*3, nums)

def convert_to_string(nums):
    return map(lambda x: 'The Number: %s' % x,  nums)

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
pipeline = convert_to_string(
               multiply_by_three(
                   even_filter(nums)
               )
            )

for num in pipeline:
    print num

但是这样的代码需要一层层地嵌套调用函数,这有点不爽。如果我们能像下面这个样子写就好了(第二种方式)。

pipeline_func(nums, [even_filter,
                     multiply_by_three,
                     convert_to_string])

那么,pipeline_func 实现如下:

def pipeline_func(data, fns):
    return reduce(lambda a, x: x(a),
                  fns,
                  data)
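
同样的 pipeline_func 思路,在 JavaScript 里可以用数组的 reduce 来实现(示意,其中各函数名只是假设的对应版本):

function pipelineFunc(data, fns) {
    return fns.reduce(function (acc, fn) { return fn(acc); }, data);
}

// 用法示意:依次执行 过滤偶数 -> 乘以 3 -> 转成字符串
function evenFilter(nums)      { return nums.filter(function (x) { return x % 2 === 0; }); }
function multiplyByThree(nums) { return nums.map(function (x) { return x * 3; }); }
function convertToString(nums) { return nums.map(function (x) { return 'The Number: ' + x; }); }

console.log(pipelineFunc([1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                         [evenFilter, multiplyByThree, convertToString]));
// 输出 ['The Number: 6', 'The Number: 12', 'The Number: 18', 'The Number: 24', 'The Number: 30']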

好了,在读过这么多的程序后,你可以回头看一下这篇文章的开头对函数式编程的描述,可能你就更有感觉了。

最后,我希望这篇浅显易懂的文章能让你感受到函数式编程的思想。就像 OO 编程、泛型编程、过程式编程一样,我们不用太纠结自己的程序是不是 OO 的、是不是 functional 的,重要的是品味其中的味道。


补充:评论中 redraiment 的这条评论大家也可以读一读。

感谢网友 S142857 提供的 shell 风格的 Python pipeline:

class Pipe(object):
    def __init__(self, func):
        self.func = func

    def __ror__(self, other):
        def generator():
            for obj in other:
                if obj is not None:
                    yield self.func(obj)
        return generator()

@Pipe
def even_filter(num):
    return num if num % 2 == 0 else None

@Pipe
def multiply_by_three(num):
    return num*3

@Pipe
def convert_to_string(num):
    return 'The Number: %s' % num

@Pipe
def echo(item):
    print item
    return item

def force(sqs):
    for item in sqs: pass

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

force(nums | even_filter | multiply_by_three | convert_to_string | echo)

(全文完)
from:http://coolshell.cn/articles/10822.html 酷 壳 – CoolShell.cn

Private Members in JavaScript

JavaScript is the world’s most misunderstood programming language. Some believe that it lacks the property of information hiding because objects cannot have private instance variables and methods. But this is a misunderstanding. JavaScript objects can have private members. Here’s how.

Objects

JavaScript is fundamentally about objects. Arrays are objects. Functions are objects. Objects are objects. So what are objects? Objects are collections of name-value pairs. The names are strings, and the values are strings, numbers, booleans, and objects (including arrays and functions). Objects are usually implemented as hashtables so values can be retrieved quickly.

If a value is a function, we can consider it a method. When a method of an object is invoked, the this variable is set to the object. The method can then access the instance variables through the this variable.

Objects can be produced by constructors, which are functions which initialize objects. Constructors provide the features that classes provide in other languages, including static variables and methods.

Public

The members of an object are all public members. Any function can access, modify, or delete those members, or add new members. There are two main ways of putting members in a new object:

In the constructor

This technique is usually used to initialize public instance variables. The constructor’s this variable is used to add members to the object.

function Container(param) {
    this.member = param;
}

So, if we construct a new object

var myContainer = new Container('abc');

then myContainer.member contains 'abc'.

In the prototype

This technique is usually used to add public methods. When a member is sought and it isn’t found in the object itself, then it is taken from the object’s constructor’s prototype member. The prototype mechanism is used for inheritance. It also conserves memory. To add a method to all objects made by a constructor, add a function to the constructor’s prototype:

Container.prototype.stamp = function (string) {
    return this.member + string;
}

So, we can invoke the method

myContainer.stamp('def')

which produces 'abcdef'.

Private

Private members are made by the constructor. Ordinary vars and parameters of the constructor become the private members.

function Container(param) {
    this.member = param;
    var secret = 3;
    var that = this;
}

This constructor makes three private instance variables: param, secret, and that. They are attached to the object, but they are not accessible to the outside, nor are they accessible to the object’s own public methods. They are accessible to private methods. Private methods are inner functions of the constructor.

function Container(param) {

    function dec() {
        if (secret > 0) {
            secret -= 1;
            return true;
        } else {
            return false;
        }
    }

    this.member = param;
    var secret = 3;
    var that = this;
}

The private method dec examines the secret instance variable. If it is greater than zero, it decrements secret and returns true. Otherwise it returns false. It can be used to make this object limited to three uses.

By convention, we make a private that variable. This is used to make the object available to the private methods. This is a workaround for an error in the ECMAScript Language Specification which causes this to be set incorrectly for inner functions.

Private methods cannot be called by public methods. To make private methods useful, we need to introduce a privileged method.

Privileged

A privileged method is able to access the private variables and methods, and is itself accessible to the public methods and the outside. It is possible to delete or replace a privileged method, but it is not possible to alter it, or to force it to give up its secrets.

Privileged methods are assigned with this within the constructor.

function Container(param) {

    function dec() {
        if (secret > 0) {
            secret -= 1;
            return true;
        } else {
            return false;
        }
    }

    this.member = param;
    var secret = 3;
    var that = this;

    this.service = function () {
        return dec() ? that.member : null;
    };
}

service is a privileged method. Calling myContainer.service() will return 'abc' the first three times it is called. After that, it will return null. service calls the private dec method which accesses the private secret variable. service is available to other objects and methods, but it does not allow direct access to the private members.
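
A short usage sketch (not part of the original article) of the Container defined above, showing the three-use limit and the invisibility of the private members:

var myContainer = new Container('abc');

console.log(myContainer.service()); // 'abc'
console.log(myContainer.service()); // 'abc'
console.log(myContainer.service()); // 'abc'
console.log(myContainer.service()); // null (secret has been decremented to zero)

console.log(myContainer.secret);    // undefined: private members are not visible from outside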

Closures

This pattern of public, private, and privileged members is possible because JavaScript has closures. What this means is that an inner function always has access to the vars and parameters of its outer function, even after the outer function has returned. This is an extremely powerful property of the language. There is no book currently available on JavaScript programming that shows how to exploit it. Most don’t even mention it.

Private and privileged members can only be made when an object is constructed. Public members can be added at any time.

Patterns

Public

function Constructor() {
    this.membername = value;
}

Constructor.prototype.membername = value;

Private

function Constructor() {
    var that = this;
    var membername = value;
    function membername() {}
}

Note: The function statement

function membername() {}

is shorthand for

var membername = function membername()    {};

Privileged

function Constructor() {
    this.membername = function () {};
}

from: http://www.crockford.com/javascript/private.html