Category Archives: Multithreading

Java concurrency (multi-threading)

This article describes how to do concurrent programming with Java. It covers the concepts of parallel programming, immutability, threads, the executor framework (thread pools), futures, callables, CompletableFuture and the fork-join framework.

1. Concurrency

1.1. What is concurrency?

Concurrency is the ability to run several programs or several parts of a program in parallel. If a time-consuming task can be performed asynchronously or in parallel, this improves the throughput and the interactivity of the program.

A modern computer has several CPUs or several cores within one CPU. The ability to leverage these multiple cores can be the key to a successful high-volume application.

1.2. Process vs. threads

A process runs independently of and isolated from other processes. It cannot directly access shared data in other processes. The resources of the process, e.g. memory and CPU time, are allocated to it by the operating system.

A thread is a so-called lightweight process. It has its own call stack, but can access shared data of other threads in the same process. Every thread has its own memory cache; if a thread reads shared data, it stores this data in its own memory cache and can later re-read it from there.

A Java application runs by default in one process. Within a Java application you work with several threads to achieve parallel processing or asynchronous behavior.

2. Improvements and issues with concurrency

2.1. Limits of concurrency gains

Concurrency promises to perform certain tasks faster, as these tasks can be divided into subtasks and these subtasks can be executed in parallel. Of course, the runtime is limited by the parts of the task which must be performed sequentially.

The theoretically possible performance gain can be calculated by the following rule, which is referred to as Amdahl's Law.

If F is the fraction of the program which cannot run in parallel and N is the number of processors, then the maximum speedup is 1 / (F + ((1 - F) / N)).
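
For example, if 20% of a task must run sequentially (F = 0.2) and four processors are available (N = 4), the maximum speedup is 1 / (0.2 + 0.8/4) = 2.5. Even with an unlimited number of processors the speedup can never exceed 1/F = 5.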

2.2. Concurrency issues

Threads have their own call stack, but can also access shared data. Therefore you have two basic problems: visibility problems and access problems.

A visibility problem occurs if thread A reads shared data which is later changed by thread B and thread A is unaware of this change.

An access problem can occur if several threads access and change the same shared data at the same time.

Visibility and access problems can lead to

  • Liveness failure: The program does not react anymore due to problems in the concurrent access of data, e.g. deadlocks.
  • Safety failure: The program creates incorrect data.

3. Concurrency in Java

3.1. Processes and Threads

A Java program runs in its own process and by default in one thread. Java supports threads as part of the Java language via the Thread class. A Java application can create new threads via this class.

Java 1.5 also provides improved support for concurrency with the java.util.concurrent package.

3.2. Locks and thread synchronization

Java provides locks to protect certain parts of the code from being executed by several threads at the same time. The simplest way of locking a certain method or block of code is to define the method or block with the synchronized keyword.

The synchronized keyword in Java ensures:

  • that only a single thread can execute a block of code at the same time
  • that each thread entering a synchronized block of code sees the effects of all previous modifications that were guarded by the same lock

Synchronization is necessary for mutually exclusive access to blocks of code and for reliable communication between threads.

You can use the synchronized keyword in the definition of a method. This ensures that only one thread can enter this method at the same time. Another thread which calls this method would wait until the first thread leaves this method.

public synchronized void critical() {
    // some thread critical stuff
    // here
}

You can also use the synchronized keyword to protect blocks of code within a method. Such a block is guarded by an object, which serves as the lock.

All code which is protected by the same lock can only be executed by one thread at the same time.

For example, the following data structure ensures that only one thread at a time can execute the inner block of the add() and next() methods.

package de.vogella.pagerank.crawler;

import java.util.ArrayList;
import java.util.List;

/**
 * Data structure for a web crawler. Keeps track of the visited sites and keeps
 * a list of sites which needs still to be crawled.
 *
 * @author Lars Vogel
 *
 */
public class CrawledSites {
    private List<String> crawledSites = new ArrayList<String>();
    private List<String> linkedSites = new ArrayList<String>();

    public void add(String site) {
        synchronized (this) {
            if (!crawledSites.contains(site)) {
                linkedSites.add(site);
            }
        }
    }

    /**
     * Get the next site to crawl. Can return null (if nothing is left to crawl).
     */
    public String next() {
        synchronized (this) {
            // the check must happen while holding the lock; an unsynchronized
            // read of the shared list is not safe
            if (linkedSites.isEmpty()) {
                return null;
            }
            String s = linkedSites.remove(0);
            crawledSites.add(s);
            return s;
        }
    }

}

3.3. Volatile

If a variable is declared with the volatile keyword, it is guaranteed that any thread which reads the field will see the most recently written value. The volatile keyword does not provide any mutual exclusion on the variable.

As of Java 5, a write to a volatile variable also makes all values which were written by the same thread before the volatile write visible to other threads. This can be used to safely publish the state of an object via a volatile reference, e.g. a volatile person field: build the object on a temporary variable, initialize it via its setters, and then assign the temporary variable to the volatile field, as sketched below. This makes both the new reference and the object's values visible to other threads.
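
A minimal sketch of this publication idiom (the Person class and its setName() method are illustrative, not part of the JDK):

public class PersonHolder {
    private volatile Person person;

    public void publish(String name) {
        Person temp = new Person(); // build the object on a temporary variable
        temp.setName(name);         // initialize it via its setters
        person = temp;              // the volatile write publishes the reference and its state
    }
}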

4. The Java memory model

4.1. Overview

The Java memory model describes the communication between the memory of the threads and the main memory of the application.

It defines the rules how changes in the memory done by threads are propagated to other threads.

The Java memory model also defines the situations in which a thread refreshes its own memory from the main memory.

It also describes which operations are atomic and the ordering of the operations.

4.2. Atomic operation

An atomic operation is an operation which is performed as a single unit of work without the possibility of interference from other operations.

The Java language specification guarantees that reading or writing a variable is an atomic operation (unless the variable is of type long or double). Operations on variables of type long or double are only atomic if they are declared with the volatile keyword.
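
For example, declaring a 64-bit field as volatile makes its reads and writes atomic; a minimal sketch:

public class Timestamp {
    // without volatile, a read or write of a long may be split into two 32-bit halves
    private volatile long lastUpdated;

    public void touch() {
        lastUpdated = System.currentTimeMillis();
    }

    public long getLastUpdated() {
        return lastUpdated;
    }
}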

Assume i is defined as int. The i++ (increment) operation is not an atomic operation in Java. This also applies to the other numeric types, e.g. long.

The i++ operation first reads the value which is currently stored in i (an atomic operation) and then adds one to it and writes it back (again atomic in itself). But between the read and the write, the value of i might have been changed by another thread.

Since Java 1.5 the Java language provides atomic variables, e.g. AtomicInteger or AtomicLong, which provide methods like getAndDecrement(), getAndIncrement() and getAndSet(), which are atomic.
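
A minimal sketch of such an atomic counter:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger counter = new AtomicInteger(0);

    public int next() {
        // atomic read-modify-write: no updates are lost, even with many threads
        return counter.incrementAndGet();
    }
}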

4.3. Memory updates in synchronized code

The Java memory model guarantees that each thread entering a synchronized block of code sees the effects of all previous modifications that were guarded by the same lock.

5. Immutability and Defensive Copies

5.1. Immutability

The simplest way to avoid problems with concurrency is to share only immutable data between threads. Immutable data is data which cannot be changed.

To make a class immutable:

  • make all its fields final
  • declare the class itself final
  • do not allow the this reference to escape during construction
  • any fields which refer to mutable data objects must
    • be private
    • have no setter method
    • never be directly returned or otherwise exposed to a caller
    • if they are changed internally in the class, this change must not be visible and must have no effect outside of the class

An immutable class may have some mutable data which it uses to manage its state, but from the outside neither this class nor any attribute of this class can be changed.

For all mutable fields, e.g. arrays, that are passed from the outside to the class during the construction phase, the class needs to make a defensive copy of the elements (as sketched below) to make sure that no other object from the outside can still change the data.
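
A minimal sketch of such a defensive copy for an array passed to a constructor (the class name is illustrative):

import java.util.Arrays;

public final class Measurements {
    private final int[] values;

    public Measurements(int[] values) {
        // copy the array so that later changes by the caller cannot affect this instance
        this.values = Arrays.copyOf(values, values.length);
    }

    public int[] getValues() {
        // return a copy, never the internal array
        return Arrays.copyOf(values, values.length);
    }
}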

5.2. Defensive Copies

You must protect your classes from calling code. Assume that calling code will do its best to change your data in a way you do not expect. While this is especially true for immutable data, it is also true for non-immutable data which you still do not expect to be changed outside your class.

To protect your class against that you should copy data you receive and only return copies of data to calling code.

The following example wraps the internal list (an ArrayList) in an unmodifiable view and returns only this view. This way the client of this class cannot remove elements from the list.

package de.vogella.performance.defensivecopy;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MyDataStructure {
    List<String> list = new ArrayList<String>();

    public void add(String s) {
        list.add(s);
    }

    /**
     * Returns an unmodifiable view of the list.
     * This way the caller cannot modify the list itself.
     *
     * @return List<String>
     */
    public List<String> getList() {
        return Collections.unmodifiableList(list);
    }
}

6. Threads in Java

The basic means for concurrency is the java.lang.Thread class. A Thread executes an object of type java.lang.Runnable.

Runnable is an interface which defines the run() method. This method is called by the Thread object and contains the work which should be done. Therefore the Runnable is the task to perform; the Thread is the worker which performs this task.

The following demonstrates a task (Runnable) which counts the sum of a given range of numbers. Create a new Java project called de.vogella.concurrency.threads for the example code of this section.

package de.vogella.concurrency.threads;

/**
 * MyRunnable counts the sum of the numbers from 1 up to (but excluding) the
 * parameter countUntil and then writes the result to the console.
 * <p>
 * MyRunnable is the task which will be performed
 *
 * @author Lars Vogel
 *
 */
public class MyRunnable implements Runnable {
    private final long countUntil;

    MyRunnable(long countUntil) {
        this.countUntil = countUntil;
    }

    @Override
    public void run() {
        long sum = 0;
        for (long i = 1; i < countUntil; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

The following example demonstrates the usage of the Thread and Runnable classes.

package de.vogella.concurrency.threads;

import java.util.ArrayList;
import java.util.List;

public class Main {

    public static void main(String[] args) {
        // We will store the threads so that we can check if they are done
        List<Thread> threads = new ArrayList<Thread>();
        // We will create 500 threads
        for (int i = 0; i < 500; i++) {
            Runnable task = new MyRunnable(10000000L + i);
            Thread worker = new Thread(task);
            // We can set the name of the thread
            worker.setName(String.valueOf(i));
            // Start the thread, never call the run() method directly
            worker.start();
            // Remember the thread for later usage
            threads.add(worker);
        }
        int running = 0;
        do {
            running = 0;
            for (Thread thread : threads) {
                if (thread.isAlive()) {
                    running++;
                }
            }
            System.out.println("We have " + running + " running threads. ");
        } while (running > 0);

    }
}

Using the Thread class directly has the following disadvantages.

  • Creating a new thread causes some performance overhead.
  • Too many threads can lead to reduced performance, as the CPU needs to switch between these threads.
  • You cannot easily control the number of threads, therefore you may run into out of memory errors due to too many threads.

The java.util.concurrent package offers improved support for concurrency compared to the direct usage of Threads. This package is described in the next section.

7. Threads pools with the Executor Framework

You find these examples in the source section, in the Java project called de.vogella.concurrency.threadpools.

Thread pools manage a pool of worker threads. A thread pool contains a work queue which holds tasks waiting to get executed.

A thread pool can be described as a collection of Runnable objects (the work queue) and a collection of running threads. These threads constantly check the work queue for new work; if there is new work to be done, they execute this Runnable. The Executor interface provides the execute(Runnable r) method to add a new Runnable object to the work queue.

The Executor framework provides implementations of the java.util.concurrent.Executor interface, e.g. Executors.newFixedThreadPool(int n), which will create n worker threads. The ExecutorService interface adds life cycle methods to the Executor, which allow you to shut down the Executor and to wait for termination.

If you want to use a thread pool with a single thread which executes several runnables, you can use the Executors.newSingleThreadExecutor() method, as the sketch below shows.
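
A minimal sketch (the tasks are illustrative); both runnables are executed one after the other on the same worker thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadExample {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(() -> System.out.println("first task"));
        // the second task only starts once the first one has finished
        executor.execute(() -> System.out.println("second task"));
        executor.shutdown();
    }
}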

Create again the Runnable.

package de.vogella.concurrency.threadpools;

/**
 * MyRunnable counts the sum of the numbers from 1 up to (but excluding) the
 * parameter countUntil and then writes the result to the console.
 * <p>
 * MyRunnable is the task which will be performed
 *
 * @author Lars Vogel
 *
 */
public class MyRunnable implements Runnable {
    private final long countUntil;

    MyRunnable(long countUntil) {
        this.countUntil = countUntil;
    }

    @Override
    public void run() {
        long sum = 0;
        for (long i = 1; i < countUntil; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

Now you run your runnables with the executor framework.

package de.vogella.concurrency.threadpools;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {
    private static final int NTHREADS = 10;

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(NTHREADS);
        for (int i = 0; i < 500; i++) {
            Runnable worker = new MyRunnable(10000000L + i);
            executor.execute(worker);
        }
        // This will make the executor accept no new tasks
        // and finish all existing tasks in the queue
        executor.shutdown();
        // Wait until all tasks are finished
        executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        System.out.println("Finished all threads");
    }
}

In case the threads should return some value (result-bearing threads), you can use the java.util.concurrent.Callable interface.

8. Futures and Callables

8.1. Futures and Callables

The executor framework presented in the last chapter uses Runnable objects. Unfortunately a Runnable cannot return a result to the caller.

In case you expect your threads to return a computed result, you can use java.util.concurrent.Callable. A Callable allows you to return a value after completion.

The Callable object uses generics to define the type of object which is returned.

If you submit a Callable object to an Executor, the framework returns an object of type java.util.concurrent.Future. Future exposes methods allowing a client to monitor the progress of a task being executed by a different thread. Therefore, a Future object can be used to check the status of a Callable. It can also be used to retrieve the result from the Callable.

On the Executor you can use the submit() method to submit a Callable and get back a Future. To retrieve the result of the Future, use its get() method.

package de.vogella.concurrency.callables;

import java.util.concurrent.Callable;

public class MyCallable implements Callable<Long> {
    @Override
    public Long call() throws Exception {
        long sum = 0;
        for (long i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}
The CallableFutures class below creates the tasks, submits them and sums up the results.

package de.vogella.concurrency.callables;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableFutures {
    private static final int NTHREADS = 10;

    public static void main(String[] args) {

        ExecutorService executor = Executors.newFixedThreadPool(NTHREADS);
        List<Future<Long>> list = new ArrayList<Future<Long>>();
        for (int i = 0; i < 20000; i++) {
            Callable<Long> worker = new MyCallable();
            Future<Long> submit = executor.submit(worker);
            list.add(submit);
        }
        long sum = 0;
        System.out.println(list.size());
        // now retrieve the result
        for (Future<Long> future : list) {
            try {
                sum += future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
        System.out.println(sum);
        executor.shutdown();
    }
}

8.2. Drawbacks with Futures and Callables

The Future interface is limited as a model of asynchronously executed tasks. Future allows a client to query a Callable task for its result, but it does not provide the option to register a callback method which is invoked once the task is done. In Java 5 you could use ExecutorCompletionService for this purpose, but as of Java 8 you can use the CompletableFuture class, which allows you to provide a callback which is called once a task is completed.

9. CompletableFuture

Asynchronous task handling is important for any application which performs time-consuming activities, such as I/O operations. Two basic approaches to asynchronous task handling are available to a Java application:

  • the application logic blocks until the task completes
  • the application logic is called once the task completes; this is called a nonblocking approach

CompletableFuture extends the functionality of the Future interface for asynchronous calls. It also implements the CompletionStage interface. CompletionStage offers methods that let you attach callbacks which will be executed on completion.

It adds standard techniques for executing application code when a task completes, including various ways to combine tasks; a small thenCombine sketch follows the examples below. CompletableFuture supports both blocking and nonblocking approaches, including regular callbacks.

This callback can be executed in a different thread than the one in which the CompletableFuture itself is executed.

The following example demonstrates how to create a basic CompletableFuture.

CompletableFuture.supplyAsync(this::doSomething);

CompletableFuture.supplyAsync runs the task asynchronously on the default thread pool of Java. You also have the option to supply a custom Executor to define the thread pool.
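
A minimal sketch of supplying a custom executor (the pool size is chosen arbitrarily):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CustomExecutorSnippet {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // the supplier runs on our pool instead of the default pool
        CompletableFuture<Integer> future = CompletableFuture.supplyAsync(() -> 20, pool);
        System.out.println(future.join());
        pool.shutdown();
    }
}

The following snippet demonstrates the basic usage without a custom executor.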

package snippet;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class CompletableFutureSimpleSnippet {
    public static void main(String[] args) {
        long started = System.currentTimeMillis();

        // configure CompletableFuture
        CompletableFuture<Integer> futureCount = createCompletableFuture();

        // continue to do other work
        System.out.println("Took " + (System.currentTimeMillis() - started) + " milliseconds");

        // now it is time to get the result
        try {
            int count = futureCount.get();
            System.out.println("CompletableFuture took " + (System.currentTimeMillis() - started) + " milliseconds");
            System.out.println("Result " + count);
        } catch (InterruptedException | ExecutionException ex) {
            // exceptions from the future should be handled here
        }
    }

    private static CompletableFuture<Integer> createCompletableFuture() {
        CompletableFuture<Integer> futureCount = CompletableFuture.supplyAsync(
                () -> {
                    try {
                        // simulate long running task
                        Thread.sleep(5000);
                    } catch (InterruptedException e) { }
                    return 20;
                });
        return futureCount;
    }

}

The usage of the thenApply method is demonstrated by the following code snippet.

package snippet;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class CompletableFutureCallback {
    public static void main(String[] args) {
        long started = System.currentTimeMillis();

        CompletableFuture<String> data = createCompletableFuture()
                .thenApply((Integer count) -> {
                    int transformedValue = count * 10;
                    return transformedValue;
                }).thenApply(transformed -> "Finally creates a string: " + transformed);

        try {
            System.out.println(data.get());
        } catch (InterruptedException | ExecutionException e) {
            // exceptions from the future should be handled here
        }
    }

    public static CompletableFuture<Integer> createCompletableFuture() {
        CompletableFuture<Integer>  result = CompletableFuture.supplyAsync(() -> {
            try {
                // simulate long running task
                Thread.sleep(5000);
            } catch (InterruptedException e) { }
            return 20;
        });
        return result;
    }

}
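
CompletionStage also offers methods to combine independent tasks; a minimal sketch using thenCombine (the values are illustrative):

import java.util.concurrent.CompletableFuture;

public class CombineSnippet {
    public static void main(String[] args) {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 20);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 22);
        // thenCombine waits for both futures and merges their results
        CompletableFuture<Integer> sum = a.thenCombine(b, Integer::sum);
        System.out.println(sum.join()); // prints 42
    }
}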

10. Nonblocking algorithms

Java 5.0 provides support for additional atomic operations. This allows you to develop non-blocking algorithms, i.e. algorithms which do not require locking but are based on low-level atomic hardware primitives such as compare-and-swap (CAS). A compare-and-swap operation checks whether a variable still holds an expected value and, only if it does, sets it to a new value.

Non-blocking algorithms are typically faster than blocking algorithms, because the synchronization between threads happens on a much finer level (in hardware).

For example, the following class implements a non-blocking counter which always increases. This example is contained in the project called de.vogella.concurrency.nonblocking.counter.

package de.vogella.concurrency.nonblocking.counter;

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private AtomicInteger value = new AtomicInteger();
    public int getValue(){
        return value.get();
    }
    public int increment(){
        return value.incrementAndGet();
    }

    // Alternative implementation of increment() which makes the CAS loop explicit
    public int incrementLongVersion(){
        int oldValue = value.get();
        while (!value.compareAndSet(oldValue, oldValue+1)){
             oldValue = value.get();
        }
        return oldValue+1;
    }

}

And a test.

package de.vogella.concurrency.nonblocking.counter;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Test {
        private static final int NTHREADS = 10;

        public static void main(String[] args) {
            final Counter counter = new Counter();
            List<Future<Integer>> list = new ArrayList<Future<Integer>>();

            ExecutorService executor = Executors.newFixedThreadPool(NTHREADS);
            for (int i = 0; i < 500; i++) {
                Callable<Integer> worker = new Callable<Integer>() {
                    @Override
                    public Integer call() throws Exception {
                        int number = counter.increment();
                        System.out.println(number);
                        return number;
                    }
                };
                Future<Integer> submit = executor.submit(worker);
                list.add(submit);

            }


            // This will make the executor accept no new tasks
            // and finish all existing tasks in the queue
            executor.shutdown();
            // Wait until all tasks have finished
            while (!executor.isTerminated()) {
            }
            Set<Integer> set = new HashSet<Integer>();
            for (Future<Integer> future : list) {
                try {
                    set.add(future.get());
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } catch (ExecutionException e) {
                    e.printStackTrace();
                }
            }
            if (list.size()!=set.size()){
                throw new RuntimeException("Double-entries!!!");
            }

        }


}

The interesting part is how the incrementAndGet() method is implemented. It uses a CAS operation.

public final int incrementAndGet() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}

The JDK itself makes more and more use of non-blocking algorithms to increase performance. Developing correct non-blocking algorithms is not a trivial task.

For more information on non-blocking algorithms, e.g. examples for a non-blocking stack and a non-blocking linked list, please see http://www.ibm.com/developerworks/java/library/j-jtp04186/index.html

11. Fork-Join in Java 7

Java 7 introduced a new parallel mechanism for compute-intensive tasks, the fork-join framework. The fork-join framework allows you to distribute a certain task to several workers and then wait for the result.

For Java 6.0 you can download the package (jsr166y) from the Download site.

For testing create the Java project "de.vogella.performance.forkjoin". If you are not using Java 7 you also need to add jsr166y.jar to the classpath.

First create a package called algorithm and then the following class.

package algorithm;

import java.util.Random;

/**
 *
 * This class defines a long list of integers which defines the problem we will
 * later try to solve
 *
 */
public class Problem {
    private final int[] list = new int[2000000];

    public Problem() {
        Random generator = new Random(19580427);
        for (int i = 0; i < list.length; i++) {
            list[i] = generator.nextInt(500000);
        }
    }

    public int[] getList() {
        return list;
    }

}

Now define the Solver class as shown in the following example code. The API defines other top-level classes, e.g. RecursiveAction and AsyncAction; check the Javadoc for details.

package algorithm;

import java.util.Arrays;

import jsr166y.forkjoin.RecursiveAction;

public class Solver extends RecursiveAction {
    private int[] list;
    public long result;

    public Solver(int[] array) {
        this.list = array;
    }

    @Override
    protected void compute() {
        if (list.length == 1) {
            result = list[0];
        } else {
            int midpoint = list.length / 2;
            int[] l1 = Arrays.copyOfRange(list, 0, midpoint);
            int[] l2 = Arrays.copyOfRange(list, midpoint, list.length);
            Solver s1 = new Solver(l1);
            Solver s2 = new Solver(l2);
            forkJoin(s1, s2);
            result = s1.result + s2.result;
        }
    }
}

Now define a small test class for testing its efficiency.

package testing;

import jsr166y.forkjoin.ForkJoinExecutor;
import jsr166y.forkjoin.ForkJoinPool;
import algorithm.Problem;
import algorithm.Solver;

public class Test {

    public static void main(String[] args) {
        Problem test = new Problem();
        // check the number of available processors
        int nThreads = Runtime.getRuntime().availableProcessors();
        System.out.println(nThreads);
        Solver mfj = new Solver(test.getList());
        ForkJoinExecutor pool = new ForkJoinPool(nThreads);
        pool.invoke(mfj);
        long result = mfj.result;
        System.out.println("Done. Result: " + result);
        long sum = 0;
        // check if the result was ok
        for (int i = 0; i < test.getList().length; i++) {
            sum += test.getList()[i];
        }
        System.out.println("Done. Result: " + sum);
    }
}

12. Deadlock

A concurrent application has the risk of a deadlock. A set of processes is deadlocked if each process is waiting for an event which only another process in the same set can cause.

For example, if thread A waits for a lock on object Z which thread B holds, and thread B waits for a lock on object Y which is held by thread A, then these two threads are deadlocked and cannot continue their processing.

This can be compared to a traffic jam, where cars (threads) require access to a certain street (resource), which is currently blocked by another car (lock).
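
A minimal sketch of such a deadlock, with two threads acquiring the same two locks in opposite order:

public class DeadlockExample {
    private static final Object lockY = new Object();
    private static final Object lockZ = new Object();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            synchronized (lockY) {
                pause();
                synchronized (lockZ) { } // waits forever: thread B holds lockZ
            }
        });
        Thread b = new Thread(() -> {
            synchronized (lockZ) {
                pause();
                synchronized (lockY) { } // waits forever: thread A holds lockY
            }
        });
        a.start();
        b.start();
    }

    private static void pause() {
        try {
            Thread.sleep(100); // makes the unfortunate interleaving very likely
        } catch (InterruptedException e) {
        }
    }
}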


Copyright © 2012-2017 vogella GmbH. Free use of the software examples is granted under the terms of the EPL License. This tutorial is published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Germany license.

See Licence.

from:http://www.vogella.com/tutorials/JavaConcurrency/article.html

Using Volatile Variables Correctly

Volatile variables in the Java language can be viewed as a "lighter-weight synchronized": compared with synchronized blocks, volatile variables require less coding and have less runtime overhead, but they can only accomplish a subset of what synchronized can. This article presents several patterns for using volatile variables effectively, and highlights several situations where volatile variables are not suitable.

Locks provide two primary features: mutual exclusion and visibility. Mutual exclusion means that only one thread at a time may hold a given lock, so this feature can be used to implement protocols for coordinated access to shared data, such that only one thread uses the shared data at a time. Visibility is more subtle: it must ensure that changes made to shared data before releasing a lock are visible to another thread that subsequently acquires that lock. Without the visibility guarantee provided by synchronization, threads could see stale or inconsistent values of shared variables, which can cause a number of serious problems.

Volatile variables

Volatile variables have the visibility properties of synchronized, but none of the atomicity properties. This means that threads automatically see the most up-to-date value of a volatile variable. Volatile variables can be used to provide thread safety, but only in a very restricted set of cases: those that do not impose constraints between multiple variables or between a variable's current value and its future value. So volatile alone is not strong enough to implement a counter, a mutex, or any class that has invariants relating multiple variables (such as "start <= end").

You might choose volatile variables instead of locks for reasons of simplicity or scalability. Some idioms are easier to code and read when volatile variables are used instead of locks. Additionally, volatile variables do not block threads the way locks can, so they rarely cause scalability problems. In some situations, if reads greatly outnumber writes, volatile variables may also offer a performance advantage over locking.

Conditions for correct use of volatile variables

You can use volatile variables instead of locks only under a restricted set of circumstances. Both of the following conditions must be met for volatile variables to provide the desired thread safety:

  • Writes to the variable do not depend on its current value.
  • The variable does not participate in invariants with other variables.

Effectively, these conditions state that the set of valid values that can be written to a volatile variable is independent of any program state, including the variable's current state.

The first condition disqualifies volatile variables from being used as thread-safe counters. While the increment operation (x++) looks like a single operation, it is really a compound read-modify-write sequence of operations that must execute atomically, and volatile does not provide the necessary atomicity. A correct increment requires the value of x to stay unchanged for the duration of the operation, which volatile variables cannot guarantee. (However, if the value is only ever written from a single thread, you can ignore the first condition.)

Most programming situations conflict with one of these two conditions, making volatile variables less generally applicable for implementing thread safety than synchronized. Listing 1 shows a non-thread-safe number range class. It has an invariant: the lower bound is always less than or equal to the upper bound.

Listing 1. A non-thread-safe number range class
@NotThreadSafe
public class NumberRange {
    private int lower, upper;
    public int getLower() { return lower; }
    public int getUpper() { return upper; }
    public void setLower(int value) {
        if (value > upper)
            throw new IllegalArgumentException(...);
        lower = value;
    }
    public void setUpper(int value) {
        if (value < lower)
            throw new IllegalArgumentException(...);
        upper = value;
    }
}

This constrains the range's state variables, so defining the lower and upper fields as volatile is not sufficient to make the class thread safe; synchronization is still needed. Otherwise, with bad luck, two threads could execute setLower and setUpper at the same time with inconsistent values, leaving the range in an inconsistent state. For example, if the initial state is (0, 5), and at the same time thread A calls setLower(4) while thread B calls setUpper(3), the two interleaved operations clearly store values that violate the constraint, yet both threads pass the checks that are supposed to protect the invariant, and the resulting range is (4, 3) — an invalid value. Given other operations on the range, we would need to make setLower() and setUpper() atomic — and defining the fields as volatile cannot achieve that.

Performance considerations

The primary reason to use volatile variables is simplicity: in some situations, using a volatile variable is much simpler than using the corresponding lock. A secondary reason is performance: in some cases, a volatile variable is a better-performing synchronization mechanism than a lock.

It is hard to make accurate, across-the-board statements such as "X is always faster than Y", especially for operations internal to the JVM. (For example, the VM may be able to remove locking entirely in some cases, making it hard to reason abstractly about the relative cost of volatile and synchronized.) That said, on most current processor architectures, volatile reads are very cheap — almost as cheap as non-volatile reads. Volatile writes are considerably more expensive than non-volatile writes, because a memory fence is required to guarantee visibility, but even so, the total cost of volatile is often lower than that of lock acquisition.

Volatile operations cannot block the way locks can, so in situations where volatile can be used safely, it can offer some scalability advantages over locks. If reads greatly outnumber writes, volatile variables can often reduce the performance cost of synchronization compared with locking.

Patterns for correct volatile use

Many concurrency experts in fact tend to steer users away from volatile variables, because they are easier to misuse than locks. However, if you carefully follow a number of well-defined patterns, you can use volatile variables safely in many situations. Always keep volatile's limitation in mind — use volatile only when the state is truly independent of everything else in the program — and this rule will keep you from extending these patterns into unsafe territory.

Pattern #1: status flags

Perhaps the canonical use of volatile variables is a simple boolean status flag, indicating that an important one-time event has happened, such as initialization being completed or a shutdown being requested.

Many applications contain a control construct of the form "keep doing some work until you are told to stop", as shown in Listing 2:

Listing 2. Using a volatile variable as a status flag
volatile boolean shutdownRequested;
...
public void shutdown() { shutdownRequested = true; }
public void doWork() {
    while (!shutdownRequested) {
        // do stuff
    }
}

It is quite likely that the shutdown() method will be called from outside the loop — that is, from another thread — so some form of synchronization is needed to ensure the visibility of the shutdownRequested variable. (It might be called from a JMX listener, an action listener in a GUI event thread, via RMI, via a web service, and so on.) However, writing the loop with a synchronized block would be much more cumbersome than the volatile status flag shown in Listing 2. Because volatile simplifies the coding, and the status flag does not depend on any other state in the program, this is an excellent use of volatile.

A common characteristic of status flags of this type is that there is typically only one state transition: the shutdownRequested flag goes from false to true and then the program stops. The pattern can be extended to flags that transition back and forth (from false to true and back to false), but only if it is acceptable that a full transition cycle may go unnoticed. Otherwise, some atomic state-transition mechanism, such as an atomic variable, is needed.
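
A minimal sketch of such an atomic state transition using AtomicBoolean:

import java.util.concurrent.atomic.AtomicBoolean;

public class OneTimeAction {
    private final AtomicBoolean started = new AtomicBoolean(false);

    public void startOnce() {
        // compareAndSet atomically flips false to true for exactly one caller
        if (started.compareAndSet(false, true)) {
            // the one-time action runs here exactly once
        }
    }
}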

Pattern #2: one-time safe publication

The visibility failures possible in the absence of synchronization get trickier when what is being written is an object reference rather than a primitive value. Without synchronization, you could see an up-to-date value for an object reference (written by another thread) together with stale values for that object's state. (This is the root cause of the famous double-checked locking problem, where an object reference is read without synchronization, and the risk is that you could see an up-to-date reference but still observe a partially constructed object through that reference.)

One technique for safely publishing an object is to make the object reference volatile. Listing 3 shows an example where a background thread loads some data from a database during startup. Other code that wants to use the data checks, before using it, whether it has been published yet.

Listing 3. Using a volatile variable for one-time safe publication
public class BackgroundFloobleLoader {
    public volatile Flooble theFlooble;
    public void initInBackground() {
        // do lots of stuff
        theFlooble = new Flooble();  // this is the only write to theFlooble
    }
}
public class SomeOtherClass {
    public void doWork() {
        while (true) {
            // do some stuff...
            // use the Flooble, but only if it is ready
            if (floobleLoader.theFlooble != null)
                doSomething(floobleLoader.theFlooble);
        }
    }
}

If the theFlooble reference were not volatile, the code in doWork() could end up with a partially constructed Flooble when it dereferences theFlooble.

A necessary condition for this pattern is that the published object must be either thread safe or effectively immutable (effectively immutable means that the object's state is never modified after publication). A volatile reference ensures the visibility of the object in its as-published form, but if the object's state will change after publication, additional synchronization is needed.

Pattern #3: independent observations

Another simple pattern for safely using volatile is periodically "publishing" observations for consumption within the program. For example, imagine an environmental sensor that senses the ambient temperature. A background thread might read the sensor every few seconds and update a volatile variable holding the current temperature. Other threads can then read this variable and always see the most recent value.

Another application of this pattern is collecting statistics about the program. Listing 4 shows how an authentication mechanism might remember the name of the user who most recently logged in. The lastUser reference is used repeatedly to publish a value for the rest of the program to use.

Listing 4. Using a volatile variable to publish a series of independent observations
public class UserManager {
    public volatile String lastUser;
    public boolean authenticate(String user, String password) {
        boolean valid = passwordIsValid(user, password);
        if (valid) {
            User u = new User();
            activeUsers.add(u);
            lastUser = user;
        }
        return valid;
    }
}

This pattern is an extension of the previous one: a value is published for use elsewhere in the program, but instead of publishing a one-time event, it is a series of independent events. This pattern requires that the published value be effectively immutable — that its state never changes after publication. Code that uses the value must be aware that it can change at any time.

Pattern #4: the "volatile bean" pattern

The volatile bean pattern applies to frameworks that use JavaBeans as "glorified structs". In the volatile bean pattern, the JavaBean is used as a container for a group of independent properties with getters and/or setters. The rationale for the volatile bean pattern is that many frameworks provide containers for holders of mutable data (such as HttpSession), but the objects placed in these containers must be thread safe.

In the volatile bean pattern, all the data members of the JavaBean are volatile, and the getters and setters must be trivial — they must contain no logic other than getting or setting the appropriate property. Furthermore, for data members that are object references, the referenced objects must be effectively immutable. (This rules out properties with array values, because when an array reference is declared volatile, only the reference, not the array itself, has volatile semantics.) As with any volatile variable, no invariants or constraints may involve the JavaBean's properties. Listing 5 shows a JavaBean that obeys the volatile bean pattern:

Listing 5. A Person object that obeys the volatile bean pattern
@ThreadSafe
public class Person {
    private volatile String firstName;
    private volatile String lastName;
    private volatile int age;
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getAge() { return age; }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public void setAge(int age) {
        this.age = age;
    }
}

Advanced volatile patterns

The patterns in the previous sections cover most of the basic cases where using volatile is useful and straightforward. This section looks at a more advanced pattern in which volatile provides a performance or scalability benefit.

Advanced uses of volatile are very fragile. The assumptions they rest on must be carefully verified, and the patterns must be tightly encapsulated, because even a very small change can break your code! Also, since the motivation for the more advanced volatile use cases is performance, make sure you have a genuine, demonstrated need for the performance gain before applying them. These patterns trade readability and maintainability for possible performance gains — if you do not need the performance improvement (or cannot prove that you need it through a rigorous measurement program), this is probably a bad trade, because you are likely giving up something worth more than what you get in return.

Pattern #5: the cheap read-write lock trick

By now you should understand that volatile alone is not strong enough to implement a counter. Because ++x is really shorthand for three operations (read, add, store), it is possible for an update to be lost if multiple threads happen to try to increment a volatile counter at the same time.

However, if reads greatly outnumber writes, you can combine intrinsic locking and volatile variables to reduce the cost of the common code path. The thread-safe counter shown in Listing 6 uses synchronized to ensure that the increment operation is atomic, and uses volatile to guarantee the visibility of the current result. If updates are infrequent, this approach can perform better, because the overhead on the read path is only a volatile read, which is generally cheaper than even an uncontended lock acquisition.

Listing 6. Combining volatile and synchronized to form a "cheap read-write lock"
@ThreadSafe
public class CheesyCounter {
    // Employs the cheap read-write lock trick
    // All mutative operations MUST be done with the 'this' lock held
    @GuardedBy("this") private volatile int value;
    public int getValue() { return value; }
    public synchronized int increment() {
        return value++;
    }
}

This technique is called the "cheap read-write lock" because you are using different synchronization mechanisms for reads and writes. Because the writes in this case violate the first condition for using volatile, you cannot use volatile to implement the counter safely — you must use locking. However, you can use volatile in the reads to ensure the visibility of the current value, so you use locking for all mutative operations and volatile for read-only operations. Where locks allow only one thread at a time to access the value, volatile reads can proceed in multiple threads at once, so guarding the read code path with volatile offers a higher degree of sharing than locking the entire code path would — just like a read-write lock does. However, always bear in mind the fragility of this pattern: beyond its most basic application, combining these two competing synchronization mechanisms becomes very tricky.

Conclusion

Compared with locking, volatile variables are a very simple yet very fragile synchronization mechanism that in some situations offers better performance and scalability than locks. If you strictly follow the conditions for using volatile — that the variable is truly independent of both other variables and its own previous values — you can sometimes use volatile instead of synchronized to simplify code. However, code using volatile is more error-prone than code using locks. The patterns presented in this article cover the most common use cases where volatile can be substituted for synchronized. Following these patterns (and being careful not to exceed their respective limits) should help you safely handle most such cases and get better performance from volatile variables.



from:https://www.ibm.com/developerworks/cn/java/j-jtp06197.html

Multithreading Demystified

Introduction

The article deals with explaining the concepts behind implementing multi-threading applications in .NET through a working code example. The article covers the following topics in brief:

  1. Concepts of threading
  2. How to implement multi-threading in .NET
  3. Concepts behind implementing Thread Safe applications
  4. Deadlocks

What is a Process?

A process is an Operating System context in which an executable runs. It is used to segregate virtual address space, threads, object handles (pointers to resources such as files), and environment variables. Processes have attributes such as base priority class and maximum memory consumption.

Meaning…

  1. A process is a memory slice that contains resources
  2. An isolated task performed by the Operating System
  3. An application that is being run
  4. A process owns one or more Operating System threads

Technically, a process on a 32-bit system has a 4 GB virtual address space. This memory is private to the process and cannot be directly accessed by other processes.

What is a Thread?

A thread is an instruction stream executing within a process. All threads execute within a process and a process can have multiple threads. All threads of a process use their process’ virtual address space. The thread is a unit of Operating System scheduling. The context of the thread is saved / restored as the Operating System switches execution between threads.

Meaning…

  • A thread is an instruction stream executing within a process.
  • All threads execute within a process and a process can have multiple threads.
  • All threads of a process use their process’ virtual address space.

What is Multi-Threading?

Multi-threading is when a process has multiple threads active at the same time. This allows for either the appearance of simultaneous thread execution (through time slicing) or actual simultaneous thread execution on hyper-threaded and multi-processor systems.

Multi-Threading – Why and Why Not

Why multi-thread:

  • To keep the UI responsive.
  • To improve performance (for example, concurrent operation of CPU bound and I/O bound activities).

Why not multi-thread:

  • Overhead can reduce actual performance.
  • Complicates code, increases design time, and risk of bugs.

Thread Pool

The thread pool provides your application with a pool of worker threads that are managed by the system. The threads in the managed thread pool are background threads. A ThreadPool thread will not keep an application running after all foreground threads have exited. There is one thread pool per process. The thread pool has a default size of 25 threads per available processor. The number of threads in the pool can be changed by the SetMaxThreads method. Each thread uses the default stack size and runs at the default priority.

Threading in .NET

In .NET, threading is achieved by the following methods:

  1. Thread class
  2. Delegates
  3. Background Worker
  4. ThreadPool
  5. Task
  6. Parallel

In the sections below, we will see how threading can be implemented by each of these methods.

In a nutshell, multi-threading is a technology by which any application can be made to run multiple tasks concurrently, thereby utilizing the maximum computing power of the processor and keeping the UI responsive.

The code

The project is a simple WinForms application which demonstrates the use of threading in .NET by three methods:

  1. Delegates
  2. Thread class
  3. Background Worker

The application executes a heavy operation asynchronously so that the UI is not blocked. The same heavy operation is achieved by the above three ways to demonstrate their purpose.

The “Heavy” Operation

In the real world, a heavy operation can be anything from polling a database to streaming a media file. For this example, we have simulated a heavy operation by appending values to a string. Strings being immutable, a string append causes a new string object to be created while the old one is discarded (this is handled by the CLR). If done a huge number of times, this can really consume a lot of resources (a reason why we use StringBuilder.Append instead). In the UI, set the up-down counter to specify the number of times the string is going to be appended.

We have a Utility class in the backend, which has a LoadData() method. It also has a delegate with signature similar to that of LoadData().

class Utility
{
    public delegate string delLoadData(int number);
    public static delLoadData dLoadData;

    public Utility()
    {

    }

    public static string LoadData(int max)
    {
        string str = string.Empty;

        for (int i = 0; i < max; i++)
        {
            str += i.ToString();
        }

        return str;
    }
}

The Synchronous Call

When you click the “Get Data Sync” button, the operation is run in the same thread as that of the UI thread (blocking call). Hence, for the time the operation is running, the UI will remain unresponsive.

private void btnSync_Click(object sender, EventArgs e)
{
    this.Cursor = Cursors.WaitCursor;
    this.txtContents.Text = Utility.LoadData(upCount);
    this.Cursor = Cursors.Default;
}

The Asynchronous Call

Using Delegates (Asynchronous Programming Model)

If you choose the radio button "Delegates", the LoadData() method is called asynchronously using delegates. We first initialize a delegate of type delLoadData with the address of Utility.LoadData(). Then we call the BeginInvoke() method of the delegate. In the .NET world, methods named BeginXXX and EndXXX follow the asynchronous programming pattern: delegate.Invoke() calls a method on the same thread, while delegate.BeginInvoke() calls the method on a separate thread.

The BeginInvoke() takes three arguments:

  1. Parameter to be passed to the Utility.LoadData() method
  2. Address of the callback method
  3. State of the object

Utility.dLoadData = new Utility.delLoadData(Utility.LoadData);
Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

The Callback

Once we spawn an operation in a thread, we have to know what is happening in that operation. In other words, we should be notified when it has completed its operation. There are three ways of knowing whether the operation has completed:

  1. Callback
  2. Polling
  3. Wait until done

In our project, we use a callback method to trap the finishing of the thread. This is nothing but the name of the method that you passed while calling the BeginInvoke() method. This tells the thread to come back and invoke that method when it has finished doing what it was supposed to do.

Once a method is fired in a separate thread, you might or might not be interested to know what that method returns. If the method does not return anything, then it will be a “fire and forget call”. In such a case, you would not be interested in the callback and would pass the callback parameter as null.

Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

In our case, we need a callback and hence we have passed the name of our callback method, which here happens to be named CallBack().

private void CallBack(IAsyncResult asyncResult)
{
    string result= string.Empty;

    if (this.cancelled)
        result = "Operation Cancelled";
    else
        result = Utility.dLoadData.EndInvoke(asyncResult);

      object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

The signature of a callback method is – void MethodName(IAsyncResult asyncResult).

The IAsyncResult contains the necessary information about the thread. The returned data can be trapped as follows:

result = Utility.dLoadData.EndInvoke(asyncResult);

The polling method (not used in this project) is like the following:

IAsyncResult r = Utility.dLoadData.BeginInvoke(upCount, CallBack, null);
while (!r.IsCompleted)
{
    // do other work while the operation is still running
}
result = Utility.dLoadData.EndInvoke(r);

The wait-until-done, as the name suggests, is to wait until the operation is completed.

IAsyncResult r = Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

// do work
result = Utility.dLoadData.EndInvoke(r); // blocks until the operation has completed

Updating the UI

Now that we have trapped the end of the operation and retrieved the result that LoadData() returned, we need to update the UI with that result. But there is a problem: the text box which needs to be updated resides on the UI thread, while the result has been returned in the callback. The callback executes on the worker thread on which the operation ran, so the UI thread is different from the callback thread. In other words, the text box cannot be updated with the result as shown below:

this.txtContents.Text = text;

Executing this line in the callback method will result in a cross thread system exception. We have to form a bridge between the UI thread and the background thread to update the result in the textbox. That is done using the Invoke() or BeginInvoke() methods of the form.

I have defined a method which will update the UI:

private void UpdateUI(bool cancelled, string text)
{
    this.btnAsync.Enabled = true;
    this.btnCancel.Enabled = false;
    this.txtContents.Text = text;
}

Define a delegate to the above method:

private delegate void delUpdateUI(bool value, string text);
dUpdateUI = new delUpdateUI(UpdateUI);

Call the BeginInvoke() method of the form:

object[] args = { this.cancelled, result };
this.BeginInvoke(dUpdateUI, args);

One thing to be noted here is that once a thread is spawned using a delegate, it cannot be cancelled, suspended, or aborted. We have no control over that thread.

Using the Thread Class

The same operation can be achieved using the Thread class. The advantage is that the Thread class gives you more power over suspending and cancelling the operation. The Thread class resides in the namespace System.Threading.

We have a private method LoadData() which is a wrapper to our Utility.LoadData().

private void LoadData()
{
    string result = Utility.LoadData(upCount);
    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

The reason we have this wrapper is that Utility.LoadData() requires an argument, while we need a parameterless ThreadStart delegate to initialize the thread.

doWork = new Thread(new ThreadStart(this.LoadData));
doWork.Start();

The ThreadStart delegate has a void, parameterless signature. In case we need to pass an argument, we have to use a ParameterizedThreadStart delegate. Unfortunately, the ParameterizedThreadStart delegate only takes an object as its parameter, so the receiving method has to cast it to the required type.

doWork = new Thread(new ParameterizedThreadStart(this.LoadData));
doWork.Start(parameter);

The Thread class gives you a lot of control over the thread, like Suspend, Abort, Interrupt, ThreadState, etc.

Using BackgroundWorker

The BackgroundWorker is a component which helps make threading simple. Its main feature is that it can report progress asynchronously, which can be used to update a status bar, keeping the UI informed about the progress of the operation in a visual way.

To do this, we need to set the following properties to true. These are false by default.

  • WorkerReportsProgress
  • WorkerSupportsCancel

The component has three main events: DoWork, ProgressChanged, and RunWorkerCompleted. We need to subscribe to these events during initialization:

this.bgCount.DoWork += new DoWorkEventHandler(bgCount_DoWork);
this.bgCount.ProgressChanged +=
     new ProgressChangedEventHandler(bgCount_ProgressChanged);
this.bgCount.RunWorkerCompleted +=
     new RunWorkerCompletedEventHandler(bgCount_RunWorkerCompleted);

The operation can be started by invoking the RunWorkerAsync() method as shown below:

this.bgCount.RunWorkerAsync();

Once this is invoked, the following method is invoked for processing the operation:

void bgCount_DoWork(object sender, DoWorkEventArgs e)
{
    string result = string.Empty;
    if (this.bgCount.CancellationPending)
    {
        e.Cancel = true;
        e.Result = "Operation Cancelled";
    }
    else
    {
        for (int i = 0; i < this.upCount; i++)
        {
            result += i.ToString();
            // multiply before dividing: (i / upCount) * 100 in integer math is always 0
            this.bgCount.ReportProgress((i * 100) / this.upCount);
        }
        e.Result = result;
    }
}

The CancellationPending property can be checked to see if the operation has been cancelled. The operation can be cancelled by calling:

this.bgCount.CancelAsync();

The line below reports the percentage progress:

this.bgCount.ReportProgress((i * 100) / this.upCount);

Once this is called, the below method is invoked to update the UI:

void bgCount_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    if (this.bgCount.CancellationPending)
        this.txtContents.Text = "Cancelling....";
    else
        this.progressBar.Value = e.ProgressPercentage;
}

Finally, the bgCount_RunWorkerCompleted method is called to complete the operation:

void bgCount_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    this.btnAsync.Enabled = true;
    this.btnCancel.Enabled = false;
    this.txtContents.Text = e.Result.ToString();
}

Thread Pools

It is not recommended that programmers create as many threads as possible on their own. Creating threads is an expensive operation: there are overheads involved in terms of memory and computing. Also, the computer can only run one thread at a given time per CPU core. So if there are multiple threads on a single-core system, the computer can only cater to one thread at a time. It does so by allocating time "slices" to each thread and working on the available threads in a round-robin manner (which also depends on their priority). This is called context switching, which in itself is another overhead. So if we have too many threads practically doing nothing or sitting idle, we only accumulate overheads in terms of memory consumption, context switching, etc. without any net gain. So as developers we need to be extremely cautious when creating threads and diligent about the number of existing threads we are working with.

Fortunately the CLR has a managed library that does this for us: the ThreadPool class. This class manages a number of threads in its pool and decides on the need to create or destroy threads based on our application's needs. The thread pool has no threads to start with; as requests start queuing up, it starts creating threads. If we call SetMinThreads, the thread pool quickly assigns that many threads as work items start queuing up. When the thread pool finds that threads have been idle or asleep for a long time, it kills threads appropriately.

So, Thread pools are a great way to tap into the pool of background threads that is maintained by our computer. The ThreadPool class allows us to queue a work item which is then delegated to a background thread.

WaitCallback threadCallback = new WaitCallback(HeavyOperation);

for (int i = 0; i < 3; i++)
{
  // reuse the declared WaitCallback delegate; 'i' is passed as the state object
  System.Threading.ThreadPool.QueueUserWorkItem(threadCallback, i);
}

The heavy operation is defined as:

private static void HeavyOperation(object WorkItem)
{
  System.Threading.Thread.Sleep(5000);
  Console.WriteLine("Executed work Item {0}", (int)WorkItem);
}

 

Notice the signature of the WaitCallback delegate: it must take an object as its method parameter. This is generally used to pass state information between threads.

Now that we know how to delegate work to a background thread using the ThreadPool, we should explore the callback techniques that go with it. We capture callbacks by using a WaitHandle. The WaitHandle class has two derived classes: AutoResetEvent and ManualResetEvent.

public static void Demo_ResetEvent()
{
  Server s = new Server();
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
     s.DoWork();                

   }));

   ((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).WaitOne();
    Console.WriteLine("Work complete signal received");
}

Here we have a Global class which maintains a singleton instance for the WaitHandles.

public static class Global
{
  static WaitHandle w = null;
  static AutoResetEvent ae = new AutoResetEvent(false);
  static ManualResetEvent me = new ManualResetEvent(false);
  public static WaitHandle GetHandle(Handles Type)
  {
    switch (Type)
    {
      case Handles.ManualResetEvent:
         w = me;
         break;
      case Handles.AutoResetEvent:
         w = ae;
         break;
      default:
         break;
    }
    return w;
  }
}

The WaitOne method blocks the code execution until the handle is signaled (Set) from the background thread.

 

public void DoWork()
{
  Console.WriteLine("Work Starting ...");
  Thread.Sleep(5000);
  Console.WriteLine("Work Ended ...");
  ((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).Set();
}

 

The AutoResetEvent resets itself after being set automatically. It is analogous to a toll gate on an expressway where two or more lanes merge so that vehicles can only pass one at a time. When a vehicle approaches, the gate is set allowing it to pass through and then immediately resets automatically for the other vehicle.

The following example elaborates on the AutoResetEvent. Consider a server with a method DoWork(). This method is a heavy operation, and the application needs to update a log file after calling it. Several threads access this method asynchronously; hence, we must make sure that the log update is thread safe, i.e. only available to one thread at a time.

public void DoWork(int threadID, int waitSignal)
{
  Thread.Sleep(waitSignal);
  Console.WriteLine("Work Complete by Thread : {0} @ {1}", threadID, DateTime.Now.ToString("hh:mm:ss"));
  ((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).Set();
}
public void UpdateLog(int threadID)
{
  if(((AutoResetEvent)Global.GetHandle(Handles.AutoResetEvent)).WaitOne(5000))
       Console.WriteLine("Update Log File by thread : {0} @ {1}", threadID, DateTime.Now.ToString("hh:mm:ss"));
  else
       Console.WriteLine("Time out");
}

We create two threads and delegate the DoWork() method to each simultaneously, then call UpdateLog(). The code in UpdateLog() waits for each thread to complete its respective task before updating.

public static void Demo_AutoResetEvent()
{
  Console.WriteLine("Demo AutoReset event...");
  Server s = new Server();

  Console.WriteLine("Start Thread 1..");
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
    s.DoWork(1, 4000);
  }));

  Console.WriteLine("Start Thread 2..");
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
    s.DoWork(2, 4000);
  }));

  s.UpdateLog(1);
  s.UpdateLog(2);
}

The ManualResetEvent differs from the AutoResetEvent in that we need to reset it manually before setting it again. Unlike the AutoResetEvent, it does not reset automatically. Consider a server which sends messages continuously in a background thread. The server runs a continuous loop awaiting the signal to send messages. When the handle is set, the server starts sending messages. When the wait handle is reset, the server stops, and the process can be repeated.

public void SendMessages(bool monitorSignal)
{
  int counter = 1;
  while (monitorSignal)
  {
    if (((ManualResetEvent)Global.GetHandle(Handles.ManualResetEvent)).WaitOne())
    {
      Console.WriteLine("Sending message {0}", counter);
      Thread.Sleep(3000);
      counter += 1;
    }
  }
}
public static void Demo_ManualResetEvent()
{
  Console.WriteLine("Demo ManualReset event...");
  Server s = new Server();
  ThreadPool.QueueUserWorkItem(new WaitCallback((o) =>
  {
    s.SendMessages(true);
  }));

  Console.WriteLine("Press 1 to send messages");
  Console.WriteLine("Press 2 to stop messages");

  while (true)
  {
    int input;
    // Guard against non-numeric input instead of letting Convert throw.
    if (!int.TryParse(Console.ReadLine(), out input))
    {
      Console.WriteLine("Invalid Input");
      continue;
    }

    switch (input)
    {
      case 1:
        Console.WriteLine("Starting to send message ...");
        ((ManualResetEvent)Global.GetHandle(Handles.ManualResetEvent)).Set();
        break;
      case 2:
        ((ManualResetEvent)Global.GetHandle(Handles.ManualResetEvent)).Reset();
        Console.WriteLine("Message Stopped ...");
        break;
      default:
        Console.WriteLine("Invalid Input");
        break;
    }
  }
}

The Task Class

.NET 4.0 introduced an extension of the ThreadPool in the form of the Task class. The concept remains pretty much the same, with the exception that we gain the power to cancel a task, wait on a task, and check the task's status from time to time to track its progress. Consider the following example where we have three methods:

static void DoHeavyWork(CancellationToken ct)
{
  try
  {
    while (true)
    {
      // Throws OperationCanceledException once cancellation has been requested.
      ct.ThrowIfCancellationRequested();
      Console.WriteLine("Background thread working for task 3..");
      Thread.Sleep(2000);
    }
  }
  catch (OperationCanceledException ex)
  {
    Console.WriteLine("Exception :" + ex.Message);
  }
  catch (Exception ex)
  {
    Console.WriteLine("Exception :" + ex.Message);
  }
}

static void DoHeavyWork(int n)
{
  Thread.Sleep(5000);
  Console.WriteLine("Operation complete for thread {0}", Thread.CurrentThread.ManagedThreadId);
}
static int DoHeavyWorkWithResult(int num)
{
  Thread.Sleep(5000);
  Console.WriteLine("Operation complete for thread {0}", Thread.CurrentThread.ManagedThreadId);
  return num;
}

We have three tasks designed to run these three methods. The first task completes without returning a result, the second completes and returns a result, while the third is cancelled before completion.

try
{
    Console.WriteLine(DateTime.Now);
    CancellationTokenSource cts1 = new CancellationTokenSource();
    CancellationTokenSource cts2 = new CancellationTokenSource();
    CancellationTokenSource cts3 = new CancellationTokenSource();

    Task t1 = new Task((o) => DoHeavyWork(2), cts1.Token);

    Console.WriteLine("Starting Task 1");
    Console.WriteLine("Task1 state {0}", t1.Status);
    t1.Start();

    Console.WriteLine("Starting Task 2");
    Task<int> t2 = Task<int>.Factory.StartNew((o) => DoHeavyWorkWithResult(2), cts2.Token);

    Console.WriteLine("Starting Task 3");
    Task t3 = new Task((o) => DoHeavyWork(cts3.Token), cts3.Token);
    t3.Start();

    Console.WriteLine("Task1 state {0}", t1.Status);
    Console.WriteLine("Task2 state {0}", t2.Status);
    Console.WriteLine("Task3 state {0}", t3.Status);

    // wait for task 1 to finish
    t1.Wait();

    Console.WriteLine("Task 1 complete");

    Console.WriteLine("Task1 state {0}", t1.Status);
    Console.WriteLine("Task2 state {0}", t2.Status);
    Console.WriteLine("Task3 state {0}", t3.Status);

    // cancel task 3
    Console.WriteLine("Task 3 is : {0} and cancelling...", t3.Status);
    cts3.Cancel();

    // wait for task 2 to finish
    t2.Wait();

    Console.WriteLine("Task 2 complete");

    Console.WriteLine("Task1 state {0}", t1.Status);
    Console.WriteLine("Task2 state {0}", t2.Status);
    Console.WriteLine("Task3 state {0}", t3.Status);

    Console.WriteLine("Result {0}", t2.Result);
    Console.WriteLine(DateTime.Now);

    t3.Wait();

    Console.WriteLine("Task 3 complete");
    Console.WriteLine(DateTime.Now);
}
catch (Exception ex)
{
    Console.WriteLine("Exception : " + ex.Message);
}
finally
{
    Console.Read();
}

Parallel Programming with .NET 4.0 (Time Slicing)

.NET 4.0 came with a cool feature: parallel processing. Most of the threading examples that we saw above were only about delegating bulk jobs to idle threads; the computer was still processing one thread at a time per core in a round-robin way. In a nutshell, we were not multitasking in the true sense of the word. That becomes possible with the Parallel class.

Consider an Employee class which has a heavy operation, ProcessEmployeeInformation:

class Employee
{
  public Employee() { }

  public int EmployeeID { get; set; }

  public void ProcessEmployeeInformation()
  {
    Thread.Sleep(5000);
    Console.WriteLine("Processed Information for Employee {0}", EmployeeID);
  }
}

We create 8 instances and fire parallel requests. On a 4-core processor, roughly 4 of the requests will be processed simultaneously while the rest are queued, waiting for a thread to free up.

List<Employee> empList = new List<Employee>()
{
  new Employee(){ EmployeeID = 1 },
  new Employee(){ EmployeeID = 2 },
  new Employee(){ EmployeeID = 3 },
  new Employee(){ EmployeeID = 4 },
  new Employee(){ EmployeeID = 5 },
  new Employee(){ EmployeeID = 6 },
  new Employee(){ EmployeeID = 7 },
  new Employee(){ EmployeeID = 8 },
};

Console.WriteLine("Start Operation {0}", DateTime.Now);
System.Threading.Tasks.Parallel.ForEach(empList, (e) => e.ProcessEmployeeInformation());

We can control or limit the number of concurrent tasks by using the MaxDegreeOfParallelism property. If it is set to -1, there is no limit.

System.Threading.Tasks.Parallel.For(0, 8, new ParallelOptions() { MaxDegreeOfParallelism = 4 }, (o) =>
{
  Thread.Sleep(5000);
  Console.WriteLine("Thread ID - {0}", Thread.CurrentThread.ManagedThreadId);
});

The problem with parallelism is that if we fire a set of requests, we have no guarantee that the responses will arrive in the same order; the order in which the threads get processed is non-deterministic. PLINQ's AsOrdered operator helps us ensure just that: the inputs can be processed in any order, but the output will be delivered in the original order.

Console.WriteLine("Start Operation {0}", DateTime.Now);
var q = from e in empList.AsParallel().AsOrdered()
        select new { ID = e.EmployeeID };

foreach (var item in q)
{
  Console.WriteLine(item.ID);
}
Console.WriteLine("End Operation {0}", DateTime.Now);

Web Applications

Threading in ASP.NET web applications can be achieved by sending an AJAX request from the client to the server. This lets the client request data from the server without blocking the UI. When the data is ready, the client is notified via a callback and only the part of the page concerned is updated, keeping the client agile and responsive. The most common way to achieve this is via ICallbackEventHandler. Refer to the project Demo.Threading.Web. I have the same interface as the Windows version, with a text box to enter a number and a textbox to show the data. The Load Data button performs the previously discussed "heavy" operation.
<div>
    <asp:Label runat="server" >Enter Number</asp:Label>
    <input type="text" id="inputText" /><br /><br />
    <asp:TextBox ID="txtContentText" runat="server" TextMode="MultiLine" /><br /><br />
    <input type="button" id="LoadData" title="LoadData"
           onclick="LoadHeavyData()" value="LoadData" />
</div>
I have a JavaScript function LoadHeavyData() which is called on the click event of the button. This function calls the function CallServer with parameters.
<script type="text/ecmascript">
    function LoadHeavyData() {

        var lb = document.getElementById("inputText");
        CallServer(lb.value.toString(), "");
    }

    function ReceiveServerData(rValue) {
        document.getElementById("txtContentText").innerHTML = rValue;
    }
</script>
The CallServer function is registered with the server in the script that is defined at the page load event of the page:
protected void Page_Load(object sender, EventArgs e)
{
    String cbReference = Page.ClientScript.GetCallbackEventReference(this,
                         "arg", "ReceiveServerData", "context");

    String callbackScript;
    callbackScript = "function CallServer(arg, context)" +
                     "{ " + cbReference + ";}";

    Page.ClientScript.RegisterClientScriptBlock(this.GetType(),
                      "CallServer", callbackScript, true);
}
The above script defines and registers a CallServer function. On calling CallServer, the RaiseCallbackEvent method of ICallbackEventHandler is invoked. This method invokes the LoadData() method, which performs the heavy operation and returns the data.
public void RaiseCallbackEvent(string eventArgument)
{
    if (eventArgument!=null)
    {
        Result = this.LoadData(Convert.ToUInt16(eventArgument));
    }
}

private string LoadData(int num)
{
    // call Heavy data
    return Utility.LoadData(num);
}
Once LoadData() is executed, the GetCallbackResult() method of ICallbackEventHandler is executed, which returns the data:
public string GetCallbackResult()
{
    return Result;
}
Finally, the ReceiveServerData() function is called to update the UI. The ReceiveServerData function is registered as the callback for the CallServer() function in the page load event.
function ReceiveServerData(rValue) {
    document.getElementById("txtContentText").innerHTML = rValue;
}

WPF

Typically WPF applications start with two threads:

  1. Rendering Thread – runs in the background handling low-level tasks.
  2. UI Thread – receives input, handles events, paints the screen and runs application code.

Threading in WPF is achieved in much the same way as in WinForms, except that we use the Dispatcher object to bridge UI updates from a background thread. The UI thread queues work items inside an object called the Dispatcher. The Dispatcher selects work items on a priority basis and runs each one to completion. Every UI thread has one Dispatcher, and each Dispatcher executes items on exactly one thread. When expensive work completes on a background thread and the UI needs to be updated with the result, we use the Dispatcher to queue the item into the task list of the UI thread.

Consider the following example, where we have a Grid split into two parts. In the first part we have a property called ViewModelProperty bound to the view model, and in the second part we have a bound collection, ViewModelCollection. We also have a button which updates these properties. To simulate "heavy work" we put the thread to sleep before updating the properties.

<DockPanel>
    <TextBlock Text="View Model Property: " DockPanel.Dock="Left"/>
    <TextBlock Text="{Binding ViewModelProperty}" DockPanel.Dock="Right"/>
</DockPanel>
<ListBox Grid.Row="1" ItemsSource="{Binding ViewModelCollection}"/>
<Button Grid.Row="2" Content="Change Property" Width="100" Command="{Binding ChangePropertyCommand}"/>

Here is the view model. Notice the method DoWork(), which is called via a background thread. As discussed, we have two properties – ViewModelProperty and ViewModelCollection. The collection is an ObservableCollection (which implements INotifyCollectionChanged), the view model implements INotifyPropertyChanged, and it inherits from DispatcherObject. The main purpose of this example is to show how a data change from a background thread is passed on to the UI. In the DoWork() method, the change in the property ViewModelProperty is handled automatically, but an addition to the collection is queued onto the UI thread via the Dispatcher object. The key point to note here is that while the WPF runtime takes care of property changed notifications from a background thread, notifications from a change in a collection have to be handled by the programmer.

public class ViewModel : DispatcherObject, INotifyPropertyChanged
{
    public ViewModel()
    {
        // MVVMCommand is a custom ICommand implementation from the demo project.
        ChangePropertyCommand = new MVVMCommand((o) => DoWork(), (o) => DoWorkCanExecute());
        ViewModelCollection = new ObservableCollection<string>();
        ViewModelCollection.CollectionChanged +=
            new System.Collections.Specialized.NotifyCollectionChangedEventHandler(ViewModelCollection_CollectionChanged);
    }

    // Stub handler (its body was omitted in the original listing).
    void ViewModelCollection_CollectionChanged(object sender,
        System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
    {
    }

    public ICommand ChangePropertyCommand { get; set; }

    private string viewModelProperty;
    public string ViewModelProperty
    {
        get { return viewModelProperty; }
        set
        {
            if (value != viewModelProperty)
            {
                viewModelProperty = value;
                OnPropertyChanged("ViewModelProperty");
            }
        }
    }

    private ObservableCollection<string> viewModelCollection;
    public ObservableCollection<string> ViewModelCollection
    {
        get { return viewModelCollection; }
        set
        {
            if (value != viewModelCollection)
            {
                viewModelCollection = value;
            }
        }
    }

    public void DoWork()
    {
        ThreadPool.QueueUserWorkItem((o) =>
        {
            Thread.Sleep(5000);
            // Property change notifications are marshalled to the UI thread by WPF.
            ViewModelProperty = "New VM Property";
            // Collection changes must be queued onto the UI thread explicitly.
            Dispatcher.Invoke(DispatcherPriority.Background,
                (SendOrPostCallback)delegate
                {
                    ViewModelCollection.Add("New Collection Item");
                }, null);
        });
    }

    private bool DoWorkCanExecute()
    {
        return true;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Thread Safety

A talk on threads is never over without discussing thread safety. Consider a resource being used by multiple threads: the resource is shared between flows of control running on different threads. Without coordination, the resource behaves in a non-deterministic way and the results go haywire. That is why we need to write "thread safe" applications, so that a shared resource is only mutated by a single thread at any point in time. The following are ways of implementing thread safety in .NET:
  • Interlocked – The Interlocked class performs an operation atomically. For example, a simple increment is a three-step operation inside the processor (read, modify, write). When multiple threads perform such operations on the same variable, the results can get confusing, because one thread can be preempted after executing the first two steps. Another thread can then execute all three steps. When the first thread resumes execution, it overwrites the value in the variable, and the effect of the operation performed by the second thread is lost. Hence we use the Interlocked class, which performs these operations atomically, making them thread safe. E.g.: Increment, Decrement, Add, Read, Exchange, CompareExchange.
    int counter = 0;
    System.Threading.Interlocked.Increment(ref counter); // atomic; takes the variable by ref
  • Monitor – The Monitor class is used to lock an object that might otherwise be accessed concurrently by multiple threads.
    if (Monitor.TryEnter(this, 300)) {
        try {
            // code protected by the Monitor here.
        }
        finally {
            Monitor.Exit(this);
        }
    }
    else {
        // Code if the attempt times out.
    }
  • Locks – The C# lock statement is an enhanced wrapper over the Monitor. In other words, it encapsulates the features of the Monitor without having to call Exit explicitly, as is the case with the Monitor. The most popular example is the GetInstance() method of the Singleton class, which can be called by various modules concurrently. Thread safety is implemented by locking that block of code with an object, syncLock. Note that the object used to lock is similar to a real-world key: any code that can reach the object can take the lock. Hence we need to make sure the key (the lock object) is never shared outside the class; it is best kept as a private member.
    static object syncLock = new object();

    // Double-checked locking inside the Singleton's GetInstance().
    public static LoadBalancer GetInstance()
    {
        if (_instance == null)
        {
            lock (syncLock)
            {
                if (_instance == null)
                {
                    _instance = new LoadBalancer();
                }
            }
        }
        return _instance;
    }
  • Reader-Writer Lock – The lock can be acquired by an unlimited number of concurrent readers, or exclusively by a single writer. This can provide better performance than a Monitor if most accesses are reads while writes are infrequent and of short duration. At any point in time, readers and writers queue up separately. When a writer thread holds the lock, readers queue up and wait for the writer to finish. When readers hold the lock, all writing threads queue up separately. Readers and writers alternate to get the job done. The code below explains in detail. We have two methods – ReadFromCollection and WriteToCollection – to read from and write to a collection respectively. Note the use of the methods AcquireReaderLock and AcquireWriterLock, which block the calling thread until the lock is free (here with a 5000 ms timeout).
    // Shared state used by the reader/writer methods below
    // (these declarations were omitted in the original listing).
    static ReaderWriterLock rwLock = new ReaderWriterLock();
    static List<int> myCollection = new List<int>();

    static void Main(string[] args)
    {
        // Thread 1 writing
        new Thread(new ThreadStart(() =>
        {
            WriteToCollection(new int[] { 1, 2, 3 });
        })).Start();

        // Thread 2 reading
        new Thread(new ThreadStart(() =>
        {
            ReadFromCollection();
        })).Start();

        // Thread 3 writing
        new Thread(new ThreadStart(() =>
        {
            WriteToCollection(new int[] { 4, 5, 6 });
        })).Start();

        // Thread 4 reading
        new Thread(new ThreadStart(() =>
        {
            ReadFromCollection();
        })).Start();

        Console.ReadLine();
    }

    static void ReadFromCollection()
    {
        rwLock.AcquireReaderLock(5000);
        try
        {
            Console.WriteLine("Read Lock acquired by thread : {0} @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
            Console.Write("Collection : ");
            foreach (int item in myCollection)
            {
                Console.Write(item + ", ");
            }
            Console.Write("\n");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception : " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Read Lock released by thread : {0} @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
            rwLock.ReleaseReaderLock();
        }
    }

    static void WriteToCollection(int[] num)
    {
        rwLock.AcquireWriterLock(5000);
        try
        {
            Console.WriteLine("Write Lock acquired by thread : {0} @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
            myCollection.AddRange(num);
            Console.WriteLine("Written to collection ............: {0}", DateTime.Now.ToString("hh:mm:ss"));
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception : " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Write Lock released by thread : {0} @ {1}", Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString("hh:mm:ss"));
            rwLock.ReleaseWriterLock();
        }
    }
  • Mutex – A Mutex can be shared across processes, so it is used to synchronize resources at the Operating System level. A classic example is detecting whether another instance of the same application is already running (see the sketch after this list).
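
As a minimal sketch of that single-instance check (the mutex name "Demo.Threading.SingleInstance" is an arbitrary assumption; any unique, application-specific name works):

static void Main(string[] args)
{
    bool createdNew;
    // A named mutex is visible across processes on the same machine,
    // so a second copy of the application sees the same mutex.
    using (System.Threading.Mutex mutex = new System.Threading.Mutex(true, "Demo.Threading.SingleInstance", out createdNew))
    {
        if (!createdNew)
        {
            Console.WriteLine("Another instance is already running. Exiting.");
            return;
        }
        Console.WriteLine("First instance running. Press Enter to exit.");
        Console.ReadLine();
    }
}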

There are other ways of implementing thread safety. Please refer to MSDN for further information.

Deadlock

A discussion on how to create a thread safe application can never be complete without touching on the concept of deadlocks. Let’s look at what that is.

A deadlock is a situation where two or more threads each hold a lock that the other needs, each waiting for the other to let go. Such a situation results in the operation being stuck indefinitely. Deadlocks can be avoided by careful programming. Example:
  • Thread A locks object A
  • Thread A locks object B
  • Thread B locks object B
  • Thread B locks object A

Thread A waits for Thread B to release object B, and Thread B waits for Thread A to release object A. Consider the example below, where we have a class DeadLock with two methods, OperationA and OperationB, that take nested locks on two objects in opposite order. We get a deadlock when we fire two threads running OperationA and OperationB simultaneously.

public class DeadLock
{
  static object lockA = new object();
  static object lockB = new object();

  public void OperationA()
  {
    lock (lockA)
    {
      Console.WriteLine("Thread {0} has locked Object A", Thread.CurrentThread.ManagedThreadId);
      lock (lockB)
      {
        Console.WriteLine("Thread {0} has locked Object B", Thread.CurrentThread.ManagedThreadId);
      }
      Console.WriteLine("Thread {0} has released Object B", Thread.CurrentThread.ManagedThreadId);
    }
    Console.WriteLine("Thread {0} has released Object A", Thread.CurrentThread.ManagedThreadId);
  }

  public void OperationB()
  {
    lock (lockB)
    {
      Console.WriteLine("Thread {0} has locked Object B", Thread.CurrentThread.ManagedThreadId);
      lock (lockA)
      {
        Console.WriteLine("Thread {0} has locked Object A", Thread.CurrentThread.ManagedThreadId);
      }
      Console.WriteLine("Thread {0} has released Object A", Thread.CurrentThread.ManagedThreadId);
    }
    Console.WriteLine("Thread {0} has released Object B", Thread.CurrentThread.ManagedThreadId);
  }
}

DeadLock deadLock = new DeadLock();

Thread tA = new Thread(new ThreadStart(deadLock.OperationA));
Thread tB = new Thread(new ThreadStart(deadLock.OperationB));

Console.WriteLine("Starting Thread A");
tA.Start();

Console.WriteLine("Starting Thread B");
tB.Start();

Worker Threads vs I/O Threads

The Operating System has only one concept of threads, which is what it uses to run various processes. But the .NET CLR has abstracted out a layer for us where we deal with two types of threads – worker threads and I/O threads. The method ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads) shows us the number of each of these threads available.

While coding, the heavy tasks in our applications should be classified into two categories – compute-bound and I/O-bound operations. A compute-bound operation is one where the CPU does heavy computation, like producing search results or running complex algorithms. I/O-bound operations are those which utilize the system I/O hardware or network drives, for example reading and writing a file, fetching data from a database or querying a remote web server. Compute-bound operations should be delegated to worker threads, and I/O-bound operations to I/O threads. When we queue items in a ThreadPool, we are delegating items to the worker threads. If we use a worker thread to perform an I/O-bound operation, the thread remains blocked while the device driver performs that operation, and a blocked thread is a wasted resource. On the other hand, if we use an I/O thread for the same task, the calling thread delegates the task to the device driver and returns to the thread pool. When the operation completes, a thread from the pool is notified and handles the task completion. The advantage is that threads remain unblocked to handle other tasks, because when an I/O operation is initiated, the calling thread only delegates the task to the part of the OS which handles the device drivers; there is no reason for the thread to remain blocked until the task is completed. In the .NET class library, the Asynchronous Programming Model on certain types uses I/O threads, for example BeginRead() and EndRead() in the FileStream class. As a rule of thumb, all methods named BeginXXX and EndXXX fall into this category. A sketch follows below.
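
To make this concrete, here is a minimal sketch combining both ideas: we query the pool counts, then start an asynchronous file read via the APM so the calling thread is never blocked ("data.bin" is a placeholder path, and the 4096-byte buffer is an arbitrary choice):

int workerThreads, ioThreads;
System.Threading.ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
Console.WriteLine("Available worker threads: {0}, I/O threads: {1}", workerThreads, ioThreads);

// Open the file for asynchronous access and start an APM-style read.
System.IO.FileStream fs = new System.IO.FileStream("data.bin",
    System.IO.FileMode.Open, System.IO.FileAccess.Read,
    System.IO.FileShare.Read, 4096, true /* useAsync */);
byte[] buffer = new byte[4096];
fs.BeginRead(buffer, 0, buffer.Length, (IAsyncResult ar) =>
{
    int bytesRead = fs.EndRead(ar); // completion is dispatched to an I/O thread
    Console.WriteLine("Read {0} bytes asynchronously", bytesRead);
    fs.Close();
}, null);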

Summary

“With great power comes great responsibility” – ThreadPool

  1. No application should ever run heavy tasks on the UI thread. There is nothing uglier than a frozen UI. Threads should be created to manage the heavy work asynchronously, using thread pools whenever possible.
  2. The UI cannot be updated directly from a non-UI or background thread. Programmers need to delegate that work to the UI thread. This is done using the Invoke method in WinForms, the Dispatcher in WPF, or handled automatically when using a BackgroundWorker.
  3. Threads are expensive resources and should be treated with respect. "The more the merrier" is unfortunately not applicable.
  4. Problems in our application will not go away by simply assigning a task to another thread. There is no magic happening, and we need to carefully consider our design and purpose to maximize efficiency.
  5. Creating a thread with the Thread class should be done with caution. Wherever possible a thread pool should be used. It is also not a good idea to fiddle with the priority of a thread, as it may stop other important threads from getting executed.
  6. Setting the IsBackground property to false carelessly can have a catastrophic effect. Foreground threads will not let the application terminate until their task is complete. So if the user wants to terminate an application while a task is running in the background on a thread marked as a foreground thread, the application won't terminate until the task is completed (see the sketch after this list).
  7. Thread synchronization techniques should be carefully implemented when multiple threads share resources in an application. Deadlocks should be avoided through careful coding; nesting locks in inconsistent order should be avoided, as it may result in deadlocks.
  8. Programmers should make sure that we do not end up creating more threads than required. Idle threads only give us overheads and may result in an "Out of Memory" exception.
  9. I/O operations must be delegated to I/O threads rather than worker threads.
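
As a minimal sketch for point 6 (the Sleep merely simulates long-running work):

Thread worker = new Thread(() =>
{
    Thread.Sleep(10000); // simulate a long-running task
});

// Background threads do not keep the process alive. If IsBackground were
// left false (the default), the application could not exit until this
// task finished.
worker.IsBackground = true;
worker.Start();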


from: http://www.codeproject.com/Articles/212377/Multithreading-Demystified
Chinese version: http://www.cnblogs.com/lazycoding/archive/2013/02/06/2904918.html