All posts by dotte

Microservice Deployment: Comparing Blue-Green Deployment, Rolling Deployment, Canary Release, and Other Strategies

As a project iterates, "going live" is unavoidable. Going live means deploying, or redeploying; deploying means change; and change means risk.

There are many deployment techniques in use today. Some are simple, some are complex; some require downtime, others can be completed without taking the system offline. In this article I briefly discuss several currently popular deployment approaches, or strategies. Corrections and suggestions are welcome ^_^.

Blue/Green Deployment

Blue/green deployment requires no downtime and carries relatively low risk.

(1) Deploy version 1 of the application (the initial state).

All external request traffic is routed to this version.

(2) Deploy version 2 of the application.

Version 2 contains code that differs from version 1 (new features, bug fixes, etc.).

(3) Switch traffic from version 1 to version 2.

(4) If version 2 tests out fine, remove the resources version 1 is using (e.g. its instances) and use version 2 from then on.

As this process shows, the application stays online throughout the deployment. Moreover, bringing up the new version does not modify anything in the old version, so the old version's state is unaffected during deployment. The risk is therefore small, and as long as the old version's resources are not deleted, we can in theory roll back to the old version at any time.

Rolling Update (Rolling Release)

In a rolling release, one or more servers are typically taken out of service, updated, and put back into service. This repeats until every instance in the cluster has been updated to the new version.

Compared with blue/green deployment, this approach saves resources: it does not require running two clusters, i.e. twice the number of instances. We can deploy in parts, for example upgrading only 20% of the cluster at a time.

This approach also has quite a few drawbacks, for example:

(1) There is no environment that is known to be good. With blue/green deployment we know clearly that the old version is OK, whereas with a rolling release we cannot be sure.

(2) It modifies the existing environment.

(3) Rolling back is difficult. For example, suppose a release needs to update 100 instances, 10 at a time, with each batch taking 5 minutes. When the rollout reaches the 80th instance, a problem is found and a rollback is needed. At this point a short-tempered developer may well want to flip the table, because rolling back is a painful and lengthy process.

(4) Sometimes the system also scales dynamically. If it auto-scales up or down during the deployment, we additionally have to work out which node is running which version of the code. Even with automated operations tools, this is nerve-racking.

This is not to say that rolling releases are bad; there are scenarios where a rolling release is a very good fit.

Gray Release / Canary Deployment

First, a definition from Baidu Baike:
A gray release is a release approach that transitions smoothly between "black" and "white". A/B testing is one form of gray release: some users keep using A while others start using B; if users have no objections to B, the rollout is gradually widened until all users have been migrated to B. A gray release keeps the overall system stable, and problems can be discovered and fixed during the initial gray phase so that their impact stays limited.
Many people conflate gray releases with blue/green deployment. In my view, the closest analogue to a gray release is canary deployment.

Canary deployment is a type of incremental release: while the existing production version of the software remains available, a new version is deployed alongside it. Running multiple versions of the same software product at the same time requires the software to be specially designed for configuration management and fully automated deployment.

Let's look at the steps of a canary deployment:

(1) Prepare the artifacts for each stage of the deployment, including build artifacts, test scripts, configuration files, and deployment manifests.

(2) Remove the "canary" server from the load balancer list.

(3) Upgrade the "canary" application (drain the existing traffic and deploy).

(4) Run automated tests against the application.

(5) Add the "canary" server back to the load balancer list (connectivity and health checks).

(6) If the "canary" passes live usage testing, upgrade the remaining servers (otherwise roll back).

In a gray release, routing weights are often set per user: for example, 90% of users stay on the old version while 10% try the new one. Multiple versions of the application coexist, and this is often combined with A/B testing to evaluate several alternatives. A typical example of a gray release is Alibaba Cloud's "new version": when you click "enter the new version", you become the canary. A minimal sketch of such weighted routing is shown below.
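As a rough illustration of the 90/10 routing weight described above, here is a minimal Java sketch; the version names, the weight constant and the CanaryRouter class are hypothetical and do not correspond to any particular gateway product:

public class CanaryRouter {

    private static final int NEW_VERSION_PERCENT = 10; // 10% of users are routed to the canary

    // Hash the user id into a stable bucket from 0 to 99, so each user always gets the same version.
    public String routeRequest(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < NEW_VERSION_PERCENT ? "service-v2" : "service-v1";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter();
        for (int i = 0; i < 10; i++) {
            System.out.println("user-" + i + " -> " + router.routeRequest("user-" + i));
        }
    }
}

Hashing a stable user identifier keeps the routing sticky, so the same user always lands on the same version; drawing a random number per request would spread traffic at the same ratio but without stickiness.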

Trivia: the origin of the "canary" in canary deployment (and likewise canary testing). In the 17th century, British miners discovered that canaries are extremely sensitive to firedamp. Even a trace of the gas in the air makes a canary stop singing, and by the time the concentration exceeds a certain level, the canary will already have died of poisoning while the less sensitive humans notice nothing. Given the crude mining equipment of the time, miners took a canary down the shaft on every trip as a "gas detector" so they could evacuate quickly in a dangerous situation.

Summary

(1) Blue/green deployment: keep the old version running, stand up a separate new version alongside it, and once testing shows the new version is OK, remove the old version.

(2) Rolling release: stop old-version instances and start new-version instances batch by batch.

(3) Gray release / canary deployment: keep the old version running, stand up a separate new version, and set routing weights per user, for example 90% of users stay on the old version while 10% try the new one. Multiple versions coexist, and this is often combined with A/B testing to evaluate several alternatives.

References

(1) Blue-green Deployments, A/B Testing, and Canary Releases (illustrated, highly recommended): http://blog.christianposta.com/deploy/blue-green-deployments-a-b-testing-and-canary-releases/

(2) Martin Fowler, BlueGreenDeployment (highly recommended): https://martinfowler.com/bliki/BlueGreenDeployment.html

(3) Using Canary Deployments to Test in Production: http://www.infoq.com/cn/news/2013/03/canary-release-improve-quality

(4) Using Blue-Green Deployment to Reduce Downtime and Risk (Cloud Foundry): http://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html

(5) Marathon: Blue-Green Deployment: https://mesosphere.github.io/marathon/docs/blue-green-deploy.html (Chinese translation: http://blog.csdn.net/zhuchuangang/article/details/51064974)

(6) Microservices Are Not a Free Lunch: http://blog.csdn.net/phodal/article/details/27098005

(7) The Full Blue-Green Release Process: http://www.tuicool.com/articles/2Iji2ue

Java concurrency (multi-threading)

This article describes how to do concurrent programming with Java. It covers the concepts of parallel programming, immutability, threads, the executor framework (thread pools), futures, callables, CompletableFuture, and the fork-join framework.

1. Concurrency

1.1. What is concurrency?

Concurrency is the ability to run several programs or several parts of a program in parallel. If a time consuming task can be performed asynchronously or in parallel, this improves the throughput and the interactivity of the program.

A modern computer has several CPUs or several cores within one CPU. The ability to leverage these cores can be the key to a successful high-volume application.

1.2. Process vs. threads

A process runs independently and isolated from other processes. It cannot directly access shared data in other processes. The resources of the process, e.g. memory and CPU time, are allocated to it via the operating system.

A thread is a so-called lightweight process. It has its own call stack, but can access shared data of other threads in the same process. Every thread has its own memory cache. If a thread reads shared data, it stores this data in its own memory cache. A thread can re-read the shared data.

A Java application runs by default in one process. Within a Java application you work with several threads to achieve parallel processing or asynchronous behavior.

2. Improvements and issues with concurrency

2.1. Limits of concurrency gains

Within a Java application you work with several threads to achieve parallel processing or asynchronous behavior. Concurrency promises to perform certain tasks faster, as these tasks can be divided into subtasks and these subtasks can be executed in parallel. Of course the runtime is limited by the parts of the task which cannot be performed in parallel.

The theoretical possible performance gain can be calculated by the following rule which is referred to as Amdahl’s Law.

If F is the fraction of the program which cannot run in parallel and N is the number of processors, then the maximum performance gain is 1 / (F + ((1 - F) / N)).
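As a quick illustration of this rule (not part of the original tutorial), the following sketch computes the theoretical maximum speedup for a serial fraction of 10% and several processor counts:

public class AmdahlExample {
    // Maximum theoretical speedup for serial fraction f and n processors: 1 / (f + (1 - f) / n)
    static double maxSpeedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }

    public static void main(String[] args) {
        double f = 0.1; // 10% of the program cannot be parallelized
        for (int n : new int[] { 1, 2, 4, 8, 16 }) {
            System.out.printf("n=%d -> speedup %.2f%n", n, maxSpeedup(f, n));
        }
    }
}

Even with 16 processors, a 10% serial fraction caps the speedup at well below 16x, which is why the serial portion dominates scalability.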

2.2. Concurrency issues

Threads have their own call stack, but can also access shared data. Therefore you have two basic problems, visibility and access problems.

A visibility problem occurs if thread A reads shared data which is later changed by thread B and thread A is unaware of this change.

An access problem can occur if several threads access and change the same shared data at the same time.

Visibility and access problems can lead to

  • Liveness failure: The program does not react anymore due to problems in the concurrent access of data, e.g. deadlocks.
  • Safety failure: The program creates incorrect data.

3. Concurrency in Java

3.1. Processes and Threads

A Java program runs in its own process and by default in one thread. Java supports threads as part of the Java language via the Thread class. The Java application can create new threads via this class.

Java 1.5 also provides improved support for concurrency with the java.util.concurrent package.

3.2. Locks and thread synchronization

Java provides locks to protect certain parts of the code from being executed by several threads at the same time. The simplest way of locking a certain method or Java class is to define the method or class with the synchronized keyword.

The synchronized keyword in Java ensures:

  • that only a single thread can execute a block of code at the same time
  • that each thread entering a synchronized block of code sees the effects of all previous modifications that were guarded by the same lock

Synchronization is necessary for mutually exclusive access to blocks of code and for reliable communication between threads.

You can use the synchronized keyword for the definition of a method. This ensures that only one thread can enter this method at a time. Another thread which calls this method would wait until the first thread leaves it.

public synchronized void critical() {
    // some thread critical stuff
    // here
}

You can also use the synchronized keyword to protect blocks of code within a method. This block is guarded by a key, which can be either a string or an object. This key is called the lock.

All code which is protected by the same lock can only be executed by one thread at the same time.

For example, the following data structure ensures that only one thread at a time can execute the inner block of the add() and next() methods.

package de.vogella.pagerank.crawler;

import java.util.ArrayList;
import java.util.List;

/**
 * Data structure for a web crawler. Keeps track of the visited sites and keeps
 * a list of sites which still need to be crawled.
 *
 * @author Lars Vogel
 *
 */
public class CrawledSites {
    private List<String> crawledSites = new ArrayList<String>();
    private List<String> linkedSites = new ArrayList<String>();

    public void add(String site) {
        synchronized (this) {
            if (!crawledSites.contains(site)) {
                linkedSites.add(site);
            }
        }
    }

    /**
     * Get next site to crawl. Can return null (if nothing to crawl)
     */
    public String next() {
        if (linkedSites.size() == 0) {
            return null;
        }
        synchronized (this) {
            // Need to check again if size has changed
            if (linkedSites.size() > 0) {
                String s = linkedSites.get(0);
                linkedSites.remove(0);
                crawledSites.add(s);
                return s;
            }
            return null;
        }
    }

}

3.3. Volatile

If a variable is declared with the volatile keyword, it is guaranteed that any thread which reads the field will see the most recently written value. The volatile keyword does not provide any mutual exclusion lock on the variable.

As of Java 5, a write to a volatile variable also makes all writes previously made by the same thread (including writes to non-volatile variables) visible to any thread that subsequently reads that volatile variable. This can be used to safely publish the state referenced by a volatile reference variable, e.g. a volatile variable person: construct the object in a temporary variable, use the setters to initialize it, and only then assign the temporary variable to the volatile variable. The new reference and the values it points to then become visible to other threads.
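A minimal sketch of this publication idiom, using a hypothetical Person class that is not part of the tutorial's example code:

public class VolatilePublication {

    static class Person {
        private String name;
        void setName(String name) { this.name = name; }
        String getName() { return name; }
    }

    // The volatile write publishes the fully initialized object to other threads.
    private volatile Person person;

    public void publish() {
        Person temp = new Person();   // build and initialize in a temporary variable
        temp.setName("Alice");
        person = temp;                // volatile write: reference and its state become visible
    }

    public void read() {
        Person p = person;            // volatile read
        if (p != null) {
            System.out.println(p.getName());
        }
    }
}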

4. The Java memory model

4.1. Overview

The Java memory model describes the communication between the memory of the threads and the main memory of the application.

It defines the rules for how changes made to memory by one thread are propagated to other threads.

The Java memory model also defines the situations in which a thread refreshes its own memory from the main memory.

It also describes which operations are atomic and the ordering of the operations.

4.2. Atomic operation

An atomic operation is an operation which is performed as a single unit of work without the possibility of interference from other operations.

The Java language specification guarantees that reading or writing a variable is an atomic operation (unless the variable is of type long or double). Reads and writes of variables of type long or double are only atomic if the variables are declared with the volatile keyword.

Assume i is defined as int. The i++ (increment) operation is not an atomic operation in Java. This also applies to the other numeric types, e.g. long.

The i++ operation first reads the value which is currently stored in i (an atomic operation) and then adds one to it and writes the result back (another atomic operation). But between the read and the write the value of i might have been changed by another thread.

Since Java 1.5 the Java language provides atomic variables, e.g. AtomicInteger or AtomicLong, which provide methods like getAndDecrement(), getAndIncrement() and getAndSet() that are atomic.

4.3. Memory updates in synchronized code

The Java memory model guarantees that each thread entering a synchronized block of code sees the effects of all previous modifications that were guarded by the same lock.

5. Immutability and Defensive Copies

5.1. Immutability

The simplest way to avoid problems with concurrency is to share only immutable data between threads. Immutable data is data which cannot be changed.

To make a class immutable:

  • make all its fields final
  • declare the class itself as final
  • do not let the this reference escape during construction
  • any field which refers to a mutable data object must
      • be private
      • have no setter method
      • never be directly returned or otherwise exposed to a caller
      • not have internal changes to it become visible or take effect outside of the class

An immutable class may use some mutable data internally to manage its state, but from the outside neither the class nor any attribute of the class can be changed.

For all mutable objects, e.g. arrays, that are passed in from the outside during construction, the class needs to make a defensive copy of the elements, to make sure that no caller can still change the data from the outside.
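A minimal sketch of such a defensive copy during construction; the ImmutableHolder class below is hypothetical and not part of the tutorial's example projects:

import java.util.Arrays;

public final class ImmutableHolder {
    private final int[] values;

    public ImmutableHolder(int[] values) {
        // Defensive copy: the caller keeps no reference into our internal state.
        this.values = Arrays.copyOf(values, values.length);
    }

    public int[] getValues() {
        // Return a copy as well, so callers cannot modify the internal array.
        return Arrays.copyOf(values, values.length);
    }
}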

5.2. Defensive Copies

You must protect your classes from calling code. Assume that calling code will do its best to change your data in a way you did not expect. While this is especially important for immutable data, it also applies to mutable data which you still do not expect to be changed from outside your class.

To protect your class against that you should copy data you receive and only return copies of data to calling code.

The following example wraps the internal list (an ArrayList) in an unmodifiable view and returns only this view. This way the client of this class cannot remove elements from the list. (To hand out a real copy you could instead return new ArrayList<String>(list), at the cost of copying the elements.)

package de.vogella.performance.defensivecopy;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MyDataStructure {
    List<String> list = new ArrayList<String>();

    public void add(String s) {
        list.add(s);
    }

    /**
     * Returns an unmodifiable view of the list.
     * This way the caller cannot modify the list itself.
     *
     * @return List
     */
    public List<String> getList() {
        return Collections.unmodifiableList(list);
    }
}

6. Threads in Java

The basic means for concurrency is the java.lang.Thread class. A Thread executes an object of type java.lang.Runnable.

Runnable is an interface which defines the run() method. This method is called by the Thread object and contains the work which should be done. Therefore the “Runnable” is the task to perform. The Thread is the worker which performs this task.

The following demonstrates a task (Runnable) which counts the sum of a given range of numbers. Create a new Java project called de.vogella.concurrency.threads for the example code of this section.

package de.vogella.concurrency.threads;

/**
 * MyRunnable will count the sum of the numbers from 1 to the parameter
 * countUntil and then write the result to the console.
 *
 * MyRunnable is the task which will be performed.
 *
 * @author Lars Vogel
 */
public class MyRunnable implements Runnable {
    private final long countUntil;

    MyRunnable(long countUntil) {
        this.countUntil = countUntil;
    }

    @Override
    public void run() {
        long sum = 0;
        for (long i = 1; i < countUntil; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

The following example demonstrates the usage of the Thread class and the Runnable interface.

package de.vogella.concurrency.threads;

import java.util.ArrayList;
import java.util.List;

public class Main {

    public static void main(String[] args) {
        // We will store the threads so that we can check if they are done
        List<Thread> threads = new ArrayList<Thread>();
        // We will create 500 threads
        for (int i = 0; i < 500; i++) {
            Runnable task = new MyRunnable(10000000L + i);
            Thread worker = new Thread(task);
            // We can set the name of the thread
            worker.setName(String.valueOf(i));
            // Start the thread; never call the run() method directly
            worker.start();
            // Remember the thread for later usage
            threads.add(worker);
        }
        int running = 0;
        do {
            running = 0;
            for (Thread thread : threads) {
                if (thread.isAlive()) {
                    running++;
                }
            }
            System.out.println("We have " + running + " running threads. ");
        } while (running > 0);

    }
}

Using the Thread class directly has the following disadvantages.

  • Creating a new thread causes some performance overhead.
  • Too many threads can lead to reduced performance, as the CPU needs to switch between these threads.
  • You cannot easily control the number of threads, therefore you may run into out of memory errors due to too many threads.

The java.util.concurrent package offers improved support for concurrency compared to the direct usage of Threads. This package is described in the next section.

7. Thread pools with the Executor Framework

You can find these examples in the source section in the Java project called de.vogella.concurrency.threadpools.

Thread pools manage a pool of worker threads. A thread pool contains a work queue which holds tasks waiting to get executed.

A thread pool can be described as a collection of Runnable objects (the work queue) and a collection of running threads. These threads run constantly and check the work queue for new work. If there is new work to be done they execute this Runnable. The Executor interface provides the method execute(Runnable r) to add a new Runnable object to the work queue.

The Executor framework provides implementations of the java.util.concurrent.Executor interface, e.g. Executors.newFixedThreadPool(int n), which creates a pool of n worker threads. The ExecutorService interface adds life cycle methods to Executor, which allow you to shut down the executor and to wait for termination.

If you want to use one thread pool with one thread which executes several runnables you can use the Executors.newSingleThreadExecutor() method.

Create again the Runnable.

package de.vogella.concurrency.threadpools;

/**
 * MyRunnable will count the sum of the numbers from 1 to the parameter
 * countUntil and then write the result to the console.
 *
 * MyRunnable is the task which will be performed.
 *
 * @author Lars Vogel
 */
public class MyRunnable implements Runnable {
    private final long countUntil;

    MyRunnable(long countUntil) {
        this.countUntil = countUntil;
    }

    @Override
    public void run() {
        long sum = 0;
        for (long i = 1; i < countUntil; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

Now you run your runnables with the executor framework.

package de.vogella.concurrency.threadpools;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {
    private static final int NTHREADS = 10;

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(NTHREADS);
        for (int i = 0; i < 500; i++) {
            Runnable worker = new MyRunnable(10000000L + i);
            executor.execute(worker);
        }
        // This will make the executor accept no new tasks
        // and finish all existing tasks in the queue
        executor.shutdown();
        // Wait until all tasks have finished
        executor.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
        System.out.println("Finished all threads");
    }
}

In case the threads should return some value (result-bearing threads) then you can use the java.util.concurrent.Callable interface.

8. Futures and Callables

8.1. Futures and Callables

The executor framework presented in the last chapter uses Runnable objects. Unfortunately a Runnable cannot return a result to the caller.

In case you expect your threads to return a computed result you can use java.util.concurrent.Callable. A Callable allows you to return a value after completion.

The Callable object uses generics to define the type of object which is returned.

If you submit a Callable object to an Executor, the framework returns an object of type java.util.concurrent.Future. Future exposes methods allowing a client to monitor the progress of a task being executed by a different thread. Therefore, a Future object can be used to check the status of a Callable. It can also be used to retrieve the result from the Callable.

On the ExecutorService you can use the submit method to submit a Callable and get back a Future. To retrieve the result of the future use its get() method.

package de.vogella.concurrency.callables;

import java.util.concurrent.Callable;

public class MyCallable implements Callable<Long> {
    @Override
    public Long call() throws Exception {
        long sum = 0;
        for (long i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}
package de.vogella.concurrency.callables;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableFutures {
    private static final int NTHREDS = 10;

    public static void main(String[] args) {

        ExecutorService executor = Executors.newFixedThreadPool(NTHREDS);
        List<Future<Long>> list = new ArrayList<Future<Long>>();
        for (int i = 0; i < 20000; i++) {
            Callable<Long> worker = new MyCallable();
            Future<Long> submit = executor.submit(worker);
            list.add(submit);
        }
        long sum = 0;
        System.out.println(list.size());
        // now retrieve the result
        for (Future<Long> future : list) {
            try {
                sum += future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
        System.out.println(sum);
        executor.shutdown();
    }
}

8.2. Drawbacks with Futures and Callables

The Future interface is limited as a model of asynchronously executed tasks. Future allows a client to query a Callable task for its result, but it does not provide the option to register a callback method, which would let you be notified as soon as a task is done. In Java 5 you could use ExecutorCompletionService for this purpose, but as of Java 8 you can use the CompletableFuture class, which lets you attach callbacks that are invoked once a task is completed.
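For completeness, here is a minimal, hypothetical sketch of the ExecutorCompletionService approach mentioned above; the task bodies and pool size are made up for illustration:

import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionServiceExample {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        CompletionService<Long> service = new ExecutorCompletionService<>(executor);

        // Submit a few tasks; results become available in completion order.
        for (int i = 1; i <= 5; i++) {
            final long n = i * 1000L;
            service.submit(() -> {
                long sum = 0;
                for (long j = 1; j <= n; j++) {
                    sum += j;
                }
                return sum;
            });
        }

        // take() blocks until the next task has completed.
        for (int i = 0; i < 5; i++) {
            System.out.println(service.take().get());
        }
        executor.shutdown();
    }
}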

9. CompletableFuture

Asynchronous task handling is important for any application which performs time consuming activities, such as IO operations. Two basic approaches to asynchronous task handling are available to a Java application:

  • the application logic blocks until the task completes
  • the application logic is called once the task completes; this is called a nonblocking approach

CompletableFuture extends the functionality of the Future interface for asynchronous calls. It also implements the CompletionStage interface. CompletionStage offers methods that let you attach callbacks that will be executed on completion.

It adds standard techniques for executing application code when a task completes, including various ways to combine tasks. CompletableFuture supports both blocking and nonblocking approaches, including regular callbacks.

This callback can be executed in a different thread than the thread in which the CompletableFuture task itself runs.

The following example demonstrates how to create a basic CompletableFuture.

CompletableFuture.supplyAsync(this::doSomething);

CompletableFuture.supplyAsync runs the task asynchronously on the default thread pool of Java (the common ForkJoinPool). An overload of supplyAsync lets you supply a custom executor to define the thread pool.
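A minimal sketch of the overload that takes a custom executor (the pool size here is arbitrary):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SupplyAsyncWithExecutor {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // The second argument tells supplyAsync which executor to run the task on.
        CompletableFuture<Integer> future =
                CompletableFuture.supplyAsync(() -> 20, pool);

        System.out.println(future.get());
        pool.shutdown();
    }
}

The larger snippet below shows the same basic supplyAsync usage together with retrieving the result.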

package snippet;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class CompletableFutureSimpleSnippet {
    public static void main(String[] args) {
        long started = System.currentTimeMillis();

        // configure CompletableFuture
        CompletableFuture<Integer> futureCount = createCompletableFuture();

        // continue to do other work
        System.out.println("Took " + (System.currentTimeMillis() - started) + " milliseconds");

        // now its time to get the result
        try {
            int count = futureCount.get();
            System.out.println("CompletableFuture took " + (System.currentTimeMillis() - started) + " milliseconds");
            System.out.println("Result " + count);
        } catch (InterruptedException | ExecutionException ex) {
            // Exceptions from the future should be handled here
        }
    }

    private static CompletableFuture<Integer> createCompletableFuture() {
        CompletableFuture<Integer> futureCount = CompletableFuture.supplyAsync(
                () -> {
                    try {
                        // simulate long running task
                        Thread.sleep(5000);
                    } catch (InterruptedException e) { }
                    return 20;
                });
        return futureCount;
    }

}

The usage of the thenApply method is demonstrated by the following code snippet.

package snippet;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class CompletableFutureCallback {
    public static void main(String[] args) {
        long started = System.currentTimeMillis();

        CompletableFuture<String> data = createCompletableFuture()
                .thenApply((Integer count) -> {
                    int transformedValue = count * 10;
                    return transformedValue;
                }).thenApply(transformed -> "Finally creates a string: " + transformed);

        try {
            System.out.println(data.get());
        } catch (InterruptedException | ExecutionException e) {
        }
    }

    public static CompletableFuture<Integer> createCompletableFuture() {
        CompletableFuture<Integer>  result = CompletableFuture.supplyAsync(() -> {
            try {
                // simulate long running task
                Thread.sleep(5000);
            } catch (InterruptedException e) { }
            return 20;
        });
        return result;
    }

}

10. Nonblocking algorithms

Java 5.0 provides support for additional atomic operations. This allows you to develop non-blocking algorithms, i.e. algorithms which do not require synchronization but are based on low-level atomic hardware primitives such as compare-and-swap (CAS). A compare-and-swap operation checks whether a variable still holds an expected value and, only if it does, sets it to a new value.

Non-blocking algorithms are typically faster than blocking algorithms, as the synchronization between threads happens on a much finer level (in hardware).

For example, the following creates a non-blocking counter which always increases. This example is contained in the project called de.vogella.concurrency.nonblocking.counter.

package de.vogella.concurrency.nonblocking.counter;

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private AtomicInteger value = new AtomicInteger();
    public int getValue(){
        return value.get();
    }
    public int increment(){
        return value.incrementAndGet();
    }

    // Alternative implementation of increment which just makes
    // the underlying CAS loop explicit
    public int incrementLongVersion(){
        int oldValue = value.get();
        while (!value.compareAndSet(oldValue, oldValue+1)){
             oldValue = value.get();
        }
        return oldValue+1;
    }

}

And a test.

package de.vogella.concurrency.nonblocking.counter;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Test {
        private static final int NTHREDS = 10;

        public static void main(String[] args) {
            final Counter counter = new Counter();
            List<Future<Integer>> list = new ArrayList<Future<Integer>>();

            ExecutorService executor = Executors.newFixedThreadPool(NTHREDS);
            for (int i = 0; i < 500; i++) {
                Callable<Integer> worker = new  Callable<Integer>() {
                    @Override
                    public Integer call() throws Exception {
                        int number = counter.increment();
                        System.out.println(number );
                        return number ;
                    }
                };
                Future<Integer> submit= executor.submit(worker);
                list.add(submit);

            }


            // This will make the executor accept no new threads
            // and finish all existing threads in the queue
            executor.shutdown();
            // Wait until all threads are finished
            while (!executor.isTerminated()) {
            }
            Set<Integer> set = new HashSet<Integer>();
            for (Future<Integer> future : list) {
                try {
                    set.add(future.get());
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } catch (ExecutionException e) {
                    e.printStackTrace();
                }
            }
            if (list.size()!=set.size()){
                throw new RuntimeException("Double-entries!!!");
            }

        }


}

The interesting part is how the incrementAndGet() method is implemented. It uses a CAS operation.

public final int incrementAndGet() {
        for (;;) {
            int current = get();
            int next = current + 1;
            if (compareAndSet(current, next))
                return next;
        }
    }

The JDK itself makes more and more use of non-blocking algorithms to increase performance for every developer. Developing correct non-blocking algorithms is not a trivial task.

For more information on non-blocking algorithms, e.g. examples of a non-blocking stack and a non-blocking linked list, please see http://www.ibm.com/developerworks/java/library/j-jtp04186/index.html

11. Fork-Join in Java 7

Java 7 introduced a new parallel mechanism for compute intensive tasks, the fork-join framework. The fork-join framework allows you to distribute a certain task over several workers and then wait for the result.

For Java 6.0 you can download the package (jsr166y) from the Download site.

For testing create the Java project “de.vogella.performance.forkjoin”. If you are not using Java 7 you also need to add jsr166y.jar to the classpath.

First create an algorithm package and then the following class.

package algorithm;

import java.util.Random;

/**
 *
 * This class defines a long list of integers which defines the problem we will
 * later try to solve
 *
 */
public class Problem {
    private final int[] list = new int[2000000];

    public Problem() {
        Random generator = new Random(19580427);
        for (int i = 0; i < list.length; i++) {
            list[i] = generator.nextInt(500000);
        }
    }

    public int[] getList() {
        return list;
    }

}

Now define the Solver class as shown in the following example code.

The API defines other top-level classes, e.g. RecursiveAction and AsyncAction. Check the Javadoc for details.
package algorithm;

import java.util.Arrays;

import jsr166y.forkjoin.RecursiveAction;

public class Solver extends RecursiveAction {
    private int[] list;
    public long result;

    public Solver(int[] array) {
        this.list = array;
    }

    @Override
    protected void compute() {
        if (list.length == 1) {
            result = list[0];
        } else {
            int midpoint = list.length / 2;
            int[] l1 = Arrays.copyOfRange(list, 0, midpoint);
            int[] l2 = Arrays.copyOfRange(list, midpoint, list.length);
            Solver s1 = new Solver(l1);
            Solver s2 = new Solver(l2);
            forkJoin(s1, s2);
            result = s1.result + s2.result;
        }
    }
}

Now define a small test class to test its efficiency.

package testing;

import jsr166y.forkjoin.ForkJoinExecutor;
import jsr166y.forkjoin.ForkJoinPool;
import algorithm.Problem;
import algorithm.Solver;

public class Test {

    public static void main(String[] args) {
        Problem test = new Problem();
        // check the number of available processors
        int nThreads = Runtime.getRuntime().availableProcessors();
        System.out.println(nThreads);
        Solver mfj = new Solver(test.getList());
        ForkJoinExecutor pool = new ForkJoinPool(nThreads);
        pool.invoke(mfj);
        long result = mfj.result;
        System.out.println("Done. Result: " + result);
        long sum = 0;
        // check if the result was ok
        for (int i = 0; i < test.getList().length; i++) {
            sum += test.getList()[i];
        }
        System.out.println("Done. Result: " + sum);
    }
}
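If you are on Java 7 or later you can express the same sum with the standard java.util.concurrent fork-join classes instead of the jsr166y preview jar. The following is a minimal sketch under that assumption and is not part of the original example projects:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final int[] list;
    private final int from;
    private final int to;

    public SumTask(int[] list, int from, int to) {
        this.list = list;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += list[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(list, from, mid);
        SumTask right = new SumTask(list, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half, then join the left
    }

    public static void main(String[] args) {
        int[] data = new int[2_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i % 100;
        }
        long result = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println("Done. Result: " + result);
    }
}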

12. Deadlock

A concurrent application has the risk of a deadlock. A set of processes are deadlocked if all processes are waiting for an event which another process in the same set has to cause.

For example, if thread A waits for a lock on object Z which thread B holds, and thread B waits for a lock on object Y which is held by thread A, then these two threads are deadlocked and cannot continue their processing.

This can be compared to a traffic jam, where cars (threads) require access to a certain street (resource) which is currently blocked by another car (lock).

(Figure: Deadlock)
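A minimal sketch of the two-lock scenario described above; the class and lock names are made up for illustration, and running it will normally just hang, which is exactly the deadlock being demonstrated:

public class DeadlockDemo {
    private static final Object lockY = new Object();
    private static final Object lockZ = new Object();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            synchronized (lockY) {
                sleep();                 // give thread B time to grab lockZ
                synchronized (lockZ) {   // waits forever: B holds lockZ
                    System.out.println("A got both locks");
                }
            }
        });
        Thread b = new Thread(() -> {
            synchronized (lockZ) {
                sleep();                 // give thread A time to grab lockY
                synchronized (lockY) {   // waits forever: A holds lockY
                    System.out.println("B got both locks");
                }
            }
        });
        a.start();
        b.start();
    }

    private static void sleep() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}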

Copyright © 2012-2017 vogella GmbH. Free use of the software examples is granted under the terms of the EPL License. This tutorial is published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Germany license.

See Licence.

from:http://www.vogella.com/tutorials/JavaConcurrency/article.html

Using Volatile Variables Correctly

Volatile variables in the Java language can be seen as a "lighter-weight synchronized": compared with synchronized blocks, volatile variables require less code and have lower runtime overhead, but they offer only a subset of what synchronized can do. This article presents several patterns for using volatile variables effectively and highlights several situations where volatile is not appropriate.

Locks provide two main properties: mutual exclusion and visibility. Mutual exclusion means that only one thread at a time may hold a given lock, so it can be used to coordinate access to shared data: only one thread at a time can use the shared data. Visibility is more subtle: it guarantees that changes made to shared data before a lock is released are visible to another thread that subsequently acquires the lock. Without the visibility guarantee provided by synchronization, threads might see stale or inconsistent values of shared variables, which can cause many serious problems.

Volatile variables

Volatile variables have the visibility property of synchronized, but not its atomicity. This means that threads automatically see the most recent value of a volatile variable. Volatile variables can provide thread safety, but only in a very limited set of cases: those with no constraints between multiple variables or between a variable's current value and its future value. So volatile alone is not strong enough to implement a counter, a mutex, or any class with invariants that relate multiple variables (such as "start <= end").

You might choose volatile variables over locks for simplicity or scalability. Some idioms are easier to write and read with volatile variables than with locks. In addition, volatile variables, unlike locks, never block a thread, so they rarely cause scalability problems. In some situations, if reads greatly outnumber writes, volatile variables can also offer a performance advantage over locks.

Conditions for the correct use of volatile variables

You can use volatile variables instead of locks only under a restricted set of circumstances. For volatile variables to provide the desired thread safety, both of the following conditions must be met:

  • Writes to the variable do not depend on its current value.
  • The variable does not participate in invariants involving other variables.

In effect, these conditions state that the set of valid values that can be written to a volatile variable is independent of any other program state, including the variable's current state.

The first condition rules out using volatile variables as thread-safe counters. Although the increment operation (x++) looks like a single operation, it is really a compound read-modify-write sequence that must execute atomically, and volatile does not provide the required atomicity. A correct implementation requires that the value of x not change during the operation, which a volatile variable cannot guarantee. (However, if the value is only ever written from a single thread, the first condition can be ignored.)

Most programming situations conflict with one of these two conditions, which is why volatile variables are not as universally applicable as synchronized for achieving thread safety. Listing 1 shows a non-thread-safe number range class. It has an invariant: the lower bound is always less than or equal to the upper bound.

Listing 1. A non-thread-safe number range class
@NotThreadSafe
public class NumberRange {
    private int lower, upper;
    public int getLower() { return lower; }
    public int getUpper() { return upper; }
    public void setLower(int value) {
        if (value > upper)
            throw new IllegalArgumentException(...);
        lower = value;
    }
    public void setUpper(int value) {
        if (value < lower)
            throw new IllegalArgumentException(...);
        upper = value;
    }
}

This constrains the range's state variables, so declaring the lower and upper fields as volatile is not sufficient to make the class thread safe; synchronization is still required. Otherwise, if two threads happen to call setLower and setUpper at the same time with inconsistent values, the range can be left in an inconsistent state. For example, if the initial state is (0, 5) and, at the same moment, thread A calls setLower(4) while thread B calls setUpper(3), both interleaved operations pass the check that guards the invariant, yet the resulting range is (4, 3), which is invalid. To keep the range operations consistent we would need to make setLower() and setUpper() atomic with respect to each other, and declaring the fields as volatile cannot achieve that.

Performance considerations

The primary reason to use volatile variables is simplicity: in some situations, using a volatile variable is much simpler than using the corresponding lock. A secondary reason is performance: in some cases, the volatile synchronization mechanism performs better than locking.

It is hard to make accurate, all-encompassing statements like "X is always faster than Y", especially where JVM-internal operations are concerned. (For example, in some cases the VM may be able to eliminate locking entirely, which makes it hard to compare the cost of volatile and synchronized in the abstract.) That said, on most current processor architectures a volatile read is very cheap, nearly as cheap as a non-volatile read. A volatile write is considerably more expensive than a non-volatile write, because a memory fence is required to guarantee visibility, but even so the total cost of volatile is usually lower than that of acquiring a lock.

Volatile operations never block the way locks do, so in situations where volatile can be used safely it can offer scalability advantages over locking. If reads greatly outnumber writes, volatile variables can usually reduce the performance cost of synchronization compared with locks.

Patterns for using volatile correctly

Many concurrency experts in fact steer users away from volatile variables, because they are easier to misuse than locks. However, if you carefully follow a few well-defined patterns, volatile variables can be used safely in many situations. Always keep the limitation of volatile in mind: use volatile only when the state is truly independent of everything else in the program. That rule keeps these patterns from being stretched into unsafe use cases.

Pattern #1: status flags

Perhaps the canonical use of a volatile variable is a simple boolean status flag that indicates an important one-time event has happened, such as initialization being complete or a shutdown being requested.

Many applications contain a control structure of the form "while we are not yet ready to stop, do some more work", as shown in Listing 2:

Listing 2. Using a volatile variable as a status flag
volatile boolean shutdownRequested;
...
public void shutdown() { shutdownRequested = true; }
public void doWork() {
    while (!shutdownRequested) {
        // do stuff
    }
}

The shutdown() method will most likely be called from outside the loop, i.e. from another thread, so some form of synchronization is needed to ensure that the shutdownRequested flag is visible. (It may be called from a JMX listener, an action listener on the GUI event thread, via RMI, via a web service, and so on.) However, writing the loop with a synchronized block would be considerably more cumbersome than the volatile status flag shown in Listing 2. Because volatile simplifies the code and the status flag does not depend on any other program state, this is a good place to use volatile.

A common property of this kind of status flag is that there is typically only one state transition: shutdownRequested goes from false to true and then the program stops. The pattern can be extended to flags that transition back and forth, but only if it is acceptable for a transition cycle (false to true and back to false) to go unobserved. Otherwise, some atomic state-transition mechanism, such as an atomic variable, is needed.
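As a minimal illustration of such an atomic state-transition mechanism (not part of the original article), an AtomicBoolean makes a false-to-true transition observable by exactly one thread:

import java.util.concurrent.atomic.AtomicBoolean;

public class OneShotStart {
    private final AtomicBoolean started = new AtomicBoolean(false);

    public void startOnce() {
        // compareAndSet succeeds for exactly one caller, even under contention.
        if (started.compareAndSet(false, true)) {
            System.out.println("initializing once");
        }
    }
}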

Pattern #2: one-time safe publication

The visibility failures that come with missing synchronization get trickier when what is being written is an object reference rather than a primitive value. Without synchronization, it is possible to see an up-to-date value of an object reference (written by another thread) together with stale values of that object's state. (This is the root cause of the famous double-checked locking problem, where an object reference is read without synchronization; you may see a fresh reference but still observe a partially constructed object through it.)
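For reference, here is a minimal sketch, not from the original article, of how a volatile reference repairs the double-checked locking idiom on Java 5 and later; the ExpensiveResource class is hypothetical:

public class ResourceHolder {

    static class ExpensiveResource {
        // hypothetical stand-in for something costly to build
    }

    // volatile is what makes the double-checked locking idiom safe on Java 5+
    private static volatile ExpensiveResource instance;

    public static ExpensiveResource getInstance() {
        ExpensiveResource result = instance;   // first (unsynchronized) read
        if (result == null) {
            synchronized (ResourceHolder.class) {
                result = instance;             // re-check under the lock
                if (result == null) {
                    instance = result = new ExpensiveResource();
                }
            }
        }
        return result;
    }
}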

One technique for safely publishing an object is to declare the object reference as volatile. Listing 3 shows an example where a background thread loads some data from a database during startup. Other code that can make use of this data checks whether it has been published before using it.

Listing 3. Using a volatile variable for one-time safe publication
public class BackgroundFloobleLoader {
    public volatile Flooble theFlooble;
    public void initInBackground() {
        // do lots of stuff
        theFlooble = new Flooble();  // this is the only write to theFlooble
    }
}
public class SomeOtherClass {
    public void doWork() {
        while (true) {
            // do some stuff...
            // use the Flooble, but only if it is ready
            if (floobleLoader.theFlooble != null)
                doSomething(floobleLoader.theFlooble);
        }
    }
}

If the theFlooble reference were not volatile, the code in doWork() could see a partially constructed Flooble when it dereferences theFlooble.

A necessary condition for this pattern is that the published object must be either thread-safe or effectively immutable (effectively immutable means its state is never modified after publication). A volatile reference guarantees visibility of the object in its published form, but if the object's state will change after publication, additional synchronization is required.

Pattern #3: independent observations

Another simple pattern for using volatile safely is to periodically "publish" observations for use elsewhere in the program. For example, imagine an environmental sensor that measures the ambient temperature. A background thread might read the sensor every few seconds and update a volatile variable holding the current reading. Other threads can then read that variable and always see the latest temperature.

Another application of this pattern is collecting program statistics. Listing 4 shows how an authentication mechanism can remember the name of the most recently logged-in user. The lastUser reference is used over and over to publish a value for the rest of the program to use.

Listing 4. Using a volatile variable to publish a series of independent observations
public class UserManager {
    public volatile String lastUser;
    public boolean authenticate(String user, String password) {
        boolean valid = passwordIsValid(user, password);
        if (valid) {
            User u = new User();
            activeUsers.add(u);
            lastUser = user;
        }
        return valid;
    }
}

This pattern is an extension of the previous one: a value is published for use elsewhere in the program, but instead of a one-time event it is a series of independent observations. The pattern requires that the published value be effectively immutable, i.e. its state does not change after publication. Code that uses the value must be aware that it may change at any time.

Pattern #4: the "volatile bean" pattern

The volatile bean pattern applies to frameworks that use JavaBeans essentially as glorified structs. In this pattern a JavaBean is used as a container for a set of independent properties with getter and/or setter methods. The rationale for the pattern is that many frameworks provide containers for mutable data holders (for example HttpSession), but the objects placed into those containers must be thread-safe.

In the volatile bean pattern, all the data members of the JavaBean are volatile, and the getters and setters must be trivial: they may contain no logic other than getting or setting the corresponding property. Furthermore, for data members that are object references, the referenced objects must be effectively immutable. (This rules out array-valued properties, because when an array reference is declared volatile only the reference, not the array elements, has volatile semantics.) As with any volatile variable, no invariant or constraint may involve the JavaBean's properties. Listing 5 shows a JavaBean that follows the volatile bean pattern:

Listing 5. A Person object that follows the volatile bean pattern
@ThreadSafe
public class Person {
    private volatile String firstName;
    private volatile String lastName;
    private volatile int age;
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getAge() { return age; }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public void setAge(int age) {
        this.age = age;
    }
}

Advanced volatile patterns

The patterns in the previous sections cover most of the basic use cases in which volatile is useful and straightforward. This section introduces a more advanced pattern in which volatile provides a performance or scalability benefit.

Advanced applications of volatile are fragile. You must carefully verify your assumptions and keep these patterns strictly encapsulated, because even a tiny change can break your code. Also, since the reason to use advanced volatile idioms is performance, make sure you genuinely need the performance benefit before applying them. These patterns are a trade-off, giving up readability or maintainability for a possible performance gain; if you do not need the extra performance (or cannot demonstrate that you need it with a rigorous test program), it is probably a bad trade, because you are likely to give up more than you get back.

Pattern #5: the cheap read-write lock trick

By now you should understand that volatile alone is not strong enough to implement a counter. Because ++x is really a sequence of three operations (read, add, store), updates can be lost if multiple threads happen to increment a volatile counter at the same time.

However, if reads far outnumber writes, you can combine intrinsic locking with a volatile variable to reduce the cost of the common code path. The thread-safe counter in Listing 6 uses synchronized to make the increment operation atomic and uses volatile to guarantee the visibility of the current result. If updates are infrequent, this approach performs better, because the read path involves only a volatile read, which is usually cheaper than even an uncontended lock acquisition.

Listing 6. Combining volatile and synchronized into a "cheap read-write lock"
@ThreadSafe
public class CheesyCounter {
    // Employs the cheap read-write lock trick
    // All mutative operations MUST be done with the 'this' lock held
    @GuardedBy("this") private volatile int value;
    public int getValue() { return value; }
    public synchronized int increment() {
        return value++;
    }
}

This technique is called a "cheap read-write lock" because different synchronization mechanisms are used for reads and for writes. Because the write operation in this example violates the first condition for using volatile, the counter cannot be implemented safely with volatile alone: a lock is required. However, volatile can be used on the read path to ensure visibility of the current value, so the lock is used for all mutating operations and volatile for read-only ones. The lock allows only one thread at a time to modify the value, while volatile allows many threads to read it concurrently, so guarding only the read path with volatile gives a higher degree of sharing than locking the whole code path, just as a read-write lock does. Keep the weakness of this pattern in mind, though: once you go beyond its most basic application, combining these two competing synchronization mechanisms becomes very difficult.

Conclusion

Compared with locks, volatile variables are a very simple but also very fragile synchronization mechanism that in some situations offers better performance and scalability than locking. If you strictly follow the conditions for using volatile, i.e. the variable is truly independent of other variables and of its own previous values, you can sometimes use volatile instead of synchronized to simplify code. However, code that uses volatile tends to be more error-prone than code that uses locks. The patterns presented here cover the most common cases where volatile can replace synchronized. Following them, and not pushing them beyond their limits, will help you cover most such use cases safely and get better performance from volatile variables.


Related topics

  • You can read the original English version of this article on developerWorks.
  • Java Concurrency in Practice: a how-to manual for developing concurrent Java programs, covering how to build and write thread-safe classes and programs, avoid performance pitfalls, manage performance, and test concurrent applications.
  • Going atomic: introduces the atomic variable classes added in Java 5.0, which extend volatile variables with support for atomic state transitions.
  • Introduction to nonblocking algorithms: shows how to implement concurrent algorithms with atomic variables instead of locks.
  • Volatile variable: more information about volatile variables on Wikipedia.
  • The Java technology zone: hundreds of articles about every aspect of Java programming.

from:https://www.ibm.com/developerworks/cn/java/j-jtp06197.html

What are some good blogs about algorithms and technical interviews?

Here is a website on good algorithms and technical interviews.

Website: IDeserve

Youtube channel: IDeserve

Here are some good features I found on this:

1. Many problems are explained with the help of videos which makes it very easy to understand the problem within minutes.

2. The Algorithm Visualization on the page is a great feature for seeing how an algorithm works at runtime for a given example. Normally it takes careful observation to dry run code, but the algorithm visualization given here makes it easier to understand.

For example, check out the visualization given here:

Create a balanced Binary Search Tree from a sorted array

Here are videos on various topics:

Programming Interview Questions

Arrays – YouTube

Strings – YouTube

Trees – YouTube

Linked Lists – YouTube

Dynamic Programming

Graph – YouTube

Here are some of the good problems:

Linked Lists and Stack Problems:

Merge two sorted linked lists

Reverse a Linked List – Recursive

Reverse a Linked List – Iterative

Reverse every alternate k nodes of a Linked List

Find nth Node from the end of a Linked List

Sum of Two Linked Lists using Stacks

Sum of Two Linked Lists using Recursion | Set 1

LRU Cache Implementation

Detect a loop in a linked list and find the node where the loop starts.

Convert a sorted Doubly Linked List to Balanced Binary Search Tree

Convert a binary tree to doubly linked list

Find intersection of two Linked Lists

Find intersection of two Linked Lists – O(m + n) Time Complexity and O(1) Space Complexity

Minimum Stack O(1)

Strings Problems:

Reverse words in a string

Remove spaces from a given string

Longest Substring with non-Repeating Characters

Check balanced parentheses in a string

Postfix Expression Evaluation

Group all anagrams together from a given array of strings | Set 1

First non-repeating character in a string

Find all permutations of a String

Word Break Problem

Subset Sum Problem

Shortest Palindrome

Palindrome Min Cut

Minimum number of trials to reach from source word to destination word

Longest Palindromic Substring

Manacher’s Algorithm

Longest Palindromic Subsequence

Longest Common Substring

Longest Common Subsequence

To print maximum number of As using given four keys.

Finding 10 letter repeated DNA sequences.

Find minimum edit distance between given two strings

Distinct binary strings of length n with no consecutive 1s

The longest prefix suffix array computation in KMP pattern matching algorithm.

The Knuth Morris Pratt algorithm for pattern matching.

Arrays Problems:

Sorting Algorithm – Selection Sort

Sorting Algorithm – Insertion Sort

Sorting Algorithm – Bubble Sort

Merge Sort

Pancake Sorting

Sorting Algorithm – Heap Sort

Rotate an Array

Fibonacci Number

Merge two sorted arrays without using extra space

Maximum subarray sum

Maximum average subarray of size k

Longest Substring with non-Repeating Characters

Leaders in an array

Find Minimum Length Sub Array With Sum K

Binary Search in a Sorted Array

Search a sorted matrix

Re-arrange elements in an array to put positive and negative elements in alternate order

Find the next greater number using same digits

Next greater element in an array

First non-repeating character in a string

Find the ‘n’th most frequent number in array

Find the missing number in the increasing sequence

Find duplicates in an integer array

Find common elements in ‘n’ sorted arrays

Find a Peak Element in an array

Distribute Chocolates Problem

Count frequencies of array elements in range 1 to n

Find all permutations of a String

Find pivot in a sorted rotated array

Find an element in a sorted rotated array

Find element in sorted rotated array without finding pivot

Buy and sell stocks | Part 2

Buy and sell stocks | Part 1

Find index of 0 to replace to get longest continuous sequence of 1s

O(n) time approach to find index of 0 to replace to get longest continuous sequence of 1s

Trapping Rain Water between Towers

The Skyline Problem

Minimum number of coins to make change

Find minimum cost path in a matrix

Find the length of longest increasing subsequence in an array

Longest Increasing Subsequence O(n logn)

Find the length of longest bitonic subsequence in an array

Given an array with all distinct elements, find the length of the longest sub-array which has elements(not in any particular order) that could form a contiguous sequence

Find an integer array corresponding to the string specifying increase-decrease transitions

Gold Mine Problem

Find median of two sorted arrays

Find Majority Element in an Array

0-1 Knapsack Problem

Count all possible decodings of a given digit sequence

Find total number of ways to make change using given set of coins

Find increasing sub-sequence of length three having maximum product

Find increasing sub-sequence of length three having maximum product | Optimized approach

Set Partition Problem | Recursion

Set Partition Problem | Dynamic Programming

Trees and Graphs Problems:

Pre-order Traversal of a Binary Tree

Post-order Traversal of a Binary Tree

In-order Traversal of a Binary Tree

Binary Tree Level Order Traversal

Spiral Level Order Traversal of a Binary Tree | Set 1

Print right view of a binary tree

Print all nodes of a binary tree that do not have sibling

Print all Root to Leaf paths of a Binary Tree

Minimum Depth of a Binary Tree

Print left view of a binary tree

Find sum of all left leaves of a binary tree

Find depth of deepest odd level leaf node

Check whether a binary tree is a full binary tree or not

Check whether a binary tree is complete or not

Check if two nodes are cousins in a Binary tree

Check if two binary trees are identical

Check if all internal nodes of BST have only one child without building tree

Convert the given n-ary tree to its mirror image

Convert a binary tree to its mirror tree

Print top view of a binary tree

Print top view of a binary tree using level order traversal

Print bottom view of a binary tree

Print bottom view of a binary tree using level order traversal

Remove the nodes of binary search tree which are outside the given range

Remove all nodes which lie on path having sum less than k

Remove all the half nodes from a given binary tree

Print binary tree in vertical order

Populate right neighbors for all nodes in a binary tree

Lowest Common Ancestor of two nodes in a Binary Search Tree

In-order Successor of a Node in a Binary Tree

Recover a Binary Search Tree if positions of two nodes are swapped.

Find floor and ceiling of an element from given dataset using binary search tree

Diagonal Sum of a Binary Tree.

Create a balanced Binary Search Tree from a sorted array

Convert a sorted Doubly Linked List to Balanced Binary Search Tree

Convert a binary tree to doubly linked list

Check if a binary tree is balanced or not

Check if a binary tree is a binary search tree

Check if two binary search trees are identical given their array representations | Set 2

Check if two binary search trees are identical given their array representations

Check if a binary tree is sub-tree of another binary tree in time O(n)

Check if a binary tree is sub-tree of another binary tree in space O(1)

Binary Search tree | Insertion and Search

Binary Search tree | Deletion

Check if a given binary tree is symmetric tree or not

Check if the given n-ary tree is symmetric tree or not

Total number of possible Binary Search Trees with ‘n’ keys

Find the size of largest BST in a binary tree

Lowest Common Ancestor of 2 nodes in a Binary Tree

Find height of the binary tree from its parent array representation

Convert binary tree to binary search tree

Construct the binary tree from its parent array representation

Construct binary tree from inorder and preorder traversals

Construct binary tree from inorder and postorder traversals

AVL tree | Basics

AVL tree | Insertion

AVL tree | Deletion

Trie Data Structure | Insert and search

Trie Data Structure | Delete

Pattern matching using Trie

Longest Prefix Matching using Trie

Given a sequence of words, group together all anagrams and print them.

Serialize and Deserialize a binary search tree

Serialize and Deserialize a binary search tree using post order traversal

Breadth first search in a graph

Topological Sorting of a Directed Acyclic Graph.

Minimum number of trials to reach from source word to destination word

Friend Circles Problem – Graph Theory

Dijkstra’s Shortest Path algorithm

Bellman-Ford Algorithm

Dynamic Programming Problems:

Fibonacci Number

Maximum subarray sum

Word Break Problem

Total number of possible Binary Search Trees with ‘n’ keys

Subset Sum Problem

Shortest Palindrome

Palindrome Min Cut

Minimum number of trials to reach from source word to destination word

Minimum number of coins to make change

Find minimum cost path in a matrix

Longest Palindromic Substring

Longest Palindromic Subsequence

Find the length of longest increasing subsequence in an array

Longest Increasing Subsequence O(n logn)

Longest Common Substring

Longest Common Subsequence

Find the length of longest bitonic subsequence in an array

To print maximum number of As using given four keys.

Gold Mine Problem

Find minimum edit distance between given two strings

0-1 Knapsack Problem

Distinct binary strings of length n with no consecutive 1s

Count all possible decodings of a given digit sequence

Find total number of ways to make change using given set of coins

Set Partition Problem | Dynamic Programming

 

Userscripts (Tampermonkey)

What are userscripts?

Userscripts give you more control over your browsing experience. Once installed, they automatically add features to the sites you visit, or make those sites easier and cleaner to use. The userscripts on Greasy Fork are written by other users and published for everyone; you can install them easily and for free.

Step 1: install a userscript manager

Tampermonkey on Chrome

To use userscripts you first need to install a userscript manager. Which userscript managers are available depends on the browser you use.

Violentmonkey

Step 2: install a userscript

The install button on a userscript page

Browse this site to find a userscript you want to try; the site also lists the most popular userscripts.

Once you have found a userscript you want, click the green install button on its page, and your userscript manager will ask you to confirm the installation.

Step 3: use the userscript

Go to a site the userscript applies to. It should start and take effect automatically. After trying the userscript, you can return to its page and leave feedback for the author.

List of available scripts

https://greasyfork.org/zh-CN/scripts