Dynamic Programming Technique
Post date: Nov 15, 2013 1:44:51 PM
The dynamic programming technique is similar to the divide-and-conquer technique. Both solve a problem by breaking it down into several sub-problems that can be solved recursively. The main difference is that divide and conquer partitions the problem into independent sub-problems, solves those sub-problems recursively, and then combines their solutions to solve the original problem, whereas dynamic programming is applicable when the sub-problems are not independent, that is, when sub-problems share sub-sub-problems. A dynamic-programming algorithm also solves every sub-problem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time a sub-sub-problem is encountered.
Therefore "Dynamic programming is a applicable when sub problem are not independent ,that is,when sub problem share sub problems."
Like the greedy approach, dynamic programming is typically applied to optimization problems, for which there can be many possible solutions and the requirement is to find the optimal one among them. However, the dynamic programming approach differs from the greedy approach: a greedy algorithm computes its solution by making choices in a serial, forward way, with no backtracking or revision of choices, whereas dynamic programming computes its solution bottom-up, building it from smaller sub-problems and trying many possibilities and choices before it arrives at the optimal set of choices.
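The difference can be illustrated with the coin-change problem, an example not taken from this post: with the hypothetical denominations 1, 3 and 4, a greedy algorithm that always takes the largest coin pays the amount 6 with three coins (4 + 1 + 1), while dynamic programming, by trying every possible last coin over smaller amounts, finds the optimal two coins (3 + 3). A minimal sketch:

def greedy_coins(amount, coins=(4, 3, 1)):
    # Greedy: always take the largest coin that fits; choices are never revised.
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count

def dp_coins(amount, coins=(4, 3, 1)):
    # Dynamic programming: best[a] = fewest coins for amount a, built bottom-up
    # by trying every possible last coin.
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=float("inf"))
    return best[amount]

print(greedy_coins(6))   # 3 coins (4 + 1 + 1)
print(dp_coins(6))       # 2 coins (3 + 3), the optimal answer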
The development of a dynamic-programming algorithm can be broken into a sequence of four steps (a small sketch illustrating them follows the list):
- Divide (sub-problems): The main problem is divided into several smaller sub-problems, and the solution of the main problem is expressed in terms of the solutions of these sub-problems. Essentially, this step characterizes the structure of an optimal solution and recursively defines the value of an optimal solution.
- Table (storage): The solution for each sub-problem is stored in a table, so that it can be reused whenever it is required.
- Combine (bottom-up computation): The solution to the main problem is obtained by combining the solutions of the smaller sub-problems, i.e. the value of an optimal solution is computed in a bottom-up fashion.
- Construct an optimal solution from the computed information. (This step is optional and is needed only when some additional information, such as the actual choices made, is required after the optimal value has been found.)
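As a rough sketch of how these four steps fit together, consider the 0/1 knapsack problem (listed among the examples below); the item values, weights and capacity used here are made-up sample data.

def knapsack(values, weights, capacity):
    # Step 1 (sub-problems): best value achievable with the first i items and capacity w.
    n = len(values)
    # Step 2 (table): T[i][w] stores that best value so it is computed only once.
    T = [[0] * (capacity + 1) for _ in range(n + 1)]
    # Step 3 (bottom-up combination): each entry is built from smaller sub-problems.
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            T[i][w] = T[i - 1][w]                      # option 1: skip item i
            if weights[i - 1] <= w:                    # option 2: take item i
                T[i][w] = max(T[i][w], T[i - 1][w - weights[i - 1]] + values[i - 1])
    # Step 4 (construct an optimal solution): walk back through the finished table.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if T[i][w] != T[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return T[n][capacity], chosen[::-1]

# Made-up sample data: three items with values 60, 100, 120 and weights 1, 2, 3.
print(knapsack([60, 100, 120], [1, 2, 3], capacity=5))   # -> (220, [1, 2])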
For a problem to be solvable by the dynamic programming approach, it must satisfy the following conditions:
- Principle of optimality: To solve the master problem optimally, its sub-problems should be solved optimally. It should be noted that not every sub-problem is always solved optimally; in that case we should go for the optimal majority.
- Polynomial breakup: To solve the main problem, it is divided into several sub-problems, and for dynamic programming to perform efficiently the total number of sub-problems to be solved should be at most a polynomial number. (A chain-matrix-multiplication sketch after this list illustrates both conditions.)
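Both conditions can be seen in chain matrix multiplication (also listed below): an optimal parenthesization of a chain contains optimal parenthesizations of its sub-chains (principle of optimality), and the distinct sub-problems are just the pairs (i, j), so there are only O(n^2) of them (polynomial breakup). A minimal sketch, with made-up matrix dimensions:

def matrix_chain_order(p):
    # Matrix i has dimensions p[i-1] x p[i]; there are n = len(p) - 1 matrices.
    # The best cost for the pair (i, j) is built from the optimal costs of
    # (i, k) and (k+1, j) over every split point k.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                 # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):                  # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

# Made-up dimensions: A1 is 10x30, A2 is 30x5, A3 is 5x60.
print(matrix_chain_order([10, 30, 5, 60]))         # -> 4500 scalar multiplications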
Various algorithms that make use of the dynamic programming technique are as follows:
- Knapsack problem.
- Chain matrix multiplication.
- All-pairs shortest paths.
- Travelling salesman problem.
- Tower of Hanoi.
- Checkerboard.
- Fibonacci Sequence.
- Assembly line scheduling.
- Optimal binary search trees.