Dynamic Programming Paradigm

Dynamic programming can be described as an optimization of plain recursion.

Before getting into the technicalities, what is the essence of the dynamic programming paradigm?

The paradigm solves a complex problem by decomposing it into smaller sub-problems; its distinguishing feature is that each sub-problem holds on to its own result so that it never has to be recalculated.

Dynamic programming is one of the most popular problem-solving paradigms. It builds optimization on top of recursion: its main objective is to avoid redundant work in problems where the same inputs are processed over and over again.

This is achieved by storing the result of each sub-problem as it is solved, so that later steps can simply look the value up instead of recomputing it. Caching these values saves time by cutting out the re-evaluation steps.

The payoff is reduced time complexity. Because repeated sub-problems are solved only once, dynamic programming typically brings the running time down from exponential (the cost of plain recursion) to polynomial, which makes the solution fast enough for real-world use.
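To make this concrete, here is a minimal sketch in C of a top-down, memoized version of the Fibonacci computation that reappears at the end of this post (the MAXN bound and the fib_memo/known names are assumptions made only for this illustration). Each value is computed once, stored, and afterwards simply looked up, which turns the exponential plain recursion into a linear-time computation:

#include <stdio.h>

#define MAXN 90                      /* assumed upper bound for this sketch   */
static long long memo[MAXN + 1];     /* stored results of solved sub-problems */
static int known[MAXN + 1];          /* flag: has memo[n] been filled yet?    */

/* Top-down DP: recurse, but store each result the first time it is computed. */
long long fib_memo(int n)
{
    if (n <= 1)
        return n;                    /* base cases F(0) = 0, F(1) = 1 */
    if (!known[n])
    {
        memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
        known[n] = 1;
    }
    return memo[n];                  /* every later call is a plain lookup */
}

int main(void)
{
    printf("%lld\n", fib_memo(50));  /* finishes immediately: O(n) work, not O(2^n) */
    return 0;
}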

Optimal Substructure-

Optimal substructure means that an optimal solution to the overall problem can be assembled from optimal solutions to its sub-problems. In dynamic programming, this structure is normally captured by a recursive formulation (a recurrence) of the problem.
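As an illustration, the 0/1 Knapsack Problem listed further below has this property: the best value achievable with the first i items and capacity w is either the best value without item i, or item i's value plus the best value for the capacity that remains after taking it. A minimal C sketch, assuming hypothetical value[] and weight[] arrays describing the items:

/* best[i][w] = highest value achievable with the first i items and capacity w. */
int knapsack(int n, int cap, const int value[], const int weight[])
{
    int best[n + 1][cap + 1];
    for (int w = 0; w <= cap; w++)
        best[0][w] = 0;                          /* no items: value 0 */
    for (int i = 1; i <= n; i++)
        for (int w = 0; w <= cap; w++)
        {
            best[i][w] = best[i - 1][w];         /* optimal solution that skips item i */
            if (weight[i - 1] <= w)
            {
                int take = value[i - 1] + best[i - 1][w - weight[i - 1]];
                if (take > best[i][w])
                    best[i][w] = take;           /* optimal solution that takes item i */
            }
        }
    return best[n][cap];
}

Either way, the answer for (i, w) is built from optimal answers to strictly smaller sub-problems, which is exactly the optimal-substructure property.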

Overlapping Sub-problems-

Overlapping sub-problems means that the space of distinct sub-problems is small: a recursive algorithm for the problem keeps running into the same sub-problems again and again rather than generating ever new ones, so storing their results pays off.
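The Fibonacci numbers are the standard illustration: the plain recursive version sketched below keeps solving the same sub-problems (fib_naive(n - 2), for instance, is computed directly and again inside fib_naive(n - 1)), which is exactly the redundancy that the memoized version above and the tabulated version at the end of this post avoid:

/* Plain recursion with no storage: exponential time, because the same */
/* sub-problems are solved over and over.                              */
long long fib_naive(int n)
{
    if (n <= 1)
        return n;                                /* F(0) = 0, F(1) = 1 */
    return fib_naive(n - 1) + fib_naive(n - 2);  /* both calls repeat work */
}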

Well-known problems and algorithms commonly associated with dynamic programming include:

  • Knapsack Problem
  • Tower of Hanoi
  • Fibonacci Series
  • Dijkstra Algorithm
  • Floyd-Warshall Algorithm (sketched below)
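As one sketch from this list, a compact Floyd-Warshall implementation is shown below. It assumes a small dist[][] adjacency matrix already initialized with edge weights, with a large INF value standing in for missing edges (V and INF are placeholders chosen only for this illustration). The matrix itself serves as the DP table: after the loop over k, dist[i][j] holds the length of the shortest path from i to j.

#define V 4                       /* number of vertices, assumed for this sketch */
#define INF 1000000               /* stands in for "no edge"                     */

/* dist[i][j] starts as the direct edge weight (or INF).             */
/* After the k-loop it holds the shortest-path distance from i to j. */
void floyd_warshall(int dist[V][V])
{
    for (int k = 0; k < V; k++)                  /* allow vertex k as an intermediate */
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}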

Example: computing the Fibonacci series with the dynamic programming (bottom-up) approach-

/* Bottom-up (tabulated) dynamic programming: fill the table from the base cases up. */
int fib(int n)
{
    if (n <= 1)
        return n;                         /* base cases: F(0) = 0, F(1) = 1 */
    int F[n + 1];                         /* table of already-solved sub-problems */
    int i = 2;
    F[0] = 0;
    F[1] = 1;
    while (i <= n)
    {
        F[i] = F[i - 1] + F[i - 2];       /* reuse the stored results */
        i++;
    }
    return F[n];
}
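Called with n = 9, for example, this returns 34. The table-filling loop runs in O(n) time and O(n) space, in contrast to the exponential cost of the plain recursive version shown earlier.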
