Dynamic programming
Dynamic programming is an algorithmic technique that solves problems by breaking
them down into smaller sub-problems and storing the results of those sub-problems
to avoid re-computing them. It is useful for optimization problems with
overlapping sub-problems.
So we can say that:
o The problem can be divided into smaller overlapping sub-problems.
o The final optimum solution can be achieved by using the optimum
solutions of smaller sub-problems.
o Dynamic programming algorithms use memoization.
Top-Down Approach
💭 “Think recursively, store wisely!”
The Top-Down Approach starts with the
original problem and divides it into
smaller sub-problems.
The sub-problems are solved
recursively, and their solutions are
cached (stored) to avoid recomputation.
If a sub-problem has already been solved
(i.e., its solution is in the cache), it is
used directly without recalculating.
The following recursive relations
define the Fibonacci numbers:

F(0) = 0,  F(1) = 1
F(n) = F(n-1) + F(n-2),  for n ≥ 2
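The top-down approach described above can be sketched in Python as follows (a minimal illustration; the function name and the explicit `memo` dictionary are our own choices, not part of any standard API):

```python
def fib(n: int, memo=None) -> int:
    """Top-down Fibonacci: solve recursively, caching each sub-problem."""
    if memo is None:
        memo = {}
    if n in memo:                # already solved: reuse the cached value
        return memo[n]
    if n < 2:                    # base cases F(0) = 0, F(1) = 1
        result = n
    else:                        # recursive relation F(n) = F(n-1) + F(n-2)
        result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result             # store (memoize) before returning
    return result

print(fib(10))  # 55
```

Without the cache, the same recursion would recompute each F(i) exponentially many times; with it, each value is computed exactly once.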
Bottom-Up Approach
💭 “Build from the ground up!”
The Bottom-Up Approach starts
with the simplest sub-problems (base
cases) and iteratively solves larger
sub-problems.
The solution to the main problem is
built step-by-step by filling up a
table (or array) from the smallest
sub-problems to the largest.
The following iteration relations
define the Fibonacci numbers:

F(0) = 0,  F(1) = 1
F(i) = F(i-1) + F(i-2),  computed for i = 2, 3, ..., n
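The bottom-up approach can be sketched in Python like this (again a minimal illustration; the `table` array name is ours):

```python
def fib(n: int) -> int:
    """Bottom-up Fibonacci: fill a table from the base cases upward."""
    if n < 2:
        return n
    table = [0] * (n + 1)        # table[i] will hold F(i)
    table[1] = 1                 # base cases: table[0] = 0, table[1] = 1
    for i in range(2, n + 1):    # solve sub-problems smallest to largest
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```

Note that this version uses no recursion at all: the loop visits every sub-problem exactly once, in order, which is the defining trait of tabulation.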
Difference between Top-Down & Bottom-Up Approach
Aspect        | Top-Down Approach                                    | Bottom-Up Approach
------------- | ---------------------------------------------------- | ---------------------------------------------
Approach      | Uses recursion and caching (memoization)             | Uses loops and a table (tabulation)
Memory Usage  | May use more space due to recursion stack            | More memory-efficient with a fixed-size table
Speed         | Can be slower due to recursion overhead              | Usually faster; avoids recursion
Ease of Use   | Easier to write if you're comfortable with recursion | Requires manual table setup; more optimized
Best Use Case | Best when not all sub-problems are needed            | Best when all sub-problems must be solved
Advantages of Dynamic Programming
Avoids Repetition: DP stores solutions to sub-problems, preventing
redundant calculations and speeding up the process.
Fast Performance: By converting slow recursive algorithms into
efficient ones, DP reduces time complexity (e.g., O(2^n) to O(n²)).
Easy to Debug: The clear structure of DP (building tables step-by-step)
makes it simple to trace and debug.
Optimal Solutions: DP guarantees the best solution when the problem
satisfies overlapping sub-problems and optimal substructure.
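The "avoids repetition" point can be made concrete by counting function calls for naive versus memoized Fibonacci (a small Python sketch; the function names and the list-based call counters are our own illustrative devices):

```python
def fib_naive(n, counter):
    """Plain recursion: every sub-problem is recomputed from scratch."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_memo(n, memo, counter):
    """Memoized recursion: each sub-problem is computed only once."""
    counter[0] += 1
    if n in memo:
        return memo[n]
    memo[n] = n if n < 2 else fib_memo(n - 1, memo, counter) + fib_memo(n - 2, memo, counter)
    return memo[n]

naive_calls, memo_calls = [0], [0]
fib_naive(20, naive_calls)
fib_memo(20, {}, memo_calls)
# naive makes tens of thousands of calls; memoized only a few dozen
print(naive_calls[0], memo_calls[0])
```

For n = 20 the naive version makes over 20,000 calls while the memoized one stays under 40, which is exactly the exponential-to-linear improvement claimed above.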
Limitations of Dynamic Programming
High Memory Usage: DP requires memory to store intermediate results,
which can become an issue for large-scale problems.
Problem-Specific: DP only works for problems that have overlapping sub-
problems and optimal substructure. It’s not universally applicable.
Requires Insight: Successfully applying DP requires careful understanding
of the problem's structure and how sub-problems relate.
Why Use DP?
Because:
Boosts Efficiency: DP transforms slow recursive algorithms into fast
solutions by avoiding redundant calculations. This saves time, especially for
large problems.
Ensures Optimal Solutions: DP doesn’t just solve problems, it finds the best
possible solution by systematically solving sub-problems and combining their
results.
Saves Time and Resources: By storing intermediate results in a table, DP
avoids recalculating the same values, making the process much quicker and
more memory-efficient.
Improves Performance: DP reduces time complexity, turning problems that
would take too long (like O(2^n)) into more manageable ones (like O(n²)).
Applications of Dynamic Programming
Text Processing: Spell check using Edit Distance to find the closest match
between words.
Bioinformatics: DNA Sequence Alignment to compare genetic sequences
and find similarities.
Machine Learning: Speech recognition using the Viterbi Algorithm to
decode spoken words.
Finance: Stock portfolio optimization to maximize returns while
minimizing risk.
Networking: Optimal routing and shortest path algorithms to find the best
network paths.
Compression: Video/Audio compression (e.g., Huffman Coding) to reduce
file sizes.
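As one concrete example from the list above, the Edit Distance used in spell checking can be computed with a bottom-up DP table. This is a minimal sketch of the classic formulation (the function and variable names are ours):

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (bottom-up DP)."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                  # delete all i characters of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                  # insert all j characters of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]       # match: no cost
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],       # deletion
                                   dp[i][j - 1],       # insertion
                                   dp[i - 1][j - 1])   # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

A spell checker can run this against each dictionary word and suggest the candidates with the smallest distance; the O(m·n) table is the "overlapping sub-problems" structure that makes DP applicable here.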