Design and Analysis
of Algorithms
Prof. Dr. K ADISESHA
Introduction
Algorithm Characteristics
Problem Solving
Algorithm Analysis
Algorithm Design
Asymptotic Notations
INTRODUCTION
Algorithm:
A finite set of instructions that specifies a sequence of operations to be carried out in
order to solve a specific problem or class of problems is called an Algorithm.
 The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammed
ibn Musa al Khowarizmi (c. 825 A.D.), who wrote a textbook on mathematics.
 Algorithms are generally created independent of underlying languages.
 The notion of the algorithm illustrates some important points:
 The non-ambiguity requirement for each step of an algorithm cannot be compromised.
 The range of inputs for which an algorithm works has to be specified carefully.
 There may exist several algorithms for solving the same problem.
 Algorithms for the same problem can be based on very different ideas and can solve
the problem with dramatically different speeds.
Algorithm:
 Steps for writing an algorithm:
 An Algorithm is a procedure. It has two parts: a head section and a body section.
 The head section consists of the keyword Algorithm and the name of the algorithm with
a parameter list. E.g. Algorithm name1(p1, p2, …, pn)
 In the body of an algorithm various programming constructs like if, for, while and
some statements like assignments are used. The compound statements may be
enclosed with { and } brackets.
 Comments are written using // at the beginning.
Algorithm
Euclid’s algorithm:
Euclid’s algorithm is based on applying repeatedly the equality gcd(m, n) = gcd(n, m
mod n), where m mod n is the remainder of the division of m by n, until m mod n is
equal to 0.
 Euclid’s algorithm for computing gcd(m, n) in simple steps
Step 1: If n = 0, return the value of m as the answer
and stop; otherwise, proceed to Step 2.
Step 2: Divide m by n and assign the value of the
remainder to r.
Step 3: Assign the value of n to m and the value of r
to n. Go to Step 1.
ALGORITHM Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two non-negative integers m and n, not both zero
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
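The pseudocode above translates almost line for line into Python; a minimal sketch (the function name `euclid_gcd` is mine):

```python
def euclid_gcd(m, n):
    # Repeatedly apply gcd(m, n) = gcd(n, m mod n) until n becomes 0.
    while n != 0:
        m, n = n, m % n
    return m

print(euclid_gcd(60, 24))  # 12
```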
Euclid’s algorithm:
 Example:
 Euclid’s algorithm for computing gcd(60, 24) in simple steps
 gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
Consecutive integer checking algorithm:
Another method for computing gcd(m, n): check consecutive integers, starting from
min{m, n} and counting down, and return the first one that divides both m and n.
 Algorithm gcd(m,n)
Step 1: Assign the value of min{m,n} to t
Step 2: Divide m by t. If the remainder is 0, go to Step 3;
otherwise, go to Step 4
Step 3: Divide n by t. If the remainder is 0, return t and stop;
otherwise, go to Step 4
Step 4: Decrease t by 1 and go to Step 2
 This is slower than Euclid’s algorithm
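Steps 1–4 above can be sketched in Python as follows (the function name `cic_gcd` is mine):

```python
def cic_gcd(m, n):
    # Step 1: start from t = min(m, n)
    t = min(m, n)
    while True:
        # Steps 2-3: if t divides both m and n, it is the gcd
        if m % t == 0 and n % t == 0:
            return t
        # Step 4: otherwise decrease t by 1 and try again
        t -= 1

print(cic_gcd(60, 24))  # 12
```

Since t is decreased one unit at a time, this can take up to min(m, n) iterations, which is why it is slower than Euclid's algorithm.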
Characteristics of an Algorithm:
An algorithm should have the following characteristics −
 Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or
phases), and their inputs/outputs, should be clear and must lead to only one meaning.
 Input: Zero or more quantities are externally supplied.
 Output: At least one quantity is produced.
 Definiteness: Each instruction is clear and unambiguous.
 Finiteness: If the instructions of an algorithm are traced, then for all cases the
algorithm must terminate after a finite number of steps.
 Efficiency: Every instruction must be very basic and run in a short time.
Algorithmic Problem Solving:
A sequence of steps involved in designing and analyzing an algorithm is shown in the
figure below.
 Understanding the Problem
 Read the problem’s description carefully to understand
the problem statement completely.
 Ask questions to clarify any doubts about the
problem.
 Identify the problem type and use an existing algorithm
to find a solution.
 Fix the input (instance) to the problem and the range
of the input.
Algorithmic Problem Solving:
A sequence of steps involved in designing and analyzing an algorithm is shown in the
figure below.
 Analyzing an Algorithm
 For an algorithm, efficiency is the most important property. In fact,
there are two kinds of algorithm efficiency. They are:
 Time efficiency: indicating how fast the algorithm runs.
 Space efficiency: indicating how much extra memory it
uses.
 So factors to analyze an algorithm are:
 Time efficiency of an algorithm
 Space efficiency of an algorithm
 Simplicity of an algorithm
 Generality of an algorithm
Problem Types :
The most important problem types are:
 Sorting
 Searching
 String processing
 Graph problems
 Combinatorial problems
 Geometric problems
 Numerical problems
Design Strategies:
There are various types of strategies in order to design algorithms for various
problems. Some of them are listed as follows:
 Divide & Conquer Approach
 Greedy Approach
 Dynamic Programming Approach
 Randomization Approach
 Approximation Approach
 Recursive Approach
 Branch and Bound Approach
Design Strategies:
There are various types of strategies in order to design algorithms for various
problems. Some of them are listed as follows:
 Divide and Conquer Approach: It is a top-down approach. The algorithms which
follow the divide & conquer techniques involve three steps:
 Divide the original problem into a set of subproblems.
 Solve every subproblem individually, recursively.
 Combine the solution of the subproblems (top level) into a solution of the whole original
problem.
 Greedy Technique: The greedy method is used to solve optimization problems in which
we are given a set of input values, which are required either to be maximized or
minimized (known as the objective), subject to some constraints or conditions.
Design Strategies:
There are various types of strategies in order to design algorithms for various
problems. Some of them are listed as follows:
 Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all
possible small problems and then combine them to obtain solutions for bigger problems.
 Branch and Bound: In a Branch & Bound algorithm, a given subproblem which cannot be
bounded has to be divided into at least two new restricted subproblems.
 Randomized Algorithms: A randomized algorithm is defined as an algorithm that is
allowed to access a source of independent, unbiased random bits, and it is then allowed to
use these random bits to influence its computation.
 Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the
right one. It is a depth-first search of the set of possible solutions.
Algorithm Analysis:
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following:
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an
algorithm is measured by assuming that factors like processor speed are constant and have
no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected
algorithm is implemented using a programming language. In this analysis, actual
statistics like running time and space required are collected.
Analysis of Algorithm Efficiency:
The efficiency of an algorithm can be in terms of time and space.
 The algorithm efficiency can be analyzed by the following ways.
 Analysis Framework.
 Asymptotic Notations and its properties.
 Mathematical analysis for Recursive algorithms.
 Mathematical analysis for Non-recursive algorithms.
Algorithm Complexity:
Step Count Method − The step count method is also called the frequency count method:
we count the number of times each instruction is executed.
 Algorithm to calculate ∑ i³ for i = 1 to N:
1   PartialSum = 0                        // cost: 1
2   for i = 1 to N do                     // cost: 2N+2
3       PartialSum = PartialSum + i*i*i   // cost: 4N
4   return PartialSum                     // cost: 1
 Calculate:
 Lines 1 and 4 count for one unit each
 Line 3: executed N times, each time four units
 Line 2: (1 for initialization, N+1 for all the tests, N for all the increments) total 2N + 2
 Total cost: 6N + 4  O(N)
Analysis of Algorithms:
To analyse the feasibility of an algorithm designed, we calculate its complexity. This is
expressed using asymptotic notations, grouped by scenario as follows −
 Worst-Case Scenario − Big Oh and Little Oh notations
 Best-Case Scenario − Big Omega and Little Omega notations
 Average-Case Scenario − Theta notation
Asymptotic Analysis of an algorithm:
Asymptotic analysis refers to computing the running time of an algorithm in
mathematical units of computation, i.e., estimating its run-time performance as a
function of the input size.
 The Asymptotic analysis of an algorithm falls under three types:
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.
Asymptotic Notations of an algorithm:
Following are the commonly used asymptotic notations to express the running time
complexity of an algorithm.
 The asymptotic notations of an algorithm fall under three types:
 Big Oh Notation, Ο − It measures the worst-case time complexity.
 Omega Notation, Ω − It measures the best-case time complexity.
 Theta Notation, θ − It measures the average-case time complexity.
Big Oh Notation, ( Ο )of an algorithm:
The notation Ο(n) is the formal way to express the upper bound of an algorithm's
running time.
 It measures the worst case time complexity or the longest amount of time an algorithm
can possibly take to complete.
 For a function f(n), f(n) = O(g(n)) if there exist positive constants k and n0 such that
f(n) ≤ k·g(n) for all n > n0
 Example:
 3n+2 = O(n), as 3n+2 ≤ 4n for all n ≥ 2
 3n+3 = O(n), as 3n+3 ≤ 4n for all n ≥ 3
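Witness constants like k = 4 and n0 = 2 can be spot-checked numerically. A small Python sketch (the helper name `is_big_o_witness` and the finite range are my own; this is a finite check, not a proof):

```python
def is_big_o_witness(f, g, k, n0, upto=1000):
    # Check f(n) <= k*g(n) for all n0 <= n < upto (a finite spot-check).
    return all(f(n) <= k * g(n) for n in range(n0, upto))

# 3n+2 = O(n) with k = 4, n0 = 2
print(is_big_o_witness(lambda n: 3*n + 2, lambda n: n, 4, 2))  # True
```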
Omega Notation, Ω of an algorithm:
The notation Ω(n) is the formal way to express the lower bound of an algorithm's
running time. It measures the best case time complexity or the best amount of time an
algorithm can possibly take to complete
• For a function f(n), f(n) = Ω(g(n)) if there exist positive constants k and n0 such that
f(n) ≥ k·g(n) for all n ≥ n0
• Example:
f(n) = 8n² + 2n − 3 ≥ 8n² − 3
     = 7n² + (n² − 3) ≥ 7n² for all n ≥ √3, so g(n) = n²
Thus, k = 7 and n0 = √3
• Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }
Theta Notation, θ of an algorithm:
The notation θ(n) is the formal way to express both the lower bound and the upper
bound of an algorithm's running time.
 For a function f(n), f(n) = θ(g(n)) if there exist positive constants k1, k2 and n0 such that
k1·g(n) ≤ f(n) ≤ k2·g(n) for all n ≥ n0
 Example:
3n+2 = θ(n), as 3n+2 ≥ 3n and 3n+2 ≤ 4n for all n ≥ 2;
k1 = 3, k2 = 4, and n0 = 2
 θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }
Common Asymptotic Notations:
Following is a list of some common asymptotic notations.
 constant − Ο(1)
 logarithmic − Ο(log n)
 linear − Ο(n)
 n log n − Ο(n log n)
 quadratic − Ο(n²)
 cubic − Ο(n³)
 polynomial − n^Ο(1)
 exponential − 2^Ο(n)
Compare the Rates of Change of Two Functions Using Limits:
In order to determine the relationship between f(n) and g(n), it is often useful to apply
L’Hôpital’s rule. Let f(x) and g(x) be differentiable functions with derivatives f′(x)
and g′(x), respectively, such that lim(x→∞) f(x) = lim(x→∞) g(x) = ∞. Then
lim(x→∞) f(x)/g(x) = lim(x→∞) f′(x)/g′(x)
Compare the Rates of Change of Two Functions Using Limits:
Example: Given f(n) = 5n + 3, prove that f(n) = O(n).
Since lim(n→∞) (5n + 3)/n = 5 is a constant, f(n) is of the same growth order as g(n) = n.
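The limit can be approximated numerically by evaluating the ratio at a large n. A small sketch (the helper name `growth_ratio` and the choice of n are mine):

```python
def growth_ratio(f, g, n=10**9):
    # Approximate lim f(n)/g(n) by evaluating the ratio at one large n.
    return f(n) / g(n)

r = growth_ratio(lambda n: 5*n + 3, lambda n: n)
print(r)  # close to the constant 5, so f(n) grows like g(n) = n
```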
Recurrence Relation
Recurrence:
Recursion is a fundamental concept in computer science and mathematics that
allows functions to call themselves, enabling the solution of complex problems
by reducing them to smaller instances of the same problem. The running time of a
recursive algorithm is described by a recurrence relation.
 There are four methods for solving Recurrence:
 Substitution Method
 Iteration Method
 Recursion Tree Method
 Master Method
 For example, the worst-case running time T(n) of the
MERGE SORT procedure is described by the recurrence
T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1
Iteration Method:
It means to expand the recurrence and express it as a summation of terms of n and the
initial condition.
 Example: Consider the Recurrence:
T (n) = 1 if n=1
= 2T (n-1) if n>1
 Solution:
T (n) = 2T (n-1)
      = 2[2T (n-2)] = 2^2 T (n-2)
      = 4[2T (n-3)] = 2^3 T (n-3)
      = 8[2T (n-4)] = 2^4 T (n-4)        (Eq.1)
Repeat the procedure for i times:
T (n) = 2^i T (n-i)
Put n-i = 1, i.e. i = n-1, in (Eq.1):
T (n) = 2^(n-1) T (1)
      = 2^(n-1) . 1        {T (1) = 1 .....given}
      = 2^(n-1)
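The closed form T(n) = 2^(n-1) can be checked by evaluating the recurrence directly. A minimal Python sketch (the function name `T` mirrors the recurrence):

```python
def T(n):
    # Direct evaluation of the recurrence T(1) = 1, T(n) = 2*T(n-1).
    return 1 if n == 1 else 2 * T(n - 1)

# The iteration method gives the closed form T(n) = 2^(n-1).
for n in range(1, 15):
    assert T(n) == 2 ** (n - 1)
print(T(5))  # 16
```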
Recursion Tree:
A recursion tree is a graphical representation that illustrates the execution flow of a
recursive function. It provides a visual breakdown of recursive calls.
 Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it.
/* How the function calls unwind:
Factorial(5) [ 120 ]
5 * Factorial(4) ==> 120
4 * Factorial(3) ==> 24
3 * Factorial(2) ==> 6
2 * Factorial(1) ==> 2
1
*/
// find factorial of a number
factorial(n) {
if n is less than 2: // Factorial of 0 or 1 is 1
return 1
else // Recursive step
return n * factorial(n-1); // Factorial of 5 => 5 * Factorial(4)...
}
Brute Force Approach
Brute force approach:
A brute force approach is an approach that finds all the possible solutions to find a
satisfactory solution to a given problem.
 The brute force algorithm tries out all the possibilities until a satisfactory solution
is found.
 Such an algorithm can be of two types:
 Optimizing: In this case, the best solution is found. To find the best solution, it
may have to examine all the possible solutions. For example: finding the best path for
the travelling salesman problem.
 Satisficing: It stops searching as soon as a satisfactory solution is
found. For example, finding a travelling salesman path which is within 10% of
optimal.
Brute force approach:
A brute force approach is an approach that finds all the possible solutions to find a
satisfactory solution to a given problem.
 The following are the advantages of the brute-force algorithm:
 This algorithm finds all the possible solutions, and it also guarantees that it finds
the correct solution to a problem.
 This type of algorithm is applicable to a wide range of domains.
 It is mainly used for solving simpler and small problems.
 It can be considered a comparison benchmark to solve a simple problem and
does not require any particular domain knowledge.
Linear search:
Linear Searching: Also called Sequential Searching.
 It is used to find out the location of the data item if it exists in the given collection of
data items.
 Example Searching element 33 from the array of elements:
 The complexity of linear search is therefore O(n).
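A linear search sketch in Python (the sample array is my own, chosen so that 33 appears in it, since the original example figure is not reproduced here):

```python
def linear_search(arr, key):
    # Scan each position in turn; return the index of key, or -1 if absent.
    for i, value in enumerate(arr):
        if value == key:
            return i
    return -1

print(linear_search([10, 14, 19, 26, 27, 31, 33, 35, 42, 44], 33))  # 6
```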
Linear search:
Time and Space Complexity of Linear Search Algorithm:
 Time Complexity:
 Best Case: In the best case, the key might be present at the first index. So the best
case complexity is O(1)
 Worst Case: In the worst case, the key might be present at the last index i.e.,
opposite to the end from which the search has started in the list. So the worst-case
complexity is O(N) where N is the size of the list.
 Average Case: O(N)
 Auxiliary Space: O(1) as except for the variable to iterate through the list, no other
variable is used.
Sorting in Arrays:
A sorting algorithm is used to rearrange a given array or list of elements according to a
comparison operator on the elements:
 The comparison operator is used to decide the new order of the elements in the respective
data structure.
 Types of Sorting Algorithms are:
 Bubble Sort
 Selection Sort
 Insertion Sort
 Merge Sort
 Quick Sort
 Heap Sort
Bubble Sort in Arrays:
Bubble Sort is the simplest sorting algorithm. It works by repeatedly swapping
adjacent elements if they are in the wrong order.
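A minimal bubble sort sketch in Python (the sample input in the test is my own):

```python
def bubble_sort(arr):
    # Repeatedly swap adjacent out-of-order elements; after pass i,
    # the i+1 largest elements are in their final positions.
    a = list(arr)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```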
Selection Sort:
Complexity Analysis of Selection Sort
Input: Given n input elements.
Output: Number of steps incurred to sort a list.
Logic: If we are given n elements, then in the
 First pass, it will do n-1 comparisons;
 Second pass, it will do n-2; in the third pass,
 it will do n-3 and so on.
Therefore, the selection sort algorithm has a time complexity of O(n²) and a
space complexity of O(1), since it needs only a constant amount of extra memory (a temp
variable for swapping).
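The pass structure described above (n-1 comparisons, then n-2, and so on) can be sketched as:

```python
def selection_sort(arr):
    a = list(arr)
    n = len(a)
    for i in range(n - 1):
        # Pass i performs n-1-i comparisons to find the minimum of a[i:].
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # One swap per pass; only O(1) extra space is used.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```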
Exhaustive search Approach
Exhaustive search:
Exhaustive search, also known as brute-force search or generate and test, is a problem-
solving technique that involves systematically checking all possible candidates to see if
they meet the problem's requirements.
Exhaustive Search:
 Traveling Salesman Problem
 Knapsack Problem
 Assignment Problem
 Graph Traversals
Traveling Salesman Problem:
The Travelling Salesman Problem (TSP) is a classic algorithmic problem in the field of
computer science and operations research, focusing on optimization. It seeks the
shortest possible route that visits every point in a set of locations just once.
 To solve the TSP using the brute-force approach, you
must calculate the total number of routes and then
draw and list all the possible routes.
 Calculate the distance of each route and then choose
the shortest one — this is the optimal solution.
Knapsack Problem:
In the knapsack problem, you need to pack a set of items, with given values and sizes
(such as weights or volumes), into a container with a maximum capacity.
 If the total size of the items exceeds the capacity, you can't pack them all. In that case,
the problem is to choose a subset of the items of maximum total value that will fit in
the container.
 The data includes the following:
 Weights: A vector containing the weights of the items.
 Values: A vector containing the values of the items.
 Capacities: A vector with just one entry, the capacity of the knapsack.
Knapsack Problem:
The Knapsack problem is an example of a combinatorial optimization problem. This
problem is also commonly known as the “Rucksack Problem“.
 The knapsack problem can be classified into the following types:
 Fractional Knapsack Problem
 0/1 Knapsack Problem
 Bounded Knapsack Problem
 Unbounded Knapsack Problem
Fractional Knapsack Problem:
Given the weights and values of N items, put these items in a knapsack of capacity W to
get the maximum total value in the knapsack. In Fractional Knapsack, we can break
items for maximizing the total value of the knapsack.
Fractional Knapsack Problem:
Example: Consider the example: arr[] = {{100, 20}, {60, 10}, {120, 30}}, W = 50.
Sorting: Initially sort the array based on the profit/weight ratio.
The sorted array will be {{60, 10}, {100, 20}, {120, 30}}.
Iteration:
For i = 0, weight = 10 which is less than W. So add this element in the knapsack. profit = 60 and
remaining W = 50 – 10 = 40.
For i = 1, weight = 20 which is less than W. So add this element too. profit = 60 + 100 = 160 and
remaining W = 40 – 20 = 20.
For i = 2, weight = 30 is greater than W. So add 20/30 fraction = 2/3 fraction of the element.
Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and remaining W becomes 0.
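The greedy iteration above can be sketched in Python; items are (profit, weight) pairs, and the function name `fractional_knapsack` is mine:

```python
def fractional_knapsack(items, W):
    # Sort by profit/weight ratio, descending, then take items greedily.
    items = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)
    profit = 0.0
    for p, w in items:
        if w <= W:                 # the whole item fits
            profit += p
            W -= w
        else:                      # take only the fraction that still fits
            profit += p * W / w
            break
    return profit

print(fractional_knapsack([(100, 20), (60, 10), (120, 30)], 50))  # 240.0
```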
0/1 Knapsack Problem:
We are given N items where each item has some weight (wi) and value (vi) associated
with it. We are also given a bag with capacity W. The target is to put the items into the
bag such that the sum of values associated with them is the maximum possible.
0/1 Knapsack Problem:
The 0/1 Knapsack Problem states that you have a backpack with a weight limit, and
you are in a room full of treasures, each treasure with a value and a weight.
 To solve the 0/1 Knapsack Problem using brute force means to:
 Calculate the value of every possible combination of
items in the knapsack.
 Discard the combinations that are heavier than the
knapsack weight limit.
 Choose the combination of items with the highest
total value.
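The three brute-force steps above can be sketched directly with subset enumeration (the function name `knapsack_bruteforce` is mine; the example data is from the later slide):

```python
from itertools import combinations

def knapsack_bruteforce(profits, weights, W):
    # Enumerate every subset of items, discard those over the weight limit,
    # and keep the highest total profit seen.
    n = len(profits)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= W:
                best = max(best, sum(profits[i] for i in subset))
    return best

print(knapsack_bruteforce([60, 100, 120, 50], [10, 20, 30, 40], 50))  # 220
```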
0/1 Knapsack Problem:
A similar dynamic programming solution for the 0-1 knapsack problem also runs in
pseudo-polynomial time. Assume w1, w2,…, wn, W are strictly positive integers.
 We can define m[i, w] (the maximum value achievable using the first i items with
capacity w) recursively as follows:
m[0, w] = 0
m[i, w] = m[i-1, w]                              if wi > w
m[i, w] = max(m[i-1, w], m[i-1, w-wi] + vi)      if wi ≤ w
 Example:
Wi={4, 3, 2, 1}
Vi={5, 4, 3, 2 } Capacity=6
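The recurrence for m[i, w] can be filled in bottom-up; a sketch using the example data above (the function name `knapsack_dp` is mine):

```python
def knapsack_dp(weights, values, W):
    n = len(weights)
    # m[i][w] = best value using the first i items with capacity w
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:
                m[i][w] = m[i - 1][w]                 # item i does not fit
            else:
                m[i][w] = max(m[i - 1][w],            # skip item i
                              m[i - 1][w - weights[i - 1]] + values[i - 1])
    return m[n][W]

print(knapsack_dp([4, 3, 2, 1], [5, 4, 3, 2], 6))  # 9 (items of weight 3, 2, 1)
```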
0/1 Knapsack Problem:
Given weights and profits of N items, put these items in a knapsack of capacity W.
Enumerate the possible solutions in which no remaining item has weight less than the
remaining capacity of the knapsack, and report the one with maximum profit.
Input: Profits[] = {60, 100, 120, 50}
Weights[] = {10, 20, 30, 40}, W = 40
Output:
10: 60, 20: 100,
10: 60, 30: 120,
Maximum Profit = 180
Explanation: Maximum profit from all the possible
solutions is 180
Input: Profits[] = {60, 100, 120, 50}
Weights[] = {10, 20, 30, 40}, W = 50
Output:
10: 60, 20: 100,
10: 60, 30: 120,
20: 100, 30: 120,
Maximum Profit = 220
Explanation: Maximum profit from all the possible
solutions is 220
Graphs Traversal Algorithms:
A graph is a non-linear data structure consisting of nodes and the edges that connect
them. The edges may be directed or undirected.
 A graph can be represented as G(V, E). The following graph can be represented as
G({A, B, C, D, E}, {(A, B), (B, D), (D, E), (B, C), (C, A)})
 The graph has two types of traversal algorithms.
 Breadth First Search
 Depth First Search.
BFS algorithm:
Breadth-first search is a graph traversal algorithm that starts traversing the graph
from the root node and explores all the neighboring nodes.
 The applications of breadth-first-algorithm are given as follows -
 BFS can be used to find the neighboring locations from a given source location.
 In a peer-to-peer network, BFS algorithm can be used as a traversal method to find all the
neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ this
process to find "seeds" and "peers" in the network.
 BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages.
 BFS is used to determine the shortest path and minimum spanning tree.
 BFS is also used in Cheney's algorithm for copying garbage collection.
BFS algorithm:
 The steps involved in the BFS algorithm to explore a graph are given as follows -
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2
(waiting state)
[END OF LOOP]
Step 6: EXIT.
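Steps 1–6 above can be sketched in Python with a queue; STATUS values 1/2/3 mirror the ready/waiting/processed states, and the example graph is the one from the earlier slide (my adjacency-list encoding of it):

```python
from collections import deque

def bfs(graph, start):
    # STATUS 1 = ready, 2 = waiting (enqueued), 3 = processed.
    status = {v: 1 for v in graph}
    order = []
    queue = deque([start])
    status[start] = 2
    while queue:
        n = queue.popleft()
        status[n] = 3              # process N
        order.append(n)
        for nb in graph[n]:
            if status[nb] == 1:    # enqueue only ready neighbours
                status[nb] = 2
                queue.append(nb)
    return order

g = {'A': ['B', 'C'], 'B': ['A', 'D', 'C'], 'C': ['B', 'A'],
     'D': ['B', 'E'], 'E': ['D']}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D', 'E']
```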
Depth First Search:
The depth-first search (DFS) algorithm starts with the initial node of graph G and
goes deeper until we find the goal node or the node with no children.
 Applications of DFS algorithm
 The applications of using the DFS algorithm are given as follows -
 DFS algorithm can be used to implement the topological sorting.
 It can be used to find the paths between two vertices.
 It can also be used to detect cycles in the graph.
 DFS algorithm is also used for one solution puzzles.
 DFS is used to determine if a graph is bipartite or not.
Depth First Search:
The step-by-step process to implement the DFS traversal is given as follows -
1) First, create a stack that can hold the total number of vertices in the graph.
2) Now, choose any vertex as the starting point of traversal, and push that vertex onto the stack.
3) After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) to the top
of the stack.
4) Repeat step 3 until no unvisited vertex is left adjacent to the vertex on the stack's top.
5) When no such vertex is left, go back and pop a vertex from the stack.
6) Repeat steps 3, 4, and 5 until the stack is empty.
Depth First Search:
Algorithm
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
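The stack-based steps above can be sketched in Python; the STATUS values and the example graph (my adjacency-list encoding of the earlier slide's graph) follow the same conventions as in the BFS case:

```python
def dfs(graph, start):
    # STATUS 1 = ready, 2 = waiting (on the stack), 3 = processed.
    status = {v: 1 for v in graph}
    order = []
    stack = [start]
    status[start] = 2
    while stack:
        n = stack.pop()
        status[n] = 3              # process N
        order.append(n)
        for nb in graph[n]:
            if status[nb] == 1:    # push only ready neighbours
                status[nb] = 2
                stack.append(nb)
    return order

g = {'A': ['B', 'C'], 'B': ['A', 'D', 'C'], 'C': ['B', 'A'],
     'D': ['B', 'E'], 'E': ['D']}
print(dfs(g, 'A'))  # ['A', 'C', 'B', 'D', 'E']
```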
Decrease and Conquer
Decrease and Conquer:
A problem-solving strategy known as "Decrease and Conquer" involves reducing the size
of the input at every stage of the solution procedure.
 Similar to divide-and-conquer in that it breaks the problem down into smaller sub-
problems, decrease-and-conquer reduces the problem to a single smaller instance at each
stage rather than dividing it into several independent subproblems.
 Algorithm (binary search, as an example of decrease and conquer)
Step 1: Start
Step 2: Set p (the element whose position is to be found)
Step 3: Set the pointers first (at the first position) and last (at the last position)
Step 4: Find the middle element (mid)
Step 5: If p == A[mid], then print the position mid
Step 6: If p > A[mid], search for p among the elements to the right of mid
Step 7: If p < A[mid], search for p among the elements to the left of mid
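The steps above can be sketched iteratively in Python (the sample array is my own, chosen so that it contains 57, the element searched for in a later example):

```python
def binary_search(arr, p):
    # arr must be sorted; the search space is halved at each step.
    first, last = 0, len(arr) - 1
    while first <= last:
        mid = (first + last) // 2
        if arr[mid] == p:
            return mid             # position found
        elif p > arr[mid]:
            first = mid + 1        # search the right half
        else:
            last = mid - 1         # search the left half
    return -1                      # not present

print(binary_search([11, 14, 25, 30, 40, 41, 52, 57, 70], 57))  # 7
```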
Binary Searching:
The binary search algorithm can be used only
with a sorted list of elements.
 Binary Search first divides a large array into
two smaller sub-arrays and then recursively
operates on the sub-arrays.
 Binary Search basically reduces the search
space to half at each step.
Binary Searching:
The binary search algorithm can be used with only sorted list of elements.
 Example: Searching the element 57 from the array of elements
Binary Searching Analysis:
 Input: an array A of size n, already sorted in the ascending or descending order.
 Output: analyze to search an element item in the sorted array of size n.
 Logic: Let T(n) = number of comparisons of an item with n elements in a sorted array.
Set BEG = 1 and END = n
 Compare the search item with the mid item.
 Case 1: item = A[mid], then LOC = mid; this is the best case and T(n) = 1
 Case 2: item ≠ A[mid], then we split the array into two equal parts of size n/2.
 Repeat the same process until the search element is found
 Each step compares the search element with the mid element of the selected half of the array
Insertion Sorting:
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list)
one item at a time.
 This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained
which is always sorted.
Insertion Sorting:
 ALGORITHM: InsertionSort(A, N)   // A is an array with N unsorted elements
 Step 1: for I = 1 to N-1
 Step 2:     J = I
             While (J >= 1)
                 if (A[J] < A[J-1]) then
                     Temp = A[J]
                     A[J] = A[J-1]
                     A[J-1] = Temp
                 [End if]
                 J = J - 1
             [End of While loop]
         [End of For loop]
 Step 3: Exit
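The algorithm above translates to Python as follows; I add the comparison to the while-condition so the inner loop stops as soon as the element has sunk into place (the pseudocode keeps scanning, which is correct but does extra work):

```python
def insertion_sort(A):
    a = list(A)
    # For each i, sink a[i] leftwards into the already-sorted prefix a[0..i-1].
    for i in range(1, len(a)):
        j = i
        while j >= 1 and a[j] < a[j - 1]:
            a[j], a[j - 1] = a[j - 1], a[j]   # swap with the left neighbour
            j -= 1
    return a
```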
Divide and Conquer
Merging from Array:
Merging: It is used to combine the data items of two sorted files into single file in the
sorted form.
 We have a sorted linear array A as below:
Index: 1  2  3  4  5  6
Value: 10 40 50 80 95 100
 And a sorted linear array B as below:
Index: 1  2  3  4
Value: 20 35 45 90
 After merging, the merged array C is as below:
Index: 1  2  3  4  5  6  7  8  9  10
Value: 10 20 35 40 45 50 80 90 95 100
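The merge step can be sketched with two pointers, always taking the smaller head element (the function name `merge` is mine; the data is the A and B above):

```python
def merge(A, B):
    # Walk both sorted arrays with two pointers, taking the smaller head.
    c, i, j = [], 0, 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            c.append(A[i]); i += 1
        else:
            c.append(B[j]); j += 1
    c.extend(A[i:])   # one of these tails is already empty
    c.extend(B[j:])
    return c

print(merge([10, 40, 50, 80, 95, 100], [20, 35, 45, 90]))
```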
Binary tree traversal:
Binary tree traversal is a process that involves visiting each node in a binary tree data
structure.
 Tree Traversal refers to the process of visiting or accessing each node of the tree
exactly once in a certain order.
 A Tree Data Structure can be traversed in the following ways:
 Depth First Search or DFS
 Inorder Traversal
 Preorder Traversal
 Postorder Traversal
 Level Order Traversal or Breadth First Search or BFS
Divide and Conquer
Prof. Dr. K. Adisesha
72
Binary tree traversal (DFS):
Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 Pre-order: A pre-order traversal on a tree performs the following steps starting from
the root:
1) Return the root node value.
2) Traverse the left subtree by recursively calling the pre-order function.
3) Traverse the right subtree by recursively calling the pre-order function.
 Pre-order traversal would output the node values in the following
order: A, B, D, E, C
Divide and Conquer
Prof. Dr. K. Adisesha
73
Binary tree traversal (DFS):
A pre-order traversal on a tree performs the following steps starting from the root:
function Node(data)
{ this.data = data;
  this.left = null;
  this.right = null; }
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// pre-order: visit the root, then the left subtree, then the right subtree
function preorder(node)
{ if (node === null) return;
  console.log(node.data);
  preorder(node.left);
  preorder(node.right); }
preorder(root); // prints A, B, D, E, C
Divide and Conquer
Prof. Dr. K. Adisesha
74
Binary tree traversal (DFS):
Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 In-order: An in-order traversal on a tree performs the following steps starting from the
root:
1) Traverse the left subtree by recursively calling the in-order function.
2) Return the root node value.
3) Traverse the right subtree by recursively calling the in-order function.
 In-order traversal would output the node values in the following order:
D, B, E, A, C
Divide and Conquer
Prof. Dr. K. Adisesha
75
Binary tree traversal (DFS):
An in-order traversal on a tree performs the following steps starting from the root:
function Node(data)
{ this.data = data;
  this.left = null;
  this.right = null; }
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// in-order: left subtree, then the root, then the right subtree
function inorder(node)
{ if (node === null) return;
  inorder(node.left);
  console.log(node.data);
  inorder(node.right); }
inorder(root); // prints D, B, E, A, C
Divide and Conquer
Prof. Dr. K. Adisesha
76
Binary tree traversal (DFS):
Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 Post-order: A post-order traversal on a tree performs the following steps starting from
the root.
1) Traverse the left subtree by recursively calling the post-order function.
2) Traverse the right subtree by recursively calling the post-order function.
3) Return the root node value.
 Post-order traversal would output the node values in the following order:
D, E, B, C, A
Divide and Conquer
Prof. Dr. K. Adisesha
77
Binary tree traversal (DFS):
A post-order traversal on a tree performs the following steps starting from the root:
function Node(data)
{ this.data = data;
  this.left = null;
  this.right = null; }
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// post-order: left subtree, then the right subtree, then the root
function postorder(node)
{ if (node === null) return;
  postorder(node.left);
  postorder(node.right);
  console.log(node.data); }
postorder(root); // prints D, E, B, C, A
Divide and Conquer
Prof. Dr. K. Adisesha
78
Binary tree traversal (BFS):
Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 Level-order: A level-order traversal on a tree performs the following steps starting
from the root.
1) Add the root to a queue.
2) Remove the node at the front of the queue, and visit its value.
3) Add all children of the removed node to the queue, and continue from step 2 until the
queue is empty.
 Level-order traversal would output the node values in the following
order: A, B, C, D, E
Divide and Conquer
Prof. Dr. K. Adisesha
79
Binary tree traversal (BFS):
A level-order traversal on a tree performs the following steps starting from the root:
function Node(data)
{ this.data = data;
  this.left = null;
  this.right = null; }
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// level-order: visit nodes level by level using a queue
function levelorder(start)
{ var queue = [start];
  while (queue.length > 0)
  { var node = queue.shift(); // remove from the front of the queue
    console.log(node.data);
    if (node.left) queue.push(node.left);
    if (node.right) queue.push(node.right); } }
levelorder(root); // prints A, B, C, D, E
Divide and Conquer
Prof. Dr. K. Adisesha
80
Advantages of Divide and Conquer:
Divide and conquer is a problem-solving method in data structures that works on the
principle of recursion.
 Advantages of Divide and Conquer:
 Efficiency: Divide and conquer algorithms typically have a time complexity of O(n log
n), which is more efficient than many other algorithms for large datasets.
 Simplicity: Divide and conquer algorithms are often easy to understand and implement.
 Parallelizability: Divide and conquer algorithms can be easily parallelized, as each
subproblem can be solved independently.
 Cache-friendliness: Divide and conquer algorithms tend to have good cache
performance, as they access data in a predictable pattern.
Divide and Conquer
Prof. Dr. K. Adisesha
81
Disadvantages of Divide and Conquer:
Divide and conquer is a problem-solving method in data structures that works on the
principle of recursion.
 Disadvantages of Divide and Conquer:
 Recursion overhead: Divide and conquer algorithms use recursion, which can lead to
significant overhead in terms of stack space and function calls.
 Not suitable for all problems: Divide and conquer algorithms are not suitable for all
types of problems. They are most effective for problems that can be recursively divided
into smaller subproblems.
 Limited memory efficiency: Divide and conquer algorithms can require a significant
amount of memory, as they create multiple copies of the input data.
Greedy Technique
Prof. Dr. K. Adisesha
82
Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 This technique is basically used to determine the feasible solution that may or may not
be optimal.
 The following are the characteristics of a greedy method:
 To construct the solution in an optimal way, this algorithm creates two sets where one
set contains all the chosen items, and another set contains the rejected items.
 A Greedy algorithm makes good local choices in the hope that the solution should be
either feasible or optimal.
Greedy Technique
Prof. Dr. K. Adisesha
83
Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 The components that can be used in the greedy algorithm are:
 Candidate set: A solution that is created from the set is known as a candidate set.
 Selection function: This function is used to choose the candidate or subset which can be
added in the solution.
 Feasibility function: A function that is used to determine whether the candidate or subset can
be used to contribute to the solution or not.
 Objective function: A function is used to assign the value to the solution or the partial
solution.
 Solution function: This function is used to intimate whether the complete function has been
reached or not.
Greedy Technique
Prof. Dr. K. Adisesha
85
Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 Applications of Greedy Algorithm
 It is used in finding the shortest path.
 It is used to find the minimum spanning tree using the prim's algorithm or the
Kruskal's algorithm.
 It is used in a job sequencing with a deadline.
 This algorithm is also used to solve the fractional knapsack problem.
Greedy Technique
Prof. Dr. K. Adisesha
86
Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 Types of Greedy Algorithms
 Knapsack Problem.
 Minimum Spanning Tree.
 Single-Source Shortest Path Problem.
 Job Scheduling Problem.
 Prim's Minimal Spanning Tree Algorithm.
 Kruskal's Minimal Spanning Tree Algorithm.
 Dijkstra's Shortest Path Algorithm.
Greedy Technique
Prof. Dr. K. Adisesha
87
Knapsack Problem using Greedy Algorithm:
In this method, the knapsack is filled so that its maximum capacity is utilized and the
maximum profit is earned from it.
 The knapsack problem using the Greedy Method is referred to as:
 Given a list of n objects, say {I1, I2, …, In}, and a knapsack (or bag).
 The capacity of the knapsack is M.
 Each object Ij has a weight wj and a profit of pj
 If a fraction xj (where xj ∈ [0, 1]) of an object Ij is placed into the knapsack, then a
profit of pj·xj is earned.
 The problem is to fill the knapsack (up to its maximum capacity M), maximizing the total profit
earned.
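The greedy choice for the fractional knapsack is to take objects in decreasing order of profit-to-weight ratio pj/wj, taking a fraction of the last object if needed. A sketch (the function name and the field names w and p are assumptions for illustration):

```javascript
// Greedy fractional knapsack: sort items by profit/weight ratio and fill
// the knapsack with the best-ratio items (or a fraction of one) up to capacity M.
function fractionalKnapsack(items, M) {
  var sorted = items.slice().sort(function (a, b) {
    return b.p / b.w - a.p / a.w; // highest ratio first
  });
  var profit = 0, remaining = M;
  for (var i = 0; i < sorted.length && remaining > 0; i++) {
    var take = Math.min(sorted[i].w, remaining); // whole item, or a fraction
    profit += sorted[i].p * (take / sorted[i].w);
    remaining -= take;
  }
  return profit;
}
```

For the classic instance with weights (10, 20, 30), profits (60, 100, 120) and M = 50, this greedy procedure earns a total profit of 240.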
Greedy Technique
Prof. Dr. K. Adisesha
88
Minimum Spanning Tree (MST):
In a weighted graph, a minimum spanning tree is a spanning tree that has minimum
weight than all other spanning trees of the same graph.
 In real-world situations, this weight can be measured as distance, congestion, traffic
load or any arbitrary value denoted to the edges.
The two most important spanning tree algorithms (both greedy algorithms) are:
 Kruskal's Algorithm
 Prim's Algorithm
Greedy Technique
Prof. Dr. K. Adisesha
89
Kruskal's Algorithm:
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input
and finds the subset of the edges of that graph which:
 form a tree that includes every vertex
 has the minimum sum of weights among all the trees that can be formed from the
graph
 The steps for implementing Kruskal's algorithm are as follows:
 Sort all the edges from low weight to high
 Take the edge with the lowest weight and add it to the spanning tree. If adding the
edge created a cycle, then reject this edge.
 Keep adding edges until we reach all vertices.
Greedy Technique
Prof. Dr. K. Adisesha
90
Kruskal's Algorithm:
Kruskal's algorithm finds a subgraph that connects all the vertices present in the main
graph with the least possible number of edges and minimum total cost.
 Example: Construct a minimum spanning tree using Kruskal's algorithm for the given graph.
 Solution:
 As the first step, sort all the edges in the given graph in an
ascending order and store the values in an array.
Edge: BD AB CF FE BC GF AG CD DE CG
Cost:  5  6  9 10 11 12 15 17 22 25
Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}
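A minimal sketch of Kruskal's algorithm with a simple union-find (disjoint set), run on the edge list from the example. The letter vertices A..G are mapped to indices 0..6; that mapping is an assumption, since the graph figure itself is not reproduced in the text:

```javascript
// Kruskal's algorithm: sort edges by weight and keep an edge only if it
// joins two different components (i.e. it does not create a cycle).
function kruskal(numVertices, edges) {
  var parent = [];
  for (var v = 0; v < numVertices; v++) parent[v] = v;
  function find(x) { while (parent[x] !== x) x = parent[x]; return x; }

  var sorted = edges.slice().sort(function (a, b) { return a[2] - b[2]; });
  var cost = 0, used = 0;
  for (var i = 0; i < sorted.length && used < numVertices - 1; i++) {
    var ra = find(sorted[i][0]), rb = find(sorted[i][1]);
    if (ra !== rb) {       // no cycle: accept the edge
      parent[ra] = rb;
      cost += sorted[i][2];
      used++;
    }
  }
  return cost;
}

// Edges [u, v, weight] from the example table, with A..G as 0..6 (assumed).
var edges = [[1, 3, 5], [0, 1, 6], [2, 5, 9], [5, 4, 10], [1, 2, 11],
             [6, 5, 12], [0, 6, 15], [2, 3, 17], [3, 4, 22], [2, 6, 25]];
console.log(kruskal(7, edges)); // 53, matching the minimum cost above
```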
Greedy Technique
Prof. Dr. K. Adisesha
91
Prim's algorithm:
Prim's algorithm to find minimum cost spanning tree uses the greedy approach.
 Algorithm
 Declare an array visited[] to store the visited vertices and, add the arbitrary root, say S, to the visited
array.
 Check whether the adjacent vertices of the last visited vertex are present in the visited[] array or not.
 If the vertices are not in the visited[] array, compare the cost of edges and add the least cost edge to the
output spanning tree.
 The adjacent unvisited vertex with the least cost edge is added into the visited[] array and the least cost
edge is added to the minimum spanning tree output.
 Steps 2 and 4 are repeated for all the unvisited vertices in the graph to obtain the full minimum spanning
tree output for the given graph.
 Calculate the cost of the minimum spanning tree obtained.
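The steps above can be sketched on an adjacency-matrix graph. This is a hypothetical 4-vertex example (the slide's own graph is not reproduced in the text); Infinity marks a missing edge:

```javascript
// Prim's algorithm: grow the tree from vertex 0, always adding the cheapest
// edge that connects a visited vertex to an unvisited one.
function prim(graph) {
  var n = graph.length;
  var visited = [true]; // start from vertex 0 as the arbitrary root
  for (var v = 1; v < n; v++) visited[v] = false;
  var cost = 0;
  for (var added = 1; added < n; added++) {
    var best = Infinity, bestV = -1;
    for (var u = 0; u < n; u++) {
      if (!visited[u]) continue;
      for (var w = 0; w < n; w++) {
        if (!visited[w] && graph[u][w] < best) { // least-cost frontier edge
          best = graph[u][w];
          bestV = w;
        }
      }
    }
    visited[bestV] = true; // the cheapest adjacent vertex joins the tree
    cost += best;
  }
  return cost;
}

// Hypothetical example graph: edges 0-1 (1), 1-2 (2), 2-3 (3), 0-3 (4).
var g = [[Infinity, 1, Infinity, 4],
         [1, Infinity, 2, Infinity],
         [Infinity, 2, Infinity, 3],
         [4, Infinity, 3, Infinity]];
console.log(prim(g)); // 6 (MST uses edges 0-1, 1-2, 2-3)
```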
Greedy Technique
Prof. Dr. K. Adisesha
92
Prim's algorithm:
Prim's algorithm to find a minimum cost spanning tree (like Kruskal's algorithm) uses
the greedy approach. Prim's algorithm shares a similarity with the shortest path first
algorithms.
 The algorithm, similar to any shortest path algorithm, begins from a vertex that is set as a root
and walks through all the vertices in the graph by determining the least cost adjacent edges.
 Step 1 - Remove all loops and parallel edges
 Step 2 - Choose any arbitrary node as root node
Greedy Technique
Prof. Dr. K. Adisesha
93
Prim's algorithm:
Prim's algorithm shares a similarity with the shortest path first algorithms.
Greedy Technique
Prof. Dr. K. Adisesha
94
Dijkstra’s Algorithm:
Dijkstra's shortest path algorithm was invented in 1956 by the Dutch computer scientist
Edsger W. Dijkstra during a twenty-minute coffee break, while out shopping with his
fiancée in Amsterdam.
 Dijkstra's algorithm is used for solving single-source
shortest path problems for directed or undirected paths.
 Dijkstra's algorithm finds the shortest path from one vertex
to all other vertices.
 It does so by repeatedly selecting the nearest unvisited
vertex and calculating the distance to all the unvisited
neighboring vertices.
Greedy Technique
Prof. Dr. K. Adisesha
95
Dijkstra’s Algorithm:
Dijkstra's algorithm is used for solving single-source shortest path problems for directed
or undirected paths.
1. Set initial distances for all vertices: 0 for the source vertex, and infinity for all the others.
2. Choose the unvisited vertex with the shortest distance from the start to be current vertex.
3. For each of the current vertex's unvisited neighbor vertices, calculate the distance from
the source and update the distance if the new, calculated, distance is lower.
4. We are now done with the current vertex, so we mark it as visited. A visited vertex is not
checked again.
5. Go back to step 2 to choose a new current vertex, and keep repeating these steps until all
vertices are visited.
6. In the end we are left with the shortest path from the source vertex to every other vertex.
Greedy Technique
Prof. Dr. K. Adisesha
96
Dijkstra’s Algorithm:
Given a weighted graph and a source vertex in the graph, find the shortest paths from the
source to all the other vertices in the given graph.
Input: src = 0, the graph.
Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8
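The six steps can be sketched as follows. The 9-vertex adjacency matrix below is a standard textbook example; it is an assumption that this is the graph behind the slide's figure, but it does reproduce the quoted output 0 4 12 19 21 11 9 8 14:

```javascript
// Dijkstra's algorithm on an adjacency matrix (0 = no edge).
// Repeatedly pick the nearest unvisited vertex and relax its neighbours.
function dijkstra(graph, src) {
  var n = graph.length;
  var dist = [], visited = [];
  for (var v = 0; v < n; v++) { dist[v] = Infinity; visited[v] = false; }
  dist[src] = 0;
  for (var i = 0; i < n; i++) {
    // step 2: choose the unvisited vertex with the smallest distance
    var u = -1;
    for (var w = 0; w < n; w++) {
      if (!visited[w] && (u === -1 || dist[w] < dist[u])) u = w;
    }
    visited[u] = true; // step 4: mark as visited, never check again
    // step 3: relax every edge out of u
    for (var x = 0; x < n; x++) {
      if (graph[u][x] > 0 && dist[u] + graph[u][x] < dist[x]) {
        dist[x] = dist[u] + graph[u][x];
      }
    }
  }
  return dist;
}

// Assumed 9-vertex example graph matching the quoted output.
var graph = [[0, 4, 0, 0, 0, 0, 0, 8, 0],
             [4, 0, 8, 0, 0, 0, 0, 11, 0],
             [0, 8, 0, 7, 0, 4, 0, 0, 2],
             [0, 0, 7, 0, 9, 14, 0, 0, 0],
             [0, 0, 0, 9, 0, 10, 0, 0, 0],
             [0, 0, 4, 14, 10, 0, 2, 0, 0],
             [0, 0, 0, 0, 0, 2, 0, 1, 6],
             [8, 11, 0, 0, 0, 0, 1, 0, 7],
             [0, 0, 2, 0, 0, 0, 6, 7, 0]];
console.log(dijkstra(graph, 0)); // [0, 4, 12, 19, 21, 11, 9, 8, 14]
```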
Greedy Technique
Prof. Dr. K. Adisesha
97
Lower Bound Theory:
According to the lower bound theory, for a lower bound L(n) of an algorithm, it is not
possible to have any other algorithm (for a common problem) whose time complexity is
less than L(n) for random input.
 Let L(n) be the running time of an algorithm A(say), then g(n) is the Lower Bound of A if
there exist two constants C and N such that L(n) >= C*g(n) for n > N.
 Lower bound of an algorithm is shown by the asymptotic notation called Omega.
 Consider sorting three numbers a1, a2, and a3. There are 3! = 6 possible combinations:
 (a1, a2, a3), (a1, a3, a2),
 (a2, a1, a3), (a2, a3, a1),
 (a3, a1, a2), (a3, a2, a1)
Greedy Technique
Prof. Dr. K. Adisesha
98
Decision Tree Algorithms:
Decision trees are a type of machine-learning algorithm that can be used for both
classification and regression tasks.
 The algorithm works by recursively splitting the data into smaller and smaller subsets based
on the feature values.
 Before we dive into Decision Trees, we need to know the following important terms:
 Root Node: It is the topmost node in the tree, which represents the complete dataset. It is the
starting point of the decision-making process.
 Internal Node: A node that symbolizes a choice regarding an input feature. Branching off of
internal nodes connects them to leaf nodes or other internal nodes.
 Leaf/Terminal Node: A node without any child nodes that indicates a class label or a
numerical value.
Greedy Technique
Prof. Dr. K. Adisesha
99
Decision Tree Algorithms:
A decision tree is a full binary tree that shows the comparisons between elements that are
executed by an appropriate sorting algorithm operating on an input of a given size.
Control, data movement, and all other conditions of the algorithm are ignored.
 In a decision tree for sorting an array of length n, there are n! leaves, one for each
possible ordering of the input (each leaf corresponds to a distinct sequence of
comparisons).
 If the tree height is h, then a binary tree of height h has at most 2^h leaves, so
n! ≤ 2^h (the tree is binary).
 Hence h ≥ log2(n!) = Ω(n log n), the lower bound for any comparison-based sort.
Greedy Technique
Prof. Dr. K. Adisesha
100
P Class Problem:
P-type problems are a class of problems in computational complexity theory that can be
solved by a deterministic Turing machine within a polynomial amount of time.
 This is also known as polynomial-time deterministic algorithms.
 P: the class of problems that have polynomial-time deterministic algorithms.
 A deterministic algorithm is (essentially) one that always computes the correct
answer.
 These problems are easy to understand and tractable.
 Sample Problems in P
 Searching
 Element uniqueness
 Graph connectivity
Greedy Technique
Prof. Dr. K. Adisesha
101
NP Class Problem:
A problem is said to be in NP (nondeterministic polynomial time) if it can be solved in
polynomial time by a nondeterministic Turing machine.
 NP stands for “Nondeterministic Polynomial-time”
 The solutions to the NP class problem are hard to find since they are being solved by a
non-deterministic machine.
 Some of the examples of NP problems are given as follows:
 Traveling Salesman
 N-Queens
 Classroom Scheduling
 Packing
 Scheduling
Greedy Technique
Prof. Dr. K. Adisesha
102
NP-Complete Problems:
NP-Complete (or NPC) problems are a set of problems in NP that are all linked to one another.
 If anyone finds a polynomial-time algorithm for even one NP-complete problem, it
implies a polynomial-time algorithm for all NP-complete problems.
 NP-complete problems are a subset of the larger class of NP (nondeterministic
polynomial time) problems.
 NP problems are a class of computational problems that can be solved in polynomial
time by a non-deterministic machine and can be verified in polynomial time by a
deterministic Machine.
Greedy Technique
Prof. Dr. K. Adisesha
103
N Queen Problem:
The goal of the N-Queen Problem is to place N queens on an N×N chessboard so that no
two queens can attack each other.
 In the 4-queens problem, the solver starts at the leftmost column and successively places
one queen in each column, at a location that is not attacked by any previously placed
queens.
 One constraint is that there can be only one queen in a column
 Second constraint prohibits two queens on the same diagonal
 Third constraint prohibits queens on the same row
Greedy Technique
Prof. Dr. K. Adisesha
104
N Queen Problem:
The N Queen is the problem of placing N chess queens on an N×N chessboard so that no
two queens attack each other.
 Given an integer n, return all distinct solutions to the n-queens puzzle. You may return
the answer in any order.
Input: n = 4
Output: [[".Q..","...Q","Q...","..Q."],
["..Q.","Q...","...Q",".Q.."]]
Greedy Technique
Prof. Dr. K. Adisesha
105
Job Sequencing Problem:
Given an array of jobs where every job has a deadline and associated profit if the job is
finished before the deadline.
 Using Greedy method choose the jobs with maximum profit first, by sorting the jobs in
decreasing order of their profit.
 Follow the given steps to solve the problem:
 Sort all jobs in decreasing order of profit.
 Iterate on jobs in decreasing order of profit. For each job , do the following :
 Find a time slot i, such that slot is empty and i < deadline and i is greatest.
 Put the job in this slot and mark this slot filled.
 If no such i exists, then ignore the job.
Greedy Technique
Prof. Dr. K. Adisesha
106
Job Sequencing Problem:
Consider the following tasks with their deadlines and profits. Schedule the tasks in such a
way that they produce maximum profit after being executed.
 Step 1: Find the maximum deadline value, dm, from the deadlines given.
 Step 2: Arrange the jobs in descending order of their profits.
 The maximum deadline, dm, is 4, so at most four jobs can be scheduled, one in each of
the time slots 1 to 4.
 The set of jobs scheduled within their deadlines is {J3, J2, J4, J5}, giving the
maximum profit of 280 (40 + 60 + 100 + 80).
Jobs J1 J2 J3 J4 J5
Deadlines 2 2 1 3 4
Profits 20 60 40 100 80
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20
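The greedy steps above can be sketched as follows (the field names id, deadline and profit are assumptions for illustration):

```javascript
// Greedy job sequencing: take jobs in decreasing order of profit and put
// each one into the latest still-empty time slot at or before its deadline.
function jobSequencing(jobs) {
  var sorted = jobs.slice().sort(function (a, b) { return b.profit - a.profit; });
  var slot = {}; // slot[t] = id of the job scheduled at time t (1-based)
  var total = 0;
  for (var k = 0; k < sorted.length; k++) {
    for (var t = sorted[k].deadline; t >= 1; t--) {
      if (!(t in slot)) {       // latest free slot not after the deadline
        slot[t] = sorted[k].id;
        total += sorted[k].profit;
        break;
      }
    }
    // if no free slot exists before the deadline, the job is skipped
  }
  return { schedule: slot, profit: total };
}
```

With the table's data the greedy schedule fills all four time slots, dropping only the lowest-profit job J1.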
Design and Analysis of Algorithms
Prof. Dr. K. Adisesha
107
Thank you:
All the best

Design and Analysis of Algorithms ppt by K. Adi

  • 1.
    Design and Analysis ofAlgorithms Prof. Dr. K ADISESHA
  • 2.
    Introduction Algorithm Characteristics Problem Solving AlgorithmAnalysis Algorithm Design 2 Design and Analysis of Algorithms Asymptotic Notations
  • 3.
    INTRODUCTION Prof. Dr. K.Adisesha 3 Algorithm: A finite set of instruction that specifies a sequence of operation is to be carried out in order to solve a specific problem or class of problems is called an Algorithm.  The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammed ibn Musa al Khowarizmi (c. 825 A.D.), who wrote a textbook on mathematics.  Algorithms are generally created independent of underlying languages.  The notion of the algorithm illustrates some important points:  The non-ambiguity requirement for each step of an algorithm cannot be compromised.  The range of inputs for which an algorithm works has to be specified carefully.  There may exist several algorithms for solving the same problem.  Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds.
  • 4.
    INTRODUCTION Prof. Dr. K.Adisesha 4 Algorithm: A finite set of instruction that specifies a sequence of operation is to be carried out in order to solve a specific problem or class of problems is called an Algorithm.  Steps for writing an algorithm:  An Algorithm is a procedure. It has two part: head and body section.  The Head section consists of keyword Algorithm and Name of the algorithm with parameter list. E.g. Algorithm name1(p1, p2,…,p3)  In the body of an algorithm various programming constructs like if, for, while and some statements like assignments are used. The compound statements may be enclosed with { and } brackets.  Comments are written using // at the beginning.
  • 5.
    Algorithm Prof. Dr. K.Adisesha 5 Euclid’s algorithm: Euclid’s algorithm is based on applying repeatedly the equality gcd(m, n) = gcd(n, m mod n), where m mod n is the remainder of the division of m by n, until m mod n is equal to 0.  Euclid’s algorithm for computing gcd(m, n) in simple steps Step 1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2. Step 2: Divide m by n and assign the value of the remainder to r. Step 3: Assign the value of n to m and the value of r to n. Go to Step 1. ALGORITHM Euclid_gcd(m, n) //Computes gcd(m, n) by Euclid’s algorithm //Input: Two non-zero integers m and n //Output: Greatest common divisor of m and n while n ≠ 0 do r ←m mod n m←n n←r return m
  • 6.
    Algorithm Prof. Dr. K.Adisesha 6 Euclid’s algorithm: Euclid’s algorithm is based on applying repeatedly the equality gcd(m, n) = gcd(n, m mod n), where m mod n is the remainder of the division of m by n, until m mod n is equal to 0.  Euclid’s algorithm is based on repeated application of equality  gcd(m,n) = gcd(n, m mod n)  Example:  Euclid’s algorithm for computing gcd(60, 24) in simple steps  gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
  • 7.
    Algorithm Prof. Dr. K.Adisesha 7 Consecutive integer checking algorithm: Other methods for computing gcd(m,n). Compute the product of all the common prime factors and return it as gcd(m,n)  Algorithm gcd(m,n) Step 1: Assign the value of min{m,n} to t Step 2: Divide m by t. If the remainder is 0, go to Step 3; otherwise, go to Step 4 Step 3: Divide n by t. If the remainder is 0, return t and stop; otherwise, go to Step 4 Step 4: Decrease t by 1 and go to Step 2  This is slower than Euclid’s algorithm
  • 8.
    Algorithm Prof. Dr. K.Adisesha 8 Characteristics of an Algorithm: An algorithm should have the following characteristics −.  Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs should be clear and must lead to only one meaning.  Input: Zero / more quantities are externally supplied.  Output: At least one quantity is produced.  Definiteness: Each instruction is clear and unambiguous.  Finiteness: If the instructions of an algorithm is traced then for all cases the algorithm must terminates after a finite number of steps.  Efficiency: Every instruction must be very basic and runs in short time.
  • 9.
    Algorithm Prof. Dr. K.Adisesha 9 Algorithmic Problem Solving: A sequence of steps involved in designing and analyzing an algorithm is shown in the figure below.  Understanding the Problem  Read the problem’s description carefully to understand the problem statement completely.  Ask questions for clarifying the doubts about the problem.  Identify the problem types and use existing algorithm to find solution.  Input (instance) to the problem and range of the input get fixed.
  • 10.
    Algorithm Prof. Dr. K.Adisesha 10 Algorithmic Problem Solving: A sequence of steps involved in designing and analyzing an algorithm is shown in the figure below.  Analyzing an Algorithm  For an algorithm the most important is efficiency. In fact, there are two kinds of algorithm efficiency. They are:  Time efficiency: indicating how fast the algorithm runs.  Space efficiency: indicating how much extra memory it uses.  So factors to analyze an algorithm are:  Time efficiency of an algorithm  Space efficiency of an algorithm  Simplicity of an algorithm  Generality of an algorithm
  • 11.
    Algorithm Prof. Dr. K.Adisesha 11 Problem Types : The most important problem types are:  Sorting  Searching  String processing  Graph problems  Combinatorial problems  Geometric problems  Numerical problems
  • 12.
    Algorithm Prof. Dr. K.Adisesha 12 Design Strategies: There are various types of strategies in order to design algorithms for various problems. Some of them are listed as follows:  Divide & Conquer Approach  Greedy Approach  Dynamic Programming Approach  Randomization Approach  Approximation Approach  Recursive Approach  Branch and Bound Approach
  • 13.
    Algorithm Prof. Dr. K.Adisesha 13 Design Strategies: There are various types of strategies in order to design algorithms for various problems. Some of them are listed as follows:  Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the divide & conquer techniques involve three steps:  Divide the original problem into a set of subproblems.  Solve every subproblem individually, recursively.  Combine the solution of the subproblems (top level) into a solution of the whole original problem.  Greedy Technique: Greedy method is used to solve the optimization problem in which we are given a set of input values, which are required either to be maximized or minimized (known as objective), i.e. some constraints or conditions.
  • 14.
    Algorithm Prof. Dr. K.Adisesha 14 Design Strategies: There are various types of strategies in order to design algorithms for various problems. Some of them are listed as follows:  Dynamic Programming: Dynamic Programming is a bottom-up approach we solve all possible small problems and then combine them to obtain solutions for bigger problems.  Branch and Bound: In Branch & Bound algorithm a given subproblem, which cannot be bounded, has to be divided into at least two new restricted subproblems.  Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation.  Backtracking Algorithm: Backtracking Algorithm tries each possibility until they find the right one. It is a depth-first search of the set of possible solution.
  • 15.
    Algorithm Prof. Dr. K.Adisesha 15 Algorithm Analysis: Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation. They are the following .  A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by assuming factors, like processor speed, are constant and have no effect on the implementation.  A Posterior Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented using programming language. In this analysis, actual statistics like running time and space required, are collected.
  • 16.
    Algorithm Prof. Dr. K.Adisesha 16 Analysis of Algorithm Efficiency: The efficiency of an algorithm can be in terms of time and space.  The algorithm efficiency can be analyzed by the following ways.  Analysis Framework.  Asymptotic Notations and its properties.  Mathematical analysis for Recursive algorithms.  Mathematical analysis for Non-recursive algorithms.
  • 17.
    Algorithm Prof. Dr. K.Adisesha 17 Algorithm Complexity: Step Count Method− The step Count method is also called as Frequency Count method.  we count the number of times each instruction is executed.
  • 18.
    Algorithm  Calculate  Lines1 and 4 count for one unit each  Line 3: executed N times, each time four units  Line 2: (1 for initialization, N+1 for all the tests, N for all the increments) total 2N + 2  total cost: 6N + 4  O(N)   N i i 1 3 1 2 3 4 1 2N+2 4N 1
  • 19.
    Algorithm Prof. Dr. K.Adisesha 19 Analysis of Algorithms: To analyse the feasibility of an algorithm designed, we calculate the complexity of it. This is represented in three notations, called asymptotic notations. Asymptotic notations are as follows −  Worst-Case Scenario − Big Oh and Little Oh Notations  Best-Case Scenario − Big Omega and Little Omega Notations  Average-Case Scenario − Theta Notation
  • 20.
    Algorithm Prof. Dr. K.Adisesha 20 Asymptotic Analysis of an algorithm: Asymptotic analysis of an algorithm refers to Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation of its run-time performance.  The Asymptotic analysis of an algorithm falls under three types:  Best Case − Minimum time required for program execution.  Average Case − Average time required for program execution.  Worst Case − Maximum time required for program execution.
  • 21.
    Algorithm Prof. Dr. K.Adisesha 21 Asymptotic Notations of an algorithm: Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm..  The Asymptotic Notations of an algorithm falls under three types:  Big Oh Notation, Ο− It measures the worst case time complexity.  Omega Notation, Ω− It measures the best case time complexity.  Theta Notation, θ− It measures the Average case time complexity.
  • 22.
    Algorithm Prof. Dr. K.Adisesha 22 Big Oh Notation, ( Ο )of an algorithm: The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.  It measures the worst case time complexity or the longest amount of time an algorithm can possibly take to complete.  For example, for a function f(n)  It is represented as follows  f (n) k.g (n)f(n) k.g(n) for n>n0n>n0 in all case ⩽ ⩽  Example:  3n+2=O(n) as 3n+2≤4n for all n≥2  3n+3=O(n) as 3n+3≤4n for all n≥3
  • 23.
Omega Notation (Ω) of an algorithm: The notation Ω(g(n)) is the formal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, the least time an algorithm can possibly take to complete.
- Definition: f(n) = Ω(g(n)) if there exist constants k > 0 and n0 such that f(n) ≥ k·g(n) for all n ≥ n0.
- Example: f(n) = 8n² + 2n − 3 ≥ 8n² − 3 = 7n² + (n² − 3) ≥ 7n² for all n ≥ 2; thus f(n) = Ω(n²) with k = 7 and n0 = 2.
  • 24.
Theta Notation (θ) of an algorithm: The notation θ(g(n)) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.
- Definition: f(n) = θ(g(n)) if there exist constants k1, k2 > 0 and n0 such that k1·g(n) ≤ f(n) ≤ k2·g(n) for all n ≥ n0.
- Example: 3n+2 = θ(n), since 3n ≤ 3n+2 ≤ 4n for all n ≥ 2; here k1 = 3, k2 = 4 and n0 = 2.
- Equivalently, f(n) = θ(g(n)) if and only if f(n) = Ο(g(n)) and f(n) = Ω(g(n)).
  • 25.
Common Asymptotic Notations: Following is a list of some common asymptotic notations.
- constant: Ο(1)
- logarithmic: Ο(log n)
- linear: Ο(n)
- n log n: Ο(n log n)
- quadratic: Ο(n²)
- cubic: Ο(n³)
- polynomial: n^Ο(1)
- exponential: 2^Ο(n)
  • 26.
Compare the Rates of Change of Two Functions Using Limits: In order to determine the relationship between f(n) and g(n), it is often useful to apply L'Hopital's rule. Let f(x) and g(x) be differentiable functions with derivatives f′(x) and g′(x), respectively, such that f(x) → ∞ and g(x) → ∞ as x → ∞. Then lim x→∞ f(x)/g(x) = lim x→∞ f′(x)/g′(x).
  • 27.
Compare the Rates of Change of Two Functions Using Limits: Example: Given f(n) = 5n+3, prove that f(n) = O(n). Taking g(n) = n, lim n→∞ f(n)/g(n) = lim n→∞ (5n+3)/n = 5. Since this limit is a nonzero constant, f(n) is of the same growth order as g(n), so f(n) = O(n).
  • 28.
Recurrence: Recursion is a fundamental concept in computer science and mathematics that allows functions to call themselves, enabling the solution of complex problems by reducing them to smaller instances of the same problem.
- There are four methods for solving a recurrence:
- Substitution Method
- Iteration Method
- Recursion Tree Method
- Master Method
- For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence T(n) = θ(1) if n = 1, and T(n) = 2T(n/2) + θ(n) if n > 1.
  • 29.
Iteration Method: Expand the recurrence and express it as a summation of terms of n and the initial condition.
- Example: Consider the recurrence T(n) = 1 if n = 1, and T(n) = 2T(n−1) if n > 1.
- Solution:
T(n) = 2T(n−1)
     = 2[2T(n−2)] = 2² T(n−2)
     = 4[2T(n−3)] = 2³ T(n−3)
     = 8[2T(n−4)] = 2⁴ T(n−4)    (Eq. 1)
Repeating the procedure i times gives T(n) = 2^i T(n−i). Putting n−i = 1, i.e. i = n−1, in (Eq. 1):
T(n) = 2^(n−1) T(1) = 2^(n−1) · 1 = 2^(n−1)    {T(1) = 1 ..... given}
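The closed form derived above can be checked numerically; a minimal JavaScript sketch (the function names T and closedForm are illustrative, not from the slides):

```javascript
// Recurrence from the slide: T(1) = 1, T(n) = 2*T(n-1) for n > 1.
function T(n) {
  return n === 1 ? 1 : 2 * T(n - 1);
}

// Closed form obtained by the iteration method: T(n) = 2^(n-1).
function closedForm(n) {
  return 2 ** (n - 1);
}
```

Evaluating both at a few values of n confirms they agree, e.g. T(10) = closedForm(10) = 512.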
  • 30.
Recursion Tree: A recursion tree is a graphical representation that illustrates the execution flow of a recursive function. It provides a visual breakdown of recursive calls.
- Each node in a recursion tree represents a particular recursive call. The initial call is depicted at the top, with subsequent calls branching out beneath it.
/* How the function calls unwind for Factorial(5) [ 120 ]:
   5 * Factorial(4) ==> 120
   4 * Factorial(3) ==> 24
   3 * Factorial(2) ==> 6
   2 * Factorial(1) ==> 2
   1 */
// find factorial of a number
factorial(n) {
  if n is less than 2:          // factorial of 0 or 1 is 1
    return 1
  else                          // recursive step
    return n * factorial(n-1);  // Factorial(5) => 5 * Factorial(4)...
}
  • 31.
Brute force approach: A brute force approach is an approach that examines all the possible solutions in order to find a satisfactory solution to a given problem.
- The brute force algorithm tries out all the possibilities until a satisfactory solution is found.
- Such an algorithm can be of two types:
- Optimizing: In this case, the best solution is found. To find the best solution, it may need to examine all the possible solutions. For example: finding the best path for the travelling salesman problem.
- Satisficing: It stops as soon as a satisfactory solution is found. For example, finding a travelling salesman path which is within 10% of optimal.
  • 32.
    Brute Force Approach Prof.Dr. K. Adisesha 32 Brute force approach: A brute force approach is an approach that finds all the possible solutions to find a satisfactory solution to a given problem.  The following are the advantages of the brute-force algorithm:  This algorithm finds all the possible solutions, and it also guarantees that it finds the correct solution to a problem.  This type of algorithm is applicable to a wide range of domains.  It is mainly used for solving simpler and small problems.  It can be considered a comparison benchmark to solve a simple problem and does not require any particular domain knowledge.
  • 33.
    Brute Force Approach Prof.Dr. K. Adisesha 33 Linear search: Linear Searching: Also called Sequential Searching.  It is used to find out the location of the data item if it exists in the given collection of data items.  Example Searching element 33 from the array of elements:  The complexity of linear search is therefore O(n).
  • 34.
    Brute Force Approach Prof.Dr. K. Adisesha 34 Linear search: Linear Searching: Also called Sequential Searching.
  • 35.
Linear search: Time and Space Complexity of Linear Search Algorithm:
- Time Complexity:
- Best Case: the key might be present at the first index, so the best-case complexity is O(1).
- Worst Case: the key might be present at the last index, i.e. at the end opposite to where the search started. So the worst-case complexity is O(N), where N is the size of the list.
- Average Case: O(N).
- Auxiliary Space: O(1), since apart from the variable used to iterate through the list, no other variable is needed.
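A minimal JavaScript sketch of sequential search matching the complexities above (linearSearch is an illustrative name, not from the slides):

```javascript
// Linear (sequential) search: scan left to right, return the index of
// `key`, or -1 if it is absent. Best case O(1), worst/average O(n), O(1) space.
function linearSearch(arr, key) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === key) return i;   // found: stop immediately
  }
  return -1;                        // scanned the whole list without a match
}
```

For example, searching 33 in [10, 20, 33, 40] returns index 2 after three comparisons.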
  • 36.
    Brute Force Approach Prof.Dr. K. Adisesha 36 Sorting in Arrays: A Sorting Algorithm is used to rearrange a given array or list elements according to a comparison operator on the elements:  The comparison operator is used to decide the new order of element in the respective data structure.  Types of Sorting Algorithms are:  Bubble Sort  Selection Sort  Insertion Sort  Merge Sort  Quick Sort  Heap Sort
  • 37.
Bubble Sort in Arrays: Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements that are in the wrong order.
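The repeated adjacent-swap idea can be sketched in JavaScript (bubbleSort is an illustrative name; sorting a copy rather than in place is a choice made here, not taken from the slides):

```javascript
// Bubble sort: after pass i, the i largest elements have "bubbled"
// to the end of the array. O(n^2) comparisons in the worst case.
function bubbleSort(arr) {
  const a = arr.slice();                        // sort a copy, leave input intact
  for (let i = 0; i < a.length - 1; i++) {
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];    // swap adjacent out-of-order pair
      }
    }
  }
  return a;
}
```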
  • 38.
    Brute Force Approach Prof.Dr. K. Adisesha 38 Selection Sort:
  • 39.
    Brute Force Approach Prof.Dr. K. Adisesha 39 Selection Sort:
  • 40.
Selection Sort: Complexity Analysis of Selection Sort.
Input: Given n input elements.
Output: Number of steps incurred to sort the list.
Logic: If we are given n elements, then in the
- first pass, it will do n-1 comparisons;
- second pass, it will do n-2; in the third pass,
- it will do n-3, and so on.
Therefore, the selection sort algorithm has a time complexity of O(n²) and a space complexity of O(1), since it needs only a constant amount of extra memory: the temp variable used for swapping.
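The pass structure described above can be sketched in JavaScript (selectionSort is an illustrative name; operating on a copy is a choice made here):

```javascript
// Selection sort: pass i finds the minimum of a[i..n-1] (n-1-i comparisons)
// and swaps it into position i. O(n^2) time, O(1) extra space.
function selectionSort(arr) {
  const a = arr.slice();
  for (let i = 0; i < a.length - 1; i++) {
    let min = i;                              // index of smallest in a[i..]
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) min = j;
    }
    if (min !== i) [a[i], a[min]] = [a[min], a[i]];  // one swap per pass
  }
  return a;
}
```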
  • 41.
    Exhaustive search Approach Prof.Dr. K. Adisesha 41 Exhaustive search: Exhaustive search, also known as brute-force search or generate and test, is a problem- solving technique that involves systematically checking all possible candidates to see if they meet the problem's requirements. Exhaustive Search:  Traveling Salesman Problem  Knapsack Problem  Assignment Problem  Graph Traversals
  • 42.
    Exhaustive search Approach Prof.Dr. K. Adisesha 42 Traveling Salesman Problem: The Travelling Salesman Problem (TSP) is a classic algorithmic problem in the field of computer science and operations research, focusing on optimization. It seeks the shortest possible route that visits every point in a set of locations just once.  To solve the TSP using the brute-force approach, you must calculate the total number of routes and then draw and list all the possible routes.  Calculate the distance of each route and then choose the shortest one — this is the optimal solution.
  • 43.
Traveling Salesman Problem: To solve the TSP using the brute-force approach, you must calculate the total number of routes and then draw and list all the possible routes, each visiting every city exactly once.
- Calculate the distance of each route and then choose the shortest one; this is the optimal solution.
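The brute-force route enumeration can be sketched in JavaScript over a distance matrix (the function name tsp and the matrix representation are assumptions for illustration, not from the slides):

```javascript
// Brute-force TSP: try every permutation of the non-start cities and
// keep the cheapest round trip from city 0. O(n!) routes are examined.
function tsp(dist) {
  const n = dist.length;
  const cities = [];
  for (let i = 1; i < n; i++) cities.push(i);   // cities other than the start
  let best = Infinity;
  const permute = (rest, current, cost) => {
    if (rest.length === 0) {
      best = Math.min(best, cost + dist[current][0]);  // close the tour
      return;
    }
    for (let i = 0; i < rest.length; i++) {
      const next = rest[i];
      const remaining = rest.filter((_, j) => j !== i);
      permute(remaining, next, cost + dist[current][next]);
    }
  };
  permute(cities, 0, 0);
  return best;
}
```

On the classic 4-city matrix [[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]] the shortest tour costs 80 (route 0-1-3-2-0).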
  • 44.
    Exhaustive search Approach Prof.Dr. K. Adisesha 44 Knapsack Problem: In the knapsack problem, you need to pack a set of items, with given values and sizes (such as weights or volumes), into a container with a maximum capacity.  If the total size of the items exceeds the capacity, you can't pack them all. In that case, the problem is to choose a subset of the items of maximum total value that will fit in the container.  The data includes the following:  Weights: A vector containing the weights of the items.  Values: A vector containing the values of the items.  Capacities: A vector with just one entry, the capacity of the knapsack.
  • 45.
Knapsack Problem: The knapsack problem is an example of a combinatorial optimization problem. This problem is also commonly known as the "Rucksack Problem".
- The knapsack problem can be classified into the following types:
- Fractional Knapsack Problem
- 0/1 Knapsack Problem
- Bounded Knapsack Problem
- Unbounded Knapsack Problem
  • 46.
    Exhaustive search Approach Prof.Dr. K. Adisesha 46 Fractional Knapsack Problem: Given the weights and values of N items, put these items in a knapsack of capacity W to get the maximum total value in the knapsack. In Fractional Knapsack, we can break items for maximizing the total value of the knapsack.
  • 47.
  • 48.
    Exhaustive search Approach Prof.Dr. K. Adisesha 48 Fractional Knapsack Problem: Example: Consider the example: arr[] = {{100, 20}, {60, 10}, {120, 30}}, W = 50. Sorting: Initially sort the array based on the profit/weight ratio. The sorted array will be {{60, 10}, {100, 20}, {120, 30}}. Iteration: For i = 0, weight = 10 which is less than W. So add this element in the knapsack. profit = 60 and remaining W = 50 – 10 = 40. For i = 1, weight = 20 which is less than W. So add this element too. profit = 60 + 100 = 160 and remaining W = 40 – 20 = 20. For i = 2, weight = 30 is greater than W. So add 20/30 fraction = 2/3 fraction of the element. Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and remaining W becomes 0.
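The greedy procedure traced above can be sketched in JavaScript (fractionalKnapsack and the {value, weight} item shape are assumptions for illustration):

```javascript
// Greedy fractional knapsack: sort items by value/weight ratio (descending)
// and take as much of the best-ratio item as still fits; items may be split.
function fractionalKnapsack(items, W) {
  const sorted = items.slice().sort(
    (a, b) => b.value / b.weight - a.value / a.weight);
  let remaining = W, profit = 0;
  for (const it of sorted) {
    if (remaining === 0) break;
    const take = Math.min(it.weight, remaining);  // whole item, or a fraction
    profit += it.value * (take / it.weight);
    remaining -= take;
  }
  return profit;
}
```

For the slide's data (values 100/60/120, weights 20/10/30, W = 50) this yields a profit of 240, matching the worked example.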
  • 49.
    Exhaustive search Approach Prof.Dr. K. Adisesha 49 0/1 Knapsack Problem: We are given N items where each item has some weight (wi) and value (vi) associated with it. We are also given a bag with capacity W. The target is to put the items into the bag such that the sum of values associated with them is the maximum possible.
  • 50.
    Exhaustive search Approach Prof.Dr. K. Adisesha 50 0/1 Knapsack Problem: The 0/1 Knapsack Problem states that you have a backpack with a weight limit, and you are in a room full of treasures, each treasure with a value and a weight.  To solve the 0/1 Knapsack Problem using brute force means to:  Calculate the value of every possible combination of items in the knapsack.  Discard the combinations that are heavier than the knapsack weight limit.  Choose the combination of items with the highest total value.
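The three brute-force steps above can be sketched with a bitmask enumeration of all subsets (knapsackBruteForce is an illustrative name, not from the slides):

```javascript
// Brute-force 0/1 knapsack: enumerate all 2^n subsets of items, discard
// those over the weight limit, and keep the best total value.
function knapsackBruteForce(weights, values, W) {
  const n = weights.length;
  let best = 0;
  for (let mask = 0; mask < (1 << n); mask++) {  // each mask = one combination
    let w = 0, v = 0;
    for (let i = 0; i < n; i++) {
      if (mask & (1 << i)) { w += weights[i]; v += values[i]; }
    }
    if (w <= W) best = Math.max(best, v);        // feasible: candidate answer
  }
  return best;
}
```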
  • 51.
  • 52.
0/1 Knapsack Problem: A dynamic programming solution for the 0/1 knapsack problem runs in pseudo-polynomial time. Assume w1, w2, …, wn, W are strictly positive integers.
- We can define m[i, w] (the maximum value achievable with the first i items and capacity w) recursively as follows: m[0, w] = 0; m[i, w] = m[i−1, w] if wi > w; otherwise m[i, w] = max(m[i−1, w], m[i−1, w−wi] + vi).
- Example: Wi = {4, 3, 2, 1}, Vi = {5, 4, 3, 2}, Capacity = 6.
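The m[i, w] table can be filled bottom-up; a JavaScript sketch (knapsackDP is an illustrative name):

```javascript
// Bottom-up DP for 0/1 knapsack: m[i][w] = best value using the first i
// items with capacity w. O(nW) pseudo-polynomial time and space.
function knapsackDP(weights, values, W) {
  const n = weights.length;
  const m = Array.from({ length: n + 1 }, () => new Array(W + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let w = 0; w <= W; w++) {
      m[i][w] = m[i - 1][w];                     // option 1: skip item i
      if (weights[i - 1] <= w) {                 // option 2: take item i
        m[i][w] = Math.max(m[i][w],
          m[i - 1][w - weights[i - 1]] + values[i - 1]);
      }
    }
  }
  return m[n][W];
}
```

For the slide's example (weights {4, 3, 2, 1}, values {5, 4, 3, 2}, capacity 6) the optimum is 9, taking the items of weight 3, 2 and 1.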
  • 53.
0/1 Knapsack Problem: Given weights and profits of N items, put items in a knapsack of capacity W. List all possible solutions in which no remaining item's weight is less than the remaining capacity of the knapsack, along with the maximum profit.
Input: Profits[] = {60, 100, 120, 50}, Weights[] = {10, 20, 30, 40}, W = 40
Output: {10: 60, 20: 100}, {10: 60, 30: 120}; Maximum Profit = 180
Explanation: The maximum profit among all the possible solutions is 180.
Input: Profits[] = {60, 100, 120, 50}, Weights[] = {10, 20, 30, 40}, W = 50
Output: {10: 60, 20: 100}, {10: 60, 30: 120}, {20: 100, 30: 120}; Maximum Profit = 220
Explanation: The maximum profit among all the possible solutions is 220.
  • 54.
Graph Traversal Algorithms: The graph is a non-linear data structure consisting of nodes and the edges connecting them. The edges may be directed or undirected.
- A graph can be represented as G(V, E). For example, the following graph can be represented as G({A, B, C, D, E}, {(A, B), (B, D), (D, E), (B, C), (C, A)}).
- The graph has two traversal algorithms:
- Breadth First Search
- Depth First Search
  • 55.
BFS algorithm: Breadth-first search is a graph traversal algorithm that starts traversing the graph from the root node and explores all the neighboring nodes.
- The applications of the breadth-first search algorithm are given as follows:
- BFS can be used to find the neighboring locations from a given source location.
- In a peer-to-peer network, the BFS algorithm can be used as a traversal method to find all the neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc., employ this process to find "seeds" and "peers" in the network.
- BFS can be used in web crawlers to create web page indexes. It is one of the main algorithms used to index web pages.
- BFS is used to determine shortest paths and minimum spanning trees.
- BFS is also used in Cheney's copying garbage-collection algorithm.
  • 56.
    Exhaustive search Approach Prof.Dr. K. Adisesha 56 BFS algorithm: Breadth-first search is a graph traversal algorithm that starts traversing the graph from the root node and explores all the neighboring nodes.  The steps involved in the BFS algorithm to explore a graph are given as follows - Step 1: SET STATUS = 1 (ready state) for each node in G Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state) Step 3: Repeat Steps 4 and 5 until QUEUE is empty Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state). Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state) [END OF LOOP] Step 6: EXIT.
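The queue-based steps above can be sketched in JavaScript, assuming an adjacency-list representation such as { A: ['B', 'C'], ... } (the representation and the function name bfs are assumptions, not from the slides):

```javascript
// BFS from a start vertex: visit vertices level by level using a queue.
// The `visited` set plays the role of STATUS 2/3 in the steps above.
function bfs(graph, start) {
  const visited = new Set([start]);
  const queue = [start];
  const order = [];
  while (queue.length > 0) {
    const node = queue.shift();          // dequeue and process (STATUS = 3)
    order.push(node);
    for (const next of graph[node] || []) {
      if (!visited.has(next)) {          // enqueue ready-state neighbours
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return order;
}
```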
  • 57.
Depth First Search: The depth-first search (DFS) algorithm starts with the initial node of graph G and goes deeper until we find the goal node or a node with no children.
- Applications of the DFS algorithm:
- The DFS algorithm can be used to implement topological sorting.
- It can be used to find the paths between two vertices.
- It can also be used to detect cycles in the graph.
- The DFS algorithm is also used for puzzles that have only one solution.
- DFS is used to determine whether a graph is bipartite or not.
  • 58.
Depth First Search: The depth-first search (DFS) algorithm starts with the initial node of graph G and goes deeper until we find the goal node or a node with no children. The step-by-step process to implement the DFS traversal is given as follows:
1) First, create a stack with the total number of vertices in the graph.
2) Choose any vertex as the starting point of the traversal, and push that vertex onto the stack.
3) Push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the stack.
4) Repeat step 3 until no unvisited vertices are reachable from the vertex on the stack's top.
5) If no vertex is left, go back and pop a vertex from the stack.
6) Repeat steps 3, 4, and 5 until the stack is empty.
  • 59.
    Exhaustive search Approach Prof.Dr. K. Adisesha 59 Depth First Search: The depth-first search (DFS) algorithm starts with the initial node of graph G and goes deeper until we find the goal node or the node with no children. Algorithm Step 1: SET STATUS = 1 (ready state) for each node in G Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state) Step 3: Repeat Steps 4 and 5 until STACK is empty Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state) Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state) [END OF LOOP] Step 6: EXIT
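The stack-based steps above can be sketched in JavaScript, again assuming an adjacency-list representation (dfs is an illustrative name, not from the slides):

```javascript
// Iterative DFS with an explicit stack. Neighbours are pushed in reverse
// order so the first-listed neighbour is visited first.
function dfs(graph, start) {
  const visited = new Set();
  const stack = [start];
  const order = [];
  while (stack.length > 0) {
    const node = stack.pop();            // take from the top of the stack
    if (visited.has(node)) continue;
    visited.add(node);                   // process (STATUS = 3)
    order.push(node);
    const neighbours = (graph[node] || []).slice().reverse();
    for (const next of neighbours) {
      if (!visited.has(next)) stack.push(next);
    }
  }
  return order;
}
```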
  • 60.
Depth First Search: The depth-first search (DFS) algorithm starts with the initial node of graph G and goes deeper until we find the goal node or a node with no children.
  • 61.
Decrease and Conquer: A problem-solving strategy known as "Decrease and Conquer" involves shrinking the size of the input at every stage of the solution procedure.
- Like divide-and-conquer, it breaks the problem down into smaller sub-problems, but decrease-and-conquer reduces the input to a single smaller instance at each stage rather than splitting it into several parts.
- Algorithm (binary search as an example):
Step 1: Start
Step 2: Set p (the element whose position is to be found)
Step 3: Set the pointers first (at the first position) and last (at the last position)
Step 4: Find the middle element (mid)
Step 5: If p == mid element, print the middle element
Step 6: If p > mid element, repeat from Step 4 on the elements right of mid
Step 7: If p < mid element, repeat from Step 4 on the elements left of mid
  • 62.
Binary Searching: The binary search algorithm can be used only with a sorted list of elements.
- Binary search first divides a large array into two smaller sub-arrays and then recursively operates on one of the sub-arrays.
- Binary search basically reduces the search space to half at each step.
  • 63.
    Decrease and Conquer Prof.Dr. K. Adisesha 63 Binary Searching: The binary search algorithm can be used with only sorted list of elements.  Example: Searching the element 57 from the array of elements
  • 64.
    Decrease and Conquer Prof.Dr. K. Adisesha 64 Binary Searching:  Example:
  • 65.
    Decrease and Conquer Prof.Dr. K. Adisesha 65 Binary Searching:
  • 66.
Binary Searching Analysis:
- Input: an array A of size n, already sorted in ascending or descending order.
- Output: analysis of searching for an element item in the sorted array of size n.
- Logic: Let T(n) = number of comparisons of item with n elements in a sorted array. Set BEG = 1 and END = n, and compare the search item with the mid item.
- Case 1: item = A[mid]; then LOC = mid. This is the best case, and T(n) = 1.
- Case 2: item ≠ A[mid]; then we split the array into two equal parts of size n/2 and repeat the same process on the selected half until the search element is found.
- Each step costs one comparison of the search element with the mid element of the selected half, so T(n) = T(n/2) + 1, which gives T(n) = O(log n).
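The halving process can be sketched iteratively in JavaScript, using 0-based indices rather than the slide's BEG = 1 convention (binarySearch is an illustrative name):

```javascript
// Iterative binary search on a sorted ascending array: halve the search
// space each step, O(log n) comparisons in the worst case.
function binarySearch(arr, key) {
  let beg = 0, end = arr.length - 1;
  while (beg <= end) {
    const mid = Math.floor((beg + end) / 2);
    if (arr[mid] === key) return mid;   // found: best case is O(1)
    if (arr[mid] < key) beg = mid + 1;  // discard the left half
    else end = mid - 1;                 // discard the right half
  }
  return -1;                            // key is not present
}
```

For example, searching 57 in [11, 22, 33, 44, 57, 68] compares against 33, then 57, and returns index 4.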
  • 67.
    Decrease and Conquer Prof.Dr. K. Adisesha 67 Insertion Sorting: Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time.  This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted.
  • 68.
Insertion Sorting: This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted.
  • 69.
Insertion Sorting:
- ALGORITHM: InsertionSort(A, N), where A is an array with N unsorted elements.
- Step 1: for I = 1 to N-1
- Step 2:
    J = I
    While (J >= 1)
      if (A[J] < A[J-1]) then
        Temp = A[J]; A[J] = A[J-1]; A[J-1] = Temp
      [End if]
      J = J-1
    [End of While loop]
  [End of For loop]
- Step 3: Exit
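The same idea can be written in JavaScript in the standard shift-based form, which moves larger elements right and drops the key into place instead of performing repeated swaps (insertionSort is an illustrative name; sorting a copy is a choice made here):

```javascript
// Insertion sort: a[0..i-1] is always sorted; insert a[i] into its place.
// O(n^2) worst case, O(n) on an already-sorted array.
function insertionSort(arr) {
  const a = arr.slice();
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > key) {
      a[j + 1] = a[j];   // shift larger element one slot right
      j--;
    }
    a[j + 1] = key;      // drop the key into the gap
  }
  return a;
}
```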
  • 70.
Merging from Arrays: Merging is used to combine the data items of two sorted arrays into a single array in sorted form.
- Sorted linear array A: [10, 40, 50, 80, 95, 100]
- Sorted linear array B: [20, 35, 45, 90]
- After merging, merged array C: [10, 20, 35, 40, 45, 50, 80, 90, 95, 100]
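The merge step can be sketched in JavaScript with two index pointers (merge is an illustrative name, not from the slides):

```javascript
// Merge two sorted ascending arrays into one sorted array, O(m + n) time:
// repeatedly take the smaller of the two current heads.
function merge(A, B) {
  const C = [];
  let i = 0, j = 0;
  while (i < A.length && j < B.length) {
    C.push(A[i] <= B[j] ? A[i++] : B[j++]);  // take the smaller head
  }
  while (i < A.length) C.push(A[i++]);       // copy leftovers of A
  while (j < B.length) C.push(B[j++]);       // copy leftovers of B
  return C;
}
```

Applied to the arrays above, merge(A, B) reproduces array C exactly.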
  • 71.
    Divide and Conquer Prof.Dr. K. Adisesha 71 Binary tree traversal: Binary tree traversal is a process that involves visiting each node in a binary tree data structure.  Tree Traversal refers to the process of visiting or accessing each node of the tree exactly once in a certain order.  A Tree Data Structure can be traversed in following ways:  Depth First Search or DFS  Inorder Traversal  Preorder Traversal  Postorder Traversal  Level Order Traversal or Breadth First Search or BFS
  • 72.
    Divide and Conquer Prof.Dr. K. Adisesha 72 Binary tree traversal (DFS): Tree traversal is the process of visiting each node in a tree, such as a binary tree or binary search tree, exactly once. There are several effective traversal algorithms which we will cover below.  Pre-order: A pre-order traversal on a tree performs the following steps starting from the root: 1) Return the root node value. 2) Traverse the left subtree by recursively calling the pre-order function. 3) Traverse the right subtree by recursively calling the pre-order function.  Pre-order traversal would output the node values in the following order: A, B, D, E, C
  • 73.
Binary tree traversal (DFS): A pre-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
  this.data = data;
  this.left = null;
  this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// pre-order: visit root, then left subtree, then right subtree
function preorder(node) {
  if (node === null) return [];
  return [node.data].concat(preorder(node.left), preorder(node.right));
}
preorder(root); // ['A', 'B', 'D', 'E', 'C']
  • 74.
    Divide and Conquer Prof.Dr. K. Adisesha 74 Binary tree traversal (DFS): Tree traversal is the process of visiting each node in a tree, such as a binary tree or binary search tree, exactly once. There are several effective traversal algorithms which we will cover below.  In-order: An in-order traversal on a tree performs the following steps starting from the root: 1) Traverse the left subtree by recursively calling the in-order function. 2) Return the root node value. 3) Traverse the right subtree by recursively calling the in-order function.  In-order traversal would output the node values in the following order: D, B, E, A, C
  • 75.
Binary tree traversal (DFS): An in-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
  this.data = data;
  this.left = null;
  this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// in-order: left subtree, then root, then right subtree
function inorder(node) {
  if (node === null) return [];
  return inorder(node.left).concat([node.data], inorder(node.right));
}
inorder(root); // ['D', 'B', 'E', 'A', 'C']
  • 76.
    Divide and Conquer Prof.Dr. K. Adisesha 76 Binary tree traversal (DFS): Tree traversal is the process of visiting each node in a tree, such as a binary tree or binary search tree, exactly once. There are several effective traversal algorithms which we will cover below.  Post-order: A post-order traversal on a tree performs the following steps starting from the root. 1) Traverse the left subtree by recursively calling the post-order function. 2) Traverse the right subtree by recursively calling the post-order function. 3) Return the root node value.  Post-order traversal would output the node values in the following order: D, E, B, C, A
  • 77.
Binary tree traversal (DFS): A post-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
  this.data = data;
  this.left = null;
  this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// post-order: left subtree, then right subtree, then root
function postorder(node) {
  if (node === null) return [];
  return postorder(node.left).concat(postorder(node.right), [node.data]);
}
postorder(root); // ['D', 'E', 'B', 'C', 'A']
  • 78.
Binary tree traversal (BFS): Tree traversal is the process of visiting each node in a tree, such as a binary tree or binary search tree, exactly once. There are several effective traversal algorithms which we will cover below.
- Level-order: A level-order traversal on a tree performs the following steps starting from the root:
1) Add the root to a queue.
2) Dequeue the node at the front of the queue, and return its value.
3) Add all children of the dequeued node to the queue, and continue from step 2 until the queue is empty.
- Level-order traversal would output the node values in the following order: A, B, C, D, E
  • 79.
Binary tree traversal (BFS): A level-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
  this.data = data;
  this.left = null;
  this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
// level-order: visit nodes level by level using a queue
function levelorder(root) {
  var out = [], queue = [root];
  while (queue.length > 0) {
    var node = queue.shift();   // dequeue from the front
    out.push(node.data);
    if (node.left) queue.push(node.left);
    if (node.right) queue.push(node.right);
  }
  return out;
}
levelorder(root); // ['A', 'B', 'C', 'D', 'E']
  • 80.
Advantages of Divide and Conquer: The divide and conquer algorithm is a problem-solving method in data structures working on the recursion principle.
- Advantages of Divide and Conquer:
- Efficiency: Many divide and conquer algorithms, such as merge sort, run in O(n log n) time, which is more efficient than many other algorithms for large datasets.
- Simplicity: Divide and conquer algorithms are often easy to understand and implement.
- Parallelizability: Divide and conquer algorithms can be easily parallelized, as each subproblem can be solved independently.
- Cache-friendliness: Divide and conquer algorithms tend to have good cache performance, as they access data in a predictable pattern.
  • 81.
    Divide and Conquer Prof.Dr. K. Adisesha 81 Disadvantages of Divide and Conquer: Divide and Conquer Algorithm is a problem-solving method in Data Structures working on recursion principle.  Disadvantages of Divide and Conquer:  Recursion overhead: Divide and conquer algorithms use recursion, which can lead to significant overhead in terms of stack space and function calls.  Not suitable for all problems: Divide and conquer algorithms are not suitable for all types of problems. They are most effective for problems that can be recursively divided into smaller subproblems.  Limited memory efficiency: Divide and conquer algorithms can require a significant amount of memory, as they create multiple copies of the input data.
  • 82.
    Greedy Technique Prof. Dr.K. Adisesha 82 Greedy Algorithm: A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It works in a top-down approach.  This technique is basically used to determine the feasible solution that may or may not be optimal.  The following are the characteristics of a greedy method:  To construct the solution in an optimal way, this algorithm creates two sets where one set contains all the chosen items, and another set contains the rejected items.  A Greedy algorithm makes good local choices in the hope that the solution should be either feasible or optimal.
  • 83.
Greedy Algorithm: A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It works in a top-down approach.
- The components that can be used in the greedy algorithm are:
- Candidate set: the set from which a solution is created.
- Selection function: chooses the candidate or subset that can be added to the solution.
- Feasibility function: determines whether the candidate or subset can contribute to the solution or not.
- Objective function: assigns a value to the solution or the partial solution.
- Solution function: indicates whether a complete solution has been reached or not.
  • 84.
  • 85.
Greedy Algorithm: A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It works in a top-down approach.
- Applications of the Greedy Algorithm:
- It is used in finding the shortest path.
- It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
- It is used in job sequencing with a deadline.
- This algorithm is also used to solve the fractional knapsack problem.
  • 86.
Greedy Algorithm: A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It works in a top-down approach.
- Types of Greedy Algorithms:
- Knapsack Problem.
- Minimum Spanning Tree.
- Single-Source Shortest Path Problem.
- Job Scheduling Problem.
- Prim's Minimal Spanning Tree Algorithm.
- Kruskal's Minimal Spanning Tree Algorithm.
- Dijkstra's Single-Source Shortest Path Algorithm.
  • 87.
Knapsack Problem using Greedy Algorithm: In this method, the knapsack is filled so that its maximum capacity is utilized, so that maximum profit can be earned from it.
- The knapsack problem using the Greedy Method is stated as follows:
- Given a list of n objects, say {I1, I2, ..., In}, and a knapsack (or bag).
- The capacity of the knapsack is M.
- Each object Ij has a weight wj and a profit pj.
- If a fraction xj (where xj ∈ [0, 1]) of object Ij is placed into the knapsack, then a profit of pj·xj is earned.
- The problem is to fill the knapsack (up to its maximum capacity M) while maximizing the total profit earned.
  • 88.
Minimum Spanning Tree (MST): In a weighted graph, a minimum spanning tree is a spanning tree that has minimum weight among all other spanning trees of the same graph.
- In real-world situations, this weight can be measured as distance, congestion, traffic load or any arbitrary value assigned to the edges.
- The two most important spanning tree algorithms (both greedy algorithms) are:
- Kruskal's Algorithm
- Prim's Algorithm
Kruskal's Algorithm:
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges of that graph which:
 forms a tree that includes every vertex, and
 has the minimum sum of weights among all the trees that can be formed from the graph.
 The steps for implementing Kruskal's algorithm are as follows:
 Sort all the edges from low weight to high.
 Take the edge with the lowest weight and add it to the spanning tree. If adding the edge creates a cycle, reject this edge.
 Keep adding edges until all vertices are reached.
Kruskal's Algorithm:
The minimum spanning tree produced by Kruskal's algorithm is a subgraph that connects all the vertices of the main graph with the least possible edges and minimum cost.
 Example: Construct a minimum spanning tree using Kruskal's algorithm for the given graph.
 Solution:
 As the first step, sort all the edges of the given graph in ascending order of cost:
Edge: BD AB CF FE BC GF AG CD DE CG
Cost:  5  6  9 10 11 12 15 17 22 25
 Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
 Visited array, v = {B, D, A, C, F, E, G}
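The steps above can be sketched with a union-find structure. The edge list is the one from this example, so the computed cost should match the slide's 53:

```python
def kruskal(vertices, edges):
    """edges: list of (cost, u, v); returns (mst_edges, total_cost)."""
    parent = {v: v for v in vertices}

    def find(v):                       # set representative, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, total = [], 0
    for cost, u, v in sorted(edges):   # step 1: sort edges low to high
        ru, rv = find(u), find(v)
        if ru != rv:                   # step 2: reject edges that close a cycle
            parent[ru] = rv
            mst.append((u, v, cost))
            total += cost
    return mst, total

edges = [(5, 'B', 'D'), (6, 'A', 'B'), (9, 'C', 'F'), (10, 'F', 'E'),
         (11, 'B', 'C'), (12, 'G', 'F'), (15, 'A', 'G'), (17, 'C', 'D'),
         (22, 'D', 'E'), (25, 'C', 'G')]
mst, cost = kruskal('ABCDEFG', edges)
print(cost)  # 53, matching the slide
```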
Prim's Algorithm:
Prim's algorithm finds a minimum cost spanning tree using the greedy approach.
 Algorithm:
 Declare an array visited[] to store the visited vertices, and add an arbitrary root, say S, to it.
 Check whether the adjacent vertices of the last visited vertex are present in the visited[] array or not.
 If an adjacent vertex is not in the visited[] array, compare the costs of the candidate edges and select the least-cost edge.
 The adjacent unvisited vertex with the least-cost edge is added to the visited[] array, and that edge is added to the minimum spanning tree output.
 Steps 2 to 4 are repeated until all vertices of the graph are visited, giving the full minimum spanning tree.
 Calculate the cost of the minimum spanning tree obtained.
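A minimal Python sketch of these steps, using a priority queue in place of the linear scan over visited[]. The example graph is the one from the Kruskal example above, whose MST cost is 53:

```python
import heapq

def prim(adj, root):
    """adj: {vertex: [(cost, neighbour), ...]}; returns total MST cost."""
    visited = {root}                         # step 1: start from an arbitrary root
    heap = list(adj[root])
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(adj):
        cost, v = heapq.heappop(heap)        # least-cost edge leaving the tree
        if v in visited:
            continue                         # both endpoints already in the tree
        visited.add(v)
        total += cost
        for e in adj[v]:
            if e[1] not in visited:
                heapq.heappush(heap, e)
    return total

edges = [(5, 'B', 'D'), (6, 'A', 'B'), (9, 'C', 'F'), (10, 'F', 'E'),
         (11, 'B', 'C'), (12, 'G', 'F'), (15, 'A', 'G'), (17, 'C', 'D'),
         (22, 'D', 'E'), (25, 'C', 'G')]
adj = {v: [] for v in 'ABCDEFG'}
for c, u, v in edges:
    adj[u].append((c, v))
    adj[v].append((c, u))
print(prim(adj, 'A'))  # 53
```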
Prim's Algorithm:
Prim's algorithm (like Kruskal's) finds a minimum cost spanning tree using the greedy approach, and it shares a similarity with shortest-path-first algorithms.
 The algorithm begins from a vertex that is set as the root and walks through all the vertices in the graph by repeatedly choosing the least-cost adjacent edge.
 Step 1 - Remove all loops and parallel edges.
 Step 2 - Choose any arbitrary node as the root node.
Dijkstra's Algorithm:
Dijkstra's shortest path algorithm was invented in 1956 by the Dutch computer scientist Edsger W. Dijkstra during a twenty-minute coffee break, while out shopping with his fiancée in Amsterdam.
 Dijkstra's algorithm solves the single-source shortest path problem for directed or undirected graphs with non-negative edge weights.
 It finds the shortest path from one vertex to all other vertices.
 It does so by repeatedly selecting the nearest unvisited vertex and calculating the distance to all its unvisited neighbouring vertices.
Dijkstra's Algorithm:
Dijkstra's algorithm solves the single-source shortest path problem for directed or undirected graphs.
1. Set initial distances for all vertices: 0 for the source vertex, and infinity for all the others.
2. Choose the unvisited vertex with the shortest distance from the start to be the current vertex.
3. For each of the current vertex's unvisited neighbours, calculate the distance from the source and update the stored distance if the new, calculated distance is lower.
4. We are now done with the current vertex, so mark it as visited. A visited vertex is not checked again.
5. Go back to step 2 to choose a new current vertex, and repeat until all vertices are visited.
6. In the end we are left with the shortest path from the source vertex to every other vertex.
Dijkstra's Algorithm:
Given a weighted graph and a source vertex in the graph, find the shortest paths from the source to all the other vertices.
 Input: src = 0 and the graph shown in the figure.
 Output: 0 4 12 19 21 11 9 8 14
 Explanation:
 The minimum distance from 0 to 1 = 4. 0->1
 The minimum distance from 0 to 2 = 12. 0->1->2
 The minimum distance from 0 to 3 = 19. 0->1->2->3
 The minimum distance from 0 to 4 = 21. 0->7->6->5->4
 The minimum distance from 0 to 5 = 11. 0->7->6->5
 The minimum distance from 0 to 6 = 9. 0->7->6
 The minimum distance from 0 to 7 = 8. 0->7
 The minimum distance from 0 to 8 = 14. 0->1->2->8
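The slide's figure is not reproduced here; the edge list below is the standard 9-vertex example graph, assumed because its distances from vertex 0 match the output shown. A minimal heap-based sketch of the numbered steps:

```python
import heapq

def dijkstra(adj, src):
    """adj: {u: [(v, w), ...]}; returns shortest distances from src."""
    dist = [float('inf')] * len(adj)   # step 1: infinity everywhere...
    dist[src] = 0                      # ...except 0 at the source
    heap, visited = [(0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)     # step 2: nearest unvisited vertex
        if u in visited:
            continue
        visited.add(u)                 # step 4: mark current vertex visited
        for v, w in adj[u]:            # step 3: relax unvisited neighbours
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Undirected example graph (assumed; consistent with the distances above).
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7), (2, 8, 2),
         (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10), (5, 6, 2), (6, 7, 1),
         (6, 8, 6), (7, 8, 7)]
adj = {u: [] for u in range(9)}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))
print(dijkstra(adj, 0))  # [0, 4, 12, 19, 21, 11, 9, 8, 14]
```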
Lower Bound Theory:
According to lower bound theory, if L(n) is a lower bound for a problem, it is not possible for any algorithm solving that problem to have a worst-case time complexity below L(n).
 Let L(n) be the running time of an algorithm A; then g(n) is a lower bound of A if there exist two constants C and N such that L(n) >= C*g(n) for all n > N.
 The lower bound of an algorithm is expressed with the asymptotic notation Omega (Ω).
 Consider sorting three numbers a1, a2, and a3. There are 3! = 6 possible orderings:
 (a1, a2, a3), (a1, a3, a2),
 (a2, a1, a3), (a2, a3, a1),
 (a3, a1, a2), (a3, a2, a1)
Decision Tree Algorithms:
Decision trees are a type of machine-learning algorithm that can be used for both classification and regression tasks.
 The algorithm works by recursively splitting the data into smaller and smaller subsets based on the feature values.
 Before we dive into decision trees, we need to know the following important terms:
 Root Node: the topmost node in the tree, representing the complete dataset; it is the starting point of the decision-making process.
 Internal Node: a node that represents a choice regarding an input feature; branches from internal nodes connect them to leaf nodes or other internal nodes.
 Leaf/Terminal Node: a node without any child nodes that indicates a class label or a numerical value.
Decision Tree Algorithms:
A decision tree is a full binary tree that shows the comparisons between elements executed by a comparison-based sorting algorithm operating on an input of a given size. Control, data movement, and all other aspects of the algorithm are ignored.
 For sorting an array of length n, the decision tree must have at least n! leaves (one per possible ordering of the input).
 If the tree height is h, then n! ≤ 2^h, since a binary tree of height h has at most 2^h leaves.
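Taking logarithms turns the leaf-count bound into the familiar lower bound for comparison-based sorting:

```latex
n! \le 2^{h}
\;\Longrightarrow\;
h \ge \log_2 (n!) \ge \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n)
```

The middle inequality uses n! ≥ (n/2)^{n/2}; hence every comparison sort needs Ω(n log n) comparisons in the worst case.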
P Class Problems:
P-type problems are a class of problems in computational complexity theory that can be solved by a deterministic Turing machine in a polynomial amount of time.
 These are also known as polynomial-time deterministic problems.
 P: the class of problems that have polynomial-time deterministic algorithms.
 A deterministic algorithm is (essentially) one that always computes the correct answer.
 Problems in P are easy to understand and tractable.
 Sample problems in P:
 Searching
 Element uniqueness
 Graph connectivity
NP Class Problems:
A problem is in the class NP (Nondeterministic Polynomial time) if it can be solved in polynomial time by a nondeterministic Turing machine.
 NP stands for "Nondeterministic Polynomial time".
 Solutions to NP problems may be hard to find on a deterministic machine, but a proposed solution can be verified in polynomial time.
 Some examples of NP problems:
 Traveling Salesman
 N-Queens
 Classroom Scheduling
 Packing
 Scheduling
NP-Complete Problems:
NP-Complete (or NPC) problems are a set of problems that are closely interlinked.
 For a problem x in NP-Complete, if anyone finds a polynomial-time algorithm for even one such problem, it implies a polynomial-time algorithm for all NP-Complete problems.
 NP-Complete problems are a subset of the larger class of NP (nondeterministic polynomial time) problems.
 NP problems are a class of computational problems that can be solved in polynomial time by a nondeterministic machine and whose solutions can be verified in polynomial time by a deterministic machine.
N-Queens Problem:
The goal of the N-Queens problem is to place N queens on an N×N chessboard so that no two queens can attack each other.
 In the 4-queens problem, the solver starts at the leftmost column and successively places one queen in each column, at a location that is not attacked by any previously placed queen.
 The first constraint is that there can be only one queen in a column.
 The second constraint prohibits two queens on the same diagonal.
 The third constraint prohibits two queens on the same row.
N-Queens Problem:
The N-Queens problem is the problem of placing N chess queens on an N×N chessboard so that no two queens attack each other.
 Given an integer n, return all distinct solutions to the n-queens puzzle. You may return the answers in any order.
 Input: n = 4
 Output: [[".Q..","...Q","Q...","..Q."], ["..Q.","Q...","...Q",".Q.."]]
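A minimal backtracking sketch that produces solutions in the board format shown above (row strings of '.' and 'Q'); this is an illustrative implementation, not the one from the slides:

```python
def solve_n_queens(n):
    """Return all distinct boards as lists of row strings."""
    solutions, cols = [], []           # cols[r] = column of the queen in row r

    def safe(r, c):
        # One queen per row is implicit; check columns and diagonals.
        return all(c != pc and abs(c - pc) != r - pr
                   for pr, pc in enumerate(cols))

    def place(r):
        if r == n:                     # all rows filled: record the board
            solutions.append(['.' * c + 'Q' + '.' * (n - c - 1) for c in cols])
            return
        for c in range(n):
            if safe(r, c):
                cols.append(c)         # place a queen and recurse
                place(r + 1)
                cols.pop()             # backtrack

    place(0)
    return solutions

print(solve_n_queens(4))
# [['.Q..', '...Q', 'Q...', '..Q.'], ['..Q.', 'Q...', '...Q', '.Q..']]
```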
Job Sequencing Problem:
Given an array of jobs where every job has a deadline and an associated profit earned if the job is finished before its deadline.
 Using the greedy method, choose the jobs with maximum profit first, by sorting the jobs in decreasing order of their profit.
 Follow the given steps to solve the problem:
 Sort all jobs in decreasing order of profit.
 Iterate over the jobs in decreasing order of profit. For each job, do the following:
 Find a time slot i such that the slot is empty, i <= deadline, and i is greatest.
 Put the job in this slot and mark the slot filled.
 If no such i exists, ignore the job.
Job Sequencing Problem:
Consider the following tasks with their deadlines and profits. Schedule the tasks so that they produce the maximum profit after being executed.
Jobs:      J1 J2 J3 J4 J5
Deadlines:  2  2  1  3  4
Profits:   20 60 40 100 80
 Step 1: Find the maximum deadline value, dm, from the deadlines given. Here dm = 4, so all tasks must end by time 4 and there are 4 time slots.
 Step 2: Arrange the jobs in descending order of their profits:
Jobs:      J4 J5 J2 J3 J1
Deadlines:  3  4  2  1  2
Profits:  100 80 60 40 20
 Placing each job in the latest free slot not after its deadline schedules J4 in slot 3, J5 in slot 4, J2 in slot 2 and J3 in slot 1; J1 finds no free slot and is rejected.
 The output set of jobs scheduled within their deadlines is {J3, J2, J4, J5}, with the maximum profit of 280.
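The greedy steps on this instance can be checked with a short sketch (slot indices run 1..dm; each job takes the latest free slot not after its deadline):

```python
def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit); greedy by decreasing profit."""
    dm = max(d for _, d, _ in jobs)          # step 1: maximum deadline
    slots = [None] * (dm + 1)                # slots 1..dm (index 0 unused)
    total = 0
    for name, d, p in sorted(jobs, key=lambda j: -j[2]):  # step 2: profit desc
        for i in range(min(d, dm), 0, -1):   # latest free slot i <= deadline
            if slots[i] is None:
                slots[i] = name
                total += p
                break                        # job placed; move to the next job
        # if the loop finds no free slot, the job is skipped
    return [s for s in slots[1:] if s], total

jobs = [('J1', 2, 20), ('J2', 2, 60), ('J3', 1, 40),
        ('J4', 3, 100), ('J5', 4, 80)]
schedule, profit = job_sequencing(jobs)
print(schedule, profit)  # ['J3', 'J2', 'J4', 'J5'] 280
```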
Design and Analysis of Algorithms
Prof. Dr. K. Adisesha
Thank you. All the best!