Course Content DAA
Breadth First Search
Breadth-first search (BFS) is the simplest graph traversal and search algorithm. Given a graph G = (V, E), we start traversing from a particular vertex 's' and visit all vertices joined by an edge to 's'. We then visit all additional vertices joined by an edge to any previously visited vertex, and continue in this way until no new vertices are encountered. BFS is therefore a layer-based search: at each layer we try to discover more connected vertices. It also computes the distance from 's' to each reachable vertex as the minimum number of edges on a path joining them. Thus BFS not only determines which vertices 's' can reach, it also computes a shortest path to each of them. BFS produces a tree 'T' with root 's' that contains all reachable vertices: for any vertex 'v' reachable from 's', the simple path in the breadth-first tree from 's' to 'v' corresponds to a "shortest path" from 's' to 'v' in G, that is, a path containing the smallest number of edges. The BFS algorithm works on both directed and undirected graphs. To define the BFS algorithm, we assume the following inputs:
- The graph G = (V, E), represented using an adjacency list.
- 's' denotes the source vertex (root).
- To keep track of progress, BFS colors each vertex green, grey or black. All vertices start out green and later become first grey, then black. The color of each vertex u ∈ V is stored in the attribute 'u.color'.
- The predecessor of 'u' is stored in the attribute 'u.π'; if there is no predecessor, it is NIL.
- The attribute 'u.d' stores the distance (in terms of level) from the root (source).
- A first-in, first-out queue 'Q' is used to manage the grey vertices.
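Putting the assumptions above together, here is a minimal Python sketch of BFS. The function and variable names are my own choices, not from the text, and the graph is assumed to be a dict mapping each vertex to its adjacency list:

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search over a graph given as an adjacency list
    (dict: vertex -> list of neighbours), starting from source s."""
    color = {u: "green" for u in adj}   # green = undiscovered, as in the text
    dist = {u: None for u in adj}       # u.d: distance (in edges) from s
    pred = {u: None for u in adj}       # u.pi: predecessor; NIL -> None
    color[s], dist[s] = "grey", 0
    q = deque([s])                      # FIFO queue holding the grey vertices
    while q:
        u = q.popleft()
        for v in adj[u]:
            if color[v] == "green":     # first time we discover v
                color[v] = "grey"
                dist[v] = dist[u] + 1
                pred[v] = u
                q.append(v)
        color[u] = "black"              # all of u's neighbours explored
    return dist, pred
```

The returned `dist` gives, for every reachable vertex, the length (in edges) of a shortest path from 's', and following `pred` back to 's' reconstructs that path.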
Below is the algorithm of BFS, with an explanation. Let us solve the following example and traverse the directed graph. The example above shows how the vertices change color and what the queue looks like as the BFS algorithm executes as a program. A few more applications of BFS are:
Introduction to Graphs
So formally we can say that a graph G = (V, E) consists of two things:
Examples: The figure above shows a graph consisting of 5 round objects. If we define it as per the definition above, it is written as: G = (V, E), where V = {1, 2, 3, 4, 5} (as there are 5 numbered objects in this graph) and E = {[1,2], [1,3], [2,1], [2,4], [3,1], [3,4], [3,5], [4,2], [4,3], [4,5], [5,3], [5,4]}. If you look carefully, the figure shows only 6 edges, whereas the set E contains 12. This is because the graph is undirected, so each pair of connected vertices also allows the opposite relation: [u,v] is the same edge as [v,u]. Now let us consider another example with a few changes to the same graph: G = (V, E), V = {1, 2, 3, 4, 5}, E = {[1,2], [2,4], [3,1], [3,4], [4,5], [5,3]}. Here the set E contains only the edges that point to a particular vertex, because this is a directed graph. For example, there is a directed edge from vertex 1 to vertex 2 but not the reverse, so only one entry appears in E. The next important point about graphs is how they are stored in a computer, which is a little tricky. There are two standard ways to represent a graph G = (V, E):
The figures above show the adjacency lists of undirected and directed graphs. If you are familiar with data structures, the adjacency list makes use of the linked-list data structure: we make an array of pointers, and the entry at each index, corresponding to one vertex (node), points to a linked list containing the end vertices of all edges leaving that vertex.
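As a small sketch of this idea, the following Python function (my own illustration; Python lists stand in for the linked lists) builds an adjacency list for vertices 1..n from a set of edges:

```python
def adjacency_list(n, edges, directed=False):
    """Build an adjacency list for vertices 1..n: a mapping from each
    vertex to the list of end vertices of its edges."""
    adj = {v: [] for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)   # undirected: store the reverse edge too
    return adj
```

For the undirected 5-vertex example above, passing the 6 drawn edges produces 12 list entries in total, exactly as the set E contains 12 pairs.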
The figures above show the adjacency matrices of undirected and directed graphs. The matrix is an array of size |V| by |V|, with the value 1 placed at the intersection of the start vertex's row and the end vertex's column. For example, if we have the edges [1,2] and [2,1] in the undirected graph (left figure), then the intersection of row 1 and column 2 is assigned the value 1; similarly, the intersection of row 2 and column 1 is also assigned the value 1, and so on. The same setup is used for a directed graph. Degree of a Vertex: This is one of the most important terms for graphs, and it is defined differently for directed and undirected graphs. In an undirected graph, the degree of a vertex is the number of edges incident to it. In a directed graph, the degree of a vertex is categorized as the out-degree (number of outgoing edges) and the in-degree (number of incoming edges).
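A minimal sketch of the matrix representation and of reading degrees off it (function names are my own; rows give out-degrees, columns give in-degrees):

```python
def adjacency_matrix(n, edges, directed=False):
    """n x n matrix (1-indexed; row/column 0 unused): cell [u][v] is 1
    iff there is an edge from u to v."""
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for u, v in edges:
        m[u][v] = 1
        if not directed:
            m[v][u] = 1        # undirected: mirror entry, as in the text
    return m

def degrees(m, v, directed=False):
    """Degree of vertex v: (in-degree, out-degree) for a directed graph,
    a single degree value for an undirected one."""
    out_deg = sum(m[v])                   # row v counts outgoing edges
    in_deg = sum(row[v] for row in m)     # column v counts incoming edges
    return (in_deg, out_deg) if directed else out_deg
```

With the directed example E = {[1,2], [2,4], [3,1], [3,4], [4,5], [5,3]}, vertex 3 has in-degree 1 (from vertex 5) and out-degree 2 (to vertices 1 and 4).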
Dynamic Programming Technique
Therefore, "dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subproblems." Like the greedy approach, dynamic programming is typically applied to optimization problems: there can be many possible solutions, and the requirement is to find the optimal solution among them. But the dynamic programming approach is a little different from the greedy approach. In greedy algorithms, solutions are computed by making choices in a serial, forward way, with no backtracking or revision of choices, whereas dynamic programming computes its solution bottom-up, producing it from smaller subproblems and trying many possibilities and choices before it arrives at the optimal set of choices. The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
Now, for any problem to be solvable with the dynamic programming approach, it must satisfy the following conditions:
Various algorithms that make use of the dynamic programming technique are as follows:
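As a small illustration of the bottom-up idea described above, here is computing Fibonacci numbers (my own example, not one from the text): the naive recursion recomputes shared subproblems, while the table is filled from the smallest subproblems upward so each is solved once:

```python
def fib_dp(n):
    """Bottom-up dynamic programming: solve smaller subproblems first,
    store them in a table, and reuse them instead of recomputing."""
    if n < 2:
        return n
    table = [0, 1]                          # table[i] holds F(i)
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])   # reuse stored subproblems
    return table[n]
```

The overlapping-subproblems condition is exactly what makes the table worthwhile: F(n-1) and F(n-2) share almost all of their subproblems.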
Greedy Technique
The greedy technique uses an iterative/recursive approach to solve an optimization problem and selects an optimal (best) solution. The greedy approach looks for simple, easy-to-implement solutions to complex, multi-step problems by deciding which next step will provide the most benefit. The greedy technique does not always produce an optimal solution, but it helps in producing an approximately optimal solution in a reasonable time. Greedy algorithms generally work on a heuristic approach. The greedy technique is one of the simplest approaches to solving optimization problems in which we want to determine the optimum of a given function by a sequence of steps, where at each stage we can make a choice among a class of possible decisions. In this, the choice of the optimal decision is made on the information at hand, without worrying about the effect these decisions may have in the future. The greedy choice property and optimal substructure are the two ingredients in a problem that lend themselves to a greedy strategy. The idea is that a globally optimal solution can be arrived at by making locally optimal choices (with no guarantee). In other words, a greedy method arrives at a solution by making a sequence of choices, each of which simply looks best at the moment.
Greedy Choice Property: We can assemble a globally optimal solution by making locally optimal choices, i.e. the greedy choice is always part of some optimal solution.
Optimal Sub-Problem: An optimal solution to the problem contains within it optimal solutions to subproblems, i.e. the global optimal solution is constructed from locally optimal solutions.
Any greedy algorithm will have the following five components:
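A classic sketch of the greedy idea (my own example) is making change with coins: at each step take the largest coin that still fits, i.e. the locally optimal choice:

```python
def greedy_change(amount, denominations):
    """Greedy coin change: repeatedly take the largest denomination
    that does not exceed the remaining amount."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:                 # locally optimal choice
            amount -= d
            coins.append(d)
    return coins
```

This also illustrates the "no guarantee" caveat from the text: for denominations [25, 10, 5, 1] the greedy result is optimal, but for denominations [4, 3, 1] and amount 6 the greedy picks [4, 1, 1] (three coins) even though [3, 3] (two coins) is better.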
Divide and Conquer Technique
The divide and conquer approach works by breaking the problem into subproblems that are similar to the original problem but smaller in size and simpler to solve. Once divided, the subproblems are solved recursively, and the solutions of the subproblems are then combined to create a solution to the original problem. At each level of the recursion, the divide and conquer approach follows three steps:
The divide and conquer approach has several advantages, as follows:
The following problems have become very efficient to solve with the divide and conquer approach:
All of the above problems are explained on other pages under the relevant topics, or you can click a provided link to read them directly.
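The three steps above can be sketched with merge sort (a standard divide and conquer example; the code is my own minimal version): divide the list in half, conquer each half recursively, and combine the sorted halves:

```python
def merge_sort(a):
    """Merge sort: divide, conquer recursively, then combine."""
    if len(a) <= 1:                     # base case: already sorted
        return a
    mid = len(a) // 2                   # divide step
    left = merge_sort(a[:mid])          # conquer each half
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0             # combine step: merge two sorted lists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The combine step does linear work at each of the log n levels of recursion, which is where the familiar O(n log n) running time comes from.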
Polynomial vs. Exponential Running Time
The time complexity (generally referred to as running time) of an algorithm is expressed as the amount of time taken by the algorithm for some size of input to the problem. Big O notation is commonly used to express the time complexity of an algorithm, as it suppresses the lower-order terms and is described asymptotically. Time complexity is estimated by counting the operations (provided as instructions in a program) performed by an algorithm, where each operation takes a fixed amount of time to execute. Generally, time complexities are classified as constant, linear, logarithmic, polynomial, exponential, etc. Among these, the polynomial and exponential classes are the most prominently considered when describing the complexity of an algorithm, and both are always influenced by the size of the input.
Polynomial Running Time: An algorithm is said to be solvable in polynomial time if the number of steps required to complete the algorithm for a given input is O(n^{k}) for some non-negative integer k, where n is the size of the input. Polynomial-time algorithms are said to be "fast." Most familiar mathematical operations such as addition, subtraction, multiplication, and division, as well as computing square roots, powers, and logarithms, can be performed in polynomial time. Computing the digits of most interesting mathematical constants, including pi and e, can also be done in polynomial time. (Ref: Wolfram MathWorld.) All basic arithmetic operations (addition, subtraction, multiplication, division), comparison operations, and sorting operations are considered polynomial-time algorithms.
Exponential Running Time: There is a set of problems which can be solved by exponential-time algorithms, but for which no polynomial-time algorithm is known. An algorithm is said to be exponential time if T(n) is upper-bounded by 2^{poly(n)}, where poly(n) is some polynomial in n.
More formally, an algorithm is exponential time if T(n) is bounded by O(2^{n^{k}}) for some constant k. (Ref: Wikipedia.) Algorithms with exponential time complexity grow much faster than polynomial algorithms. The difference lies in where the variable appears in the equation that expresses the running time. Equations that show polynomial time complexity have the variable in the bases of their terms, for example n^{3} + 2n^{2} + 1: notice that n is in the base, not the exponent. In exponential equations the variable is in the exponent, for example 2^{n}. As said before, exponential time grows much faster: if n is equal to 1000 (a reasonable input for an algorithm), then 1000^{3} is a billion, while 2^{1000} is astronomically large. For reference, the Sun contains roughly 10^{57} hydrogen atoms, and 2^{1000} is vastly larger than even that.
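The comparison above can be checked numerically (my own illustration, using the two example equations from the text):

```python
def poly(n):
    """Polynomial running time: the variable n is in the base."""
    return n**3 + 2 * n**2 + 1

def expo(n):
    """Exponential running time: the variable n is in the exponent."""
    return 2**n

# For small n the polynomial value can even be larger, but the
# exponential overtakes it at a small crossover point and then
# pulls away extremely fast.
crossover = next(n for n in range(1, 100) if expo(n) > poly(n))
```

After the crossover, each increment of n doubles the exponential value, while the polynomial grows only by a modest factor, which is the whole point of the distinction.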
Asymptotic Notations
Order of Growth: Whenever we work with algorithms, the most
critical factor considered for them is their efficiency, and for this we
are interested in analyzing how the running time increases when the input
size increases. When two algorithms are compared with respect to their
behavior for large input sizes, the measure is known as the order of growth. The
order of growth can be estimated by taking into account the dominant
term of the running-time expression, since the dominant term overtakes the values
of the other terms when the input size is large.
For example, if the running time is 3n^{2} + 10n + 5, the dominant term is 3n^{2}, so the order of growth is n^{2}.
In asymptotic analysis, it is considered that an algorithm
'A1' is better than an algorithm 'A2' if the order of growth of the running
time of 'A1' is lower than that of 'A2'. Therefore the
asymptotic efficiency of algorithms is concerned with how the running
time of an algorithm increases with the size of the input in the limit,
as the size of the input increases without bound.
Asymptotic Notations: When analyzing the efficiency of any algorithm, we have three basic cases:
And there are five notations to represent them:
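Assuming the usual definition of Big O (f(n) is O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0), the dominant-term example above can be sanity-checked numerically. This is my own illustration, a finite check rather than a proof:

```python
def is_big_o_witness(f, g, c, n0, upto=10_000):
    """Check numerically that f(n) <= c * g(n) for every n in
    [n0, upto]: evidence (not proof) that c, n0 witness f in O(g)."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

f = lambda n: 3 * n * n + 10 * n + 5   # running time with dominant term 3n^2
g = lambda n: n * n                    # its order of growth
```

With c = 4 and n0 = 11 the inequality 3n^2 + 10n + 5 ≤ 4n^2 holds, confirming that the lower-order terms stop mattering once n is large enough.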
Performance Analysis
Performance analysis of an
algorithm depends upon two factors, i.e. the amount of memory used and the amount
of compute time consumed on any CPU. Formally, they are expressed as
complexities in terms of:
Space Complexity of an algorithm is the amount of memory it needs to run to completion, i.e. from the start of execution to its termination. The space needed by any algorithm is the sum of the following components:
Therefore, the total space requirement of any algorithm 'A' can be given as Space(A) = Fixed Components(A) + Variable Components(A). Of the two, the variable component is the important one to determine accurately, so that the actual space requirement of algorithm 'A' can be identified. To identify the space complexity of any algorithm, the following steps can be followed:
Example: Space Complexity

    Algorithm Sum(number, size)   \\ produces the sum of all values in the 'number' list
    {
        result = 0.0;
        for count = 1 to size do   \\ repeats 1, 2, 3, 4, ... size times
            result = result + number[count];
        return result;
    }

The fixed component consists of the variables 'result', 'count' and 'size', so the fixed space required is three (3) words. The variable component is characterized by the value stored in the 'size' variable (suppose the value stored in 'size' is n), because this decides the length of the 'number' list and also drives the for loop. Therefore, if the space used by 'size' is one word, the total space required by the 'number' variable is n words (the value stored in 'size'). Time Complexity
of an algorithm (basically, when converted to a program) is the amount of
computer time it needs to run to completion. The time taken by a program
is the sum of the compile time and the run/execution time. The compile
time is independent of the instance (problem-specific) characteristics.
The following factors affect the time complexity:
Time(A) = Fixed Time(A) + Instance Time(A). Here the number of steps is the most prominent instance characteristic, and the number of steps assigned to any program statement depends on the kind of statement, such as:
Example: Time Complexity
In the example above, if you analyze carefully, the frequency of "for count = 1 to size do"
is size + 1. This is because the statement executes one extra time,
due to the condition check that fails and terminates the for loop.
Once the total steps are calculated, they represent
the instance characteristic in the time complexity of the algorithm.
The compile time of an algorithm will also be constant
every time we compile the same set of instructions, so we can treat
this time as a constant 'C'. Therefore, the time complexity can be
expressed as: Time(Sum) = C + (2·size + 3). In
this way, both the space complexity and the time complexity can be
calculated. Together, the two complexities comprise the performance
analysis of an algorithm and cannot be used independently. Both
complexities also help in defining the parameters on the basis of which we
optimize algorithms.
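The 2·size + 3 step count derived above can be reproduced by instrumenting a direct Python translation of Algorithm Sum (the instrumentation and names are my own):

```python
def sum_with_step_count(number, size):
    """Algorithm Sum with a step counter, following the counting scheme
    above: 1 step for the initial assignment, size + 1 for the loop
    header (the extra run is the failing condition check), size for the
    loop body, and 1 for the return: total 2*size + 3."""
    steps = 0
    result = 0.0
    steps += 1                           # result = 0.0
    for count in range(1, size + 2):     # header evaluated size + 1 times
        steps += 1
        if count <= size:                # body executes size times
            result = result + number[count - 1]
            steps += 1
    steps += 1                           # return statement
    return result, steps
```

For size = 3 the counter reports 9 steps, matching 2·3 + 3, which is exactly the instance characteristic used in Time(Sum) = C + (2·size + 3).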
What is an Algorithm?
Definitions: There are various definitions available for the term "algorithm", such as:
Any of the above definitions will work, and all of them provide a similar summary: a set of well-defined steps to solve a defined problem for the provided inputs. The five characteristics of an algorithm are:
One important point about an algorithm: it is not a computer program. Instead, it is a procedure written as pseudo code or an English-language description, providing information about what calculations are to be performed and in which order. For example, to add two numbers we can have an algorithm like:

English language type:
1. Ask the user to enter two integer numbers.
2. Add both integer numbers.
3. Print the final sum of the two numbers.

OR

Pseudo code type:

    Algorithm SUM
    Input: two integer numbers, ONE and TWO
    Output: sum of ONE and TWO, as RESULT
    ONE ← user input
    TWO ← user input
    RESULT ← ONE + TWO
    return RESULT

In some cases, several of these ways of defining an algorithm are used together; it depends on what kind of document you are working on.
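For completeness, the pseudo code above translates almost line for line into a program (names lowercased; this is my own translation, shown only to contrast an algorithm with its implementation):

```python
def algorithm_sum(one, two):
    """Direct translation of the SUM pseudo code: take two integers
    and return their sum as result."""
    result = one + two
    return result
```

The pseudo code stays language-independent; only this last step commits to a particular programming language.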