Meetings

Click on a theme item for the meeting plan for that theme. Click on a meeting item for references, exercises, and additional reading related to it.

**Theme 1 : Divide and Conquer** (17 meetings)

- Meeting 01 : Mon, Jul 31, 11:00 am-11:50 am
- Meeting 02 : Tue, Aug 01, 10:00 am-10:50 am
- Meeting 03 : Wed, Aug 02, 09:00 am-09:50 am
- Meeting 04 : Thu, Aug 03, 12:00 pm-12:50 pm
- Meeting 05 : Mon, Aug 07, 11:00 am-11:50 am
- Meeting 06 : Tue, Aug 08, 10:00 am-10:50 am
- Meeting 07 : Wed, Aug 09, 09:00 am-09:50 am
- Meeting 08 : Thu, Aug 10, 12:00 pm-12:50 pm
- Meeting 09 : Mon, Aug 14, 11:00 am-11:50 am
- Meeting 10 : Tue, Aug 15, 10:00 am-10:50 am
- Meeting 11 : Wed, Aug 16, 09:00 am-09:50 am
- Meeting 12 : Thu, Aug 17, 12:00 pm-12:50 pm
- Meeting 13 : Tue, Aug 22, 10:00 am-10:50 am
- Meeting 14 : Wed, Aug 23, 09:00 am-09:50 am
- Meeting 15 : Thu, Aug 24, 12:00 pm-12:50 pm
- Meeting 16 : Mon, Aug 28, 11:00 am-11:50 am
- Meeting 17 : Tue, Aug 29, 10:00 am-10:50 am

**Meeting 01:** Introduction to the course. The Party organization problem: formulation, a greedy solution, an attempt to prove correctness, counterexamples. Course grading policy and other announcements.
References: None.

**Meeting 02:** The sorting problem, Quicksort recap, the Partition function.
References: Chapter 7, CLRS.

**Meeting 03:** Proving correctness of Quicksort via loop invariants. Analysis of Quicksort. Model of computation, the RAM model, asymptotic time complexity. Best case versus worst case. What happens when we have an unbalanced partition in Quicksort that is not necessarily the worst-case partition?
References: None.

**Meeting 04:** Unbalanced partitions in Quicksort, continued; intuition for the average case. Solving the recurrence T(n) = T(n/c) + T((c-1)n/c) + O(n) by the guess-and-verify method. Why does the guess O(n) not work? Proving that the guess O(n log n) does work for a suitable choice of constants.
References: Chapter 7, CLRS.

**Meeting 05:** Lower bound for comparison-based sorting algorithms. Analogy to the 20 Questions game: the type of questions asked versus the type of operations allowed by the algorithm. Decision-tree argument for the Omega(n log n) lower bound.
References: Chapter 8, CLRS.

**Meeting 06:** Randomized Quicksort. Meaning of expected running time. What is the sample space? How many comparisons do we expect to see on a random run? Defining random variables, examples, expectation of a random variable. Breaking down a random variable into simpler indicator random variables.
References: Chapter 7, CLRS.
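The Partition function and Quicksort recursion described above can be sketched as follows. This is a minimal Lomuto-style partition in the spirit of CLRS Chapter 7, not the course's own code; the function names are illustrative, and the loop invariant from the correctness proof is noted as a comment.

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[hi]; return the pivot's final index.
    Loop invariant: a[lo..i] <= pivot and a[i+1..j-1] > pivot."""
    pivot = a[hi]
    i = lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

def quicksort(a, lo=0, hi=None):
    """Sort a in place; worst case O(n^2), but O(n log n) on balanced partitions."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
    return a
```

Note that an unbalanced but non-worst-case split, say a constant fraction (c-1)/c on one side, still yields the T(n) = T(n/c) + T((c-1)n/c) + O(n) recurrence discussed above, which solves to O(n log n).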

**Meeting 07:** Examples on random variables and computing expectation, continued: the number of fixed points in a random permutation, the number of inversions in a random permutation. Back to analyzing the expected running time of Quicksort: the probability that two elements in the sorted order get compared.
References: Chapter 7, CLRS.

**Meeting 08:** Completing the analysis of randomized Quicksort: its expected running time is O(n log n). Randomized select. What is the sample space? How is it different from the one in randomized Quicksort?
References: None.

**Meeting 09:** Randomized select and its analysis. Setting up the random variable and breaking it into indicator random variables.
References: None.

**Meeting 10:** Deterministic selection algorithm. Intuition about dividing the elements into constant-sized groups. Analyzing where the median of medians falls in the sorted set. The algorithm for deterministic selection.
References: None.
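Randomized select as discussed above can be sketched like this. The key point for the analysis is that the algorithm recurses into only one side of the partition, which is what brings the expected running time down to O(n). The names are mine, not from the course.

```python
import random

def randomized_select(a, k):
    """Return the k-th smallest element of a (k = 1 .. len(a)), expected O(n).

    A random pivot splits a into smaller / equal / larger parts; exactly one
    part can contain the k-th smallest, so we recurse on at most one side.
    """
    pivot = random.choice(a)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    if k <= len(lo):
        return randomized_select(lo, k)
    if k <= len(lo) + len(eq):
        return pivot
    return randomized_select(hi, k - len(lo) - len(eq))
```

The answer is correct for every pivot choice; randomness affects only the running time, which is the distinction between the sample spaces of randomized select and randomized Quicksort raised above.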

**Meeting 11:** Deterministic selection: running-time analysis using the guess-and-check method. Note that the constants indeed turn out to be large. Other methods for solving recurrences. The master theorem in the simplified case f(n) = n^d, with a proof for this case.
References: None.

**Meeting 12:** To Be Announced.
References: None.

**Meeting 13:** Solving recurrences using the master method when f(n) is not of the form n^d: statement of the cases, with connections to the special case f(n) = n^d. Introducing the definition of little-oh. If f(n) = o(g(n)), does it imply f(n) = O(g(n))? How about the converse? Examples. Exercise: the definition of little-omega and examples. Discussion on comparing functions asymptotically: if f(n) = O(g(n)), does it imply 2^f(n) = O(2^g(n))? Counterexamples for the same.
References: Section 4.5, CLRS (master method); Section 3.1, CLRS (asymptotic notation).
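The deterministic (median-of-medians) selection discussed above can be sketched with the standard groups of 5; this is an illustration in the CLRS spirit, not the course's code. As noted above, the constants are noticeably larger than in the randomized version.

```python
def select(a, k):
    """Return the k-th smallest element of a (k = 1 .. len(a)), worst case O(n)."""
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # Median of each group of 5, then the median of those medians as the pivot.
    groups = [sorted(a[i:i + 5]) for i in range(0, len(a), 5)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = select(medians, (len(medians) + 1) // 2)
    # The pivot is guaranteed to land near the middle of the sorted order,
    # so both recursive calls below shrink the problem by a constant fraction.
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    if k <= len(lo):
        return select(lo, k)
    if k <= len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))
```

The recurrence is roughly T(n) <= T(n/5) + T(7n/10) + O(n), which guess-and-check shows is O(n) because 1/5 + 7/10 < 1.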

**Meeting 14:** Maximum Subarray problem: problem definition and a brute-force solution. Discussion on possible solution ideas. Does the solution have to contain all positive elements? Does it have to contain the maximum element in the array? A simpler problem: the maximum subarray with a given index included. How does it help solve the general case? A divide-and-conquer solution, the recurrence for it, and its running time.
References: Chapter 4, Section 4.1, CLRS.

**Meeting 15:** Discussion of SE1, plus lower-bound arguments: proving that computing the maximum of n distinct elements requires Omega(n) comparisons.
References: None.
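The divide-and-conquer maximum subarray just described can be sketched as follows (names mine, in the spirit of CLRS Section 4.1). The "simpler problem with a given index included" appears as the crossing case: the best subarray that must straddle the midpoint.

```python
def max_crossing(a, lo, mid, hi):
    """Best sum of a subarray forced to include indices mid and mid+1."""
    best_left = float('-inf')
    s = 0
    for i in range(mid, lo - 1, -1):   # grow leftward from mid
        s += a[i]
        best_left = max(best_left, s)
    best_right = float('-inf')
    s = 0
    for i in range(mid + 1, hi + 1):   # grow rightward from mid+1
        s += a[i]
        best_right = max(best_right, s)
    return best_left + best_right

def max_subarray(a, lo=0, hi=None):
    """Maximum subarray sum; T(n) = 2T(n/2) + O(n), i.e. O(n log n)."""
    if hi is None:
        hi = len(a) - 1
    if lo == hi:
        return a[lo]
    mid = (lo + hi) // 2
    return max(max_subarray(a, lo, mid),       # entirely in the left half
               max_subarray(a, mid + 1, hi),   # entirely in the right half
               max_crossing(a, lo, mid, hi))   # crosses the midpoint
```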

**Meeting 16:** Closest Pair of Points: problem definition and an incorrect lower bound of Omega(n^2). The 1D case is solvable via sorting. The 2D case: a divide-and-conquer approach. What happens when Theta(n) points lie inside the "small strip" that we constructed? Can we examine a constant number of points for every point in the strip?
References: Chapter 33, Section 33.4, CLRS.

**Meeting 17:** Completing Closest Pair of Points. Discussion on a divide-and-conquer approach for matrix multiplication; brief discussion on Strassen's improvement. Discussion of the O(n log n) solution to the Majority problem (Problem Sheet 1).
References: Section 33.4, CLRS (Closest Pair of Points); Section 2.5, DPV (Strassen's matrix multiplication).
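The O(n log n) divide-and-conquer idea for the Majority problem mentioned above can be sketched like this, assuming the usual statement (find an element occurring more than n/2 times, if one exists). The key observation is that a majority of the whole array must be a majority of at least one half.

```python
def majority(a):
    """Return the majority element of non-empty a (> len(a)/2 occurrences),
    or None if there is none. T(n) = 2T(n/2) + O(n) = O(n log n)."""
    if len(a) == 1:
        return a[0]
    mid = len(a) // 2
    # At most two candidates survive; verify each with an O(n) count.
    for cand in (majority(a[:mid]), majority(a[mid:])):
        if cand is not None and a.count(cand) > len(a) // 2:
            return cand
    return None
```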

**Theme 2 : Evaluation Meetings** (2 meetings)

- Meeting 18 : Mon, Aug 21, 11:00 am-11:50 am
- Meeting 19 : Thu, Oct 05, 12:00 pm-12:50 pm

**Meeting 18:** Short Exam 1.
References: None.

**Meeting 19:** Short Exam 2.
References: None.

**Theme 3 : Other** (9 meetings)

- Meeting 20 : Thu, Nov 02, 12:00 pm-12:50 pm
- Meeting 21 : Mon, Nov 06, 11:00 am-11:50 am
- Meeting 22 : Tue, Nov 07, 10:00 am-10:50 am
- Meeting 23 : Wed, Nov 08, 09:00 am-09:50 am
- Meeting 24 : Thu, Nov 09, 12:00 pm-12:50 pm
- Meeting 25 : Mon, Nov 13, 11:00 am-11:50 am
- Meeting 26 : Tue, Nov 14, 10:00 am-10:50 am
- Meeting 27 : Wed, Nov 15, 09:00 am-09:50 am
- Meeting 28 : Thu, Nov 16, 12:00 pm-12:50 pm

**Meetings 20-28:** To Be Announced.
References: None.

**Theme 4 : Greedy Strategy** (6 meetings)

- Meeting 29 : Wed, Aug 30, 09:00 am-09:50 am
- Meeting 30 : Thu, Aug 31, 12:00 pm-12:50 pm
- Meeting 31 : Mon, Sep 18, 11:00 am-11:50 am
- Meeting 32 : Tue, Sep 19, 10:00 am-10:50 am
- Meeting 33 : Wed, Sep 20, 09:00 am-09:50 am
- Meeting 34 : Thu, Sep 21, 12:00 pm-12:50 pm

**Meeting 29:** The familiar Minimum Weight Spanning Tree (MST) problem: definition, and does the divide-and-conquer strategy work to solve it? The greedy strategy: making a locally optimal (greedy) choice; MST as the example problem. A generic algorithm to compute an MST. Definition of a cut and of edges crossing the cut. How many edges crossing any cut must belong to an MST? Does every MST have to contain the min-weight edge crossing the cut? Definition of a safe edge. Showing that for any set of edges A that is a subset of some MST of G, there exists a choice of a safe edge.
References: Chapter 23, CLRS.

**Meeting 30:** Kruskal's algorithm for computing an MST as a refinement of the generic algorithm. Focus on how to decide whether an edge is safe for the current collection A: keeping a component number associated with every vertex and changing component numbers when components merge. Towards a union-find data structure: an array-based implementation and the challenges with using arrays. How many times does a vertex change its component? Proof that this happens at most log(n) times.
References: Chapter 4, Kleinberg and Tardos (KT).
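A sketch of Kruskal's algorithm with the component-numbering scheme described above: every vertex carries a component number, and on a merge the smaller component is relabeled, so each vertex changes its label at most log2(n) times. The member-list bookkeeping is one possible implementation of mine, not the course's code.

```python
def kruskal(n, edges):
    """MST of a graph on vertices 0..n-1; edges is a list of (w, u, v).
    Returns (total weight, list of MST edges)."""
    comp = list(range(n))                  # component number of each vertex
    members = {c: [c] for c in range(n)}   # vertices in each component
    mst, total = [], 0
    for w, u, v in sorted(edges):          # edges by non-decreasing weight
        cu, cv = comp[u], comp[v]
        if cu == cv:
            continue                       # u-v would close a cycle: not safe
        if len(members[cu]) < len(members[cv]):
            cu, cv = cv, cu                # always relabel the smaller side
        for x in members[cv]:
            comp[x] = cu
        members[cu].extend(members.pop(cv))
        mst.append((u, v))
        total += w
    return total, mst
```

Since a vertex is relabeled only when its component at least doubles in size, it is relabeled at most log2(n) times, matching the bound proved above.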

**Meetings 31-34:** To Be Announced.
References: None.

**Theme 5 : Dynamic Programming** (9 meetings)

- Meeting 35 : Mon, Oct 09, 11:00 am-11:50 am
- Meeting 36 : Tue, Oct 10, 10:00 am-10:50 am
- Meeting 37 : Wed, Oct 11, 09:00 am-09:50 am
- Meeting 38 : Thu, Oct 12, 12:00 pm-12:50 pm
- Meeting 39 : Mon, Oct 16, 11:00 am-11:50 am
- Meeting 40 : Tue, Oct 17, 10:00 am-10:50 am
- Meeting 41 : Wed, Oct 18, 09:00 am-09:50 am
- Meeting 42 : Thu, Oct 19, 12:00 pm-12:50 pm
- Meeting 43 : Mon, Oct 23, 11:00 am-11:50 am

**Meeting 35:** Shortest paths revisited, now allowing negative edge weights. Does Dijkstra's algorithm work? Where does it fail? How about adding an offset to make all edge weights positive? Why does this fail? Considering the number of edges in a path: define the shortest path from s to v having at most i edges, and compute it recursively.
References: None.

**Meeting 36:** Shortest paths with negative edge weights, continued. The recursive algorithm, the correctness of the recursive formulation, and its O(mn) running time. The standard Bellman-Ford algorithm.
References: None.
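The standard Bellman-Ford algorithm from the discussion above can be sketched as follows (edge-list representation and names are mine). After round i, dist[v] is at most the length of the shortest s-to-v path using at most i edges, which is exactly the recursive definition set up above.

```python
def bellman_ford(n, edges, s):
    """Single-source shortest paths with negative weights allowed.
    n vertices 0..n-1, edges is a list of (u, v, w); O(mn) time."""
    INF = float('inf')
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):        # a simple path has at most n-1 edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:         # one extra pass detects a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from the source")
    return dist
```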

**Meeting 37:** Feedback and discussion of solutions for SE2.
References: None.

**Meeting 38:** Setting up telephone towers in a city: problem description and simple approaches (a greedy strategy; choosing the better of the odd-numbered and even-numbered cities). What does the optimum look like? It contains a tower at either city 1 or city 2; using this observation to formulate a recursive solution. Correctness of the recursive formula and running time of the recursive algorithm. Making the algorithm efficient by reusing computed solutions. Finally, how do we recover the cities where the towers need to be placed?
References: Slides emailed on the group.
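The tower-placement recursion can be sketched as below, assuming the common variant of the problem where city i has profit p[i] and towers may not be placed in adjacent cities (the exact statement is in the emailed slides, so this is an interpretation). With positive profits, the optimum places a tower at city 1 or at city 2, which is the observation above.

```python
def best_towers(p):
    """Max total profit placing towers with no two in adjacent cities.
    Returns (best profit, list of chosen city indices). O(n) time."""
    n = len(p)
    opt = [0] * (n + 2)                # opt[i]: best profit using cities i..n-1
    for i in range(n - 1, -1, -1):
        # Either skip city i, or place a tower there and skip city i+1.
        opt[i] = max(opt[i + 1], p[i] + opt[i + 2])
    chosen, i = [], 0                  # walk the table to recover the cities
    while i < n:
        if p[i] + opt[i + 2] >= opt[i + 1]:
            chosen.append(i)
            i += 2
        else:
            i += 1
    return opt[0], chosen
```

Filling the table bottom-up is what "reusing computed solutions" buys: the naive recursion is exponential, the table is linear.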

**Meetings 39-41:** To Be Announced.
References: None.

**Meeting 42:** Knapsack with and without repetition. A simple example where the two have different values of Opt. Formulation of knapsack without repetition and the need for a 2D table. Arguing correctness of the recursive formulation; running time of the algorithm. Discussion on pseudopolynomial-time algorithms.
References: DPV.

**Meeting 43:** Knapsack without repetition, continued. The Longest Common Subsequence (LCS) problem: another example of 2D dynamic programming. Recursive formulation, argument of correctness, and an iterative O(mn) algorithm.
References: DPV (knapsack); CLRS (LCS).
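The 2D table for knapsack without repetition, in the spirit of the DPV formulation referenced above (names mine): K[w][j] is the best value achievable with capacity w using only the first j items.

```python
def knapsack(values, weights, W):
    """0/1 knapsack (no repetition). O(nW) time and space -- pseudopolynomial,
    because the capacity W, not just the input length, enters the bound."""
    n = len(values)
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for w in range(W + 1):
        for j in range(1, n + 1):
            K[w][j] = K[w][j - 1]                 # option 1: skip item j
            if weights[j - 1] <= w:               # option 2: take item j
                K[w][j] = max(K[w][j],
                              K[w - weights[j - 1]][j - 1] + values[j - 1])
    return K[W][n]
```

The second index j is what forbids repetition: taking item j sends us to the subtable K[...][j-1], where item j is no longer available. Dropping that index (recursing on K[w - weights[j-1]] over all items) gives knapsack with repetition, which needs only a 1D table.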

**Theme 6 : Matroids (Greedy Strategy)** (4 meetings)

- Meeting 44 : Mon, Sep 11, 11:00 am-11:50 am
- Meeting 45 : Tue, Sep 12, 10:00 am-10:50 am
- Meeting 46 : Wed, Sep 13, 09:00 am-09:50 am
- Meeting 47 : Thu, Sep 14, 12:00 pm-12:50 pm

**Meeting 44:** Applicability of the greedy method: matroids. Four problems on sets:

1. Minimum Weight Spanning Forest.
2. Maximum Weight Matching.
3. Given a directed graph with positive edge weights, output a maximum-weight subset of edges such that no two edges share a common "head": a cousin of undirected matching. This problem does have a greedy solution.
4. Given a set of columns of a matrix, each with a positive weight, output a maximum-weight subset of linearly independent columns.

Which of these have greedy algorithms producing an optimal solution? If a problem satisfies certain "properties", we will show that a greedy algorithm is optimal for it. Definition of a matroid: the hereditary property and the exchange property.
References: None.

**Meeting 45:** Matroids continued. Discussed the hereditary and exchange properties for three of the problems (MST, weighted matching, the problem on directed graphs).
References: Section 16.4, CLRS (definition of a matroid and its properties); Chapter 12, Section 4, Combinatorial Optimization by Papadimitriou and Steiglitz (the different problems discussed).

**Meeting 46:** Definition of a maximal set in a matroid. Proof that all maximal sets have the same size (connection to minimum spanning trees: all spanning trees have n-1 edges). A greedy algorithm on weighted matroids and proof that it is correct: correctness of the greedy choice and of the optimal substructure.
References: Section 16.4, CLRS.
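The greedy algorithm on a weighted matroid can be written generically, with the independence test supplied as an oracle; the hereditary and exchange properties are exactly what make this scan optimal. The `top_k` instantiation below uses the uniform matroid (any set of at most k elements is independent) as a toy example of mine, not from the course.

```python
def matroid_greedy(elements, weight, independent):
    """Greedy on a weighted matroid: scan elements by decreasing weight,
    keeping x whenever the current set plus x remains independent."""
    A = []
    for x in sorted(elements, key=weight, reverse=True):
        if independent(A + [x]):
            A.append(x)
    return A

def top_k(xs, k):
    # Uniform matroid: a set is independent iff it has at most k elements,
    # so the greedy scan simply keeps the k largest weights.
    return matroid_greedy(xs, weight=lambda x: x,
                          independent=lambda s: len(s) <= k)
```

Kruskal's algorithm is this same scan on the graphic matroid (independent = acyclic edge sets), with weights negated to turn minimum into maximum.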

**Meeting 47:** The task-scheduling problem: a matroid formulation. Proving that the set system satisfies the hereditary and exchange properties.
References: Section 16.5, CLRS.

**Theme 7 : Data Structures** (11 meetings)

- Meeting 48 : Mon, Sep 04, 11:00 am-11:50 am
- Meeting 49 : Tue, Sep 05, 10:00 am-10:50 am
- Meeting 50 : Wed, Sep 06, 09:00 am-09:50 am
- Meeting 51 : Thu, Sep 07, 12:00 pm-12:50 pm
- Meeting 52 : Mon, Sep 25, 11:00 am-11:50 am
- Meeting 53 : Tue, Sep 26, 10:00 am-10:50 am
- Meeting 54 : Wed, Sep 27, 09:00 am-09:50 am
- Meeting 55 : Thu, Sep 28, 12:00 pm-12:50 pm
- Meeting 56 : Mon, Oct 02, 11:00 am-11:50 am
- Meeting 57 : Tue, Oct 03, 10:00 am-10:50 am
- Meeting 58 : Wed, Oct 04, 09:00 am-09:50 am

**Meeting 48:** Revisiting the data structure for disjoint sets (union and find operations). A data structure using rooted trees; a simple implementation achieving a worst-case log(n) bound. Towards a further improvement using union by rank and path compression.
References: Chapter 4, KT.

**Meeting 49:** Properties of the rank function: what is the maximum value a rank can take, and how many nodes can have a particular rank? Properties of nodes: a root can become a non-root but not the other way around, and once a node becomes a non-root its rank is frozen. A useful function for analyzing the running time: log*(n). Recursive definition and examples; an extremely slowly growing function.
References: Chapter 5, DPV.

**Meeting 50:** Analysis of union-find continued. Which unions and finds are expensive? Charging non-root nodes with enough charge to pay for path traversals up to the root. How much charge does every node get? The total charge distributed over all nodes is bounded by n log*(n). Partitioning the nodes into buckets based on ranks. Analysis of the different kinds of edges on a path from a node to its root: edges between nodes in the same bucket (blue edges) versus edges between nodes in different buckets (red edges). Red edges are paid for by the operation, blue edges by the nodes; proof that the nodes never run out of charge.
References: Chapter 5, DPV.
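The union-by-rank plus path-compression structure analyzed above can be sketched as follows (class and method names mine). Both ingredients of the analysis are visible: rank only changes while a node is still a root, and find rewires every traversed node directly to the root.

```python
class DisjointSets:
    """Union-find with union by rank and path compression; the charging
    argument above bounds the amortized cost per operation by O(log* n)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n          # rank is frozen once a node is a non-root

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                       # already in the same set
        if self.rank[rx] < self.rank[ry]:      # union by rank: shallower
            rx, ry = ry, rx                    # tree hangs under the deeper
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```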

**Meeting 51:** Applications of the union-find data structure. The binary counter: an example of amortized analysis.
References: None.
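The binary counter mentioned above is the classic warm-up for amortized analysis: a single increment can flip many bits, but n increments starting from zero flip fewer than 2n bits in total. A minimal sketch (representation and names mine):

```python
def increment(bits):
    """Increment a binary counter (list of 0/1 bits, least significant first);
    return the number of bit flips this increment performed."""
    flips = i = 0
    while i < len(bits) and bits[i] == 1:   # carry: clear trailing 1s
        bits[i] = 0
        flips += 1
        i += 1
    if i < len(bits):                       # set the first 0 bit (if any)
        bits[i] = 1
        flips += 1
    return flips

# Bit i flips floor(n / 2^i) times over n increments, so the total is < 2n:
bits = [0] * 10
total = sum(increment(bits) for _ in range(100))
```

Here `total` comes out to 197 for n = 100, under the 2n = 200 bound, i.e. amortized cost below 2 flips per increment even though a single increment can cost up to 10.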

**Meetings 52-56:** To Be Announced.
References: None.

**Meeting 57:** Using the potential method: examples. The Blue Flip example (Quiz 1 problem); a variant of a queue (with enqueue, dequeue, and makecorrupt operations); different potential functions and an analysis of which work and which do not.
References: None.

**Meeting 58:** Analysis of the Fibonacci heap using a potential function.
References: CLRS.

**Theme 8 : Network Flow** (6 meetings)

- Meeting 59 : Tue, Oct 24, 10:00 am-10:50 am
- Meeting 60 : Wed, Oct 25, 09:00 am-09:50 am
- Meeting 61 : Thu, Oct 26, 12:00 pm-12:50 pm
- Meeting 62 : Mon, Oct 30, 11:00 am-11:50 am
- Meeting 63 : Tue, Oct 31, 10:00 am-10:50 am
- Meeting 64 : Wed, Nov 01, 09:00 am-09:50 am

**Meeting 59:** Network flow: problem definition, the capacity constraint, and flow conservation.
References: Section 26.1, CLRS.

**Meeting 60:** Examples of flow networks and possible solutions to get a maximum flow. An incorrect definition of the residual network: why does it not work? Modifying the residual network to have backward edges.
References: Sections 26.1-26.2, CLRS.

**Meeting 61:** The Ford-Fulkerson method. Augmenting a flow f with a flow along a path p; proving that the augmented flow satisfies the capacity constraints and flow conservation. Towards proving that the Ford-Fulkerson method gives a maximum flow: relating flows to cuts in the network, the capacity of a cut, and the minimum-capacity cut.
References: Section 26.2, CLRS.

**Meeting 62:** Recap of definitions from last class: the capacity of a cut and the net flow across a cut. At termination of the Ford-Fulkerson algorithm, compute a partition of the vertices by picking all vertices reachable from the source. Properties of this special cut: proving that all its forward edges are saturated and all its backward edges carry zero flow.
References: Section 26.2, CLRS (Max-Flow Min-Cut theorem).

**Meeting 63:** The Max-Flow Min-Cut theorem: statement and proof revisited. Running time of the Ford-Fulkerson method: why is it not acceptable? Towards a strongly polynomial-time algorithm for the network flow problem: the Edmonds-Karp algorithm, which selects a shortest path in the residual network (where edges have directions but no weights).
References: Section 26.2, CLRS.

**Meeting 64:** To Be Announced.
References: None.
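The Edmonds-Karp refinement of Ford-Fulkerson described above can be sketched as follows (adjacency-matrix representation and names mine). BFS finds a shortest augmenting path in the residual network; the backward residual edges are what let a later path undo flow sent earlier.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Max flow from s to t; capacity is an n x n matrix of edge capacities.
    BFS augmenting paths give O(VE) augmentations, O(VE^2) overall."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS in the residual network: an edge u->v exists while it has slack.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total              # no augmenting path: flow is maximum
        # Bottleneck residual capacity along the BFS path, then augment.
        b, v = float('inf'), t
        while v != s:
            u = parent[v]
            b = min(b, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b           # backward edge: future paths may undo this
            v = u
        total += b
```

When the loop exits, the vertices BFS could still reach from s form exactly the min-capacity cut discussed above: every forward edge out of that set is saturated.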