good coders code, great reuse
MIT’s Introduction to Algorithms, Lecture 15: Dynamic Programming

Posted: 27 Nov 2008 09:45 AM CST

This is the tenth post in an article series about MIT’s lecture course “Introduction to Algorithms.” In this post I will review lecture fifteen, which introduces the concept of dynamic programming and applies it to the longest common subsequence problem.

Dynamic programming is a design technique similar to divide and conquer. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming was systematized by Richard E. Bellman, who began its systematic study in 1955. The word “programming,” both here and in linear programming, refers to the use of a tabular solution method, not to writing computer code.

Dynamic programming can be effectively applied to solve the longest common subsequence (LCS) problem. The problem is stated as follows: given two sequences (or strings) x and y, find a maximum-length common subsequence of x and y. (Note that a subsequence, unlike a substring, need not be contiguous.)

Lecture fifteen introduces dynamic programming via the longest common subsequence problem. It first gives a brute-force, exponential-time algorithm: check every subsequence of x to see whether it is also a subsequence of y. That is no good for long sequences, so the lecture continues with a simplification: first compute only the length of a longest common subsequence, and then extend the algorithm to find the LCS itself. The simplified algorithm is recursive in nature and keeps recomputing the same subproblems. At this moment the two hallmarks of dynamic programming are stated (a sketch of the recursion follows the list):

- Optimal substructure: an optimal solution to a problem contains optimal solutions to its subproblems.
- Overlapping subproblems: a recursive solution contains a “small” number of distinct subproblems, repeated many times.
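To make the recursion concrete, here is a minimal Python sketch of the simplified length-only algorithm (the function name is my own; the test strings are the classic CLRS example). It runs in exponential time precisely because the same (i, j) subproblems are recomputed again and again:

    def lcs_length(x, y, i=None, j=None):
        """Length of an LCS of the prefixes x[:i] and y[:j].

        Naive recursion: exponential time, because the same (i, j)
        subproblems are solved over and over again.
        """
        if i is None:
            i, j = len(x), len(y)
        if i == 0 or j == 0:           # an empty prefix has an empty LCS
            return 0
        if x[i - 1] == y[j - 1]:       # last symbols match: they end the LCS
            return lcs_length(x, y, i - 1, j - 1) + 1
        # last symbols differ: drop one of them and keep the better answer
        return max(lcs_length(x, y, i - 1, j),
                   lcs_length(x, y, i, j - 1))

    print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4 (e.g. "BCBA")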
As the subproblems overlap, the lecture introduces the concept of memoization (note that it’s memoization, not memorization). A better-known word for memoization is caching: the answers to subproblems are cached (memoized) so that they are not recomputed over and over again. The lecture ends with constructing the dynamic programming table for the LCS problem and explaining how to find an LCS from this table; a sketch of both ideas follows.
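Here is a minimal sketch, assuming Python, of the two fixes the lecture describes: memoizing the recursion (each subproblem is then solved only once, giving O(mn) time), and building the table bottom-up and then walking it backwards to recover an LCS itself. Function names are mine:

    from functools import lru_cache

    def lcs_length_memo(x, y):
        """Memoized (cached) recursion: each (i, j) subproblem is
        now solved only once, so the running time drops to O(mn)."""
        @lru_cache(maxsize=None)
        def go(i, j):
            if i == 0 or j == 0:
                return 0
            if x[i - 1] == y[j - 1]:
                return go(i - 1, j - 1) + 1
            return max(go(i - 1, j), go(i, j - 1))
        return go(len(x), len(y))

    def lcs(x, y):
        """Bottom-up dynamic programming table, plus a traceback
        that reconstructs one longest common subsequence."""
        m, n = len(x), len(y)
        # c[i][j] holds the length of an LCS of x[:i] and y[:j]
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        # walk back from c[m][n], re-deriving which case produced each entry
        out, i, j = [], m, n
        while i > 0 and j > 0:
            if x[i - 1] == y[j - 1]:
                out.append(x[i - 1])
                i, j = i - 1, j - 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    print(lcs_length_memo("ABCBDAB", "BDCABA"))  # 4
    print(lcs("ABCBDAB", "BDCABA"))              # "BCBA"

The traceback here re-derives, at each cell, which of the three cases produced its value; an alternative is to store explicit “which case won” pointers while filling the table, trading a little extra memory for a simpler walk.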
You’re welcome to watch lecture fifteen:

Topics covered in lecture fifteen:

Lecture fifteen notes:

Have fun with programming dynamically! The next post will be about graphs, greedy algorithms and minimum spanning trees.

PS. This course is taught from the CLRS book (also called “Introduction to Algorithms”).