Some of the constraints:
- Student background is good, but not extensive (i.e., they are familiar with Chernoff bounds but not much else; not much complexity background, and Kleinberg-Tardos-level familiarity with algorithms).
- Breadth is important: it's important to me that students become familiar with a wide range of topics rather than drilling deeply into one (you could run an entire semester-long course on concentration of measure alone). This goes back to the previous point: at this stage, I'd like students to get as wide an exposure as feasible, in the hope that it will make deeper material more accessible to them later on.
- Source material should be accessible and presentable (by students) in 80-minute chunks.
- A randomized algorithm either has an uncertain running time or a likelihood of error. How do we analyze such algorithms? This leads directly to an investigation of tail bounds on distributions, martingales, and the method of bounded differences. We will also look at the famous minimax principle for proving lower bounds on the complexity of randomized algorithms. (A tiny illustration of the "likelihood of error" flavor appears after this list.)
- We'll take a tour of the many places where randomization leads to more efficient (and more elegant) algorithms: hashing, approximate counting, randomized incremental constructions, and randomized rounding.
- We'll get acquainted with BPP (the randomized analogue of P) and other randomized complexity classes, and see how randomness and hardness are connected. We'll also look at interactive proofs.
- Where does randomness come from? If we have a deterministic procedure to generate random bits, how are they random anymore? We'll talk about pseudorandomness (the art of making more from less).
- Since it's tricky to make random bits, we'd like to minimize the effort involved. How can we 'derandomize' algorithms?
- (If time permits) Quantum computing would appear to be "probability with complex numbers". What makes it similar to (and different from) probabilistic computation?
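To illustrate the "likelihood of error" flavor, here is a minimal sketch of Freivalds' matrix-product verification, one possible first example for such a course (the function name and trial count are just illustrative, not part of the plan above):

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically check whether A*B == C for n x n matrices.

    If A*B == C we always answer True; otherwise each trial catches the
    discrepancy with probability >= 1/2, so the chance of a wrong "True"
    after `trials` rounds is at most 2**(-trials).
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]                      # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False    # certainly A*B != C
    return True             # probably A*B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_wrong = [[19, 22], [43, 51]]          # correct product is [[19, 22], [43, 50]]
print(freivalds_check(A, B, C_wrong))   # almost certainly False
```

Each trial costs three matrix-vector products (O(n^2) time), versus O(n^3) (or fast matrix multiplication) to verify deterministically.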
Suresh, if you like, I would be glad to email you my class notes from a grad class on "Randomness and Computation". I am not making them available online yet since they are not fully polished.
Let me know, and wishing you fun with the course,
Aravind
Hi Aravind,
That would be excellent. In fact I had spied on your web page earlier when I was designing the content, but I didn't find anything (and now I know why :))
Thanks!
This list of topics would be the starting point for me as well if (and when) I teach a class on randomization. However, I feel that it will be difficult to do justice to both algorithms and complexity-related topics such as pseudorandomness and randomness extraction in the same class unless the students are already mature. Maybe I am mistaken and more experienced folk can correct me.
Would you cover fingerprinting in hashing? How about showing that randomness and shared randomness are necessary sometimes - say some lower bound from communication complexity?
sudipto
Almost certainly I'd want to cover fingerprinting (and I should have listed it properly in the 'applications' section).
As for shared randomness in communication complexity: this is a cool thing to talk about, but I wonder whether I can afford to fit it in. What I definitely want to have students review are examples (like in CC) of where randomization is provably better.
In fact, right now I can only think of deterministic vs randomized CC, and volume estimation for polytopes.
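To make the CC example concrete: EQUALITY needs n bits of communication deterministically, but with shared randomness a few fingerprint bits suffice. A minimal sketch of that protocol (the function name, trial count, and bit-vector encoding are just illustrative):

```python
import random

def eq_protocol(x, y, trials=30, seed=None):
    """Randomized EQUALITY with shared randomness (a fingerprinting sketch).

    x, y: equal-length 0/1 lists held by Alice and Bob. Both parties see the
    same random bits (the shared `seed`), so only `trials` fingerprint bits
    cross the channel instead of all of x. If x == y we always say True; if
    x != y, each trial detects it with probability 1/2, so the error
    probability is at most 2**(-trials).
    """
    rng = random.Random(seed)                              # shared randomness
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(len(x))]
        a = sum(xi * ri for xi, ri in zip(x, r)) % 2       # Alice's one-bit fingerprint
        b = sum(yi * ri for yi, ri in zip(y, r)) % 2       # Bob's one-bit fingerprint
        if a != b:
            return False    # certainly x != y
    return True             # probably x == y

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [1, 0, 1, 1, 0, 1, 1, 0]
print(eq_protocol(x, y, seed=42))    # almost certainly False
```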
Suresh, I will email you my notes soon. In response to Chandra's post, while it is true that mixing the "algorithms" and "complexity" parts in such a course can be challenging, one way of quickly covering the latter is the following, in case you can only allot 5-8 classes for it:
1. BPP contained in P/poly, thus showing that explicit construction of (small sets of) "advice strings" is the main issue in derandomization.
2. k-wise independence, and how it leads to some types of explicit constructions for the above (a small sketch appears after this list).
3. Blum-Micali and Yao (quick look if time is an issue), and a detailed study of Nisan-Wigderson.
4. Weak random sources: the models of von Neumann, Santha-Vazirani and Chor-Goldreich, and how they are all subsumed by the delta-sources of Zuckerman.
5. Extractors, and an exercise using the probabilistic method that good extractors *exist*.
6. Explicit construction of extractors: the simplest I can think of is Luca's "Extractors and Pseudorandom Generators", where the earlier study of Nisan-Wigderson will be useful.
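As an aside on item 2, here is a minimal sketch, not taken from Aravind's notes, of the classic pairwise-independent hash family h(x) = ((ax + b) mod p) mod m; the prime and range below are just illustrative choices:

```python
import random

P = 2_147_483_647   # a convenient prime (2^31 - 1); inputs should lie in [0, P)

def random_pairwise_hash(m):
    """Draw h(x) = ((a*x + b) mod P) mod m with a, b uniform (a != 0).

    For any two distinct inputs, the collision probability is roughly 1/m,
    which is all that many randomized algorithms need -- so about 2 log P
    random bits stand in for a fully random function.
    """
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = random_pairwise_hash(1000)
print(h(12345), h(67890))    # two hash values in [0, 1000)
```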
If time is a problem, the method of conditional probabilities can be given as an exercise or in a handout.
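One concrete instance of that method (my illustration, not from the notes) is derandomizing the "random cut" argument for MAX-CUT; the graph encoding below is just for the example:

```python
def greedy_cut(n, edges):
    """Method of conditional expectations for MAX-CUT (sketch).

    A uniformly random 2-coloring cuts each edge with probability 1/2, so the
    expected cut is |E|/2. Deciding vertices one by one, always choosing the
    side that cuts at least as many already-decided neighbors, keeps the
    conditional expectation from dropping, so the final cut has >= |E|/2 edges.
    """
    side = {}
    for v in range(n):
        # Count already-decided neighbors of v on each side.
        same = sum(1 for (u, w) in edges
                   if (u == v and side.get(w) == 0) or (w == v and side.get(u) == 0))
        other = sum(1 for (u, w) in edges
                    if (u == v and side.get(w) == 1) or (w == v and side.get(u) == 1))
        # Placing v on side 0 cuts `other` decided edges; side 1 cuts `same`.
        side[v] = 0 if other >= same else 1
    cut = sum(1 for (u, w) in edges if side[u] != side[w])
    return side, cut

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(greedy_cut(4, edges))   # cut size is at least len(edges) / 2
```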
Of course, the above is just one among several ways of covering the material.
Aravind
I believe Valiant's randomized hypercube routing algorithm is an example where randomness is provably better. In general, cryptography and theory of distributed computing are good sources of such examples.
I was also going to mention distributed computing as an area where randomized algorithms can do provably better than deterministic ones.
Also black-box polynomial identity testing. Along the same lines, you could probably cover some of the PCP theorem -- at least the proof that NP ⊆ PCP(poly, 1).
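To make the PIT example concrete: given only black-box access and a degree bound, evaluating at a few random points works by the Schwartz-Zippel lemma. A minimal sketch (the function name, sample-set size, and trial count are just illustrative):

```python
import random

def probably_identically_zero(poly, n_vars, degree_bound, trials=20):
    """Black-box polynomial identity testing via Schwartz-Zippel (sketch).

    `poly` is a black box mapping a tuple of integers to the polynomial's
    value, and `degree_bound` bounds its total degree. For a nonzero
    polynomial, a uniform point from {0, ..., S-1}^n_vars evaluates to 0
    with probability at most degree_bound / S, so each trial catches a
    nonzero polynomial with probability >= 0.99 when S = 100 * degree_bound.
    """
    S = 100 * degree_bound
    for _ in range(trials):
        point = tuple(random.randrange(S) for _ in range(n_vars))
        if poly(point) != 0:
            return False    # certainly not the zero polynomial
    return True             # probably identically zero

# Hypothetical black box: (x + y)^2 - (x^2 + 2xy + y^2), which is identically zero.
p = lambda v: (v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)
print(probably_identically_zero(p, n_vars=2, degree_bound=2))   # True
```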
Another example of where randomization "provably beats" determinism is with regard to existence of Nash equilibria!
Are you planning to maintain a webpage for the seminar?
ReplyDeleteindeed I will. it will be at
ReplyDeletehttp://apollonius.cs.utah.edu/mediawiki/index.php/Algorithms_Seminar