Showing posts with label focs.

Wednesday, August 17, 2016

FOCS workshops and travel grants.

Some opportunities relating to the upcoming FOCS.

  • Alex Andoni and Alex Madry are organizing the workshops/tutorials day at FOCS, and are looking for submissions. The deadline is August 31, so get those submissions ready!
  • Aravind Srinivasan and Rafi Ostrovsky are managing travel and registration grants for students to go to FOCS. The deadline for applications is September 12. 
Of course FOCS itself has an early registration deadline of September 17, which is also the cutoff for hotel registrations. 

Monday, September 05, 2011

FOCS 2011 Registration open

Marek Chrobak informs me that FOCS 2011 registration is open. The conference is being held Oct 22-25 in Palm Springs, CA, home of the Palm Springs Aerial Tramway, which climbs to 8500 ft (2600 m) to give you views of the valley.

When you're not being made dizzy by the views, attend a tutorial or two or three ! Cynthia Dwork, Kirk Pruhs and Vinod Vaikuntanathan are the headliners for a day of tutorials on Saturday Oct 22. Shafi Goldwasser will be awarded the 2011 IEEE Emanuel R. Piore Award, given for "outstanding contributions in the field of information processing, in relation to computer science".

I'm told there are also some talks, including some that smash through barriers for the k-server conjecture, metric TSP in graphs (twice over), and exact 3SAT.

Register now so that local organizers don't go crazy (I can assure you that they do :)). The deadline is Sep 29 for cheap rates on hotels and registration. And if you'd like to be a FOCS 2011 reporter and write a set of blog posts summarizing the conference for cstheory.blogoverflow.com, please add a note to this post.

Friday, November 05, 2010

Odds and Ends

Glencora Borradaile makes her recommendations for FOCS videos to watch (can we have all our conferences do videos, please ? My 40 minutes on the bus will take on a whole new dimension). I like the idea of starring speakers who do a good job: might increase the competitive pressure. Come to think of it, here's another perk of having videos for conferences: sheer embarrassment will make sure that speakers improve their presentations.

The cstheory Q&A site rolls on. Those of you at FOCS might have noticed the little sheets in your registration packets, but I don't know if they actually encouraged new people to visit. If you're one of them, do drop a note in the comments. Three points of interest:
  • We have a collaboratively edited article that we just submitted to SIGACT News (you can read the draft here). The article highlights some of the recent fascinating questions, answers and discussions on the site - do check it out
  • This is purely anecdotal, but I've been hearing both at FOCS and on the site that people are having to avoid visiting the site because it's so addictive ! Don't worry - it isn't that bad - as a matter of fact I only spent 6 hours last night ! down from.. (never mind)..
  • Another sign of catastrophic success: our first incident of undergrads in a theory class systematically trying to get their homework questions answered by posting on the site. Many alert users were able to close down the questions, but occasionally answers slip by. If you're an undergraduate and read this blog (HA ! blogs are for old fogies !), be warned...
Finally, because I'd be remiss not to do so, some neat questions (and answers) to chew on:

Monday, October 25, 2010

FOCS Day 1: Clustering

A long time ago I realized that if I took blogging duties too seriously, I'd be exhausted trying to report on talks that I attended. So I'm not going to even try. I will talk about a pair of talks on clustering though.

A core question of clustering is this: learn a mixture of $k$ $d$-dimensional Gaussians with different means and covariance matrices from samples drawn from the mixture. A long line of research, starting with a paper by Sanjoy Dasgupta back in 1999, produced variations on the following result:
If the Gaussians are "well separated" and have "nice" shapes, then a "small number" of samples suffices to determine the parameters of the system. 
Here, "parameters" would be means, covariance matrix elements, and relative weights. As time went on, the separations got smaller, the niceness reduced, and the "small number" got better.

These two papers take a slightly more general tack, in different ways. They both eliminate the conditional nature of prior bounds ("if the Gaussians are well separated"), instead replacing this by an unconditional bound ("poly in a parameter capturing the separation"). 

The first, by Moitra and Valiant (both Ph.D. students!), measures the separation between the distributions via their statistical distance, and gives a bound for learning in terms of the reciprocal of this distance. Their main insight is to project the distributions onto a few directions (i.e., down to one dimension), learn what they can, and then recurse on the data that doesn't appear to be well-separated enough to learn properly. This takes a lot of technical skill to do correctly.
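
The recursion and the choice of directions are where all the technical work lives; the elementary fact being exploited is just that projecting a Gaussian mixture onto a direction yields a one-dimensional Gaussian mixture. A toy illustration of the projection step only (not their algorithm; the parameters are made up):

    # Projecting N(mu_i, Sigma_i) onto a unit vector u gives the 1-D Gaussian
    # N(u . mu_i, u^T Sigma_i u), so the projected data is again a mixture of
    # Gaussians, now in one dimension, where it can be learned directly.
    import numpy as np

    rng = np.random.default_rng(1)
    d = 3
    mus = np.array([[0.0, 0.0, 0.0], [4.0, 4.0, 4.0]])    # toy component means
    covs = [np.eye(d), 2.0 * np.eye(d)]                    # toy covariances

    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                                 # random unit direction

    print("projected means:    ", mus @ u)
    print("projected variances:", [float(u @ C @ u) for C in covs])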

The second paper, by Belkin and Sinha, takes a different approach. They pay more attention to the parameters of the distribution. This can be a problem in general because closeness in distribution does not imply closeness in parameter space. However, they show that the set of all distributions with fixed moments forms a nice algebraic variety, and that the above problem can be quantified by determining how closely this variety folds in on itself (intuitively, if the variety intersects itself, then two far-apart parameter sets yield the same distribution). Using some standard (but not trivial) results in algebraic geometry, they can bound the complexity of the resulting structures (so that they can estimate the number of samples needed to get to a distinct region of the space). There's more detail involved in the estimation of parameters and moments and so on.
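
To make the "closeness in distribution vs. closeness in parameters" tension concrete, here is the simplest classical instance (this system is standard, and much more special than what the paper handles): for a one-dimensional mixture of two Gaussians with weight $w$, means $\mu_1, \mu_2$ and variances $\sigma_1^2, \sigma_2^2$, the raw moments $M_r = E[X^r]$ are explicit polynomials in the parameters:

$$M_1 = w\mu_1 + (1-w)\mu_2$$
$$M_2 = w(\mu_1^2 + \sigma_1^2) + (1-w)(\mu_2^2 + \sigma_2^2)$$
$$M_3 = w(\mu_1^3 + 3\mu_1\sigma_1^2) + (1-w)(\mu_2^3 + 3\mu_2\sigma_2^2)$$

The parameter vectors consistent with a given vector of moments form an algebraic set, and the danger is precisely that two parameter vectors far apart in parameter space may (nearly) satisfy the same moment equations; bounding how badly this can happen is what the algebraic geometry is for.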

While both papers are impressive achievements that essentially resolve the theoretical aspects of learning mixtures of distributions, I came away not entirely sure how to compare them. On the one hand, the Moitra/Valiant paper is more explicit and is likely to be less "galactic" in nature. But it appears to be limited to Gaussians. The Belkin/Sinha paper is more general, but uses more abstract machinery that is unlikely to be useful in practice.

So what still remains is an effective way of learning Gaussians or other families of distributions in practice, and also a better understanding of how these bounds relate to each other.

Monday, September 27, 2010

FOCS travel awards and registration deadline: Apply SOON !

Paul Beame reminds me that the FOCS early registration deadline and hotel registration deadline are coming up soon (this Thursday). Don't forget that FOCS has three tutorials on the Saturday before the conference starts, by Ketan Mulmuley, Mihai Patrascu, and Tim Roughgarden (which will be on geometric complexity theory, algorithmic game theory, and how to annoy everyone with your blog - sorry Mihai, I'm joking :))

Travel awards are also still available for the conference: Quoting Paul (emphasis mine):

There is still a bunch of NSF travel support money available for students/postdocs because we can only fund one person per US institution. All the award decisions have been made at institutions where someone applied by Sept 23rd but we are still open to new applications for travel support (not including registration) from other institutions. There is no hard deadline but people should please apply ASAP (ideally by the end of this week) because we will make final decisions on awards soon.  Follow the Travel Support link at: http://www.egr.unlv.edu/~larmore/FOCS/focs2010/
p.s. cstheory Q&A has been sucking up all the time that I should have been spending blogging ... ahem.. doing research. We're at 1500+ users and counting, so come on in and join Lance's students and all the others :). 

Thursday, September 24, 2009

Recent items

Deadlines are keeping me busy, and away from blogging.

Speaking of which, the Fall Workshop on Computational Geometry has a submission deadline this Friday. The Fall workshop is a "true" workshop, in that you go, give a talk on whatever you're working on, and there's no messing around with proceedings, publications and things like that. It's a pure meet-and-chat kind of venue, which means the pressure is low, and the visibility is quite high (many of the east-coast geometry folks show up).

This year it's at Tufts, so the location is even better. So get those 2-page abstracts in !

In other news, FOCS registration deadline is looming. Bill mentions a workshop to celebrate 50 years of FOCS (which dates back to when it was the conference on switching and circuits (!)) and 20 years of the GATech ACO program.

The workshop is NOT like the Fall workshop :). It's a series of invited talks by a star-studded cast: KARP !! YANNAKAKIS !! ALON !! BLUM !! All together, for one brief engagement !!

Sounds like a great idea: the kind of thing we probably need to do more of to get more folks to the conference.

Friday, July 03, 2009

FOCS 2009 results.

The FOCS list is out (with abstracts). 73 papers accepted. Some papers that popped up on my radar screen (post more in the comments, since I'll probably miss many good ones):
  • Matt Gibson and Kasturi Varadarajan. Decomposing Coverings and the Planar Sensor Cover Problem.

    This paper continues a line of work that I like to think we started in a SODA paper a few years ago. The problem is this: you're given a collection of sensors that monitor a region. But they have limited battery life, and they're also redundant (many different subsets of sensors have the range to cover the region). Can you schedule the sensors to go on and off so that at all times, the entire region is covered, and the region is covered for as long as possible ?

    Early work on this problem formulated it in a combinatorial setting, which immediately led to things like log n-approximability lower bounds via set cover variants. Our belief was that the geometry of the regions would make things a bit easier. We made some small progress, getting a sublogarithmic approximation ratio for intervals on the line, and a constant factor for ranges with special structure. Subsequent work made steady improvements, and this work has brought things down to a constant for general classes of regions in the plane.
  • Saugata Basu and Thierry Zell. Polynomial hierarchy, Betti numbers and a real analogue of Toda's Theorem

    Betti number computation has been a hot topic in computational geometry of late, but the idea of using Betti numbers to bound the algebraic complexity of basic geometric operations dates back to work by Dobkin and Lipton that relates the complexity of certain computations to the number of connected components of "invariant paths" in a computation. Ben-Or generalized these results to more complicated algebraic computation trees and, noting that the "number of connected components" is the zeroth Betti number of the set of computations, asked whether higher-order Betti numbers might be useful for proving lower bounds. A series of results by Bjorner, Lovasz and Yao, Bjorner and Lovasz, and finally Yao showed that indeed the higher-order Betti numbers can be used to lower bound the complexity of algebraic computations.

    So Betti numbers are important as a measure of "counting" in algebraic complexity. This paper reinforces this intuition by relating #P over the reals to the complexity of computing Betti numbers of semi-algebraic sets. Interestingly, this paper mentions the difficult nature of Toda's original proof, and Lance recently tweeted a new and very simple proof of Toda's theorem that he just published in ToC.

  • David Arthur, Bodo Manthey and Heiko Roeglin. k-Means has Polynomial Smoothed Complexity

    Read my post on k-means for more on the significance of such results. This paper finally resolves the smoothed complexity question, giving a bound that is polynomial in expectation (a bare-bones sketch of the Lloyd iteration whose steps are being counted appears after this list).

  • Peyman Afshani, Jeremy Barbay and Timothy M. Chan. Instance-Optimal Geometric Algorithms

    An interesting result that seems to be able to take away order-dependence from some geometric problems.

  • T.S. Jayram and David Woodruff. The Data Stream Space Complexity of Cascaded Norms

    Cascaded norms (where you compute one aggregate on each of a collection of streams, and compute another aggregate on these aggregates) are tricky for streaming algorithms, and this paper provides some interesting results. Again, I'm waiting for the document to show up, and I'd be interested in seeing the kinds of techniques they use.
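
Neither sketch below comes from the papers themselves; they are just my attempt to pin down the objects being studied. For cascaded norms, think of the input as a matrix whose rows are the individual streams: a cascaded norm such as $\ell_p(\ell_q)$ takes the $\ell_q$ norm of each row and then the $\ell_p$ norm of the vector of row norms. Offline this is a two-liner; the streaming question is how little space suffices to approximate it when the matrix entries arrive as a stream of updates.

    # Offline computation of a cascaded norm l_p(l_q): one aggregate per row
    # ("stream"), then an aggregate of the aggregates. The streaming question
    # is how to approximate this in small space as entries arrive.
    import numpy as np

    def cascaded_norm(A, p, q):
        row_norms = np.linalg.norm(A, ord=q, axis=1)
        return np.linalg.norm(row_norms, ord=p)

    A = np.array([[1.0, -2.0, 3.0],
                  [0.0,  4.0, 0.0]])
    print(cascaded_norm(A, p=2, q=1))   # l_2 norm of the per-row l_1 norms

And, as promised in the k-means item above, here is a bare-bones version of the Lloyd iteration whose number of iterations is what the smoothed analysis bounds (this is just the textbook method, nothing specific to the paper):

    # Plain Lloyd's iteration (the "k-means method"): exponentially many
    # iterations are possible in the worst case, but only polynomially many
    # in expectation once the input is slightly perturbed.
    import numpy as np

    def lloyd(points, k, max_iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(max_iters):
            # assign each point to its nearest center
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            # move each center to the mean of its cluster (keep it if empty)
            new_centers = np.array([points[assign == j].mean(axis=0)
                                    if np.any(assign == j) else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, assign

    pts = np.random.default_rng(1).normal(size=(200, 2))
    centers, labels = lloyd(pts, k=3)
    print(centers)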

Papers I'd like to learn more about:


Comments ?

Friday, March 27, 2009

The FOCS submission experiment

Via Muthu, an explanation by Umesh Vazirani of the rationale for the new 'summary after the deadline' experiment at FOCS. A key paragraph:
To understand the motivation for the abstract and better picture its contents, think about how often the 20 minute presentation at STOC/FOCS provides a better insight into the research than the paper. While preparing the talk, the authors can step back and try to explain something interesting about their work - either the core of their proof, or a special case of their theorem, or the new conceptual framework that they introduce. The one week period after the mad rush to the STOC/FOCS deadline would provide a chance to reflect, and additionally there would be an incentive for the authors (just as in the conference presentations), to simplify.
I think it's a great idea to try things like this; I do hope it works, though. One of the reasons that papers often get written badly for STOC/FOCS is that there's an incentive to obfuscate and make things look rather technical. Whether the post-deadline calm will allow people to see beyond the chest-thumping will decide whether the 2-page summaries are useful.

Friday, September 26, 2008

Announcements

Monday, June 23, 2008

FOCS 08 list ?

The FOCS 2008 page has a link for accepted papers, but it's empty for now. We know at least 4 of the papers that got in, but does anyone know of a list ?

Update: that was quick! Here it is.

Tuesday, July 24, 2007

STOC/FOCS/SODA: The Cage Match (with data!)

(Ed: This post, and the attendant web page and data, were initiated as a joint effort of Piotr Indyk and myself. Many others have helped improve the data, presentation and conclusions.)

Inspired by Michael Mitzenmacher's flamebait post on SODA/STOC/FOCS, we decided to roll up our sleeves, and resolve the biggest outstanding issue in Theoretical Computer Science, namely the great "STOC/FOCS vs. SODA" debate ("P vs. NP" is a tainted second, sullied by all that money being offered to solve it). We have some interesting preliminary observations, and there are many interesting problems left open by our work ;)

The hypothesis:
There is a significant difference in citation patterns between STOC/FOCS and SODA
The plan:

First, we obtained the list of titles of conference papers appearing in STOC, FOCS and SODA in the last 10 years (1997-2006). We deliberately excluded 2007 because FOCS hasn't happened yet. We got this list from DBLP (Note: DBLP does not make any distinction between actual papers and tutorials/invited articles; we decided to keep all titles because there weren't that many tutorials/invited papers in any case).

For each title, we extracted the citation count from Google Scholar, using a process that we henceforth refer to as "The Extractor". Life is too short to describe what "The Extractor" is. Suffice it to say that its output, although not perfect, has been verified to be somewhat close to the true distribution (see below).

The results, and methodology, can be found at this link. The tables and graphs are quite self-explanatory. All source data used to generate the statistics are also available; you are welcome to download the data and make your own inferences. We'll be happy to post new results here and at the webpage.
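
(For the curious: everything downstream of the raw counts is a few lines of aggregation. The file name and column layout in the sketch below are hypothetical, not the actual files on the results page.)

    # Illustrative aggregation only; the scraping step ("The Extractor") is
    # not shown. Assumes a hypothetical headerless CSV with columns:
    # venue, year, title, citation count.
    import csv
    from collections import defaultdict
    from statistics import median

    counts = defaultdict(list)                    # (venue, year) -> [cites]
    with open("citations.csv") as f:              # hypothetical file name
        for venue, year, _title, cites in csv.reader(f):
            counts[(venue, int(year))].append(int(cites))

    for (venue, year), cites in sorted(counts.items()):
        print(venue, year, "median =", median(cites), "total =", sum(cites))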

OBSERVATIONS:

The main conclusion is that the hypothesis is valid: a systematic discrepancy between citation counts of SODA vs. STOC/FOCS does appear to exist. However, the discrepancy varies significantly over time, with the years 1999-2001 experiencing the highest variations. It is interesting to note that 1999 was the year when SODA introduced four parallel sessions as well as the short paper option.

Although most of the stats for STOC and FOCS are quite similar, there appears to be a discrepancy at the end of the tail. Specifically, the 5 highest per-year citation counts for STOC (from the years 1997-2001) are all higher than the highest citation count for FOCS (which is from 2001). (Note: the highest-cited STOC article in 2001 was Christos Papadimitriou's tutorial paper on algorithms and game theory). The variation between SODA and STOC/FOCS in the 1999-2001 range shows up here too, between STOC and FOCS themselves. So maybe it's just something weird going on in those years. Who knows :)

Another interesting observation comes from separating the SODA cites into long and short paper groups (for the period 1999-2005). Plotting citations for short vs long papers separately indicates that the presence of short papers caused a net downward influence on SODA citation counts, but as fewer and fewer shorts were accepted, this influence decreased.

There are other observations we might make, especially in regard to what happens outside the peak citations, but for that we need more reliable data. Which brings us to the next point.

VALIDATION OF THE DATA:

To make sure that the output makes sense, we performed a few "checks and balances". In particular:
  • we sampled 10 random titles from each of FOCS, STOC and SODA for each of the 10 years, and for each title we checked the citation count by hand. Results: there were 7 mistakes in FOCS, 9 in STOC, and 11 in SODA, indicating a current error rate in the 9-10% range.
  • for each of FOCS, STOC, SODA, we verified (by hand) the values of the top 10 citation numbers, as reported by The Extractor
  • we compared our stats for the year 2000 with the stats obtained by Michael. The results are pretty close:





        Median (Mike's/Ours)    Total over all 10 years (Mike's/Ours)
FOCS    38/38                   3551/3315
STOC    21/21                   3393/2975
SODA    14/13                   2578/2520

A CALL FOR HELP:
Warning: the data displayed here is KNOWN to contain errors (our estimate is that around 10% of citation counts are incorrect). We would very much appreciate any efforts to reduce the error rate. If you would like to help:
  1. choose a "random" conference/year pair (e.g., STOC 1997)
  2. check if this pair has been already claimed in this blog; if yes, go to (1)
  3. post a short message claiming your pair (e.g., "CLAIMING STOC 1997") on the blog.
  4. follow the links to check the citations. For each incorrect citation, provide two lines: (1) paper title (2) a Google Scholar link to the correct paper
  5. Email the file to Suresh Venkatasubramanian.
(and yes, we know that this algorithm has halting problems). Many thanks in advance. Of course, feel free to send us individual corrections as well.

Citation guidelines: The world of automatic citation engines is obviously quite messy, and sometimes it is not immediately clear what the "right" citation count of a paper is. The common "difficult case" is when you see several (e.g., conference and journal) versions of the same paper. In this case, our suggestion is that you ADD all the citation counts that you see, and send us the links to ALL the papers that you accounted for.


Acknowledgements: thanks to Adam Buchsbaum for suggesting the idea of, and generating data for, the short/long split of SODA, and to David Johnson, Graham Cormode, and Sudipto Guha for bug fixes, useful suggestions, and ideas for further plots (which we can look into after the data is cleaned up).

Tuesday, July 17, 2007

SODA vs STOC/FOCS

Michael Mitzenmacher has an interesting post up comparing citation counts for papers at SODA/STOC/FOCS 2000. The results are quite stunning (and SODA does not fare well). For a better comparison, we'd need more data of course, but this is a reasonable starting point.

As my (minor) contribution to this endeavour, here are the paper titles from 1997-2006 (the last 10 years) for STOC, FOCS, and SODA. There might be errors: this was all done using this script. Someone more proficient in Ruby than I am might try to hack this neat script written by Mark Reid for stemming and plotting keyword trends in ICML papers; something I've been hoping to do for STOC/FOCS/SODA papers.
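
(I haven't actually done this, but the counting part is only a few lines. The sketch below assumes a hypothetical tab-separated file with one "year<TAB>title" line per paper, which is not the format of the lists linked above, and it skips the stemming that Mark's script does.)

    # Rough sketch of the keyword-trend idea: count, per year, how many
    # titles contain a given keyword. No stemming, no plotting.
    from collections import Counter

    def keyword_trend(path, keyword):
        trend = Counter()
        with open(path) as f:
            for line in f:
                year, title = line.rstrip("\n").split("\t", 1)
                if keyword.lower() in title.lower():
                    trend[int(year)] += 1
        return dict(sorted(trend.items()))

    print(keyword_trend("soda_titles.tsv", "approximation"))   # hypothetical file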

If there are any Google researchers reading this, is there some relatively painless way, using the Google API, to return citation counts from Google Scholar ? In fact, if people were even willing to do blocks of 10 papers each, we might harness the distributed power of the community to get the citation counts ! If you're so inspired, email me stating which block of 10 (by line number, starting from 1) you're planning to do and for which conference. Don't let Michael's effort be in vain !

There are 681 titles from FOCS, 821 from STOC, and 1076 from SODA.

Wednesday, July 04, 2007

FOCS 07 list out

The list is here. As I was instructed by my source, let the annual "roasting of the PC members" begin !

63 papers were accepted, and on a first look there appears to be a nice mix of topics: it doesn't seem as if any one area stands out. Not many papers are online though, from my cursory random sample, so any informed commenting on the papers will have to wait. People who know more about any of the papers are free to comment (even if you're the author!). Does anyone know the number of submissions this year ? I heard it was quite high.
