
Saturday, February 06, 2016

ITA FTW: Bayesian surprise and eigenvectors in your meal.

I've been lounging in San Diego at my favorite conference, the Information Theory and Applications workshop. It's so friendly that even the sea lions are invited (the Marine Room is where we had the conference banquet).

Sadly this year I was mired in deadlines and couldn't take full advantage of the wonderful talks on tap, or of the more than 720 people who attended. Pro tip, ITA: could you try to avoid the ICML/KDD/COLT deadlines next time :) ?

ITA always has fun events that our more "serious" conferences could learn from. This time, the one event I attended was a Man vs Machine cookoff, which I thought was particularly apropos since I had just written a well-received article with a cooking metaphor for thinking about algorithms and machine learning.

The premise: Chef Watson (IBM's Watson, acting as a chef) designs recipes for dinner (appetizer/entree/dessert) with an assist from a human chef. Basically the chef puts in some ingredients and Watson suggests a recipe (not from a list of recipes, but from its database of knowledge of chemicals, what tastes 'go well together', and so on). This was facilitated by Kush Varshney from IBM, who works on this project.

Each course is presented as a blind pairing of the Watson and human recipes, and it's our job to vote for which one we think is which.

It was great fun. We had four special judges, and each of us had a placard with red and blue sides to cast our votes. After each course, Kush gave us the answer.

The final score: 3-0. The humans guessed correctly for each course. The theme was "unusualness": the machine-generated recipes had somewhat stranger combinations, and because Watson doesn't (yet) know about texture, the machine-recipes had a different mouthfeel to them.

This was probably the only time I've heard the words 'Bayesian surprise' and 'eigenvector' used in the context of food reviews.


Tuesday, May 19, 2015

ITA, or a conference I really enjoy.

Continuing my thoughts on the STOC 2017 reboot, I went back to Boaz's original question:

What would make you more likely to go to STOC?

And thought I'd answer it by mentioning an event that I really enjoy attending. I didn't post it as a comment because it's a little out of scope for the blog post itself: it doesn't make concrete recommendations so much as relay anecdotal evidence. 

The Information Theory and Applications workshop is a workshop: it doesn't have printed proceedings, and it encourages people to present work that has been published (or is under review) elsewhere. Keep that caveat in mind: the structure here might not work for a peer-reviewed venue like STOC. 

Having said that, the ITA is a wonderful event to go to. 
  • It's in San Diego every year in February - what's not to like about that?
  • It runs for 5 days, so it's quite long. But the topics covered change over the course of the 5 days: the early days are heavy on information theory and signal processing, and the algorithms/ML/stats content shows up later in the week. 
  • There are multiple parallel sessions: usually 5. And lots of talks (no posters).
  • There are lots of fun activities. There's an irreverent streak running through the entire event, starting with the countdown clock to the invitations, the comedy show where professional comedians come and make fun of us :), various other goofy events interspersed with the workshop, and tee-shirts and mugs with your name and picture on them. 
  • The talks are very relaxed, probably precisely because there isn't a sense of "I must prove my worth because my paper got accepted here". Talk quality varies as always, but the average quality is surprisingly high, possibly also because the talks are by invitation. 

But the attendance is very high. I think the last time I attended there were well over 600 people, drawn from stats, math, CS, and EE. This had the classic feel of a 'destination workshop' that STOC wants to emulate. People came to share their work and listen to others, and there was lots of space for downtime discussions. 

My assertion is that the decoupling of presentation from publication (i.e. the classical workshop nature of ITA) makes for more fun talks, because people aren't trying to prove a theorem from the paper and feel the freedom to be more expansive in their talks (maybe covering related results, or giving some larger perspective). 

Obviously this would be hard to do at STOC. But I think the suggestions involving posters are one way of getting to this: namely, you get a pat on the back for producing quality research via a CV bullet ("published at STOC") and an opportunity to share your work (the poster). But giving a talk is a privilege (you're occupying people's time for a slice of a day), not a right, and that has to be earned. 

I also think that a commenter (John) makes a good point when they ask "Who's the audience?". I'm at a point where I don't really enjoy 20 minutes of a dry technical talk and I prefer talks with intuition and connections (partly because I can fill in details myself, and partly because I know I'll read the details later if I really care). I don't know if my view is shared by everyone, especially grad students who have the stamina and the inclination to sit through hours of very technical presentations. 




Monday, May 18, 2015

STOC 2017 as a theory festival

Over on Windows on Theory, there's a solid discussion going on about possible changes to the format for STOC 2017 to make it more of a 'theory festival'. As Michael Mitzenmacher exhorts, please do go and comment there: this is a great chance to influence the form of our major conferences, and you can't make a change (or complain about the lack of change) if you're not willing to chime in.

I posted my two comments there, and you should go and read number one and number two. Two things that I wanted to pull out and post here are in the form of a 'meta-suggestion':
1. Promise to persist with the change for a few years. Any kind of change takes time to get used to, and every change feels weird and crazy till you get used to it, after which point it’s quite natural. 
Case in point: STOC experimented one year with a two-tier committee, but there was no commitment to stick to the change for a few years, and I’m not sure what we learned at all from one data point (insert joke about theorists not knowing how to run experiments). 
Another case in point: I’m really happy about the continued persistence with workshops/tutorials. It’s slowly becoming a standard part of STOC/FOCS, and that’s great. 
2. Make a concerted effort to collect data about the changes. Generate surveys, and get people to answer them (not as hard as one might think). Collect data over a few years, and then put it all together to see how the community feels. In any discussion (including this one right here), there are always a few people with strong opinions who speak up, and the vast silent majority doesn’t really chip in. But surveys will reach a larger crowd, especially people who might be uncomfortable engaging in public.

Friday, December 05, 2014

Experiments with conference processes

NIPS is a premier conference in machine learning (arguably the best, or co-best with ICML). NIPS has also been a source of interesting and ongoing experiments with the process of reviewing.

For example, in 2010 Rich Zemel, who was a PC chair of NIPS at the time, experimented with a new system he and Laurent Charlin were developing that would determine the "fit" between a potential reviewer and a submitted paper. This system, called the Toronto Paper Matching System, is now being used regularly in the ML/vision communities.
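Aside: I won't try to describe how TPMS actually computes these scores, but the basic flavor of the problem - scoring how well a reviewer's past writing matches a submission - is easy to illustrate. The little Python sketch below is purely hypothetical: it uses plain bag-of-words cosine similarity over made-up data, whereas the real system uses considerably more sophisticated statistical models.

    from collections import Counter
    from math import sqrt

    def affinity(reviewer_docs, submission_text):
        """Toy reviewer-paper 'fit' score: cosine similarity between a
        bag-of-words profile of the reviewer's past papers and the
        submission's text. Illustrative only; not what TPMS actually does."""
        profile = Counter(w.lower() for doc in reviewer_docs for w in doc.split())
        paper = Counter(w.lower() for w in submission_text.split())
        dot = sum(profile[w] * paper[w] for w in paper)
        norm = sqrt(sum(v * v for v in profile.values())) * sqrt(sum(v * v for v in paper.values()))
        return dot / norm if norm else 0.0

    # Hypothetical data: rank two reviewers by fit for one submission.
    reviewers = {
        "R1": ["streaming algorithms for graph sketches", "lower bounds for sketching"],
        "R2": ["convolutional networks for image segmentation"],
    }
    submission = "linear sketches for streaming graph connectivity"
    print(sorted(reviewers, key=lambda r: affinity(reviewers[r], submission), reverse=True))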

This year, NIPS is trying another experiment. In brief,

10% of the papers submitted to NIPS were duplicated and reviewed by two independent groups of reviewers and Area Chairs.
And the goal is to determine how inconsistent the reviews are, as part of a larger effort to measure the variability in reviewing. There's even a prediction market set up to guess what the degree of inconsistency will be. Also see Neil Lawrence's fascinating blog describing the mechanics of constructing this year's PC and handling the review process.
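How might one quantify "inconsistency"? The organizers will do their own analysis, but the simplest statistic is just the fraction of duplicated papers on which the two committees reached opposite decisions, compared against what two committees accepting at the same rates would produce purely by chance. Here's a minimal Python sketch with made-up accept/reject decisions (the numbers are invented for illustration, not from NIPS):

    def disagreement(decisions_a, decisions_b):
        """Fraction of duplicated papers where the two independent
        committees reached opposite accept/reject decisions."""
        assert len(decisions_a) == len(decisions_b)
        return sum(a != b for a, b in zip(decisions_a, decisions_b)) / len(decisions_a)

    def chance_disagreement(rate_a, rate_b):
        """Expected disagreement if the committees decided independently
        at their observed acceptance rates."""
        return rate_a * (1 - rate_b) + rate_b * (1 - rate_a)

    # Hypothetical decisions for 10 duplicated papers (1 = accept, 0 = reject).
    committee_1 = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
    committee_2 = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
    obs = disagreement(committee_1, committee_2)
    base = chance_disagreement(sum(committee_1) / 10, sum(committee_2) / 10)
    print("observed disagreement: %.2f, chance baseline: %.2f" % (obs, base))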

I quite like the idea of 'data driven' experiments with format changes. It's a pity that we didn't have a way of measuring the effect of the two-tier committee for STOC a few years ago: we had to rely on anecdotes about its effectiveness, and didn't even run the experiment long enough to collect useful data. I feel that every time there are proposals to change anything about the theory conference process, the discussion gets drowned out in a din of protests, irrelevant anecdotes, and opinions based entirely on... nothing, and nothing ever changes.

Maybe there's something about worst-case analysis (and thinking) that makes change hard :).

Tuesday, August 19, 2014

Long Live the Fall Workshop (guest post by Don Sheehy)

An announcement for the Fall Workshop in Computational Geometry, by Don Sheehy


In all the conversation about SoCG leaving the ACM, there were many discussions about ownership, paywalls, and money. This leads naturally to questions of ideals. What can and ought a research community be like? What should it cost to realize this? Isn't it enough to bring together researchers in an unused lecture hall at some university somewhere, provide coffee (and wifi), and create a venue for sharing problems, solutions, and new research in an open and friendly atmosphere? There is a place for large conferences, with grand social events (who will forget the boat cruise on the Seine at SoCG 2011?), but there is also a place for small meetings run on shoestring budgets that are the grassroots of a research community.

The Fall Workshop on Computational Geometry is such a meeting. It started in 1991 at SUNY Stony Brook and has been held every fall since. I first attended a Fall Workshop during my first year of graduate school, back in 2005. This year marks the 24th edition of the workshop, and this time, I will be hosting it at the University of Connecticut. It is organized as a labor of love, with no registration fees. There are no published proceedings and it is a great opportunity to discuss new work and fine-tune it in preparation for submission. It is perfectly timed to provide a forum for presenting and getting immediate feedback on your potential SoCG submissions. I cordially invite you to submit a short abstract to give a talk and I hope to see you there.

Important dates:
Submission deadline: Oct 3 midnight (anywhere on earth)
Conference: Oct 31-Nov 1, 2014. 


Wednesday, October 02, 2013

Call for tutorials at SIAM Data Mining

I'm the tutorials chair for the 2014 SIAM conference on data mining (SDM) (for readers not aware of the data mining landscape, SDM is a SODA-style data mining conference run by SIAM - i.e. it's a CS-style venue with peer-reviewed papers, rather than a math-style conference like SIAM Discrete Math). SDM is one of the major venues for data mining research, especially the more statistically focused kind. It will be held in Philadelphia, Apr 24-26, 2014.

SDM runs 4-5 tutorials each year on various aspects of data mining (ranging from theory/algorithms to specific application areas). I'm personally very interested in encouraging submissions from people in the theory community working with data who might want to share their expertise about new methods/techniques from theoryCS land with the larger data analysis community.

The deadline is Oct 13, and all you need is a few pages describing the tutorial content, target audience, and some sample material (more details here). If you have an idea and are not sure whether it will fit, feel free to email me directly as well.

Friday, July 05, 2013

FSTTCS in December

At this point, you're probably wondering: exactly how much more coffee do I need to infuse into my system to get my 1/3/10  papers submitted to SODA before the deadline ? Do your stomach lining (and your adenosine receptors) a favor and consider submitting to FSTTCS: the abstracts deadline is Jul 8, and the submission deadline is July 15. You get to go to Guwahati in December and you might even get to stay here:



Wednesday, June 26, 2013

SODA 2014 abstract submission deadline approaching

+Chandra Chekuri informs us that the abstract submission deadline for SODA 2014 is approaching rapidly. Abstracts MUST be in by July 3 or else you cannot submit a full paper (by Jul 8).

For more information, visit the SODA site: http://siam.org/meetings/da14/submissions.php

Thursday, January 19, 2012

SODA Review II: The business meeting

Jeff Phillips posts a roundup of the SODA business meeting. Hawaii !!!!!!

I thought I would also post a few notes on the SODA business meeting. I am sure I missed some details, but here are the main points.

Everyone thought the organization of the conference was excellent (so far). The part in parentheses is a joke by Kazuo Iwama directed at his students - I guess that is Japanese humor, and encouragement.

Despite being outside of North America for the first time, the attendance was quite high, I think around 350 people. And the splits were almost exactly 1/3 NA, 1/3 Europe, 1/3 Asia.

Yuval Rabani talked about being PC chair for the conference. He said there were the most submissions ever, the most accepted papers ever, and the largest PC ever for SODA. Each PC member reviewed about 49 papers, and over 500 were sub-reviewed. We all thank Yuval for all of his hard work.

We voted next on the location for 2014 (2013 is in New Orleans). The final vote came down to Honolulu, HI and Washington, DC, with Honolulu winning about 60 to 49. David Johnson said they would try to book a hotel in Honolulu if he could get hotel prices below 200 USD/night. A quick look on kayak.com made it appear that several large hotels could be booked next year around this time for about 160 USD/night or so. Otherwise it will be in DC, where the theory group at the University of Maryland (via David Mount) has stated they would help with local arrangements. They did a great job with SoCG a few years ago, but I heard many suggestions that it be held closer to downtown than to the campus. And there were also requests for good weather. We'll see what happens...

Finally, there was a discussion about how SODA is organized/governed. This discussion got quite lively. Bob Sedgewick led the discussion by providing a short series of slides outlining a rough plan for a "confederated SODA." I have linked to his slides. This could mean several things, for instance:
  • Having ALENEX and ANALCO (and SODA) talks spread out over 4 days and intermixed, possibly even in the same session (much like ESA).
  • The PCs would stay separate most likely (although merging them was discussed, but this had less support). 
  • For SODA, the PC could be made more hierarchical, with, say, 6 main area chairs. Each area chair would supervise, say, 12 or so PC members. The general chair would coordinate and normalize all of the reviews, but otherwise the process would be more hierarchical and partitioned. PC members in each area would then have fewer papers to review, and could even submit to other subareas. 
  • There was also a suggestion that PC chairs / steering committee members have some SODA attendance requirements. (Currently the committee consists of David Johnson, 2 people appointed by SIAM, and the past two PC chairs - as I understand it. David Johnson said he would provide a link to the official SODA bylaws somewhere.) 
Anyways, there was a lot of discussion that was boiled down to 3 votes (I will try to paraphrase, all vote totals approximate):
  • Should the steering committee consider spreading ALENEX, ANALCO, and SODA talks over 4 days? About 50 to 7 in favor. 
  • Should the steering committee consider/explore some variant of the Confederated SODA model? About 50 to 2 in favor.
  • Should the steering committee consider making the steering committee members elected? About 50 to 1 in favor. 
There were about 100+ votes for location, so roughly half the crowd abstained on each of these three votes. There were various arguments on either side of the positions, and other suggestions. Some people had very strong and well-argued positions on these discussion points, so I don't want to try to paraphrase (and probably get something nuanced wrong), but I encourage people to post opinions and ideas in the comments.

Wednesday, January 18, 2012

SODA review I: Talks, talks and more talks.

I asked Jeff Phillips (a regular contributor) if he'd do some conference posting for those of us unable to make it to SODA. Here's the first of his two missives.

Suresh requested I write a conference report for SODA. I never know how to write these reports since I always feel like I must have left out some nice talk/paper and then I risk offending people. The fact is, there are 3 parallel sessions, and I can't pay close attention to talks for 3 days straight, especially after spending the previous week at the Shonan meeting that Suresh has been blogging about.

Perhaps it is apt to contrast it with the Shonan meeting. At Shonan there were many talks (often informal, with much back and forth) on topics very well clustered around "Large-scale Distributed Computation". Several talks earlier in the workshop just laid out the main techniques that have become quite powerful within an area, and then there were new talks on recent breakthroughs. But although we mixed up the ordering of subtopics a bit, there was never that far of a context switch, and you could see larger views coalescing in people's minds throughout the week.

At SODA, the spectrum is much more diverse - probably the most diverse of any conference on the theoretical end of computer science. The great thing is that I get to see colleagues across a much broader spectrum of areas. But the talks are often a bit more specific, and despite the sessions usually being fairly coherent, the context switches are typically quite a bit larger and it seems harder to stay focused enough to really get at the heart of what is in each talk. Really getting the point requires both paying attention and being in the correct mindset to start with. Also, there are not too many talks in my areas of interest (i.e. geometry, big-data algorithmics).



So then what is there to report? I've spent most of my time in the hallways, catching up on gossip (which either is personal, or I probably shouldn't blog about without tenure - or even with tenure), or discussing ongoing or new research problems with friends (again, not yet ready for a blog). Of the talks I saw, I generally captured vague notions or concepts, usually stored away for when I think about a related problem and need to make a similar connection, or look up a technique in the paper. And although I was given a CD of the proceedings, my laptop has no CD drive. For the reasons discussed above, I rarely completely get how something works from a short conference talk. Here are some example snippets of what I took away from a few talks:

Private Data Release Via Learning Thresholds | Moritz Hardt, Guy Rothblum, Rocco A. Servedio.
Take-away : There is a deep connection between PAC learning and differential privacy. Some results from one can be applied to the other, but perhaps many others can be as well.
Submatrix Maximum Queries in Monge Matrices and Monge Partial Matrices, and Their Applications | Haim Kaplan, Shay Mozes, Yahav Nussbaum and Micha Sharir
Take-away: There is a cool "Monge" property that matrices can have which makes many subset query operations more efficient. This can be thought of as each row representing a pseudo-line. Looks useful for matrix problems where the geometric intuition about what the columns mean is relevant. 
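Aside: for readers who haven't run into the term, the textbook Monge condition (not anything specific to this paper) says that every 2x2 submatrix is "sorted the right way":

    \[
      M[i,j] + M[i',j'] \;\le\; M[i,j'] + M[i',j]
      \qquad \text{for all } i < i',\ j < j'.
    \]

(The "inverse Monge" variant reverses the inequality.)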
Analyzing Graph Structure Via Linear Measurements | Kook Jin Ahn, Sudipto Guha, Andrew McGregor
Take-away : They presented a very cool linear sketch for graphs. This allows several graph problems to be solved in a streaming model (or similar models), in the way that more abstract, if not geometric, data usually is. (ed: see my note on Andrew's talk at Shonan)

LSH-Preserving Functions and Their Applications | Flavio Chierichetti, Ravi Kumar
Take-away: They present a nice characterization of similarities (based on combinatorial sets), showing which ones can and cannot be used within an LSH framework. Their techniques seemed to be a bit more general than just these discrete similarities over sets, so if you need to use this for another similarity it may be good to check the paper out in more detail. 
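Aside: the standard notion being characterized here (not specific to this paper) is that a similarity $S$ admits an LSH if there is a distribution $\mathcal{H}$ over hash functions whose collision probability reproduces $S$ exactly:

    \[
      \Pr_{h \in \mathcal{H}}\bigl[\, h(x) = h(y) \,\bigr] \;=\; S(x, y) \qquad \text{for all } x, y.
    \]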

Data Reduction for Weighted and Outlier-Resistant Clustering | Dan Feldman, Leonard Schulman
Take-away: They continue to develop the understanding of what can be done for coresets using sensitivity-based analysis. This helps outline not just which functions can be approximated with subsets as proxies, but also how the distribution of points affects these results. The previous talk by Xin Xiao (with Kasturi Varadarajan, on A Near-Linear Algorithm for Projective Clustering Integer Points) also used these concepts. 

There were many other very nice results and talks that I also enjoyed, but the take-away was often even less interesting to blog about. Or sometimes they just made more progress towards closing out a specific subarea. I am not sure how others use a conference, but if you are preparing your talk, you might consider building a clear, concise take-away message into it, so that people like me with finite attention spans can remember something precise out of it - and are more likely to look carefully at the paper the next time we work on a related problem.

Thursday, November 10, 2011

Computational Geometry at the joint math meetings

The Joint Mathematics Meetings  is billed as the "largest annual mathematics meeting in the world". In 2012 it's in Boston between Jan 4 and 7, and I'm told that there will be over 2000 presenters.

Happily, some of these will be geometers.
 There will also be an all-day AMS special session on Computational and Applied Topology (also on Thursday).

So there's lots to do at the JMM if you're interested in geometry. If you're in the area in January, drop by !

Monday, September 05, 2011

FOCS 2011 Registration open

Marek Chrobak informs me that FOCS 2011 registration is open. The conference is being held from Oct 22-25 in Palm Springs CA, home of the Palm Springs Aerial Tramway that does a record 8500 ft (2600 m) elevation gain to give you views of the valley.

When you're not being made dizzy by the views, attend a tutorial or two or three ! Cynthia Dwork, Kirk Pruhs and Vinod Vaikuntanathan are the headliners for a day of tutorials on Saturday Oct 22. Shafi Goldwasser will be awarded the 2011 IEEE Emmanuel R. Piore Award, given for "outstanding contributions in the field of information processing, in relation to computer science".

I'm told there are also some talks, including some that smash through barriers for the k-server conjecture, metric TSP (in graphs) (twice over), and exact 3SAT.

Register now so that local organizers don't go crazy (I can assure you that they do :)). The deadline is Sep 29 for cheap rates on hotels and registration. And if you'd like to be a FOCS 2011 reporter and write a set of blog posts summarizing the conference for cstheory.blogoverflow.com, please add a note to this post.

Sunday, August 28, 2011

A way forward on reformatting conferences

In some circles, it's a badge of honor to attend as few talks at a conference as possible. Some of the usual comments are:
  • "I go to meet people, not attend talks"
  • "All the interesting conversations happen in the hallways"
  • "Talks are too difficult to follow: I'd rather read the paper or just ask the author"
  • "I read this paper 6 months ago when it appeared on the arxiv: it's old news now, and there are improvements"
You have to wonder why anyone gives talks at a conference any more ! And Moshe Vardi asks this question in a CACM note. Riffing off of "a conference is a journal held in a hotel" (attributed to Lance Fortnow, who attributes it to Ed Lazowska), he talks about the low quality of conference talks, and suggests ways to improve them.

But there's a certain 'band-aid on a bleeding carcass' aspect to this discussion. Indeed, between overloaded reviewers, authors who need the imprimatur of a prestigious conference, and registration fees that skyrocket as meetings get longer, it almost seems like this system is heading for a nervous breakdown.

But there are a number of experiments in play that point the way towards a gentler, kinder conference system (even if we decide not to grow up). In this G+ discussion, Fernando Pereira and Zach Ives describe two models that put together address the main problems with our conference process.

NIPS receives over 1400 submissions, and accepts a small fraction (a little over 20%). All papers are presented as posters (with a few special talks). This does two things:
  1. It removes artificial limits on number of papers accepted based on conference duration. Posters are presented in (semi)-parallel. 
  2. It eliminates the "20-minutes of droning with no questions" style of many conference talks. Posters are a much more interactive way of presenting material, and it's easier to skim papers, talk to the authors, and have a good discussion. The papers are still in the proceedings, so you can always "read the paper" if you want. As an aside, it really helps with communication skills if you have to answer questions on the fly. 
VLDB has now moved to a journal-based submission process. There's a deadline each month for submitting papers for review. The review process is fairly quick: 45 days or so, with enough time for back and forth with the authors. Accepted papers are published in the proceedings, and while I'm not sure exactly how the conference selects talks for presentation, it's possible that all accepted papers are then presented. The main advantages of this process:
  1. There isn't a huge burst of submissions, followed by a draining review process. Reviews are spread out over the year. Moreover, area chairs are used to partition papers further, so any (P)VLDB reviewer only gets a few papers to review each month. This can only improve the quality of reviews.
  2. The journal-style back-and-forth makes papers better. Authors can make changes as recommended, rather than trying to defend their choices in an often-contentious rebuttal process. 
Between these two systems, we have a better review process for papers, and a better way of delivering the content once reviewed. Why not combine them ? 


Friday, June 24, 2011

Workshop on Coding, Complexity and Sparsity

Anna Gilbert, S Muthukrishnan, Hung Ngo, Ely Porat, Atri Rudra, and Martin Strauss are organizing a workshop on coding, complexity and sparsity at Ann Arbor in August, and are soliciting participation for a Dagstuhl-style event. Here's the blurb:


Workshop on Coding, Complexity and Sparsity
Ann Arbor, Aug 1--4, 2011. 
Organized by Anna Gilbert, S Muthukrishnan, Hung Ngo, Ely Porat, Atri Rudra, Martin Strauss.

There has been a considerable amount of fruitful interaction between coding and complexity theories, and a fledgling interaction between sparse representation and coding theories. We believe that academic research will be far richer exploring more of the connections between sparse representation and coding theories, as well as between sparse representation and complexity theories. Could there be a general structural complexity theory of sparse representation problems, and could techniques from algorithmic coding theory help sparse representation problems? 
The plan is to get researchers from different areas together in a Dagstuhl-like setting and explore the question. There will be tutorials, talks and time to research. Registration is free and the deadline is July 15th. We have limited funding to (partially) support the participation of some graduate students. 
For more details on the workshop (including the tutorials, list of invited speakers and how to apply for participation support), please go to the workshop webpage.

Thursday, June 23, 2011

MADALGO Summer School

There are lots of good things happening at MADALGO. The institute was recently renewed for another 5 years, they recently organized the now-regular MASSIVE workshop in conjunction with SoCG (maybe it should also be organized in conjunction with SIGMOD or GIS ?), and now they're organizing a summer school on high dimensional geometric computing in conjunction with the new Center for the Theory of Interactive Computation (CTIC) run jointly by Aarhus and Tsinghua.

There's a stellar list of lecturers:
  • Alexandr Andoni (Microsoft Research Silicon Valley)
  • Ken Clarkson (IBM Research)
  • Thomas Dueholm Hansen (Aarhus University)
  • Piotr Indyk (MIT)
  • Nati Linial (Hebrew University)
and they're soliciting participation. Here's the blurb:
The summer school will take place on August 8-11, 2011 at Center for Massive Data Algorithmics (MADALGO) and Center for the Theory of Interactive Computation (CTIC) in the Department of Computer Science, Aarhus University, Denmark.
The school is targeted at graduate students, as well as researchers interested in an in-depth introduction to high-dimensional geometric computing.
The capacity of the summer school is limited. Prospective participants should register using the online registration form available here as soon as possible. Registering graduate students must also have their supervisor send a letter confirming their graduate student status directly to Gerth Brodal: gerth@madalgo.au.dk; the subject line of the email should be 'student_last_name/SS_2011/confirming'. Registration is on a first-come-first-serve basis and will close on Monday July 4, 2011.
Registration is free; handouts, coffee breaks, lunches and a dinner will be provided by MADALGO, CTIC and Aarhus University.
 I'd be remiss if I didn't point out that any MADALGO-sponsored event will also have rivers of beer, if you needed more incentive to attend. 

Saturday, July 03, 2010

Tools every modern conference needs

(while I procrastinate on my SODA edits)

  • Crowdvine: an all-encompassing social network solution for conferences that includes schedule planning, networking, activity monitors (who's coming to my talk) etc
  • A paper discussion site (could be within crowdvine, or even something like Mark Reid's ICML discussion site)
  • A good paper submission/review manager like HotCRP (not CMT !)
  • Videolectures.net
  • Facebook/twitter/blog for official announcements. 
  • At the very least, a website with all the papers for download. If necessary, zip or torrent links for downloading proceedings (especially if videos are involved). 
And no, we did none of these for SoCG 2010. But a guy can dream, no ? While I'd be shocked to see crowdvine at any theory conference (price, culture, you name it), I think videolectures.net would be a valuable resource, and many of the rest require time but are otherwise free. 

Thursday, May 20, 2010

ICS 2011

Bernard highlights the 2nd incarnation of ICS, to be held again in Beijing between Jan 7-9, 2011.

What's newsworthy is the shifted submission deadline. Last year, the ICS submit-accept cycle was highly compressed, to make sure it didn't clash with either SODA or STOC. This year, it's in direct conflict with SODA (submission deadline Aug 2), which should make things interesting for the SODA submission levels.

Since the conference is still in some flux (I don't know where it will be next year), it's probably too soon to comment on the timing/deadlines, but I wonder whether it will continue to be a good idea to have SODA and ICS be in direct conflict.


Wednesday, May 12, 2010

SoCG 2010: Come one, come all :)

There's about a month left to go for SoCG, and I just returned from a walk-through at Snowbird. Surprisingly, it was still snowing - the ski season has wound down though. Most of the snow will disappear by early June - we're seeing the last confused weather oscillations right now before the steady increase in temperature.

We went up there to check out the layout of the room(s) for the conference - the main conference room is nice and large, and the parallel session room is pretty big as well, there'll be wireless access throughout (with a code), and there's a coffee place right next to the rooms. There are nice balconies all around, and of course you can wander around outside as well.

Registrations have been trickling in, a little slower than my (increasingly) gray hair would like. If you haven't yet registered, consider this a gentle reminder :). It helps to have accurate numbers when estimating food quantities and number of proceedings etc.

See you all in a month !

Tuesday, March 30, 2010

Why Conference Review Must End !

Exhibit A: Matt Welsh's post on how PC deliberations happen.

Notes:
  • Yes, I realize it's partly tongue-in-cheek. But it's not far from the truth !
  • No, going to all-electronic meetings doesn't solve the problem. It merely replaces one set of group dynamics with another.
  • Yes, we can't hope to remove irrational biases in the review process. That's why all we can hope for is to force them to be exposed and questioned. A back-and-forth between author and reviewer can help do that. 
  • And no, it's not true that "theory PCs are much better". 
I've been to NSF panels where the program manager does an excellent job of forcing people to state precisely what they mean by "interesting", "infeasible", "novel contribution" and other such weasel-words. When that happens, it's a bit easier to assess the contribution. One could imagine enlightened PC chairs doing this at paper review time, but there's really no time, given the number of papers that need to be processed in 2 days or so. 

Tuesday, January 26, 2010

Author Feedback, or "Conference Review process considered harmful"

Author feedback is the latest attempt to put a band-aid on the bleeding carcass of the conference review process. We had author feedback at SoCG, and it's a common feature at many other conferences. The ostensible purpose of author feedback is to allow authors to clarify any misconceptions/confusions the reviewer might have so as to make the review process a bit more orderly (or less random?).

Usually, the process works like this: reviewers submit their reviews and have the option of requesting clarification on specific points from the authors. Authors get the questions, are required to submit a rebuttal/response by a certain date, and then deliberation continues. Variations on this include:
  • Length of the author response
  • When it's asked for (before discussions start, or after)
  • Whether it's called a 'rebuttal' or a 'response' or even just 'feedback' - I think the choice of word is significant
  • Whether the reviewers' current scoring for the paper is revealed or not.
While a good idea in principle, it can cause some headache for program committees, and often devolves into a game of cat and mouse: the reviewer carefully encrypts their questions so as not to tip their hand, the author tries to glean the reviewers' true intent from the questions, while trying to estimate which reviewer has the knife in, and so on and so forth.

What I want to rant about is the author feedback system for a conference I recently submitted to. The reviews came back long and vicious: as far as one reviewer is concerned, we should probably go and hide under a rock for the rest of our pathetic (and hopefully short) lives.

That doesn't bother me as much as it used to - I've grown a thick hide for these sorts of things ;). However, a combination of things has sent me into a fury:
  • The reviewer is actually wrong on most counts. This isn't a matter of disagreeing over motivation, relevance, etc. It's just a basic "please read section 5, column 1, sentence 3" type problem.
  • The author feedback limit is 2048 characters (which is a rather tiny amount if you're counting at home)
There's a basic issue of fairness here. Why does a reviewer get to go off on a rant for pages, while we have to limit our response to essentially sentences of the form "Yes. No. Maybe" ? Especially when the reviewer is basically wrong on a number of points, it takes a while to document the inaccuracies. At the very least, we should get as many characters in our response as the reviewers got in theirs ! (point of note: the set of reviews was 11225 characters long, and the specific reviewer I'm complaining about had a 2500-character-long review)

This paper is not getting in, no matter what we say: that much is clear. I've almost never heard of a paper successfully rebutting the reviews, and in all fairness the other reviewers have issues that are matters of opinion and can't be resolved easily. That is a little disappointing, but perfectly fine within the way the review process works. But I'm annoyed that there's no good way to express my dissatisfaction with the reviewing short of emailing the PC chair, and it's not clear to me that this does any good anyway.

Overall, I think that author feedback in the limit gets us to journal reviews, which is a good thing (and my colleague actually suggested that conference reviewing should have more rounds of author feedback and less time for actual paper reviewing). But the way it's done right now, it's hard to see it as anything other than 'reviewing theater', to borrow a Bruce Schneier term. It looks nice, and might make authors and PCs feel good, but has little value overall.

Update: in case it was implied, this conference is NOT SoCG :)
