Showing posts with label potd.

Tuesday, April 19, 2011

POTD: Witnessed k-distance

Abstract:
The distance function to a compact set plays a central role in several areas of computational geometry. Methods that rely on it are robust to perturbations of the data by Hausdorff noise, but fail in the presence of outliers. The recently introduced distance to a measure offers a solution by extending the distance-function framework to reasoning about the geometry of probability measures, while maintaining theoretical guarantees about the quality of the inferred information. A combinatorial explosion hinders working with distance to a measure as an ordinary (power) distance function. In this paper, we analyze an approximation scheme that keeps the representation linear in the size of the input, while maintaining guarantees on the inference quality close to those for the exact (but costly) representation.

Notes:
The idea of defining distances between (or to) shapes robustly by replacing the shape by a distribution is near to my heart (see this paper for more). The authors provide an algorithmic perspective on a formulation first proposed by Chazal, Cohen-Steiner and Mérigot. The idea is that instead of constructing a shape from a point cloud by growing balls of fixed radius and measuring the distance to this union of balls, one constructs a measure by growing balls out to a fixed measure, and then defines a distance. The distance to a measure has the nice property of being Lipschitz with respect to the Wasserstein distance (EMD) between measures, which makes it stable. In a manner reminiscent of the Aurenhammer et al. work, they relate this distance to a power diagram construction, and then design efficient approximations for it (because the exact version involves terms exponential in the thresholding parameter).
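
For concreteness: for the uniform (empirical) measure on a point cloud with mass parameter m = k/n, the distance to a measure at a query point is just the root-mean-square distance to its k nearest neighbors. Here's a minimal Python sketch of that exact k-distance (the function names are mine; this is the costly exact object, not the witnessed approximation the paper analyzes):

```python
import numpy as np

def distance_to_measure(query, points, k):
    """Empirical distance to the uniform measure on `points` with mass
    parameter m = k / len(points): the root-mean-square distance from
    `query` to its k nearest neighbors. (Exact k-distance; a sketch,
    not the witnessed approximation from the paper.)"""
    sq_dists = np.sum((points - query) ** 2, axis=1)  # squared distances to all points
    knn = np.partition(sq_dists, k - 1)[:k]           # k smallest squared distances
    return np.sqrt(knn.mean())

# Toy usage: a lone far-away outlier can't drag the measure with it.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(size=(100, 2)), [[50.0, 50.0]]])
print(distance_to_measure(np.array([0.0, 0.0]), cloud, k=10))    # small: dense cluster nearby
print(distance_to_measure(np.array([50.0, 50.0]), cloud, k=10))  # large: only one point out there
```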

It seems to me that they're going to a lot of work to recover from what I think is a tactical problem: fixing a sharp threshold for the ball measure. It might be interesting to explore alternate ways of defining the measure that are "smoother" - maybe using a kernel instead of a hard measure boundary. That might yield an alternative measure that serves the same purpose but is easier to compute.

Sunday, April 10, 2011

POTD: A Unified Approach to Approximate Proximity Searching

A Unified Approach to Approximate Proximity Searching
Sunil Arya, Guilherme D. da Fonseca, and David M. Mount

Abstract:
The inability to answer proximity queries efficiently for spaces of dimension d > 2 has led to the study of approximation to proximity problems. Several techniques have been proposed to address different approximate proximity problems. In this paper, we present a new and unified approach to proximity searching, which provides efficient solutions for several problems: spherical range queries, idempotent spherical range queries, spherical emptiness queries, and nearest neighbor queries. In contrast to previous data structures, our approach is simple and easy to analyze, providing a clear picture of how to exploit the particular characteristics of each of these problems. As applications of our approach, we provide simple and practical data structures that match the best previous results up to logarithmic factors, as well as advanced data structures that improve over the best previous results for all aforementioned proximity problems.
Notes:
When a problem becomes interesting, papers get written quickly, and techniques start pouring out of the firehose. Not all tricks are needed, and not all machinery is effective, but cleanup only comes later, once the frenzy has settled. Approximate nearest neighbor research is like this: there are many tricks, and lots of machinery, but there are also some core ideas that keep showing up, and are sufficient for many variants of the problem.

Near-neighbor searching in low dimensions is "easy" if you're given data that's uniformly sampled. Simple divide-and-conquer gives balanced search trees, and thus low query times. The problem comes when you have regions of sparsity. Informally, such regions prevent you from making progress as you divide the space up, so root-to-leaf path lengths increase, and query times with them.
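
To see the sparsity problem concretely, here's a toy Python sketch (the setup is entirely mine, just for illustration) that measures the depth of a plain, uncompressed quadtree: uniform data stays shallow, while a single tightly-spaced pair of points forces a long chain of nearly empty cells - precisely the kind of path a compressed quadtree shortcuts.

```python
import numpy as np

def quadtree_depth(points, lo, hi, leaf_size=1, depth=0):
    # Depth of a plain (uncompressed) quadtree over `points` in the box
    # [lo, hi)^2: split into quadrants until each cell has <= leaf_size points.
    if len(points) <= leaf_size:
        return depth
    mid = (lo + hi) / 2.0
    best = depth
    for dx in (0, 1):
        for dy in (0, 1):
            new_lo = np.array([lo[0] if dx == 0 else mid[0],
                               lo[1] if dy == 0 else mid[1]])
            new_hi = np.array([mid[0] if dx == 0 else hi[0],
                               mid[1] if dy == 0 else hi[1]])
            inside = np.all((points >= new_lo) & (points < new_hi), axis=1)
            if inside.any():
                best = max(best, quadtree_depth(points[inside], new_lo, new_hi,
                                                leaf_size, depth + 1))
    return best

rng = np.random.default_rng(1)
uniform = rng.uniform(0, 1, size=(256, 2))
clustered = np.vstack([[0.0, 0.0], [1e-9, 1e-9], rng.uniform(0, 1, size=(254, 2))])
print(quadtree_depth(uniform, np.zeros(2), np.ones(2)))    # modest depth
print(quadtree_depth(clustered, np.zeros(2), np.ones(2)))  # ~30 levels just to split the tight pair
```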

While the ANN problem in low dimensions is "solved", in that there's a fairly good theoretical understanding of the bounds and tradeoffs between query time and storage, the methods themselves are quite complex. I learnt this first-hand when reading the Har-Peled paper that uses approximate Voronoi diagrams for near-neighbor search, in an attempt to reverse-engineer the method for a different setting.

The beauty of the POTD is that it starts with a very simple data structure - the compressed quadtree. It shows that this structure can be used to isolate "sparse" and "dense" regions of space, and uses a hybrid strategy for processing them: coresets to reduce size in dense regions, and optimized data structures for sparse regions (which necessarily contain only a few points). While the paper itself has no experimental results, I'd imagine that this approach would lend itself far more easily to experimentation.

Friday, February 04, 2011

POTD: Reproducing Kernel Banach Spaces with the ℓ1 Norm

Reproducing Kernel Banach Spaces with the ℓ1 Norm
Guohui Song, Haizhang Zhang, and Fred J. Hickernell

Abstract:
Targeting at sparse learning, we construct Banach spaces B of functions on an input space X with the properties that (1) B possesses an ℓ1 norm in the sense that it is isometrically isomorphic to the Banach space of integrable functions on X with respect to the counting measure; (2) point evaluations are continuous linear functionals on B and are representable through a bilinear form with a kernel function; (3) regularized learning schemes on B satisfy the linear representer theorem. Examples of kernel functions admissible for the construction of such spaces are given.

Notes:
This one probably requires some explanation for the non-ML folks. Reproducing Kernel Hilbert spaces are the coin of the realm in machine learning, and for good reason. They allow much of ML to be "ported" from linear classifiers to nonlinear ones: the kernel mapping essentially linearizes (via lifting) the nonlinear classifiers, so you get the benefit of the nonlinearity while operating algorithmically in a linear world. Even though the induced Hilbert space is typically a function space and therefore infinite-dimensional, the representer theorem allows us in most cases to operate in a finite-dimensional space (where the dimension is bounded by the number of samples). From a metric embedding perspective, kernels completely characterize the class of metrics isometrically embeddable in Euclidean space.
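
As a concrete reminder of what the representer theorem buys you in the Hilbert setting, here is a bare-bones kernel ridge regression in Python (a generic textbook sketch, not code from the paper): the RKHS is infinite-dimensional, but the minimizer is a combination of kernels centered at the n samples, so all we ever solve is an n x n linear system.

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """Kernel ridge regression with a Gaussian kernel. By the representer
    theorem the RKHS minimizer is f(x) = sum_i alpha_i k(x_i, x), so fitting
    reduces to an n x n linear solve. (Generic sketch, not from the paper.)"""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                                  # Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)     # representer coefficients
    return alpha

def kernel_ridge_predict(alpha, X_train, X_new, gamma=1.0):
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq) @ alpha                       # f(x) = sum_i alpha_i k(x_i, x)

# Toy usage: fit a nonlinear 1-d function while only doing "linear" work.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = kernel_ridge_fit(X, y)
print(kernel_ridge_predict(alpha, X, np.array([[0.5]])))     # roughly sin(0.5)
```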

So RKHSs are great. So what's the deal with this paper? What it tries to do is combine the power of RKHSs with the regularity and sparsity properties guaranteed by $\ell_1$ norms. Even though your typical Banach space doesn't admit an inner product (which is what you need for the kernel mapping), they show that you can define special Banach spaces in which kernels can be defined as before and the representer theorem holds, but where you get sparse bases for solutions because of the nice $\ell_1$ properties.
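
To get a feel for why the $\ell_1$ norm is the property one wants, here's a toy Python experiment - emphatically not the paper's RKBS construction, just plain $\ell_1$-regularized fitting of kernel-expansion coefficients by iterative soft thresholding - where the thresholding step zeroes out coefficients, so the fitted expansion uses only a subset of the kernel centers.

```python
import numpy as np

def lasso_ista(A, y, lam, steps=20000):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft
    thresholding (ISTA). Generic sketch; nothing paper-specific here."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the least-squares term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])
K = np.exp(-(X - X.T) ** 2)                    # Gaussian kernel Gram matrix (1-d data)
coef = lasso_ista(K, y, lam=0.5)
print(np.count_nonzero(np.abs(coef) > 1e-8), "of", len(coef), "kernel coefficients survive")
```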

I promised SHORT summaries, so I'm not going to go further. But the takeaway message here is the ability to extend the nice properties of RKHSs to Banach spaces. For completeness, I should mention that there are other approaches that have tried to do this, using different mathematical constructs that are in some ways less well suited.

Wednesday, February 02, 2011

POTD: Continuous Local Search

Inspired by Oded Goldreich's "my choices" page, and my constant and futile attempts to read more, I'm going to attempt to write short (SHORT!) posts summarizing papers that I've been reading lately, with some thoughts. Each such post will be tagged 'potd' for 'paper of the day', in the spirit of John Baez - which is to say, not a paper each day, but a paper on the day the post is made :).

Continuous Local Search
C. Daskalakis and C. Papadimitriou
SODA 2011.

Abstract:
We introduce CLS, for continuous local search, a class of polynomial-time checkable total functions that lies at the intersection of PPAD and PLS, and captures a particularly benign kind of local optimization in which the domain is continuous, as opposed to combinatorial, and the functions involved are continuous. We show that this class contains several well known intriguing problems which were heretofore known to lie in the intersection of PLS and PPAD but were otherwise unclassifiable: Finding fixpoints of contraction maps, the linear complementarity problem for P matrices, finding a stationary point of a low-degree polynomial objective, the simple stochastic games of Shapley and Condon, and finding a mixed Nash equilibrium in congestion, implicit congestion, and network coordination games. The last four problems belong to CCLS, for convex CLS, another subclass of PPAD $\cap$ PLS seeking the componentwise local minimum of a componentwise convex function. It is open whether any or all of these problems are complete for the corresponding classes.
Notes:
There are many iterative schemes that get used in practice for which time to convergence is unknown. The Weiszfeld algorithm for computing 1-medians is one such method. There are also alternating optimization schemes like k-means, EM and ICP that have resisted convergence analysis for a while - although we now have a pretty good understanding of the behavior of k-means. Classes like CLS appear to capture some numerical iterative schemes like gradient descent, and it might be interesting to establish connections (via reductions) between such iterative methods and other problems of a more game-theoretic nature that appear to crop up in this class.
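
Since the Weiszfeld algorithm came up: here it is in full, as a reminder of how simple these iterative schemes are to state even when their time to convergence is hard to pin down (a minimal sketch, unrelated to the paper's machinery).

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-12):
    """Weiszfeld's iteration for the 1-median (geometric median): repeatedly
    replace the current estimate by the inverse-distance-weighted average of
    the points. Minimal sketch; the degenerate case of landing exactly on an
    input point is papered over with `eps`."""
    x = points.mean(axis=0)                     # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(d, eps)            # inverse-distance weights
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(weiszfeld(pts))   # pulled toward the cluster of three nearby points, unlike the centroid
```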

One catch, though, is that while the reductions are polynomial time, the only requirement is that a solution to one problem map back to a solution for the other. So it's not clear that reductions in CLS or convex CLS preserve "time to convergence" - in fact it seems unlikely.
