A sample of recent publications from the ICME community is listed below.

Active Learning for Accurate Estimation of Linear Models

Co-authored by Carlos Riquelme (ICME PhD candidate), Mohammad Ghavamzadeh, and Alessandro Lazaric

We explore the sequential decision-making problem where the goal is to estimate a number of linear models uniformly well, given a shared budget of random contexts independently sampled from a known distribution. The decision maker must query one of the linear models for each incoming context, and receives an observation corrupted by noise whose level is unknown and depends on the model instance. We present Trace-UCB, an adaptive allocation algorithm that learns the noise levels while balancing contexts accordingly across the different linear functions, and derive guarantees for simple regret both in expectation and with high probability. Finally, we extend the algorithm and its guarantees to high-dimensional settings, where the number of linear models times the dimension of the contextual space is higher than the total budget of samples. Simulations with real data suggest that Trace-UCB is remarkably robust, outperforming a number of baselines even when its assumptions are violated.  Full paper:
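As a loose illustration of the allocation idea only — not the authors' exact Trace-UCB rule, which adds a carefully calibrated confidence bonus — the sketch below sends each incoming context to the model whose estimate is currently worst, scoring each model by its estimated noise variance times the trace of its inverse design matrix. All names and constants here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, budget = 3, 2, 300               # context dim, #models, sample budget
betas = rng.normal(size=(m, d))        # hypothetical ground-truth models
sigmas = np.array([0.3, 3.0])          # noise levels, unknown to the learner

data = [([], []) for _ in range(m)]    # (contexts, responses) seen per model

def noise_and_trace(i):
    """Estimate model i's noise variance and the trace of (X^T X)^{-1}."""
    X = np.array(data[i][0]); y = np.array(data[i][1])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    var = float(res[0]) / (len(y) - d) if res.size else 1.0
    return var, np.trace(np.linalg.inv(X.T @ X))

for t in range(budget):
    x = rng.normal(size=d)             # shared random context arrives
    if t < 4 * m * d:                  # round-robin warm-up phase
        i = t % m
    else:
        # Query the model with the largest sigma_hat^2 * tr((X^T X)^{-1}),
        # i.e. the one whose least-squares estimate is least accurate.
        scores = [v * tr for v, tr in map(noise_and_trace, range(m))]
        i = int(np.argmax(scores))
    y_obs = x @ betas[i] + sigmas[i] * rng.normal()
    data[i][0].append(x); data[i][1].append(y_obs)

counts = [len(data[i][1]) for i in range(m)]
```

Under this score, the noisier model (larger sigma) ends up receiving most of the shared budget, which is the qualitative behavior the paper's guarantees formalize.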

Accelerating Markov Chain Monte Carlo with Active Subspaces

Co-authored by Paul G. Constantine (ICME MS '06, PhD '09), Carson Kent (ICME PhD candidate), and Tan Bui-Thanh

The Markov chain Monte Carlo (MCMC) method is the computational workhorse for Bayesian inverse problems. However, MCMC struggles in high-dimensional parameter spaces, since its iterates must sequentially explore the high-dimensional space. This struggle is compounded in physical applications when the nonlinear forward model is computationally expensive. One approach to accelerate MCMC is to reduce the dimension of the state space. Active subspaces are part of an emerging set of tools for subspace-based dimension reduction. An active subspace in a given inverse problem indicates a separation between a low-dimensional subspace that is informed by the data and its orthogonal complement that is constrained by the prior. With this information, one can run the sequential MCMC on the active variables while sampling independently according to the prior on the inactive variables. However, this approach to increase efficiency may introduce bias. We provide a bound on the Hellinger distance between the true posterior and its active subspace-exploiting approximation. And we demonstrate the active subspace-accelerated MCMC on two computational examples: (i) a two-dimensional parameter space with a quadratic forward model and one-dimensional active subspace and (ii) a 100-dimensional parameter space with a PDE-based forward model and a two-dimensional active subspace. Read More:
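The active subspace itself is typically estimated from gradient samples: form the matrix C = E[∇f ∇fᵀ] under the prior, eigendecompose it, and read the subspace split off the spectral gap. The sketch below does this for a toy objective with an analytic gradient — a hypothetical stand-in for the paper's PDE-based forward models, used only to show the computation.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 10
w = rng.normal(size=dim); w /= np.linalg.norm(w)   # hidden informed direction

def grad_f(x):
    # Gradient of f(x) = 0.5 * (w . x)^2 + 0.0005 * |x|^2: nearly all of the
    # variation lies along w, mimicking a data-informed low-dim subspace.
    return (w @ x) * w + 1e-3 * x

# Monte Carlo estimate of C = E[grad f grad f^T] under a standard-normal prior.
samples = rng.normal(size=(2000, dim))
C = sum(np.outer(g, g) for g in map(grad_f, samples)) / len(samples)

eigvals, eigvecs = np.linalg.eigh(C)                # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# A large gap after the first eigenvalue signals a 1-D active subspace:
# run the sequential MCMC on the active coordinate, and sample the
# inactive coordinates independently from the prior.
active = eigvecs[:, :1]
```

Here the leading eigenvector recovers the informed direction `w`, and the gap between the first and second eigenvalues justifies the one-dimensional split.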

A Collection of Results on Saturation Numbers

Co-authored by Arun Jambulapati (ICME PhD candidate) and Ralph Faudree

A graph G is H-saturated if G does not contain H as a subgraph, but the addition of any edge between two nonadjacent vertices of G results in a copy of H in G. The saturation number sat(n, H) is the smallest possible number of edges in an n-vertex H-saturated graph. The values of saturation numbers for small graphs and small n are obtained computationally, and some general results for specific path unions are also obtained.  Full paper:
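The definition can be checked directly by brute force for small graphs, which is the flavor of computation behind such results. The sketch below — illustrative only, not the paper's code — tests K3-saturation (triangle saturation) and confirms the classical fact that the star K_{1,n-1} realizes sat(n, K3) = n - 1.

```python
from itertools import combinations

def has_triangle(edges):
    """True if the edge list contains a triangle (two endpoints of some
    edge sharing a common neighbor)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return any(adj[u] & adj[v] for u, v in edges)

def is_k3_saturated(n, edges):
    """G on vertices 0..n-1 is K3-saturated: triangle-free, yet every
    missing edge would create a triangle when added."""
    E = {frozenset(e) for e in edges}
    if has_triangle([tuple(e) for e in E]):
        return False
    for u, v in combinations(range(n), 2):
        if frozenset((u, v)) not in E:
            if not has_triangle([tuple(e) for e in E | {frozenset((u, v))}]):
                return False
    return True

star = [(0, i) for i in range(1, 5)]            # K_{1,4}: 4 edges on 5 vertices
path = [(0, 1), (1, 2), (2, 3), (3, 4)]         # P5: also triangle-free
```

The star is saturated (every leaf–leaf edge closes a triangle through the center), while the path is not: adding the edge (0, 4) creates no triangle.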

A Real-Space Green's Function Method for the Numerical Solution of Maxwell's Equations

Co-authored by Victor Minden (ICME PhD candidate), Boris Lo, and Phillip Colella

A new method for solving the transverse part of the free-space Maxwell equations in three dimensions is presented. By taking the Helmholtz decomposition of the electric field and current sources and considering only the divergence-free parts, we obtain an explicit real-space representation for the transverse propagator that explicitly respects finite speed of propagation. Because the propagator involves convolution against a singular distribution, we regularize via convolution with smoothing kernels (B-splines) prior to sampling based on a method due to Beyer and LeVeque (1992). We show that the ultimate discrete convolutional propagator can be constructed to attain an arbitrarily high order of accuracy by using higher-order regularizing kernels and finite difference stencils and that it satisfies von Neumann’s stability condition. Furthermore, the propagator is compactly supported and can be applied using Hockney’s method (1970) and parallelized using the same observation as made by Vay, Haber, and Godfrey (2013), leading to a method that is computationally efficient.  Full Paper:
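The Helmholtz split into transverse (divergence-free) and longitudinal parts that the method builds on is easiest to see in Fourier space, where the transverse projector is F_T(k) = F(k) − k (k · F(k)) / |k|². The sketch below applies that projector on a periodic box — a toy stand-in for the paper's real-space propagator construction, with all grid choices hypothetical.

```python
import numpy as np

def divergence_free_part(F, L=2 * np.pi):
    """Project a periodic 3-D vector field F (shape (3, n, n, n)) onto its
    divergence-free (transverse) component via the FFT:
    F_T(k) = F(k) - k (k . F(k)) / |k|^2."""
    n = F.shape[1]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    K = np.stack([kx, ky, kz])
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    Fh = np.fft.fftn(F, axes=(1, 2, 3))
    Fh_T = Fh - K * (np.sum(K * Fh, axis=0) / k2)
    return np.real(np.fft.ifftn(Fh_T, axes=(1, 2, 3)))

# A purely longitudinal field (cos x, 0, 0) should project to ~0, while
# (0, sin x, 0) is already divergence-free and passes through unchanged.
n = 16
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = np.meshgrid(x, x, x, indexing="ij")[0]
F_long = np.zeros((3, n, n, n)); F_long[0] = np.cos(X)
F_trans = np.zeros((3, n, n, n)); F_trans[1] = np.sin(X)
```

The paper's contribution then concerns propagating the transverse part with an explicit real-space kernel rather than working spectrally as this toy does.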

Higher-Order Organization of Complex Networks

Co-authored by Austin R. Benson (ICME PhD candidate), David F. Gleich (ICME MS '06, PhD '09), and Jure Leskovec

Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks—at the level of small network subgraphs—remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.  Full paper:
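For the triangle motif on an undirected graph, the higher-order view can be sketched with a motif adjacency matrix: entry (i, j) of (A @ A) ∘ A counts the triangles that edge (i, j) participates in, and clustering then runs on this reweighted matrix rather than on A. The example below — a hypothetical toy, not the paper's framework — shows how a bridge edge that lies in no triangle vanishes, cleanly separating two triangle clusters.

```python
import numpy as np

def triangle_motif_adjacency(A):
    """Entry (i, j) of (A @ A) * A counts the triangles containing edge
    (i, j); non-edges and triangle-free edges get weight 0."""
    return (A @ A) * A

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2, 3).
A = np.zeros((6, 6), dtype=int)
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1

W = triangle_motif_adjacency(A)
```

In `W` the bridge has weight 0 while every triangle edge has weight 1, so spectral clustering on `W` (as in the paper's framework, with optimality guarantees) recovers the two triangles as separate clusters.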

A Technique for Updating Hierarchical Skeletonization-Based Factorizations of Integral Operators

Co-authored by Victor Minden (ICME PhD candidate), Anil Damle (ICME PhD '16), Kenneth L. Ho, and Lexing Ying

We present a method for updating certain hierarchical factorizations for solving linear integral equations with elliptic kernels. In particular, given a factorization corresponding to some initial geometry or material parameters, we can locally perturb the geometry or coefficients and update the initial factorization to reflect this change with asymptotic complexity that is poly-logarithmic in the total number of unknowns and linear in the number of perturbed unknowns. We apply our method to the recursive skeletonization factorization and hierarchical interpolative factorization and demonstrate scaling results for a number of different two-dimensional (2D) problem setups.  Read More:

FusionNet: 3D Object Classification Using Multiple Data Representations

Co-authored by Vishakh Hegde (ICME MS student) and Reza Zadeh (ICME PhD '14)

High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations, to achieve leading results on the Princeton ModelNet challenge. The two representations:

  • Volumetric representation: the 3D object is discretized spatially as binary voxels - 1 if the voxel is occupied and 0 otherwise.

  • Pixel representation: the 3D object is represented as a set of projected 2D pixel images.
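The volumetric representation above amounts to a simple occupancy-grid conversion; the sketch below voxelizes a point cloud into a binary grid. This is an illustrative stand-in — FusionNet's actual preprocessing and resolution choices may differ.

```python
import numpy as np

def voxelize(points, res=32):
    """Map an (N, 3) point cloud into a binary res x res x res occupancy
    grid: a voxel is 1 if any point falls inside it, 0 otherwise."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)     # guard against flat axes
    idx = ((points - lo) / span * (res - 1)).astype(int)
    grid = np.zeros((res, res, res), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

pts = np.random.default_rng(2).uniform(size=(500, 3))
grid = voxelize(pts, res=8)
```

The pixel representation, by contrast, would render the same object from several viewpoints into a stack of 2D images.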

Current leading submissions to the ModelNet Challenge use Convolutional Neural Networks (CNNs) on pixel representations. However, we diverge from this trend and additionally use Volumetric CNNs to bridge the gap in efficiency between the above two representations. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either of the representations in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures.  Full paper: