A sample of recent publications from the ICME community is listed below
Active Learning for Accurate Estimation of Linear Models
Coauthored by Carlos Riquelme (ICME PhD candidate), Mohammad Ghavamzadeh, and Alessandro Lazaric
We explore the sequential decision-making problem where the goal is to estimate a number of linear models uniformly well, given a shared budget of random contexts independently sampled from a known distribution. The decision maker must query one of the linear models for each incoming context, and receives an observation corrupted by noise whose level is unknown and depends on the model instance. We present Trace-UCB, an adaptive allocation algorithm that learns the noise levels while balancing contexts accordingly across the different linear functions, and derive guarantees for simple regret in both expectation and high probability. Finally, we extend the algorithm and its guarantees to high-dimensional settings, where the number of linear models times the dimension of the contextual space is higher than the total budget of samples. Simulations with real data suggest that Trace-UCB is remarkably robust, outperforming a number of baselines even when its assumptions are violated. Full paper: https://arxiv.org/abs/1703.00579
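The allocation idea can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's Trace-UCB (no confidence bonus, plain least-squares noise estimates, and all names are invented): each incoming context is assigned to the model whose estimated noise level times covariance trace is currently largest, so noisier models naturally receive more samples.

```python
import numpy as np

def trace_ucb_sketch(K=3, d=4, budget=300, noise=(0.1, 0.5, 1.0), seed=0):
    """Toy adaptive allocation in the spirit of Trace-UCB: send each
    context to the model whose noise-weighted covariance trace is largest."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(K, d))           # unknown true parameters
    X = [[] for _ in range(K)]                 # contexts assigned to each model
    y = [[] for _ in range(K)]
    counts = np.zeros(K, dtype=int)
    for _ in range(budget):
        x = rng.normal(size=d)                 # fresh context, known distribution
        scores = np.empty(K)
        for k in range(K):
            if counts[k] <= d:                 # warm-up: force d + 1 samples each
                scores[k] = np.inf
                continue
            Xk, yk = np.asarray(X[k]), np.asarray(y[k])
            G = Xk.T @ Xk
            theta_hat = np.linalg.solve(G, Xk.T @ yk)
            resid = yk - Xk @ theta_hat
            sigma2_hat = resid @ resid / (counts[k] - d)   # noise estimate
            # uncertainty score: sigma^2 * trace of the covariance G^{-1}
            scores[k] = sigma2_hat * np.trace(np.linalg.inv(G))
        k = int(np.argmax(scores))
        X[k].append(x)
        y[k].append(thetas[k] @ x + rng.normal(scale=noise[k]))
        counts[k] += 1
    return counts

counts = trace_ucb_sketch()
```

With this scoring rule the budget skews toward the high-noise models, which is the qualitative behavior the paper's guarantees formalize.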
Accelerating Markov Chain Monte Carlo with Active Subspaces
Coauthored by Paul G. Constantine (ICME MS '06, PhD '09), Carson Kent (ICME PhD candidate), and Tan Bui-Thanh
The Markov chain Monte Carlo (MCMC) method is the computational workhorse for Bayesian inverse problems. However, MCMC struggles in high-dimensional parameter spaces, since its iterates must sequentially explore the high-dimensional space. This struggle is compounded in physical applications when the nonlinear forward model is computationally expensive. One approach to accelerating MCMC is to reduce the dimension of the state space. Active subspaces are part of an emerging set of tools for subspace-based dimension reduction. An active subspace in a given inverse problem indicates a separation between a low-dimensional subspace that is informed by the data and its orthogonal complement that is constrained by the prior. With this information, one can run the sequential MCMC on the active variables while sampling independently according to the prior on the inactive variables. However, this approach to increasing efficiency may introduce bias. We provide a bound on the Hellinger distance between the true posterior and its active subspace-exploiting approximation, and we demonstrate the active subspace-accelerated MCMC on two computational examples: (i) a two-dimensional parameter space with a quadratic forward model and a one-dimensional active subspace, and (ii) a 100-dimensional parameter space with a PDE-based forward model and a two-dimensional active subspace. Read More: http://epubs.siam.org/doi/10.1137/15M1042127
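The standard recipe for estimating an active subspace is an eigendecomposition of the average outer product of gradients; the subspace informed by the data corresponds to the dominant eigenvectors. A generic sketch of that recipe (all names hypothetical, not code from the paper):

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Estimate a k-dimensional active subspace from gradient samples.

    grad_samples: (M, m) array of gradients of the data-misfit (or
    log-likelihood) at M points drawn from the prior.  The active
    subspace is spanned by the top-k eigenvectors of C = E[g g^T].
    """
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)       # ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Toy check: if f(x) = g(w . x), every gradient is parallel to w,
# so the estimated one-dimensional active subspace recovers w.
rng = np.random.default_rng(1)
w = np.zeros(10)
w[0] = 1.0
grads = np.outer(rng.normal(size=200), w)
evals, W = active_subspace(grads, 1)
```

A large gap between consecutive eigenvalues signals a good split between active and inactive variables; MCMC then only needs to explore the k active coordinates.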
A Collection of Results on Saturation Numbers
Coauthored by Arun Jambulapati (ICME PhD candidate) and Ralph Faudree
A graph G is H-saturated if G does not contain H as a subgraph, but the addition of any edge between two nonadjacent vertices of G results in a copy of H in G. The saturation number sat(n, H) is the smallest possible number of edges in an n-vertex H-saturated graph. The values of saturation numbers for small graphs H and small n are obtained computationally, and some general results for certain unions of paths are also obtained. Full paper: https://www.researchgate.net/publication/271271971_
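For very small graphs, the H-saturation property can be checked directly by brute force. A stdlib-only sketch (function names hypothetical; this is the definition translated to code, not the paper's computational method):

```python
from itertools import combinations, permutations

def contains_subgraph(g_edges, n_g, h_edges, n_h):
    """Brute-force check that H embeds in G as a (not necessarily
    induced) subgraph.  Only viable for very small graphs."""
    g = {frozenset(e) for e in g_edges}
    for nodes in combinations(range(n_g), n_h):
        for phi in permutations(nodes):
            if all(frozenset((phi[u], phi[v])) in g for u, v in h_edges):
                return True
    return False

def is_h_saturated(g_edges, n_g, h_edges, n_h):
    """G is H-saturated: H-free, yet adding any missing edge creates H."""
    if contains_subgraph(g_edges, n_g, h_edges, n_h):
        return False
    g = {frozenset(e) for e in g_edges}
    for u, v in combinations(range(n_g), 2):
        if frozenset((u, v)) in g:
            continue
        if not contains_subgraph(list(g) + [(u, v)], n_g, h_edges, n_h):
            return False
    return True

# The star K_{1,4} is triangle-free, and joining any two leaves
# creates a triangle through the center, so it is K_3-saturated.
star = [(0, i) for i in range(1, 5)]
triangle = [(0, 1), (1, 2), (0, 2)]
```

This is consistent with the classical value sat(n, K_3) = n - 1, attained by the star.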
A Real-Space Green's Function Method for the Numerical Solution of Maxwell's Equations
Coauthored by Victor Minden (ICME PhD candidate), Boris Lo, and Phillip Colella
A new method for solving the transverse part of the free-space Maxwell equations in three dimensions is presented. By taking the Helmholtz decomposition of the electric field and current sources and considering only the divergence-free parts, we obtain an explicit real-space representation for the transverse propagator that respects the finite speed of propagation. Because the propagator involves convolution against a singular distribution, we regularize via convolution with smoothing kernels (B-splines) prior to sampling, based on a method due to Beyer and LeVeque (1992). We show that the resulting discrete convolutional propagator can be constructed to attain an arbitrarily high order of accuracy by using higher-order regularizing kernels and finite-difference stencils, and that it satisfies von Neumann's stability condition. Furthermore, the propagator is compactly supported and can be applied using Hockney's method (1970) and parallelized using the same observation made by Vay, Haber, and Godfrey (2013), leading to a computationally efficient method. Full paper: http://msp.org/camcos/2016/112/camcosv11n2p01s.pdf
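The regularize-then-sample idea can be illustrated generically: build a smoothing kernel by repeated box convolutions (a discrete analogue of a B-spline) and apply the resulting compactly supported kernel via zero-padded FFTs, in the spirit of Hockney-style convolution. A minimal sketch under those assumptions, not the paper's actual propagator:

```python
import numpy as np

def discrete_bspline(order, width):
    """Discrete B-spline-like kernel: a box filter convolved with
    itself (order - 1) times.  Higher order gives a smoother kernel."""
    kernel = box = np.ones(width) / width
    for _ in range(order - 1):
        kernel = np.convolve(kernel, box)
    return kernel

def fft_apply(samples, kernel):
    """Apply a compactly supported convolution kernel via the FFT,
    zero-padded to the full linear-convolution length."""
    n = len(samples) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(samples, n) * np.fft.rfft(kernel, n), n)

# Regularize a sampled point source (a discrete delta) before applying
# any further convolutional propagation step.
delta = np.zeros(32)
delta[16] = 1.0
kernel = discrete_bspline(3, 5)
smooth = fft_apply(delta, kernel)
```

Convolution with the kernel preserves the total charge (the kernel sums to one) while removing the singular spike, which is the role the B-spline regularization plays before sampling.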
Higher-Order Organization of Complex Networks
Coauthored by Austin R. Benson (ICME PhD candidate), David F. Gleich (ICME MS '06, PhD '09), and Jure Leskovec
Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, the higher-order organization of complex networks, at the level of small network subgraphs, remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns. Full paper: http://science.sciencemag.org/content/353/6295/163.full
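For the triangle motif on an undirected graph, the motif adjacency matrix that underlies this kind of clustering has a simple closed form; a minimal sketch (the paper's framework handles general motifs and comes with conductance guarantees, none of which is reproduced here):

```python
import numpy as np

def triangle_motif_adjacency(A):
    """Motif adjacency for the triangle motif on an undirected graph:
    W[i, j] = number of triangles containing the edge (i, j)."""
    return (A @ A) * A   # common-neighbor counts, masked to existing edges

# Two triangles {0,1,2} and {3,4,5} joined by a bridge edge (2, 3).
A = np.zeros((6, 6), dtype=int)
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
W = triangle_motif_adjacency(A)
```

The bridge edge (2, 3) participates in no triangle, so it vanishes from W; clustering W (e.g., spectrally) therefore separates the two triangles cleanly, which is the sense in which motif-based clustering exposes higher-order structure that edge-based clustering misses.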
A Technique for Updating Hierarchical Skeletonization-Based Factorizations of Integral Operators
Coauthored by Victor Minden (ICME PhD candidate), Anil Damle (ICME PhD '16), Kenneth L. Ho, and Lexing Ying
We present a method for updating certain hierarchical factorizations for solving linear integral equations with elliptic kernels. In particular, given a factorization corresponding to some initial geometry or material parameters, we can locally perturb the geometry or coefficients and update the initial factorization to reflect this change with asymptotic complexity that is polylogarithmic in the total number of unknowns and linear in the number of perturbed unknowns. We apply our method to the recursive skeletonization factorization and hierarchical interpolative factorization and demonstrate scaling results for a number of different two-dimensional (2D) problem setups. Read More: http://epubs.siam.org/doi/10.1137/15M1024500
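The paper's update is specific to skeletonization-based factorizations, but the reason local perturbations admit cheap updates can be illustrated generically with the Sherman-Morrison-Woodbury identity: a perturbation confined to r unknowns is a rank-r change, so an existing solver for the unperturbed operator can be reused at the cost of O(r) extra solves. A hedged sketch (names hypothetical, dense toy matrices standing in for the factorized operator):

```python
import numpy as np

def woodbury_solve(solve_a, U, V, b):
    """Solve (A + U V^T) x = b reusing a fast solver for A alone.

    If a local perturbation touches only r unknowns, U and V have r
    columns and this costs O(r) solves with A rather than a full
    refactorization of the perturbed system.
    """
    Ab = solve_a(b)
    AU = solve_a(U)
    cap = np.eye(U.shape[1]) + V.T @ AU     # small r-by-r capacitance matrix
    return Ab - AU @ np.linalg.solve(cap, V.T @ Ab)

# Toy check against a dense solve of the perturbed system.
rng = np.random.default_rng(2)
A = 5.0 * np.eye(8) + 0.1 * rng.normal(size=(8, 8))
U = 0.1 * rng.normal(size=(8, 2))
V = 0.1 * rng.normal(size=(8, 2))
b = rng.normal(size=8)
x = woodbury_solve(lambda z: np.linalg.solve(A, z), U, V, b)
```

The hierarchical setting is more delicate (the perturbation propagates through the skeleton hierarchy), but the same principle, locality implies low rank implies cheap updates, drives the complexity bounds quoted above.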
FusionNet: 3D Object Classification Using Multiple Data Representations
Coauthored by Vishakh Hegde (ICME MS student) and Reza Zadeh (ICME PhD '14)
High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations, achieving leading results on the Princeton ModelNet challenge. The two representations are:

Volumetric representation: the 3D object is discretized spatially as binary voxels, with value 1 if the voxel is occupied and 0 otherwise.

Pixel representation: the 3D object is represented as a set of projected 2D pixel images.
Current leading submissions to the ModelNet challenge use Convolutional Neural Networks (CNNs) on pixel representations. However, we diverge from this trend and additionally use Volumetric CNNs to bridge the gap in efficiency between the two representations above. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either representation in isolation. To do this, we introduce new Volumetric CNN (VCNN) architectures. Full paper: https://arxiv.org/pdf/1607.05695v4.pdf
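The two representations can be sketched with hypothetical helpers; the actual FusionNet pipeline voxelizes ModelNet meshes and renders many 2D views before feeding CNNs, so this is only a minimal illustration of the data formats involved:

```python
import numpy as np

def voxelize(points, res=32):
    """Binary voxel grid for a point cloud scaled into the unit cube:
    a cell is 1 if at least one point falls inside it, else 0."""
    p = (points - points.min(0)) / np.ptp(points, 0)
    idx = np.minimum((p * res).astype(int), res - 1)
    grid = np.zeros((res, res, res), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

def project(grid, axis=2):
    """2D pixel image: occupancy of the voxel grid projected along one axis."""
    return grid.max(axis=axis)

rng = np.random.default_rng(3)
cloud = rng.random((500, 3))          # stand-in for points sampled from a 3D mesh
grid = voxelize(cloud, res=16)
image = project(grid)
```

A Volumetric CNN consumes the 3D `grid` directly, while a pixel-based CNN consumes rendered views like `image`; FusionNet's contribution is combining features learned from both.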