Plenary Lectures

Tuesday, July 14th: Laurent El Ghaoui, University of California Berkeley
Bio: Laurent El Ghaoui graduated from Ecole Polytechnique (Palaiseau, France) in 1985, and obtained his PhD in Aeronautics and Astronautics at Stanford University in 1990. He taught at several institutions in France, including Ecole Polytechnique, before joining the EECS department at UC Berkeley in 1999. His research interests include robust optimization and large-scale machine learning, with a focus on text analytics.
Optimization in the Age of Big Data: Sparsity and Robustness at Scale
The term “big data” is too often associated with the sole task of applying machine learning analytics to large data sets. Yet optimization has been concerned with large data sets for a long time already, not just as a purveyor of algorithms for analytics but also of models for decision-making. What is changing in the interface between learning and decision-making? What is the impact of big data on optimization? I will present various approaches and perspectives stemming from the application of optimization models in a big data context. The talk will focus on sparsity and robustness, both in statistical learning and decision-making problems. Some case studies involving online retail, finance and energy resource management will be presented. The emerging picture is that of an ever closer integration between the two fields, at both practical and fundamental levels.

Monday, July 13th: Jim Geelen, University of Waterloo, Canada
Bio: Jim Geelen completed his PhD at the University of Waterloo. After three short postdoctoral fellowships at CWI (Amsterdam), RIMS (Kyoto), and ZPR (Cologne), he returned to the University of Waterloo. For the past 15 years he, together with Bert Gerards and Geoff Whittle, has been working on extending the graph minors project to binary matroids.
Matroid minors project
Over the past 15 years I have been working with Bert Gerards and Geoff Whittle on extending the Graph Minors Project, of Paul Seymour and Neil Robertson, to minor-closed classes of representable matroids. This talk is intended to be a gentle overview of our project, covering the main results and some applications in coding theory. No prior exposure to matroid theory is assumed.

Thursday, July 16th: Daniel Kuhn, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Bio: Daniel Kuhn holds the Chair of Risk Analytics and Optimization at EPFL. Before joining EPFL, he was a faculty member at Imperial College London (2007–2013) and a postdoctoral researcher at Stanford University (2005–2006). He received a PhD in Economics from the University of St. Gallen in 2004 and an MSc in Theoretical Physics from ETH Zurich in 1999. His research interests revolve around robust optimization and stochastic programming.
A Distributionally Robust Perspective on Uncertainty Quantification and Chance Constrained Programming
The objective of uncertainty quantification is to certify that a given physical, engineering or economic system satisfies multiple safety conditions with high probability. A more ambitious goal is to actively influence the system so as to guarantee and maintain its safety, a scenario which can be modeled through a chance constrained program. In this talk we assume that the parameters of the system are governed by an ambiguous distribution that is only known to belong to an ambiguity set characterized through generalized moment bounds and structural properties such as symmetry, unimodality or independence patterns. We delineate the watershed between tractability and intractability in ambiguity-averse uncertainty quantification and chance constrained programming. Using tools from distributionally robust optimization, we derive explicit conic reformulations for tractable problem classes and suggest efficiently computable conservative approximations for intractable ones.
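As background for the setting described above, the generic shape of an ambiguity-averse chance constraint can be written as follows; the notation is a standard one chosen here for illustration, not necessarily the lecture's:

```latex
% Ambiguity-averse chance constraint: the safety conditions S(x) must
% hold with probability at least 1 - \epsilon under EVERY distribution
% in the ambiguity set \mathcal{P}, which encodes moment bounds and
% structural properties (symmetry, unimodality, independence).
\inf_{\mathbb{P}\in\mathcal{P}}
  \mathbb{P}\bigl[\xi \in S(x)\bigr] \;\ge\; 1-\epsilon
```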

Friday, July 17th: Daniel A. Spielman, Yale University
Bio: Daniel Alan Spielman received a B.A. in Mathematics and Computer Science from Yale in 1992 and a Ph.D. in Applied Mathematics from M.I.T. in 1995. He spent a year as an NSF Mathematical Sciences Postdoc in the Computer Science Department at U.C. Berkeley, and then taught in the Applied Mathematics Department at M.I.T. until 2005. Since 2006, he has been a Professor of Computer Science and Mathematics at Yale University. He has received many awards, including the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 Gödel Prize, the 2009 Fulkerson Prize, the 2010 Nevanlinna Prize, the 2014 Pólya Prize, an inaugural Simons Investigator Award, and a MacArthur Fellowship. His main research interests include the design and analysis of algorithms, network science, machine learning, digital communications and scientific computing.
Laplacian Matrices of Graphs: Algorithms and Applications
The problem of solving systems of linear equations in the Laplacian matrices of graphs arises in many fields including Optimization, Machine Learning, Computer Vision, and of course Computational Science. We will explain what these matrices are and why they arise in so many applications. We then will survey recently developed algorithms that allow us to solve such systems of linear equations in nearly-linear time. The main tools used in these algorithms are sparse approximations of graphs and approximations of graphs by low-stretch spanning trees. The ability to quickly solve such systems of equations has led to the asymptotically fastest algorithms for computing maximum flows and minimum cost flows. The techniques used in the Laplacian equation solvers have been used to design the fastest algorithms for approximating maximum flows. We will provide an introduction to these recent developments.
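As a toy illustration of the objects in this abstract (a dense pseudo-inverse stand-in, not the nearly-linear-time solvers the talk surveys; the path graph and right-hand side are hypothetical):

```python
import numpy as np

# Build the Laplacian L = D - A of a small path graph and solve L x = b.
edges = [(0, 1), (1, 2), (2, 3)]          # path on 4 vertices
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

# L is singular (the all-ones vector spans its null space), so we solve
# in the subspace orthogonal to ones; b must therefore sum to zero.
b = np.array([1.0, 0.0, 0.0, -1.0])
assert abs(b.sum()) < 1e-12
x = np.linalg.pinv(L) @ b                  # dense stand-in for a fast solver

residual = np.linalg.norm(L @ x - b)
```

The solution x can be read as electrical potentials on the vertices; potential differences along edges give currents, which is one way these solvers enter the flow algorithms mentioned above.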

Wednesday, July 15th: Stephen J. Wright, University of Wisconsin–Madison
Bio: Stephen Wright received a B.Sc. (Hons) degree in 1981 and a Ph.D. in 1984 from the University of Queensland. He has held appointments at the University of Arizona, North Carolina State University, Argonne National Laboratory (during the 1990s), and the University of Chicago. Since 2001 he has been at the University of Wisconsin–Madison. His research is in continuous optimization and its applications to all areas of science and engineering.
Coordinate Descent Algorithms
Coordinate descent algorithms solve optimization problems by successively searching along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This talk will describe the fundamentals of the coordinate descent approach, together with its variants and extensions. Convergence properties will be described, mostly with reference to convex objectives. We pay particular attention to a certain problem structure that arises commonly in machine learning applications, showing that efficient implementations of accelerated coordinate descent algorithms are possible for such structures. We also describe parallel variants and discuss their convergence properties under several models of parallel execution.
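A minimal sketch of the basic method on a strongly convex quadratic (a toy instance of my own choosing; the applications mentioned above exploit problem structure so each coordinate step costs far less than a full gradient):

```python
import numpy as np

# Cyclic coordinate descent on f(x) = 0.5 x^T A x - b^T x with A
# symmetric positive definite: exact minimization along coordinate i
# has a closed form, which is what makes coordinate steps cheap.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
for sweep in range(200):                # cycle through the coordinates
    for i in range(3):
        # minimize f over x[i] with the other coordinates held fixed
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

x_star = np.linalg.solve(A, b)          # reference solution
err = np.linalg.norm(x - x_star)
```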
Semi-Plenary Lectures

Tuesday, July 14th: Samuel A. Burer, University of Iowa
Bio: Sam Burer is Professor and Tippie Research Fellow in the Department of Management Sciences at the University of Iowa. He received his Ph.D. from the Georgia Institute of Technology, and his research and teaching interests include convex optimization, mixed-integer nonlinear programming, operations research, and management sciences. His research has been supported by grants from the National Science Foundation, and he serves on the editorial boards of Operations Research, SIAM Journal on Optimization, Mathematics of Operations Research, and Optima. He also serves as a Council Member of the Mathematical Optimization Society, and as a Member of the Board of Directors of the INFORMS Computing Society.
A Gentle, Geometric Introduction to Copositive Optimization
This talk illustrates the fundamental connection between nonconvex quadratic optimization and copositive optimization—a connection that allows the reformulation of nonconvex quadratic problems as convex ones in a unified way. We focus on examples having just a few variables or a few constraints for which the quadratic problem can be formulated as a copositive-style problem, which itself can be recast in terms of linear, second-order-cone, and semidefinite optimization. A particular highlight is the role played by the geometry of the feasible set.
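For concreteness, one classical instance of this connection (a textbook example, not necessarily one used in the talk) is the standard quadratic program over the simplex, whose copositive-style convex reformulation reads:

```latex
% Nonconvex standard quadratic program over the simplex and its exact
% convex reformulation over the completely positive cone:
\min\{\, x^{\mathsf T} Q x \;:\; e^{\mathsf T} x = 1,\ x \ge 0 \,\}
\;=\;
\min\{\, \langle Q, X\rangle \;:\; \langle E, X\rangle = 1,\ X \in \mathcal{C}^{*} \,\},
% where e is the all-ones vector, E = e e^T, and \mathcal{C}^{*} is the
% cone of completely positive matrices (the dual of the copositive cone).
```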

Monday, July 13th: Roberto Cominetti, Universidad de Chile, Chile
Bio: Roberto Cominetti graduated as Mathematical Engineer from Universidad de Chile in 1986 and received a PhD in Applied Mathematics from Université Blaise Pascal (Clermont II) in 1989. He has developed his career at the University of Chile, first at the Department of Mathematical Engineering and more recently at the Department of Industrial Engineering. His main research interests are in convex optimization and algorithmic game theory as well as their applications to equilibrium and dynamics in transportation networks.
Equilibrium routing under uncertainty
In this talk we review several alternative models that have been used to describe traffic in congested networks, both in urban transport and telecommunications. We focus on situations where travel times are subject to random fluctuations and how this variability affects the traffic flows. We consider both atomic and nonatomic equilibrium models, and we discuss a class of adaptive dynamics that describe the behavior of agents and which provides a plausible microfoundation for the emergence of equilibrium. We also discuss some recent ideas on how risk aversion to random travel times might be incorporated in the models. In our presentation we use convex optimization to provide a unifying framework for the different concepts of equilibrium.

Wednesday, July 15th: Michele Conforti, University of Padova, Italy
Bio: Michele Conforti received his BS from University of Bologna and a Ph.D. from Carnegie Mellon University. He is currently professor in the Mathematics Department, University of Padova. His interests are mainly in Combinatorial Optimization and Graph Theory. He is co-recipient of the Fulkerson Prize. In past years he has worked in Integer Programming and recently co-authored a book on the subject.
A geometric approach to cut-generating functions
The cutting-plane approach to integer programming was initiated more than 40 years ago: Gomory introduced the corner polyhedron as a relaxation of a mixed-integer set in tableau form, and Balas introduced intersection cuts for the corner polyhedron. This line of research was left dormant for several decades until relatively recently, when a paper of Andersen, Louveaux, Weismantel and Wolsey generated renewed interest in the corner polyhedron and intersection cuts. Recent developments rely on tools drawn from convex analysis, geometry and number theory, and constitute an elegant bridge between these areas and integer programming. We survey these results and highlight recent breakthroughs in this area.
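As background (a classical formula included here for orientation, not part of the new results the talk surveys), Gomory's fractional cut is derived from a simplex tableau row of a pure integer program with fractional right-hand side:

```latex
% Tableau row: basic variable x_B plus nonbasic variables x_j, j \in N:
x_B + \sum_{j\in N} \bar a_j\, x_j = \bar b,
\qquad \bar b \notin \mathbb{Z}.
% Every nonnegative integer solution then satisfies the valid inequality
\sum_{j\in N} \{\bar a_j\}\, x_j \;\ge\; \{\bar b\},
% where \{t\} = t - \lfloor t\rfloor denotes the fractional part.
```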

Wednesday, July 15th: Tamara G. Kolda, Sandia National Laboratories
Bio: Tamara Kolda is a Distinguished Member of the Technical Staff at Sandia National Laboratories in Livermore, California. Her research interests include multilinear algebra and tensor decompositions, graph models and algorithms, data mining, optimization, nonlinear solvers, parallel computing and the design of scientific software. She received her Ph.D. from the University of Maryland in 1997 and was the Oak Ridge National Lab Householder Postdoc in Scientific Computing from 1997 to 1999.
Optimization Challenges in Tensor Factorization
Tensors are multiway arrays, and tensor decomposition is a powerful tool for compression and data interpretation. In this talk, we demonstrate the utility of tensor decomposition with several examples and explain the optimization challenges, both theoretical and practical. The optimization problems are nonconvex, but they can typically be solved via an alternating approach that yields convex subproblems. We consider open problems such as determining the model complexity, tensor completion, incorporating symmetries and other constraints, handling ambiguities in scaling and permutation, enforcing structure like sparsity, and considering alternative objective functions.
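To illustrate the alternating approach described above in its simplest form (a rank-1 sketch of my own, with synthetic data; the full CP decomposition alternates least-squares solves over factor matrices in the same spirit):

```python
import numpy as np

# Rank-1 CP approximation T ≈ lam * a ∘ b ∘ c of a 3-way tensor via
# alternating updates (the higher-order power method): fixing two
# factors makes the subproblem in the third factor a closed-form update.
rng = np.random.default_rng(0)
a0 = rng.standard_normal(4)
b0 = rng.standard_normal(5)
c0 = rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a0, b0, c0)   # an exactly rank-1 tensor

a = rng.standard_normal(4); a /= np.linalg.norm(a)
b = rng.standard_normal(5); b /= np.linalg.norm(b)
c = rng.standard_normal(6); c /= np.linalg.norm(c)
for it in range(10):                       # alternate over the factors
    a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
    b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
    c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
lam = np.einsum('ijk,i,j,k->', T, a, b, c) # scale, absorbs sign ambiguity

fit_err = np.linalg.norm(T - lam * np.einsum('i,j,k->ijk', a, b, c))
```

Note how the sign/permutation ambiguities mentioned in the abstract show up even here: the recovered a, b, c may flip sign, with the product restored through lam.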

Thursday, July 16th: Andrea Lodi, University of Bologna, Italy
Bio: Andrea Lodi received the PhD in System Engineering from the University of Bologna in 2000 and was a Herman Goldstine Fellow at the IBM Mathematical Sciences Department, NY, in 2005–2006. He has been full professor of Operations Research at DEI, University of Bologna since 2007. His main research interests are in Mixed-Integer Linear and Nonlinear Programming. He is the author of more than 70 publications in the top journals of the field of Mathematical Programming. He serves as Associate Editor for several prestigious journals in the area, is currently network coordinator of two large EU projects, and, since 2006, a consultant of the IBM CPLEX research and development team.
On Mathematical Programming with Indicator Constraints
In this talk we review the relevant literature on mathematical optimization with logical implications, i.e., where constraints can be either active or disabled depending on logical conditions. In the case of convex functions, the theory of disjunctive programming allows one to formulate these logical implications as convex nonlinear programming problems in a space of variables lifted with respect to the original dimension. We concentrate on avoiding the issue of dealing with large NLPs. In particular, we review some existing results that allow one to work in the original space of variables for two relevant special cases where the disjunctions corresponding to the logical implications have two terms. Then, we significantly extend these special cases in two different directions, one involving more general convex sets and the other involving disjunctions with three terms. Computational experiments comparing disjunctive programming formulations in the original space of variables with straightforward big-M ones show that the former are computationally viable and promising. (Joint work with Pierre Bonami, Andrea Tramontani and Sven Wiese.)
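To fix ideas, the generic shape of an indicator constraint and its big-M counterpart is the following (schematic, with z binary and M an assumed valid upper bound on g over the feasible region):

```latex
% Indicator (logical) form: the constraint is active only when z = 1.
z = 1 \;\Rightarrow\; g(x) \le 0,
\qquad z \in \{0,1\}.
% Big-M form: the same implication modeled with a single inequality,
% valid whenever g(x) \le M holds throughout the feasible region.
g(x) \le M\,(1 - z),
\qquad z \in \{0,1\}.
```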

Tuesday, July 14th: Asu Özdaglar, Massachusetts Institute of Technology
Bio: Asu Özdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and the Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively.
She is currently a professor in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology. She is the director of the Laboratory for Information and Decision Systems. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis, game theory, with applications in communication, social, and economic networks, distributed optimization and control, and network analysis with special emphasis on contagious processes, systemic risk and dynamic control.
Professor Özdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching award, the NSF Career award, the 2008 Donald P. Eckman award of the American Automatic Control Council, the Class of 1943 Career Development Chair, the inaugural Steven and Renee Innovation Fellowship, and the 2014 Spira teaching award. She served on the Board of Governors of the Control System Society in 2010 and was an associate editor for IEEE Transactions on Automatic Control. She is currently the area co-editor for a new area of the journal Operations Research, entitled “Games, Information and Networks”, and the chair of the Control System Society Technical Committee “Networks and Communication Systems”. She is the co-author of the book “Convex Analysis and Optimization” (Athena Scientific, 2003).
Fast Distributed Algorithms for Multiagent Optimization
Motivated by today’s data processing needs over large networks with local collection and processing of information, we consider a multi-agent optimization problem where a network of agents collectively solves a global optimization problem with the objective function given by the sum of locally known convex functions. We present new distributed algorithms drawing on two different approaches. The first is based on the Alternating Direction Method of Multipliers (ADMM), which is a classical method for sequentially decomposing optimization problems with coupled constraints. We show that the convergence rate of distributed ADMM-based algorithms is O(1/k) (where k is the iteration number), which is faster than the O(1/sqrt(k)) rate of subgradient-based methods, and highlight the dependence on the network structure. The second approach develops an incremental Newton (IN) method, which accesses problem data sequentially. Under strong convexity of the local objective functions, a gradient growth condition, and proper step-size rules, we show that the convergence rate of the IN method is linear.
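A minimal sketch of the first ingredient, consensus ADMM for a sum of locally known convex functions (my own scalar toy instance with quadratic local objectives, not the networked algorithms of the talk):

```python
import numpy as np

# min_x sum_i f_i(x) with local objectives f_i(x) = 0.5 * (x - a_i)^2,
# so the optimum is simply the mean of the a_i. Each "agent" i keeps a
# local copy x_i and a scaled dual u_i; z is the consensus variable.
a = np.array([1.0, 4.0, 7.0, 10.0])    # one local data point per agent
rho = 1.0                              # ADMM penalty parameter
x = np.zeros_like(a)                   # local copies
u = np.zeros_like(a)                   # scaled dual variables
z = 0.0                                # consensus variable

for k in range(100):
    # local x-updates (closed form for quadratic f_i), parallelizable
    x = (a + rho * (z - u)) / (1.0 + rho)
    # consensus (averaging) step
    z = np.mean(x + u)
    # dual update
    u = u + x - z

gap = abs(z - a.mean())                # distance to the true optimum
```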

Thursday, July 16th: Werner Römisch, Humboldt Universität Berlin, Germany
Bio: Werner Römisch received a mathematics diploma in 1971 and a PhD in 1976, both from Humboldt University Berlin. He continued at Humboldt University Berlin, receiving his habilitation there in 1985 and a full professorship in 1993. His research is mainly in stochastic optimization, with side interests in stochastic equations and risk, and applications in energy and revenue management.
Quasi-Monte Carlo methods for linear two-stage stochastic programming problems
Quasi-Monte Carlo algorithms are studied for generating scenarios to solve two-stage linear stochastic programming problems. Their integrands are piecewise linear-quadratic, but do not belong to the function spaces considered for QMC error analysis. We show that under a weak geometric condition on the two-stage model, all terms of their ANOVA decomposition, except the one of highest order, are continuously differentiable, and second-order mixed derivatives exist almost everywhere and belong to $L_{2}$. This implies that randomly shifted lattice rules may achieve the optimal rate of convergence $O(n^{-1+\delta})$ with $\delta\in(0,\frac{1}{2})$ and a constant not depending on the dimension if the effective superposition dimension is at most two. The geometric condition is shown to be satisfied for almost all covariance matrices if the underlying probability distribution is normal. We discuss effective dimensions and techniques for dimension reduction. Numerical experiments for a production planning model with normal inputs show that convergence rates close to the optimal rate are indeed achieved when using randomly shifted lattice rules or scrambled Sobol’ point sets, accompanied by principal component analysis for dimension reduction.
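To illustrate the quadrature ingredient, a randomly shifted rank-1 lattice rule in two dimensions (a toy sketch with a smooth test integrand of my own, not the two-stage integrands analyzed in the talk):

```python
import numpy as np

# Rank-1 lattice rule with the classical Fibonacci generating vector
# (1, 89) and n = 144 = F_12 points; a uniform random shift makes the
# estimator unbiased and allows error estimation over repeated shifts.
rng = np.random.default_rng(1)
n, z = 144, np.array([1, 89])
f = lambda u: u[:, 0] * u[:, 1]          # test integrand; exact integral 1/4

estimates = []
for shift_idx in range(8):               # a few independent random shifts
    delta = rng.random(2)                # uniform shift in [0,1)^2
    pts = (np.outer(np.arange(n), z) / n + delta) % 1.0
    estimates.append(f(pts).mean())
est = np.mean(estimates)                 # averaged shifted-lattice estimate
err = abs(est - 0.25)
```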

Friday, July 17th: Frank Vallentin, University of Köln, Germany
Bio: Frank Vallentin is a professor of applied mathematics and computer science at Universität zu Köln. In 2003 he received his Ph.D. in mathematics from Technische Universität München. Past appointments include assistant and associate professor at Technische Universiteit Delft, postdoc at Centrum Wiskunde & Informatica in Amsterdam and postdoc at the Hebrew University of Jerusalem. His research interests include optimization, geometry, discrete and experimental mathematics.
Mathematical optimization for packing problems
How densely can one pack given objects into a given container? Such packing problems are fundamental problems in geometric optimization. Besides being classical mathematical challenges, they have many applications in diverse areas such as information theory, materials science, physics, logistics, and approximation theory. Studying packing problems, one faces two basic tasks. Constructions: how to construct packings which are conjecturally optimal? Obstructions: how to prove that a given packing is indeed optimal? For the first task, researchers in mathematics and engineering have found many heuristics which often work well in practice. In this talk I want to explain computational tools for the second task. These tools blend infinite-dimensional semidefinite optimization and harmonic analysis with computational techniques from real algebraic geometry and polynomial optimization. I will report on computational results, which are frequently the best known.

Monday, July 13th: Pascal Van Hentenryck, NICTA, Australia
Bio: Pascal Van Hentenryck received his ScB and PhD from the University of Namur in Belgium. He leads the Optimization Research Group at NICTA and holds a vice-chancellor chair in data-intensive computing at the Australian National University. Prior to his NICTA appointment, he was a professor at Brown University for about 20 years. His current research is at the intersection of data science and optimization, with applications in energy, logistics, disaster management, and computational social science.
Complexity, Approximation, and Relaxation of the Power Flow Equations
The design, control, and operation of the power grid, probably the largest and most expansive system ever engineered, require solving optimization problems over the steady-state power flow equations. The resulting mixed nonconvex programs are often computationally challenging, and increasingly so with the growing stochasticity in generation and load. This talk presents some new complexity results, as well as a number of advances in approximating and relaxing the power flow equations to address emerging applications in power systems, including large-scale power restoration after blackouts, the design of resilient networks, and the integration of renewable generation. Extensive computational results demonstrate some of the benefits of the proposed techniques.
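For orientation, the best-known approximation of the steady-state equations is the linear "DC" power flow model (a standard textbook simplification, stated here in generic notation rather than as the talk's specific relaxations):

```latex
% DC approximation: assuming small angle differences and flat voltage
% magnitudes, the active power flow on line (i,j) with reactance x_{ij}
% is linear in the voltage angles \theta, with flow balance at each bus:
p_{ij} \approx \frac{\theta_i - \theta_j}{x_{ij}},
\qquad
\sum_{j} p_{ij} = p_i^{\mathrm{gen}} - p_i^{\mathrm{load}}.
```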

Friday, July 17th: Yaxiang Yuan, Chinese Academy of Sciences, China
Bio: Yaxiang Yuan received a BSc from Xiangtan University (China) in 1981 and a PhD from the University of Cambridge (UK) in 1986. He was a Rutherford Fellow at Fitzwilliam College, University of Cambridge, from 1985 to 1988. He returned to China in 1988 and has been a full professor at the Chinese Academy of Sciences since then. His research area is mainly in continuous optimization, particularly trust-region methods, quasi-Newton methods, nonlinear conjugate gradients, and subspace methods.
Recent Advances in Trust-Region Algorithms
Trust-region methods are a class of numerical methods for optimization. Unlike line-search methods, which carry out a line search in each iteration, trust-region methods compute a trial step by solving a trust-region subproblem in which a model function is minimized within a trust region. Thanks to the trust-region constraint, nonconvex models can be used in trust-region subproblems, and trust-region algorithms can be applied to nonconvex and ill-conditioned problems. Normally it is easier to establish the global convergence of a trust-region algorithm than that of its line-search counterpart. In this talk, we review recent results on trust-region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization, and optimization without derivatives. Results on trust-region subproblems and regularization methods are also discussed.