Abstracts of Invited Talks
Jiawei Chen, UBC Okanagan and Southwest University (China)
A Projection Method for Approximating Fixed Points of Quasi-Nonexpansive Mappings Without the Usual Demiclosedness Condition
We introduce and analyze an abstract algorithm that aims to find the projection onto a closed
convex subset of a Hilbert space. When specialized to the fixed-point set of a quasi-nonexpansive
mapping, the required sufficient condition (termed fixed-point closed) is less restrictive than the
usual conditions based on the demiclosedness principle. A concrete example of a subgradient
projector is presented which illustrates the applicability of this generalization.
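For orientation, the standard notions in play here (textbook definitions, not taken from the talk) are the following: a mapping T on a Hilbert space H with a nonempty fixed-point set is quasi-nonexpansive when it never moves a point farther from a fixed point, and the usual convergence arguments assume that I - T is demiclosed at 0.

```latex
% Quasi-nonexpansive: T does not move points away from Fix T.
\[
\|Tx - p\| \le \|x - p\|
\quad \text{for all } x \in H,\ p \in \operatorname{Fix} T := \{p \in H : Tp = p\}.
\]
% Demiclosedness of I - T at 0 (the assumption the talk relaxes):
\[
x_n \rightharpoonup \bar{x}, \quad x_n - Tx_n \to 0
\ \Longrightarrow\ \bar{x} \in \operatorname{Fix} T.
\]
```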
This is joint work with Heinz H. Bauschke and Xianfu Wang.
Michael Friedlander, University of British Columbia
Polar Convolution
The infimal convolution of two convex functions results in a "mixture" of the two original functions. This operation is also known as epigraphical addition. Infimal "max" convolution is an analogous
operation, except that it mixes the level sets of the two functions. The properties of max convolution are not nearly as useful for optimization as those of the usual infimal convolution. However, if
we restrict ourselves to gauges (nonnegative sublinear functions), max convolution inherits a host of useful properties. Because so many useful properties related to polarity accrue when restricting
max-convolution to the class of gauge functions, we term the operation polar convolution. We describe the properties of polar convolution, including its role in gauge optimization and in approximating
a gauge by a smooth function that is also a gauge.
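In symbols (one common formulation; the talk's notation may differ), the two operations are:

```latex
% Infimal convolution (epigraphical addition):
\[
(f \mathbin{\Box} g)(x) \;=\; \inf_{y}\,\bigl\{\, f(y) + g(x - y) \,\bigr\}.
\]
% Max convolution, termed polar convolution when f and g are gauges:
\[
(f \mathbin{\diamond} g)(x) \;=\; \inf_{y}\,\max\bigl\{\, f(y),\; g(x - y) \,\bigr\}.
\]
```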
This is joint work with Ting Kei Pong (Hong Kong Polytechnic University).
William Hager, University of Florida
Optimization with Polyhedral Constraints
A two-phase strategy is presented for solving an optimization
problem whose feasible set is a polyhedron. Phase one is the gradient
projection algorithm, while phase two is essentially any algorithm
for solving a linearly constrained optimization problem. Using some
simple rules for branching between the two phases, it is shown, under
suitable assumptions, that only the linearly constrained optimization
algorithm is performed asymptotically. Hence, the asymptotic convergence
behavior of the two-phase algorithm coincides with the convergence behavior
of the linearly constrained optimizer. Numerical results are presented
using CUTE test problems, a recently developed algorithm for projecting
a point onto a polyhedron, and the conjugate gradient algorithm for
the linearly constrained optimizer.
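As a minimal illustration of the phase-one idea, the following sketch runs gradient projection on a box, the simplest polyhedron, where the projection is a coordinate-wise clip. This is an illustrative toy only, not the authors' implementation, which relies on a general algorithm for projecting onto a polyhedron.

```python
def project_box(x, lo, hi):
    """Project x onto the box {z : lo <= z <= hi} (a simple polyhedron)."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def gradient_projection(grad, x0, lo, hi, step=0.1, iters=500):
    """Phase-one iteration: x_{k+1} = P(x_k - step * grad(x_k))."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = project_box([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Minimize f(x) = 0.5 * ||x - c||^2 over the box [0,1]^2 with c = (2, -1);
# the minimizer is the projection of c onto the box, namely (1, 0).
c = [2.0, -1.0]
x_star = gradient_projection(lambda x: [xi - ci for xi, ci in zip(x, c)],
                             [0.0, 0.0], [0.0, 0.0], [1.0, 1.0])
```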
Simge Küçükyavuz, University of Washington
Chance-Constrained Combinatorial Optimization with a Probability Oracle
A chance-constrained combinatorial optimization problem (CCOP) aims to find a
minimum-cost selection of binary decisions that satisfies a
constraint with high probability. Suppose that we have an oracle that
can compute the probability of satisfying the constraint exactly.
Using this oracle, we propose a general method for solving CCOP
exactly. In addition, if CCOP is solved by
a sampling-based approach, the oracle can be used as a tool for
checking and fixing the feasibility of the resulting solution. To
demonstrate the effectiveness of the proposed methods, we apply them
to an NP-hard probabilistic set covering problem, which admits a
polynomial-time exact probability oracle. This is joint work with
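To illustrate how such an oracle is used for feasibility checking, here is a toy Python sketch. It assumes a hypothetical covering model in which each chosen set covers its elements independently with a known probability, and additionally assumes independence across elements so the coverage probability is a closed-form product. These simplifying assumptions are for illustration only and are not the authors' formulation.

```python
def coverage_probability(selected, sets, p):
    """Toy probability oracle.  Each chosen set i covers its elements
    independently with probability p[i]; assuming independence across
    elements as well (a simplification made for this sketch),
    P(all elements covered) factors into a product."""
    elements = set().union(*sets)
    prob = 1.0
    for j in elements:
        miss = 1.0
        for i in selected:
            if j in sets[i]:
                miss *= 1.0 - p[i]
        prob *= 1.0 - miss   # P(element j is covered)
    return prob

def is_chance_feasible(selected, sets, p, eps):
    """Check the chance constraint P(cover everything) >= 1 - eps."""
    return coverage_probability(selected, sets, p) >= 1.0 - eps

sets = [{0, 1}, {1, 2}, {0, 2}]
p = [0.9, 0.9, 0.9]
feasible = is_chance_feasible([0, 1, 2], sets, p, eps=0.05)
```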
Scott Lindstrom, University of Newcastle
The Douglas-Rachford Method for Ellipses and p-Spheres
Notwithstanding the absence of theoretical justification, the Douglas-Rachford method has been successfully employed to solve a variety of non-convex feasibility problems. Jonathan Borwein and Brailey
Sims investigated the specific case of a sphere and line, and global convergence was later proven by Joel Benoist outside of a singular submanifold. We continue that investigation by considering two
generalizations of the sphere: p-spheres and ellipses. In so doing, we discover beautiful and unexpected phenomena which shed light on the behavioral properties of the algorithm more generally.
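The method itself is easy to state. The following Python sketch (a hedged illustration, not the authors' code) runs Douglas-Rachford for a unit circle and a horizontal line in the plane, using the reflectors R = 2P - I; by the convergence results above, for a generic starting point off the singular axis the shadow sequence P_A(x_k) approaches an intersection point.

```python
import math

def proj_circle(x):
    """Nearest point on the unit circle (undefined at the origin)."""
    n = math.hypot(x[0], x[1])
    return (x[0] / n, x[1] / n)

def proj_line(x, b):
    """Nearest point on the horizontal line y = b."""
    return (x[0], b)

def dr_step(x, b):
    """One Douglas-Rachford update: x + P_B(2 P_A x - x) - P_A x,
    with A the unit circle and B the line y = b."""
    pa = proj_circle(x)
    r = (2 * pa[0] - x[0], 2 * pa[1] - x[1])   # reflect through A
    pb = proj_line(r, b)
    return (x[0] + pb[0] - pa[0], x[1] + pb[1] - pa[1])

# With b = 0.5 the intersection points are (+-sqrt(0.75), 0.5).
b = 0.5
x = (0.7, 0.3)
for _ in range(2000):
    x = dr_step(x, b)
shadow = proj_circle(x)   # should be close to (sqrt(0.75), 0.5)
```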
This work is a collaboration with Jonathan Borwein, Brailey Sims, Matthew Skerritt, and Anna Schneider.
Chayne Planiden, UBC Okanagan
A Derivative-free VU-algorithm for Convex Finite-max Functions
Optimization for nonsmooth functions is generally more difficult than for smooth functions, because the functions do not have gradients everywhere to indicate descent directions. Nonsmooth optimization
algorithms such as proximal methods have been developed, but they have slower convergence rates than the gradient-based methods we have for smooth functions. The VU-algorithm is a two-step
optimization technique that minimizes convex nonsmooth functions and obtains a superlinear convergence rate. Part I of this presentation introduces the VU-algorithm and explains how it works.
Derivative-free optimization (DFO) is growing in research popularity, because often in real-world application the gradients or subgradients of a function are either not available, or
problematic/expensive to calculate. In such cases, it is preferable to use a method that does not require gradient information. The main goal of this presentation is to introduce a new minimization
algorithm called the DFO VU-algorithm. The classical VU-algorithm has been modified to work in a derivative-free setting, so that we can take advantage of the VU structure while using approximations in place of exact subgradient information.
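For concreteness, a convex finite-max function and one of its subgradients look as follows. This small Python sketch (a hypothetical example, not from the talk) evaluates f(x) = max_i (a_i . x + b_i) and returns the gradient of an active affine piece, which is a valid subgradient of f at x; this is the kind of first-order object the derivative-free variant must approximate.

```python
def finite_max(x, pieces):
    """Evaluate f(x) = max_i (a_i . x + b_i), a convex finite-max function."""
    return max(sum(ai * xi for ai, xi in zip(a, x)) + b for a, b in pieces)

def a_subgradient(x, pieces):
    """Return the gradient a_i of one active affine piece; for a finite-max
    function, any such a_i is a subgradient of f at x."""
    vals = [(sum(ai * xi for ai, xi in zip(a, x)) + b, a) for a, b in pieces]
    return max(vals, key=lambda va: va[0])[1]

# f(x) = max(x1 + x2, -x1, 1), nonsmooth where the pieces cross.
pieces = [((1.0, 1.0), 0.0), ((-1.0, 0.0), 0.0), ((0.0, 0.0), 1.0)]
print(finite_max((2.0, 3.0), pieces))    # piece x1 + x2 is active: 5.0
print(a_subgradient((2.0, 3.0), pieces)) # its gradient (1.0, 1.0)
```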
Chris Ryan, University of Chicago (Booth School of Business)
Mixed-integer linear representability, disjunctions and Chvátal functions
— modeling implications
Jeroslow and Lowe gave an exact geometric characterization of subsets of
R^n that are projections of mixed-integer linear sets, also known as
MILP-representable sets. We give an alternate algebraic characterization
by showing that a set is MILP-representable if and only if the set can be
described as the intersection of finitely many affine Chvátal
inequalities. These inequalities are a modification of a concept
introduced by Blair and Jeroslow. This gives a sequential variable
elimination scheme that, when applied to the MILP representation of a set,
explicitly gives the affine Chvátal inequalities characterizing that set.
This is related to the elimination scheme of Williams and Williams-Hooker,
who describe projections of integer sets using disjunctions of affine
Chvátal systems. Our scheme extends their work in two ways. First, we show
that disjunctions are unnecessary, by showing how to find the affine
Chvátal inequalities that cannot be discovered by the Williams-Hooker
scheme. Second, disjunctions of Chvátal systems can give sets that are not
projections of mixed-integer linear sets; so the Williams-Hooker approach
does not give an exact characterization of MILP representability. Finally,
our work can be seen as a generalization of the approach of Blair and
Jeroslow, and Schrijver for constructing consistency testers for integer programs.
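To give the flavor of an affine Chvátal inequality (a toy illustration in assumed notation; the precise definitions in the talk may differ), consider the even integers. The set {2z : z in Z} is MILP-representable by projecting out z, and it is also cut out by a single inequality that bounds an affine function by the integer round-down of another affine function:

```latex
\[
\{\, 2z : z \in \mathbb{Z} \,\}
\;=\;
\Bigl\{\, x \in \mathbb{R} \;:\; \tfrac{x}{2} \le \bigl\lfloor \tfrac{x}{2} \bigr\rfloor \,\Bigr\},
\]
% since x/2 <= floor(x/2) forces x/2 to be an integer.
```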
This is joint work with Amitabh Basu, Kipp Martin, and Guanyi Wang.
Lin Xiao, Microsoft Research
Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
We consider primal-dual first-order algorithms for solving convex-concave saddle-point problems where the coupling between the primal and dual variables is bilinear. For many problems in machine
learning, the coupling matrix consists of all the data. In order to obtain fast linear rates of convergence, we usually need to assume both strong convexity in the primal variables and strong
concavity in the dual variables. In this talk, we show that such an assumption can be relaxed if the data matrix is full rank, which can effectively transfer the strong convexity or concavity from one
side to the other. We show how to exploit strong convexity from data by setting the step sizes appropriately for the Chambolle-Pock batch algorithm, as well as for randomized primal-dual algorithms.
Since the strong convexity parameter from data is hard to estimate, we propose simple heuristics for adaptively tuning the step sizes and demonstrate their effectiveness in numerical experiments. This
is joint work with Jialei Wang from University of Chicago.
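In one standard form (notation assumed here rather than taken from the talk), the bilinearly coupled saddle-point problem and the Chambolle-Pock updates, with step sizes tau and sigma satisfying sigma * tau * ||A||^2 < 1, read:

```latex
\[
\min_{x}\ \max_{y}\quad f(x) + \langle A x,\, y \rangle - g(y),
\]
\[
x_{k+1} = \operatorname{prox}_{\tau f}\!\bigl(x_k - \tau A^{\top} y_k\bigr),
\qquad
y_{k+1} = \operatorname{prox}_{\sigma g}\!\bigl(y_k + \sigma A\,(2 x_{k+1} - x_k)\bigr).
\]
```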
Zirui Zhou, Simon Fraser University
Iteration-complexity of first-order inexact augmented Lagrangian methods for convex conic programming
In this talk we consider a class of convex conic programming problems. We propose an inexact augmented Lagrangian (I-AL) method for this class, where the subproblems are solved approximately by a variant of
Nesterov's optimal first-order method. We show that the I-AL method is equivalent to the proximal point algorithm (PPA) applied to the monotone operator associated with the Lagrangian dual function.
Based on this fact, we establish a bound on the total number of first-order iterations for computing an approximate Karush-Kuhn-Tucker (KKT) point to the problem. We also propose a variant of I-AL,
which corresponds to applying PPA to the monotone operator associated with the ordinary Lagrangian function, and show that it has a better complexity bound than the original I-AL. In contrast to
existing works on I-AL methods, our algorithms are practically applicable to a broader class of problems, and our analysis also provides sharper complexity bounds.
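For orientation, in the equality-constrained special case min{ f(x) : Ax = b } the augmented Lagrangian and the outer I-AL iteration take the familiar form below (the conic case replaces the quadratic penalty with a distance to the cone; the inner argmin is the subproblem that the variant of Nesterov's method solves inexactly):

```latex
\[
\mathcal{L}_{\rho}(x, \lambda)
= f(x) + \langle \lambda,\, A x - b \rangle + \tfrac{\rho}{2}\,\| A x - b \|^{2},
\]
\[
x_{k+1} \approx \operatorname*{arg\,min}_{x}\ \mathcal{L}_{\rho}(x, \lambda_{k}),
\qquad
\lambda_{k+1} = \lambda_{k} + \rho\,(A x_{k+1} - b).
\]
```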
This is joint work with Zhaosong Lu.