
Abstracts of Invited Talks

Jiawei Chen, UBC Okanagan and Southwest University (China)
A Projection Method for Approximating Fixed Points of Quasi-Nonexpansive Mappings Without the Usual Demiclosedness Condition
We introduce and analyze an abstract algorithm that aims to find the projection onto a closed
convex subset of a Hilbert space. When specialized to the fixed point set of a quasi-nonexpansive
mapping, the required sufficient condition (termed fixed-point closed) is less restrictive than the
usual conditions based on the demiclosedness principle. A concrete example of a subgradient
projector is presented which illustrates the applicability of this generalization.
This is joint work with Heinz H. Bauschke and Xianfu Wang.
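As a hedged illustration of the kind of operator mentioned above, here is a minimal sketch of a subgradient projector; the affine example is a standard textbook instance, not necessarily the one presented in the talk:

```python
import numpy as np

def subgradient_projector(f, subgrad, x):
    """Subgradient projector of a convex f at x: if f(x) > 0, step along a
    subgradient s far enough to reach the level set {f <= 0}; else return x."""
    fx = f(x)
    if fx <= 0:
        return x
    s = subgrad(x)
    return x - (fx / np.dot(s, s)) * s

# For f(x) = <a, x> - beta, the subgradient projector is exactly the
# projection onto the halfspace {x : <a, x> <= beta}.
a, beta = np.array([1.0, 0.0]), 1.0
f = lambda x: a @ x - beta
subgrad = lambda x: a
p = subgradient_projector(f, subgrad, np.array([3.0, 4.0]))
# p = (1, 4): the projection of (3, 4) onto {x : x[0] <= 1}.
```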

Michael Friedlander, University of British Columbia
Polar convolution
Infimal convolution of two convex functions results in a "mixture" of the two original functions. This operation is also known as epigraphical addition. Infimal "max" convolution is an analogous
operation, except that it mixes the level sets of the two functions. The properties of max convolution are not nearly as useful for optimization as those of the usual infimal convolution. However, if
we restrict ourselves to gauges (nonnegative sublinear functions), max convolution inherits a host of useful properties. Because so many useful properties related to polarity accrue when restricting
max convolution to the class of gauge functions, we term the operation polar convolution. We describe the properties of polar convolution, including its role in gauge optimization and in approximating
a gauge by a smooth function that is also a gauge.
This is joint work with Ting Kei Pong (Hong Kong Polytechnic University).
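For orientation, the two operations contrasted above can be written as follows (standard definitions, in our notation rather than necessarily the speaker's):

```latex
% Infimal (sum) convolution, also known as epigraphical addition:
(f \,\square\, g)(x) \;=\; \inf_{y}\ \bigl\{\, f(y) + g(x - y) \,\bigr\}

% Infimal max convolution, which mixes level sets instead:
(f \,\diamond\, g)(x) \;=\; \inf_{y}\ \max\bigl\{\, f(y),\; g(x - y) \,\bigr\}
```

Restricted to gauges, polar duality is what ties the two operations together (roughly, the polar of a max convolution of gauges behaves like a sum of polars), which motivates the name polar convolution.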

William Hager, University of Florida
Optimization with Polyhedral Constraints
A two-phase strategy is presented for solving an optimization
problem whose feasible set is a polyhedron. Phase one is the gradient
projection algorithm, while phase two is essentially any algorithm
for solving a linearly constrained optimization problem. Using some
simple rules for branching between the two phases, it is shown, under
suitable assumptions, that only the linearly constrained optimization
algorithm is performed asymptotically. Hence, the asymptotic convergence
behavior of the two-phase algorithm coincides with the convergence behavior
of the linearly constrained optimizer. Numerical results are presented
using CUTE test problems, a recently developed algorithm for projecting
a point onto a polyhedron, and the conjugate gradient algorithm for
the linearly constrained optimizer.
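Phase one can be sketched as follows; this is a minimal gradient projection iteration on a particularly simple polyhedron (a box, whose projection is a componentwise clip), with invented data rather than the CUTE problems from the talk:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, x0, lo, hi, step=0.1, iters=500):
    """Gradient projection: x_{k+1} = P(x_k - step * grad(x_k))."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - step * grad(x), lo, hi)
    return x

# Minimize f(x) = 0.5 * ||x - c||^2 over the box [0, 1]^2 with c = (2, -1).
c = np.array([2.0, -1.0])
x_star = gradient_projection(lambda x: x - c, [0.0, 0.0], 0.0, 1.0)
# x_star approaches (1, 0), the projection of c onto the box.
```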

Simge Küçükyavuz, University of Washington
Chance-Constrained Combinatorial Optimization with a Probability Oracle
A chance-constrained combinatorial optimization problem (CCOP) aims to find a
minimum-cost selection of binary decisions that satisfies a
constraint with high probability. Suppose that we have an oracle that
can compute the probability of satisfying the constraint exactly.
Using this oracle, we propose a general method for solving CCOP
exactly. In addition, if CCOP is solved by
a sampling-based approach, the oracle can be used as a tool for
checking and fixing the feasibility of the resulting solution. To
demonstrate the effectiveness of the proposed methods, we apply them
to an NP-hard probabilistic set covering problem, which admits a
polynomial-time exact probability oracle. This is joint work with
Hao-Hsiang Wu.
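A hedged toy illustration of what a probability oracle can look like: assuming a set covering model in which a selected set j covers element i independently with known probability p[i, j], the coverage probability factors across elements and is computable in polynomial time. The model and its data are our illustrative assumptions, not the ones from the talk.

```python
import numpy as np

def coverage_probability(p, z):
    """P(every element covered) for 0/1 selection z over the sets, under
    full independence. Element i is missed only if every selected set
    fails to cover it, so the probability is a product over elements."""
    miss = np.prod(np.where(z.astype(bool), 1.0 - p, 1.0), axis=1)
    return float(np.prod(1.0 - miss))

p = np.array([[0.9, 0.5],
              [0.2, 0.8]])   # p[i, j] = P(set j covers element i)
z = np.array([1, 1])         # select both sets
prob = coverage_probability(p, z)
# Element 0 missed w.p. 0.1 * 0.5 = 0.05; element 1 w.p. 0.8 * 0.2 = 0.16,
# so P(all covered) = 0.95 * 0.84.
```

A solver could call such an oracle to certify that a candidate selection meets the chance constraint, or to detect and repair an infeasible sampling-based solution.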

Scott Lindstrom, University of Newcastle
The Douglas-Rachford Method for Ellipses and p-Spheres
Notwithstanding the absence of theoretical justification, the Douglas-Rachford method has been successfully employed to solve a variety of nonconvex feasibility problems. Jonathan Borwein and Brailey
Sims investigated the specific case of a sphere and a line, and global convergence was later proven by Joel Benoist outside of a singular submanifold. We continue that investigation by considering two
generalizations of the sphere: p-spheres and ellipses. In so doing, we discover beautiful and unexpected phenomena which shed light on the behavioral properties of the algorithm more generally.
This work is a collaboration with Jonathan Borwein, Brailey Sims, Matthew Skerritt, and Anna Schneider.
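For orientation, a minimal sketch of the Douglas-Rachford iteration for the circle-and-line case studied by Borwein and Sims; the line height and starting point are illustrative choices, not taken from the talk:

```python
import numpy as np

def P_circle(x):
    """Nearest-point projection onto the unit circle in R^2."""
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0])

def P_line(x, h=0.5):
    """Projection onto the horizontal line {y = h}."""
    return np.array([x[0], h])

def dr_step(x, h=0.5):
    """One Douglas-Rachford step: x -> (x + R_B R_A x) / 2,
    where R = 2P - I is the reflector of each set."""
    r_a = 2.0 * P_circle(x) - x
    r_b = 2.0 * P_line(r_a, h) - r_a
    return 0.5 * (x + r_b)

x = np.array([1.5, 0.3])
for _ in range(2000):
    x = dr_step(x)
# With 0 < h < 1 the circle meets the line; from this generic start the
# iterates approach a feasible point (norm 1, second coordinate h).
```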

Chayne Planiden, UBC Okanagan
A Derivative-free VU-algorithm for Convex Finite-max Functions
Optimization for nonsmooth functions is generally more difficult than for smooth functions, because the functions do not have gradients everywhere to indicate descent directions. Nonsmooth optimization
algorithms such as proximal methods have been developed, but they have slower convergence rates than the gradient-based methods we have for smooth functions. The VU-algorithm is a two-step
optimization technique that minimizes convex nonsmooth functions and obtains a superlinear convergence rate. Part I of this presentation introduces the VU-algorithm and explains how it works.
Derivative-free optimization (DFO) is growing in research popularity, because often in real-world applications the gradients or subgradients of a function are either not available or
problematic/expensive to calculate. In such cases, it is preferable to use a method that does not require gradient information. The main goal of this presentation is to introduce a new minimization
algorithm called the DFO VU-algorithm. The classical VU-algorithm has been modified to work in a derivative-free setting, so that we can take advantage of the VU structure while using approximations
to gradients.

Chris Ryan, University of Chicago (Booth School of Business)
Mixed-integer linear representability, disjunctions and Chvátal functions
— modeling implications
Jeroslow and Lowe gave an exact geometric characterization of subsets of
R^{n} that are projections of mixed-integer linear sets, also known as
MILP-representable sets. We give an alternate algebraic characterization
by showing that a set is MILP-representable if and only if the set can be
described as the intersection of finitely many affine Chvátal
inequalities. These inequalities are a modification of a concept
introduced by Blair and Jeroslow. This gives a sequential variable
elimination scheme that, when applied to the MILP representation of a set,
explicitly gives the affine Chvátal inequalities characterizing that set.
This is related to the elimination scheme of Williams and Williams-Hooker,
who describe projections of integer sets using disjunctions of affine
Chvátal systems. Our scheme extends their work in two ways. First, we show
that disjunctions are unnecessary, by showing how to find the affine
Chvátal inequalities that cannot be discovered by the Williams-Hooker
scheme. Second, disjunctions of Chvátal systems can give sets that are not
projections of mixed-integer linear sets; so the Williams-Hooker approach
does not give an exact characterization of MILP representability. Finally,
our work can be seen as a generalization of the approach of Blair and
Jeroslow, and Schrijver for constructing consistency testers for integer
programs.
This is joint work with Amitabh Basu, Kipp Martin, and Guanyi Wang.
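As a toy illustration of how a single Chvátal-style inequality can capture integrality without a disjunction (our example; the ceiling operator plays the role of the rounding allowed in Chvátal functions):

```latex
% The integers are exactly the solution set of one inequality built
% from an affine function and a ceiling:
\mathbb{Z} \;=\; \bigl\{\, x \in \mathbb{R} \;:\; \lceil x \rceil - x \le 0 \,\bigr\}
```

Since $\lceil x \rceil \ge x$ always, with equality precisely when $x$ is an integer, the single inequality replaces what a disjunctive description would split into cases.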

Lin Xiao, Microsoft Research
Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
We consider primal-dual first-order algorithms for solving convex-concave saddle-point problems where the coupling between the primal and dual variables is bilinear. For many problems in machine
learning, the coupling matrix consists of all the data. In order to obtain fast linear rates of convergence, we usually need to assume both strong convexity in the primal variables and strong
concavity in the dual variables. In this talk, we show that such an assumption can be relaxed if the data matrix is full rank, which can effectively transfer the strong convexity or concavity from one
side to the other. We show how to exploit strong convexity from data by setting the step sizes appropriately for the Chambolle-Pock batch algorithm, as well as for randomized primal-dual algorithms.
Since the strong convexity parameter from data is hard to estimate, we propose simple heuristics for adaptively tuning the step sizes and demonstrate their effectiveness in numerical experiments. This
is joint work with Jialei Wang from the University of Chicago.
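A minimal sketch of the Chambolle-Pock batch iteration on a toy bilinear saddle-point problem (ridge-regularized least squares); the data, step sizes, and iteration count are illustrative assumptions, and the talk's adaptive step-size heuristics are not reproduced here:

```python
import numpy as np

# Toy instance: min_x (lam/2)||x||^2 + 0.5||Kx - b||^2, written as the
# bilinear saddle point  min_x max_y  <Kx, y> + g(x) - f*(y)  with
# g(x) = (lam/2)||x||^2 and f*(y) = 0.5||y||^2 + <b, y>.
rng = np.random.default_rng(0)
K = rng.standard_normal((4, 3))    # "data" coupling matrix
b = rng.standard_normal(4)
lam = 1.0

L = np.linalg.norm(K, 2)           # operator norm of the coupling matrix
tau = sigma = 0.9 / L              # satisfies tau * sigma * L**2 < 1
theta = 1.0                        # extrapolation parameter

x = np.zeros(3); x_bar = x.copy(); y = np.zeros(4)
for _ in range(20000):
    # prox of sigma * f*: closed form for this quadratic f*
    y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
    # prox of tau * g: simple shrinkage for the quadratic regularizer
    x_new = (x - tau * (K.T @ y)) / (1.0 + tau * lam)
    x_bar = x_new + theta * (x_new - x)
    x = x_new

x_star = np.linalg.solve(K.T @ K + lam * np.eye(3), K.T @ b)  # closed form
```

Here both g and f* happen to be strongly convex, which is the regime where tuning tau and sigma (the subject of the talk) pays off; the fixed choice above is only the baseline.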

Zirui Zhou, Simon Fraser University
Iteration complexity of first-order inexact augmented Lagrangian methods for convex conic programming
In this talk we consider a class of convex conic programming problems. We propose an inexact augmented Lagrangian (IAL) method for this problem class, where the subproblems are solved approximately by a variant of
Nesterov's optimal first-order method. We show that the IAL method is equivalent to the proximal point algorithm (PPA) applied to the monotone operator associated with the Lagrangian dual function.
Based on this fact, we establish a bound on the total number of first-order iterations for computing an approximate Karush-Kuhn-Tucker (KKT) point to the problem. We also propose a variant of IAL,
which corresponds to applying PPA to the monotone operator associated with the ordinary Lagrangian function, and show that it has a better complexity bound than the original IAL. In contrast to
existing works on IAL methods, our algorithms are practically applicable to a broader class of problems, and our analysis also provides sharper complexity bounds.
This is joint work with Zhaosong Lu.
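To fix ideas, a hedged sketch of the outer augmented Lagrangian loop on a toy equality-constrained problem; the subproblem is solved exactly in closed form here (rather than inexactly by a first-order method, as in the talk), so only the multiplier update, which is the PPA step on the dual, is illustrated, and all data are invented:

```python
import numpy as np

# Toy problem: min 0.5 * ||x||^2  s.t.  A x = b, whose solution is the
# minimum-norm point x* = A^T (A A^T)^{-1} b.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])
b = np.array([1.0, 2.0])
rho = 1.0                              # penalty parameter

x = np.zeros(3)
lam = np.zeros(2)                      # multiplier estimate
for _ in range(100):
    # Minimize the augmented Lagrangian in x; closed form for this quadratic:
    #   0.5||x||^2 + lam^T (A x - b) + (rho/2) ||A x - b||^2
    x = np.linalg.solve(np.eye(3) + rho * A.T @ A, A.T @ (rho * b - lam))
    # Multiplier update: a proximal point step on the dual problem.
    lam = lam + rho * (A @ x - b)

x_star = A.T @ np.linalg.solve(A @ A.T, b)   # for comparison
```

Replacing the exact inner solve with a fixed number of accelerated gradient steps is what turns this into an inexact AL method, and the complexity question in the talk is how many such inner first-order iterations are needed in total.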
