Combinatorial optimization problems for genome rearrangement phylogeny
Extending Lovász's Theta Body of a Graph to all Real Varieties
The theta body of a graph, constructed by Lovász about 30 years ago, is one of
the first examples of an SDP approximation in discrete optimization. This body
has a not-so-well-known definition in terms of sums-of-squares polynomials that
appears in papers by Lovász in the 1990s. Using this definition and a question
he raises, we define a hierarchy of theta bodies for any real algebraic variety
(real solutions to a polynomial system over the reals). We prove that these
theta bodies are SDP relaxations of the convex hull of real varieties and are in
fact a version of Lasserre's relaxations. For the max cut problem this hierarchy
appears to be new. When the real variety is finite (as is usual in combinatorial
optimization), we give a complete characterization of when the first theta body in
the hierarchy equals the convex hull of the variety.
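As a rough sketch of the construction (our notation, following the sums-of-squares definition mentioned above; the details are in the paper):

```latex
% k-th theta body of an ideal I in R[x] (illustrative notation): a linear
% polynomial \ell is k-sos mod I if it is congruent, modulo I, to a sum of
% squares of polynomials of degree at most k.
\[
  \mathrm{TH}_k(I) \;=\; \bigl\{\, x \in \mathbb{R}^n \;:\;
     \ell(x) \ge 0 \ \text{for all linear } \ell \ \text{that are }
     k\text{-sos mod } I \,\bigr\},
\]
\[
  \mathrm{TH}_1(I) \;\supseteq\; \mathrm{TH}_2(I) \;\supseteq\; \cdots
  \;\supseteq\; \operatorname{conv}\bigl(V_{\mathbb{R}}(I)\bigr).
\]
```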
Joint work with Joao Gouveia and Pablo Parrilo.
On the Implementation of a Primal-Dual Interior Point Method
We discuss Mehrotra’s predictor-corrector technique in interior point methods.
Presolving in Linear Programming
We discuss Andersen and Andersen’s paper on presolving in linear programming.
Improving Patient Flow and Resource Utilization in an Ambulatory Clinic Through Simulation
In this talk we consider an ambulatory care unit (ACU) in a large cancer centre,
where operational and resource utilization challenges led to overcrowding,
excessive delays, and concerns regarding safety of critical patient care duties.
We use simulation to analyze the simultaneous impact of operations, scheduling,
and resource allocation on patient wait time, clinic overtime, and resource
utilization. The impact of these factors has been studied before, but usually in
isolation. Further, our model considers multiple clinics operating concurrently,
and includes the extra burden of training residents and medical students during
patient consults. Through scenario analyses we found that the best outcomes
were obtained when not one but multiple changes were implemented
simultaneously. We developed configurations that achieve a reduction of up to
70% in patient wait times and 25% in physical space requirements, with the
same appointment volume. We will present and discuss the key findings of this
study and potential extensions to other outpatient services.
A Redistributed Proximal Bundle Method for Nonconvex Optimization
Proximal bundle methods have been shown to be highly successful optimization
methods for nonsmooth convex optimization. This naturally leads to the
question of whether bundle methods can be extended to work for nonconvex
problems. In this talk we review some past (convex) results for proximal bundle
methods, and demonstrate a method for extending classic bundle methods to the
nonconvex setting. The method is based on generating a cutting-plane model not
of the objective function (as most bundle methods do) but of a local
convexification of the objective function. The convexification parameter is
calculated on the fly,
which allows for both strong convergence results and the ability to inform the
user as to what proximal parameters are sufficiently large to ensure a unique
proximal point of the model functions. This novel approach is sound from both
the primal and dual perspective, which will make it possible to create workable
nonconvex algorithms based on nonconvex VU theory.
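A one-formula sketch of the idea (our notation; the precise cuts and convergence conditions are in the talk): around the current prox-center, cutting planes are generated for the locally convexified function rather than for the objective itself.

```latex
% Local convexification with parameter \eta \ge 0 around prox-center \hat{x}:
\[
  f_{\eta}(x) \;=\; f(x) \;+\; \tfrac{\eta}{2}\,\|x - \hat{x}\|^{2},
\]
% each bundle point (x_i, f(x_i), g_i \in \partial f(x_i)) contributes the cut
\[
  f_{\eta}(x) \;\ge\; f(x_i) + \tfrac{\eta}{2}\,\|x_i - \hat{x}\|^{2}
     + \bigl\langle\, g_i + \eta\,(x_i - \hat{x}),\; x - x_i \,\bigr\rangle,
\]
% valid whenever \eta is large enough that f_{\eta} is convex near \hat{x}.
```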
A Newton-CG Augmented Lagrangian Method for Large Scale SDPs
We consider a Newton-CG augmented Lagrangian method for solving
semidefinite programming (SDP) problems from the perspective of approximate
semismooth Newton methods. To analyze the rate of convergence of
our proposed method, we characterize the Lipschitz continuity of the
corresponding solution mapping at the origin. For the inner problems, we show
that the positive definiteness of the generalized Hessian of the objective
function in these inner problems, a key property for ensuring the efficiency of
using an inexact semismooth Newton-CG method to solve the inner problems, is
equivalent to the constraint nondegeneracy of the corresponding dual problems.
Numerical experiments on a variety of large scale SDPs with the matrix
dimension n up to 4,110 and the number of equality constraints m up to
2,156,544 show that the proposed method is very efficient. We are also able to
solve the SDP problem fap36 (with n=4,110 and m=1,154,467) in the 7th
DIMACS Implementation Challenge much more accurately than in previous
attempts. [This is joint work with Xin-Yuan Zhao and Kim Chuan Toh at the
National University of Singapore.]
Large-scale convex optimization over matrices for multi-task learning
A recently proposed approach to multi-task learning entails minimizing a function
of the form: f(Q,W) = trace(W' Q^{-1} W) + trace(Q) + ||AW - B||^2,
where Q is an n by n real matrix constrained to be positive definite, and W is an
n by m real matrix. Here A is a p by n real matrix whose columns correspond to
biological images, B is a p by m real matrix of measured data, p is the number of
data points, and m is the number of tasks. Q^{-1} is an (unknown) covariance matrix.
In an application to Drosophila gene expression pattern analysis, m, n, p can be
up to 100, 2000, 3000, so the number of variables is about 2 million. Worse, the
minimum is typically not attained.
We show that this problem can be reduced to a semidefinite program (SDP) that
always has an optimal solution. The SDP is still too large to be solved by
general SDP solvers. By considering the dual, we show that the dual problem
reduces to a convex quadratic optimization problem over n by m real matrices
and subject to a matrix version of a ball constraint.
Moreover, projection onto the "ball" can be done efficiently using singular value
decomposition. An implementation of Nesterov's accelerated gradient projection
method finds a sufficiently accurate solution of the original problem in a matter of
minutes on most instances.
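As a rough illustration of the two computational kernels mentioned above (our reading of the abstract: we assume the "ball" is the unit spectral-norm ball and that the objective uses the Frobenius norm; the function names are ours):

```python
import numpy as np

def project_spectral_ball(M, radius=1.0):
    # Projection onto {X : ||X||_2 <= radius} (spectral norm) via SVD:
    # clip each singular value at the radius and reassemble.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.minimum(s, radius)) @ Vt

def objective(Q, W, A, B):
    # f(Q, W) = trace(W' Q^{-1} W) + trace(Q) + ||A W - B||_F^2,
    # with Q positive definite; solve() avoids forming Q^{-1} explicitly.
    return (np.trace(W.T @ np.linalg.solve(Q, W))
            + np.trace(Q)
            + np.linalg.norm(A @ W - B, 'fro') ** 2)
```

In a gradient-projection scheme of the kind described, each iteration takes a gradient step in the dual variable and then calls the SVD-based projection above.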
This is joint work with Ting Kei Pong (Univ. Washington) and Jieping Ye
(Arizona State Univ.)
Metamodel-based Design Optimization (MBDO) in Support of Modern Engineering Design
Modern engineering design features computation-intensive analyses and
simulations. Tools such as finite element analysis (FEA) and computational
fluid dynamics (CFD) are widely used in engineering design applications.
How to integrate these various computation-intensive tools to support design
synthesis and optimization has been a challenging task. This talk will start with
an introduction of such a challenge, after which the inadequacy of classic
gradient-based optimization methods will be analyzed from the perspective of
design support. To tackle the challenge, a promising approach, metamodel-
based design optimization (MBDO), will be introduced. Metamodel, literally
meaning "model of model," is a simple surrogate of the computation-intensive
function, on which further analysis and optimization are performed. One MBDO
approach for multi-objective optimization, the Pareto Set Pursuing (PSP)
method, will be described in detail. PSP overcomes common difficulties of
MBDO approaches, demonstrates high efficiency for multi-objective optimization,
and represents an innovative interpretation of optimization. Testing and
application of PSP will also be given. Future research topics on MBDO will be
highlighted at the end. This talk should be of interest to researchers in complex
system identification, sampling, modeling, and simulation-based design.
Energy and Quality Optimization in Mobile TV Networks
Mobile TV networks enable users to watch their favorite TV shows on small
hand-held devices while traveling. Market research forecasts that mobile TV
will grow to be a multi-billion dollar industry with several hundred million
subscribers in the next few years. In this talk, we will first give a brief overview of
mobile TV networks highlighting some of the main research issues in them.
Then, we will focus on the problem of minimizing the energy consumption of
mobile devices while maximizing the visual quality of different TV channels.
More specifically, in mobile TV broadcast networks, the base station broadcasts
TV channels in bursts such that mobile devices can receive a burst of traffic and
then turn off their radio frequency circuits till the next burst in order to save
energy. The base station must carefully construct the burst transmission
schedule for all TV channels. This is called the burst scheduling problem. We
prove that the burst scheduling problem for TV channels with arbitrary bit rates is
NP-complete. We then propose a practical simplification of the general problem,
which allows TV channels to be classified into multiple classes with bit rates that
have power-of-two increments, e.g., 100, 200, and 400 kbps. Using this practical
simplification, we propose an optimal and efficient burst scheduling algorithm. In
addition, we propose a near-optimal approximation algorithm to solve the
general scheduling problem. We present theoretical analysis, simulation, and
actual implementation in a mobile TV testbed to demonstrate the optimality,
practicality, and efficiency of the proposed algorithms.
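The scheduling algorithms themselves are presented in the talk; as a minimal illustration of the setting (the frame model and names below are our own assumptions, not from the talk), here is a check that a burst schedule is collision-free on the shared medium, together with a per-channel estimate of the energy saving from sleeping between bursts:

```python
def check_and_energy_saving(frame_len, bursts_by_channel):
    # bursts_by_channel maps a channel name to a list of (start, length)
    # bursts within one frame of length frame_len (all channels share the
    # broadcast medium, so no two bursts may overlap).
    all_bursts = sorted(b for bs in bursts_by_channel.values() for b in bs)
    for (s1, l1), (s2, _) in zip(all_bursts, all_bursts[1:]):
        assert s1 + l1 <= s2, "bursts overlap"
    # A device tuned to one channel keeps its radio on only during that
    # channel's bursts; its energy saving is the fraction of the frame off.
    return {ch: 1.0 - sum(l for _, l in bs) / frame_len
            for ch, bs in bursts_by_channel.items()}
```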
Transversal structures on triangulations: a combinatorial study and algorithmic
We study the combinatorial properties and algorithmics of planar maps
(planar graphs embedded in the plane). The techniques are illustrated on a
particular family, namely plane triangulations with no separating triangle.
These triangulations can be endowed with specific edge bicolorations, called
transversal structures, which make it possible to count the triangulations
bijectively, generate them at random, encode them and draw them on a grid.
Simulations indicate that the grid used by the drawing has with high
probability a size close to 11n/27 * 11n/27, where n is the number of
vertices. As we will see, this can be proved rigorously using tools from
bijective combinatorics as well as analytic combinatorics.
Gröbner bases of lattices, corner polyhedra, and integer programming
Simon Lo, SFU
We discuss the paper of Sturmfels, Weismantel and Ziegler describing
connections between Gröbner bases, corner polyhedra, and integer programming.
Expressing the TSP and Matching Polytopes by Symmetric Linear Programs
We discuss Mihalis Yannakakis' famous result that the TSP and matching
polytopes cannot be expressed by symmetric LPs of polynomial size.
SUR 15-300 & IRMACS 10908
Some analogies between simplex and interior point methods
Linear optimization consists of maximizing, or minimizing, a linear function over a
domain defined by a set of linear inequalities. The simplex and primal-dual
interior point methods are currently the most computationally successful
algorithms. While the simplex methods follow an edge path, the interior point
methods follow the central path. In this talk we highlight links between the edge
and central paths, and between the diameter of a polytope and the largest
possible total curvature of the associated central path. We prove continuous
analogues of two results of Holt-Klee and Klee-Walkup. We construct a family of
polytopes which attain the conjectured order of the largest total curvature, and
we prove that the special case where the number of inequalities is twice the
dimension is equivalent to the general case.
This talk is based on joint work with Tamas Terlaky.
Linear Satisfiability Algorithm for 3CNF Formulas of Certain Signaling Networks
A simple model of signal transduction networks in molecular biology consists of
CNF formulas with two and three literals per clause. A necessary condition for
correctness of the network model is satisfiability of the formulas. Deciding
satisfiability turns out to be NP-complete. However, for a subclass that still is of
practical interest, a linear satisfiability algorithm and related characterization of
unsatisfiability can be established. The subclass is a special case of so-called
closed sums of CNF formulas. Computational results on an extensive curated
Human T-Cell Model are included.
Joint work with Kathrin Niermann, Klaus Truemper and Robert Weismantel.
Comparing Two Systems: Beyond Common Random Numbers
Suppose one is comparing two closely related stochastic systems via simulation.
Common random numbers (CRN) involves using the same streams of uniform
random variables on both systems to sharpen the comparison. One can view
CRN as a special choice of copula that gives the joint distribution on the inputs to
both systems. We discuss the possibility of using more general copulae,
including simple examples that show how this can outperform CRN.
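A toy numerical illustration of why CRN sharpens a comparison (the two "systems" and the numbers below are our own example, not from the talk): feeding both systems the same uniform draws induces positive correlation, which shrinks the variance of the estimated difference.

```python
import random

def compare(n, use_crn, seed=0):
    # Estimate E[g1(U) - g2(V)] for two toy systems driven by uniforms,
    # with V = U under CRN and V independent of U otherwise.
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        u = rng.random()
        v = u if use_crn else rng.random()
        diffs.append(u ** 2 - 0.9 * v ** 2)  # toy systems g1, g2
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var
```

In this example the CRN variance is more than an order of magnitude smaller than the independent-sampling variance; a copula other than the comonotone one (which is what CRN amounts to) can in principle do better still, which is the point of the talk.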
Joint work with Sam Ehrlichman
Appointment Scheduling with Discrete Random Durations and Applications
We present two papers and give an overview of the existing work.
In the first paper*, we determine optimal appointment times for a given sequence
of jobs (medical procedures) on a single processor (operating room, examination
facility), to minimize the expected total underage and overage costs when each
job has a random duration given by its discrete probability distribution. A simple
condition on the cost rates implies that the objective function is submodular and
L-convex. Then there exists an optimal appointment schedule that is integer
and can be found in polynomial time.
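As a toy sketch of this kind of objective (a simplified variant of our own devising: one underage rate for server idle time, one overage rate for finishing past the session end; the paper's exact cost structure differs), the expectation can be computed by enumerating the finitely many duration outcomes:

```python
import itertools

def expected_cost(appts, horizon, dists, under_rate, over_rate):
    # appts: appointment times a_1 <= ... <= a_n; dists[i]: list of
    # (duration, probability) pairs for job i. Job i starts at
    # max(a_i, completion of job i-1); we charge under_rate per unit of
    # server idle time and over_rate per unit past the horizon.
    total = 0.0
    for outcome in itertools.product(*dists):
        prob, c, cost = 1.0, 0.0, 0.0
        for a, (d, p) in zip(appts, outcome):
            prob *= p
            start = max(a, c)
            cost += under_rate * (start - c)        # idle before this job
            c = start + d
        cost += over_rate * max(0.0, c - horizon)   # overtime past horizon
        total += prob * cost
    return total
```

Enumeration is exponential in the number of jobs; the point of the paper's structural results (submodularity and L-convexity) is precisely that the optimal appointment times can nonetheless be found in polynomial time.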
The second paper** is also concerned with the same appointment scheduling
problem but without assuming the knowledge of the job duration distributions.
Instead, information on the duration distributions may be obtained by random
sampling. We show that the objective function is convex under a simple
condition and characterize its subdifferential, and determine the number of
independent samples required to obtain a provably near-optimal solution with
high probability.
There are many other challenging and important real-life applications for
appointment scheduling including healthcare diagnostic operations (such as
CAT scan, MRI) and physician appointments, as well as project scheduling,
container vessel and terminal operations, gate and runway scheduling of
aircraft at an airport.
*, ** Joint work with Maurice Queyranne (Sauder School of Business, UBC)
** Joint work with Retsef Levi (Sloan School of Management, MIT)
IRMACS 10908 & SUR 14-400
Kirchberger's theorem, coloured and multiplied
Rational Generating Functions and Integer Programming Games
Chris Ryan, Sauder School of Business, UBC
We explore the computational complexity of computing pure Nash equilibria for a
new class of strategic games called integer programming games with difference
of piecewise linear convex payoffs. Integer programming games are games
where players' action sets are integer points inside of polytopes. Using recent
results from the study of short rational generating functions for encoding sets of
integer points pioneered by Alexander Barvinok, we present efficient algorithms
for enumerating all pure Nash equilibria, and other computations of interest, such
as the pure price of anarchy, and pure threat point, when the dimension and
number of (convex) linear pieces in the payoff functions are fixed. Sequential
games where a leader is followed by competing followers (a Stackelberg--Nash
setting) are also considered.
(Joint work with Matthias Koeppe and Maurice Queyranne.)
A robust approach to handle risk constraints in mid and long-term energy
planning of hydro-thermal systems
We consider the optimal management of a hydro-thermal power system in the
mid and long-term. This is a large-scale multistage stochastic linear program,
often solved by combining sampling with dynamic programming, for example by
stochastic dual dynamic programming techniques. However, such
methodologies are no longer applicable, at least not in a straightforward
manner, when the model is set in a risk-averse formulation. We propose to take
into account risk by using a chance-constrained formulation of the energy-
planning problem. In particular, for a given confidence level, and having critical
minimum reservoirs levels at each time stage, reservoirs can be kept above the
critical volume in a probabilistic manner. When the uncertainty in the problem
(typically, the natural inflows) is represented by a periodic autoregressive
stochastic process, the corresponding probabilistic constraints can be expressed
as Conditional Value-at-Risk constraints and the chance-constrained problem is
a linear program. Such a linear program can be thought of as a robustified
deterministic variant of the original stochastic programming problem. Several
robust adaptive management strategies are then proposed and the methodology
is applied to the Brazilian system.
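For readers unfamiliar with the reformulation, one standard way to connect the two (the Rockafellar-Uryasev representation; the notation, with shortfall Z_t = v_crit,t - V_t and confidence level 1 - alpha, is ours):

```latex
\[
  \mathrm{CVaR}_{\alpha}(Z_t) \;=\; \min_{u \in \mathbb{R}}
     \Bigl\{\, u + \tfrac{1}{\alpha}\,\mathbb{E}\bigl[(Z_t - u)_{+}\bigr] \Bigr\},
\]
\[
  \mathrm{CVaR}_{\alpha}(Z_t) \le 0 \;\Longrightarrow\;
  \mathbb{P}\bigl( V_t \ge v_{\mathrm{crit},t} \bigr) \ge 1 - \alpha .
\]
```

When Z_t is an affine function of the decision variables with the randomness coming from the periodic autoregressive inflow process, the minimization over u turns the CVaR constraint into linear constraints, which is why the chance-constrained problem becomes a linear program.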
This is joint work with Vincent Guigues.
Organizer: Tamon Stephen, e-mail: tamon at sfu ca.
Modified: September 8th, 2009.