SFU Surrey
WCOM Fall 2023

 Abstracts of Invited Talks



  1. Marco Caoduro, UBC Sauder School of Business


    Boxicity and Interval-Orders: Petersen and the Complements of Line Graphs
    The boxicity of a graph is the smallest dimension d such that the graph can be represented as the intersection graph of a set of d-dimensional axis-parallel boxes. We present a simple general approach to determining the boxicity of a graph based on studying its "interval-order subgraphs". The power of the method is first tested on the boxicity of some popular graphs that have resisted previous attempts: the boxicity of the Petersen graph is 3, and more generally, that of the Kneser graphs K(n, 2) is n − 2 for n ≥ 5, confirming a conjecture of Caoduro and Lichev.
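    To make the box-representation definition concrete, here is a small self-contained sketch (ours, not from the talk) that computes the intersection graph of a set of axis-parallel boxes and checks a 2-dimensional representation of the 4-cycle C4, witnessing boxicity(C4) ≤ 2; the particular boxes are illustrative.

```python
from itertools import combinations

def boxes_intersect(a, b):
    # Boxes are lists of (lo, hi) intervals, one per dimension;
    # two boxes intersect iff their intervals overlap in every dimension.
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def intersection_graph(boxes):
    # Edge set of the intersection graph of the given boxes.
    return {frozenset((i, j))
            for i, j in combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[i], boxes[j])}

# A 2-dimensional box representation of the 4-cycle C4 (0-1-2-3-0):
boxes = [
    [(0, 1), (0, 3)],  # vertex 0
    [(0, 3), (0, 1)],  # vertex 1
    [(2, 3), (0, 3)],  # vertex 2
    [(0, 3), (2, 3)],  # vertex 3
]
edges = intersection_graph(boxes)
assert edges == {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
```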

    Since every line graph is an induced subgraph of the complement of K(n, 2), the developed tools show that line graphs have only a polynomial number of edge-maximal interval-order subgraphs. This opens the way to polynomial-time algorithms for problems that are NP-hard in general: existence and optimization problems about interval-order subgraphs of line graphs, and interval completions of their complements.


  2. Sally Dong, University of Washington


    Fast Algorithms for Structured Linear Programs
    A fruitful line of research in recent years has been the study of interior point methods in combination with data structure design. Most prominently, this has led to an almost-linear time algorithm for max flow [CKL+, FOCS 2022]. In this talk, we will consider linear programs of the form min c^T x subject to Ax = b, x ≥ 0, where the constraint matrix A ∈ ℝ^{n×m} satisfies certain separability properties. We present a general framework for solving these separable linear programs using robust interior point methods and nested dissection. This leads to the following state-of-the-art algorithms: a nearly-linear time algorithm for min-cost flow on planar graphs; an Õ(n^1.5 k^2.5) time algorithm for k-multicommodity flow on planar graphs; an O(m t^2) time algorithm for solving LPs when given a width-t tree decomposition of the incidence graph of A; and an Õ(m t^0.5) time algorithm for max flow on graphs given a width-t tree decomposition.

    This is based on joint work with Yu Gao, Gramoz Goranci, Yin Tat Lee, Lawrence Li, Richard Peng, Sushant Sachdeva, and Guanghao Ye.


  3. Brian Irwin, UBC Vancouver


    Neural Network Accelerated Implicit Filtering: Integrating Neural Network Surrogates With Provably Convergent Derivative Free Optimization Methods
    This talk will introduce neural network accelerated implicit filtering (NNAIF), a novel family of methods for solving noisy derivative-free (i.e., black-box, zeroth-order) optimization problems. NNAIF intelligently combines the established literature on implicit filtering (IF) optimization methods with a neural network (NN) surrogate model of the objective function, resulting in accelerated derivative-free methods for unconstrained optimization problems. The NN surrogate model consists of a fixed number of parameters, which can be as few as approximately 13,000, that are updated as NNAIF progresses. NNAIF directly inherits the convergence properties of IF optimization methods, and thus is guaranteed to converge towards a critical point of the objective function under appropriate assumptions. Numerical experiments with 31 noisy problems from the CUTEst optimization benchmark set demonstrate the benefits and costs associated with NNAIF. These benefits include NNAIF's ability to minimize structured functions of several thousand variables much more rapidly than well-known alternatives, such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), finite-difference-based variants of gradient descent (GD) and BFGS, and its namesake IF.


  4. Namrata Kundu, UBC Okanagan


    Closest convex piecewise linear-quadratic (PLQ) function and PLQ function approximation with minimal pieces
    We consider several optimization problems to compute the closest convex piecewise linear-quadratic (PLQ) function to a given univariate PLQ function, where the distance is measured with the Euclidean norm. First, we assume that the breakpoints of the output function are fixed, obtaining a convex optimization problem. Next, we assume that the breakpoints are variable and form a superset of those in the input function, and we utilize a global optimization algorithm. Finally, we develop an algorithm to approximate a given PLQ function with another PLQ function containing a minimal number of breakpoints while maintaining a specified error tolerance. We then explore an application of this algorithm in the domain of structural design.
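    As a minimal illustration of the objects involved (a sketch of ours, not the authors' code), a univariate PLQ function can be stored as sorted breakpoints plus one quadratic coefficient triple per piece; the example function below is hypothetical.

```python
import bisect

def plq_eval(breakpoints, coeffs, x):
    # Evaluate a univariate piecewise linear-quadratic (PLQ) function.
    # breakpoints: sorted interior breakpoints t_1 < ... < t_k;
    # coeffs: k+1 triples (a, b, c) giving a*x^2 + b*x + c on each piece.
    i = bisect.bisect_right(breakpoints, x)
    a, b, c = coeffs[i]
    return a * x * x + b * x + c

# Hypothetical PLQ function, continuous at its breakpoints -1 and 1:
#   f(x) = -x - 0.5 for x < -1,  0.5*x^2 for -1 <= x <= 1,  x - 0.5 for x > 1.
bps = [-1.0, 1.0]
cs = [(0.0, -1.0, -0.5), (0.5, 0.0, 0.0), (0.0, 1.0, -0.5)]
assert plq_eval(bps, cs, -2.0) == 1.5
assert plq_eval(bps, cs, 0.5) == 0.125
assert plq_eval(bps, cs, 2.0) == 1.5
```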


  5. Zhaosong Lu, University of Minnesota


    First-order Methods for Convex Optimization and Monotone Inclusions under Local Lipschitz Conditions
    In this talk, I will discuss first-order methods for two problem classes: convex optimization with locally Lipschitz continuous gradient, and monotone inclusions with locally Lipschitz continuous point-valued operators. For convex optimization, we propose a first-order method to find an ε-KKT solution, while for monotone inclusions, a primal-dual extrapolation method is presented to find an ε-residual solution. These problem classes extend beyond the well-studied ones in the literature. The proposed methods are parameter-free, have verifiable termination criteria, and exhibit nearly optimal complexity. I will also share some preliminary numerical results to demonstrate their performance. This is joint work with Sanyou Mei (University of Minnesota).


  6. Feyza Sahinyazan, Simon Fraser University


    No Country for Young Refugees: Barriers and Opportunities for Inclusive Refugee Education Practices
    The recent refugee crises in Ukraine (2022) and Syria (2011) have created a massive influx of refugees, measured in millions, into neighboring countries; 40% of these refugees are children. The education systems of the refugee-hosting countries cannot integrate such large populations. In addition, language barriers and the stigma associated with refugees hamper inclusive and equitable education opportunities for these children. In this study, we focus on the following research question: How can a host country improve the inclusion of refugee children in its education system without overburdening its infrastructure? We develop an inclusive planning strategy aligned with existing capacity and resources: we formulate two adapted versions of the Maximum Covering Problem (MCP): the Cooperative Capacitated MCP with Heterogeneity Constraints (CCMCP-HC), to improve current schooling access in Turkey, and the Modular CCMCP-HC, to guide early planning in the case of a future crisis. Results of our computational analyses with real-life data illustrate that our proposed approach yields better schooling rates and capacity utilization than existing approaches. Our results emphasize the importance of having, in the initial phases of a crisis, a planning strategy that considers future integration possibilities. This study analyzes Turkey's experience and lessons learned to provide a road map for other ongoing and prospective refugee crises.


  7. Shambhavi Singh, UBC Okanagan


    On the Bredies-Chenchene-Lorenz-Naldi Algorithm
    Monotone inclusion problems occur in many areas of optimization and variational analysis. Splitting methods, which utilize resolvents or proximal mappings of the underlying operators, are often applied to solve these problems. In 2022, Bredies, Chenchene, Lorenz, and Naldi introduced an elegant new algorithmic framework that encompasses various well-known algorithms, including Douglas-Rachford and Chambolle-Pock. They obtained powerful weak and strong convergence results, where the latter type relies on additional strong monotonicity assumptions. In this paper, we complement the analysis of Bredies et al. by relating the projections of the fixed point sets of the underlying operators that generate the (reduced and original) preconditioned proximal point sequences. We also obtain strong convergence results in the case of linear relations. Various examples are provided to illustrate the applicability of our results.
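    As a hedged illustration of one algorithm covered by this framework, the classical Douglas-Rachford iteration can be sketched on a toy feasibility problem, finding a point in the intersection of two intervals; the toy instance below is ours, not from the paper.

```python
def proj(interval, x):
    # Euclidean projection of x onto a closed interval [lo, hi].
    lo, hi = interval
    return min(max(x, lo), hi)

# Douglas-Rachford for the feasibility problem: find a point in A ∩ B,
# with A = [0, 2] and B = [1.5, 5], so A ∩ B = [1.5, 2].
A, B = (0.0, 2.0), (1.5, 5.0)
x = 10.0  # arbitrary starting point
for _ in range(200):
    y = proj(A, x)          # projection (resolvent) step onto A
    z = proj(B, 2 * y - x)  # step onto B from the reflected point
    x = x + (z - y)         # governing sequence update
sol = proj(A, x)            # shadow sequence: lands in A ∩ B
assert 1.5 <= sol <= 2.0
```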


  8. Ziyuan Wang, UBC Okanagan


    Every proximal mapping is a resolvent of level proximal subdifferential
    We propose a level proximal subdifferential for a proper lower semicontinuous function. The level proximal subdifferential is a uniform refinement of the well-known proximal subdifferential, and it has the pleasant feature that its resolvent always coincides with the proximal mapping of the function. It turns out that the usual resolvent representation of the proximal mapping in terms of the Mordukhovich limiting subdifferential is only valid for hypoconvex functions. We also provide properties of the level proximal subdifferential and numerous examples to illustrate our results.
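    The resolvent identity behind this abstract, prox_f = (Id + ∂f)^{-1}, can be sanity-checked numerically in the simple convex case f = |·|, where the proximal mapping is the soft-thresholding operator; this generic sketch is ours and does not involve the level proximal construction itself.

```python
import numpy as np

def prox_abs(x, lam=1.0):
    # Closed-form proximal mapping of f(u) = lam * |u| (soft-thresholding):
    # argmin_u lam*|u| + (1/2)*(u - x)^2.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_numeric(f, x, grid):
    # Brute-force proximal mapping on a grid, for comparison.
    vals = f(grid) + 0.5 * (grid - x) ** 2
    return grid[np.argmin(vals)]

grid = np.linspace(-5.0, 5.0, 100001)   # step size 1e-4
for x in [-2.3, -0.4, 0.0, 0.7, 3.1]:
    assert abs(prox_numeric(np.abs, x, grid) - prox_abs(x)) < 1e-3
```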


  9. Shangzhi Zeng, University of Victoria


    Moreau Envelope Based Difference-of-weakly-Convex Reformulation and Algorithm for Bilevel Programs
    Recently, Ye et al. (Mathematical Programming, 2023) designed an algorithm for solving a specific class of bilevel programs, with an emphasis on applications related to hyperparameter selection, utilizing the difference-of-convex algorithm applied to a reformulation based on the value function approach. The proposed algorithm is particularly powerful when the lower-level problem is fully convex. In this work, to suit more applications in machine learning and statistics, we substantially weaken the underlying assumption from lower-level full convexity to weak convexity. Accordingly, we propose a new reformulation using the Moreau envelope of the lower-level problem and demonstrate that this reformulation is a difference-of-weakly-convex program. We then develop a sequentially convergent algorithm for solving this program. To evaluate the effectiveness of our approach, numerical experiments on the bilevel hyperparameter selection problem for the RBF kernel support vector machine model are conducted.
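    For readers unfamiliar with the Moreau envelope, here is a small numerical sketch (ours, not the authors' code) checking the textbook fact that the envelope of f(u) = |u| is the Huber function.

```python
import numpy as np

def moreau_env(f, x, gamma, grid):
    # Moreau envelope e_gamma f(x) = min_u f(u) + (1/(2*gamma)) * (u - x)^2,
    # evaluated by brute force on a grid.
    return np.min(f(grid) + (grid - x) ** 2 / (2.0 * gamma))

def huber(x, gamma):
    # Closed-form Moreau envelope of f(u) = |u|: the Huber function.
    return x ** 2 / (2.0 * gamma) if abs(x) <= gamma else abs(x) - gamma / 2.0

grid = np.linspace(-5.0, 5.0, 200001)   # step size 5e-5
gamma = 0.5
for x in [-2.0, -0.3, 0.0, 0.25, 1.7]:
    assert abs(moreau_env(np.abs, x, gamma, grid) - huber(x, gamma)) < 1e-4
```

The envelope is smooth even though |u| is not, which is the smoothing effect exploited in the reformulation above.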


  10. Yong Zhang, Huawei Technologies Canada


    Some Progress Towards Artificial Intelligence for Operations Research
    The rapid advancement of artificial intelligence (AI) techniques has opened up new opportunities to revolutionize various fields, including operations research (OR). In recent years, we have explored the integration of AI within the OR process (AI4OR) to enhance its effectiveness and efficiency across multiple stages.

    One area of focus has been automating the process of formulating complex OR problems. We have harnessed the power of large language models to automatically generate formulations for OR problems, streamlining the modeling process and reducing manual effort.

    Additionally, we have applied Graph Neural Networks (GNNs) in the model optimization stages. GNNs have shown promise in predicting good initial bases for Linear Programming (LP) problems and selecting optimal branching nodes for Mixed-Integer Linear Programming (MILP) problems. These applications of GNNs have improved the efficiency of the optimization process.

    In this talk, we will discuss our recent attempts and the lessons we have learned in both automating OR problem formulation and utilizing GNNs for OR model optimization. While we have made significant strides, we acknowledge that challenges remain and ongoing research is essential to further harness the potential of AI in operations research.