

Plenary Lectures
Abstract: We demonstrate by two examples the crucial role of robust optimization (RO) in designing reliable engineering structures. We then briefly summarize cases (models) where this can be achieved by efficiently solving a tractable optimization problem. Next, results of RO for dynamic (multistage) uncertain problems are presented, for which tractable robust counterparts are achieved exactly or approximately. We further show how RO can come to the rescue of otherwise computationally hard chance constraint problems, even when only partial information is available on the distribution function of the underlying random parameters (“distributional ambiguity”). Finally, we present a classical linear estimation problem (with a signal processing example in mind). While a least squares approach produces a poor estimator of the signal, we show that using RO, an accurate estimator of the signal is obtained (in fact, analytically!). Brief bio: Aharon Ben-Tal is a Professor of Operations Research and Management at the Technion – Israel Institute of Technology. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973. He has been a Visiting Professor at the University of Michigan, the University of Copenhagen, Delft University of Technology, MIT, CWI Amsterdam, Columbia and NYU. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, as well as convex and nonsmooth analysis. In recent years the focus of his research has been on optimization problems affected by uncertainty. In the last 20 years he has devoted much effort to engineering applications of optimization methodology and computational schemes. Some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 135 papers in professional journals and coauthored three books. Prof. Ben-Tal was Dean of the Faculty of Industrial Engineering and Management at the Technion (1989-1992 and 2011-2014). He has served on the editorial boards of all major OR/optimization journals and has given numerous plenary and keynote lectures at international conferences. In 2007 Professor Ben-Tal was awarded the EURO Gold Medal, the highest distinction of Operations Research within Europe. In 2009 he was named a Fellow of INFORMS, and in 2015 a Fellow of SIAM. In 2016 he was awarded the INFORMS Khachiyan Prize for lifetime achievement in the area of optimization. In 2017, the Operations Research Society of Israel (ORSIS) awarded him its Lifetime Achievement Prize.
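As a minimal numerical sketch of the estimation point above (not the lecture's actual signal processing example): under norm-bounded uncertainty in the design matrix, the robust counterpart of least squares is known to reduce to a Tikhonov-type regularized problem with an analytic solution. The toy matrix, noise level and uncertainty radius `rho` below are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nearly rank-deficient design matrix: ordinary least squares
# amplifies even tiny noise along the small singular direction.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([1.0, 1.0])
b = A @ x_true + 1e-3 * rng.standard_normal(3)

# Ordinary least squares estimator.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Robust counterpart under norm-bounded matrix uncertainty:
# a ridge-type estimator with a closed-form solution
# (rho is an assumed uncertainty radius, not a tuned value).
rho = 0.1
x_ro = np.linalg.solve(A.T @ A + rho * np.eye(2), A.T @ b)

err_ls = np.linalg.norm(x_ls - x_true)   # large: noise blown up
err_ro = np.linalg.norm(x_ro - x_true)   # small: estimator stabilized
```

On this toy instance the robust estimator lands substantially closer to the true signal than plain least squares, mirroring the analytic result the abstract alludes to.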
Abstract: Optimization problems with a (nonconvex) objective subject to (nonconvex) constraints and additional linear constraints are ubiquitous in applications; special cases include, for instance, the QCQP. The potential weakness of Lagrangian dual bounds is well known in nonconvex optimization. Classical Shor lifting shows that Lagrangian relaxation for QCQPs leads to an SDP. We will characterize the semi-Lagrangian dual of problems of the above type as more general conic optimization problems based upon copositivity. It turns out that even the weakest (lowest-degree) approximation of the latter improves upon the former. Moreover, some global optimality conditions can also be formulated easily in terms of copositivity. The approach also gives rise to an apparently new approximation hierarchy which avoids memory problems with large Hessians, because it mainly focuses on linear formulations and only uses SDPs of the size of the original problem, in sharp contrast to higher-order sum-of-squares or moment approximation hierarchies, which employ matrices of an order that increases with a power of the problem dimension. As a special case, we will consider the so-called extended CDT problem (nonconvex quadratic optimization over the intersection of two ellipsoids with a polyhedron, which is APX-hard). Both the global optimality criterion and the approximation hierarchy leading to tighter dual bounds will be specified, along with a condition guaranteeing their exactness. This talk is partly based upon research results obtained jointly with M. Overton, G. Li and V. Jeyakumar.
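For orientation, the Shor lifting referred to above can be written out explicitly (this is standard material, not specific to the talk). For a QCQP

\min_{x\in\mathbb{R}^n}\ x^\top Q_0 x + 2b_0^\top x \quad \text{s.t.}\quad x^\top Q_i x + 2b_i^\top x + c_i \le 0,\ \ i=1,\dots,m,

one introduces the lifted matrix X = xx^\top, so that x^\top Q x = \langle Q, X\rangle, and relaxes the rank-one condition to X \succeq xx^\top, which by the Schur complement is the linear matrix inequality below. The resulting SDP relaxation is

\min_{x,X}\ \langle Q_0, X\rangle + 2b_0^\top x \quad \text{s.t.}\quad \langle Q_i, X\rangle + 2b_i^\top x + c_i \le 0,\ \ i=1,\dots,m, \qquad \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0,

and its conic dual is precisely the Lagrangian dual of the QCQP, so under standard constraint qualifications the two bounds coincide.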
Abstract: Let G=(V,E) be a simple graph with vertex set V=\{1,\ldots,n\} and edge set E\subseteq {V\choose 2} consisting of m unordered pairs of adjacent vertices. C\subseteq V is called a clique if (i,j)\in E for any pair of distinct vertices i, j \in C. The maximum clique problem is to find a clique with the largest number of vertices in a given graph, and the clique number \omega(G) of G is the cardinality of a maximum clique in G. In 1965, Motzkin and Straus discovered a nontrivial connection between the clique number of a given graph G and the global optimal value of a certain quadratic program associated with G: 1-\frac{1}{\omega(G)}=\max\limits_{x\in\Delta^n} x^TA_Gx, where A_G is the adjacency matrix of G, \Delta^n=\{x\in\mathbb{R}_+^n: e_n^Tx=1\} is the standard simplex in \mathbb{R}^n, and e_n is the vector of n ones. In particular, setting x=\frac{1}{n}e_n in the Motzkin-Straus formulation yields the celebrated Turán's theorem. The above quadratic formulation inspired some interesting developments in the field of global optimization, aiming to utilize the properties of this (continuous) model in order to solve the (discrete) maximum clique problem. In this work, the Motzkin-Straus formulation is extended to two clique relaxation models, the s-defective clique and the s-plex, defined as follows. Given a simple graph G=(V,E), let S\subseteq V and let s be a fixed positive integer. Let G[S] denote the subgraph induced by S in G. A subset of vertices S is
These and other clique relaxations were introduced in the literature in order to overcome practical limitations of using cliques to model tightly knit clusters in networks. The resulting formulations are used to generalize Turán's theorem to the considered clique relaxations.
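The Motzkin-Straus identity above can be checked numerically on a small graph. The brute-force clique search and the example graph below are purely illustrative; uniform weights on a maximum clique attain the optimum of the quadratic program.

```python
import itertools

def clique_number(n, edges):
    """Brute-force clique number of a small graph on vertices 0..n-1."""
    E = {frozenset(e) for e in edges}
    omega = 1
    for k in range(2, n + 1):
        for S in itertools.combinations(range(n), k):
            if all(frozenset(p) in E for p in itertools.combinations(S, 2)):
                omega = k
    return omega

def quadratic_form(edges, x):
    """x^T A_G x for the adjacency matrix A_G (each edge counted twice)."""
    return sum(2 * x[u] * x[v] for u, v in edges)

# 5-cycle 0-1-2-3-4-0 with chord (0,2): clique number 3 (triangle {0,1,2})
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
omega = clique_number(5, edges)

# uniform weights 1/omega on a maximum clique, zero elsewhere
x = [1/3, 1/3, 1/3, 0.0, 0.0]
value = quadratic_form(edges, x)   # equals 1 - 1/omega
```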
Abstract: Nonlinear symmetric cone programs (NSCPs) constitute a general and important class of optimization problems that contains as special cases nonlinear semidefinite programs (NSDPs), nonlinear second-order cone programs (NSOCPs) and traditional nonlinear programs (NLPs). We consider reformulating an NSCP as an ordinary NLP by means of squared slack variables. It is clear that the reformulated NLP is equivalent to the original NSCP in terms of not only global but also local optimality. This, however, is not the case with regard to optimality conditions. We discuss the first-order, i.e., Karush-Kuhn-Tucker (KKT), conditions and, in particular, the second-order necessary conditions as well as sufficient conditions for the NSCP and the reformulated NLP. Working with the reformulated NLP enables us to obtain second-order optimality conditions for NSCPs in an easy manner, thereby bypassing a number of difficulties associated with the usual variational-analytic approach. We also mention the possibility of importing convergence results from nonlinear programming, which we illustrate by means of a simple augmented Lagrangian method for NSCPs. Brief bio: Professor Masao Fukushima obtained all his academic degrees in Engineering from Kyoto University. Currently he is a full professor at the Faculty of Science and Engineering, Nanzan University, and Professor Emeritus of Kyoto University. His research interests include nonlinear optimization, variational inequality and complementarity problems, parallel optimization, nonsmooth optimization, global optimization, game theory, and applications in transportation, finance, data mining, etc. He has published over 200 papers in peer-reviewed journals and was selected as an ISI Highly Cited Researcher in Mathematics in 2010. He has been awarded a number of academic prizes, including the Kondo Prize from the Operations Research Society of Japan and the Paul Y. Tseng Memorial Lectureship from the Mathematical Optimization Society.
Professor Fukushima is one of the founders of the Pacific Optimization Research Activity Group and served as Chairman of its Working Committee. He is also the founder and Co-Editor of the Pacific Journal of Optimization. In addition, he currently serves on the editorial boards of 15 international journals in optimization and operations research, including Computational Optimization and Applications, Optimization Methods and Software, and the Journal of Optimization Theory and Applications.
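In the plain NLP setting, the squared-slack device discussed in the abstract converts an inequality constraint g(x) <= 0 into the equality g(x) + s^2 = 0 over the enlarged variable (x, s). A minimal sketch (the toy problem, starting point and use of SciPy's SLSQP solver are my own choices, not the lecture's):

```python
from scipy.optimize import minimize

# Toy NLP:  min (x - 2)^2  s.t.  x - 1 <= 0,  with solution x* = 1.
# Squared-slack reformulation: variables z = (x, s),
# equality constraint x - 1 + s^2 = 0.
def objective(z):
    return (z[0] - 2.0) ** 2

eq_con = {"type": "eq", "fun": lambda z: z[0] - 1.0 + z[1] ** 2}

res = minimize(objective, x0=[0.0, 1.0], method="SLSQP",
               constraints=[eq_con])
x_opt, s_opt = res.x   # x converges to 1, the slack s to 0
```

Note that at the solution both the slack and the constraint gradient with respect to s vanish; this degeneracy is exactly why the KKT and second-order analysis of the reformulation requires the care the abstract describes.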
Abstract: Brief bio: Anna Nagurney is the John F. Smith Memorial Professor at the Isenberg School of Management at the University of Massachusetts Amherst and the Director of the Virtual Center for Supernetworks, which she founded in 2001. She holds ScB, AB, ScM and PhD degrees from Brown University in Providence, RI. She is the author of 14 books, more than 195 refereed journal articles, and over 50 book chapters. She presently serves on the editorial boards of a dozen journals and two book series and is the editor of another book series. Professor Nagurney has been a Fulbright scholar twice (in Austria and Italy), was a Visiting Professor at the School of Business, Economics and Law at the University of Gothenburg in Sweden, and was a Distinguished Guest Visiting Professor at the Royal Institute of Technology (KTH) in Stockholm. She was a Visiting Fellow at All Souls College at Oxford University during the 2016 Trinity Term and a Summer Fellow at the Radcliffe Institute for Advanced Study at Harvard in 2017 and 2018. Anna has held visiting appointments at MIT and at Brown University and was a Science Fellow at the Radcliffe Institute for Advanced Study at Harvard University in 2005-2006. She has been recognized for her research on networks with the Kempe Prize from the University of Umeå, the Faculty Award for Women from the US National Science Foundation, and the University Medal from the University of Catania in Italy, and was elected a Fellow of the RSAI (Regional Science Association International) as well as an INFORMS (Institute for Operations Research and the Management Sciences) Fellow, among other awards. She has also been recognized with several awards for her mentorship of students and her female leadership, for example with the WORMS Award. Her research has garnered support from the AT&T Foundation, the Rockefeller Foundation through its Bellagio Center programs, the Institute for International Education, and the National Science Foundation.
She has given plenary/keynote talks and tutorials on five continents. She is an active member of professional societies, including INFORMS, POMS, and RSAI, and was the Omega Rho Distinguished Lecturer in 2018. Anna's research focuses on network systems, ranging from transportation and logistics, including supply chains, to financial, economic, and social networks and their integration, along with the Internet. She studies and models complex behaviors on networks with the goal of providing frameworks and tools for understanding their structure, performance, and resilience, and has also contributed to the understanding of the Braess paradox in transportation networks and the Internet. She has also been researching sustainability and quality issues, with applications ranging from pharmaceutical and blood supply chains to perishable food products, fast fashion, and humanitarian logistics. She has advanced methodological tools used in game theory, network theory, equilibrium analysis, and dynamical systems. She was a Co-PI on a multi-university NSF grant with UMass Amherst as the lead, Network Innovation Through Choice, which was part of the Future Internet Architecture (FIA) program, and is presently a Co-PI on an NSF EAGER grant.
Abstract: There is, however, another important characteristic set of nodes, arising from network controllability theory, which has so far remained beyond the attention of researchers: a minimum set of driver nodes that ensures controllability of the network. In this talk we are going to discuss a spectrum of problems in computational neuroscience whose solution needs tools from data science, optimization and control. Brief bio: Panos Pardalos is a Distinguished Professor and the Paul and Heidi Brown Preeminent Professor in the Department of Industrial and Systems Engineering at the University of Florida, and a world-renowned leader in global optimization, mathematical modeling, and data sciences. He is a Fellow of AAAS, AIMBE, and INFORMS and was awarded the 2013 Constantin Carathéodory Prize of the International Society of Global Optimization. In addition, Dr. Pardalos was awarded the 2013 EURO Gold Medal, bestowed by the Association of European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.” Dr. Pardalos has also been awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher's entire achievements to date: fundamental discoveries, new theories, and insights that have had a significant impact on their discipline. Dr. Pardalos is also a Member of the New York Academy of Sciences, the Lithuanian Academy of Sciences, the Royal Academy of Spain, and the National Academy of Sciences of Ukraine. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the Journal of Global Optimization and Computational Management Science. He has published over 500 papers, edited/authored over 200 books and organized over 80 conferences. He has a Google h-index of 95 and has graduated 63 PhD students so far.
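The driver-node problem mentioned at the start of the abstract has a classical combinatorial characterization for structural controllability of directed networks: the minimum driver nodes are those left unmatched by a maximum matching of a bipartite out-copy/in-copy representation of the graph. A self-contained sketch using Kuhn's augmenting-path matching algorithm; the example graphs are purely illustrative.

```python
def driver_nodes(nodes, edges):
    """Minimum driver-node set of a directed network: the nodes left
    unmatched by a maximum matching of the bipartite graph joining
    each out-copy u to the in-copies of its successors."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)

    match_in = {}   # in-copy v -> out-copy u matched to it

    def augment(u, seen):
        # Kuhn's algorithm: search for an augmenting path from out-copy u
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_in or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    for u in nodes:
        augment(u, set())

    unmatched = {v for v in nodes if v not in match_in}
    return unmatched or {nodes[0]}   # perfectly matched: one driver suffices

# Path 1 -> 2 -> 3 with a branch 2 -> 4: two driver nodes are needed.
drivers = driver_nodes([1, 2, 3, 4], [(1, 2), (2, 3), (2, 4)])
```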
Abstract: The area of continuous stochastic global optimization can be roughly divided into (a) Bayesian optimization (BO), where a stochastic model for the objective function is used, and (b) global random search (GRS), where elements of Monte Carlo are involved. Despite much recent progress in BO, mostly related to applications in the design of computer experiments, there are still serious challenges, especially related to the adequacy of the models; this and other issues with BO will be commented upon. Concerning GRS, I will concentrate on the following aspects: (a) the meaning of the concept of convergence of GRS algorithms; (b) the role of statistical procedures in GRS, especially in devising stopping rules; (c) GRS and random multistart in high-dimensional optimization. For instance, I will argue that, even though there may be issues with guaranteeing the result, some versions of GRS may be our best option if we decide to attempt to solve a high-dimensional global optimization problem. Brief bio: Born in 1953. Graduated from the Faculty of Mathematics, St. Petersburg State University, in 1976. PhD in applied probability in 1981. Professor of Statistics at St. Petersburg State University during 1989-1997. Since 1997: Professor, Chair in Statistics at Cardiff University. Author of 4 monographs on stochastic global optimization and another 6 on time series analysis, optimal experimental design and dynamical systems; editor/co-editor of 9 books on various topics; author of about 200 research papers in refereed journals; organizer of several major conferences on applied statistics, time series analysis, experimental design and global optimization. Member of the editorial boards of the Journal of Global Optimization and Statistics and Its Interface. For more than thirty years has been involved in applied projects on marketing research and consumer behaviour; the list of companies includes Procter & Gamble, ACNielsen and GlaxoSmithKline. Winner of the Constantin Carathéodory Prize in 2019.
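As a concrete reference point for the GRS discussion, the simplest member of the family is pure random search: sample uniformly over the feasible box and keep the record (best-so-far) point. The test function and sample budget below are illustrative choices, not the lecture's.

```python
import math
import random

def pure_random_search(f, bounds, n_samples, seed=0):
    """Global random search in its simplest form: uniform sampling
    over a box, keeping the best point seen so far."""
    rng = random.Random(seed)
    best_x, best_f = None, math.inf
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def rastrigin(x):
    """Classical multimodal test function; global minimum 0 at the origin."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

x_best, f_best = pure_random_search(rastrigin, [(-5.12, 5.12)] * 2, 20000)
```

Such a scheme converges to the global minimum with probability one as the budget grows, but, as the abstract stresses, the practically important questions are the convergence rate, the stopping rule, and behaviour in high dimension, where uniform sampling degrades exponentially.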
