The school will cover the following topics:

Program at a glance

Time\Day       Monday         Tuesday        Wednesday      Thursday       Friday
               June 24        June 25        June 26        June 27        June 28

9.00-10.30     CS1            SVM1           MLCO1          CLU1           CLU3
10.30-11.00    Coffee Break   Coffee Break   Coffee Break   Coffee Break   Coffee Break
11.00-12.30    CS2            DL1            SVM3           DL3            ORTEC
12.30-13.45    Lunch Break    Lunch Break    Lunch Break    Lunch Break    -
13.45-15.15    CS3            IBM            -              MLCO2          -
15.15-15.30    Coffee Break   Coffee Break   -              Coffee Break   -
15.30-17.00    CS4            SVM2           -              CLU2           -
17.00-17.15    -              Pause          -              Pause          -
17.15-18.45    -              DL2            -              MLCO3          -

Legend: CS = Complementary Skills (E. A. Günther); DL = Deep Learning for AI (Y. Bengio); CLU = Clustering for Big Data (R. Kumar); MLCO = Machine Learning for Combinatorial Optimization (A. Lodi); SVM = Support Vector Machines (L. Palagi); IBM = IBM Industrial Demonstration (P. Bonami); ORTEC = ORTEC Industrial Demonstration (J. Gromicho and C. van Dijk).








Deep Learning for AI
by Yoshua Bengio
Abstract: There has been much progress in AI thanks to advances in deep learning in recent years, especially in areas such as computer vision, speech recognition, natural language processing, playing games, robotics, machine translation, etc. This tutorial aims at explaining some of the core concepts and motivations behind deep learning and representation learning. Deep learning builds on many of the ideas introduced decades earlier with the connectionist approach to machine learning, inspired by the brain. These essential early contributions include the notion of distributed representation and the back-propagation algorithm for training multi-layer neural networks, but also the architecture of recurrent neural networks and convolutional neural networks. In addition to the substantial increase in computing power and dataset sizes, many modern additions have contributed to the recent successes. These include techniques making it possible to train networks with more layers – which can generalize better (hence the name deep learning) – as well as a better theoretical understanding for the success of deep learning, both from an optimization point of view and from a generalization point of view. They also include advances in architectures which have moved neural nets from pattern recognition devices working on vectors to general-purpose differentiable modular machines which can handle arbitrary data structures, mostly thanks to the use of attention mechanisms. Two other areas of major progress have been in unsupervised learning, in particular the ability of neural networks to stochastically generate high-dimensional samples (like images) from a possibly conditional distribution, as well as the combination of reinforcement learning and deep learning techniques, not just for traditional applications of reinforcement learning like games, but also as a way to handle learning in systems involving non-differentiable or black-box components. The tutorial will end with a discussion of some major open problems for AI which are at the forefront of research in deep learning.
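For readers who want to connect these concepts to working code, the following is a minimal, self-contained sketch (not taken from the tutorial) of back-propagation and stochastic gradient descent: a small two-layer network learning the XOR function in PyTorch. The architecture, learning rate, and number of steps are arbitrary illustrative choices.

```python
# Minimal illustrative sketch: a two-layer network trained with
# back-propagation and stochastic gradient descent on the XOR problem.
# Architecture and hyperparameters are arbitrary choices for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(
    nn.Linear(2, 8),   # distributed hidden representation
    nn.ReLU(),         # piecewise-linear activation
    nn.Linear(8, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.5)

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass
    loss.backward()               # back-propagation computes the gradients
    opt.step()                    # (stochastic) gradient descent update

print(torch.sigmoid(model(X)).detach().round())  # typically recovers [[0], [1], [1], [0]]
```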

Tutorial outline:
  • Motivating introduction to deep learning
  • Why deep learning works so well:
    • Distributed representation & depth: compositional priors to defeat the curse of dimensionality
    • High-dimensional non-convex optimization may be easier than initially thought
    • Backprop; efficiency and regularization effects of stochastic gradient descent
  • Convolutional neural networks & parameter sharing
  • Recurrent neural networks & long-term dependencies
  • What’s new with neural nets since the 90s
    • Depth, piecewise-linear activation functions & skip connections
    • Attention mechanisms, memory and operating on data structures
    • Autoencoders and deep generative models
    • Advances in transfer learning and meta-learning
  • Architectures & applications to natural language processing
  • Architectures & applications involving images
  • Deep reinforcement learning and playing games
  • What next?
    • Challenges of unsupervised representation learning
    • Challenges of agent learning, deep reinforcement learning
    • From perception to higher cognition, from competence to comprehension






Clustering for Big Data
by Ravi Kumar
Abstract: In this course we will explore algorithmic questions in clustering, focusing on both graphs and metric spaces. For metric spaces, we will study k-means, k-median, correlation clustering, and others. For graphs, we will study community finding, densest subgraphs, and related problems. In both cases, the emphasis will be on scalability.
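As a point of reference for the metric-space part of the course, here is a minimal sketch of the classical Lloyd's heuristic for k-means in plain NumPy; the course itself focuses on scalable variants, and this toy version (with illustrative parameters and data) is only meant to fix ideas.

```python
# Minimal sketch of Lloyd's heuristic for k-means (illustration only).
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center moves to the mean of its cluster
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# toy usage: two well-separated blobs in the plane
pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(pts, k=2)
```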






Machine Learning for Combinatorial Optimization: a Methodological Tour d’Horizon
by Andrea Lodi
Abstract: This course surveys recent attempts, from both the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. Given the hard nature of these problems, state-of-the-art methodologies involve algorithmic decisions that either require too much computing time or are not mathematically well defined. Machine learning therefore looks like a promising candidate to deal with those decisions effectively. We advocate pushing the integration of machine learning and combinatorial optimization further and detail a methodology for doing so. A main point of the course is to see generic optimization problems as data points and to ask which distribution of problems is relevant for learning on a given task.
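To make the "problems as data points" viewpoint concrete, the following hedged sketch (not part of the course material) samples random knapsack instances from an illustrative distribution, runs two simple greedy heuristics on each, and trains a classifier to predict from instance features which heuristic to use. The instance distribution, the features, and the heuristics are all assumptions made purely for illustration.

```python
# Hedged sketch: learning, over a distribution of random knapsack instances,
# which of two greedy heuristics to run on a given instance.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def greedy_value(values, weights, capacity, key):
    """Value collected by greedily packing items in decreasing `key` order."""
    total, used = 0.0, 0.0
    for i in np.argsort(-key):
        if used + weights[i] <= capacity:
            used += weights[i]
            total += values[i]
    return total

def sample_instance():
    """One random knapsack instance from an illustrative distribution."""
    n = int(rng.integers(20, 60))
    values, weights = rng.random(n), rng.random(n) + 0.05
    capacity = rng.uniform(0.1, 0.7) * weights.sum()
    return values, weights, capacity

X, y = [], []
for _ in range(500):
    v, w, c = sample_instance()
    by_value = greedy_value(v, w, c, key=v)      # heuristic A: highest value first
    by_ratio = greedy_value(v, w, c, key=v / w)  # heuristic B: best value/weight ratio first
    X.append([len(v), c / w.sum(), float(np.corrcoef(v, w)[0, 1])])  # instance features
    y.append(int(by_ratio >= by_value))          # label: which heuristic won
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
# clf now maps instance features to a recommended heuristic for unseen instances.
```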





Support Vector Machines
by Laura Palagi
Abstract: Support Vector Machines (SVMs) are one of the most important classes of machine learning models. We will focus on supervised binary classification problems. As with most machine learning problems, SVM training is closely tied to optimization. We will present the most important and most widely used models for SVM training and analyse the properties of the corresponding optimization problems. In particular, linear and nonlinear SVM models, in both unconstrained and constrained nonlinear formulations, will be addressed. We highlight the properties of these models that can be fruitfully exploited in designing efficient algorithms.
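As a small companion to the abstract, here is a minimal sketch of the unconstrained hinge-loss formulation of a linear SVM, minimizing 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b)) over (w, b), solved by plain subgradient descent on a toy data set. The step size, number of epochs, and data are illustrative choices; the course discusses better-suited algorithms as well as the constrained and dual formulations.

```python
# Minimal sketch: unconstrained hinge-loss linear SVM trained by
# batch subgradient descent (illustration only).
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                     # points violating the margin
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy usage: two linearly separable classes, labels in {-1, +1}
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```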






IBM Industrial Demonstration
Mixed Integer Nonlinear Optimization in CPLEX
by Pierre Bonami
Abstract: CPLEX is one of the leading industrial optimization solvers. While originally a linear programming solver, it has been extended over time to solve more and more general optimization problems. In particular, it has algorithms for solving diverse types of Mixed Integer Nonlinear Optimization problems with quadratic terms: convex and non-convex mixed-integer quadratic problems and mixed-integer second-order cone problems.
In this talk, we will present the main algorithmic techniques used by CPLEX to solve these types of problems, along with extensive computational results that highlight some of the algorithmic choices.
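For orientation, the sketch below shows what a tiny convex mixed-integer quadratic model can look like in CPLEX's docplex Python API; it is not taken from the talk, it assumes a local CPLEX installation, and the model data are arbitrary illustrative numbers.

```python
# Hedged sketch: a tiny convex mixed-integer quadratic problem (MIQP)
# written with the docplex modeling layer; requires CPLEX to solve.
from docplex.mp.model import Model

m = Model(name="tiny_miqp")
x = m.integer_var(lb=0, ub=10, name="x")      # integer decision
y = m.continuous_var(lb=0, ub=10, name="y")   # continuous decision

m.add_constraint(x + y >= 3)
# convex quadratic objective plus an integer variable -> a small MIQP
m.minimize(x * x - 5 * x + y * y - 2 * y)

sol = m.solve()
if sol:
    print(x.solution_value, y.solution_value, m.objective_value)
```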







ORTEC Industrial Demonstration
Profit Optimization: a case study in Express Networks
by Joaquim Gromicho and Casper van Dijk
Abstract: Modelling profit should take into account the market share a company has for each of its products. In turn, market shares depend on the price and the service level that the company offers its customers compared with the offers of competitors in the market. Traditionally, cost minimization by operations and revenue maximization by sales are considered separately. For profit maximization, however, they are two sides of the same equation and depend on each other. Such dependencies are inherently non-linear, which makes profit optimization a challenging mathematical optimization endeavour. We report on a case study on profit optimization in express networks in which we tried different methodologies, including exact solution of nonlinear models as well as linear approximations and heuristics.
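To illustrate why such dependencies make the problem non-linear, here is a hedged toy calculation (not the case study's model) in which market share responds to the price difference with a single competitor through a logistic curve; all numbers and the share model are assumptions for illustration only.

```python
# Toy illustration: profit as a function of price when market share
# follows a simple logistic response to the price gap with one competitor.
import numpy as np

market_size  = 1000.0   # shipments per day in the whole market (illustrative)
unit_cost    = 4.0      # operational cost per shipment (illustrative)
competitor_p = 6.0      # competitor's price (illustrative)
sensitivity  = 1.5      # customers' price sensitivity (illustrative)

def market_share(price):
    # the cheaper we are relative to the competitor, the larger our share
    return 1.0 / (1.0 + np.exp(sensitivity * (price - competitor_p)))

def profit(price):
    demand = market_size * market_share(price)
    return (price - unit_cost) * demand   # revenue minus operational cost

prices = np.linspace(4.0, 10.0, 601)
best_price = prices[np.argmax(profit(prices))]
# profit(price) is non-linear in the price, which is why the case study
# needs nonlinear models, linear approximations, or heuristics.
```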







Complementary Skills (For MINOA members only):
Gender and diversity in team work
by Elisabeth A. Günther

Abstract: This workshop addresses what makes a great team and the potential benefits of working in an inclusive one. To this end, participants will learn about implicit associations and how these might, sometimes unwittingly, affect them or their colleagues. Participants' personal experiences will be related to empirical evidence.
The aims of the workshop are:
  • to gain insights into how gender and diversity might impact collaborations and career trajectories;
  • to strengthen communication skills and collaboration;
  • to initiate a reflexive process towards inclusive practices.
Keywords: Gender Aspects, Diversity, Communication competences and Intersectionality in Research and Daily Life
  • Lecture CS1: Gender Diversity & Intersectionality in STEM
  • Lecture CS2: Implicit Associations in STEM
  • Lecture CS3: How do gender and intersectionality influence collaborations?
  • Lecture CS4: Inclusion – what’s the benefit and how to achieve it?