Giorgio Gnecco (IMT Lucca) | 13.1.2020 16:45-17:45 | HS7 OMP1

In this talk, a variation of the classical linear regression model is presented in which one can additionally control the conditional variance of the output given the input by varying the computational time dedicated to supervising each training example, subject to an upper bound on the total available computational time. Using this model and a large-sample approximation, the generalization error is minimized as a function of the computational time per example. Two main cases are considered in the analysis: in the first, the precision of the supervision increases less than proportionally with the computational time per example; in the second, it increases more than proportionally. The analysis highlights, from a theoretical point of view, that increasing the number of data points is not always beneficial when it is feasible to collect a smaller number of more reliable data points. Numerical results validating the theory are presented, together with several extensions of the proposed framework to other optimization problems modeling the trade-off between sample size and precision of supervision.
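The trade-off described above can be illustrated with a minimal sketch. The model below is an assumption for illustration only, not the talk's exact derivation: with a fixed total supervision budget T split over n examples (time per example t = T/n), the conditional output variance is taken as sigma2(t) = t**(-alpha), and the large-sample estimation error of linear regression with d features is approximated by sigma2 * d / n, which gives an error proportional to n**(alpha - 1).

```python
import numpy as np

def generalization_error(n, alpha, T=1.0, d=1):
    """Hypothetical large-sample generalization error under a fixed
    total supervision budget T split evenly over n examples."""
    t = T / n              # computational time per example
    sigma2 = t ** (-alpha) # assumed noise variance of each supervision
    return sigma2 * d / n  # large-sample error approximation

ns = np.array([10, 100, 1000])
# alpha < 1: precision grows less than proportionally with time,
# so more (noisier) data reduces the error
errs_sub = generalization_error(ns, alpha=0.5)
# alpha > 1: precision grows more than proportionally with time,
# so fewer, more reliable data are preferable
errs_super = generalization_error(ns, alpha=2.0)
print(np.all(np.diff(errs_sub) < 0), np.all(np.diff(errs_super) > 0))
# → True True
```

Under this toy model the error scales as n**(alpha - 1), so the sign of alpha - 1 alone decides whether more data helps, matching the dichotomy in the abstract.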

Kurt Anstreicher (Univ. Iowa) | 27.1.2020 16:45-17:45 | HS7 OMP1

The maximum-entropy sampling problem (MESP) is a difficult nonlinear integer programming problem that arises in spatial statistics, for example in the design of weather monitoring networks. We describe a new bound for the MESP based on maximizing a function of the form ldet M(x) over linear constraints, where M(x) is an n-by-n matrix function that is linear in the n-vector x. These bounds can be computed very efficiently and are superior to all previously known bounds for MESP on most benchmark test problems. A branch-and-bound algorithm using the new bounds solves challenging instances of MESP to optimality for the first time.
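For readers unfamiliar with MESP, the sketch below sets up a small instance: choose a size-s principal submatrix of a covariance matrix C maximizing its log-determinant. It checks a brute-force optimum against the classical eigenvalue upper bound (the product of the s largest eigenvalues dominates every s-by-s principal minor, by Cauchy interlacing); this is not the new ldet M(x) bound from the talk, just the standard baseline it improves upon.

```python
import itertools
import numpy as np

def mesp_brute_force(C, s):
    """Exact MESP optimum by enumerating all size-s principal submatrices."""
    n = C.shape[0]
    best = -np.inf
    for S in itertools.combinations(range(n), s):
        sign, logdet = np.linalg.slogdet(C[np.ix_(S, S)])
        if sign > 0:
            best = max(best, logdet)
    return best

def eigenvalue_bound(C, s):
    """Classical spectral upper bound: sum of logs of the s largest eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]  # descending
    return float(np.sum(np.log(lam[:s])))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
C = A @ A.T + 6 * np.eye(6)  # well-conditioned PSD covariance (illustrative)
s = 3
opt = mesp_brute_force(C, s)
ub = eigenvalue_bound(C, s)
print(ub >= opt - 1e-9)  # the spectral bound dominates the exact optimum
# → True
```

Brute force is exponential in n, which is why bounds of this kind, embedded in branch-and-bound, are the practical route to optimality for nontrivial instances.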