
2012 | Book

Operations Research Proceedings 2011

Selected Papers of the International Conference on Operations Research (OR 2011), August 30 - September 2, 2011, Zurich, Switzerland

Editors: Diethard Klatte, Hans-Jakob Lüthi, Karl Schmedders

Publisher: Springer Berlin Heidelberg

Book Series: Operations Research Proceedings


About this book

This book contains a selection of refereed papers presented at the “International Conference on Operations Research (OR 2011)”, which took place at the University of Zurich from August 30 to September 2, 2011. The conference was jointly organized by the German-speaking OR societies from Austria (ÖGOR), Germany (GOR) and Switzerland (SVOR) under the patronage of SVOR. More than 840 scientists and students from over 50 countries attended OR 2011 and presented 620 papers in 16 parallel topical streams, as well as special award sessions. The conference was designed according to the understanding of Operations Research as an interdisciplinary science focusing on modeling complex socio-technical systems to gain insight into their behavior under interventions by decision makers. Dealing with “organized complexity” lies at the core of OR, and designing useful support systems to master the challenge of system management in complex environments is the ultimate goal of our professional societies. To this end, algorithmic techniques and system modeling are two fundamental competences, which are also well balanced in these proceedings.

Table of Contents

Frontmatter

Continuous Optimization and Control

Frontmatter
On the controlling of liquid metal solidification in foundry practice

The optimal control problem of metal solidification in casting is considered. The process is modeled by a three-dimensional, two-phase initial-boundary value problem of the Stefan type. The optimal control problem was solved numerically using the gradient method. The gradient of the cost function was found with the help of the conjugate problem. The discrete conjugate problem was posed with the help of the Fast Automatic Differentiation technique.

Andrey Albu, Vladimir Zubov
Optimal dividend and risk control in diffusion models with linear costs

We consider the optimization problem of dividends and risk exposures of a firm in the diffusion model with linear costs. The variational inequality associated with this problem is given by the nonlinear form of elliptic type. Using the viscosity solutions technique, we solve the corresponding penalty equation and show the existence of a classical solution to the variational inequality. The optimal policy of dividend payment and risk exposure is shown to exist.

Hiroaki Morimoto
An optimal control problem for a tri-trophic chain system with diffusion

An optimal control problem is studied for a nonlinear reaction-diffusion system that describes the behavior of a trophic chain composed of a predator, a pest and a plant species. A pesticide is introduced into the ecosystem and is regarded as a control variable. The purpose is to minimize the pest density and to maximize the plant density.

N. C. Apreutesei
Necessary Optimality Conditions for Improper Infinite-Horizon Control Problems

The paper revisits the issue of necessary optimality conditions for infinite-horizon optimal control problems. It is proved that the normal form maximum principle holds with an explicitly specified adjoint variable if an appropriate relation between the discount rate, the growth rate of the solution and the growth rate of the objective function is satisfied. The main novelty is that the result applies to general non-stationary systems and the optimal objective value need not be finite (in which case the concept of overtaking optimality is employed).

Sergey M. Aseev, Vladimir M. Veliov
Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design

In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable(s) (potential bar(s)) to be updated in the next iteration. Both serial methods enjoy an O(n/k) iteration complexity guarantee, where n is the number of potential bars and k the iteration counter. Our parallel implementations, written in CUDA and running on a graphical processing unit (GPU), are capable of speedups of up to two orders of magnitude when compared to their serial counterparts. Numerical experiments were performed on instances with up to 30 million potential bars.

Peter Richtárik, Martin Takáč
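
As a rough illustration of the randomized selection rule mentioned in the abstract above (a generic textbook scheme with placeholder data, not the authors' truss-specific code), a serial randomized coordinate descent step for a smooth least-squares objective could look as follows:

```python
import numpy as np

def randomized_coordinate_descent(grad_i, lipschitz, x0, iterations=2000, rng=None):
    """Generic serial randomized coordinate descent sketch (not the paper's code).

    grad_i(x, i)  -- partial derivative of the objective with respect to coordinate i
    lipschitz[i]  -- coordinate-wise Lipschitz constant, used as step size 1/L_i
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    n = x.size
    for _ in range(iterations):
        i = rng.integers(n)                   # randomized rule: pick one coordinate uniformly
        x[i] -= grad_i(x, i) / lipschitz[i]   # coordinate step with step size 1/L_i
    return x

# Toy usage on f(x) = 0.5 * ||A x - b||^2 with random placeholder data (not a truss instance).
A = np.random.default_rng(1).normal(size=(50, 20))
b = np.random.default_rng(2).normal(size=50)
L = (A ** 2).sum(axis=0)                      # coordinate-wise Lipschitz constants of the gradient
x_hat = randomized_coordinate_descent(lambda x, i: A[:, i] @ (A @ x - b), L, np.zeros(20))
```

The greedy rule would instead update the coordinate with the largest partial derivative magnitude, while the parallel CUDA codes update many coordinates per iteration.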
About error bounds in metric spaces

The paper presents a general primal space classification scheme of necessary and sufficient criteria for the error bound property incorporating the existing conditions. Several primal space derivative-like objects – slopes – are used to characterize the error bound property of extended-real-valued functions on metric spaces.

Marian J. Fabian, René Henrion, Alexander Y. Kruger, Jiří V. Outrata
Electricity Price Modelling for Turkey

This paper presents customized models to predict the next day's electricity price in short-term periods for Turkey's electricity market. Turkey's electricity market is evolving from a centralized approach to a competitive market. Fluctuations in electricity consumption show that there are three periods: day, peak, and night. The approach proposed here is based on robust and continuous optimization techniques, which ensures achieving the optimum electricity price to minimize the error in periodic price prediction. Commonly, next-day electricity prices are forecasted by using time series models, specifically the dynamic regression model. Therefore, the price prediction performance was compared with dynamic regression. Numerical results show that CMARS and RCMARS predict the prices with 30% less error compared to dynamic regression.

Miray Hanım Yıldırım, Ayşe Özmen, Özlem Türker Bayrak, Gerhard Wilhelm Weber

Discrete Optimization, Graphs and Networks

Frontmatter
The stable set polytope of claw-free graphs with stability number greater than three

In 1965 Edmonds gave the first complete polyhedral description for a combinatorial optimization problem: the Matching polytope. Many researchers tried to generalize his result by considering the Stable Set polytope of claw-free graphs. However this is still an open problem. Here we solve it for the class of claw-free graphs with stability number greater than 3 and without 1-joins.

Anna Galluccio, Claudio Gentile, Paolo Ventura
Packing Euler graphs with traces

For a graph G = (V,E) and a vertex v ∈ V, let T(v) be a local trace at v, i.e. T(v) is an Eulerian subgraph of G such that every walk W(v) with start vertex v can be extended to an Eulerian tour in T(v). In general, local traces are not unique. We prove that if G is Eulerian, every maximum edge-disjoint cycle packing Z* of G induces maximum local traces T(v) at v for every v ∈ V. Conversely, if the total size $\sum_{v \in V} |T(v)|$ is minimal, then the set of related edge-disjoint cycles in G must be maximum.

Peter Recht, Eva-Maria Sprengel
Minimum Cost Hyperassignments with Applications to ICE/IC Rotation Planning

Vehicle rotation planning is a fundamental problem in rail transport. It decides how the railcars, locomotives, and carriages are operated in order to implement the trips of the timetable. One important planning requirement is operational regularity, i.e., using the rolling stock in the same way on every day of operation. We propose to take regularity into account by modeling the vehicle rotation planning problem as a minimum cost hyperassignment problem (HAP). Hyperassignments are generalizations of assignments from directed graphs to directed hypergraphs. Finding a minimum cost hyperassignment is NP-hard. Most instances arising from regular vehicle rotation planning, however, can be solved well in practice. We show that, in particular, clique inequalities strengthen the canonical LP relaxation substantially.

Olga Heismann, Ralf Borndörfer
Primal Heuristics for Branch-and-Price Algorithms

In this paper, we present several primal heuristics which we implemented in the branch-and-price solver GCG based on the SCIP framework. This involves new heuristics as well as heuristics from the literature that make use of the reformulation yielded by the Dantzig-Wolfe decomposition. We give short descriptions of those heuristics and briefly discuss computational results. Furthermore, we give an outlook on current and further development.

Marco Lübbecke, Christian Puchert
Rounding and Propagation Heuristics for Mixed Integer Programming

Primal heuristics are an important component of state-of-the-art codes for mixed integer programming. In this paper, we focus on primal heuristics that only employ computationally inexpensive procedures such as rounding and logical deductions (propagation). We give an overview of eight different approaches. To assess the impact of these primal heuristics on the ability to find feasible solutions, in particular early during search, we introduce a new performance measure, the primal integral. Computational experiments evaluate this and other measures on MIPLIB 2010 benchmark instances.

Tobias Achterberg, Timo Berthold, Gregor Hendel
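
A hedged sketch of the simplest member of this family, plain rounding of an LP-relaxation solution followed by a feasibility check; the heuristics studied in the paper are more elaborate, and all names below are illustrative placeholders:

```python
import math

def simple_rounding(lp_solution, integer_vars, is_feasible):
    """Round fractional values of integer variables to the nearest integer and
    keep the point only if it satisfies all constraints of the MIP.

    lp_solution  -- dict mapping variable name to its LP-relaxation value
    integer_vars -- iterable of variables that must be integral
    is_feasible  -- placeholder callable checking all MIP constraints
    """
    candidate = dict(lp_solution)
    for var in integer_vars:
        candidate[var] = math.floor(lp_solution[var] + 0.5)
    return candidate if is_feasible(candidate) else None
```

Propagation-based variants additionally apply logical deductions after each fixing, so that subsequent roundings respect the implied variable bounds.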
Inverse Shortest Path Models Based on Fundamental Cycle Bases

The inverse shortest path problem has received considerable attention in the literature since the seminal paper by Burton and Toint from 1992. Given a graph and a set of paths the problem is to find arc costs such that all specified paths are shortest paths. The quality of the arc costs is measured by the deviation from some ideal arc costs. Our contribution is a novel modeling technique for this problem based on fundamental cycle bases. For ‘LP compatible’ norms we present a cycle basis model equivalent to the LP dual. The LP dual of our cycle basis model is a path based model that only requires a polynomial number of path constraints. This model is valid also for ‘LP incompatible’ norms. This yields the first polynomial-sized path formulation of the original problem.

Mikael Call, Kaj Holmberg
Parallel algorithms for the maximum flow problem with minimum lot sizes

In many transportation systems, the shipment quantities are subject to minimum lot sizes in addition to regular capacity constraints. This means that either the quantity must be zero, or it must be between the two bounds. In this work, we prove that the maximum flow problem with minimum lot-size constraints on the arcs is strongly NP-hard, and we enhance the performance of a previously suggested heuristic. Profiling the serial implementation shows that most of the execution time is spent on solving a series of regular max flow problems. Therefore, we develop a parallel augmenting path algorithm that accelerates the heuristic by an average factor of 1.25.

Mujahed Eleyat, Dag Haugland, Magnus Lie Hetland, Lasse Natvig
Estimating trenching costs in FTTx network planning

In this paper we assess to what extent trenching costs of an FTTx network are unavoidable, even if technical side constraints are neglected. For that purpose we present an extended Steiner tree model. Using a variety of realistic problem instances, we demonstrate that the total trenching cost can only be reduced by about 5 percent in realistic scenarios. This work has been funded by BMBF (German Federal Ministry of Education and Research) within the program “KMU-innovativ”.

Sebastian Orlowski, Axel Werner, Roland Wessäly

Decision Analysis, Decision Support

Frontmatter
Why pairwise comparison methods may fail in MCDM rankings

This article is based on the author's previous articles on a correct way of ranking alternatives in Multi-criteria Decision-Making (MCDM). A straightforward but robust Weighted-Sum Procedure (WSP) is presented. It consists of additively combining ranks or scores evaluated against a dimensionless scale common to all criteria. MCDM methodologies based on pairwise comparison methods, by contrast, do not provide the correct rankings in many practical situations. The roots of the observed failures are made clearer for the well-known PROMETHEE method by means of a simple example.

PL Kunsch
Ranking of qualitative decision options using copulas

We study the ranking of classified multi-attribute qualitative options. To obtain a full ranking of options within classes, qualitative options are mapped into quantitative ones. Current approaches, such as the Qualitative-Quantitative (QQ) method, use linear functions for ranking, hence performing well for linear and monotone options; however QQ underperforms in cases of non-linear and nonmonotone options. To address this problem, we propose a new QQ-based method in which we introduce copulas as an aggregation utility instead of linear functions. In addition, we analyze the behavior of different hierarchical structures of bivariate copulas to model the non-linear dependences among attributes and classes.

Biljana Mileva-Boshkoska, Marko Bohanec
Learning utility functions from preference relations on graphs

The increasing popularity of graphs as fundamental data structures, due to their inherent flexibility in modeling information and its structure, has led to the development of methods to efficiently store, search and query graphs. Graphs are nonetheless complex entities whose analysis is cognitively challenging. This calls for the development of decision support systems that build upon a measure of ‘usefulness’ of graphs. We address this problem by introducing and defining the concept of ‘graph utility’. As the direct specification of utility functions is itself a difficult problem, we explore the problem of learning utility functions for graphs on the basis of user preferences that are extracted from pairwise graph comparisons.

Géraldine Bous
Dealing with conflicting targets by using group decision making within PROMETHEE

This paper presents an approach to applying PROMETHEE for group decision making to support strategic decisions on the implementation of product-service systems (PSS). Such strategic decisions are dependent not only on profit but also on other economic and ecological criteria, i.e. the gain or loss of know-how and chances as well as risks concerning cooperation. Therefore, the application of a multi-criteria decision support methodology is necessary in order to include both the provider's and the customer's perspective. In contrast to the application of PROMETHEE by a single decision maker, the weighting process includes the share of the vote of the different decision makers – as already shown in other approaches – as well as conflicting targets of the decision makers for the defined set of criteria.

Ute Weissfloch, Jutta Geldermann
Relation between the Efficiency of Public Forestry Firms and Subsidies: The Swiss Case

This study is an empirical analysis of the productive efficiency of public forestry firms in Switzerland. Because of the available database, the period under examination extends from 1998 to 2006, and is based on a balanced panel of 100 firms. We inquire whether public subsidies exert any significant impact on the efficiency of these firms. In order to determine the productive efficiency, a nonparametric method (DEA) is applied. Panel unit root tests and panel cointegration technique are employed to establish the long-run relationship between efficiency and selected variables. Results show that subsidies had a positive influence on technical efficiency.

Bernur Açıkgöz Ersoy, J. Alexander K. Mack

Energy, Environment and Climate

Frontmatter
Modeling impacts of the European Emission Trade System on the future energy sector

This study focuses on regulatory instruments and especially on the European Emission Trade System (ETS). It is in our interest to forecast the effects of the ETS on different economic indicators and its costs. Therefore we use an optimization model, in which we consider the regulation framework, the market parameters and technical constraints for the German energy market as well as an endogenous price for emission allowances, running times of plants and capacity enlargements.

Carola Hammer
Feasibility on using carbon credits: A multiobjective model

This paper examines the economic feasibility of trading Certified Emission Reductions (CERs) from Clean Development Mechanism (CDM) projects that are related to electricity generation from renewable energy sources in Brazil. Its purpose is to identify favorable conditions for combining CER trades obtained by generating electricity from wind power, biomass cogeneration and small hydro-power plants in place of fossil fuel plants. As those are all seasonal sources, meaning that the energy supply varies over the months of the year, some risks arise associated with the CERs' net benefit. Instead of being examined alone, given that some sources can hedge others, the projects are analyzed in a portfolio framework.

Bruna de Barros Correia, Natália Addas Porto, Paulo de Barros Correia
Integration of Fluctuating Renewable Energy in Europe

With increasing amounts of power generation from intermittent sources like wind and solar, capacity planning has not only to account for the expected load variations but also for the stochastics of volatile power feed-in. Moreover investments in power generation are no longer centrally planned in deregulated power markets but rather decided on competitive grounds by individual power companies. This poses particular challenges when it comes to dispatch. Within this article an approach is presented which allows assessing electricity market development in the presence of stochastic power feed-in and endogenous investments of power plants and renewable energies.

Stephan Spiecker, Christoph Weber
Price-Induced Load-Balancing at Consumer Households for Smart Devices

The rising share of renewable energies in today's power grids poses challenges to electricity providers and distributors. Renewable energies, such as solar power and wind, are not as reliable as conventional energy sources. The literature introduces several concepts of how renewable energy sources can be load-balanced on the producer side. However, the consumer side also offers load-balancing potential. Smart devices are able to react to changing price signals. A rational behavior for a smart device is to run when electricity rates are low. Possible devices include washing machines, dryers, refrigerators, warm water boilers, and heat pumps. Prototypes of these devices are just starting to appear. For a field experiment with 500 households we simulate adequate device behavior. The simulation leads to a mapping from price signal to load change. We then train a neural network to output an appropriate price signal for a desired load change. Our results show that even with strong consumer-friendly constraints on acceptable price changes, the resulting load change is significant. We are currently implementing the results with a leading energy services provider.

Cornelius Köpp, Hans-Jörg von Metthenheim, Michael H. Breitner
Integration of renewable energy sources into the electricity system with new storage and flexibility options

The integration of renewable energy sources is one of the key challenges for the transition of the energy supply chain towards a low carbon society. Due to the intermittency of wind and solar power, additional storage and flexibility are required for their integration into the electricity system. Based on a scenario analysis for Germany, the economic and environmental effects of flexibility from electrical cooling, as an example of demand side management in a smart grid, as well as of traditional flexibility from pumped storage power plants, are evaluated with the PowerFlex model developed by Öko-Institut. In the 2030 scenario with new flexibility, about 500 GWh of additional renewable electricity could be integrated, which leads to cost reductions as well as a decrease in carbon dioxide emissions.

Matthias Koch, Dierk Bauknecht

Financial Modeling, Risk Management, Banking

Frontmatter
Performance Evaluation of European Banks Using Multicriteria Analysis Techniques

The recent crisis has reaffirmed the need to strengthen the procedures for monitoring the performance and risk-taking behavior of banks. In this paper multicriteria analysis methods are used to evaluate the financial performance of a sample of European banks over the period 2006–2009. Different evaluation schemes are used, including a value function model, an outranking approach, and a cross-efficiency technique. The obtained results provide useful insights on the performance of the banks, the effects of the recent crisis, the relationship between bank performance and bank size and country of origin, the stability of the evaluations over time and the factors that describe the dynamics of bank performance.

Michael Doumpos, Kyriaki Kosmidou
Growth Optimal Portfolio Insurance in Continuous and Discrete Time

We propose a simple solution for the problem of insurance of the Growth Optimal Portfolio (GOP). Our approach comprises two layers, where essentially we combine OBPI and CPPI for portfolio protection. First, using martingale convex duality methods, we obtain terminal payoffs of non-protected and insured portfolios. We represent the terminal value of the insured GOP as a payoff of a call option written on the non-protected portfolio. Using the relationship between option pricing and portfolio protection, we apply the Black-Scholes formula to derive a simple closed-form solution for the optimal investment strategy. Second, we show that the classic CPPI method provides an upper bound on the investment fraction in discrete time and in a sense is an extreme investment rule. However, combined with the developed continuous-time protection method, it allows gap risk and dynamic risk constraints (VaR, CVaR) to be handled. Moreover, it maintains strategy performance against volatility misspecification. Numerical simulations show the robustness of the proposed algorithm and that the developed method captures market recovery, whereas the CPPI method often fails to achieve that.

Sergey Sosnovskiy
The Regularization Aspect of Optimal-Robust Conditional Value-at-Risk Portfolios

In portfolio management, Robust Conditional Value-at-Risk (Robust CVaR) has been proposed to deal with structured uncertainty in the estimation of the assets' probability distribution. Meanwhile, regularization in portfolio optimization has been investigated as a way to construct portfolios that show satisfactory out-of-sample performance under estimation error. In this paper, we prove that optimal Robust CVaR portfolios possess the regularization property. Based on expected utility theory concepts, we explicitly derive the regularization scheme that these portfolios follow and its connection with the scenario set properties.

Apostolos Fertis, Michel Baes, Hans-Jakob Lüthi
The effect of different market opening structures on market quality – experimental evidence

This experimental study focuses on the effect of opening call auctions on market quality. Our examination extends the existing market microstructure literature in two respects. First, in addition to analyzing the market opening, we examine the effect of an opening call auction on the subsequent continuous trading phase. Second, we analyze two different kinds of call auctions with different pre-opening transparency levels. We find that an opening call auction significantly increases the informational efficiency of opening prices and leads to higher market quality of the subsequent continuous double auction market as compared to the continuous double auction alone.

Gernot Hinterleitner, Philipp Hornung, Ulrike Leopold-Wildburger, Roland Mestel, Stefan Palan
Reassessing recovery rates – floating recoveries

An important but rather neglected input to the pricing of credit derivatives is the recovery rate. It is a priori not clear how an (expected) recovery rate should be chosen for standard credit derivatives pricing models. The problem is especially acute when quoted prices contradict previous recovery rate assumptions. Therefore, in this paper we consider a dynamic calibration of traditional static CDS models. We start from the usual recalibration and propose heuristics for a market-consistent calibration of the recovery rate. Thus the market-implied recovery rate that is now an output can be used as an input for stochastic recovery rate models.

Chris Kenyon, Ralf Werner
Downside Risk Approach for Multi-Objective Portfolio Optimization

This paper presents a multi-objective portfolio model with the expected return as a performance measure and the expected worst-case return as a risk measure. The problems are formulated as a triple-objective mixed integer program. One of the problem objectives is to allocate the wealth on different securities to optimize the portfolio return. The portfolio approach allows two percentile measures of risk popular in financial engineering, value-at-risk (VaR) and conditional value-at-risk (CVaR), to be applied. The decision maker can assess the value of portfolio return and the risk level, and can decide how to invest in a real life situation by comparing with ideal (optimal) portfolio solutions. The concave efficient frontiers illustrate the trade-off between the conditional value-at-risk and the expected return of the portfolio. Numerical examples based on historical daily input data from the Warsaw Stock Exchange are presented and selected computational results are provided. The computational experiments show that the proposed solution approach provides the decision maker with a simple tool for evaluating the relationship between the expected and the worst-case portfolio return.

Bartosz Sawik
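
For reference, one standard scenario-based linear reformulation of CVaR minimization (the Rockafellar–Uryasev formulation); the paper's triple-objective model combines such a risk term with the expected and worst-case return, and the notation below is generic rather than taken from the paper:

```latex
\begin{align*}
\min_{x,\,\eta,\,u}\quad & \eta + \frac{1}{(1-\alpha)S}\sum_{s=1}^{S} u_s \\
\text{s.t.}\quad & u_s \ge -r_s^{\top}x - \eta,\quad u_s \ge 0, \qquad s = 1,\dots,S,\\
& \mathbf{1}^{\top}x = 1,\quad x \ge 0,
\end{align*}
```

where x denotes the portfolio weights, r_s the scenario return vector, α the confidence level, and η takes the value of VaR at the optimum.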

Game Theory, Computational and Experimental Economics

Frontmatter
Social Values and Cooperation. Results from an Iterated Prisoner’s Dilemma Experiment.

The following article deals with the question of cooperation in dilemma situations. We ran an iterated Prisoner's Dilemma experiment and measured the players' social value orientation using the Ring Measure of Social Value. We then analyze the players' behavior in the Prisoner's Dilemma in relation to their social value orientation to test the hypothesis that prosocial players are more likely to cooperate. We find evidence that this is indeed the case. We do not find evidence that if two prosocial players interact with each other they achieve higher cooperation rates than two proself players or one prosocial and one proself player.

Jürgen Fleiß, Ulrike Leopold-Wildburger
Honesty in Budgetary Reporting - an Experimental Study

In the experiment we investigate how an agent's participation in the budgetary process influences his willingness to truthfully communicate private information when there are financial incentives for misrepresentation. The paper focuses on the trade-off between the agent's preferences for wealth and honesty. The objective is to find and examine the relation between social preferences and honesty in managerial reporting.

Arleta Mietek, Ulrike Leopold-Wildburger
Points Scoring Systems in Formula 1: A Game Theoretic Perspective

An efficient points scoring system is often vital to encouraging competition in a sport. It is also related to the viewership and spectator numbers crucial to lasting popularity and financial success. This paper analyzes the three points scoring systems of the last twenty seasons in Formula 1 (1991-2010) from a game theoretic perspective to examine their competitive impact.

Krystsina Bakhrankova
An experiment examining insider trading with respect to different market structures

As theoretical and empirical approaches suffer from some shortcomings when it comes to analyzing insiders' behavior, we conducted an adequate experiment. The experimental design incorporates traders with perfect information about the fundamental value of the tradeable asset. These insiders compete with regular uninformed participants on different market structures, in particular on a transparent and on an intransparent call market as well as on a continuous double auction. The results shed first light on a couple of interesting issues, including whether and how insiders try to stay undetected, how their profits are accumulated and which market structure is more advantageous for insiders.

Philipp Hornung, Gernot Hinterleitner, Ulrike Leopold-Wildburger, Roland Mestel, Stefan Palan
Markov Simulation of an Iterated Prisoners’ Dilemma Experiment

We study cooperation and its development over time in a repeated prisoner's dilemma experiment with unknown length and unknown continuation probability. Subjects are re-matched with new partners several times. The study examines the influence of past decisions on future behavior. In the simulation we raise the question whether the experimentally observed time pattern of cooperation can be reconstructed with a simple Markov model of individual behavior. The model parameters are inferred from the experimental data. The resulting Markov model is used to simulate experimental outcomes. Our model indicates that a simple Markov model can be used as a reasonable description of transition probabilities for successive states of cooperation.

Thomas Burkhardt, Armin Leopold, Ulrike Leopold-Wildburger
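
A minimal sketch of the kind of Markov simulation described above; the transition probabilities in the paper are inferred from the experimental data, whereas the matrix below is a made-up placeholder:

```python
import numpy as np

# States: 0 = cooperate, 1 = defect; TRANSITIONS[i, j] = P(next state = j | current state = i).
# Placeholder values -- the study estimates these probabilities from the experiment.
TRANSITIONS = np.array([[0.8, 0.2],
                        [0.4, 0.6]])

def simulate_play(rounds=50, start=0, rng=None):
    """Simulate one subject's cooperate/defect sequence as a two-state Markov chain."""
    rng = rng or np.random.default_rng(0)
    state, path = start, []
    for _ in range(rounds):
        path.append(state)
        state = rng.choice(2, p=TRANSITIONS[state])
    return path

# Aggregate cooperation rate over many simulated subjects, comparable to an observed time pattern.
cooperation_rate = np.mean([s == 0 for run in range(100)
                            for s in simulate_play(rng=np.random.default_rng(run))])
```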
Determining the Optimal Strategies for Antagonistic Positional Games in Markov Decision Processes

A class of stochastic antagonistic positional games for Markov decision processes with average and expected total discounted cost optimization criteria is formulated and studied. Saddle point conditions in the considered class of games that extend saddle point conditions for deterministic parity games are derived. Furthermore, algorithms for determining the optimal stationary strategies of the players are proposed and justified.

Dmitrii Lozovanu, Stefan Pickl

Health, Life Sciences, Bioinformatics

Frontmatter
Analyses of Two Different Regression Models and Bootstrapping

Regression methods are used to explain the relationship between a single response variable and one or more explanatory variables. Graphical methods are generally the first step and are used to identify models that can be explored to describe the relationship. Although linear models are frequently used and they are user friendly, many important associations are not linear and require considerably more analytical effort. This study is focused on such nonlinear models. To perform statistical inference in this context, we need to account for the error structure of the data. The experimental design for the nutrition data that we use called for each subject to be studied under two different values of the explanatory variable. However, some participants did not complete the entire study and, as a result, the data are available for only one value of the explanatory variable for them. In the analysis section, the bootstrapping method will be used to re-sample the data points with replacement, which will then be used with a nonlinear parametric model. The confidence intervals for the parameters of the nonlinear model will be calculated with the reflection method for the nutrition data set. In addition, the break point of the spline regression will be determined for the same data set. Although the nutrition data set will be used for this study, the basic ideas can be used in many other fields such as education, engineering and biology.

Fulya Gokalp
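
A minimal sketch of bootstrap resampling for a nonlinear fit; the percentile-style interval shown here is only a simple stand-in for the reflection method used in the study, and the model and data are synthetic placeholders rather than the nutrition data:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Placeholder nonlinear model, not the study's parametric or spline form.
    return a * (1.0 - np.exp(-b * x))

def bootstrap_params(x, y, n_boot=500, rng=None):
    """Resample (x, y) pairs with replacement and refit the nonlinear model each time."""
    rng = rng or np.random.default_rng(0)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))
        params, _ = curve_fit(model, x[idx], y[idx], p0=[1.0, 1.0], maxfev=5000)
        estimates.append(params)
    return np.array(estimates)

# Synthetic data standing in for the real data set.
x_data = np.linspace(0.1, 5.0, 40)
y_data = model(x_data, 2.0, 1.3) + np.random.default_rng(1).normal(scale=0.1, size=40)
boots = bootstrap_params(x_data, y_data)
ci = np.percentile(boots, [2.5, 97.5], axis=0)   # simple percentile intervals per parameter
```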
Modifications of BIC for data mining under sparsity

In many research areas today the number of features for which data is collected is much larger than the sample size on which inference is based. This is especially true for applications in bioinformatics, but the theory presented here is of general interest in any data mining context where the number of “interesting” features is expected to be small. In particular, mBIC, mBIC1 and mBIC2 are discussed, three modifications of the Bayesian information criterion BIC which, in the case of orthogonal designs, control the family-wise error rate (mBIC) and the false discovery rate (mBIC1, mBIC2), respectively. In a brief simulation study the performance of these criteria is illustrated for orthogonal and non-orthogonal regression matrices.

Florian Frommlet

Location, Logistics, Transportation and Traffic

Frontmatter
Scheduling of outbound luggage handling at airports

This article considers the outbound luggage handling problem at airports. The problem is to assign handling facilities to outbound flights and decide about the handling start time. This dynamic, near real-time assignment problem is part of the daily airport operations. Quality, efficiency and robustness issues are combined in a multi-criteria objective function. We present important requirements for the real world usage of the model and compare different solution techniques. One solution method is a heuristic approach oriented on the logic of GRASP (Greedy randomized adaptive search procedure). Another solution method is a decomposition approach. The problem is divided into different subproblems and solved in iterative steps. The different solution approaches are tested on real world data from Frankfurt Airport.

Torben Barth, David Pisinger
A tree search heuristic for the container retrieval problem

The problem of finding a sequence of crane moves in order to retrieve all containers from a block of container stacks in a predefined order with a minimum of crane move time is called the container retrieval problem. In this article, we describe a tree search heuristic based on a natural classification scheme of crane moves that is able to solve the container retrieval problem for typical problem dimensions in very short computation times. A comparison with the approach recently published by Lee and Lee (2010) shows that the heuristic is a very competitive method for solving the container retrieval problem. The mean crane move time for the tested problem instances could be reduced by 15.7 %, the average number of crane moves was reduced by 8.7 %, while the computation speed could be drastically improved from several hours to less than 10 seconds on average, the latter being especially important for the application in container terminals where the retrieval decisions have to be made in short periods of time.

Florian Forster, Andreas Bortfeldt
Spatial and temporal synchronization of vehicles in logistics networks

In logistics networks, service providers offer on-site services to the customers. Often, there is a need to synchronize the routes of vehicles that provide such services. Synchronization is required for example in case of a heterogeneous fleet of serving vehicles, where large vehicles carry goods and supply small vehicles, which distribute the goods to customers. We model such a routing problem for a network of customers that are served through a fleet of heterogeneous vehicles. We present a new model for synchronizing cargo exchanges on vehicle routes and provide first computational results for this problem.

Dorota Slawa Mankowska, Christian Bierwirth, Frank Meisel
Sequential Zone Adjustment for Approximate Solving of Large p-Median Problems

The p-median problem is used very often for designing the optimal structure of most public service systems. Since particular models are characterized by a large number of possible facility locations, current exact approaches often fail. This paper deals with an approximate approach based on a specific model reformulation. It uses an approximation of the network distance between a service center and a customer by some of the pre-determined distances. The pre-determined distances are given by dividing points separating the range of possible distances. The deployment of these points influences the accuracy of the approximation. To improve this approach, we have developed a sequential method of dividing point deployment. We study here the effectiveness of this method and compare the results to the former static method.

Jaroslav Janacek, Marek Kvet
A new alternating heuristic for the (r | p)–centroid problem on the plane

In the (r | p)-centroid problem, two players, called leader and follower, open facilities to service clients. We assume that clients are identified with their location on the Euclidean plane, and facilities can be opened anywhere in the plane. The leader opens p facilities. Later on, the follower opens r facilities. Each client patronizes the closest facility. Our goal is to find p facilities for the leader to maximize his market share. For this Stackelberg game we develop a new alternating heuristic, based on the exact approach for the follower problem. At each iteration of the heuristic, we consider the solution of one player and calculate the best answer for the other player. At the final stage, the clients are clustered, and an exact polynomial-time algorithm for the (1 | 1)-centroid problem is applied. Computational experiments show that this heuristic dominates the previous alternating heuristic of Bhadury, Eiselt, and Jaramillo.

Emilio Carrizosa, Ivan Davydov, Yury Kochetov
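
In rough outline, the alternating scheme reads as follows; the best-response steps below are crude random-sampling stand-ins for the exact follower algorithm and the final clustering stage used in the paper, and all data are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
clients = rng.random((60, 2))           # toy client locations in the unit square
p, r = 3, 2

def share(own_pts, other_pts):
    """Fraction of clients whose closest facility belongs to own_pts (ties go to the opponent)."""
    d_own = np.min(np.linalg.norm(clients[:, None] - own_pts[None], axis=2), axis=1)
    d_other = np.min(np.linalg.norm(clients[:, None] - other_pts[None], axis=2), axis=1)
    return float(np.mean(d_own < d_other))

def best_response(opponent_pts, k, samples=200):
    """Crude stand-in for the exact best-response computation: sample random placements."""
    best_pts, best_val = None, -1.0
    for _ in range(samples):
        pts = rng.random((k, 2))
        val = share(pts, opponent_pts)
        if val > best_val:
            best_pts, best_val = pts, val
    return best_pts

leader = rng.random((p, 2))
best_leader, best_share = leader, share(leader, best_response(leader, r))
for _ in range(10):                      # alternate best answers of follower and leader
    follower = best_response(leader, r)
    leader = best_response(follower, p)
    value = share(leader, best_response(leader, r))
    if value > best_share:
        best_leader, best_share = leader, value
```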
Column Generation for Multi-Matrix Blocking Problems

The blocking problem in railroad freight traffic is a tactical routing problem. The problem was modeled as an arc-based MIP in former publications. We reformulate the problem and introduce a new model. The modified objective function can be handled more easily without loss of structural information. Nevertheless, real instances cannot be solved by standard algorithms. Therefore, we present a column generation approach. The model is reformulated in a path-based way. We exploit the new model structure in the pricing step. Sets of new paths for each relation are generated in polynomial time.

Robert Voll, Uwe Clausen
Reduction of Empty Container Repositioning Costs by Container Sharing

Empty container repositioning is a major cost driver for international container operators. While the share of empty containers in total sea-based container flows is around 20%, consultants estimate the rate of empty container transport in the hinterland to be even twice as high [1].

Herbert Kopfer, Sebastian Sterzik
Optimizing strategic planning in median systems subject to uncertain disruption and gradual recovery

This paper addresses the capacity planning of facilities in median systems at an early stage so that, if potential disruptions arise and forecast demand is misestimated, overall costs are minimized. We consider some realistic features such as uncertainty in the magnitude, time and location of disruptions as well as gradual recovery of disrupted facilities over time. The proposed two-stage stochastic linear program (2-SLP) is solved in a computationally efficient way via an enhanced implementation of the stochastic decomposition method. To ascertain the quality of the solutions obtained, some deterministic bounds are calculated.

Chaya Losada, Atsuo Suzuki
A new simulated annealing algorithm for the freight consolidation problem

Logistics services are one of the most outsourced activities today. Freight charges for the outsourced transportation requests are mostly calculated based on tariff agreements depending on their quantities and distances. Due to economies of scale, it is beneficial for the outsourcing company to consolidate their requests, which is called the Freight Consolidation Problem (FCP). In this paper, the FCP is formally defined and modeled. To solve large-scale instances, a simulated annealing algorithm is proposed and tested on newly generated instances. Results show that the algorithm can provide good solutions within acceptable computational times.

Erk Struwe, Xin Wang, Christopher Sander, Herbert Kopfer
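
A generic simulated annealing skeleton of the kind referred to above; the FCP-specific solution encoding, cost function and neighbourhood move are problem-specific and only hinted at in comments, so this is a sketch rather than the authors' algorithm:

```python
import math
import random

def simulated_annealing(initial, cost, neighbor, t_start=100.0, t_end=0.01, alpha=0.95, moves_per_temp=50):
    """Standard simulated annealing loop: accept worse neighbours with probability exp(-delta/T)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha                     # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize (x - 7)^2 over the integers; for the FCP one would instead encode an
# assignment of transportation requests to consolidated shipments and move requests between them.
solution, value = simulated_annealing(0, lambda x: (x - 7) ** 2, lambda x: x + random.choice([-1, 1]))
```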
Models and Algorithms for Intermodal Transportation and Equipment Selection

We study a cement transportation problem with asset-management (equipment selection) for strategic planning scenarios involving multiple forms of transportation and network expansion. A deterministic mixed integer programming approach is difficult due to the underlying capacitated multi-commodity network flow model and the need to consider a time-space network to ensure operational feasibility. Considering uncertain aspects is also difficult but important. In this paper we propose three approaches: solve a deterministic mixed integer program optimally; solve stochastic programs to obtain robust bounds on the solution; and, study alternative solutions to obtain their optimal cost vectors using inverse programming.

Christina Burt, Jakob Puchinger
Contraction Hierarchies with A* for digital road maps

One of the most successful recent approaches for solving single-source, single-destination problems in graphs is the Contraction Hierarchies (CH) algorithm, originally published by [3]. The general algorithm consists of two phases: first, a total order on the nodes in the graph is calculated; second, for queries, a modified bi-directional Dijkstra search is performed on the hierarchy implied by this order. Relying on Dijkstra's algorithm, CH makes no use of the geometric information contained within digital road maps. We propose A*-like modifications of the original query algorithm that double query speed. Results are presented in a benchmark using a map from the OpenStreetMap project.

Curt Nowak, Felix Hahne, Klaus Ambrosi
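
To illustrate how geometric information can guide a query (this is plain A* with a Euclidean lower bound on a toy graph, not the CH-specific bidirectional variant proposed in the paper):

```python
import heapq
import math

def a_star(graph, coords, source, target):
    """A* shortest path; graph[u] is a list of (v, cost) pairs and coords[u] an (x, y) tuple.

    The heuristic is the straight-line distance to the target, admissible whenever
    edge costs are at least the geometric distances between their endpoints.
    """
    def h(u):
        (x1, y1), (x2, y2) = coords[u], coords[target]
        return math.hypot(x1 - x2, y1 - y2)

    dist = {source: 0.0}
    frontier = [(h(source), source)]
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == target:
            return dist[u]
        for v, cost in graph.get(u, []):
            new_dist = dist[u] + cost
            if new_dist < dist.get(v, math.inf):
                dist[v] = new_dist
                heapq.heappush(frontier, (new_dist + h(v), v))
    return math.inf

# Toy planar graph with edge costs no smaller than the straight-line distances.
coords = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (1.0, 1.0)}
graph = {"a": [("b", 1.0), ("c", 1.6)], "b": [("c", 1.0)], "c": []}
print(a_star(graph, coords, "a", "c"))   # prints 1.6
```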
An IP approach to toll enforcement optimization on German motorways

This paper proposes the first model for toll enforcement optimization on German motorways. The enforcement is done by mobile control teams and our goal is to produce a schedule achieving network-wide control, proportional to spatial and time-dependent traffic distributions. Our model consists of two parts. The first plans control tours using a vehicle routing approach with profits and some side constraints. The second plans feasible rosters for the control teams. Both problems can be modeled as Multi-Commodity Flow Problems. Adding additional coupling constraints produces a large-scale integrated integer programming formulation. We show that this model can be solved to optimality for real world instances associated with a control area in East Germany.

Ralf Borndörfer, Guillaume Sagnol, Elmar Swarat
Driver scheduling based on “driver-friendly” vehicle schedules

In the area of optimization of public transportation there are several methods for modeling and solving vehicle and driver scheduling problems. We designed a sequential heuristic method for solving the combined (vehicle and driver scheduling) problem. Our model is based on a modification of the vehicle schedules to satisfy driver requirements. We introduce a driver-friendly approach to the optimization of the scheduling, which is closer to the practice used by public transportation companies. We give test results, which serve as an illustrative example of the method, for the scheduling of the bus trips of the city of Szeged, Hungary.

Viktor Árgilán, János Balogh, József Békési, Balázs Dávid, Miklós Krész, Attila Tóth
Optimizing ordered median functions with applications to single facility location

This paper considers the problem of minimizing the ordered median function of finitely many rational functions over compact semi-algebraic sets. Ordered medians of rational functions are, in general, neither rational functions nor suprema of rational functions. We prove that the problem can be transformed into a new problem embedded in a higher-dimensional space where it admits a convenient representation. This reformulation admits a hierarchy of SDP relaxations that approximates, up to any degree of accuracy, the optimal value of those problems. We apply this general framework to a broad family of continuous location problems, solving some difficult problems.

Victor Blanco, Justo Puerto, Safae El Haj Ben Ali
A Column-and-Row Generation Algorithm for a Crew Planning Problem in Railways

We develop a set-covering type formulation for a crew planning problem that determines the minimum sufficient crew size for a region over a finite planning horizon where the periodic repeatability of crew schedules is considered as well. The resulting problem formulation cannot be solved with a traditional column generation algorithm. We propose a column-and-row generation algorithm and present preliminary computational results.

A. Ç. Suyabatmaz, G. Şahin

Metaheuristics and Biologically Inspired Approaches

Frontmatter
Harmony Search Algorithms for binary optimization problems

In many theoretical and empirical investigations, heuristic methods are studied for optimization problems in order to obtain good and valid solutions. In 2001 Harmony Search, a new music-inspired meta-heuristic, was introduced [1]. It has been applied to various types of optimization problems, for example structural design problems, showing significant improvements over other heuristics. Motivated by the question whether these heuristics represent a new class of meta-heuristics or whether they are only another representation of a well-known technique, a couple of investigations were made. In this paper we show that for the class of binary optimization problems this new nature-inspired heuristic is “equivalent” to an evolutionary algorithm.

Miriam Padberg
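
For concreteness, a minimal binary Harmony Search in the standard form (harmony memory consideration, pitch adjustment as a bit flip, replacement of the worst memory member), applied to a toy OneMax objective; this generic sketch is meant only to make the comparison with an evolutionary algorithm tangible and is not taken from the paper:

```python
import random

def binary_harmony_search(fitness, n_bits, hms=10, hmcr=0.9, par=0.3, iterations=2000, seed=0):
    """Minimal binary Harmony Search (maximization).

    hms -- harmony memory size, hmcr -- harmony memory consideration rate,
    par -- pitch adjustment rate (realized here as a bit flip).
    """
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(hms)]
    for _ in range(iterations):
        new = []
        for j in range(n_bits):
            if rng.random() < hmcr:
                bit = rng.choice(memory)[j]      # take the value of variable j from a stored harmony
                if rng.random() < par:
                    bit = 1 - bit                # pitch adjustment = bit flip in the binary case
            else:
                bit = rng.randint(0, 1)          # random consideration
            new.append(bit)
        worst = min(range(hms), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new                  # replace the worst harmony, as an EA replaces the worst individual
    return max(memory, key=fitness)

best = binary_harmony_search(sum, 30)            # OneMax: maximize the number of ones
```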
Modeling Complex Crop Management-Plant Interactions in Potato Production under Climate Change

High water withdrawals for agricultural purposes during the summer months have repeatedly led, over the last decades, to intolerable ecological conditions in surface water bodies in the Broye region located in Western Switzerland. In this study, we assess different irrigation water withdrawal policies with respect to a farmer's income and to the optimal irrigation and nitrogen fertilization strategies in potato production under current and expected future climate conditions in the Broye region. To this end, we optimize nitrogen and irrigation management decisions in potato production by the use of a bio-economic model and genetic algorithms. Our results show that limiting the water withdrawal amount leads to a substantial decrease in irrigation use, whereas the resulting reduction in a farmer's income from potato production is bearable.

Niklaus Lehmann, Robert Finger, Tommy Klein
An Enhanced Ant Colony System for the Sequential Ordering Problem

A well-known Ant Colony System algorithm for the Sequential Ordering Problem is studied to identify its drawbacks. Some criticalities are identified, and an Enhanced Ant Colony System method that tries to overcome them, is proposed. Experimental results show that the enhanced method clearly outperforms the original algorithm and becomes a reference method for the problem under investigation.

L. M. Gambardella, R. Montemanni, D. Weyland
Evolutionary Exploration of E/E-Architectures in Automotive Design

In this work we design an evolutionary algorithm for the exploration of the electric and electronic (E/E) architectures in a car. We describe a novel way to simultaneously optimize the assignments of function components to electronic control units (ECUs) and of ECUs to busses. To this end we present a suitable representation as well as corresponding variation operators. We also provide heuristics for the optimization of the communication infrastructure, i.e. how busses are connected to each other. Preliminary results show that the approach is able to produce architectures similar to those which are used nowadays, as well as promising alternatives.

Ralph Moritz, Tamara Ulrich, Lothar Thiele

Network Industries and Regulation

Frontmatter
Optimal dimensioning of pipe networks: the new situation when the distribution and the transportation functions are disconnected

De Wolf and Smeers [9] consider the problem of the optimal dimensioning of a gas transmission network when the topology of the network is known. The pipe diameters must be chosen to minimize the sum of the investment and operating costs. This model no longer reflects the current situation in the gas industry. Today, the transportation function and the gas buying function are separated. This work considers the new situation for the transportation company. We present first results obtained on the Belgian gas network and on a part of the French network.

Daniel de Wolf, Bouchra Bakhouya
Analysis of Investment Incentives and the Influence of Growth Options in Regulated Industries

In this contribution, a regulated company’s investment incentives are analyzed in a setting with cost-based regulation, a two-stage investment and growth options. First, it is shown that in the first period, at the first stage of the investment, the company will clearly underinvest from the regulator’s perspective. Second, a specific formula for the calculation of the regulated price in the second period is derived, which will set incentives for the company to choose the capacity level which the regulator considers efficient.

Sabine Pallas
Robust Optimization in Non-Linear Regression for Speech and Video Quality Prediction in Mobile Multimedia Networks

Quality of service (QoS) and quality of experience (QoE) of contemporary mobile communication networks are crucial, complex and correlated. QoS describes network performance while QoE depicts perceptual quality at the user side. A set of key performance indicators (KPIs) describes QoS and QoE in detail. Our research focuses especially on mobile speech and video telephony services that are widely provided by commercial UMTS mobile networks. A key point of cellular network planning and optimization is building voice and video quality prediction models. Prediction models have been developed using measurement data collected from live UMTS multimedia networks via drive-test measurement campaigns. In this paper, we predict the quality of mobile services using regression estimates inspired by the paradigm of robust optimization. The robust estimates suggest a weaker dependence than the one suggested by linear regression estimates between the QoS and QoE parameters and connect the strength of the dependence with the accuracy of the data used to compute the estimates.

Charalampos N. Pitas, Apostolos G. Fertis, Athanasios D. Panagopoulos, Philip Constantinou

OR in Industry, Software Applications, Modeling Languages

Frontmatter
Scheduling steel plates on a roller furnace

We introduce a single machine scheduling problem arising in the heat treatment of steel plates. To the best of our knowledge, there is no study on this problem in the literature up to now. We refer to this problem as the Heat Treatment Furnace Scheduling Problem with Distance Constraints (HTFSPD) and propose a mixed integer linear program (MILP) formulation. Since the problem itself is NP-hard and computational times for real world instances are too high, a genetic algorithm is developed in order to provide heuristic solutions. Computational results for some real data sets demonstrate the performance of the algorithm compared to the current solution method used in practice.

Eric Ebermann, Stefan Nickel
Check-In Counter Planning via a Network Flow Approach

Finding an appropriate number of open check-in counters is a typical planning problem at an airport. The problem has to take into account the flight plan, the arrival patterns of passengers and a given service level (maximum waiting time of a passenger before being served). The goal is to generate a schedule for the check-in counters that uses as few check-in counters as possible at any time. We present a network flow formulation for this problem and discuss additional constraints relevant for real-world scenarios. We show how to solve the model and present numerical results on artificial as well as on real-world data. The model is part of the Inform GroundStar suite that is successfully in use at more than 200 airports world-wide.

Torsten Fahle
OpenSolver - An Open Source Add-in to Solve Linear and Integer Programmes in Excel

OpenSolver is an open source Excel add-in that allows spreadsheet users to solve their LP/IP models using the COIN-OR CBC solver. OpenSolver is largely compatible with the built-in Excel Solver, allowing most existing LP and IP models to be solved without change. However, OpenSolver has none of the size limitations found in Solver, and thus can solve larger models. Further, the CBC solver is often faster than the built-in Solver, and OpenSolver provides novel model construction and on-sheet visualisation capabilities. This paper describes OpenSolver's development and features. OpenSolver can be downloaded free of charge at http://www.opensolver.org.

Andrew J Mason
Producing Routing Systems Flexibly Using a VRP Metamodel and a Software Product Line

Routing problems occur in a wide variety of situations. Due to the heterogeneity of cases, we do not yet know how to manage the complexity of addressing all the relevant aspects of logistic planning and solving the variety of different problem types in a cost-efficient way. In the last decade, we have witnessed the emergence of systematic approaches to managing variation within a set of related software systems. This paper presents an application of these advances from software engineering to vehicle routing: we suggest the construction of a higher-level (meta-)model of routing problems and the application of a software product line approach. The proposed approach results in a flexible product line for constructing a family of routing systems cost-efficiently.

Tuukka Puranen

Production Management, Supply Chain Management

Frontmatter
Fill Time, Inventory and Capacity in a Multi-Item Production Line under Heijunka Control

The concept of production levelling (or Heijunka), which is part of the Toyota Production System, aims at reducing the variance of upstream material requirements and of workload utilization. When using Heijunka control, however, the fluctuation of the external demand has to be compensated for by the fill time (the time from demand arrival until fulfilment) and/or end product inventory and/or reserve capacity. The inventory is usually controlled by Kanban loops. But precise statements on the necessary number of Kanban to guarantee a certain fill time are missing in the literature. We consider a multi-item production line with a given Heijunka schedule and Poisson demand. We derive approximate analytic relationships between fill time, inventory and capacity for three different planning situations.

Andreas Tegel, Bernhard Fleischmann
Tight lower bounds by semidefinite relaxations for the discrete lot-sizing and scheduling problem with sequence-dependent changeover costs

We study a production planning problem known as the discrete lot-sizing and scheduling problem with sequence-dependent changeover costs. We discuss two alternative QIP formulations for this problem and propose to compute lower bounds using a semidefinite relaxation of the problem rather than a standard linear relaxation. Our computational results show that for a certain class of instances, the proposed solution approach provides lower bounds of significantly better quality.

Celine Gicquel, Abdel Lisser
Condition-based Maintenance Policy for decentralised Production/Maintenance-Control

A condition-based release of maintenance jobs is analysed for a production/maintenance system where the conditions of the machines deteriorate as a function of multiple production parameters and maintenance is performed by a separate department. If a predefined operational availability is to be maintained, the machine condition that triggers the release of maintenance jobs at minimum cost has to be determined. In the paper, a specific continuous condition monitoring and an information exchange protocol are developed, the components of waiting time between release and execution are operationalised, and relevant effects of choosing the triggering condition are presented. Stochastic approaches to calculate the triggering condition are formulated and analysed by simulations.

Ralf Gössinger, Michael Kaluzny
Mid-Term Model-Plant Allocation for Flexible Production Networks in the Automotive Industry

In order to match capacity with volatile demand, original equipment manufacturers in the automotive industry increase plant flexibility. This allows for car models to be re-allocated between plants over time, to optimally use available capacity. The planning problem moves from the strategic to the tactical level. Decision support in the form of mathematical planning models accounting for the increased flexibility and characteristics of the mid-term planning situation is required. Two different modeling approaches are considered in this work and compared for their applicability to the planning situation: An approach based on a single time index model formulation and an approach based on a double time index formulation. They differ in their computational time and ability to integrate lost customer rates.

Kai Wittek, Achim Koberstein, Thomas S. Spengler
Optimal (R, nQ) Policies for Serial Inventory Systems with Guaranteed Service

In this paper, serial inventory systems with Poisson external demand and fixed costs at each stock are considered, and the guaranteed-service approach (GSA) is used to design their optimal echelon (R, nQ) inventory policies. The problem is decomposed into two independent sub-problems, a batch size decision sub-problem and a reorder point decision sub-problem, which are solved optimally using a dynamic programming algorithm and a branch-and-bound algorithm, respectively. Numerical experiments on randomly generated instances demonstrate the efficiency of these algorithms.

Peng Li, Haoxun Chen
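
To make the policy concrete, here is a minimal Python sketch of the (R, nQ) ordering logic at a single stage under Poisson demand. The reorder point R, batch size Q, demand rate and starting inventory position are hypothetical values chosen only for illustration; they are not taken from the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
R, Q = 5, 4                    # reorder point and base batch size (hypothetical)
LAM, HORIZON = 2.0, 20         # Poisson demand rate per period, number of periods

ip = 10                        # inventory position: on hand + on order - backorders
for period in range(HORIZON):
    demand = rng.poisson(LAM)
    ip -= demand
    # (R, nQ) rule: once the inventory position drops to R or below, order the
    # smallest multiple n*Q that brings it back into the interval (R, R+Q].
    if ip <= R:
        n = math.ceil((R - ip + 1) / Q)
        ip += n * Q
    print(f"period {period:2d}: demand={demand}, inventory position={ip}")
```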
Channel coordination in a HMMS-type supply chain with profit sharing contract

We investigate a supply chain with a single supplier and a single manufacturer. The manufacturer is assumed to know the demand for the final product, which is produced from a raw material ordered from the supplier just in time, i.e., the manufacturer holds no raw material inventory. Her costs consist of linear purchasing costs, quadratic production costs and quadratic holding costs for the final product. It is assumed that the market price of the final product is known as well, hence the sales of the manufacturer are known in advance. Her goal is to maximize her cumulated profits. The supplier’s costs are the quadratic manufacturing and inventory holding costs; his goal is to maximize the revenues minus the relevant costs. We do not examine the bargaining process that determines the adequate price and quantity. The situation is modeled as a differential game. The decision variables of the supplier are the sales price and the production quantity, while the manufacturer chooses a production plan that minimizes her costs and thus maximizes her cumulated profits. The basic problem is a Holt-Modigliani-Muth-Simon (HMMS) problem extended by linear purchasing costs. We examine two cases: a decentralized Nash solution and a centralized Pareto solution for optimizing the behavior of the players of the game.

Imre Dobos, Barbara Gobsch, Nadezhda Pakhomova, Grigory Pishchulov, Knut Richter
Enhancing Aggregate Production Planning with an Integrated Stochastic Queuing Model

Mathematical models for Aggregate Production Planning (APP) typically omit the dynamics of the underlying production system that arise from variable workload levels, since they assume fixed capacity buffers and predetermined lead times. Pertinent approaches to overcome these drawbacks are either restrictive in their modeling capabilities or prohibitive in their computational effort. In this paper, we introduce an Aggregate Stochastic Queuing (ASQ) model to anticipate capacity buffers and lead time offsets for each time bucket of the APP model. The ASQ model allows for flexible modeling of the underlying production system, and the corresponding optimization algorithm is computationally highly tractable. The APP and the ASQ model are integrated into a hierarchical framework and are solved iteratively. A numerical example is used to highlight the benefits of this novel approach.

Gerd J. Hahn, Chris Kaiser, Heinrich Kuhn, Lien Perdu, Nico J. Vandaele
Remanufacturing of used products in a closed-loop supply chain with quantity discount

An extended joint economic lot size problem is studied which incorporates the return flow of repairable (remanufacturable) used products. The supply chain under consideration consists of a single supplier and a single buyer. The buyer orders a single product from the supplier, uses it for her own needs, and collects a certain fraction of used items for the subsequent remanufacturing. The product is shipped to the buyer in the lot-for-lot fashion by a vehicle which also returns the collected used items to the supplier for remanufacturing and subsequent service of the buyer’s demand in the next order cycle. To satisfy the buyer’s demand, the supplier can remanufacture used items as well as manufacture new ones. For the given demand, productivity, disposal, setup, ordering and holding costs for serviceable and nonserviceable items at the supplier and the buyer, the optimal lot sizes and collection rates are determined which minimize the joint as well as individual total costs. Further, a problem of coordinating this supply chain by a type of quantity discount is addressed.

Grigory Pishchulov, Imre Dobos, Barbara Gobsch, Nadezhda Pakhomova, Knut Richter
Integrated capacity and inventory decisions

This paper deals with the simultaneous acquisition of capacity and material in a situation with uncertain demand and non-zero lead times for the supply of both material and capacity. Although there is a lot of literature on the time-phased acquisition of capacity and material, most of it focuses on only one of the two decisions. Using a dynamic programming formulation, we describe the optimal balance between using safety stocks and contingent workforce for various lead-time situations. We compare the cost ingredients of the optimal strategy with the standard inventory approach that neglects capacity restrictions in the decision. The experimental study shows that co-ordination of both decisions in the optimal strategy leads to cost reductions of around 10%. We also derive characteristics of the optimal strategy that we expect to provide a thorough basis for operational decision making.

N. P. Dellaert, S. D. P. Flapper, T. Tan, J. Jeunet
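
The flavour of the underlying trade-off can be illustrated with a small backward-recursion sketch in Python. The cost rates, capacity limits and demand scenarios below are invented, and lead times are ignored, so this is only a caricature of the kind of dynamic program described above, not the paper's model.

```python
import functools

# All numbers below are illustrative assumptions, not taken from the paper.
T = 4                                  # planning horizon (periods)
BASE_CAP = 4                           # regular capacity per period (units)
MAX_FLEX = 3                           # maximum contingent capacity per period
H, B, C_FLEX = 1.0, 9.0, 4.0           # holding, backorder, contingent-capacity cost
DEMANDS = [2, 3, 4, 5, 6]              # equiprobable demand scenarios

@functools.lru_cache(maxsize=None)
def cost_to_go(t, inv):
    """Minimal expected cost from period t onward, starting with net inventory inv."""
    if t == T:
        return 0.0
    best = float("inf")
    for flex in range(MAX_FLEX + 1):                 # contingent workforce decision
        for prod in range(BASE_CAP + flex + 1):      # production decision
            expected = C_FLEX * flex
            for d in DEMANDS:
                nxt = inv + prod - d
                stage = H * max(nxt, 0) + B * max(-nxt, 0)
                expected += (stage + cost_to_go(t + 1, nxt)) / len(DEMANDS)
            best = min(best, expected)
    return best

print(round(cost_to_go(0, 0), 2))
```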

Scheduling, Time Tabling and Project Management

Frontmatter
Lower bounds for a multi-skill project scheduling problem

The aim of this paper is to present lower bounds for a Multi-Skill Project Scheduling Problem, including some classical lower bounds and more enhanced ones. Starting from the lower bounds found in the literature, we focus on the particularities of our problem and on the adaptation of these lower bounds to it. We present preliminary results obtained with these new lower bounds. The adaptation of the time-window adjustments used with energetic reasoning shows the specificity of this problem, especially the slack computation, which requires solving a minimum-cost maximum-flow problem.

Cheikh Dhib, Anis Kooli, Ameur Soukhal, Emmanuel Néron
Synchronized Two-Stage Lot Sizing and Scheduling Problem in Automotive Industry

This study proposes a mixed-integer linear programming model to solve the synchronized two-stage lot sizing and scheduling problem with setup times and costs. The main motivation behind this study is the need to improve the inventory control and management system of a supplier that serves the automotive industry. The supplier considered in this study has a synchronized two-stage production system which involves three parallel injection machines in the first stage and a single spray dying station in the second stage. In order to synchronize production between these stages and improve the inventory levels, a mixed-integer linear programming model is proposed. With the help of the developed model, synchronization between the parallel injection machines and the single spray dying station at minimum total cost is obtained. Optimal lot sizes for each stage and optimal schedules of these lot sizes are determined by the developed model as well.

Fadime Uney-Yuksektepe, Rifat Gurcan Ozdemir
A branch-and-price algorithm for the long-term home care scheduling problem

In several countries, home care is provided for certain citizens living at home. The long-term home care scheduling problem is to generate work plans such that a high quality of service is maintained, the work hours of the employees are respected, and the overall cost is kept as low as possible. We propose a branch-and-price algorithm for the long-term home care scheduling problem. The pricing problem generates a one-day plan for an employee, and the master problem merges the plans with respect to regularity constraints. The method is capable of generating plans with up to 44 visits during one week.

M. Gamst, T. Sejr Jensen
On a multi-project staffing problem with heterogeneously skilled workers

We examine a workforce assignment problem arising in the multi-project environment of companies. For a given set of workers and for a given set of projects, we want to find one team of workers for each project that is able to satisfy the project workload during each period of project execution. The aim is to minimize average team size. We outline two mixed integer linear programming (MIP) formulations and introduce two heuristic approaches to this problem. A numerical study indicates the performance of the approaches and provides managerial insights into flexibility issues.

Matthias Walter, Jürgen Zimmermann
Integrated schedule planning with supply-demand interactions for a new generation of aircrafts

We present an integrated schedule planning model where the decisions of schedule design, fleeting and pricing are made simultaneously. Pricing is integrated through a logit demand model in which itinerary choice is modeled by defining the utilities of the alternative itineraries. Spill and recapture effects are appropriately incorporated in the model by using a logit formulation similar to the demand model. Furthermore, class segmentation is considered so that the model decides the allocation of seats to each cabin class. We propose a heuristic algorithm based on Lagrangian relaxation to deal with the high complexity of the resulting mixed-integer nonlinear problem.

Bilge Atasoy, Matteo Salani, Michel Bierlaire

Stochastic Programming, Stochastic Modeling and Simulation

Frontmatter
Approximate Formula of Delay-Time Variance in Renewal-Input General-Service-Time Single-Server Queueing System

Approximate formulas for the variance of the waiting time (also called the delay-time variance) in a renewal-input general-service-time single-server (GI/GI/1) system play an important role in practical applications of queueing theory. However, there exists almost no literature on approximate formulas for the delay-time variance in the GI/GI/1 system. The goal of this paper is to present such an approximate formula. Our approach is based on combining a higher-moment relationship between the unfinished work and the waiting time with a diffusion process approximation for the unfinished work. To derive the former relationship, we apply Miyazawa’s rate conservation law for stationary point processes. Our approximate formula is shown to converge to the exact result for the Poisson-input system as the traffic intensity tends to unity. The accuracy of our approximation is validated by simulation results.

Yoshitaka Takahashi, Yoshiaki Shikata, Andreas Frey
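
For orientation (this is a classical result, not the paper's formula), the analogous two-moment approximation for the mean waiting time in a GI/GI/1 queue is Kingman's heavy-traffic formula, where ρ is the traffic intensity, E[S] the mean service time, and c_a², c_s² the squared coefficients of variation of the interarrival and service times; the paper pursues a comparable approximation for the variance rather than the mean.

```latex
\[
  \mathbb{E}[W] \;\approx\; \frac{\rho}{1-\rho}\,
  \frac{c_a^{2}+c_s^{2}}{2}\,\mathbb{E}[S]
\]
```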
On solving strong multistage nonsymmetric stochastic mixed 0-1 problems

In this note we present the basic ideas of a parallelizable Branch-and-Fix Coordination algorithm for solving medium- and large-scale multistage mixed 0-1 optimization problems under uncertainty, where the uncertainty is represented by nonsymmetric scenario trees. An assignment of the constraint matrix blocks into independent scenario cluster MIP submodels is performed by a so-called cluster splitting variable compact representation (explicit non-anticipativity constraints between the cluster submodels up to the break stage t*). Some computational experience using CPLEX within COIN-OR is reported while solving a large-scale real-life problem.

Laureano F. Escudero, M. Araceli Garín, María Merino, Gloria Pérez
A Cost-Equation Analysis of General-Input General-Service Processor Sharing System

In the computer-communication field, we frequently encounter a situation in which the processor sharing (PS) rule is adopted for a time-shared server next to the first-come-first-serve (FCFS) rule. There has been much work on the Poisson-input general-service M/GI/1 (PS) system. However, there have been few results for a general-input general-service GI/GI/1 (PS) system. We deal with this general GI/GI/1 (PS) system. We show that the cost-equation analysis enables us to derive the relationship between the mean (time-average) unfinished work and the mean (customer-average) sojourn time. Our relationship is then applied to extend and generalize the previous results, e.g., Brandt et al.’s relationship between the mean (customer-average) sojourn times under the FCFS and PS rules, and Kleinrock’s conservation law for the M/GI/1 (PS) system.

Kentaro Hoshi, Naohisa Komatsu, Yoshitaka Takahashi
Multiperiod Stochastic Optimization Problems with Time-Consistent Risk Constraints

Coherent risk measures play an important role in building and solving optimization models for decision problems under uncertainty. We consider an extension to multiple time periods, where a risk-adjusted value for a stochastic process is recursively defined over the time steps, which ensures time consistency. A prominent example of a single-period coherent risk measure that is widely used in applications is Conditional-Value-at-Risk (CVaR). We show that a recursive calculation of CVaR leads to stochastic linear programming formulations. For the special case of the risk-adjusted value of a random variable at the time horizon, a lower bound is given. The possible integration of the risk-adjusted value into multi-stage mean-risk optimization problems is outlined.

M. Densing, J. Mayer
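
As a pointer to the underlying construction (standard definitions, with details possibly differing from the paper), the single-period CVaR of a loss X at level α has the Rockafellar–Uryasev minimization form, and a time-consistent multiperiod measure can be obtained by nesting it backwards over the stages:

```latex
\[
  \mathrm{CVaR}_{\alpha}(X) \;=\; \min_{t\in\mathbb{R}}
  \Bigl\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\bigl[(X-t)^{+}\bigr] \Bigr\},
  \qquad
  \rho_{t} \;=\; \mathrm{CVaR}_{\alpha}\!\bigl(X_{t+1} + \rho_{t+1} \,\big|\, \mathcal{F}_{t}\bigr),
  \quad \rho_{T} \;=\; 0 .
\]
```

On a finite scenario tree the inner minimization can be rewritten with auxiliary variables and linear constraints, which is what leads to the stochastic linear programming formulations mentioned in the abstract.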
Robustness in SSD portfolio efficiency testing

This paper deals with portfolio efficiency testing with respect to the second-order stochastic dominance (SSD) criterion. These tests classify a given portfolio as efficient or inefficient and give some additional information for possible improvements. Unfortunately, such tests usually assume a discrete probability distribution of returns with equiprobable scenarios. Moreover, they can be very sensitive to the probability distribution. Therefore, [1] derived an SSD portfolio efficiency test for a general discrete distribution and its robust version allowing for changes in the probabilities of the scenarios. However, these formulations are not suitable for numerical computations. Therefore, in this paper, numerically tractable reformulations of these SSD efficiency tests are derived.

Miloš Kopa
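
As a brief reminder of the criterion being tested (the standard definition, not a result of the paper): a return X dominates a return Y by second-order stochastic dominance if and only if

```latex
\[
  \mathbb{E}\bigl[(t - X)^{+}\bigr] \;\le\; \mathbb{E}\bigl[(t - Y)^{+}\bigr]
  \qquad \text{for all } t \in \mathbb{R},
\]
```

equivalently, E[u(X)] ≥ E[u(Y)] for every nondecreasing concave utility function u. On a discrete scenario set these shortfall inequalities reduce to finitely many linear constraints, which is what makes LP-based efficiency tests and their robust counterparts computationally accessible.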
Order Fulfillment and Replenishment Policies for Fully Substitutable Products

We consider a fully substitutable multi-product inventory system in which customer demand may be satisfied by delivering any combination of products. A customer requesting a certain quantity D may accept different combinations of products as long as the order of size D is fulfilled. We model the system as a Markov decision process and develop an approximate dynamic programming algorithm to determine order fulfillment and replenishment policies. In order to reduce the run time of the algorithm, we use a two-layer neural network that iteratively fits a function to the state values and finds an approximately optimal order fulfillment and can-order type replenishment policy.

Beyazit Ocaktan, Ufuk Kula
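
To illustrate the value-function approximation step in a generic way, the sketch below fits a two-layer (single hidden layer) network to sampled state values by plain gradient descent. The states, target values and network size are fabricated for the example and do not reflect the instances studied in the paper, where the targets would come from simulated Bellman updates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: sampled two-product inventory states and noisy value estimates.
states = rng.integers(0, 20, size=(256, 2)).astype(float)
values = -(np.abs(states - 10).sum(axis=1)) + rng.normal(0, 0.5, 256)

# Two-layer network: input -> tanh hidden layer -> linear output.
n_hidden = 16
W1 = rng.normal(0, 0.1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)

for _ in range(2000):
    h = np.tanh(states @ W1 + b1)                 # hidden activations
    pred = (h @ W2 + b2).ravel()                  # predicted state values
    err = pred - values
    # backpropagation of the squared-error loss
    g2 = h.T @ err[:, None] / len(err)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    g1 = states.T @ dh / len(err)
    gb1 = dh.mean(axis=0)
    for p, g in ((W2, g2), (b2, gb2), (W1, g1), (b1, gb1)):
        p -= 0.01 * g                             # gradient-descent update

print("final MSE:", round(float((err ** 2).mean()), 3))
```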
Stochastic Programming DEA Model of Fundamental Analysis of Public Firms for Portfolio Selection

A stochastic programming (SP) extension to traditional Data Envelopment Analysis (DEA) is developed for the case where input/output parameters are random variables. The SPDEA framework yields a robust performance metric for the underlying firms by controlling for outliers and data uncertainty. Using accounting data, SPDEA determines a relative financial strength (RFS) metric that is strongly correlated with the stock returns of public firms. In contrast, the traditional DEA model overestimates actual firm strengths. The methodology is applied to public firms covering all major U.S. market sectors using their quarterly financial statement data. RFS-based portfolios yield superior out-of-sample performance relative to sector-based ETF portfolios or a broader market index.

N. C. P. Edirisinghe

Accounting and Revenue Management

Frontmatter
Consumer Choice Modelling in Product Line Pricing: Reservation Prices and Discrete Choice Theory

In the literature on product line pricing, consumer choice is often modelled using the max-surplus choice rule. In terms of this concept, consumer preferences are represented by so-called reservation prices and the deterministic decision rule is to choose the product that provides the highest positive surplus. However, the distribution of the reservation prices often seems somewhat arbitrary. In this paper, we demonstrate how reservation prices can be obtained using discrete choice analysis and that these two concepts are not as different as often perceived in the literature. A small example illustrates this approach, using data from a discrete choice model taken from the literature.

Stefan Mayer, Jochen Gönsch
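
A toy Python comparison of the two choice rules for a single consumer may help; the prices, reservation prices and the no-purchase normalisation are invented for illustration only.

```python
import math

# Hypothetical prices and one consumer's reservation prices (illustration only).
prices      = {"A": 10.0, "B": 14.0, "C": 19.0}
reservation = {"A": 12.0, "B": 15.0, "C": 18.0}

# Max-surplus rule: buy the product with the largest positive surplus r_j - p_j.
surplus = {j: reservation[j] - prices[j] for j in prices}
best = max(surplus, key=surplus.get)
print("max-surplus choice:", best if surplus[best] > 0 else "no purchase")

# Logit counterpart: treat the surplus as the deterministic utility v_j and add
# i.i.d. Gumbel noise, which yields P(j) = exp(v_j) / (1 + sum_k exp(v_k)),
# the "1" being a no-purchase option with utility normalised to zero.
expv = {j: math.exp(v) for j, v in surplus.items()}
denom = 1.0 + sum(expv.values())
print("logit probabilities:", {j: round(e / denom, 3) for j, e in expv.items()})
```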
Incentives for Risk Reporting with Potential Market Entrants

We analyze a situation in which an incumbent firm, endowed with private information about a risk factor in its market, can credibly disclose quantitative risk information to the market and, thus, also to its opponents. Favorable information increases the market price of the firm, but it may also induce the opponent to enter the market, which imposes a proprietary cost on the firm. We show that there exist partial-disclosure equilibria with two distinct nondisclosure intervals. If the firm does not disclose, the opponent will not enter the market. We conclude that one reason for the empirically observed lack of quantitative risk disclosure is the fact that firms are required to explain the underlying assumptions and models that they use to measure the risk, which can lead to important competitive disadvantages.

Anne Chwolka, Nicole Kusemitsch
On the Integration of Customer Lifetime Value into Revenue Management

This paper provides insights into assessing customer relationships in a revenue management environment and depicts methodological problems that may arise in this context. For this purpose, we formulate a multi-period deterministic linear programming model, which accounts for customers’ future demand behavior being dependent on the provider’s previous availability decisions. Moreover, on the basis of an illustrative example, we investigate the impact on the control policy of a traditional deterministic linear program when both the product-related revenue and the previously developed long-term profitability measure are considered.

Johannes Kolb

Forecasting, Neural Nets and Fuzzy Systems

Frontmatter
Formulation of A Sale Price Prediction Model Based on Fuzzy Regression Analysis

It is indispensable for companies to adopt marketing strategies that predict and analyze sale prices that meet customer satisfaction. In a sale price prediction model, human factors are included among the elements composing the model, and it is becoming more difficult to define and describe the entire system precisely. Therefore, when we obtain data from such a model, the data are accompanied by human subjective and experiential uncertainty. In this paper, we develop a theoretical formulation of a sale price prediction model based on fuzzy regression with fuzzy input-output data (SPP-model). The solution of the SPP-model is found by solving two LP problems, a Min-problem and a Max-problem, which indicate the upper and lower bounds of possibility for the solutions (prices).

Michihiro Amagasa
Visualizing Forecasts of Neural Network Ensembles

Advanced neural network architectures such as Historically Consistent Neural Networks (HCNN) offer a host of information. HCNN produce distributions of multi-step, multi-asset forecasts. Exploiting the entire informational content of these forecasts is difficult for users because of the sheer amount of numbers. To alleviate this problem, some kind of aggregation, e.g., the ensemble mean, is often used. With a prototypical visualization environment we show that this might lead to a loss of important information. It is common to simply plot every possible path; however, this approach does not scale well and becomes unwieldy when the ensemble includes several hundred members. We use heat-map-style visualization to grasp distributional features and are able to visually extract forecast features. Heat-map-style visualization clearly shows when ensembles split into different paths, which can make the forecast mean a bad representative of these multi-modal forecast distributions. Our approach also allows us to visualize forecast uncertainty. The results indicate that forecast uncertainty does not necessarily increase significantly for future time steps.

Hans-Jörg von Metthenheim, Cornelius Köpp, Michael H. Breitner
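
A minimal sketch of the heat-map idea, using a synthetic random-walk ensemble in place of HCNN forecasts (ensemble size, horizon and all other parameters are assumptions for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_members, horizon = 500, 60          # ensemble size and forecast horizon (assumed)

# Synthetic ensemble: random-walk paths standing in for HCNN ensemble forecasts.
paths = np.cumsum(rng.normal(0.0, 1.0, size=(n_members, horizon)), axis=1)

# Heat map: bin (time step, forecast value) pairs over all members so that the
# colour intensity shows where the forecast distribution concentrates or splits.
t = np.tile(np.arange(horizon), n_members)
v = paths.ravel()
plt.hist2d(t, v, bins=[horizon, 80], cmap="viridis")
plt.plot(paths.mean(axis=0), color="white", lw=1.5, label="ensemble mean")
plt.xlabel("forecast step"); plt.ylabel("forecast value")
plt.legend(); plt.colorbar(label="member count")
plt.show()
```

Binning all (time step, value) pairs of the ensemble reveals where the forecast distribution concentrates, splits into separate paths, or widens, which is hard to read off from several hundred overlaid line plots.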
Forecasting Market Prices with Causal-Retro-Causal Neural Networks

Forecasting of market prices is a basis of rational decision making [Zim94]. Recurrent neural networks (RNN) in particular offer a framework for the computation of a complete temporal development. Our applications include short-term (20 days) and long-term (52 weeks) forecast models. We describe neural networks (NN) along a correspondence principle, representing them in the form of equations, architectures and embedded local algorithms.

Hans-Georg Zimmermann, Ralph Grothmann, Christoph Tietz

Award Winners

The Gomory-Chvátal Closure of a Non-Rational Polytope is a Rational Polytope

The question as to whether the Gomory-Chvátal closure of a non-rational polytope is a polytope has been a longstanding open problem in integer programming. In this paper, we answer this question in the affirmative, by combining ideas from polyhedral theory and the geometry of numbers.

Juliane Dunkel, Andreas S. Schulz
Dynamic Fleet Management for International Truck Transportation

Recent developments in IC technologies have facilitated the practical application of dynamic vehicle routing systems. So far, most of the available dynamic approaches have dealt with local area environments. In this work, however, the focus is set to wide area dynamic freight transportation (tramp transportation mode) covering the entire European continent. We develop and evaluate two new dynamic planning approaches and incorporate all important real-life restrictions, such as regulations on driving hours (EC 561), working hours and traffic bans. Extensive numerical tests are carried out with a five-week real-life data set from an international freight forwarding company. The results indicate that significant improvements, especially in terms of empty kilometers and service quality, can be achieved.

Steffen Schorpp
Algorithmic Cost Allocation Games: Theory and Applications

This article gives an overview of some of the results of the author’s PhD thesis [7]. The thesis deals with the cost allocation problem, which arises when several participants share the costs of building or using a common infrastructure. We attempt to answer the question: what is a fair cost allocation among participants? By combining cooperative game theory and state-of-the-art algorithms from linear and integer programming, our work not only defines fair cost allocations but also calculates them numerically for large real-world applications.

Nam-Dũng Hoàng
Backmatter
Metadata
Title
Operations Research Proceedings 2011
Editors
Diethard Klatte
Hans-Jakob Lüthi
Karl Schmedders
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29210-1
Print ISBN
978-3-642-29209-5
DOI
https://doi.org/10.1007/978-3-642-29210-1