
2009 | Book

Operations Research Proceedings 2008

Selected Papers of the Annual International Conference of the German Operations Research Society (GOR) University of Augsburg, September 3-5, 2008

Edited by: Bernhard Fleischmann, Karl-Heinz Borgwardt, Robert Klein, Axel Tuma

Publisher: Springer Berlin Heidelberg


About this book

The international conference “Operations Research 2008”, the annual meeting of the German Operations Research Society (GOR), was held at the University of Augsburg on September 3-5, 2008. About 580 participants from more than 30 countries presented and listened to nearly 400 talks on a broad range of Operations Research. The general subject “Operations Research and Global Business” stresses the important role of Operations Research in improving decisions in the increasingly complex business processes of a global environment. The plenary speakers Morris A. Cohen (Wharton School) and Bernd Liepert (Executive Board of KUKA Robotics) addressed this subject. Moreover, one of the founders of Operations Research, Saul Gass (University of Maryland), gave the opening speech on the early history of Operations Research. This volume contains 93 papers presented at the conference, selected by the program committee and the section chairs, forming a representative sample of the various subjects dealt with at Operations Research 2008. The volume follows the structure of the conference, with 12 sections, grouped into six “Fields of Applications” and six “Fields of Methods and Theory”. This structure in no way means a separation of theory and application, which would be detrimental in Operations Research, but displays the large spectrum of aspects in the focus of the papers. Of course, most papers present theory, methods and applications together.

Table of Contents

Frontmatter

Finance and Accounting

Frontmatter
1. Smoothing Effects of Different Ways to Cope with Actuarial Gains and Losses Under IAS 19

International accounting for pension obligations is one of the most complex issues in accounting. According to the International Accounting Standard (IAS) 19 ‘employee benefits’, the defined benefit obligation has to be measured by the projected unit credit method (IAS 19.64). Experience deviations from the expected values for future salary/wage increases, pension increases, fluctuation, and mortality, as well as changes in the discount rate, cause actuarial gains and losses, which are explicitly considered within the IAS 19 accounting system. Actuarial gains and losses are unavoidable to a certain degree and may cause fluctuations in pension costs. IAS 19 offers several ways to cope with actuarial gains and losses. The so-called corridor approach is explicitly designed to smooth pension costs (IAS 19 BC 39; see [1, p. 127]). Generally, smoothing is contrary to the overall objective of providing information that is useful for economic decisions. The research question of this paper is to quantify the smoothing effects of the different ways to cope with actuarial gains and losses. Furthermore, we suggest a modification that results in moderate smoothing, but without a loss of decision usefulness, because it avoids an artificial source of actuarial gains and losses and reduces complexity. As we are interested in pure accounting effects, we focus on unfunded pension plans. Thus, the results are not influenced by the design of the entity's contributions to an external fund or by the investment policy of such a fund.

Matthias Amen
2. A Regime-Switching Relative Value Arbitrage Rule

The relative value arbitrage rule, also known as “pairs trading” or “statistical arbitrage”, is a well-established speculative investment strategy on financial markets, dating back to the 1980s. Today, hedge funds and investment banks in particular extensively implement pairs trading as a long/short investment strategy. Based on relative mispricing between a pair of stocks, pairs trading strategies create excess returns if the spread between two normally comoving stocks has moved away from its equilibrium path and is assumed to be mean reverting, i.e. deviations from the long-term spread are only temporary effects. In this situation, pairs trading suggests taking a long position in the relatively undervalued stock, while the relatively overvalued stock should be shorted. The pairs are formed on the basis of a cointegration analysis of the historical prices. Consequently, pairs trading represents a form of statistical arbitrage where econometric time series models are applied to identify trading signals. However, fundamental economic reasons might cause simple pairs trading signals to be wrong. Think of a situation in which a profit warning for one of the two stocks entails a persistent widening of the spread, whereas for the other no new information is circulated. Under these circumstances, betting on the spread to revert to its historical mean would imply a loss.

Michael Bock, Roland Mestel
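The mean-reversion mechanism described above can be illustrated with a simple z-score signal on the log-price spread. This is a minimal sketch of generic pairs trading, not the authors' regime-switching rule; the least-squares hedge ratio, the 30-observation window, and the 2-sigma entry threshold are all assumed parameters.

```python
import numpy as np

def pairs_signal(p1, p2, window=30, entry=2.0):
    """Return +1 (long stock 1 / short stock 2), -1 (short 1 / long 2),
    or 0 (no trade) from a z-score of the log-price spread.
    Illustrative only: the hedge ratio is a least-squares slope on the
    full history; window and threshold are arbitrary choices."""
    x = np.log(np.asarray(p1, dtype=float))
    y = np.log(np.asarray(p2, dtype=float))
    beta = np.polyfit(y, x, 1)[0]          # hedge ratio (slope of x on y)
    spread = x - beta * y
    recent = spread[-window:]
    z = (spread[-1] - recent.mean()) / recent.std(ddof=1)
    if z > entry:
        return -1                          # spread unusually wide
    if z < -entry:
        return 1                           # spread unusually narrow
    return 0                               # near equilibrium: no position
```

A trader would hold the position until the spread reverts toward its mean; the abstract's caveat applies — a persistent, fundamentally driven widening makes this signal lose money.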
3. Performance Measurement, Compensation Schemes, and the Allocation of Interdependence Effects

In practice, firms often exhibit divisionalized structures in which headquarters delegate decision rights to divisional managers. In this paper, we examine the problem of allocating interdependence effects stemming from interdivisional trade. For this, we analyze a model in which a divisionalized firm contracts with two managers to operate their divisions and to make relationship-specific investment decisions. Contracts can be based on both divisional profits and hence depend on the allocation of interdependence effects. In line with the transfer pricing literature, we discuss a centralized as well as a decentralized setting with respect to the allocation authority. Issues of mechanism design concerning divisional trade are extensively discussed in the literature. Most closely related to our paper is [1], which shows that profit sharing induces managers to make first-best investment decisions in a decentralized setting. However, profit sharing imposes extra risk on the managers and therefore may not be optimal. Our paper extends the analysis of [1] by incorporating moral hazard problems with respect to investment decisions. Further, we distinguish between different organizational designs.

Michael Krapp, Wolfgang Schultze, Andreas Weiler
4. Coordination of Decentralized Departments with Existing Sales Interdependencies

This study considers a company which consists of two decentralized business units forming a value network. One business unit produces and sells a main product and the other business unit produces and sells a complementary by-product. The company pursues a firm-wide differentiation strategy. The following model assumes that the business unit responsible for the by-product can implement this firm-wide differentiation strategy by improving, for example, the quality or the functionality of the by-product. Because of the complementary relationship of the products, the by-product and the main product obtain a unique selling proposition through a specific investment in the by-product. The specific investment is borne entirely by that business unit, acting on its own authority, but it increases the revenue of both business units. The profit induced by the specific investment is therefore not allocated fairly. An underinvestment problem arises which endangers the objective of firm-wide profit maximization. Coordination instruments are used to improve the reconciliation between separated business units. This article compares a contribution margin based, a revenue based and a quantity based investment contribution mechanism as coordination instruments for achieving goal congruence between the considered business units, inducing efficient production and investment decisions as well as overall profit maximization. The recent literature mostly focuses on profit sharing (e.g. [3]), revenue sharing (e.g. [1] and [2]) or transfer pricing (for an overview see [4]) to coordinate a two-stage value chain of a decentralized company with existing production interdependencies. In contrast, this study considers production and investment decisions with existing sales interdependencies in a value network.
The remainder of this paper is organized as follows: Section 2 presents the framework and the solution of an equilibrium model using a contribution margin based, a revenue based and a quantity based investment contribution mechanism. In Section 3, the performance of these coordination mechanisms is compared on the basis of the expected overall firm profit. Circumstances are identified under which each investment contribution mechanism dominates the others.

Christian Lohmann
5. Empirical Examination of Fundamental Indexation in the German Market

Fundamental Indexation is the name of an index methodology that selects and weights index constituents by means of fundamental criteria like total assets, book value or number of employees. This paper examines the performance of fundamental indices in the German equity market during the last 20 years. Furthermore, the index returns are analysed under the assumption of an efficient as well as an inefficient market. Index returns in efficient markets are explained by applying the three-factor model for stock returns of [2]. The results show that the outperformance of fundamental indices is partly due to a higher risk exposure, particularly to companies with a low price-to-book ratio. By relaxing the assumption of market efficiency, a return drag of capitalisation-weighted indices can be deduced. The index methodology implies an investment strategy that benefits from well-known market anomalies like the value effect without relying on active portfolio management. Furthermore, under the assumption of an inefficient market there is an added value of fundamental indices.

Max Mihm
6. Trading in Financial Markets with Online Algorithms

When trading in financial markets, we are interested in buying at low prices and selling at high prices. We suggest an active reservation-price based trading algorithm which tries to solve this type of problem. The effectiveness of the algorithm is analyzed from a worst-case point of view. Using simulation on historical data, we address the question of whether the suggested algorithm outperforms buy-and-hold policies.

Esther Mohr, Günter Schmidt
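The worst-case perspective mentioned above is the setting of competitive analysis for online search. As a baseline, a classic (passive) reservation-price policy for one-way selling with prices known to lie in an interval [m, M] can be sketched as follows; the paper's active reservation-price algorithm may use a different rule.

```python
import math

def reservation_sell(prices, m, M):
    """One-way search: sell the asset at the first quoted price that
    reaches the reservation price sqrt(M*m); if no quote does, sell at
    the last price (forced sale at the deadline). With all prices in
    [m, M], the obtained price is within a factor sqrt(M/m) of the best
    price in hindsight -- the classic worst-case guarantee."""
    p_star = math.sqrt(M * m)
    for price in prices:
        if price >= p_star:
            return price
    return prices[-1]
```

The worst-case comparison against buy-and-hold then amounts to comparing the obtained price with the price at the end of the horizon.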

Health Care and Environment

Frontmatter
7. A New Model and Tabu Search Approach for Planning the Emergency Service Stations

The location planning of emergency service stations is crucial, especially in densely populated cities with heavy traffic such as Istanbul. In this paper, we propose a Backup Double Covering Model (BDCM), a variant of the well-known Maximal Covering Location Problem, that requires two types of services to plan the emergency service stations. The objective of the model is to maximize the total population serviced by two distinct emergency service stations within different time limits, where the total number of stations is limited. We propose a Tabu Search (TS) approach to solve the problem. We conduct an extensive experimental study on randomly generated data sets with different parameters to demonstrate the effectiveness of the proposed algorithm. Finally, we apply our TS approach to planning the emergency service stations in Istanbul.

Ayfer Başar, Bülent Çatay, Tonguç Ünlüyurt
8. Integration of Carbon Efficiency into Corporate Decision-Making

Given increasing carbon emissions and the resulting climate change discussions, methods to assess corporate carbon emissions gain importance in research and practice. A transparent and integrated method for decision support to control ratios like eco-efficiency does not exist. Against this background, a concept for the implementation of carbon emission accounting indicators into decision-making processes has been developed, which consists of different aggregation levels and their connections. Within this contribution, a model for the level of internal planning is presented and applied to assess the carbon reduction measures of a dyeing company.

Britta Engel, Grit Walther, Thomas S. Spengler
9. Production Planning Under Economic and Ecologic Objectives - A Comparison in Case Studies from Metals Production

Competition forces industrial companies to continuously improve resource efficiency. In the field of recycling of by-products from the metal industries, coke and coal are the most important primary resources. Hence, improvements can be achieved either by reducing the necessary coke and coal input or by maximizing the production output. As the use of coke and coal is also responsible for the major part of the associated CO2 emissions, enhancing resource efficiency also reduces these emissions. It is the aim of this contribution to further analyze the relationship between economic and environmental objectives, such as the maximization of the contribution margin or the minimization of CO2 emissions, in operational production planning. The analysis is carried out for two case studies. We use an approach based on a problem-adequate modeling of the production processes that enables modeling the input-output relationships between used resources and outputs (products and by-products). In the following section the general methodological approach is explained before the case studies are described. Finally, conclusions are drawn.

Magnus Fröhling, Frank Schwaderer, Otto Rentz
10. Production Planning with Common Parts in Closed-Loop Supply Chains

Common parts in different product variants can lead to decreased procurement costs if recovery strategies are applied, whereby new parts are substituted by recovered ones. Since long- or mid-term supplier contracts for these new parts exist, demand as well as return forecasts are essential for negotiating such contracts. In this article, forecasting issues are addressed and a planning procedure to calculate the need for new parts is given.

Jenny Steinborn, Grit Walther, Thomas S. Spengler

Production and Service Management

Frontmatter
11. Production Planning in Continuous Process Industries: Theoretical and Optimization Issues

A solid theoretical basis related to production planning for different process systems has been established in the literature. However, continuous flow process industries with non-discrete items remain the least researched with respect to specific theoretical and optimization issues. The purpose of this work is to locate continuous non-discrete production within the theoretical frameworks and to apply planning techniques to a concrete industrial example. Subsequently, two mathematical optimization models are developed and analyzed in conjunction with their practical implications and computational results. The first formulation allows for a better utilization of the production capacity with respect to energy costs, whereas the second one maximizes the production output.

Krystsina Bakhrankova
12. Cost-Oriented Models of Network Industries Price Regulation

The existence of pure monopoly in network industries increases the role of regulation mechanisms in connection with the objectification and increase of their social effectiveness. The objective of regulation mechanisms is to find an appropriate proportion between price and product supply of a network industry under the assumption of a competitive market. With regard to the analysis of equilibrium in network industry models, it is important to point out that, apart from protecting competition policy, the state fulfils another specific task: the regulation of network industries. The aim of the paper is to examine the equilibrium conditions in the market of network industries. The state influences the proportional relations between price and supply of network industry production. The conditions for equilibrium of network industries and methods of their regulation are examined in the paper. The stress is laid on regulation based on cost, i.e. return-over-costs regulation.

Eleonora Fendekova, Michal Fendek
13. Production Chain Planning in the Automotive Industry

We allocate a given production workload between L production lines producing P products with a subsequent buffer of limited capacity by modeling the problem as a classical transportation problem. The model is embedded in a Dynamic Programming approach that finds a cost-optimal solution exploiting the given flexibility of an automotive plant with respect to production capacity, production volume and staff. Preliminary computational results show that our approach solves real-world instances within a few hours.

Claas Hemig, Jürgen Zimmermann
14. Two-Dimensional Guillotineable-Layout Cutting Problems with a Single Defect - An AND/OR-Graph Approach

In this paper, a specific cutting problem will be considered that has only received very limited attention in the literature so far, namely one in which the plate to be cut down contains a (rectangular) defective region. For the corresponding cutting problem without defects, Morabito et al. (cf. [3]) have introduced a solution method which is based on the AND/OR-graph approach. We suggest several modifications of this method which allow for dealing with a plate that contains a single defect, a problem type introduced by Carnieri et al. (cf. [1]).

Vera Neidlein, Andréa C.G. Vianna, Marcos N. Arenales, Gerhard Wäscher
15. The Single Item Dynamic Lot Sizing Problem with Minimum Lot Size Restriction

In practice, production managers prefer to determine an optimal production plan by using minimum lot size restrictions instead of setup costs [3, 5]. Anderson and Cheah [1] also noticed that in “lot sizing practice out-of-pocket setup cost are commonly accounted for by specifying a minimum batch size parameter”. Therefore, the objective is to minimize the total inventory cost only with respect to the lot size restrictions, and not the sum of setup cost and inventory cost, as in mainstream models. In the paper we formulate the single item dynamic lot sizing problem with minimum lot size restriction and elaborate a dynamic programming algorithm for its solution. Preliminary computational results show that the algorithm is highly efficient and determines optimal solutions in negligible time.

Irena Okhrin, Knut Richter
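To make the problem statement concrete, the following brute-force dynamic program solves tiny instances of the described model: minimize inventory holding cost when every lot is either zero or at least a minimum lot size, with no setup cost. It is a transparent sketch under assumed integer demands and zero starting/ending inventory, not the authors' (far more efficient) algorithm.

```python
def min_holding_cost(demand, lot_min, h=1.0):
    """Exhaustive DP for single-item dynamic lot sizing where each lot is
    either 0 or at least lot_min units and only end-of-period holding
    cost h per unit is charged. Integer demands and zero starting/ending
    inventory are assumed; states are (period, entering inventory).
    Returns the minimum total holding cost, or inf if infeasible."""
    best = {0: 0.0}                        # entering inventory -> cost so far
    for t, d in enumerate(demand):
        cap = sum(demand[t:])              # never stock beyond remaining demand
        nxt = {}
        for inv, cost in best.items():
            for q in [0] + list(range(lot_min, cap - inv + 1)):
                end = inv + q - d          # inventory after serving period t
                if end < 0:
                    continue               # demand must be met without backlog
                c = cost + h * end
                if c < nxt.get(end, float("inf")):
                    nxt[end] = c
            best = best
        best = nxt
    return best.get(0, float("inf"))
```

For demand (2, 3, 2) and a minimum lot of 5, the only feasible plans over-produce early, so some holding cost is unavoidable; with a minimum lot of 1, lot-for-lot production achieves zero holding cost.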
16. On the Relation Between Industrial Product-Service Systems and Business Models

Companies, especially in B-to-B markets, increasingly focus on the value generated for customers through innovative business models instead of merely selling products. Following such customer-oriented strategies dissolves the boundary between products and services. Most companies' offerings can at best be characterized as bundles of products and services, which we refer to as Industrial Product-Service Systems (IPSS) in this paper. IPSS are problem solutions for B-to-B markets, which consist of customized configurations of product and service parts tailor-made to meet individual customer needs. These product and service parts of IPSS exert a mutual influence on each other, owing to an integrated development and operation. The possibility of adjusting an original configuration along the IPSS life-cycle by partially substituting product and service parts (IPSS-flexibility) is of special importance with regard to IPSS. Integrating services into the core product, however, triggers a transition from a transaction-based to a relationship-based (long-term) business connection [4], which is accompanied by challenges for the business parties involved. As a consequence, the contractual and implicit relations need to be re-designed, striving to reallocate risks and incentives. In this context, business models which are based on the dynamic bundles describe the design of the customer-supplier relationship in the form of performance schemes and responsibilities. With business models concentrating on use orientation, the utilizability of the manufacturing assets is ensured. By doing so, the supplier executes business processes of the customer for the first time. Facing a result-oriented business model, the supplier is fully responsible for the output value. Thus, for both business models it is distinctive that the supplier participates in the customer's risks, due to the connection of the supplier's compensation with the output value of the manufacturing asset.

Alexander Richter, Marion Steven
17. Dynamic Bid-Price Policies for Make-to-Order Revenue Management

A challenge of make-to-order (MTO) manufacturing lies in maximizing the total contribution margin by compiling the optimal order portfolio, given a fixed capacity. Particularly, bid-price based heuristics are successfully applied. The objective of this paper is to provide an extension to traditional bid-price based revenue management in MTO. Based on an analysis of the pros and cons of anticipative and reactive approaches, a hybrid approach combining both elements is proposed.

Thomas Spengler, Thomas Volling, Kai Wittek, Derya E. Akyol

Scheduling and Project Management

Frontmatter
18. A GA Based Approach for Solving Multi Criteria Project Scheduling Problems

In this paper we present a Genetic Algorithm (GA) for generating efficient solutions for multi criteria resource constrained project scheduling problems where conflicting regular as well as non-regular performance measures have to be respected (cf. [4]). The GA-scheme is similar to the one developed by [1], i.e. the genes of the genotype do not only represent activities of the phenotype but also information on the decoding scheme and modus. Since a large number of (mostly similar) solutions is usually not of great help for a planner, not all efficient solutions are maintained, in order to reduce complexity. Thus, we use a “relaxed Pareto operator” which accepts only efficient solutions that differ (substantially) with respect to solution quality (and structure). We have applied this procedure to solve the Movie Shoot Scheduling Problem (MSSP) and we report first results.

Felix Bomsdorf, Ulrich Derigs, Elisabeth von Jagwitz
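The archive update behind a "relaxed Pareto operator" of the kind described above can be sketched as follows: a candidate enters the archive only if it is non-dominated and differs sufficiently from every stored solution. The Chebyshev distance in objective space and the tolerance eps are assumptions for illustration; the paper's operator may also compare solution structure.

```python
def relaxed_pareto_update(archive, candidate, eps=0.1):
    """Update a Pareto archive (all objectives to be minimized).
    Accept the candidate objective vector only if (a) no archived point
    dominates it and (b) it differs from every archived point by at
    least eps in some objective; archived points the candidate dominates
    are removed. eps and the distance measure are assumed choices."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))
    if any(dominates(a, candidate) for a in archive):
        return archive                     # candidate is dominated
    if any(max(abs(x - y) for x, y in zip(a, candidate)) < eps
           for a in archive):
        return archive                     # too similar to a stored solution
    kept = [a for a in archive if not dominates(candidate, a)]
    kept.append(candidate)
    return kept
```

Filtering near-duplicates keeps the set of solutions presented to the planner small, which is exactly the motivation given in the abstract.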
19. Solving the Batch Scheduling Problem with Family Setup Times

In this paper we address the machine scheduling problem involving family setup times and batching decisions. The m-machine flow shop system is considered with the objective of minimizing the makespan. We first present a mathematical formulation which is able to solve small instances. Subsequently, a tabu search algorithm with diverse neighbourhoods is proposed.

Udo Buscher, Liji Shen
20. On Single Machine Scheduling and Due Date Assignment with Positionally Dependent Processing Times

This paper addresses single machine scheduling problems in which the decision-maker controls two parameters: the due dates of the jobs and their processing times. In the problems under consideration, the jobs have to be assigned due dates, and the objective includes the cost of such an assignment, the total cost of discarded jobs and, possibly, the holding cost of the early jobs represented in the form of total earliness. The processing times of the jobs are not constant but depend on the position of a job in a schedule. We mainly focus on scheduling models with a deterioration effect. Informally, under deterioration the processing time is not a constant but changes according to some rule, so that the later a job starts, the longer it takes to process. An alternative type of scheduling model with non-constant processing times are models with a learning effect, in which the later a job starts, the shorter its processing time is. The two types of models are close but not entirely symmetric.

Valery S. Gordon, Vitaly A. Strusevich
21. Scheduling Component Placement Operations for Collect-and-Place Type PCB Assembly Machines

Component placement is one of the most time-consuming operations in printed circuit board (PCB) assembly and dominates the cycle time of any PCB assembly line. In our study, we focus on collect-and-place machines which first collect a number of electronic components from the component magazine and then place them one-by-one onto the PCB. With this type of machinery, two problems arise: generating placement tours and determining the placement sequence within each tour. To solve these problems, an efficient clustering algorithm to create placement tours and two modified nearest neighbor algorithms (MNNHs) to determine the placement sequence are proposed. The objective is to minimise the assembly cycle time per board. Numerical experiments show that high-quality solutions are obtained within very short CPU time.

Özgür Kulak, Serol Bulkan, Hans-Otto Günther
22. Scheduling of Multiple R&D Projects in a Dynamic and Stochastic Environment

In R&D departments, typically multiple projects are processed simultaneously and compete for scarce resources. The situation is dynamic, since new projects arrive continuously, and stochastic in terms of the interarrival times of the projects as well as of their work content. The problem of scheduling projects in this context such that the weighted tardiness is minimized is particularly difficult and has not been covered much in the literature. The contribution of this paper is an assessment, in the dynamic and stochastic context, of priority rules originally proposed for the static and deterministic context.

Philipp Melchiors, Rainer Kolisch
23. An Evolutionary Algorithm for Sub-Daily/Sub-Shift Staff Scheduling

Staff scheduling involves the assignment of a qualified employee to the appropriate workstation at the right time while considering various constraints. According to current research, employees spend 27 to 36% of their working time unproductively, depending on the sector [10]. Major reasons include a lack of planning and controlling. Most often, staff scheduling takes place based on prior experience or with the aid of spreadsheets [1]. Even with popular staff planning software, employees are regularly scheduled for one workstation per day. However, in many sectors, such as trade and logistics, the one-employee-one-station concept does not correspond to the actual requirements and sacrifices potential resources. Therefore, sub-daily (including sub-shift) planning should be an integral component of demand-oriented staff scheduling.

Volker Nissen, René Birnstiel
24. Scheduling of Modular Production Lines in Mobile Terminal Manufacturing Using MILP

The aim towards optimal utilization of business opportunities and manufacturing resources increases the requirements on both production planning and scheduling. One of the main goals is to maintain a high level of flexibility to permit a fast response to changed situations in the market. In this paper, a scheduling problem encountered in the flexible manufacturing of mobile terminals is discussed.

Frank Pettersson, Janne Roslöf
25. Revenue Maximization on Parallel Machines

Revenue management is essentially the process of allocating resources to the right customer at the right time and the right price (cf. [9]). A slightly different approach to revenue maximization can be found in “classical” scheduling theory (cf. [3]), where the goal is to maximize the criterion value, i.e. the profit, for some given values of the problem parameters (cf. [8]). Such a model finds many practical applications. For example, a set of jobs can represent a set of customer orders which may give a certain profit to a producer. Due to limited resources, modeled by a machine or a set of machines, the producer has to decide whether to accept or reject a particular order and how to schedule accepted orders in the system. Delays in the completion of orders cause penalties, which decrease the total revenue obtained from the realized orders. For this reason, maximizing revenue is strictly related to due-date-involving criteria (cf. [3]) such as minimizing tardiness or late work (cf. [11]). The maximum revenue objective function has been studied mostly for the single machine environment (cf. [2], [5], [8], [10]). In our research, we investigate the problem of selecting and executing jobs on identical parallel machines in order to maximize the total revenue (profit) with a weighted tardiness penalty.

Malgorzata Sterna, Jacek Juraszek, Erwin Pesch
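For a given selection and sequencing of accepted orders, the objective discussed above can be evaluated directly. The sketch below assumes a linear weighted-tardiness penalty subtracted from each accepted job's revenue; the job data and the dictionary layout are illustrative, not the paper's notation.

```python
def total_revenue(jobs, schedule):
    """Evaluate the total revenue of a schedule on parallel machines.
    jobs: name -> (revenue, tardiness_weight, processing_time, due_date).
    schedule: one job list per machine, in processing order; jobs left
    out are rejected and earn nothing. Each accepted job contributes its
    revenue minus weight * tardiness (a linear penalty is assumed)."""
    total = 0.0
    for machine in schedule:
        t = 0.0                                  # clock on this machine
        for name in machine:
            revenue, weight, p_time, due = jobs[name]
            t += p_time                          # completion time of the job
            total += revenue - weight * max(0.0, t - due)
    return total
```

Comparing such evaluations over different accept/reject decisions and sequences is exactly the search space of the selection-and-scheduling problem.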
26. A New Bottleneck-Based Heuristic for Reentrant Job Shops: A Case Study in a Textile Factory

The classical job shop assumes that each job visits a machine only once. In practice, this assumption is often violated. Recently, the reentrant job shop has become prominent, in which a certain job may visit a specific machine or a set of machines more than once during the process flow. Reentrant job shops can be found in many production systems, particularly in high-tech industries such as semiconductor manufacturing. Another example is the manufacturing of printed circuit boards that require both surface-mounted devices and conventional pin-through-hole devices. It is also employed in parts that go through the painting and baking divisions alternately for different coats of paint in a painting shop. The problem of minimizing makespan in a reentrant job shop is theoretically challenging. In fact, it is NP-hard in the strong sense even for the two-machine case [1]. For the solution of job shop scheduling problems (JSSPs), exact methods such as integer programming formulations [2] and branch-and-bound algorithms [3] have been developed to produce optimal solutions. However, their worst-case computational burden increases exponentially with the size of the problem instance. As noted in Aytug et al. [4], for industrial problems the computational time of any algorithm must be short enough that the resulting schedule can be used. Hence, a variety of heuristic procedures such as dispatching rules, decomposition methods, and metaheuristic search techniques have been proposed for finding “good” rather than optimal solutions in a reasonably short time.

Seyda Topaloglu, Gamze Kilincli
27. Project Scheduling with Precedence Constraints and Scarce Resources: An Experimental Analysis of Commercial Project Management Software

We report on the results of an experimental analysis where we have compared the resource allocation capabilities of recent versions of 5 commercial project management software packages against state-of-the-art solution methods from the literature. For our analysis, we have used 1560 RCPSP instances from the standard test set PSPLIB. The results indicate that using the automatic resource allocating feature of those packages may result in project durations that are considerably longer than necessary, in particular when the resource scarcity is high or when the number of activities is large.

Norbert Trautmann, Philipp Baumann

Supply Chain and Inventory Management

Frontmatter
28. Interactive Multi-Objective Stochastic Programming Approaches for Designing Robust Supply Chain Networks

Many attempts have been made to model and optimize supply chain design, most of which are based on deterministic approaches, see for example [3], [8], [4] and many others. In order to take into account the effects of the uncertainty in the production scenario, a two-stage stochastic model is proposed in this paper. There are a few research works addressing comprehensive (strategic and tactical issues simultaneously) design of supply chain networks using two-stage stochastic models including [6], [9], [1] and [7]. [2] developed a multi-objective stochastic programming approach for designing robust supply chains. Then, they used goal attainment technique, see [5] for details, to solve the resulting multi-objective problem. This method has the same disadvantages as those of goal programming; namely, the preferred solution is sensitive to the goal vector and the weighting vector given by the decision maker, and it is very hard in practice to get the proper goals and weights. To overcome this drawback, we use STEM method in this paper to solve this multi-objective model.

Amir Azaron, Kai Furmans, Mohammad Modarres
29. A MILP Model for Production and Distribution Planning in Consumer Goods Supply Chains

In the consumer goods industry there is often a natural sequence in which the various products are to be produced in order to minimize total changeover time and to maintain product quality standards. For instance, in the production of beverages, the final bottling and packaging lines determine the output rate of the entire production system. This type of production system is called “make-and-pack”. In this paper, a so-called block planning approach based on mixed-integer linear optimization modeling is presented which establishes cyclical production patterns for a number of pre-defined setup families.

Bilge Bilgen, Hans-Otto Günther
30. An Exact Discrete-Time Model of a Two-Echelon Inventory System with Two Customer Classes

In this paper, we analyze a two-echelon inventory system with one warehouse, one retailer, and a priority customer class with direct supply from the warehouse. The customer demands are arbitrarily distributed. At both stockpoints, inventory is controlled by a reorder-point policy with periodic review and fixed replenishment order sizes. For such a system, we present a discrete-time Markov chain model describing the inventory positions, from which the service levels for both customer classes can be computed. Both a simple prioritization rule, under which the priority customer class is served first, and a critical-level policy can be considered.
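The service-level computation described above can be approximated by simulation. The following sketch (our simplified single-location stand-in for the paper's exact Markov-chain analysis; all names and parameters are illustrative assumptions) estimates the fill rate of a periodic-review reorder-point policy with fixed order size:

```python
import random

def simulate_service_level(reorder_point, order_qty, lead_time, demand_fn,
                           periods=20_000, seed=1):
    """Simulate a periodic-review (r, Q) policy at a single stockpoint and
    estimate the fill-rate (beta) service level. Unmet demand is backordered."""
    random.seed(seed)
    on_hand = reorder_point + order_qty      # start with ample stock
    pipeline = []                            # (arrival_period, qty) of open orders
    served = demanded = 0
    for t in range(periods):
        # receive replenishments due this period
        on_hand += sum(q for (due, q) in pipeline if due == t)
        pipeline = [(due, q) for (due, q) in pipeline if due != t]
        # satisfy this period's demand as far as on-hand stock allows
        d = demand_fn()
        demanded += d
        served += min(d, max(on_hand, 0))
        on_hand -= d
        # periodic review: order in multiples of Q until position exceeds r
        position = on_hand + sum(q for (_, q) in pipeline)
        while position <= reorder_point:
            pipeline.append((t + lead_time, order_qty))
            position += order_qty
    return served / demanded
```

A generous reorder point yields a fill rate near one, while a tight one produces stockouts during the replenishment lead time.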

Lars Fischer, Michael Manitz
31. Supply Chain Coordination Models with Retailer's Attitudes Toward Risk

Supply Chain Management (SCM), which effectively integrates various facilities and partners such as suppliers, manufacturers, distributors, and retailers, is key to gaining competitive advantage. Nowadays, integrated supply chain management is possible due to advances in information technology. Despite these advances, it is still difficult to achieve the best supply chain performance without coordination among the members of a supply chain, because different facilities and partners in the supply chain may have different, conflicting objectives (see for instance [4]). We consider a two-level supply chain model in a newsvendor setting. First, we show a property of the supplier's share of the supply chain's expected profit in a buyback contract where the retailer is risk-neutral. Second, we discuss the effect of the retailer's attitude toward risk on coordination in a supply chain. Most of the previous literature assumes the retailer is risk-neutral, but recent studies indicate that the risk-averse approach in SCM plays a very crucial role in achieving better supply chain performance. For example, Wang et al. [5] studied the risk-averse newsvendor's order at a higher selling price. Zhang et al. [7] analyzed the supply chain coordination of a loss-averse newsvendor with contracts. Keren and Pliskin [2] studied a benchmark solution for the newsvendor problem with a risk-averse retailer and uniformly distributed demand.
We study this problem in a more general setting: using utility functions representing the retailer's preferences, the optimal order quantity placed by a risk-averse retailer is compared with that of a risk-neutral retailer under the wholesale price contract and the buyback contract, respectively. We also derive interesting properties relating the retailer's order quantity and Pratt's risk aversion function in the wholesale price contract and the buyback contract in a special case where both the demand function and the retailer's utility function are exponential.
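The qualitative effect discussed above can be reproduced numerically. The following sketch (our illustration under simplifying assumptions: wholesale price contract, no salvage value, discrete uniform demand, exponential utility) compares the order quantities of a risk-neutral and a risk-averse newsvendor:

```python
import math

def expected_objective(q, price, cost, demands, risk_aversion=0.0):
    """Average objective over a demand sample: expected profit if
    risk_aversion == 0, else expected exponential utility 1 - exp(-a * profit)."""
    total = 0.0
    for d in demands:
        profit = price * min(q, d) - cost * q
        total += profit if risk_aversion == 0 else 1 - math.exp(-risk_aversion * profit)
    return total / len(demands)

def best_order_quantity(price, cost, demands, risk_aversion=0.0):
    """Grid search for the order quantity maximizing the chosen objective."""
    candidates = range(0, max(demands) + 1)
    return max(candidates,
               key=lambda q: expected_objective(q, price, cost, demands, risk_aversion))
```

For demand uniform on {0, ..., 100}, price 10 and wholesale cost 4, the risk-neutral optimum is the critical-fractile quantity 60, while an exponential-utility retailer orders less, consistent with the comparison the paper develops.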

Harikrishnan K Kanthen, Chhaing Huy
32. On the Extension of the Theory of Logistic Operating Curves to Multi-Echelon Inventory Systems

For the design of logistics systems as well as for the planning and control of logistics processes, it is necessary to employ measurable performance indicators for their evaluation and to be able to assess qualitative and quantitative relations between these indicators. In engineering, the term "logistic operating curve" (logistische Kennlinie) is used for the representation of functional relationships between important performance indicators and their graphical depiction as curves. In their monograph "Logistische Kennlinien", P. Nyhuis and H.-P. Wiendahl [2] describe, among other things, how such a relationship between the indicators mean stock level and mean delivery delay can be derived approximately for a single-echelon inventory system. In several papers, Inderfurth and Schulz (see, among others, [1]) extend this approach by developing a methodology for the exact derivation of the inventory operating curve and comparing it with the approximate approach of Nyhuis and Wiendahl. In this contribution, the exact methodology is extended to a multi-echelon inventory system, thereby connecting to theoretical aspects of multi-echelon stochastic inventory management (see, among others, Tempelmeier [3]). To this end, the following section briefly explains the model framework and determines the extreme points of the exact operating curve (maximum values of mean stock level and mean delivery delay). The shape of the curve between these extreme points is the focus of the analysis in the third section, before the paper is briefly summarized in the last chapter and an outlook on future research directions is given.

Karl Inderfurth, Tobias Schulz
33. Setup Cost Reduction and Supply Chain Coordination in Case of Asymmetric Information

The model utilized in this paper captures a supply chain planning problem in which the buyer asks the supplier to switch the delivery mode to Just-in-Time (JiT). We characterize the JiT mode by low order sizes. The buyer gains several multidimensional advantages from a JiT delivery, which we aggregate into the buyer's holding costs per period. Hence, if the buyer faces high holding costs, she is supposed to have high advantages from a JiT delivery, and vice versa. On the other hand, smaller order sizes can increase the supplier's setup and distribution costs. In our modelling approach, the supplier's setup costs per period reflect these disadvantages. Yet, it is well known that small order sizes alone are not sufficient for a successful implementation of the JiT concept; setup cost reduction is therefore regarded as one main facilitator for JiT to be efficient. Our model depicts the need for accompanying process improvements through the supplier's option to invest in setup cost reduction (see [4]). From a supply chain perspective, an implementation of a JiT strategy will only be profitable if the buyer's cost advantages exceed the supplier's cost increase. Yet, this is not always the case. The supplier may thus have a strong incentive to convince the buyer to abandon the JiT strategy, i.e. to accept higher order sizes. However, the buyer is supposed to be in a strong bargaining position and will not be convinced unless she is offered compensation for the disadvantages of not implementing the JiT strategy. As long as Pareto improvements are possible, the supplier can compensate the buyer while improving his own performance. Nonetheless, the above-mentioned advantages of a JiT strategy are to a major extent private information of the buyer. Thus, they cannot easily be observed and valued by the supplier. The buyer will therefore apparently claim that switching towards higher order sizes causes substantial costs and that a high compensation is required.
Assuming the strategic use of private information, it is in the supplier's best interest to offer a menu of contracts (see [1]). Basically, this menu of contracts aligns the incentives of the supply chain members such that a buyer with low advantages from a JiT delivery will agree on higher order sizes than a buyer with high advantages from this supply mode. However, the incentive structure provided by this menu of contracts causes inefficiencies, because the resulting order sizes are too low compared to the supply chain's optimal solution. Starting from this insight, our main focus in this study is to analyse the impact of investments in setup cost reduction on this lack of coordination. Summing up, this paper combines two streams of research: the inefficiencies due to asymmetric information and the optimal setup cost reduction in an integrated lot-sizing decision.

Karl Inderfurth, Guido Voigt
34. A Heuristic Approach for Integrating Product Recovery into Post PLC Spare Parts Procurement

Product recovery gains increasing importance in spare parts acquisition, especially during the post product-life-cycle (PLC) service period. Considering product recovery in addition to traditional procurement options (final order, extra production) makes the underlying stochastic dynamic planning problem highly complex. An efficient heuristic for solving the integrated procurement problem is developed, which can be used to exploit the flexibility offered by product recovery.

Rainer Kleber, Karl Inderfurth
35. The Economic Lot and Supply Scheduling Problem Under a Power-of-Two Policy

The paper presents a new problem class, the Economic Lot and Supply Scheduling Problem (ELSSP). The ELSSP deals with simultaneous vehicle routing and production planning and is an extension of the well-known Economic Lot Scheduling Problem (ELSP). To solve the described problem, the junction point method of Yao and Elmaghraby (2001) for the solution of the ELSP under a power-of-two policy is modified and extended in order to account for the vehicle routing problem caused by the deliveries of the input materials.
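The core of a power-of-two policy is rounding each item's ideal cycle time to a power-of-two multiple of a common base period. A minimal sketch (our illustration; for an EOQ-type cost K/T + hT/2, nearest rounding on the log scale keeps the cost increase within about 6%):

```python
import math

def power_of_two_cycle(t_opt, base_period=1.0):
    """Round an ideal cycle time to the nearest power-of-two multiple of the
    base period (nearest on the log2 scale, so the ratio to t_opt stays
    within a factor of sqrt(2))."""
    k = round(math.log2(t_opt / base_period))
    return base_period * 2 ** k
```

For a convex cost of the form K/T + hT/2, the cost ratio of the rounded cycle to the optimum is (T/T* + T*/T)/2, which the sqrt(2) rounding bound caps at about 1.06.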

Thomas Liske, Heinrich Kuhn
36. Towards Coordination of Decentralized Embedded System Development Processes

An acceleration in technological change and shorter product life cycles lead, among other things, to companies concentrating on their core competencies and to a parallelization of formerly sequential processes. This results in an increasing number of collaborations between firms. This trend can also be observed in research and development, where projects are often conducted jointly by various partners. An example of such a cooperation is the development of embedded systems. Embedded systems are characterized by a modular design of different interacting hardware and software components. Thereby, the performance of the overall system depends on the interaction of the individual components' performances. In current industry practice, these components are developed by specialized suppliers and are then combined into the embedded system by a system integrator. Since these partners are usually legally and economically independent companies, the cooperation is governed by contractual agreements. In practice, fixed-price contracts are most commonly used. In these contracts, components' requirements as well as prices are fixed ex ante by the system integrator. However, uncertainties exist with regard to the outcome of the development process. For instance, the performance of components as well as of the overall system cannot be predicted with certainty. Additionally, partners may follow different objectives. Thus, inefficiencies in the design process as well as in the design of the embedded systems often occur. Some of these uncertainties and inefficiencies might be absorbed by making use of existing substitutional dependencies between components. However, this is not possible when inappropriate contracts and insufficient incentive structures are applied, since these lead to decreasing flexibility in the development process and thus to an increase in development costs. Thereby, the economic risk for suppliers and integrators increases.
Overcoming these difficulties requires improved coordination of the partners ahead of and during the development process. Hence, the aim of this contribution is to improve collaborative development processes of embedded systems by adapting mechanisms from supply chain management to development processes. As in supply chain management, cooperation is governed by contracts. In addition, uncertainties exist in the decentralized development processes as in supply chain management, which lead to inefficiencies in the cooperation. Supply chain management has rich experience in flexible contracting with various incentives, targeting overall flexibilization, risk mitigation, and economic fairness [1]. Unlike in supply chain management, there are substitutional dependencies between components' attributes in the development process. However, differences between production and development processes currently prevent an easy adoption of these mechanisms. First approaches to the flexibilization of contracts in embedded system development processes are given by [2, 3]. In the following section, a mathematical model for cooperative development processes is described and analyzed with regard to the optimal actions of the partners.

Kerstin Schmidt, Grit Walther, Thomas S. Spengler, Rolf Ernst
37. Collaborative Planning: Issues with Asymmetric Cost and Hold-Up in Automotive Supply Chains

Supply chain optimization has emerged as an important topic in several industries. In the automotive industry, supply chains are prevalently composed of independent agents with specific preferences, e.g. distinct firms or profit centers in holdings. In either case, one must expect that no single party has control over the entire chain, and no partner has the power to optimize the whole chain by hierarchical planning. Under such decentralized decisions, an improvement in supply chain performance is achievable through coordination and synchronization. The field of collaborative planning offers novel coordination concepts for such situations. We characterize issues in such concepts in automotive supply chains under asymmetric cost allocations. Here, as well as in other industries, few assembly sites (mostly owned by OEMs) are linked to multiple suppliers in a converging structure. We find that, under such a setting, an iterative negotiation-based process built on counter-proposals differs little from upstream planning, as local supplier-side savings have comparatively small effects. Further, we study the interaction between collaborative planning and the hold-up problem (i.e. at least one party performs relationship-specific investments) as an additional characteristic of the automotive industry.

Thomas Staeblein
38. Negative Default Dependencies: Do Automotive Suppliers Benefit from Their Competitors' Default?

Given the criticality of the supplier network and the increasing number of supplier defaults [3], automotive OEMs are well advised to actively manage their supplier networks, consider supply risks [9], and avoid negative and exploit positive consequences of supplier defaults. We aim to extend the research on supplier defaults along two dimensions. First, with a few exceptions [1, 10], previous studies have focused on the default consequences of individual suppliers [2]. However, since suppliers do not operate in isolation, their financial situations are often interlinked. We argue that default dependence (also called default correlation) exists in supplier networks and that OEMs should manage the entire network as a group and consider interdependencies among suppliers. Second, all studies that have considered default dependencies have exclusively modeled positive dependence, i.e., they have assumed that, given two suppliers 1 and 2, the default of supplier 2 is positively related to the default of supplier 1. In contrast, we model negative default dependence (i.e. scenarios where supplier 2 benefits from the default of supplier 1).

Stephan M. Wagner, Christoph Bode, Philipp Koziol

Traffic and Transportation

Frontmatter
39. An Optimization Approach for the Crew Scheduling Problem in Waste Management

Difficult financial situations in most German cities and increasing competition in waste management between private and public companies require an efficient allocation of all resources involved in the waste disposal process. The two major resources are waste collection vehicles and crews. They can be assigned in the corresponding planning steps of finding so-called waste collection districts and appropriate crew schedules, which are currently planned manually. This paper focuses on the optimization of the crew scheduling process, where different crews have to be allocated to specific waste collection districts and vehicles, respectively. The paper shows first results of a joint research project involving the Universities of Braunschweig and Dortmund as well as two companies from the private and public sectors.

Jens Baudach, Annette Chmielewski, Tobias Mankner
40. Milk Run Optimization with Delivery Windows and Hedging Against Uncertainty

Milk runs are an important transportation concept, e.g. in the automotive industry. Trucks start from a depot, pick up goods at different suppliers, and deliver those goods to a single customer. Technically, milk runs are therefore a pickup-and-delivery problem with multiple pickups and a single delivery. They make it possible to deliver small lots efficiently and thus lower average inventory levels. A prerequisite is that there are several frequent orders from suppliers that are located close together; otherwise transshipment centers will be used in spite of handling costs. Pickup-and-delivery problems calculate routes for single days, sometimes with the additional restriction of time windows for delivery. Models and algorithms for these problems exist and can help in practice, as lead times are usually very short, in most cases only one day. It has been discussed whether it would be better to allow delivery windows of a few days so that carriers have more leeway for route optimization (cf. [1]). The routing problem is then extended by the problem of allocating orders to days. The integrated solution requires vehicles to make multiple trips and is formulated as the VRP with Multiple Trips (VRPM). The VRPM has found only little attention so far: "Although in practice multiple route assignment is common, there is a shortage of papers covering this feature." [2] It is even more interesting to look at the problem from a dynamic point of view, i.e. to iterate through the days and to assign incoming orders to days without having information on future orders.
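The dynamic allocation step that precedes daily route construction can be sketched as follows (our illustrative greedy rule, not the paper's method; orders are triples of earliest day, latest day, and volume, and capacities are hypothetical):

```python
def assign_orders_to_days(orders, capacity_per_day):
    """Greedily assign incoming orders (earliest, latest, volume) to the
    least-loaded feasible day within their delivery window; an order that
    fits on no day of its window is rejected (None)."""
    load = {}           # day -> volume assigned so far
    assignment = []
    for earliest, latest, volume in orders:
        day = min(range(earliest, latest + 1), key=lambda d: load.get(d, 0.0))
        # if even the least-loaded day overflows, every day in the window does
        if load.get(day, 0.0) + volume > capacity_per_day:
            assignment.append(None)
        else:
            load[day] = load.get(day, 0.0) + volume
            assignment.append(day)
    return assignment
```

Processing orders in arrival sequence mimics the dynamic setting described above, where future orders are unknown when an assignment is made.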

Carsten Böhle, Wilhelm Dangelmaier
41. MEFISTO: A Pragmatic Metaheuristic Framework for Adaptive Search with a Special Application to Pickup and Delivery Transports

We present MEFISTO, a pragmatic framework for transport optimization algorithms that has been jointly developed by the authors and is integral part of logistics planning solutions provided by PTV AG. We present design aspects that have led to the architecture of the framework using a variant of Granular Tabu Search. For the case of vehicle routing problems with pickup and delivery transports, we present a specialized local search procedure. Summarized results on benchmark and real world problems are given.

Andreas Cardeneo, Werner Heid, Frank Radaschewski, Robert Scheffermann, Johannes Spallek
42. Planning in Express Carrier Networks: A Simulation Study

In this paper we present first results of a simulation study analyzing the effect of alternative compensation schemata in a courier network compared to alternative organizational settings: centralized collaborative planning and individual non-collaborative planning.

Sascha Dahl, Ulrich Derigs
43. Stability of Airline Schedules

Stability describes the ability of a plan to remain feasible and to give acceptable results under different variations of the operating environment without any modifications of the plan. In this paper we explore the effect of crews changing aircraft on the stability of airline schedules, using a simulation model for delay propagation that considers the interdependence between aircraft rotations and crew pairings.
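The basic mechanism such a simulation models can be sketched in a few lines (our simplified illustration; the paper's model additionally couples aircraft rotations with crew pairings):

```python
def propagated_delays(primary_delay, buffers):
    """Propagate a primary delay along a chain of consecutive connections,
    each of which absorbs part of the delay through its slack (buffer).
    Returns the residual delay after each connection."""
    delays, remaining = [], primary_delay
    for slack in buffers:
        remaining = max(remaining - slack, 0)
        delays.append(remaining)
    return delays
```

A 30-minute primary delay followed by connections with 10, 5 and 20 minutes of slack leaves residual delays of 20, 15 and 0 minutes, showing how buffers determine schedule stability.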

Viktor Dück, Natalia Kliewer, Leena Suhl
44. Waiting Strategies for Regular and Emergency Patient Transportation

Many emergency service providers, especially ambulance departments and companies providing non-public maintenance services, face the problem that their fleet of vehicles has to provide two different types of services: 1. cover a certain region and provide immediate service when an emergency occurs; 2. provide some regular service (e.g., the pick-up and delivery of patients, predetermined service tasks, periodic pick-ups . . . ). This is the current situation for the largest Austrian emergency service providers (e.g., the Austrian Red Cross), where the same fleet is used to provide both types of services. Dynamic aspects thus directly influence the schedule for the regular service. When an emergency occurs and an ambulance is required, the vehicle with the shortest distance to the emergency is assigned to serve the emergency patient. Therefore, it may happen that an ambulance vehicle assigned to a scheduled patient transport order that has not yet started is used to serve the emergency request; the schedule for the regular services then has to be re-optimized and another vehicle has to be reassigned to the regular patient. Ambulances that carry out emergency transports become available at the hospital after the emergency service and can then be used to carry out regular transport orders. Again, the schedule for regular services has to be re-optimized.
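The nearest-vehicle dispatch rule described above can be sketched as follows (our illustration; the vehicle keys, status labels, and Manhattan metric are assumptions, not from the paper):

```python
def dispatch_emergency(vehicles, emergency_pos):
    """Assign the closest divertible vehicle to an emergency: idle vehicles and
    vehicles whose regular transport order has not yet started may be diverted.
    Vehicles are dicts with hypothetical keys 'pos' and 'status'."""
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    divertible = [v for v in vehicles
                  if v["status"] in ("idle", "regular_not_started")]
    if not divertible:
        return None
    chosen = min(divertible, key=lambda v: manhattan(v["pos"], emergency_pos))
    chosen["status"] = "emergency"   # its regular order, if any, must be re-planned
    return chosen
```

After such a diversion, the schedule for the regular services is re-optimized, as the abstract describes.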

Guenter Kiechle, Karl F. Doerner, Michel Gendreau, Richard F. Hartl
45. A Heuristic Method for Increasing the Robustness of Multi-Depot Vehicle Schedules in Public Transport

Vehicle scheduling for buses is a core task of the operational planning process of a public transport company. Vehicles are assigned to a given set of service trips such that costs are minimized. The costs consist of fixed costs per deployed vehicle, variable costs per kilometer driven, and variable costs per unit of time a bus spends outside the depot. A vehicle schedule is feasible if and only if every service trip is assigned to at least one vehicle, all service trips are assigned only to vehicles of a permissible type, and every vehicle rotation starts from a permissible depot and returns to it at the end of the planning horizon. For a more detailed description of the vehicle scheduling problem and the various solution methods, see [1]. In public transport, vehicle schedules are traditionally created several weeks before the day of operation. Actual travel times can therefore not be taken into account in this planning; instead, empirical values for different routes and times of day are used. Disruptions in operation and the resulting delays are thus not reflected in this offline vehicle scheduling. In recent years, vehicle schedules have become ever more cost-efficient through the use of specialized planning software and improvements in the methods employed. However, the reduction in planned costs is accompanied by increased susceptibility to disruptions, since the waiting time of buses available to absorb delays is reduced. When disruptions occur, this leads to unplanned additional deployment of vehicles and to penalty payments that the transport company must make to the public administration if a contractually agreed minimum punctuality of service trips is not met (see [2]).
Therefore, contrary to the original objective, actual costs sometimes increase even though planned costs have decreased.

Stefan Kramkowski, Christian Meier, Natalia Kliewer
46. The Berth Allocation Problem with a Cut-and-Run Option

The Berth Allocation Problem consists of assigning berthing times and berthing positions to container vessels calling at a seaport. We extend this problem by a so-called cut-and-run option, which is used in practice to ensure that vessels meet their liner schedules. A mathematical problem formulation, meta-heuristics, and computational tests are provided.

Frank Meisel, Christian Bierwirth
47. A Model for the Traveling Salesman Problem Including the EC Regulations on Driving Hours

Since April 2007, the new EC Regulation No 561/2006 concerning driving hours in road transport has been in effect. This regulation restricts the length of driving periods and requires minimum breaks and rest periods for drivers [2]. An analysis of the EC Regulation with respect to vehicle routing can be found in [3]. In this paper, the restrictions on driving times and the need for breaks are formalized and integrated into an optimization model of the TSPTW. The solution space of the extended traveling salesman problem with time windows and EU constraints (TSPTW-EU) contains all Hamiltonian circuits which fulfil the given time windows and the restrictions of the Regulation relevant for a time period of up to one week. The presented approach for extending the TSPTW to the TSPTW-EU is also applicable to the extension of the VRPTW and PDPTW, thus offering a possibility to include the EC Regulations in vehicle routing and scheduling.
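Two of the Regulation's core rules (a 45-minute break after at most 4.5 hours of driving, and a regular daily driving limit of 9 hours) can be checked along a tour as follows. This is our simplified sketch, not the paper's model: legs are assumed shorter than 4.5 hours, breaks are only taken between legs, and rest periods and split breaks are ignored:

```python
DRIVE_LIMIT_BEFORE_BREAK = 4.5   # hours of driving after which a break is due
BREAK_DURATION = 0.75            # 45-minute break
DAILY_DRIVING_LIMIT = 9.0        # regular daily driving time

def schedule_with_breaks(leg_driving_times):
    """Insert the required breaks into a sequence of driving legs and return
    (total elapsed time, feasibility w.r.t. the daily driving limit)."""
    elapsed = driven_since_break = total_driving = 0.0
    for leg in leg_driving_times:
        if driven_since_break + leg > DRIVE_LIMIT_BEFORE_BREAK:
            elapsed += BREAK_DURATION        # take the mandatory break first
            driven_since_break = 0.0
        elapsed += leg
        driven_since_break += leg
        total_driving += leg
    return elapsed, total_driving <= DAILY_DRIVING_LIMIT
```

Three two-hour legs need one break (6.75 hours elapsed, feasible), while four three-hour legs exceed the daily driving limit.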

Herbert Kopfer, Christoph Manuel Meyer
48. Crew Recovery with Flight Retiming

This paper presents a method for the crew recovery problem with the possibility of retiming flights in order to avoid large modifications of the original schedule. In the presented method, a new neighborhood search for the crew scheduling problem is embedded in a tabu search scheme. Dynamic programming on leg-based networks is used for the generation of pairings. We test our method on different crew schedules and disruption scenarios and show the effects of flight retiming.

Felix Pottmeyer, Viktor Dück, Natalia Kliewer
49. A Branch-and-Cut Approach to the Vehicle Routing Problem with Simultaneous Delivery and Pick-up

The vehicle routing problem with simultaneous delivery and pick-up (VRPSDP) is an extension of the capacitated vehicle routing problem in which products have to be transported from the depot to customer locations and other products have to be trucked from customer locations to the depot. Each customer requires a simultaneous delivery and pick-up of goods by the same vehicle. The VRPSDP is a basic problem in reverse logistics: in addition to the distribution process to the customers, re-usable goods have to be transported in the reverse direction. We implement a branch-and-cut approach and study how it can be applied to the solution of the VRPSDP. The computational tests have been performed on known benchmark instances. Some benchmark problems are solved to optimality for the first time.

Julia Rieck, Jürgen Zimmermann
50. Vehicle and Commodity Flow Synchronization

Network freight flow consolidation organizes the commodity flow with special attention to the minimization of the flow costs, but the efficiency of transport resources is not addressed. In contrast, vehicle routing primarily targets the maximization of the efficiency of transport resources, while the commodity-related preferences are treated as inferior requirements. This article is about the problem of simultaneously synchronizing the use of vehicles and the flow of commodities in a given transport network. Section 2 introduces the investigated scenario. Section 3 proposes a multi-commodity network flow model for the representation of the flow synchronization problem, and Section 4 presents results from numerical experiments. Related and Previous Work. The synchronization of flows along independently determined paths in a network is investigated under the term multi-commodity network flow problem [3]. While most of the contributions aim at minimizing the sum of travel lengths (or similar objectives), only little work has been published on the assignment of commodity path parts to transport resources with intermediate resource changes [1, 2].

Jörn Schönberger, Herbert Kopfer, Bernd-Ludwig Wenning, Henning Rekersbrink
51. Multi-Criteria Optimization for Regional Timetable Synchronization in Public Transport

The need for synchronizing timetables arises when several public transportation companies interact in the same area, e.g. a city with busses, trains and trams. Existing approaches that optimize a global, waiting-time-oriented objective function reach their limit quite fast in real-life applications. While optimizing public transportation timetables, "winners" and "losers" will inevitably come about, and the traffic planner has to keep control of this process. Thus, we propose an optimization approach that enhances the overall situation without losing sight of the deficiencies arising at particular stations. In our concept for optimizing public transportation timetables, the goal is to synchronize a set of lines (e.g. busses, trains, ...) by changing their starting times such that passengers can transfer more conveniently. For this, we do not take the often-used approach of simply minimizing waiting times, but instead aim for transfers that can be called convenient. For these, the waiting time should not be too long, but also not too short, in order to reduce the risk of missing a transfer if the arriving vehicle is delayed. For optimization purposes, we assign to every possible waiting time at a transfer a corresponding penalty value. One of our goals is to minimize the sum of these penalties over all transfers. For a detailed introduction, see [4].
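A penalty mapping of the kind described above can be sketched as follows. The shape of the function and all parameter values are our illustrative assumptions; the paper only states that each possible waiting time is assigned a penalty:

```python
def transfer_penalty(wait, ideal=5, miss_risk_threshold=2, miss_penalty=60):
    """Penalty (in minute-equivalents) for one transfer waiting time: very
    short waits risk a missed connection if the arriving vehicle is delayed,
    long waits are inconvenient, waits near the ideal value are 'convenient'."""
    if wait < miss_risk_threshold:
        return miss_penalty              # connection likely missed under delay
    if wait <= ideal:
        return ideal - wait              # slightly tight, mildly penalized
    return wait - ideal                  # inconvenience grows with the wait

def total_penalty(waits):
    """Objective to be minimized by shifting line starting times:
    the sum of penalties over all transfers."""
    return sum(transfer_penalty(w) for w in waits)
```

A one-minute wait is penalized heavily (likely missed), a five-minute wait not at all, and a twelve-minute wait moderately; the optimizer then trades these off over all transfers.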

Ingmar Schüle, Anca Diana Dragan, Alexander Radev, Michael Schröder, Karl-Heinz Küfer
52. Transportation Planning in Dynamic Environments

Transportation planning in realistic scenarios has to deal with a dynamic, uncertain environment. The actual state of all information relevant to planning can be estimated and form a static planning environment, which is often the basis for classic planning algorithms. If the environment differs from the one used for planning, either through a wrong estimation or due to the dynamics of the environment, the current plan has to be adapted. Dynamic transportation planning is currently an active research area; nevertheless, it is not a new problem in the literature. A good overview of the development of this field can be found in [1]. Different technologies like agents or meta-heuristics and their possible integration are under discussion [2]. Even though a lot of research has been done on how these problems can be solved, there have been few attempts to measure the dynamics itself. Commonly, dynamic transportation planning is characterized by the occurrence of disturbing events that invalidate the current plan. What kinds of events occur depends on the planning problem that is assumed. In the current state of research, different technologies claim to be fruitful in more dynamic environments. Larsen [3] points out the importance of measuring dynamics for efficient and sufficient performance comparison and therefore defines the degree of dynamism (DoD). Moreover, from an engineering perspective it would be very useful to have metrics and measurements at hand that can help to choose a technology for a given problem. The vision is that, based on the characteristics of the given application, one could decide which is the most appropriate technology. Therefore, different questions have to be answered: how dynamics can be measured, up to what DoD classical approaches are competitive, and how different technologies perform under a varying DoD. In this article we tackle the first question.
To this end, we discuss different existing approaches to measuring dynamics in transportation planning. Based on the well-known multiple depot vehicle routing problem with time windows (MDVRPTW), we present an extended version of the measurement of the DoD. In Section 3, we present empirical results, and finally we discuss our findings.
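Larsen's two basic measures can be written down directly (a minimal sketch of the standard definitions, not of the paper's extended version):

```python
def degree_of_dynamism(release_times, planning_deadline=0.0):
    """Larsen's degree of dynamism: the share of requests that become known
    only after planning has started (release time > planning deadline)."""
    dynamic = sum(1 for t in release_times if t > planning_deadline)
    return dynamic / len(release_times)

def effective_degree_of_dynamism(release_times, horizon):
    """Larsen's effective DoD: the average release time relative to the
    planning horizon, so late-arriving requests weigh more."""
    return sum(t / horizon for t in release_times) / len(release_times)
```

A problem where half the requests arrive at run time has a DoD of 0.5; the effective DoD additionally distinguishes whether those dynamic requests arrive early or late in the horizon.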

René Schumann, Thomas Timmermann, Ingo J. Timm
53. The CaSSandra Project: Computing Safe and Efficient Routes with GIS

Large amounts of dangerous goods are constantly on the move in Europe because of their significant impact on economic growth and their role in supporting quality of life. According to available statistics [3], road transportation accounts for the movement of the major part of dangerous goods within Europe (58% in 2002). Access to a well-built and widely distributed road infrastructure provides higher flexibility and door-to-door capabilities [7]. Consequently, transport purchasers perceive this transportation mode as highly effective and economically advantageous. However, the same factors oblige material flows to travel through highly populated areas or heavily trafficked road segments. As a consequence, the exposure of civilians to accident risks increases drastically [3]. History shows that accidents which take place during the transportation of hazardous material can have the same magnitude as those occurring in industrial plants [13]. Possible consequences include fatalities or ecological disaster if the cargo is dispersed in water catchment areas [8], [6]. The ramifications for private stakeholders may include delayed shipments, undelivered shipments, wasted cargo, and higher transportation costs (e.g., after a bridge collapse) [8], [2]. Dangerous goods or hazardous materials (hazmat) are any solid, liquid, or gaseous substances that can have harmful effects on living organisms, property, or the environment [11]. Laws and regulations for the transportation of dangerous goods in Europe are first collected by the United Nations Economic Commission for Europe (UNECE) and then extended to all transportation modes (road, rail, sea, and air) through specific organizations. The transportation of dangerous goods over European roads is regulated by the Agreement concerning the International Carriage of Dangerous Goods by Road (ADR), which is enforced in Sweden by the Swedish Rescue Services Agency (SRSA) [11].
Each year, the SRSA publishes the recommended and restricted road segments for dangerous goods. Recommended roads are classified as primary, for throughway traffic, and secondary, for local transportation from and to the primary network. The restricted roads are road tunnels and segments in the proximity of water catchment areas [10].

Luca Urciuoli, Jonas Tornberg
54. Design and Optimization of Dynamic Routing Problems with Time Dependent Travel Times

The worldwide transportation of cargo is steadily growing, and forwarding agencies handling less-than-truckload freight are no exception. The performance of these companies is strongly influenced by varying transport times between two consecutive points. Surprisingly, traffic information is hardly used within the forwarding industry, even though vehicle locations are available in real time. Numerous unknown customer orders, increasingly received shortly before the actual pickup, affect performance as well. Typical forwarding agencies perform pickups and deliveries jointly. They have to cope with hundreds of pickups and deliveries each day, and a few tens of vehicles are necessary to serve the customers in the short-distance traffic region. Furthermore, inquiries from business customers cannot be neglected. In the following we focus on one part of the problem, the integration of varying travel times, which often result in late deliveries and penalties. Especially in urban areas, rush-hour traffic jams are known for many roads, but this information is hardly used by forwarding agencies. In particular, real-time approaches solving pickup and delivery problems (PDP) with inhomogeneous goods, general-cargo capacities, time windows, varying travel times, and unknown customer orders that cannot be neglected are missing. Thus, the objective is to develop a customized dynamic routing model capable of handling all requirements and assisting forwarding agencies in routing vehicles efficiently in real time.

Sascha Wohlgemuth, Uwe Clausen
55. An Ant Colony Algorithm for Time-Dependent Vehicle Routing Problem with Time Windows

In this paper, we address the Vehicle Routing Problem with Time Windows, in both its time-independent and time-dependent variants. In the time-independent case, our objective is to minimize the total distance. To solve this problem, we propose an Ant Colony Optimization algorithm. We then apply the algorithm to the time-dependent case, where the objective is to minimize the total tour time. Time dependency is embedded in this model by using a deterministic travel-speed function, a step function of the time of day. An experimental evaluation of the proposed approach is performed on well-known benchmark problems. Optimizing a distribution network has been and remains an important challenge both in the literature and in real-life applications, and routing a fleet of vehicles is the most widely addressed problem in a distribution network. The Vehicle Routing Problem (VRP) determines a set of vehicle routes originating and terminating at a single depot such that all customers are visited exactly once and the total demand of the customers assigned to each route does not violate the capacity of the vehicle. The objective is to minimize the total distance traveled; an implicit primary objective is to use the fewest vehicles. The Vehicle Routing Problem with Time Windows (VRPTW) is a variant of the VRP in which lower and upper limits are imposed on the delivery time of each customer. Arrival at a customer outside the specified delivery window is either penalized (soft time windows) or strictly forbidden (hard time windows). The interested reader is referred to [1] for more details on the VRPTW. In the Stochastic Vehicle Routing Problem, the customer demands and/or the travel times between customers may vary. Although stochastic travel times and demand distributions have frequently been used in the literature, time-varying travel speeds and the time-dependent VRPTW (TDVRPTW) have seldom been addressed.
In the literature, time dependency is taken into consideration in two ways: stochastic travel times and deterministic travel times. First introduced by [2], stochastic travel times are mainly examined by [4] and [3]. [5] proposed a deterministic travel-time based model in which the important non-passing property is introduced. [6] and [7] also use deterministic travel times in a setting where the day is divided into time intervals.
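A travel-speed step function of this kind can be sketched as follows. This is our own illustrative implementation (function and parameter names are assumptions, not the paper's): a vehicle that crosses an interval border simply continues at the new prevailing speed, which is what preserves the non-passing (FIFO) property.

```python
def travel_time(dist, depart, breakpoints, speeds):
    """Time to cover `dist` units when departing at `depart`, with speed
    piecewise constant: speeds[k] holds on [breakpoints[k], breakpoints[k+1]).
    The last speed holds from the last breakpoint onward."""
    t = depart
    k = max(i for i, b in enumerate(breakpoints) if b <= t)
    while dist > 1e-12:
        end = breakpoints[k + 1] if k + 1 < len(breakpoints) else float("inf")
        reachable = speeds[k] * (end - t)     # distance coverable in this interval
        if reachable >= dist:
            t += dist / speeds[k]
            dist = 0.0
        else:                                  # cross into the next speed interval
            dist -= reachable
            t = end
            k += 1
    return t - depart
```

With this construction, a vehicle that departs later can never arrive earlier, unlike naive models that apply a single speed for the whole leg based on the departure time.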

Umman Mahir Yildirim, Bülent Çatay

Applied Probability and Stochastic Programming

Frontmatter
56. The Comparative Analysis of Different Types of Tax Holidays Under Uncertainty

Tax holidays, which exempt firms (fully or partially) from tax payments for a certain period of time, have been widely used around the world as one of the most effective stimuli for attracting investment (see [1]). Below, we present a model of investment attraction by means of tax holidays (for other mechanisms of investment attraction see, e.g., [2]). Within the framework of this model, we compare two alternative mechanisms of tax holidays: tax holidays of deterministic (fixed) duration and tax holidays based on the payback period of the initial investment.

Vadim Arkin, Alexander Slastnikov, Svetlana Arkina
57. Stochastic Programming Problems with Recourse via Empirical Estimates

Let ξ := ξ(ω) be an (s × 1) random vector defined on a probability space (Ω, S, P), and let F and P_F denote the distribution function and the probability measure corresponding to the random vector ξ. Let, moreover, $$g_0(x,z),\, g_0^1(y,z)$$ be functions defined on R^n × R^s and R^{n_1} × R^s; let f_i(x,z), g_i(y), i = 1, …, m, be functions defined on R^n × R^s and R^{n_1}; let h := h(z) be an (m × 1) vector function defined on R^s with $$h'(z) = (h_1(z), \ldots, h_m(z))$$; and let $$X \subset R^n,\, Y \subset R^{n_1}$$ be nonempty sets. The symbols x (n × 1) and y := y(x, ξ) (n_1 × 1) denote decision vectors. (R^n denotes the n-dimensional Euclidean space and h' the transpose of the vector function h.)
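In this notation, a generic two-stage stochastic program with recourse can be written as follows (a sketch of the standard textbook form only; the exact constraint structure studied in the paper may differ):

$$\min_{x \in X} \; \mathsf{E}_{P_F}\Big[\, g_0(x,\xi) + \min\big\{\, g_0^1(y,\xi) : g_i(y) \ge f_i(x,\xi),\ i = 1,\ldots,m,\ y \in Y \,\big\} \Big],$$

where the inner minimization chooses the recourse decision y after the realization of ξ has been observed, and the outer expectation is taken with respect to P_F.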

Vlasta Kaňková
58. Sorting and Goodness-of-Fit Tests of Uniformity in Random Number Generation

Empirical testing of random number generators includes goodness-of-fit tests with many numbers. The tests involve sorting and classification of random numbers. We study the effects of sorting routines on the computation time of tests of uniformity and propose improvements.
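To illustrate why sorting dominates the cost of such tests, consider the Kolmogorov-Smirnov statistic for uniformity on (0, 1), which requires the sample in sorted order before the distance to the uniform CDF can be computed. This is our own minimal sketch, not the routines studied in the paper:

```python
def ks_uniform_statistic(sample):
    """Kolmogorov-Smirnov D for testing U(0,1): the maximum gap between
    the empirical CDF and the identity. The sort is the expensive step
    for large samples, so the choice of sorting routine drives runtime."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, (i + 1) / n - x, x - i / n)
    return d
```

For n in the millions, the O(n log n) sort dwarfs the O(n) scan that follows, which is exactly where improved sorting routines pay off.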

Thomas Morgenstern
59. Constrained Risk-Sensitive Markov Decision Chains

We consider a Markov decision chain $$X = \{X_n, n = 0, 1, \ldots\}$$ with finite state space I = {1, 2, …, N} and a finite set A_i = {1, 2, …, K_i} of possible decisions (actions) in state i ∈ I. Supposing that in state i ∈ I action a ∈ A_i is selected, state j is reached in the next transition with a given probability p_ij(a), and one-stage transition rewards r_ij(a) and s_ij(a) are accrued to this transition. We shall suppose that the streams of transition rewards are evaluated by an exponential utility function, say $$u^\gamma(\cdot)$$, with risk-aversion coefficient γ < 0 (the risk-averse case) or γ > 0 (the risk-seeking case). Then the utility assigned to the (random) reward ξ is given by $$u^\gamma(\xi) := \mathrm{sign}(\gamma)\exp(\gamma\xi)$$.
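Under this utility, the certainty equivalent of a random reward ξ is (1/γ)·ln E[exp(γξ)], which lies below the mean for γ < 0 and above it for γ > 0. The following small computation is our own illustration of that property, not code from the paper:

```python
import math

def exp_utility(x, gamma):
    """u^γ(x) = sign(γ)·exp(γx); γ < 0 is risk-averse, γ > 0 risk-seeking."""
    return math.copysign(1.0, gamma) * math.exp(gamma * x)

def certainty_equivalent(outcomes, probs, gamma):
    """CE = (1/γ)·ln E[exp(γξ)]: the sure reward with the same utility
    as the lottery given by (outcomes, probs)."""
    m = sum(p * math.exp(gamma * x) for x, p in zip(outcomes, probs))
    return math.log(m) / gamma
```

For a fair lottery over {0, 1}, the risk-averse CE falls below the expected value 0.5, while the risk-seeking CE exceeds it.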

Karel Sladký

Business Informatics, Decision Support and Artificial Intelligence

Frontmatter
60. An EM-based Algorithm for Web Mining Massive Data Sets

This paper introduces the PEM algorithm for estimating mixtures of Markov chains. The application to website users' search patterns shows that it provides an effective way to deal with massive data sets.

Maria João Cortinhal, José G. Dias
61. AgileGIST - a Framework for Iterative Development and Rapid Prototyping of DSS for Combinatorial Problems

When developing model-based decision support systems for combinatorial problems, one often encounters the situation that the problem owner is not able to explicate all information necessary to set up the system of constraints and objectives, but is able and willing to criticise solutions obtained from some model implementation. In such scenarios it is therefore necessary for the success of the project to design a development process based on a sequence of models with associated prototype systems, and to integrate the problem owner into an iterative and experimental procedure in which (additional) components can be added to or dropped from the model base. Heuristics based on indirect search have proven to be suitable and rather powerful in these environments. These problems are typical scenarios in DSS development and have motivated the development of our new framework: they require new problem-specific representational and algorithmic knowledge which has to be acquired during an initial conceptual phase; flexibility, rather than the absolute accuracy expected for literature problems, is a dominant system requirement; and, to allow for rapid system development, the use of general problem-solving capabilities captured in standard coding schemes and metaheuristic strategies is more or less mandatory. In earlier papers we have reported successful applications of the so-called GIST approach to non-standard real-world decision problems which had been given to us in the form of semi-structured planning tasks and for which we had to develop decision support systems (cf. [1], [2] and [3]). In this paper we describe how evolutionary system development using the GIST approach can be supported by a framework called AgileGIST, which has been motivated by two concepts from different areas: mathematical programming languages and the dependency-injection framework from software engineering.

Ulrich Derigs, Jan Eickmann
62. Incorporation of Customer Value into Revenue Management

The efficient utilization of limited capacity resources, e.g. airplane seats or hotel rooms, is a prevalent success factor for service providers [1]. Hence, revenue management is applied to control the acceptance of booking requests. Though successful in the short term, it has so far been transaction-based, focusing on willingness-to-pay [8] and neglecting the establishment of relationships with long-term profitable customers. Therefore, a conceptual model of customer value-based revenue management is developed. Transaction-based optimization and booking control are enhanced by taking customer value-related information into account.

Tobias von Martens, Andreas Hilbert
63. Einfluss der Adoptoreinstellung auf die Diffusion Komplexer Produkte und Systeme

The diffusion of complex products and systems (KoPS) in the consumer market is associated with specific risks. The reasons lie in the unique structure of KoPS: they consist of a large number of components that interact with one another and are manufactured on a project basis and customized for individual customers [3]. High product complexity can lead to a slower diffusion process [7]. It is therefore necessary to examine the diffusion process of KoPS and its influencing factors. The system dynamics approach enables the development of complex and dynamic models. This paper analyzes the diffusion process of KoPS while integrating private households' attitude toward technology as an influencing factor. In addition, a primary survey supports the model validation phase.

Sabine Schmidt, Magdalena Mißler-Behr
64. Data Similarity in Classification and Fictitious Training Data Generation


Ralf Stecking, Klaus B. Schebesch
65. Journal Ratings and Their Consensus Ranking

In this paper we explore the possibility of deriving consensus rankings by solving consensus optimization problems, characterizing consensus rankings as suitable complete order relations minimizing the average Kemeny-Snell distance to the individual rankings. This optimization problem can be expressed as a binary programming (BP) problem which can typically be solved reasonably efficiently. The underlying theory is discussed in Sect. 1. Applications of the proposed method given in Sect. 2 include a comparison to other mathematical programming (MP) approaches using the data set of Tse [9] and establishing a consensus ranking of marketing journals identified by domain experts from a subset of the Harzing journal quality list [2]. In Sect. 3 we discuss computational details and present the results of a benchmark experiment comparing the performance of the commercial solver CPLEX to three open source mixed-integer linear programming (MILP) solvers.
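The objective being minimized can be sketched in a few lines. The code below is our own illustration: it uses a simplified pairwise-disagreement count in place of the exact Kemeny-Snell scaling and tie weighting, and brute-force enumeration in place of the paper's BP formulation, which is what makes the approach scale beyond a handful of items.

```python
from itertools import combinations, permutations

def kemeny_distance(r1, r2):
    """Number of item pairs the two rankings order differently
    (a simplified, unscaled variant of the Kemeny-Snell distance).
    r[i] is the rank position of item i; lower means better."""
    d = 0
    for i, j in combinations(range(len(r1)), 2):
        a = r1[i] - r1[j]
        b = r2[i] - r2[j]
        if a * b < 0 or (a == 0) != (b == 0):
            d += 1
    return d

def consensus(rankings):
    """Brute-force consensus: the strict complete order minimizing the
    summed distance to all individual rankings. Feasible only for small n."""
    n = len(rankings[0])
    return min(permutations(range(1, n + 1)),
               key=lambda p: sum(kemeny_distance(p, r) for r in rankings))
```

With two voters ranking (1,2,3) and one ranking (3,2,1), the majority order wins, as expected.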

Stefan Theussl, Kurt Hornik
66. The Effect of Framing and Power Imbalance on Negotiation Behaviors and Outcomes

Negotiation can be defined as a joint decision-making process in which two or more parties try to influence each other regarding the allocation of resources or the division of gains, for the purpose of achieving their own or mutual interests [2]; [16]. Negotiators often fail to reach Pareto-optimal solutions when there is integrative potential that expands the pie and yields higher joint outcomes [2]; [16]; [19]. The literature shows that the framing of conflicts and power relations are two widely acknowledged factors that affect the negotiation process and outcomes. According to Emerson, "the power of A over B is equal to and based upon the dependence of B upon A" [7]. Power imbalance empirically manifests when high-power and low-power parties initiate a supply-demand relationship in which the demands contradict the supplier's desires [7]. Nevertheless, decision makers, and therefore negotiators, systematically violate the requirements of rational choice and deviate from rationality because of imperfections in human perception and decisions [9]; [14]; [18]. One of the hidden traps in decision making is reference framing, first introduced by Prospect Theory [9]; [18]. It is proposed that people normally perceive outcomes as gains and losses rather than as final states, and that the valuation of any outcome, defined as a gain or a loss, depends on its location relative to a reference point which is assigned a value of zero. Further, people are more risk-averse for positive but risk-seeking for negative outcomes. Thus, the way a problem is framed can dramatically influence our choices.

Ali Fehmi Ünal, Gül Gökay Emel
67. Response Mode Bias Revisited - The "Tailwhip" Effect

Expected utility theory is an important instrument for decision making under risk. To apply it in a prescriptive context, the decision maker's utility function needs to be elicited. Several methods for utility elicitation have been proposed in the literature which, from a theoretical point of view, should lead to identical results. However, beginning in the late 1970s, researchers discovered that theoretically equivalent methods of utility elicitation lead to different results [4, 6, 7]. One important bias identified in this context is the "response mode bias", which describes an inconsistency between the elicitation results of the Certainty Equivalence (CE) and the Probability Equivalence (PE) methods. Both methods are based on the comparison between a certain payoff xs and a lottery which yields a better outcome xh with probability p and a worse outcome xl with probability 1 − p. In the CE method, the decision maker is informed about xh, xl and p and has to provide a value xs which makes him or her indifferent between the lottery and the certain payment. In the PE method, the decision maker provides a value for p in response to the three other parameters. Using a set of 10 questions, Hershey et al. [6] found that in problems involving only losses, the PE method led to a significantly higher number of subjects exhibiting risk-seeking behavior than the CE method. This phenomenon, later labeled the "response mode bias", was particularly noticeable in lotteries involving a comparatively high probability of loss.
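The theoretical equivalence of the two methods is easy to state in code: both solve the same indifference equation u(xs) = p·u(xh) + (1 − p)·u(xl), just for different unknowns. The sketch below is our own illustration with an assumed exponential utility (any strictly increasing utility would do); it is not the elicitation procedure of the paper.

```python
import math

GAMMA = -0.01  # assumed risk-aversion parameter for the illustration

def u(x):
    """Illustrative exponential utility; increasing in x for GAMMA < 0."""
    return -math.exp(GAMMA * x)

def ce_answer(xh, xl, p):
    """CE method: the sure amount xs solving u(xs) = p·u(xh) + (1-p)·u(xl)."""
    target = p * u(xh) + (1 - p) * u(xl)
    return math.log(-target) / GAMMA

def pe_answer(xs, xh, xl):
    """PE method: the probability p solving the same indifference equation."""
    return (u(xs) - u(xl)) / (u(xh) - u(xl))
```

Round-tripping a CE answer through the PE question recovers the original probability exactly; the empirical finding of the response mode bias is precisely that human subjects do not.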

Rudolf Vetschera, Christopher Schwand, Lea Wakolbinger
68. Enhancing Target Group Selection Using Belief Functions

One of the most critical decisions in customer relationship management, interactive marketing, and direct marketing is the assignment of customers to groups with similar response characteristics. This decision is usually based on previous purchase behavior and socio-demographic variables. Unfortunately, information from different sources is likely to provide managers with conflicting indications of group membership. In this study we utilize Multi Sensor Clustering and a Rough Sets approach for assigning individuals to groups. We outline the combination of different assignment recommendations by means of Evidence Theory and the Transferable Belief Model. For an empirical application we rely on a set of transaction data recorded from a customer loyalty program.

Ralf Wagner, Jörg Schwerdtfeger

Discrete and Combinatorial Optimization

Frontmatter
69. A Constraint-Based Approach for the Two-Dimensional Rectangular Packing Problem with Orthogonal Orientations

We propose a constraint-based approach for the two-dimensional rectangular packing problem with orthogonal orientations. The problem is to arrange a set of rectangles that can be rotated by 90 degrees into a rectangle of minimal size such that no two rectangles overlap. It arises in the placement of electronic devices during the layout of 2.5D System-in-Package integrated electronic systems. Moffitt et al. [2] solve the packing problem without orientations with a branch-and-bound approach and use constraint propagation. We generalize their propagation techniques to allow orientations. Our approach is compared to a mixed-integer program, and we provide results that outperform it.

Martin Berger, Michael Schröder, Karl-Heinz Küfer
70. Detecting Orbitopal Symmetries

Symmetries are usually undesirable in integer programming (IP) models because they degrade the performance of state-of-the-art IP solvers like SCIP [1]. The reason for this is twofold: solutions equivalent to ones already discovered are found again and again, which makes the search space unnecessarily large; furthermore, the bounds obtained from the linear programming (LP) relaxations are very poor, and the LP solution is almost meaningless for the decision steps of the IP-solving algorithm. Overall, IP models mostly suffer much more from inherent symmetries than they benefit from the additional structure. Margot [4, 5] and Ostrowski et al. [7, 8] handle symmetries in general IPs without knowing the model giving rise to the instance. Kaibel et al. [2, 3] took a polyhedral approach to dealing with special IP symmetries. They introduced orbitopes [3], which are the convex hulls of 0/1 matrices of size p × q, lexicographically sorted with respect to the columns. For the cases with at most or exactly one 1-entry per row, they give a complete and irredundant linear description of the corresponding orbitopes. These orbitopes can be used to handle symmetries in IP formulations in which assignment structures appear, such as graph coloring problems; see the next section for an example. All of the above approaches assume that the symmetry has been detected in advance or is known. Therefore, the automatic detection of symmetries in a given IP formulation is an important task. In this paper, we deal with the detection of the orbitopal symmetries that arise in the orbitopal approach. While this problem is polynomially equivalent to the graph automorphism problem, whose complexity is an open problem, orbitopal symmetries can be found in linear time if at least the assignment structure is given. Otherwise, we show that the problem is as hard as the graph automorphism problem.
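The representative matrices that make up an orbitope can be characterized by a simple column test. The sketch below is our own illustration, assuming the common convention that the representative has columns in lexicographically non-increasing order (first column lex-largest):

```python
def in_orbitope(mat):
    """True iff the 0/1 matrix (list of rows) has lexicographically
    non-increasing columns, i.e. it is the chosen representative of its
    orbit under column permutations."""
    cols = list(zip(*mat))  # Python compares tuples lexicographically
    return all(cols[j] >= cols[j + 1] for j in range(len(cols) - 1))
```

Column-permutation symmetry, e.g. interchangeable color classes in graph coloring, can then be broken by restricting the feasible region to such representatives.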

Timo Berthold, Marc E. Pfetsch
71. Column Generation Approaches to a Robust Airline Crew Pairing Model For Managing Extra Flights

The airline crew pairing problem (CPP) is one of the classical problems in airline operations research due to its crucial impact on the cost structure of an airline. Moreover, the complex crew regulations and the large scale of the resulting mathematical programming models have rendered it an academically interesting problem over decades. The CPP is a tactical problem, typically solved over a monthly planning horizon, with the objective of creating a set of crew pairings so that every flight in the schedule is covered, where a crew pairing refers to a sequence of flights operated by a single crew, starting and ending at the same crew base. This paper discusses how an airline may hedge against a certain type of operational disruption by incorporating robustness into the pairings generated at the planning level. In particular, we address how a set of extra flights may be added to the flight schedule at the time of operation by modifying the pairings at hand without delaying or canceling the existing flights in the schedule. We assume that the set of potential extra flights and their associated departure time windows are known at the planning stage. We note that this study was partially motivated by our interactions with smaller local airlines in Turkey which sometimes have to add extra flights to their schedule at short notice, e.g., charter flights. These airlines can typically estimate the potential time windows of the extra flights based on their past experience, but prefer to ignore this information during planning since these flights may not actually need to be operated. Typically, these extra flights are then handled by recovery procedures at the time of operation, which may lead to substantial deviations from the planned crew pairings and costs. The reader is referred to [3] for an in-depth discussion of the conceptual framework of this problem, which we refer to as Robust Crew Pairing for Managing Extra Flights (RCPEF).
In [3], the authors show how an extra flight may be accommodated by modifying the existing pairings, and introduce a set of integer programming models that provide natural recovery options without disrupting the existing flights. These recovery options are available at the planning stage and render operational recovery procedures that pertain to crew pairing unnecessary.

Elvin Çoban, İbrahim Muter, Duygu Taş, Ş. İlker Birbil, Kerem Bülbül, Güvenç Şahin, Y. İlker Topçu, Dilek Tüzün, Hüsnü Yenigün
72. Approximation Polynomial Algorithms for Some Modifications of TSP

In this report, polynomial approximation algorithms with performance guarantees are presented for some modifications of the TSP: for the minimum-weight 2-PSP with metric distances and for the maximum-weight m-PSP in Euclidean space R^k.

Edward Gimadi
73. MEFISTO: A Pragmatic Metaheuristic Framework for Adaptive Search with a Special Application to Pickup and Delivery Transports

Intense competition in the current business environment leads firms to focus on selecting the best R&D project portfolio. Achieving this goal is tied down by uncertainty which is inherent in all R&D projects and therefore, investment decisions must be made within an optimization framework accounting for unavailability of data. In this paper, such a model is developed to hedge against uncertainty. The robust optimization approach is adopted and the problem is formulated as a robust zero-one integer programming model to determine the optimal project portfolio. An example is used to illustrate the benefits of the proposed approach.

Farhad Hassanzadeh, Mohammad Modarres, Mohammad Saffai
74. A New Branch and Bound Algorithm for the Clique Partitioning Problem

This paper considers the problem of clustering the vertices of a complete, edge-weighted graph. The objective is to maximize the edge weights within the clusters (also called cliques). This so-called Clique Partitioning Problem (CPP) is NP-complete, but it has several real-life applications, such as groupings in flexible manufacturing systems, in biology, in flight gate assignment, etc. Numerous heuristic and exact approaches as well as benchmark tests have been presented in the literature. Most exact methods use branch and bound with branching over edges. We present tighter upper bounds for each search tree node than those known from the literature, improve constraint propagation techniques for fixing edges in each node, and present a new branching scheme.
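The CPP objective for a candidate partition is cheap to evaluate; the hardness lies in searching over partitions. The sketch below is our own illustration, assuming a dict-based weight representation for the complete graph:

```python
from itertools import combinations

def cpp_value(clusters, weight):
    """Objective of a clique partition: total weight of edges whose
    endpoints lie in the same cluster. `weight` maps each pair (u, v)
    with u < v to the edge weight; negative weights are allowed."""
    return sum(weight[min(u, v), max(u, v)]
               for c in clusters for u, v in combinations(c, 2))
```

Because weights may be negative, merging two clusters is not always beneficial, which is what makes branching over edges (same-cluster vs. different-cluster decisions) a natural exact strategy.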

Jörg Rambau, Cornelius Schwarz
75. On the Benefits of Using NP-hard Problems in Branch & Bound

We present a Branch-and-Bound (B&B) method using combinatorial bounds for solving makespan minimization problems with sequence-dependent setup costs. As an application we present a laser source sharing problem arising in car manufacturing.

Jörg Rambau, Cornelius Schwarz
76. Automatic Layouting of Personalized Newspaper Pages

Laying out items in a 2D-constrained container to maximize container value and minimize wasted space is a 2D Cutting and Packing (C&P) problem. We consider this task in the context of laying out news articles on fixed-size pages in a system for delivering personalized newspapers. We propose a grid-based page structure in which articles can be laid out in different variants for increased flexibility. In addition, we have developed a fitness function integrating aesthetic and relevance criteria for computing the value of a solution. We evaluate our approach using well-known layouting heuristics. Our results show that with the more complex fitness function only advanced C&P algorithms obtain nearly optimal solutions, while the basic algorithms underperform.

Thomas Strecker, Leonhard Hennig
77. A Tabu Search Approach to Clustering

In clustering problems, groups of similar subjects are to be retrieved from large data sets. Metaheuristics are often used to obtain high-quality solutions within reasonable time limits. Tabu search has proved to be a successful methodology for solving optimization problems, but applications to clustering problems are rare. In this paper, we construct a tabu search approach and compare it to the existing k-means and simulated annealing approaches. We find that tabu search returns solutions of very high quality for various types of cluster instances.

Marcel Turkensteen, Kim A. Andersen
78. Parallel Computation for the Bandwidth Minimization Problem

The bandwidth minimization problem is a classical combinatorial optimization problem that has been studied since around 1960. It is formulated as follows: given a connected graph G = (V, E) with n nodes and m edges, find a labeling π, i.e. a bijection between V and {1, 2, …, n}, such that the maximum difference |π(u) − π(v)|, uv ∈ E, is minimized. This problem is NP-hard even for binary trees [3]. Though much work on the bandwidth problem has been done, the gap between the best known lower and upper bounds for benchmark problems is still large. Applications of the bandwidth problem can be found in many areas: solving systems of linear equations, data storage, electronic circuit design, and recently topology compression of road networks [7]. Due to the hardness of bandwidth minimization, much research has dealt with heuristic methods. These range from structured methods to metaheuristics, genetic algorithms, and scatter search; the reader can refer to [1] for detailed references in this area. For solving the problem to optimality, dynamic programming and branch-and-bound have been used, but with little success for sparse graphs. Caprara and Salazar [2] have proposed new lower bounds and strong integer linear programming (ILP) formulations to compute the bounds more effectively. Their method is therefore used in the parallel implementation documented in this paper. We report computational results for problem instances with up to 200 nodes.
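Evaluating a given labeling is trivial, which shows that the difficulty lies entirely in the search over the n! bijections. A minimal sketch (our own illustrative code):

```python
def bandwidth(n, edges, labeling):
    """Bandwidth of a labeling: max |π(u) - π(v)| over all edges uv.
    `labeling` maps each of the n nodes to a distinct label in 1..n."""
    assert sorted(labeling.values()) == list(range(1, n + 1))
    return max(abs(labeling[u] - labeling[v]) for u, v in edges)
```

For the matrix interpretation, a labeling of bandwidth b confines all nonzeros of the adjacency matrix to a band of width b around the diagonal, which is what makes the problem relevant for solving sparse linear systems.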

Khoa T. Vo, Gerhard Reinelt
79. Strengthening Gomory Mixed-Integer Cuts

Gomory mixed-integer cuts play a central role in solving mixed-integer linear programs and are an integral part of state-of-the-art optimizers. In this paper we present some of the existing approaches for improving the performance of the Gomory mixed-integer cut. We briefly discuss the ideas behind the different techniques and compare them based on computational results.

Franz Wesselmann, Achim Koberstein, Uwe H. Suhl

Forecasting, Econometrics and Game Theory

Frontmatter
80. Applied Flexible Correlation Modeling

Estimating the correlations of asset returns lies at the core of modern portfolio management. For the seminal Dynamic Conditional Correlation model [1], several generalizations have been proposed recently. In this contribution, we focus on the Flexible Dynamic Conditional Correlation model proposed in [2]. Using both simulation exercises and applications to observed return data, we show that the flexible specification performs well only in very restrictive cases, contradicting the "flexibility" of the approach. However, our results indicate that model performance can be improved substantially by a particular adjustment of the variance specification.

Frederik Bauer, Martin Missong
81. Forecasting Behavior in Instable Environments

In many economic situations, periodically occurring changes in the behavior of the involved agents can be observed. These changes have the characteristics of abrupt structural breaks. Behavior often seems to switch between regimes, as if there were constant relationships between economic variables between these breaks. Examples are the alternating price determination of sellers and buyers on a market out of equilibrium, or the periodic formation of price cartels and the resulting switches in prices. In this study, we analyze the expectation formation of participants of a laboratory experiment subject to regime switches. Despite the practical relevance, there is hardly any experimental contribution regarding the reaction of economic decision makers to structural breaks. The only systematic experiment in this context was performed by [2]. The authors found inconsistent evidence in the literature on the performance of judgmental forecasts versus statistical procedures and therefore wanted to test the individual performance of subjects experimentally. The noise level of the time series, the type of break (abrupt or creeping), and the direction of the break were systematically varied across 10 time series. The participants were told that the time series might contain structural changes. The judgmental forecasts were found to perform significantly worse than statistical procedures: the subjects tried to read too much signal into the series, and their forecasts contained excessive noise. In our experiment the forecasting performance is also analyzed, but the main interest is the explanation of the average forecasts in order to understand how the subjects react to the break. The simplest case for a structural break in our data-generating process is a constant shift. Three time series are applied in the experiment, subject to one break and two breaks respectively. Between the breaks the data-generating process remains constant.
In this way, the behavior of the subjects in several regimes and their reactions to the breaks can be observed. Recently, [1] presented a simple heuristic for modeling the average forecasts of the subjects. The authors showed that the model predicts the behavior of the subjects better than the Rational Expectations Hypothesis (REH) when indicators are in the information set of the participants. This heuristic is restricted to stable and stationary time series. We apply a modified version of the model to the forecasts in the setting with structural breaks. We find that the behavior of the subjects after a break is best described by a transition phase. Once the new level of the series has become established several periods after the break, the information from before the break (especially earlier turning points) is gradually ignored. We also find that human performance compared to statistical procedures is rather poor.

Otwin Becker, Johannes Leitner, Ulrike Leopold-Wildburger
82. The Influence of Cost Deviations on Nash Equilibria in a Non-Cooperative Dispatcher-Controller Game

This paper examines the efficient design of reviews, carried out by a controller, of the order quantities determined by a dispatcher. Based on the work of [3] and [1], [2] describes an efficient design of controls for risk management as an interface between complete trust and complete distrust, using the game-theoretic model of the inspection game. In [7], the inspection game is examined with respect to non-detection and false alarms. This view of non-detection is transferred to the modified game. It is reflected in the model of [4] at the point where it is assumed that a controller with a low inspection level does not uncover non-methodical work on the part of the dispatcher. In addition, a further step is integrated into the model analyzed below: by introducing the probability that the top management uncovers the faulty working methods, we model the possibility (or danger) that this misbehavior not only remains undetected, but also that the controller, who stands in a lateral relationship to the dispatcher, is likewise punished if the dispatcher's error remains hidden under a low inspection level. In the following model, which is based on the approach of [4], we analyze to what extent a large cost deviation resulting from a non-optimally chosen order quantity, as opposed to a small cost deviation, influences the players' decisions and hence the equilibrium solution of this game.

Günter Fandel, Jan Trockel
83. Multilayered Network Games: Algorithms for Solving Discrete Optimal Control Problems with Infinite Time Horizon via Minimal Mean Cost Cycles and a Game-Theoretical Approach

We extend a classical discrete optimal control problem in such a way that the feasible sets and the costs now depend on the parameter t. Such problems occur, for example, in (multilayered) emission trading games, which were introduced by the authors in [8]. Here, we develop two possible characterizations and solution principles. One is based on a classical linear programming approach; the other exploits a game-theoretic treatment. Via these approaches we can solve the problem even in the case of an arbitrary transition function.

Dmitrii Lozovanu, Stefan Pickl
84. Competitive Facility Placement for Supply Chain Optimization

The decision of where to place a company's distribution and outlet facilities when entering a new market is a crucial one faced by today's retail supply chain planners. This problem alone is complicated enough and is currently an active area of research [1], [2]. It is further complicated when a rival company intends to service the same market by implementing a parallel distribution network. Similar competitive problems have been studied by [3] and [4]. In this problem, two competing companies are both interested in answers to the same two questions: 1. Where should the distribution centre be located? 2. Where should the retail outlets be located (i.e., which areas of the market are to be serviced)? Both companies answer these questions with the understanding that their rival will attempt to maximize its own profits. A leader company moves first; a follower company moves second and makes its decision with knowledge of the leader's move. In this paper, a bilevel optimization method is presented for solving the dynamic and continuous competitive facility placement problem.

Andrew Sun, Hugh Liu
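The leader-follower structure described in the abstract above can be sketched for a toy discrete instance (site names and payoff values are purely illustrative and not from the paper, which treats a dynamic and continuous problem):

```python
# Toy discrete leader-follower (Stackelberg) instance; site names and
# payoff values are purely illustrative, not data from the paper.
SITES = ["north", "south", "east"]
PROFIT = {  # PROFIT[(leader_site, follower_site)] = (leader's, follower's)
    ("north", "north"): (3, 3), ("north", "south"): (7, 6),
    ("north", "east"):  (6, 4), ("south", "north"): (6, 7),
    ("south", "south"): (2, 2), ("south", "east"):  (5, 5),
    ("east",  "north"): (4, 6), ("east",  "south"): (5, 5),
    ("east",  "east"):  (1, 1),
}

def best_response(leader_site):
    """Follower maximizes its own profit given the leader's observed move."""
    return max(SITES, key=lambda f: PROFIT[(leader_site, f)][1])

def stackelberg_leader():
    """Leader moves first, anticipating the follower's best response."""
    return max(SITES, key=lambda l: PROFIT[(l, best_response(l))][0])

leader = stackelberg_leader()
follower = best_response(leader)
# leader "north" anticipates the follower answering "south"
```

The enumeration over the follower's reactions is exactly what makes the problem bilevel: the leader's objective is evaluated only at the follower's optimum.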

Linear, Nonlinear and Vector Optimization

Frontmatter
85. New Optimization Methods in Data Mining

Generally speaking, an optimization problem consists in maximizing or minimizing some objective function f : S → R. The feasible set S ⊆ Rn can be either finite or infinite, and can be described by a finite or infinite number of equalities and inequalities, or in the form of some topological structure in Rn. The methods for solving a given optimization problem depend mainly on the properties of the objective function and the feasible set. In this paper, we discuss how specific optimization methods can be used in certain areas of data mining, namely in classification and clustering, which are considered interrelated [11].

Süreyya Özöğür-Akyüz, Başak Akteke-Öztürk, Tatiana Tchemisova, Gerhard-Wilhelm Weber
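As a minimal illustration of clustering as optimization (a generic textbook example, not taken from the paper): Lloyd's k-means iteration alternates an assignment step and a center-update step, each of which decreases the sum-of-squared-distances objective.

```python
def kmeans_1d(points, centers, iters=20):
    """Lloyd's algorithm in one dimension: alternately assign each point
    to its nearest center, then move each center to the mean of its
    assigned points. Each step decreases the sum-of-squares objective."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in points:
            nearest = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

# Two well-separated groups converge to their group means, 2.0 and 11.0.
print(kmeans_1d([1, 2, 3, 10, 11, 12], [0, 5]))
```

Like many of the data-mining formulations discussed in the paper, this is a non-convex problem for which the iteration only guarantees a local optimum.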
86. The Features of Solving of the Set Partitioning Problems with Moving Boundaries Between Subsets

The problems and methods presented in this paper synthesize foundations of the theory of continuous set partitioning problems (SPP) with optimal control of systems described by ordinary differential equations. To formulate an SPP mathematically, one often has to take into account temporal and spatial changes of the state of an object or process. Some such models, concerned with problems of environmental preservation, have been studied by our scientists. The mathematical models of the problems mentioned above are new from the viewpoint of problem statement and are interesting for further generalization and development of theoretical results that could be widely used in practice. A general SPP can be formulated as follows: partition a given area (set) into a finite number of disjoint subsets satisfying certain restrictions so that the objective function attains an extreme value. We propose a new problem statement, which differs from known ones in the following way: the desired set partition is dynamic because 1) the function describing the state of the object or process varies with time; and 2) the function whose choice influences the state of this object or process is defined by the partition of the considered set at each moment of time. This problem reduces to an optimal control problem, for which the necessary optimality conditions can be written in the form of Pontryagin's maximum principle. The algorithm constructed for solving such problems is based on combining methods for solving continuous SPPs with methods of optimal control theory. To investigate the properties of solutions of the new set partitioning problem, we carried out a series of computational experiments and performed a qualitative analysis of the obtained results.

Tetyana Shevchenko, Elena Kiseleva, Larysa Koriashkina
87. Symmetry in the Duality Theory for Vector Optimization Problems

The objective of this work is to obtain symmetry in the duality theory for linear vector optimization problems, as known from scalar duality theory. We derive results related to the concept of geometric duality as introduced in [1] and extend these results to a larger class of optimization problems. We emphasize that the dual problem of the linear vector optimization problem is naturally set-valued, as it can easily be derived using Farkas' Lemma.

Martina Wittmann-Hohlbein
88. A Simple Proof for a Characterization of Sign-Central Matrices Using Linear Duality

We consider a problem described in 1992 by Davidov and Davidova, where for a given matrix A ∈ Rm×n we want to know whether every matrix B ∈ Rm×n with the same sign pattern as A has a nonnegative, nonzero element in its null-space. Such a matrix A is called sign-central. Davidov and Davidova gave a characterization of sign-central matrices that was proven by a rather long argument. In this paper we present a simple proof showing that this characterization can be seen as a consequence of the weak duality theorem of linear programming.

Rico Zenklusen
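For tiny matrices, sign-centrality can be checked by brute force. The sketch below assumes the characterization in the form we recall it (every ±1 row rescaling of A leaves some nonpositive column); verify against the paper before relying on it.

```python
from itertools import product

def is_sign_central(A):
    """Brute-force check of a characterization of sign-centrality, in the
    form we recall it (verify against the paper): A is sign-central iff
    for every sign vector d in {-1,+1}^m the row-rescaled matrix
    diag(d) @ A contains a column whose entries are all <= 0."""
    m, n = len(A), len(A[0])
    return all(
        any(all(d[i] * A[i][j] <= 0 for i in range(m)) for j in range(n))
        for d in product((-1, 1), repeat=m)
    )

# [1, -1] is sign-central: any B = [b, -c] with b, c > 0 has (c, b) >= 0
# in its null-space; [1, 1] is not, since its null-space is spanned by (1, -1).
print(is_sign_central([[1, -1]]), is_sign_central([[1, 1]]))
```

The exponential loop over sign vectors reflects why a short duality-based proof of the characterization, as given in the paper, is of interest.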

Network Optimization

Frontmatter
89. Vickrey Auctions for Railway Tracks

We consider a single-shot second-price auction for railway slots, the Vickrey Track Auction (VTA), in which the winner determination problem is a complex combinatorial optimization problem. We show that the VTA is incentive compatible, i.e., rational bidders are always motivated to bid their true valuation, and that it produces efficient allocations, even in the presence of constraints on allocations. The results carry over to "generalized" Vickrey auctions with combinatorial constraints.

Ralf Borndörfer, Annette Mura, Thomas Schlechte
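The single-item special case of the second-price rule behind the VTA can be written in a few lines (an illustration only; in the paper the winner determination is itself a combinatorial optimization problem):

```python
def vickrey(bids):
    """Single-item sealed-bid second-price auction: the highest bidder
    wins and pays the second-highest bid, which is why bidding one's
    true valuation is a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

winner, price = vickrey({"A": 10, "B": 7, "C": 5})
# A wins and pays 7: shading A's bid cannot lower the price, only risk losing
```

The incentive-compatibility argument of the paper generalizes this pricing idea to allocations chosen by a constrained combinatorial optimization.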
90. The Line Connectivity Problem

This paper introduces the line connectivity problem, a generalization of the Steiner tree problem and a special case of the line planning problem. We study its complexity and give an IP formulation in terms of an exponential number of so-called line cut constraints. These inequalities can be separated in polynomial time. We also generalize the Steiner partition inequalities.

Ralf Borndörfer, Marika Neumann, Marc E. Pfetsch
91. Heuristics for Budget Facility Location-Network Design Problems with Minisum Objective

In this paper we present the first heuristics for the discrete budget facility location-network design problem with minisum objective. Simple greedy heuristics, a custom heuristic that separates the problems of facility location and network design, a basic local search, and simulated annealing heuristics using two different kinds of neighborhoods are developed. The results of each heuristic are compared with known optimal solutions to a series of test problems.

Cara Cocking, Gerhard Reinelt, Marc E. Pfetsch
92. Combinatorial Aspects of Move-Up Crews

In railway planning, a frequent problem is that a delayed train necessarily delays its crew. To prevent delay propagation along the crew's succeeding trips, a move-up crew may take over the rest of the crew's duty on time. We address two problems in this context. First, during the planning stage, one has to make a trade-off between the robustness gained and the additional costs incurred by introducing move-up crews. We suggest different heuristics to solve this problem, depending on the size of the instance. Second, during operations, dispatchers have to make optimal crew swap decisions, i.e., use available move-up crews to minimize overall passenger delay. For the local case, we give an efficient algorithm. However, optimizing crew swaps over the whole railway network is NP-hard.

Holger Flier, Abhishek Gaurav, Marc Nunkesser
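The crew-swap trade-off from the abstract above can be sketched with a toy model (linear slack absorption of delay and a flat swap cost are illustrative assumptions, not the paper's model):

```python
def propagated_delay(delay, slacks):
    """Total passenger delay over the remaining trips when the incoming
    delay is absorbed trip by trip by the turnaround slack (toy model)."""
    total = 0
    for slack in slacks:
        delay = max(0, delay - slack)
        total += delay
    return total

def best_takeover(delay, slacks, swap_cost):
    """Index of the trip at which a move-up crew (starting on time)
    should take over, or None if keeping the delayed crew is cheaper."""
    best, best_cost = None, propagated_delay(delay, slacks)
    for k in range(len(slacks)):
        cost = propagated_delay(delay, slacks[:k]) + swap_cost
        if cost < best_cost:
            best, best_cost = k, cost
    return best, best_cost

# A 30-minute delay with 5 minutes of slack per trip: swapping at the
# first trip (cost 10) beats letting 60 minutes of delay propagate.
print(best_takeover(30, [5, 5, 5], 10))
```

This local, single-duty decision is the easy case; as the abstract notes, coordinating such swaps across a whole network is NP-hard.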
93. Spreading Processes on Networks

Spreading processes on networks can often be mapped onto network reliability problems. The computational complexity of computing the probability that the spreading process reaches some given subset K of the nodes is well studied, as it reduces to the classical K-terminal reliability problem. Often one is not interested in a particular set K, but in more global properties of the spreading process, such as the expected spreading size or the probability of a large spreading. We show that the direct Monte Carlo approach is an FPRAS for the expected spreading size, but unless NP ⊆ BPP, there is no randomized constant-factor approximation for the probability of large spreadings. When nodes are weighted to represent their importance, we show that estimating the expected spreading impact is of the same computational complexity as estimating s-t reliability.

Marco Laumanns, Rico Zenklusen
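The direct Monte Carlo estimator mentioned in the abstract can be sketched for undirected graphs with independent edge activations (graph representation and parameter names are illustrative):

```python
import random

def spread_size(edges, p, seed_node):
    """One realization: each undirected edge is active independently
    with probability p; count the nodes reachable from the seed."""
    active = [e for e in edges if random.random() < p]
    reached, frontier = {seed_node}, [seed_node]
    while frontier:
        u = frontier.pop()
        for a, b in active:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in reached:
                    reached.add(y)
                    frontier.append(y)
    return len(reached)

def expected_spread(edges, p, seed_node, samples=1000):
    """Direct Monte Carlo estimate of the expected spreading size:
    average the realized spread over independent samples."""
    return sum(spread_size(edges, p, seed_node)
               for _ in range(samples)) / samples
```

Averaging works well here because the spread size is bounded and has nonnegligible expectation; the abstract's negative result concerns the rare-event quantity (the probability of a large spreading), where plain averaging fails.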
Metadata
Title
Operations Research Proceedings 2008
Edited by
Bernhard Fleischmann
Karl-Heinz Borgwardt
Robert Klein
Axel Tuma
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-00142-0
Print ISBN
978-3-642-00141-3
DOI
https://doi.org/10.1007/978-3-642-00142-0