
About this book

The volume contains a selection of manuscripts of lectures presented at the International Symposium on Operations Research (SOR 96). The Symposium took place at the Technical University of Braunschweig, September 3-6, 1996. SOR 96 was organized under the auspices of the two German societies of Operations Research, the Deutsche Gesellschaft für Operations Research (DGOR) and the Gesellschaft für Mathematik, Ökonomie und Operations Research (GMOOR), in cooperation with the Working Group Discrete Optimization of the IFIP (WG7.4). Since 1995, DGOR and GMOOR have jointly prepared the Symposium as a common annual conference. In particular, the annual general meetings of the DGOR, the GMOOR and the WG7.4 took place during the conference. The Symposium had 527 participants from 32 countries around the world, including 92 participants from Eastern Europe. The Symposium obviously attracts an international audience of researchers, fully covering the broad spectrum of Operations Research and related areas in economics, mathematics and computer science. The importance of a highly interdisciplinary field such as Operations Research is increasing owing to the growth of applications in related disciplines. Technological advances in computer science and algorithmic mathematics are crucial for attacking effectively the great challenges waiting in the application areas of Operations Research. As a participant of SOR 96 one could well observe the current pace of achievements. Many of these results are contained in these proceedings. The program consisted of two plenary, 17 semiplenary, and 335 contributed lectures in 18 sections.

Table of contents

Frontmatter

Plenary Lecture

Brief 2000: das 4-Mrd.-Programm der Deutschen Post

The paper considers the use of OR tools by Deutsche Bundespost in becoming a market-oriented company. It concentrates on the letter mail service and describes how the letter mail service at Deutsche Post AG is trying to achieve two goals simultaneously before the year 2000: improving the quality of customer service through reliable, rapid letter transportation, and achieving greater economic efficiency through savings. Can Deutsche Post AG create maximum service quality at near-minimum cost? How can OR help?

Günter W. Tumm

Linear Programming

Eine Alternativ-Methode für die numerische Lösung linearer Optimierungsaufgaben

Using a modified Farkas lemma on the solvability of a linear inequality system, a finite, general method for solving linear optimization problems is given.

Horst Hollatz

Efficient algorithm for a class of bilevel linear programming problems

In this paper we investigate a special case of the bilevel linear programming problem: $$\max_{x \geqslant 0}\left\{ cx + dy \mid Ax + By \leqslant b \right\},$$ where y is an optimal solution of the lower (inner) problem: $$\max_{y \geqslant 0}\left\{ \alpha y \mid \beta y \leqslant 1,\; y \leqslant x \right\}.$$ An exact polynomial-time algorithm is developed for the well-posed problem. It is shown that the problem becomes NP-hard after the introduction of the additional restriction y ≤ γ for the lower problem. Further research directions are discussed.

Yuri Kochetov, Alexander Pljasunov

A Flexible Approach to Piecewise Linear Multiple Objective Programming

An approach to generating all efficient solutions of multiple objective programs with piecewise linear objective functions and linear constraints is presented. The approach is based on the decomposition of the feasible set into subsets, referred to as cells, so that the original problem reduces to a series of linear multiple objective programs over the cells. The concepts of cell-efficiency and complex-efficiency are introduced and their relationship with efficiency is examined. Applications in location theory as well as in worst case analysis are highlighted.

Stefan Nickel, Malgorzata M. Wiecek

Nonlinear Programming

Analysis of regularization techniques in convex nondifferentiable optimization

We describe some of the most common approaches, based on the “bundling” philosophy, to constructing iterative descent schemes in nondifferentiable optimization. We show that they share a common motivation in terms of two-objective programming and that in fact they are aimed at solving such a program parametrically. In particular, we focus on the Moreau-Yosida regularization of the cutting plane function and discuss the trajectory of its optimal point with respect to the regularization parameter.
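For orientation, the Moreau-Yosida regularization referred to here is commonly written as follows, assuming $\hat f_k$ denotes the cutting-plane model built from the subgradients $g_i \in \partial f(x_i)$ collected so far (a standard textbook form, not a formula quoted from the paper):

$$\hat f_k(y) = \max_{i \leqslant k}\bigl\{ f(x_i) + \langle g_i,\, y - x_i\rangle \bigr\}, \qquad F_\mu(x) = \min_{y}\Bigl\{ \hat f_k(y) + \tfrac{\mu}{2}\,\|y - x\|^2 \Bigr\},$$

where $\mu > 0$ is the regularization parameter whose trajectory of optimal points is studied.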

A. Astorino, A. Fuduli, M. Gaudioso

A Bregman-Projected Subgradient Method for Convex Constrained Nondifferentiable Minimization

In spite of recent progress in other methods, Polyak’s subgradient projection algorithm for (simply) constrained convex minimization is widely used, because it is simple and easy to implement. We generalize this algorithm by allowing inexact Bregman (nonorthogonal) projections. They stem from Bregman distances generated by “sufficiently well-behaved” strictly convex functions whose domains coincide with the feasible set. Inexact Bregman projections are quite cheap for Cartesian products of intervals, simplices and ellipsoids that arise in many applications. The resulting method is globally convergent. It is also easy to implement (for typical constraints), can be faster than the original one, and it generalizes popular variants. We hope, therefore, that it will be useful in many applications.
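As background, a standard definition (not specific to this paper): for a differentiable, strictly convex function $h$, the Bregman distance and the corresponding Bregman projection onto a closed convex set $C$ are

$$D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle, \qquad P_C^h(y) = \arg\min_{x \in C} D_h(x, y),$$

with the usual orthogonal projection recovered for $h(x) = \tfrac12\|x\|^2$; the method above only requires these projections to be computed inexactly.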

Krzysztof C. Kiwiel

Long-Step Surrogate Subgradient Methods for Convex Feasibility Problems

We discuss subgradient methods for finding common points of finitely many closed convex sets in a Hilbert space, when each set is a lower level set of some convex function. Every iteration approximates each set by a halfspace given by a subgradient inequality. The next iterate is found by projecting the current one onto a surrogate halfspace formed by taking a convex combination of the halfspace inequalities. We extend to accelerated methods some recent results of Bauschke and Borwein on the convergence of short-step methods.
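In standard notation, the construction sketched above reads as follows (the symbols $f_i$ and $g_i^k$ are assumptions for illustration): at the iterate $x^k$, each set $C_i = \{x : f_i(x) \leqslant 0\}$ is approximated by the halfspace

$$H_i^k = \bigl\{ x : f_i(x^k) + \langle g_i^k,\, x - x^k \rangle \leqslant 0 \bigr\}, \qquad g_i^k \in \partial f_i(x^k),$$

and $x^{k+1}$ is obtained by projecting $x^k$ onto a surrogate halfspace defined by a convex combination of these inequalities.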

Bożena Lopuch, Krzysztof C. Kiwiel

On Exterior Penalties in Equilibrium Problems

The paper is devoted to optimization problems in which a quasi-variational inequality arises as a side constraint. We augment this constraint partially into the objective and show that, under a regularity condition, the applied penalty is exact.

J. V. Outrata, M. Kočvara

Robust Optimisation of Nonlinear Systems under Parametric Uncertainty

A robust optimisation framework for constrained nonlinear optimisation under parameter uncertainty is considered. We consider robust formulations under the mean-variance robustness framework for uncertainty characterised by continuous probability density functions. A robust formulation designed for strict feasibility over the entire probability space is presented. Flexibility in the operational conditions is also provided via a penalty function framework. The robust strategies are tested on a dynamic optimisation problem from chemical engineering.

B. A. Tanyi, J. Darlington, C. C. Pantelides, B. Rustem

Combinatorial and Discrete Optimization

A “Locate First - Route Second” Heuristic for a Combined Location-Routeing Problem

We consider a two-stage facility location problem where, on the second distribution stage, delivery vehicles operate on routes. The objective is to find the number, size and locations of the depots, together with the allocation of customers and the product flow from the plants to the depots, which minimize the sum of the trunking costs, variable and fixed depot costs and delivery costs, subject to the supply and demand constraints, depot capacities and vehicle capacity constraints. To solve the problem, we present an iterative heuristic which - starting with the solution of the location part - ping-pongs between a facility location and a vehicle routeing problem until consistency between the solutions of the two subproblems is reached. To get an initial estimate of the delivery costs, we use an approach based on hierarchical agglomerative clustering methods. The location subproblem is solved using a Lagrangean heuristic, which is based on the relaxation of the supply and capacity constraints. To solve the routeing subproblem, we use conventional tour construction heuristics together with some tour improvement procedures, which are also applied to the multiple depot routeing problem defined by the set of open depots.

Arno Bruns, Andreas Klose

On the Average Behaviour of Primal and Dual Greedy Algorithms for the Knapsack Problem

We consider primal (“best-in”) and dual (“worst-out”) greedy algorithms for the knapsack problem with Boolean variables. For the primal greedy algorithm, the worst-case behaviour is well known (cf. [2]); for the dual greedy algorithm it can be arbitrarily bad. We study the average performance of both algorithms. It is shown (on the basis of the Central Limit Theorem) that on average the objective function value of the dual greedy algorithm differs insignificantly from the objective function value of the linear relaxation (and hence from the primal greedy and the optimal objective function values). This means that both the primal and the dual greedy algorithms are in a certain sense asymptotically optimal. This sharpens the results obtained by the authors in [1].
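As an illustration of the two heuristics being compared, a minimal sketch in Python (variable names and the value-to-weight ordering are assumptions, not taken from the paper):

```python
def primal_greedy(values, weights, capacity):
    """'Best-in' greedy: add items in order of decreasing value/weight ratio
    while they still fit into the knapsack."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    chosen, load = [], 0
    for i in order:
        if load + weights[i] <= capacity:
            chosen.append(i)
            load += weights[i]
    return chosen


def dual_greedy(values, weights, capacity):
    """'Worst-out' greedy: start with all items and drop them in order of
    increasing value/weight ratio until the capacity constraint is satisfied."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i])
    chosen, load = set(range(len(values))), sum(weights)
    for i in order:
        if load <= capacity:
            break
        chosen.discard(i)
        load -= weights[i]
    return sorted(chosen)


# Small example: both heuristics pick items 0 and 1 here.
v, w = [60, 100, 120], [10, 20, 30]
print(primal_greedy(v, w, 50), dual_greedy(v, w, 50))
```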

Gennady Diubin, Alexander Korbut

Packing a Bin Online to Maximize the Total Number of Items

A bin of capacity 1 and a finite sequence σ of items of sizes $a_1, a_2, \ldots$ are considered, where the items are given one by one without information about the future. An online algorithm A must irrevocably decide whether or not to put an item into the bin whenever it is presented. The goal is to maximize the number of items collected. A is f-competitive for some function f if $n^*(\sigma) \leqslant f(n_A(\sigma))$ holds for all sequences σ, where $n^*$ is the (theoretical) optimum and $n_A$ the number of items collected by A. A necessary condition on f for the existence of an f-competitive (possibly randomized) online algorithm is given. On the other hand, this condition is seen to guarantee the existence of a deterministic online algorithm that is “almost” f-competitive in a well-defined sense.

Ulrich Faigle, Walter Kern

Algorithmische Methoden zur Kartierung von DNA-Sequenzen

In recent years, discrete optimization and assignment problems have increasingly arisen in molecular biology in the study of the genome and of proteins. This holds in particular for the Human Genome Project, whose primary goal is the exploration of human genetic information. This includes the complete sequencing of the genome, i.e. the investigation of nucleotide sequences and their function. Here a specific problem is considered whose goal is the determination of the true order of clones (segments of chromosomes) with the aid of experimentally obtained - and usually error-prone - data (the STS mapping strategy). The resulting combinatorial problem is discussed, and various methods for generating solutions and for identifying experimental errors are developed and analyzed.

Andreas Fink

Exact Algorithms for Some Multi-level Location Problems on a Chain and a Tree

The multi-level Simple Plant Location Problem (MLP) is studied. It is assumed that the cost of transporting a unit of commodity from one site to another equals the sum of edge lengths on the path between these sites. Let p be the number of levels, n the number of demand sites, and $m_r$ the number of feasible facilities on level r, 1 ≤ r ≤ p. For the MLP on a chain, exact algorithms $A_p$ and $\tilde{A}_p$ are constructed with time complexities $O(p\,n\,m_1 \cdots m_p)$ and $O(n^3 \sum_{r=1}^{p} m_r)$, respectively. For the cases of two and three levels the algorithms $A_2$ and $A_3$ are preferable; for p ≥ 4 the polynomial algorithm $\tilde{A}_p$ is better. For the MLP on tree networks (in contrast to the chain problem) an optimal solution with connected regions of servicing need not exist (even in the case of two levels). Nevertheless, for the two-level location problem on tree networks we present a polynomial exact algorithm which requires $O(m^3 n)$ operations.

E. Kh. Gimadi

A Lagrangean Heuristic Based Branch-and-Bound Method for the Capacitated Network Design Problem

The capacitated network design problem arises in various kinds of network construction situations, for example in telecommunications. It is a multicommodity minimal cost network flow problem with fixed charges on the arcs. In this paper, we develop a solution method for this problem, consisting of a Lagrangean-based heuristic embedded in a branch-and-bound algorithm. The method can be used both as a heuristic to obtain good feasible solutions and as an exact solution method. Computational tests have been performed on networks of various structures and sizes. The method is found to be very efficient for generating near-optimal solutions, both with respect to solution time and data storage limits, especially for large scale network design problems. This makes it interesting for real life applications.

Kaj Holmberg, Di Yuan

The Towers of Hanoi and Pileproblems

Pileproblems are difficult discrete optimization problems. They deal with practical tasks like loading and unloading trains, ships or trucks (Schulz [9]). Pileproblems can be described in terms of graph theory. This paper refers to the one by Schmiedel [8] and presents yet another variant of the Towers of Hanoi problem.

Lutz Kämmerer, Lorenz Hempel

Simple plant location problem with partial external finance: lower bound, heuristic and exact solution

In this paper we consider the following bilevel mixed-integer problem. The upper problem (level one) is a simple plant location problem with an additional condition on product distribution. The lower problem (level two) is a knapsack problem with continuous variables. It is shown that the initial problem can be reduced to a series of simple plant location problems. The reduction allows us to obtain a lower bound for the objective function and to design exact and approximate algorithms. Computational results and further research directions are discussed.

Yuri Kochetov, Dmitri Alexandrov

Computational Experience in Nonlinear Mixed Integer Programming

An interior-point algorithm within a branch-and-bound framework for solving nonlinear mixed integer programs is described. In contrast to solving the relaxation to optimality at each tree node, the relaxation is only solved to near-optimality. Analogous to using advanced bases for warmstart solutions in the case of linear MIP, a “dynamic” collection of warmstart vectors is kept. Computational results on various classes of nonlinear mixed integer programs are presented.

Eva K. Lee, John Mitchell

Dynamical Voronoi partitions of piecewise flat manifolds and modeling applications

We review the general notion of Voronoi partitions of a given space and the corresponding dual Delaunay tessellations. The classical flip algorithm is generalized to Laguerre partitions and to Voronoi partitions of piecewise linear surfaces and tori. Further, the utility of this notion is illustrated in the context of various simulation models of dynamically evolving systems: tissue growth, polycrystalline growth, and granular media.

Thomas M. Liebling

Some polynomially solvable subcases of the detailed routing problem in VLSI design

There are plenty of NP-complete problems in VLSI design, like channel routing or switchbox routing in the 2-layer Manhattan model (2Mm, for short). However, there are quite a few polynomially solvable problems as well. Some of them (like the single row routing in 2Mm) are “classical” results; in this survey we present some more recent ones, including

1. a linear time channel routing algorithm in the unconstrained 2-layer model,
2. a linear time switchbox routing algorithm in the unconstrained multilayer model,
3. a linear time solution of the so-called gamma routing problem in 2Mm.

(The latter means that all the terminals to be interconnected are situated at two adjacent sides of a rectangular routing area, thus forming a V shape. Just like channel routing, it is (i) a special case of switchbox routing, and (ii) contains single row routing as a special case.) We present the (positive and negative) results in a systematic way, taking into account two hierarchies, namely that of geometry (what to route) and technology (how to route) at the same time.

András Recski

Storage Controlled Pile-Up Systems, Theoretical Foundations

A pile-up system consists of one or more stacker cranes picking up bins from a conveyor and placing them onto pallets according to customer orders. We give a mathematical definition of the pile-up problem, define a data structure for control algorithms, introduce polynomial-time algorithms for deciding whether a system can be blocked by making bad decisions, and show that controlling pile-up systems is in general NP-complete. For pile-up systems with a restricted storage capacity or with a fixed number of pile-up places, the pile-up problem is proved to be solvable very efficiently.

J. Rethmann, E. Wanke

A heuristic for the probabilistic traveling salesman problem

Recently, probabilistic versions of combinatorial optimization problems have become of interest. The probabilistic traveling salesman problem differs from its deterministic version by the fact that only a subset of nodes has to be visited in any given instance of the problem. A problem instance S appears with a given probability p(S). At the end of the 80’s, Jaillet introduced the a priori method to solve this problem. An a priori tour has to be found which minimizes the expected length over all possible instances. In every instance the nodes have to be visited in the order of the a priori tour, skipping the absent nodes. This method will be combined with some new approaches, such as neural network techniques for the deterministic traveling salesman problem.
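A minimal sketch of how an a priori tour is evaluated in this setting, assuming independent node presence probabilities and a symmetric distance function (the names and the independence assumption are illustrative, not taken from the paper):

```python
def expected_apriori_length(tour, p, d):
    """Expected length of an a priori TSP tour when node i is present
    independently with probability p[i] and absent nodes are skipped.
    Every ordered pair of tour positions contributes its distance times the
    probability that both nodes are present and all nodes between them
    (in tour order) are absent; O(n^2) overall."""
    n = len(tour)
    total = 0.0
    for a in range(n):
        skip_prob = 1.0  # probability that all nodes strictly between a and b are absent
        for step in range(1, n):
            b = (a + step) % n
            i, j = tour[a], tour[b]
            total += d(i, j) * p[i] * p[j] * skip_prob
            skip_prob *= 1.0 - p[j]
    return total


# Tiny check: three nodes on a line, each present with probability 0.5.
coords = {0: 0.0, 1: 1.0, 2: 2.0}
dist = lambda i, j: abs(coords[i] - coords[j])
print(expected_apriori_length([0, 1, 2], {0: 0.5, 1: 0.5, 2: 0.5}, dist))  # 1.5
```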

Silke Rosenow

LP-bounds for the Container and Multi-Container Loading Problem

In this paper we develop new bounds for problems of optimal packing of small (rectangular-shaped) pieces (boxes) within one or several larger containers, that is, bounds for the so-called Container Loading Problem and the Multi-Container Loading Problem. These new bounds are generalizations of an LP-bound for the Rectangle Packing Problem.

Guntram Scheithauer

An exact algorithm for general orthogonal n-dimensional knapsack problems

The n-dimensional orthogonal knapsack problem has a wide range of practical applications, including packing, cutting and scheduling. We present a new approach for its exact solution using a two-level tree search algorithm. A key role is played by a graph-theoretical characterization of packing patterns that allows us to deal with classes of packing patterns that are symmetrical in a certain sense, instead of single ones. Computational results are reported for two-dimensional test problems from the literature.

Jörg Schepers

On some polynomial solvable cases of cutting stock, pallet loading and scheduling problems

Alexander G. Tarnowski

Graph Algorithms and Complexity

Verifying Minimum Spanning Trees in Linear Time

The problem of verifying that a given spanning tree is minimum arises as a subproblem in Karger et al.’s linear-time minimum spanning tree algorithm. Currently two methods exist for solving the verification problem in linear time. The latest, proposed by King, is the first direct linear-overhead implementation of Komlós’s algorithm. In the present work, we propose a second linear-time application of the Komlós algorithm which is considerably simpler. It differs from King’s method mainly in the data structure used and the operations defined on it. We eliminate the labelling mechanism and the tags employed in King’s algorithm and construct the look-up tables used in the procedure in shorter time with less storage space.
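The verification condition itself is simple even though achieving linear time is not; a straightforward quadratic-time sketch of that condition (an illustration only, not the linear-time data structures discussed above):

```python
from collections import defaultdict


def is_minimum_spanning_tree(edges, tree_edges):
    """A spanning tree is minimum iff every non-tree edge (u, v, w) is at least
    as heavy as the heaviest edge on the tree path between u and v.
    This naive check walks the tree path for every non-tree edge."""
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_path(src, dst):
        # Depth-first search over the tree, carrying the largest edge weight
        # seen on the (unique) path from src to the current node.
        stack, seen = [(src, float("-inf"))], {src}
        while stack:
            node, best = stack.pop()
            if node == dst:
                return best
            for nxt, w in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, max(best, w)))
        return float("-inf")

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    return all(w >= max_on_path(u, v)
               for u, v, w in edges if frozenset((u, v)) not in tree_set)


edges = [(0, 1, 1), (1, 2, 2), (0, 2, 3)]
print(is_minimum_spanning_tree(edges, [(0, 1, 1), (1, 2, 2)]))  # True
```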

Cüneyt F. Bazlamaçci, Khalil S. Hindi

Compressing Data by Shortest Path Methods

Storage and transmission of signals tend to be resource-demanding when the data are processed in their original size. This is not confined to multi-dimensional signals such as images, but is also most relevant in the one-dimensional case considered in this work. Compressing the data in such a way that close reconstructions can be found is therefore a highly respected problem in signal processing. This paper demonstrates how a graph theory approach can yield optimal subsets by which all the original signal samples are to be represented. Under the restriction of bounded subset cardinality, the actual method is guaranteed to give an approximation with minimum distance from the original data. Recognizing the problem as a constrained shortest path problem enables us to derive simple algorithms which produce the exact solution in polynomial time. Closeness between original data and algorithm output is measured by any vector norm. The norm in question is applied to the difference between the original data vector and the unique reconstruction based on the algorithm output. We consider the infinity and Euclidean norms, and present a complexity analysis of an algorithm incorporating both error measures.

D. Haugland, J. G. Heber, J. H. Husøy

Easy Planarity Testing for Ordered Sets

The search for an efficient planarity-testing procedure for ordered sets is a longstanding problem. Even for small ordered sets it was often not easy to decide whether they have upward planar drawings or not. Only recently has it been proved that, in contrast with graphs, planarity for ordered sets is NP-complete [2]; on the other hand, a polynomial-time procedure has been found to test whether a planar embedding of an acyclic digraph has a similar planar upward drawing [1]. The latter shows that checking whether a triconnected ordered set is planar may also be done in polynomial time. The procedure described in [1] uses to this end the Ford-Fulkerson algorithm, solving the max-flow problem on a related network. In this note we present a direct method, which is simpler and for small ordered sets requires, in fact, almost no computations.

Andrzej Kisielewicz

Stochastic Models and Optimization

Optimal personnel configuration of branch office banking through applied queueing network theory

German banks are finding themselves at a juncture: an explosion of costs and revenue shortfalls have led to severely reduced margins. The globalisation of markets finds today’s customer enjoying a greater variety of bank services. The resulting basis for comparison leads him to expect better interest rates on his investment as well as better service. In addition, new and more economical marketing channels such as telephone banking, BTX and the Internet are pushing into the market. If a bank is to survive in today’s market it must cut its costs in half and at the same time raise its utility for customers through improved service and qualitatively upgraded performance. Problematic in this regard are especially the branch offices, because they carry high fixed costs in terms of space and personnel. Here it is especially the small branches which generate no appreciable revenues and thus operate at a loss. Doing away with such small branch offices is not an optimal solution, because in the business with private customers only quantity leads to cost effectiveness. For these reasons banks have largely declined the closure option in favor of increasing the operating efficiency of existing branch offices. This approach includes the centralisation of administrative and procedural tasks in so-called “back office units”, some of which handle standard activities such as the booking of money transfers for over 100 branch offices. Thus the branches become pure sales centers offering a reduced package consisting only of standard products. Following this concept, some larger German banks are experimenting with mini-branches with two or three workers, self-service technology (automatic tellers to handle the issue of cash, money transfers and the printing of account statements) and telephone banking. Many aspects of this “lean banking” concept have been adopted from the industrial sector, which was confronted earlier with finding a way to operate cost-effectively. Such tests in everyday business situations, however, often allow a determination of the effect of single or complex changes only after a long period of time or not at all. Beyond this, testing of alternatives, for example in task distribution or work flow, leads to a feeling of insecurity on the part of the workers.

Gudrun Beier

On Uniformization for Reducible Nonnegative Dynamic Systems

We discuss extensions of the classical technique of uniformization (or randomization) for bounded continuous-time Markov chains to continuous-time dynamic systems with finite state spaces and generators given by arbitrary reducible matrices with nonnegative off-diagonal entries. The results obtained are of computational and theoretical interest. This includes also Markov reward models and input-output models in economic analysis. These two specific applications are discussed in detail to illustrate the conditions and the results.

Nico M. van Dijk, Karel Sladký

Ein cluster-analytischer Ansatz zur Modellierung nichtlinearer Systeme

When modelling technical systems for control or forecasting purposes, restricting attention to purely linear systems often fails to yield satisfactory results.

Tatjana Lange, Wladimir Wasiljew, Stephan Schiorf

A Note on the Finite Time Behaviour of Simulated Annealing

Simulated Annealing has proved to be a very successful heuristic for various combinatorial optimization problems. It is a randomized algorithm that attempts to find the global optimum with high probability by local exchanges. In this paper we give a new proof of the convergence of Simulated Annealing with a logarithmic cooling schedule by considering the conductance of the underlying transition graph. With this proof technique it is possible to obtain better bounds on the finite time behaviour of Simulated Annealing than previously known.
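A minimal sketch of Simulated Annealing with a logarithmic cooling schedule on a generic neighbourhood structure (the constant c and the problem interface are assumptions for illustration; the bounds of the paper concern the schedule, not this particular code):

```python
import math
import random


def simulated_annealing(initial, neighbours, cost, steps, c=1.0):
    """Simulated Annealing with the logarithmic schedule T_k = c / log(k + 2).
    An improving neighbour is always accepted; a worsening one is accepted
    with probability exp(-delta / T_k)."""
    current = best = initial
    for k in range(steps):
        temperature = c / math.log(k + 2)
        candidate = random.choice(neighbours(current))
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best


# Toy example: minimize x^2 over the integers by stepping to x-1 or x+1.
print(simulated_annealing(10, lambda x: [x - 1, x + 1], lambda x: x * x, 5000))
```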

Andreas Nolte, Rainer Schrader

Approximative Analysis of Series Queues

In this paper an approximative solution of a G/M/1 → ·/M/1 → … series of service stations is proposed. Examples of tandem queues are manufacturing or assembly line processes in which units must proceed through a series of work stations. Approximation methods, such as those of Kingman, Krämer/Langenbach-Belz, or diffusion approximation, only consider the first two moments to characterize the input and service processes. In contrast to those common approximation formulas, the method described below is based on the approximative computation of the Laplace transform of the interarrival time distribution. Numerical results are compared to simulation and other approximation results to indicate the accuracy of the solution. Finally, an outlook is given on how to use higher moments for the solution of queueing networks.
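For comparison, the two-moment approximations mentioned above have the general form of Kingman’s heavy-traffic formula for the mean waiting time in a GI/G/1 queue (a standard expression, quoted here only as background, not the method of this paper):

$$W_q \approx \frac{\rho}{1-\rho}\cdot\frac{c_a^2 + c_s^2}{2}\cdot E[S],$$

where $\rho$ is the utilization, $c_a^2$ and $c_s^2$ are the squared coefficients of variation of the interarrival and service times, and $E[S]$ is the mean service time; the approach above instead approximates the Laplace transform of the interarrival time distribution.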

Horst Zisgen

Scheduling

Graph Models for a Duo-Processor Task Scheduling Problem

In this paper we study a scheduling problem with duo-processor tasks, that is, tasks simultaneously requiring two dedicated processors for their execution. No precedence constraints exist between tasks, and task preemption is not allowed. The objective is to find a schedule which minimizes the maximum completion time. We show that instances with up to four processors can be solved in polynomial time. Moreover, we prove that the problem with 2s + 1 (for s = 2,3,…) processors is NP-hard. Approximation results for particular cases of the duo-processor task scheduling problem with 2s + 1 processors are given.

Paolo Dell’Olmo, Stefano Giordani, Maria Grazia Speranza

Scheduling in Production of Concrete Wares

We consider a real-life problem of scheduling clients’ orders in the production of concrete wares in a factory of the building industry. This problem can be modeled as a hybrid flow shop scheduling problem with mixed no-wait/no-store constraints and the makespan criterion. To solve the problem, we propose an approximation algorithm based on the tabu search approach.

Józef Grabowski, Jarosław Pempera, Czesław Smutnicki

Resource-constrained project scheduling: A survey of recent developments

Resource-constrained project scheduling involves the scheduling of project activities subject to precedence and resource constraints in order to meet the objective(s) in the best possible way. The area covers a wide variety of problem types. The objective of this paper is to provide a survey of what we believe are the important recent developments in the area. Our main focus will be on the recent progress made in and the encouraging computational experience gained with the use of optimal solution procedures for the basic resource-constrained project scheduling problem (RCPSP) and important extensions. We illustrate how the branching rules, dominance and bounding arguments of a new depth-first branch-and-bound procedure can be extended to a rich variety of related problems: the generalized resource-constrained project scheduling problem, the resource-constrained project scheduling problem with generalized precedence relations, the preemptive resource-constrained project scheduling problem, the resource availability cost problem, and the resource-constrained project scheduling problem with various time/resource(cost) trade-offs and discounted cash flows.

Willy Herroelen, Erik Demeulemeester, Bert De Reyck

Computational Complexity Analysis of Single Machine Scheduling Problems with Job Release Dates Dependent on Resources

Single machine scheduling problems with job release dates and technological precedence constraints among jobs are considered. It is assumed that the job release dates depend on continuously-divisible resources such as energy, fuel, catalyst, raw materials, or financial outlay. The following optimization criteria are considered: makespan, total resource consumption, and both simultaneously. A detailed computational complexity analysis of the considered problems and their special cases was conducted. It appears that some problems are NP-complete even for the same models of release dates for all jobs. Several special cases with polynomial-time algorithms are found.

Adam Janiak

Scheduling Problems with Linear Increasing Processing Times

We consider scheduling problems with time-dependent processing times. Each job i has a processing time $y_i = p_i + v_i t_i$, where $p_i$, $v_i$ are positive constants and $t_i$ is the starting time. All jobs become available for processing at time t = 1. We study the following problems.

Problem 1. N jobs have to be processed on a single machine. Each job i has a due date $d_i$. The problem is to find a schedule minimizing the maximum lateness, i.e. max ($C_i$ − $d_i$), where $C_i$ denotes the completion time of job i.

Problem 2. N jobs have to be processed on two parallel machines. The problem is to find a schedule minimizing the maximum completion time.

Problem 3. N jobs have to be processed on two parallel machines. The problem is to find a schedule minimizing the sum of completion times.

We show NP-hardness of Problem 1 even when $p_i$ = 0 for all jobs but one, and we prove NP-hardness of Problems 2 and 3 even when $p_i$ = 0 for all jobs.
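To see how these processing times compound, let $C_{[k]}$ denote the completion time of the k-th job in a sequence processed consecutively on one machine (a short derivation under the stated model, starting at t = 1):

$$C_{[k]} = C_{[k-1]} + p_{[k]} + v_{[k]}\,C_{[k-1]} = (1 + v_{[k]})\,C_{[k-1]} + p_{[k]}, \qquad C_{[0]} = 1,$$

so the makespan contains the sequence-independent factor $\prod_{k}(1+v_{[k]})$, while each $p_{[k]}$ is amplified by the growth rates of all jobs scheduled after it.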

Alexander Kononov

Approximation Algorithms

The paper deals with a class of approximation algorithms based on a tabu search technique, recommended for scheduling a flexible flow line. This line consists of a sequence of processing centers; each center has a number of identical parallel machines, there are intermediate buffers between centers, and parts flow through successive (not necessarily all) centers in the same order. The proposed algorithms are fast, easily implementable and show excellent performance, verified in computer tests.

Czesław Smutnicki

Constraint-Based Scheduling in Oz

It is discussed how scheduling problems can be solved in the concurrent constraint programming language Oz. Oz is the first high-level constraint language which offers an interface to implement new constraints in an efficient way using C++. Its multi-paradigm features, including programmable search, are unique in the field of constraint programming. Through the interface, algorithms from Operations Research and related fields can be incorporated. The algorithms can be combined through encapsulation into constraints and can communicate via shared variables. This is exemplified by the integration of new techniques based on edge-finding for job-shop and multi-capacitated scheduling. The viability of Oz as a platform for problem solving is also exemplified by a graphical scheduling workbench. The performance for job-shop problems is comparable to state-of-the-art scheduling tools.

Jörg Würtz

Production

Ein exaktes Verfahren zur kostenorientierten Fließbandabstimmung

After a characterization of the cost-oriented assembly line balancing problem, it is shown that by loading the stations maximally the cost-oriented optimum can be missed. Instead of loading the stations maximally, the criterion of the “2-station-rule” has to be used. For generating optimal solutions, an exact backtracking approach is presented in which modified and new bounding rules limit the enumeration process. Results of an experimental investigation show that the method can find an optimal solution for small and medium-size problem instances in acceptable time.

Matthias Amen

Kombinierte Mengen- und Preisoptimierung für einen Produktionsbetrieb

A simultaneous optimization of volumes and prices for the relevant products of an industrial enterprise was achieved using standard software on a personal computer. The relevant problem of data presentation was already considered in the mathematical modelling. A very good approximate solution of the non-linear problem could be achieved by solving a substitute linear model in an iterative way. Impressive possibilities for an increase in profit for the enterprise could be shown through a number of optimizing calculations.

Winfried Brecht

Solving Unit Commitment Problems in Power Production Planning

Unit commitment in production planning of power systems involves dynamic, mixed-integer programming problems. We show that specialized bundle methods applied to Lagrangian relaxations provide not only lower bounds on the optimal value (as do subgradient methods), but also certain relaxed primal solutions. These solutions can be used for constructing good primal feasible solutions. We present computational experience for large-scale examples.

Stefan Feltenmark, Krzysztof C. Kiwiel, Per-Olov Lindberg

Performance Evaluation of Repair Systems With Priorities

The paper describes a model for the performance evaluation of repair systems with priorities. Since for large problem sizes the calculations for the exact solution of our model are computationally prohibitive, several heuristic solution procedures are presented. We investigate the quality of these heuristics with the help of several examples.

Heinrich Kuhn, Ulrich A. W. Tetzlaff

CLAZZI - Ein PC-Programm zur Unterstützung von dezentralen und objektorientierten PPS-Systemen

In recent years, a strong tendency towards distributed, decentralized and object-oriented production planning systems can be recognized. Hence a reengineering of PPC systems becomes necessary; functional planning hierarchies are substituted by object-oriented hierarchies. Unfortunately, the literature gives few hints on how to support the process of aggregating objects. Therefore the PC program CLAZZI (CLusteranalysis Antizipating fuZZIness) was developed. CLAZZI enables the user to create the planning units in an interactive process. It includes an algorithm which was especially developed for hierarchical production planning and which is presented here. CLAZZI partitions allow flexibility to be maintained with respect to unexpected events. During the planning process, current information can be anticipated. Plan revisions are avoided and wrong decisions due to imperfect information are drastically reduced.

Peter Rausch

Die Incremental Order Quantity - Eine kritische Analyse

This paper deals with the Incremental Order Quantity (IOQ), a particularly user-friendly heuristic for lot-sizing problems of the Wagner/Whitin type which has been discussed repeatedly in the literature since the beginning of the 1980s. Opinions on the quality of an IOQ solution are divided, in particular because of contradictory results of various empirical studies. To clarify the performance of the IOQ, its solution behaviour is examined in detail both analytically and empirically. The investigation additionally comprises a critical discussion of several modifications and further developments of this method.

Andreas Recker

Koordination vernetzter Produktionsprozesse

The subject of this contribution is the development of a model for representing, analysing and designing the coordination processes between two or more subunits of a company or between company units and their environment. The distinction between passive and active coordination interfaces is made on the basis of the objects of the coordination process. Passive coordination is based on defined objects. In contrast, the decision process on transaction objects between company units and on their design is part of active coordination.

Bärbel Reuter

Planung von Lebenszykluskosten industrieller Produkte mit Hilfe der Fuzzy Linearen Optimierung

With the adoption of the Closed Substance Cycle and Waste Management Act (Kreislaufwirtschafts- und Abfallgesetz, KrW-/AbfG), the legislator has transferred product responsibility over the entire life cycle to the manufacturer. In the future the manufacturer is obliged to take back the products it has made at the end of their useful life and to channel them into high-quality recycling. For an early determination of the decision-relevant life cycle costs, the costs of disassembly and of recovering the disassembled assemblies and components must therefore be estimated already in the product development phase. The planning uncertainty present in this phase with regard to future technical and economic conditions can be taken into account with the help of fuzzy set theory. In this contribution, the disassembly and recycling costs to be internalised in the future are therefore modelled by means of fuzzy linear programming. Companies are thus enabled to estimate the future recycling costs of their products still under development at an early stage and to influence them in a targeted way.

Th. Spengler, O. Rentz

Holding Costs Minimization with Service Level Constraints in an Arborescent Distribution Network

A two-stage arborescent distribution network is considered in which final demand occurs merely at the finished goods level and inventory may be stored at either level. The system components are controlled by the well-known order-up-to-level policy. A constrained holding-cost minimization problem is analyzed for determining the optimal amounts to stock at each node of the given network, with customer service levels to be ensured simultaneously. The optimization model furthermore allows for different control intervals at the retailer level. Numerical results show that the analyzed optimization procedure produces high-quality policies with low computational effort.

Ulrich Tüshaus, Christoph Wahl

Planning the extraction and preparation of mineral resources

In the mineral industries, large quantities of material are extracted from natural deposits and processed into marketable mineral products. Customers demand a steady supply in defined qualities, while natural resources are inhomogeneous and technical constraints restrict the solution space of feasible production programs. Linear programming models are commonly used to solve this problem, but they do not suffice. Zero-Or-Range conditions on production amounts and sequencing constraints for scheduling purposes make the problem NP-hard. In addition to showing this, we present a common heuristic approach and demonstrate how it is used in a system for everyday planning in an underground hard coal mine.

Barbara Maren Winkler

Transportation

A combinatorial optimization approach to locate traffic counting points in a transport network

We define an integer linear programming model for deriving the complete set of traffic flows on a transport network based on the measurements provided by a suitably located minimal-cost set of counting points. We propose a pair of greedy heuristics that determine lower and upper bounds on the number of traffic sensors. We perform some preliminary numerical simulations in order to evaluate the computational performance of the discussed heuristics. Finally, we briefly discuss the main implications of the proposed model on Origin/Destination (O/D) trip matrix estimation and updating.

Lucio Bianco, Giuseppe Confessore, Pierfrancesco Reverberi

Models and Algorithms for Real-Time Control of Aircraft Landings

In this paper, models and algorithms for real-time control of the region around the airport are proposed. We consider the dynamic case in which the entry times of aircraft in the region are unknown and the sequence of aircraft must be recomputed whenever a new aircraft approaches the terminal area. A set of constraints which allow the controller to maintain given patterns of the landing sequences previously generated must be taken into account. Heuristic algorithms are proposed and computational results are discussed.

Lucio Bianco, Paolo Dell’Olmo, Stefano Giordani

Entwicklung eines logistischen Dienstleistungskonzepts für die Belieferung von Reisebüros mit Veranstalterkatalogen

The processes involved in the distribution of tour operators' catalogues in the travel industry are currently not structured efficiently. This situation urgently calls for change, from an economic as well as an ecological point of view. Starting from the current processes, a solution approach based on the concept of a logistics service provider is presented and examined with regard to its possible applications.

Joachim R. Daduna, Reinhard Koch, Peter Maciejewski

A New Heuristic for Vehicle Routing with Narrow Time Windows

This paper describes a new heuristic for vehicle routing problems with narrow time windows. The problem arises in the context of the delivery of groceries to restaurants. For most instances, the given time window distribution does not allow solutions in which no time restrictions are violated. The aim is to schedule most of the customers in time while building regionally bounded tours. The few remaining customers have to be scheduled manually. If the dispatcher decides to serve one or more of the remaining customers in time, he has to allow out-of-time deliveries for some of the automatically planned stops. The algorithm is based on a clustering procedure in which a tree with multiple node weights is divided into subtrees. Upper bounds restrict the sums of the weight functions in each subtree. This problem is NP-complete for trees with a restricted number of weight functions. A greedy algorithm is developed to determine the tree partition. For our application it is extended to a version which also checks whether each subtree can be routed with regard to the problem-specific requirements. Although the algorithm was developed for a specific real-world problem, the ideas can also be applied to other vehicle routing problems - even to those with more complicated constraints.

Anja Hamacher, Christoph Moll

Improving Vehicle Scheduling Support by Efficient Algorithms

In this paper, the minimum fleet size problem is investigated: find the minimum number of vehicles needed to serve a set of trips of a given timetable for a transportation system. First, we present an algorithm for the basic problem requiring only linear time after suitably sorting the input data. This improves on a quadratic-time greedy algorithm developed in [Su95]. Our algorithm was implemented and tested with real-life data, indicating good performance. Generated diagrams on vehicle standing times are shown to be useful for various tasks. Second, Min-Max results for the minimum fleet size problem are discussed. We argue that Dilworth’s chain decomposition theorem works only if unrestricted deadheading, i.e., adding non-profit ‘empty’ trips, is permitted, and thus its application to the case of railway or airline passenger traffic is misleading. To remedy this lack, we consider a particular network flow model for the no-deadheading case, formulate a Min-Max result, and discuss its implications - along with efficient algorithms - for vehicle as well as trip and deadhead trip scheduling.
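A minimal sketch of the basic counting idea when unrestricted, instantaneous deadheading is assumed, so that the minimum fleet equals the maximum number of trips running at any instant (this simplification is ours for illustration, not the algorithm of the paper):

```python
def minimum_fleet_size(trips):
    """trips: list of (departure_time, arrival_time) pairs.
    Sweep over the sorted departure/arrival events and record the peak
    number of simultaneously running trips."""
    events = []
    for dep, arr in trips:
        events.append((dep, 1))   # a vehicle becomes busy
        events.append((arr, -1))  # a vehicle becomes free
    # At equal times, process arrivals before departures so that a vehicle
    # arriving at time t may immediately take over a trip departing at t.
    events.sort(key=lambda e: (e[0], e[1]))
    busy = peak = 0
    for _, change in events:
        busy += change
        peak = max(peak, busy)
    return peak


print(minimum_fleet_size([(0, 10), (5, 15), (12, 20)]))  # 2
```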

Taïeb Mellouli

Macroeconomics, Economic Theory, Games

Multidimensional signalling and entry decision strategies

We define perfect Bayesian equilibria for a multiperiod multisignal game between a single incumbent firm and a potential entrant into a given market. We argue that searching for these equilibria is prevented by significant theoretical and computational issues. Then, we tackle the entry decision problem by an efficient sequential and incremental solution procedure that works on a Bayesian belief network and identifies a reasonable outcome of the game.

Domenico Campisi, Alberto Nastasi, Pierfrancesco Reverberi

Reducing the Number of Criteria in Quasi-convex Multicriteria Optimization

In this paper we prove a reduction result for the number of criteria in quasi-convex multiobjective optimization. This result states that to decide whether a point x in the decision space is Pareto optimal it suffices to consider at most n + 1 criteria at a time, where n is the dimension of the decision space. The main theorem is based on a geometric characterization of Pareto, strict Pareto and weak Pareto solutions.

Matthias Ehrgott, Stefan Nickel

Closed Form Solutions for a Game of Macroeconomic Policy in a Two-Party System

The very influential paper, Alesina (1987), models the interaction of political candidates with different objectives concerning inflation and unemployment as a repeated game. We derive the exact closed form expressions related to this game and compare them with the results obtained by Alesina, who used an approximation.

P. Jean-Jacques Herings

Statistics and Econometrics

Factor-GARCH Models for German Stocks — A Model Comparison —

This paper presents theoretical models and their empirical results for the movement of the returns of German stocks which are contained in the German stock index DAX. A factor structure is used in order to allow for a parsimonious modelling of the first two moments of return changes. Because of the autocorrelation in the squared series, the model is based upon dynamic factors following Generalized Autoregressive Conditional Heteroscedasticity (GARCH) dynamics. Three different GARCH specifications (GARCH(1,1)-M, IGARCH(1,1)-M and EGARCH(1,1)-M) and three assumptions about the distribution of the disturbances (Gaussian, Student’s t and Generalized Error Distribution) are applied to the data, resulting in nine different models altogether, which are compared with a factor model with constant means and variances.
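As a reminder of the baseline specification compared here, the conditional variance of a GARCH(1,1) factor follows (standard notation; the in-mean, IGARCH and EGARCH variants modify this equation and the mean equation accordingly):

$$\varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2,$$

with $\omega > 0$ and $\alpha, \beta \geqslant 0$; IGARCH imposes $\alpha + \beta = 1$, and the “-M” (in-mean) versions let the conditional variance enter the conditional mean of the return.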

Thomas Kaiser

Minimax sequential procedures for Markov renewal processes

The paper deals with the problem of finding minimax sequential procedures for Markov renewal processes. To find the optimal sequential procedures in some statistical models for these processes, a tool of exponential families is used. A minimax theorem is presented which is useful in deriving the optimal sequential estimation procedures for a multiparameter exponential model for stochastic processes, when a weighted squared error loss is used and the cost of observing the process is taken into account. Using this tool, a class of minimax sequential procedures is derived for estimating the ratios between transition probabilities of the embedded Markov chain and the mean value parameter of the additive part of the Markov renewal processes considered.

Ryszard Magiera

Bayes Empirical Estimation by the Method of Sieves with some Applications

Let X be a random variable with density function f(x|γ). Assume that the parameter γ is also a random variable with unknown a priori density θ(γ). In the paper, the problem of approximating the a priori density via sieves is considered. The estimate obtained is further used to approximate the Bayes estimator of γ and the Bayes risk by a Bayes empirical estimator and a Bayes empirical risk. A theorem on the almost sure convergence of the Bayes empirical sequence of estimators and of the Bayes empirical sequence of risks is presented.

Roman Różański

Marketing and Data Analysis

Part-worth Estimation Using Individual Hybrid Conjoint Analysis

Individual hybrid conjoint models are presented, which provide new approaches for obtaining part-worths at the individual level for problems involving large numbers of attributes. Model formulations and estimation algorithms for data analysis as well as procedures from experimental design theory for data collection are discussed. An empirical comparison shows to what extent the newer models outperform the traditional approaches.

Daniel Baier, Frank Säuberlich

Multiperiod price optimization in oligopolies considering reference price effects

Dynamic reference price models are often based on the assumption of a monopolistic market or incorporate prices of competitors in an indirect manner, e.g., by using relative instead of absolute values. We suggest a model which combines reference price effects and competitive behavior in oligopolistic markets. Some well-known models are included as special cases in our approach. Properties of the suggested model are discussed. The model is incorporated into a dynamic programming formulation.
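A common building block of such models, stated here only for orientation (generic symbols, not those of the paper): the reference price adapts to past prices by exponential smoothing and enters demand through a gain/loss term,

$$r_t = \theta\, r_{t-1} + (1-\theta)\, p_{t-1}, \qquad d_t = a - b\, p_t + c\,(r_t - p_t),$$

where $r_t$ is the reference price, $p_t$ the posted price, and $c\,(r_t - p_t)$ captures the perceived gain (or loss) when the price lies below (or above) the reference level.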

Michael Löffler, Wolfgang Gaul

Information and Decision Support Systems

Flugroutenplanung im Cargo Sektor

We describe the specific characteristics of production planning in the air cargo industry as well as the methodological basis for evaluating and generating flight schedules within a specific decision support system, Synopse.

J. Antes, U. Derigs, M. Zils

MESAP-III: An Information and Decision Support System for Energy Planning and Environmental Management

The increasing complexity of energy and environmental problems requires the processing of an expanding amount of information in order to arrive at sound conclusions and defendable policy recommendations. Decision support tools for energy and environmental planning have reached a tremendous diversity and quality of methodological approaches, but they are lacking convergence in information management. MESAP, the Modular Energy System Analysis and Planning environment developed at the University of Stuttgart, integrates modern information management and decision support tools with existing operations research methodologies for energy planning. The essential conceptual aspect of MESAP is the separation of the database design from the procedural flow of the mathematical modelling algorithms. The central MESAP database is designed as a case study information system and serves as a standardized database management system for process-engineering oriented energy system models. The standardization of energy system data facilitates the integration of different methodologies and reduces software engineering efforts. The other design principles of MESAP are flexibility concerning the structure and aggregation of the analyzed system, modularity of the involved methodologies, and user friendliness of the software. Today MESAP is used for local, regional and national energy planning, for life cycle and fuel chain analysis and for environmental accounting in enterprises.

Christoph Schlenzig, Rolf Kühner

Hierarchical Graphs for Model Building in Energy and Environmental Planning

This paper will present a hierarchical model building concept that has been used in the implementation of the Network Designer and Analyst (NDA), a graphical editor designed as a tool for model development and analysis in energy and environmental planning. The network editor NDA aims at supporting the user with a graphical model building concept for energy and environmental system models on the basis of the so-called reference energy system graph (RES). The RES is a network representation of the whole energy conversion chain, starting from extraction of primary energy, passing through intermediate energy conversion processes to reach different final energy demand sectors. Therefore, the NDA enables interactive modeling, running, and analysis of energy demand or supply models. It is shown how the RES approach corresponds to graph theory and how the model components can be structured in hierarchies. It can be shown that the chosen approach is especially useful to describe and to analyze large real-world systems. Additionally, layout algorithms are presented that convert textual model descriptions into proposals of graphical visualizations of the model topology.

Alf Schweiker, Uni Stuttgart

Supporting Planning and Operation Time Control in Transportation Systems

Providers of public transportation systems, like airlines and railway companies, are usually faced with a complex planning and control process consisting of several phases, like demand estimation, timetable construction, resource allocation, e.g., for vehicles and crews, and operation time control. The control phase includes rescheduling of connections according to corporate rules and resource reallocation. We discuss computer-based decision support systems capable of supporting such processes. Especially, the case of integrated planning and control phases for railway passenger traffic will be presented: The integrated system being developed involves techniques, like optimisation, simulation, and knowledge-based components as well as common user interface and object-oriented architecture. Results of the planning phase are forwarded to operation time control. Components developed to support the control phase can be used to improve planning quality by what-if analyses.

Leena Suhl, Taïeb Mellouli

Banking, Finance, Insurance

Analyzing the Long-Run Performance of Initial Public Offerings: An Empirical Investigation for Germany

Although initial public offerings (IPOs) have been intensively analyzed in the USA and some other countries, IPOs in Germany are not sufficiently discussed. This paper analyzes the post-issue long-run capital-market performance of German IPOs from the beginning of the eighties until the early nineties. The performance is investigated relative to three market indices and a sample of matching firms (MFs). In accordance with the Ritter (1991) analysis the cumulative average abnormal returns are calculated and tested for significant deviations from zero. The test shows that the IPOs perform neutrally over the whole sample period (1983–1993) compared with the broad market indices during the first five years after the issue, but over/underperform the market during the eighties/nineties. The IPOs, however, generally underperform the sample of matching firms.

Annemarie Sapusek

Maximum Loss for Risk Measurement of Portfolios

Effective risk management requires adequate risk measurement. A basic problem herein is the quantification of market risks: what is the overall effect on a portfolio if market rates change? First, a mathematical problem statement is given and the concept of “Maximum Loss” (ML) is introduced as a method for identifying the worst case in a given scenario space, called the “Trust Region”. Next, a technique for efficiently calculating ML for quadratic functions is described; the algorithm is based on the Levenberg-Marquardt theorem, which reduces the high-dimensional optimization problem to a one-dimensional root-finding problem. Following this, the idea of the “Maximum Loss Path” is presented: repetitive calculation of ML for a growing trust region leads to a sequence of worst cases, which form a complete path. Similarly, the paths of “Maximum Profit” (MP) and “Expected Value” (EV) can be determined. Comparing them permits judgements on the quality of portfolios. These concepts are also applicable to non-quadratic portfolios by using “Dynamic Approximations”, which replace arbitrary profit and loss functions by a sequence of quadratic functions. Finally, the idea of the “Maximum Loss Distribution” is explained. The distributions of ML and MP can be obtained directly from the ML and MP paths. They lead to lower and upper bounds of the true profit and loss distribution and allow statements about the spread of ML and MP.
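In symbols, the core problem for a quadratic profit-and-loss function can be stated as follows (a generic formulation of the setup described above; the particular matrices are assumptions):

$$\mathrm{ML} = \min_{x}\Bigl\{\, g^{\top}x + \tfrac12\, x^{\top} G\, x \;:\; x^{\top} \Sigma^{-1} x \leqslant r^2 \,\Bigr\},$$

i.e. the worst change in portfolio value over an ellipsoidal trust region of market-rate moves $x$; this is a trust-region subproblem, which is why Levenberg-Marquardt-type results reduce it to one-dimensional root finding in the Lagrange multiplier.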

G. Studer, H.-J. Lüthi
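The preceding abstract describes reducing the worst case of a quadratic profit-and-loss function over a trust region to a one-dimensional root-finding problem. The sketch below illustrates that general idea with the classical trust-region subproblem (spherical region, Lagrange multiplier found by root finding); the portfolio sensitivities are invented and this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq

def maximum_loss(g, Q, radius):
    """Worst case of the quadratic P&L  f(x) = g'x + 0.5 x'Qx  over ||x|| <= radius.

    Maximum loss = -min f(x); the minimization is the classical trust-region
    subproblem, solved by 1-D root finding in the multiplier lam of the
    constraint. (The degenerate 'hard case' is ignored in this sketch.)
    """
    eigvals = np.linalg.eigvalsh(Q)
    lam_low = max(0.0, -eigvals[0]) + 1e-12      # Q + lam*I must be positive definite

    def step_norm(lam):
        x = np.linalg.solve(Q + lam * np.eye(len(g)), -g)
        return np.linalg.norm(x)

    if eigvals[0] > 0 and step_norm(0.0) <= radius:
        x = np.linalg.solve(Q, -g)               # unconstrained minimizer is inside
    else:
        lam_high = lam_low + 1.0                 # bracket: step_norm decreases in lam
        while step_norm(lam_high) > radius:
            lam_high *= 2.0
        lam = brentq(lambda l: step_norm(l) - radius, lam_low, lam_high)
        x = np.linalg.solve(Q + lam * np.eye(len(g)), -g)

    loss = -(g @ x + 0.5 * x @ Q @ x)
    return loss, x

# Illustrative 3-factor portfolio (hypothetical linear and quadratic sensitivities).
g = np.array([1.0, -0.5, 0.2])
Q = np.array([[0.2, 0.05, 0.0],
              [0.05, -0.1, 0.0],
              [0.0,  0.0,  0.3]])
print(maximum_loss(g, Q, radius=2.0))
```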

Environment, Energy, Health

Waste Treatment in a Metal-Processing Plant

In this paper we present a case study of a waste-water treatment procedure for the zinc galvanization unit of a metal-processing company. During the galvanization process several types of waste-water arise in different quantities and concentrations and are stored temporarily in containers. To empty these containers, waste-waters may be mixed in certain ratios, possibly with the use of additional chemicals, and then disposed of. The aim of the developed management tool is to control the storage of the waste-waters (no container should be filled beyond capacity) and to minimize the amount of additional chemicals used. An on-line heuristic that meets these requirements satisfactorily is presented.

Rainer E. Burkard, Ulrich Pferschy, Rüdiger Rudolf
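To make the kind of blending decision described above concrete, here is a minimal single-period LP sketch: empty a required volume from the containers while neutralizing the mix with as little additional chemical as possible. All numbers (acid contents, stocks, neutralizing power) are invented, and this is a plain LP illustration, not the authors' on-line heuristic.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: three storage containers with acidic waste-water.
stock = np.array([40.0, 30.0, 50.0])   # m^3 currently stored per container
acid  = np.array([5.0, 1.0, 0.2])      # acid equivalents per m^3
neutralizing_power = 10.0              # equivalents neutralized per kg of chemical
min_discharge = 60.0                   # m^3 that must be emptied this period

# Variables: x1, x2, x3 (m^3 drawn from each container), c (kg chemical).
cost = np.array([0.0, 0.0, 0.0, 1.0])  # minimize chemical use only
A_ub = np.array([
    [-1.0, -1.0, -1.0, 0.0],                           # x1+x2+x3 >= min_discharge
    [acid[0], acid[1], acid[2], -neutralizing_power],  # mix must be neutralized
])
b_ub = np.array([-min_discharge, 0.0])
bounds = [(0.0, stock[0]), (0.0, stock[1]), (0.0, stock[2]), (0.0, None)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("withdrawals [m^3]:", res.x[:3], " chemical [kg]:", res.x[3])
```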

Restraining Public Health Expenditure

Health expenditure is undoubtedly one of the main components of public expenditure in Italy. The Italian Government, forced by its huge deficit, is therefore trying to restrain it in different ways: first, by seeking to improve efficiency by making the treatment of similar cases uniform; second, by introducing compulsory cost sharing (“tickets”) to reduce demand. The aim of this paper is to show how the performance of institutions providing health services differs across the Italian regions and to find some economic explanation for this situation. Empirical estimates verify the existence of a significant correlation between the amount of free health services supplied and their demand. This evidence seems to indicate that private expenditure will not completely substitute for reduced public spending.

Gianandrea Goisis, Paola Parravicini

Die kostentheoretische Bewertung von betrieblichen Umweltwirkungen

This contribution discusses various approaches to evaluating the environmental effects of industrial production. First, the quantitative environmental effects are assigned to their sources: the production factors, the production processes, and the products. For the ecological assessment of the environmental effects, general damage functions known from ecology are employed. With these data, the environmental effects can be integrated into cost-theoretic models and their influence on the attainment of economic objectives can be shown. For further planning purposes, a transfer-pricing system normalized to cash-based (pagatoric) costs is developed, which can help to steer the quantitative environmental effects in accordance with corporate objectives.

Peter Letmathe

Entwicklung Und Anwendung Eines Gemischt-Ganzzahligen Energie-Emissions-Modells

Traditional linear energy-environment models have certain shortcomings when applied to energy systems that are significantly influenced by single processes, for instance in small countries. In this paper, a newly developed energy-emission model is described that allows either an aggregated or an individual treatment of production and conversion units. With the new approach, a more detailed analysis of energy systems is possible; in particular, economic and technological aspects such as economies of scale, minimum and standard capacities, shut-down costs, etc. can be taken into account adequately. The methodology of the model is described, as well as first experiences from an application to the Slovenian energy system, which show higher system costs than the linear approach. Furthermore, new model developments, e.g. the consideration and evaluation of all greenhouse gases, are presented and possible model applications are discussed.

Oliver A. Lüth, Armin Ardone, Martin Wietschel, Otto Rentz
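The mixed-integer feature named above, minimum capacities of individual units, can be illustrated with a tiny unit-commitment-style sketch: each conversion unit is either off or operates between a minimum and a maximum output, demand must be met and an emission cap respected. The units, coefficients, and cap are hypothetical and the model is only a sketch of the modelling idea, not the authors' energy-emission model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical units: variable cost per MWh, emissions t/MWh, min and max output in MW.
units = {
    "coal": {"cost": 20.0, "emis": 0.9, "pmin": 100.0, "pmax": 400.0},
    "gas":  {"cost": 35.0, "emis": 0.4, "pmin":  50.0, "pmax": 300.0},
}
demand = 450.0        # MW to be covered
emission_cap = 300.0  # t per hour

prob = LpProblem("energy_emission_sketch", LpMinimize)
p = {u: LpVariable(f"p_{u}", lowBound=0) for u in units}      # output of unit u
on = {u: LpVariable(f"on_{u}", cat=LpBinary) for u in units}  # unit online?

prob += lpSum(units[u]["cost"] * p[u] for u in units)         # total generation cost
prob += lpSum(p[u] for u in units) >= demand                  # load balance
prob += lpSum(units[u]["emis"] * p[u] for u in units) <= emission_cap
for u in units:
    # Individual treatment of units: output is either 0 or within [pmin, pmax].
    prob += p[u] <= units[u]["pmax"] * on[u]
    prob += p[u] >= units[u]["pmin"] * on[u]

prob.solve()
for u in units:
    print(u, p[u].value(), "MW, online =", on[u].value())
```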

Ganzheitliche Bewertung von Produktionsprozessen durch multikriterielle Entscheidungsunterstützung

The assessment of production processes from ecological, technical, and economic points of view requires a concept that, in particular, takes account of differently scaled criteria that conflict with one another. While tools for assessing techno-economic indicators have already been discussed extensively, generally accepted evaluation methods based on life-cycle assessment still have to be developed. In this contribution, multicriteria decision models for selecting one or more investment alternatives are presented. Cleaning processes in surface treatment are examined as an example. To this end, the material and energy flows considered are extended by techno-economic indicators and subjected to a methodological comparison using an extended utility analysis as well as the outranking methods ELECTRE and PROMETHEE.

Thomas Spengler, Jutta Geldermann, Otto Rentz
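Of the outranking methods named in the preceding abstract, PROMETHEE II is compact enough to sketch: pairwise preference degrees per criterion are aggregated with weights, and alternatives are ranked by their net outranking flow. The criteria, weights, and preference thresholds below are invented, and a simple linear preference function is assumed; the actual study also uses an extended utility analysis and ELECTRE, which are not reproduced here.

```python
import numpy as np

def promethee_ii(scores, weights, thresholds):
    """PROMETHEE II net outranking flows.

    scores:     (n_alternatives, n_criteria), higher = better on every criterion
    weights:    criterion weights summing to 1
    thresholds: preference thresholds p_j of the linear preference function
                P_j(d) = min(1, max(0, d / p_j))
    """
    n, m = scores.shape
    pi = np.zeros((n, n))                        # aggregated preference indices
    for a in range(n):
        for b in range(n):
            d = scores[a] - scores[b]            # pairwise criterion differences
            pref = np.clip(d / thresholds, 0.0, 1.0)
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)          # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)         # negative (entering) flow
    return phi_plus - phi_minus                  # net flow; rank in descending order

# Hypothetical cleaning processes; criteria (cost, energy, emissions) negated
# so that "higher is better" on every criterion.
scores = np.array([[-120.0, -300.0, -4.0],
                   [-150.0, -220.0, -2.5],
                   [-100.0, -350.0, -5.0]])
weights = np.array([0.4, 0.3, 0.3])
thresholds = np.array([40.0, 100.0, 2.0])
print(promethee_ii(scores, weights, thresholds))
```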

Ökonomisch-Ökologisches Spannungsfeld, Problem der Abwässer in der Holzindustrie Und Lösungsmöglichkeiten Durch die Methode der Zieloptimierung (Goal Programming)

Among the costs associated with environmental protection at the company level, the costs of an alternative mode of production are particularly important. These are the costs that arise when, owing to environmental-protection requirements, the company has to forgo certain production alternatives that are advantageous from an economic point of view. In this contribution, the methodology for solving such problems is presented by means of a mathematical model. Using goal programming, it could be shown that the model solutions approach the stated economic-ecological goals satisfactorily. Although the model operates with hypothetical data, the solutions point to the possibility of its practical application once concrete measurement data on waste-water pollution are available.

Mirko Tratnik, Leon Oblak

On the Economic Efficiency of Energy Conservation Programs

The efficiency of demand side management (DSM) programs is one of the most topical and heavily debated issues in applied utility regulation. Although the aim of DSM is twofold (load management and conservation), this study, like most of the literature and the public debate, concentrates on conservation. The basic idea of conservation dates back to the seventies, presumably to Amory Lovins, who insisted that “a kilowatt-hour saved is just like a kilowatt-hour generated…so that they should be treated alike.” This claim, plus the promise of substantial profits, Lovins (1985), ‘making gigabucks with negawatts’, established conservation as a legitimate intervention.

Franz Wirl

Neural Networks and Fuzzy Systems

Ein neuronales Netz zur nichtlinearen Volatilitätsschätzung

The estimation of volatility, alongside the estimation of the expected value, plays a decisive role in the valuation and pricing of options and in the development of option strategies. In this contribution, in addition to the nonlinear GARCH models known from statistics, a neural network for describing volatility over time is presented. The volatility of the Bund future estimated with this network is compared with that of the GARCH process.

Bernd Freisleben, Klaus Ripper
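The GARCH benchmark referred to above can be made concrete with a minimal GARCH(1,1) filter fitted by maximum likelihood. The returns below are simulated stand-ins (the Bund future data are not reproduced here), and the neural network of the paper is not sketched; this only illustrates the conventional benchmark model.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_variance(params, returns):
    """Conditional variance recursion of a GARCH(1,1) model."""
    omega, alpha, beta = params
    var = np.empty_like(returns)
    var[0] = returns.var()
    for t in range(1, len(returns)):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    return var

def neg_log_likelihood(params, returns):
    var = garch11_variance(params, returns)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + returns ** 2 / var)

# Simulated demeaned daily returns in place of the Bund future series.
rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(1000)

res = minimize(neg_log_likelihood, x0=[1e-5, 0.05, 0.90], args=(r,),
               bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)],
               method="L-BFGS-B")
omega, alpha, beta = res.x
sigma_t = np.sqrt(garch11_variance(res.x, r))   # estimated volatility path
print("omega=%.2e alpha=%.3f beta=%.3f" % (omega, alpha, beta))
```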

Applying Fuzzy Clustering for a Better Representation of the Total Scenario Space

In scenario analysis we are looking for a few consistent, but quite different, pictures of the future concerning a given problem. Not just the pictures themselves but also the path from today to the forward-looking situation shall be explained. Agglomerative classification methods are already used to find the right number of final scenarios and to determine the final scenarios themselves. Here we examine whether these questions can be answered better with the help of the fuzzy c-means clustering algorithm and whether fuzzy classification results can support interpretation and decision making in that context. Two different approaches are presented.

Magdalena Mißler-Behr
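The standard fuzzy c-means iteration mentioned above fits in a few lines; instead of crisp cluster labels it yields membership degrees, which is the property exploited for scenario interpretation. The data below are random stand-ins for scenario-describing vectors; the sketch is the textbook algorithm, not either of the two approaches developed in the paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: returns the membership matrix U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per object
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2.0 / (m - 1.0)))    # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical scenario descriptions, two loose groups of projections.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (25, 4))])
U, centers = fuzzy_c_means(X, c=2)
print("membership degrees of the first scenario:", U[0])
```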

Optimization models of production of a coal-mining company

This paper deals with modeling the production plan of a coal-mining company in the Northern Moravia coal region. The usual (crisp) LP model proved to be infeasible, which led to a soft formulation, i.e., an LP model with fuzzy right-hand sides of the constraints and, alternatively, to a goal-programming LP model. Simplified versions of both models were solved on a PC with the Excel Solver using real data. In the paper, mathematical formulations of the models are presented, some numerical results of simulations are discussed, and both approaches are compared.

Jaroslav Ramík, Jana Hanèlová
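The goal-programming reformulation used when the crisp model is infeasible can be sketched briefly: hard right-hand sides become goals with deviation variables, and the weighted under-achievement is minimized. The production figures, capacities, and weights below are invented and only illustrate the modelling device, not the company's actual model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical plan for two coal qualities x1, x2 (kt). The crisp requirements
#   x1 + x2 >= 100 (output)   and   5*x1 + 8*x2 >= 600 (revenue)
# are infeasible under the capacities x1 <= 60, x2 <= 30, so both are turned
# into goals with under/over deviation variables u_i and o_i.
# Variables: [x1, x2, u1, o1, u2, o2]; minimize weighted under-achievement.
weights = [2.0, 1.0]                        # priority of output vs. revenue goal
cost = np.array([0, 0, weights[0], 0, weights[1], 0], dtype=float)

A_eq = np.array([[1, 1, 1, -1, 0, 0],       # x1 + x2 + u1 - o1 = 100
                 [5, 8, 0, 0, 1, -1]],      # 5x1 + 8x2 + u2 - o2 = 600
                dtype=float)
b_eq = np.array([100.0, 600.0])
bounds = [(0, 60), (0, 30), (0, None), (0, None), (0, None), (0, None)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, u1, o1, u2, o2 = res.x
print(f"plan: x1={x1:.1f}, x2={x2:.1f}; output short by {u1:.1f}, "
      f"revenue short by {u2:.1f}")
```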

Fuzzy Logic in Standard-Simulationssystemen

With the help of simulation, information about the dynamic behaviour of the systems under consideration can be obtained through model experiments (cf. Law/Kelton 1991, p. 1 ff.). The use of simulation is particularly appropriate in cases where observations and experiments on the real system are out of the question, or where analytical models and methods do not permit an adequate treatment owing to the complexity of the problem (cf. Liebl 1995, p. 195 ff.). The high degree of flexibility in modelling and the development of powerful computers in recent years form the basis for a growing number of applications of simulation as an instrument for supporting business decision processes.

Peer Völkner

Control Theory

About “Obstacles” In The Problem Of Optimal Control For System With Deviating Argument

In the report a canonical problem of optimal control with deviating argument is considered: minimize the functional
$$J = J(x(t_0),\, x(t_1)) \qquad (1)$$
subject to
$$\dot x(t) = g\bigl(t,\, u(q_1(t)), \ldots, u(q_s(t)),\, x(q_1(t)), \ldots, x(q_s(t))\bigr), \quad t \in [t_0, t_1], \qquad (2)$$
$$\dot x(t) = \varphi(t), \quad t \in \mathbb{R} \setminus [t_0, t_1], \qquad (3)$$
$$\kappa(x(t_0),\, x(t_1)) = 0, \qquad (4)$$
$$u(t) \in U \ \text{for almost every } t \in \mathbb{R}, \quad U \subseteq \mathbb{R}^m. \qquad (5)$$

L. Beklaryan

Numerical Solution of Optimal Control Problems in Macroeconomics and Microeconomics: Are Direct or Indirect Methods More Favorable?

Governments of industrial countries try to control, among other things, gross domestic product, inflation and the unemployment rate with various legal, regulatory and policy changes, e.g. of monetary policy, tax law or trade policies. Analogously, the managements of companies have to decide whether to purchase products, to hire or fire workers, to sell or buy bonds, or to collect welfare payments in non-worker programs. For both macroeconomic and microeconomic decision-making, appropriate mathematical models in economics, finance, accounting, management science, marketing and organization behavior enable a quantitative and qualitative analysis and forecasting, e.g. of business cycles. Thus useful information is available to economic policy-makers and to the managements of companies.

Michael H. Breitner

Discrete Approximation of Nonlinear Controls

The basic optimal control problem with bounded measurable controls and absolutely continuous trajectories is approximated by a sequence of finite-dimensional (discrete) problems. Using the notion of the discrete convergence of elements and operators, conditions are presented which guarantee the discrete convergence of trajectories and the weak discrete convergence of controls.

Riho Lepp
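The elementary construction behind such approximations, replacing the control problem by a sequence of finite-dimensional problems, can be sketched with an explicit Euler discretization of a simple test problem solved as a nonlinear program. The example problem and its data are invented; the paper's contribution is the convergence analysis, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Elementary test problem: minimize  J(u) = int_0^1 ( x(t)^2 + u(t)^2 ) dt
# subject to  x'(t) = x(t) + u(t),  x(0) = 1,  |u(t)| <= 1.
# The discrete problems use piecewise constant controls and Euler steps.

def discrete_objective(u, N):
    h = 1.0 / N
    x, J = 1.0, 0.0
    for k in range(N):
        J += h * (x ** 2 + u[k] ** 2)      # rectangle rule for the integral
        x = x + h * (x + u[k])             # explicit Euler step
    return J

for N in (10, 40, 160):                    # a sequence of discrete problems
    res = minimize(discrete_objective, x0=np.zeros(N), args=(N,),
                   bounds=[(-1.0, 1.0)] * N, method="L-BFGS-B")
    print(f"N={N:4d}  J_N={res.fun:.5f}")
```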

Optimal Fiscal Policies for Austria

Using the optimum control algorithm OPTCON, we determine approximately optimal fiscal policies for Austria for the period 1995 to 2000 within the framework of a problem of quantitative economic policy. An intertemporal objective function is minimized subject to the constraints of the macroeconometric model FINPOL3, estimated for Austria using 3SLS. Exogenous variables of the model are forecast by time series methods. The results show that optimal fiscal policies can improve upon the performance of the Austrian economy with respect to some policy objectives as compared to a simulation using extrapolations of policy variables.

Reinhard Neck, Sohbet Karbuz
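The general structure of such a quantitative policy problem, an intertemporal quadratic objective minimized subject to a dynamic model, can be sketched with a toy linear model and a single fiscal instrument. The coefficients, targets, and horizon below are invented and bear no relation to FINPOL3 or to the OPTCON algorithm itself; the sketch only shows the formulation being solved numerically.

```python
import numpy as np
from scipy.optimize import minimize

# Stylized model x_t = A x_{t-1} + B u_t + c, with x = [GDP gap, inflation]
# and u a single fiscal instrument; hypothetical coefficients.
A = np.array([[0.7, -0.2],
              [0.1,  0.8]])
B = np.array([[0.5],
              [0.1]])
c = np.array([0.2, 0.3])
x_init = np.array([-1.0, 2.0])
T = 6
x_target = np.zeros(2)                  # desired GDP gap and inflation
u_target = np.zeros(1)                  # "neutral" fiscal stance
W, R = np.diag([1.0, 1.0]), np.diag([0.2])

def welfare_loss(u_flat):
    """Intertemporal quadratic objective of the policy problem."""
    u = u_flat.reshape(T, 1)
    x, loss = x_init, 0.0
    for t in range(T):
        x = A @ x + B @ u[t] + c
        loss += (x - x_target) @ W @ (x - x_target)
        loss += (u[t] - u_target) @ R @ (u[t] - u_target)
    return loss

res = minimize(welfare_loss, x0=np.zeros(T))
print("approximately optimal instrument path:", np.round(res.x, 3))
```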

Optimal Control By Heat Flow In Continuous Casting Steel

An optimal control problem for the cooling in continuous casting of steel is considered. The quality of a steel ingot depends essentially on the temperature gradients in the hardening process; minimizing these temperature gradients is the purpose of the control. The objective functional is defined in the hard phase of the cylindrical ingot, while the control acts on the boundary of the ingot. A restriction on the temperature at the end of the ingot is realized by a penalty functional. The objective functional is minimized directly by an extremal method with a regulated direction of descent.

V. K. Tolstykh, N. A. Volodin

Simulation

Simulation und Fuzzy-Ansätze in der Fertigungssteuerung

The subject of this contribution is the machine scheduling problem of production control in a flexible manufacturing system. Fuzzy approaches and coordinating interaction rules are presented with which substantially better control strategies can be generated than with simple heuristic priority rules. The fuzzy technique incorporates expert knowledge into the solution process and copes with problems of uncertainty and vagueness. Interaction rules aim at a more even flow of orders than simple priority rules. To evaluate the control strategies, a model of the manufacturing system is used, which is embedded in the concept of a simulation-based production control.

Jochen Beyer, Peter Gmilkowsky

Simulationsanalyse einer automatischen Leiterkartenbestückung zur Minimierung von Rüstzeiten und Maximierung des Durchsatzes

The subject of this contribution is the loading and setup strategy in automated printed circuit board assembly. By using a new algorithm for generating fixed setups, it was possible to reduce setup times and the variance of lead times considerably, to increase the throughput of the plant, and to lower buffer stocks without impairing capacity utilization. The algorithm was fed with real data from a target company, and the computed fixed setups were tested in a simulation model of the real production plant.

Claus-Burkard Böhnlein

Zeitdynamische Simulation zur Produktionsplanung - Erfahrungsberichte aus der industriellen Anwendung: Simulationssysteme schaffen eine neue Planungsqualität in der Produktionssteuerung

In contrast to conventional PPS systems and control stations, simulation systems are able to schedule orders in a situation-dependent way and against multiply limited capacities. This new quality of planning reduces the number of necessary interventions and the effort required for control.

Bodo Fritsche

Practical OR (Application Reports)

Sensitivity Analysis in Facility Location Applied to a Depot Location Problem of a Food Producer

Solving facility location problems requires not only computing optimal or near-optimal solutions but also performing a sensitivity analysis. It is necessary to provide insight into the behaviour of the total cost in dependence on the number of facilities to locate, and into the possible variations of the data which do not affect the optimality of a solution. Such information is needed because locational decisions have a long-term planning horizon and the cost and demand data are subject to unforeseeable changes. Furthermore, only information about the cost curve in the neighborhood of the optimum allows the decision maker to assess the consequences of a deviation from the optimal solution, which may be desirable for certain reasons. For the uncapacitated facility location problem (UFLP), such a sensitivity analysis of the fixed and variable costs can be done by means of Lagrangean relaxation. We briefly sketch the major theoretical results on necessary and sufficient conditions for the optimality of the Lagrangean multiplier, which form the basis for applying the tangential approximation method to compute the convex hull of the total cost curve. The use of this analysis is then demonstrated on the depot location problem of a food producer.

Andreas Klose, Paul Stähly
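The Lagrangean relaxation referred to above can be illustrated on a toy UFLP instance: dualizing the assignment constraints makes the subproblem separate by facility, and plain subgradient ascent improves the resulting lower bound. The depots, customers, and costs below are invented, and the sketch shows only the generic bounding step, not the authors' optimality conditions or the tangential approximation method.

```python
import numpy as np

def uflp_lagrangean_bound(fixed, cost, lam):
    """Lagrangean lower bound of the UFLP with the assignment constraints
    sum_i x_ij = 1 dualized with multipliers lam_j."""
    reduced = np.minimum(cost - lam, 0.0)          # keep only profitable assignments
    rho = fixed + reduced.sum(axis=1)              # value of opening facility i
    open_i = rho < 0.0
    bound = lam.sum() + rho[open_i].sum()
    # Subgradient component: 1 - (number of open facilities serving customer j).
    served = (reduced < 0.0)[open_i].sum(axis=0)
    return bound, 1.0 - served

# Hypothetical instance: 3 candidate depots, 6 customers.
fixed = np.array([12.0, 10.0, 15.0])
cost = np.array([[2, 4, 5, 7, 8, 6],
                 [6, 3, 2, 4, 7, 8],
                 [7, 8, 6, 3, 2, 2]], dtype=float)

lam = cost.min(axis=0).copy()                      # simple multiplier start
best = -np.inf
for it in range(200):                              # plain subgradient ascent
    bound, g = uflp_lagrangean_bound(fixed, cost, lam)
    best = max(best, bound)
    lam = lam + (2.0 / (it + 1)) * g               # diminishing step size
print("best Lagrangean lower bound:", round(best, 2))
```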

Locating Depots for a Food Producer by Solving Uncapacitated Facility Location Problems

We discuss the solution of a depot location problem of a food producer in Switzerland. The overall structure of the distribution system was predetermined by the management to be a two-stage system (plant - depots - customers), with the first stage (transportation of the goods from the plant to the depots) carried out as full truck loads and the second stage (from depots to customers) in a less-than-truck-load mode. The problem was formulated as an uncapacitated facility location problem. For this model, we discuss the determination of the necessary data basis, taking into account especially the specific transportation mode on the second stage.

Ulrich Tüshaus, Stefan Wittmann

Rendite und Marge - vom richtigen Umgang mit mangelhaften Konzepten

This practice report shows how the concept of yield can be used profitably in the capital market despite, or even because of, its shortcomings. First, the shortcomings that yield exhibits precisely in its most basic task, the comparison of interest-rate instruments and hence the calculation of margins, are presented. Then, based on actual transactions from practice, the arbitrage opportunities that arise from these shortcomings are shown. The report concludes with a brief description of how yield is handled operationally.

Markus Weick

Backmatter
