
Operations Research Proceedings 1999

Selected Papers of the Symposium on Operations Research (SOR ’99), Magdeburg, September 1–3, 1999

  • 2000
  • Book

Table of Contents

Frontmatter
Optimal Design of Discrete Engineering Structures Resisting Effects of Material Degradation

In this first section we relate two approaches to evaluating the maximal effect of degradation in a given discrete or discretized mechanical structure. The first model is based on classical approaches, while the second has been proposed only recently (cf. [3]). The new model leads to a convex optimization problem and can thus serve as the kernel of a model problem in which the design is optimized with respect to extremal degradation (cf. Sec. 2). We note that neither model explicitly reflects the material-related causes of degradation as studied in the field of Damage Mechanics (although there are many analogies in the modelling; cf. [8, 9]).

W. Achtziger, M. P. Bendsøe
A Bundle Trust Region Algorithm for Bilinear Bilevel Programming

Bilevel programming is a model for a static, two-person game in which each player tries to optimize his or her individual objective function subject to a set of interdependent constraints [2]. Play is sequential and assumed to be uncooperative. The decision variables are partitioned amongst the players, and the choices of one affect the payoffs and choices available to the other. However, neither player can completely dominate his opponent, nor are solutions likely to be Pareto-optimal as they are in a more traditional two-person game. Applications include government regulation, management of a decentralized firm, and the standard min-max problem (e.g., see [10]). In each of these situations, the leader goes first, and in selecting a strategy, must anticipate all possible reactions of the follower.
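
In generic form (a standard textbook formulation, not the authors' specific bilinear model), a bilevel program reads

\[
  \min_{x \in X} \; F(x, y)
  \quad \text{s.t.} \quad
  y \in \operatorname*{arg\,min}_{y' \in Y(x)} f(x, y'),
\]

where the leader commits to x first and the follower then reacts with a y that is optimal for the lower-level problem parameterized by x.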

S. Dempe, J. F. Bard
Convergence Analysis for General Penalty Path-Following Newton Iterations

In the present paper we study rather general barrier and penalty methods (e.g., logarithmic barriers, SUMT, exponential penalties), which define a continuously differentiable primal and dual path, applied to linearly constrained optimization problems. In particular, the radius of convergence of Newton’s method is estimated as a function of the barrier and penalty parameter. Unlike most logarithmic barrier analyses, which make use of self-concordance properties (cf. [6], [10], [11]), here the convergence bounds are derived by an approach developed in [1] via direct estimation of the solutions of the Newton equations (compare also [13]). Parameter selection rules are established which guarantee the overall convergence of the considered barrier and penalty techniques with only a single Newton step at each parameter level. Moreover, the obtained estimates support a scaling method which uses approximate dual multipliers, as available in barrier and penalty methods.
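
As a hedged illustration of the setting (the plain logarithmic barrier case only; the paper treats a wider family including SUMT and exponential penalties), for the linearly constrained problem \(\min f(x)\) subject to \(a_i^\top x \ge b_i\) one follows the minimizers of

\[
  B_{\mu}(x) \;=\; f(x) \;-\; \mu \sum_{i} \log\bigl(a_i^\top x - b_i\bigr),
  \qquad \mu > 0,
\]

applying a single Newton step \(x_{+} = x - \bigl[\nabla^2 B_{\mu}(x)\bigr]^{-1} \nabla B_{\mu}(x)\) before each reduction of \(\mu\).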

C. Grossmann
A Regularization Approach for Variational Inequalities with Pseudo-Monotone Operators

This paper is motivated by the observation that the classical regularization method is not efficient enough to regularize variational inequalities with noncoercive pseudo-monotone operators. The main reason for its deficiency is the fact that a perturbation of a pseudo-monotone operator need not be pseudo-monotone. In this contribution we study variational inequalities with noncoercive pseudo-monotone operators. Instead of the classical regularization method, we use a different approach to regularize the problem, based on the theory of linear compact operators.

Vyacheslav Kalashnikov, Akhtar Ali Khan
A Curvilinear Pseudo-Newton Algorithm for Nonlinear Programming

A modified pseudo-Newton algorithm for equality constrained nonlinear programming problems that uses curvilinear searches and a step size rule is described. The algorithm chooses a curvilinear search according to the constraint violations. The step size rule is based on an implicit formula that is implemented as a predictor-corrector scheme. Numerical results are given to illustrate the performance of the algorithm.

M. Teresa T. Monteiro, Edite M. G. P. Fernandes
Pathfollowing and Jumps in Bilevel Programming
S. Vogel, S. Dempe
Multiobjective Duality for Convex-Linear Problems

We consider a general vector minimum optimization problem with convex objective functions and linear inequality constraints. By means of linear scalarization and the Fenchel-Rockafellar duality approach for the scalarized problem, a multiobjective dual maximum problem is constructed. The investigations are devoted to weak and strong duality assertions. Moreover, optimality conditions are verified for properly efficient and efficient solutions.

Gert Wanka, Radu-Ioan Bot
Maximum Representation of Some Convex Marginal Functions Without Constant Rank Regularity

We consider a convex optimization problem in two groups of variables x and y, where the objective function and the constraints are jointly convex and continuously differentiable. Assuming the strong second order sufficient optimality condition (SOC) and the constant rank constraint qualification (CR), and using a decomposition approach with the variables x at the lower level, it is known that the marginal function ϕ(y) can locally be represented as the maximum of a finite number of convex functions f_i(y). Hereby the (locally defined) functions f_i depend on the point y, and no such representation of ϕ is known in the neighborhood of points y where (CR) is violated. But in the particular case under consideration, better properties can be obtained without assuming (CR). For this, we use an extension technique and construct a special convex differentiable intermediate function with values between two arbitrary convex functions. In this way, the functions f_i(y) can be rearranged and partially redefined such that a global maximum representation holds where all functions f_i are defined and differentiable on int dom(ϕ).

U. Würker
Polyatomic Discrete Tomography — Polyhedral Aspects

Computerized Tomography (CT) using X-rays is one of several methods that provide information about the molecular structure of three-dimensional objects. Examples of other techniques are ultrasonics, neutron beams, and nuclear magnetic resonance. CT, however, is a good setting in which to describe the underlying mathematical problems. The main task is to reconstruct the three-dimensional structure. In CT the mathematical problems are nowadays well understood, and efficient reconstruction algorithms have been developed and used successfully in practice.

René Brandenberg
Orienting Dart-free Clique-Helly Chordal Graphs

In this paper we acyclically orient dart-free clique-Helly chordal graphs such that each directed path is contained in at most two maximal cliques. As shown by the authors in previous works, this allows us to give performance-guarantee approximation results for a wide class of optimization problems.

Giuseppe Confessore, Paolo Dell’Olmo, Stefano Giordani
On Some Approximation Algorithms for Dense Vertex Cover Problem

In this work we study the performance of some algorithms approximating the vertex cover problem (VCP) with respect to density constraints. We consider a modification of the algorithm of Karpinski and Zelikovsky [9] for the weighted VCP and prove a performance guarantee for this algorithm in terms of a new density parameter. We also investigate the hardness of approximation for the VCP on everywhere ε-dense graphs and show that it is NP-hard to approximate within a certain factor.

Anton V. Eremeev
A Probabilistic Analysis of an Approximation Algorithm for the Minimum Weight Spanning Tree Problem with Bounded from Below Diameter

This paper is devoted to the minimum weight spanning tree problem with diameter bounded from below. In the general case the problem is NP-hard. We consider a complete undirected weighted graph with vertex set {1, …, n}. A polynomial approximation algorithm with O(n²) running time is suggested and its probabilistic analysis is performed. It is supposed that the edge weights are chosen randomly, independently, and uniformly, either from a segment (a_n, b_n), a_n > 0, or from the set {1, …, r_n}, for the continuous and the discrete case respectively. We investigate the case b_n/a_n ≤ n/ψ_n (respectively r_n ≤ n/ψ_n) for the diameter bound function d_n = n − n/ψ_n, where ψ_n = o(log n) and ψ_n → ∞ as n → ∞. In this case it is shown that the constructed algorithm guarantees a relative error not exceeding (log ψ_n − 1)/ψ_n with probability at least 1 − exp(−0.25 n/ψ_n), i.e., it is asymptotically optimal.

E. Kh. Gimadi, A. I. Serdyukov
Approximating Stable Sets Using the ϑ-function and Cutting Planes

We investigate an approximation algorithm for the maximum stable set problem based on the Lovász number ϑ(G) as an initial upper bound. We strengthen this relaxation by adding two classes of cutting planes, odd circuit and triangle inequalities. We present computational results using this tighter model on several classes of graphs.

Gerald Gruber, Franz Rendl
Optimizing Generalized Profiles for Enhanced Discrimination of Biomolecular Sequences

This paper presents a robust and efficient parameter identification method for generalized profiles for motif recognition in biomolecular sequences. It is based on solving a nonlinear optimization problem that uses samples of positive as well as negative sequences. Particular care is taken to enhance discriminating power by including constraints that prevent over-fitting.

Nicolas Moeri, Thomas M. Liebling, Philipp Bucher
Polyhedral Aspects of the Consecutive Ones Problem

A 0/1-matrix has the consecutive ones property for rows if its columns can be permuted in such a way that in every row all ones appear consecutively. Whereas it is easy to decide whether a given matrix has the consecutive ones property, it is difficult to find for a given 0/1-matrix B a consecutive ones matrix A that resembles B as closely as possible (in a sense to be specified). In this paper we study the latter problem from a polyhedral point of view and discuss an integer programming formulation that is the basis for a branch-and-cut algorithm.
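
To make the property concrete, here is a minimal brute-force check of the decision version in Python (an illustrative sketch with hypothetical function names; practical recognition uses linear-time PQ-tree algorithms, and the paper's subject is the harder approximation question):

from itertools import permutations

def has_c1p(matrix):
    """Decide the consecutive ones property for rows by trying every
    column permutation of a small 0/1-matrix (exponential; for
    illustration only)."""
    n_cols = len(matrix[0])
    for perm in permutations(range(n_cols)):
        if all(ones_consecutive([row[j] for j in perm]) for row in matrix):
            return True
    return False

def ones_consecutive(row):
    """True if all ones in the row form one contiguous block."""
    ones = [i for i, v in enumerate(row) if v == 1]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

# Swapping the last two columns fixes the first matrix; no column
# permutation can fix the second one.
print(has_c1p([[1, 0, 1], [0, 1, 1]]))             # True
print(has_c1p([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # False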

Marcus Oswald, Gerhard Reinelt
Solving One-Dimensional Cutting Stock Problems Exactly with a Cutting Plane Algorithm

The one-dimensional cutting stock problem (CSP) is as follows: given an unlimited number of pieces of identical stock material of length L, the task is to cut b_i pieces of length ℓ_i for i ∈ I = {1, …, m} while minimizing the number of stock material pieces needed. Let ℓ = (ℓ_1, …, ℓ_m)^T and b = (b_1, …, b_m)^T. Without loss of generality, let L ≥ ℓ_1 > … > ℓ_m > 0 and b_i > 0 for i ∈ I.
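
For orientation, a classical (Kantorovich-type) integer programming formulation of the CSP, with K an upper bound on the number of stock pieces used, is

\[
\begin{aligned}
  \min \; & \sum_{k=1}^{K} y_k \\
  \text{s.t.} \; & \sum_{k=1}^{K} x_{ik} \ge b_i, && i \in I, \\
  & \sum_{i \in I} \ell_i \, x_{ik} \le L \, y_k, && k = 1, \dots, K, \\
  & y_k \in \{0, 1\}, \quad x_{ik} \in \mathbb{Z}_{\ge 0},
\end{aligned}
\]

where x_{ik} counts the pieces of length ℓ_i cut from stock piece k. This is one of several standard models, given here for context only; the paper's cutting plane algorithm works with its own formulation.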

Guntram Scheithauer
Graph Drawing: the Aesthetics-Complexity Trade-Off

The visualization of graphs is a key component of many applications in science and engineering. When drawing a graph, we would like to take into account several aesthetic criteria. For example, planarity and the display of symmetries are often highly desirable in visualization applications. Also, to avoid wasting space on the screen, it is important to keep the area of the drawing small, subject to resolution rules. Further, it is desirable to avoid long edges with bends as much as possible. Many graph drawing problems can be formalized as optimization problems. Solving such problems better or worse means constructing drawings with better or worse aesthetic features. On the other hand, finding satisfactory solutions very often requires substantial computational resources. For this reason, research in graph drawing continuously explores the aesthetics-complexity trade-off. Examples can be found in the construction of drawings with few bends along the edges, where the commonly used techniques cover a large time-complexity interval that includes polynomial-time and exponential-time algorithms.

Giuseppe Di Battista
The Smallest Hard-to-Color Graphs for Sequential Coloring Algorithms

For a given approximate coloring algorithm a graph G is said to be hard-to-color (HC) if every implementation of the algorithm uses more colors than the chromatic number. For a collection of such algorithms G is called a benchmark if it is HC for every algorithm in the collection. In the paper we study the performance of six sequential algorithms when used in two models of coloring: classical and chromatic sum. We give the smallest HC graph for each of them and two collective benchmarks for both models of coloring.
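
To illustrate what an "implementation" means here, the following Python sketch (illustrative names, not the paper's benchmarks) runs first-fit sequential coloring under two vertex orders of a path with chromatic number 2; a graph is hard-to-color only if every order wastes colors, so this path is not HC:

def greedy_coloring(adjacency, order):
    """First-fit sequential coloring: visit vertices in the given order
    and give each the smallest color unused by its colored neighbors."""
    colors = {}
    for v in order:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = next(c for c in range(len(adjacency)) if c not in taken)
    return colors

# Path a-b-c-d, chromatic number 2.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(max(greedy_coloring(path, 'abcd').values()) + 1)  # 2: this order is optimal
print(max(greedy_coloring(path, 'adbc').values()) + 1)  # 3: this order wastes a color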

Marek Kubale, Krzysztof Manuszewski
Optimal Control in Economical and Technical Problems

Indirect numerical methods use maximum principles to solve optimal control problems. In some examples, the existence of optimal measurable controls can be proved without convexity of the orientor field. Relaxed problems are exploited. Models of flight dynamics and dynamic production models of Leontief type are discussed, as well as an abstract process governed by a Hammerstein integral equation. An example shows that relaxation gaps can appear. Bioreactor models are formulated as control problems, and a tracking problem is formulated.

W. H. Schmidt
Über Probleme der optimalen Steuerung mit einer Quasivariationsungleichung
Helmut Dietrich
Robust Control of Continuous Fermentation Processes Described by Monod-type Model with Delay

A discrete nonlinear problem for the control synthesis of continuous fermentation processes is formulated, and a method for robust control synthesis is developed. The method consists of an optimal robust control design and an input/output linearization via nonlinear state feedback. The resulting total control is robust. The method is demonstrated by numerical simulations on the example of a Saccharomyces cerevisiae fermentation described by a Monod-type model with delay.

R. Tsoneva, T. Patarinska
Asset-liability Optimization for Pension Fund Management

We describe the problem of optimal asset allocation for a pension fund as a stochastic dynamic programming problem. We discuss the model generation, the formulation of the objective and solution methods. The original problem is nonconvex, but convex approximations can be found.

G. Ch. Pflug, A. Świȩtanowski
Multistage Stochastic Programming; Stability, Approximation and Markov Dependence

A general (M+1)-stage (M ≥ 1) stochastic programming problem can be considered either as an optimization problem with respect to some abstract mathematical space (say L_p, p ≥ 1) or recursively as a sequence of nested problems (see e.g. [1]).

Vlasta Kaňková
Parallel Iterative Proportional Fitting

Iterative Proportional Fitting (IPF) is a procedure for generating a maximum-entropy distribution under linear constraints. The procedure generates a sequence of distributions which, if the linear restrictions are consistent, converges to the desired limit distribution. This article describes the application of IPF to a special class of constraints, namely those resulting from the specification of conditional probabilities of a high-dimensional distribution. Two essential problems arise here. First, it is not clear in advance whether the restrictions may be contradictory. Second, carrying out the iteration is technically impossible for very high-dimensional target distributions, since the number of parameters to be determined grows exponentially with the dimension. To solve these problems, the decomposition of a joint distribution into a system of marginal distributions is described, on which the iteration can be carried out separately and in parallel.
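
As a minimal runnable illustration (the textbook two-dimensional marginal case only; the paper's subject is the harder setting of conditional-probability constraints and a parallel decomposition into marginals):

import numpy as np

def ipf(table, row_targets, col_targets, iters=100):
    """Iterative Proportional Fitting on a 2-D table: alternately rescale
    rows and columns to match the prescribed marginals. For consistent
    restrictions this converges to the maximum-entropy table with the
    given marginals."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]  # match row sums
        t *= col_targets / t.sum(axis=0)             # match column sums
    return t

fitted = ipf(np.ones((2, 2)), np.array([3.0, 1.0]), np.array([2.0, 2.0]))
print(fitted)  # row sums (3, 1), column sums (2, 2)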

Carl-Heinz Meyer, Holger Knublauch
Error Bounds and Sensitivity Analysis of Semi—Markov Processes

Perturbation theory for finite discrete-time Markov chains was systematically studied by several authors. In particular, Schweitzer [8] recognized the importance of the fundamental matrix for perturbation theory and obtained perturbation formulas for so-called regular perturbations of a discrete-time Markov chain. Using a different approach, Meyer [6] presented a bounding formula for the norms of the difference of steady state distributions of an ergodic discrete-time Markov chain. Recently, special perturbation results were reported for perturbations under infinitesimal changes (cf. [1], [2] and [3]).

Karel Sladký
Partition of the Coefficient of Determination in Multiple Regression

Regression analysis is a standard method for examining data sets in business and management economics. The R² coefficient of determination indicates what percentage of the variation of a given variable can be described jointly by the other variables. A neodescriptive approach is necessary to split this percentage in such a way that the relative importance of each variable can be rated.

Norman Fickel
Minimax Estimation Under Random Sample Size

Let Y be a random vector taking its values in a measurable space (Y, B) and having an unknown distribution P, which is assumed to be an element of the set.

Maciej Wilczyński
Using the Sieve Bootstrap Method in Time Series Analysis

We consider using the bootstrap method for stationary time series problems concerned with prediction intervals for future observations and confidence intervals for the spectral density. Our approach relies on the sieve bootstrap procedure introduced by Bühlmann (1996, 1997) for stationary AR(∞) processes. We extend the method of obtaining prediction intervals proposed by Stine (1987) for autoregressive time series of known order and compare it with the more traditional Gaussian strategy. The introduced sieve-bootstrap approach for constructing confidence intervals for the spectrum is also compared with the χ² approximation method and another bootstrap procedure proposed by Franke and Härdle (1992). The accuracy of the presented methods is verified via a numerical comparison including both Gaussian and non-Gaussian data.

Adam Zagdański
Support and Implementation of the Kalai-Smorodinsky Bargaining Solution

The idea behind the Nash program is to achieve a cooperative outcome via strategic interaction. The task is to design a non-cooperative game such that the (cooperative) payoff vector appears as an equilibrium-payoff of the game. So the cooperative solution can be supported by an equilibrium of an appropriate non-cooperative game. One motivation why it might be useful to have such support results for cooperative solutions was given by Nash (1953): “… two approaches to the problem, via the negotiation or via the axioms [which] are complementary; each helps to clarify and justify the other.”

Claus-Jochen Haake
When Does Learning Lead to Nash Equilibrium?

This paper surveys those theoretical results on learning in games which address the question when learning leads to Nash equilibrium. The main conclusion is that the known convergence results refer to relatively small classes of games. Moreover, these results either rely on restrictive assumptions about players’ prior knowledge, or they use implicitly the assumption that players’ willingness to experiment is very large.

Tilman Börgers
Cooperative and Noncooperative R&D Activities in an Oligopolistic Market

This paper deals with an oligopolistic industry characterised by the joint presence of firms that adopt cooperative and noncooperative conducts in R&D activities before competing in outputs. The effects of different levels of information sharing of the R&D outcomes among cooperative firms on firms' profits and social welfare are analysed. In this framework, a low spillover degree ensures that the private incentive to cooperate in R&D activities coincides with the social incentive to maximise economic welfare.

Domenico Campisi, Paolo Mancuso
The Timing of Informational Activities in Decision and Control Problems

We consider the information management activities of a principal who is faced with problems of double moral hazard and whose available information systems differ only in the moment at which information is released. Pre-decision information helps principal and agent to realize a given probability of success with smaller amounts of effort, but it can make it costlier for the principal to commit to a pre-specified effort level. In a one-period setting, the principal prefers the earliest available information if the agent's marginal productivity increases with the principal's effort. This does not hold in a two-period setting. On the contrary, it is quite possible that the principal prefers no information to early pre-decision information for both decision makers.

Anne Chwolka
On Properties of Optimal Solutions of Allocation Problems with Polymatroid Feasible Region

We consider the polymatroid allocation problem P. It can be formulated as the problem of maximizing a separable concave nondecreasing function over the integer points of a polymatroid. It is well known that problem P can be solved by the Greedy Algorithm (GA). But the complexity of GA is not polynomially bounded in general, because of the possibly large number of iterations. Recently, several polynomial algorithms for the polymatroid allocation problem have been proposed. The best of them are the Scaling Algorithm, the Decomposition Algorithm, and the Dichotomic Algorithm. Among the algorithms constructed for special cases of problem P are the Conquer Tree Algorithm and the Bottom Up Algorithm. The optimality proofs of all the mentioned algorithms used complicated constructions and were very long. We present several new fundamental properties of the solutions generated by GA. These properties lead to new optimality proofs for all the mentioned algorithms that are significantly shorter (about 2-3 times) and simpler. On this basis it becomes clear that all the mentioned algorithms for the polymatroid allocation problem are different ways of implementing GA efficiently.
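
A minimal Python sketch of GA (hypothetical oracle names; the point stressed above is precisely that the number of such unit steps need not be polynomially bounded, which the scaling, decomposition, and dichotomic variants avoid):

def greedy_allocation(n, marginal_gain, feasible, max_steps):
    """Greedy Algorithm (GA) sketch for maximizing a separable concave
    nondecreasing function sum_i f_i(x_i) over the integer points of a
    polymatroid. `feasible(x)` is an assumed membership oracle for the
    polymatroid, and `marginal_gain(i, k)` returns f_i(k + 1) - f_i(k),
    which is nonincreasing in k by concavity."""
    x = [0] * n
    for _ in range(max_steps):
        best_i, best_gain = None, 0.0
        for i in range(n):
            x[i] += 1                      # tentatively raise coordinate i
            if feasible(x) and marginal_gain(i, x[i] - 1) > best_gain:
                best_i, best_gain = i, marginal_gain(i, x[i] - 1)
            x[i] -= 1                      # undo the tentative step
        if best_i is None:                 # no feasible improving increment
            break
        x[best_i] += 1
    return x

# Toy instance: diminishing gains, polymatroid x_1 + x_2 <= 4, x_i <= 3.
print(greedy_allocation(
    2,
    lambda i, k: [5.0, 3.0][i] / (k + 1),
    lambda x: sum(x) <= 4 and max(x) <= 3,
    max_steps=10))                         # -> [3, 1]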

E. Girlich, M. Kovalev, A. Zaporozhets
Cooperative Procedures for Multiple-Issue Negotiations

In this paper we demonstrate the design and use of cooperative negotiation procedures by linking cooperative bargaining theory to models of fair division. The main feature of a fair-division approach is that the outcome is determined by a procedure rather than a formula. We generalize two fair-division procedures in order to apply them to multiple-issue, multiple-option negotiations. The procedures consist of two simple and intuitive steps: The first step ensures an efficient outcome, and the second step establishes “fairness” through redistribution.

Matthias G. Raith
Game-Playing Experiments

Our survey of experimental studies concentrates on situations related to familiar topics in operations research and game theory, such as:
• intertemporal allocation when the future is uncertain,
• repeated games with incomplete information involving reputation formation,
• (asymmetric) private-value auctions,
• fair division games where, unlike in auctions, the price is not collected by the seller but shared equally among the bidders.
Although behavior often reacts qualitatively as predicted by optimality, people clearly do not optimize. How to model boundedly rational decision making, guided by stylized experimental results, seems to be the major challenge.

Werner Güth
Signaling and Identifying the Willingness to Cooperate: Some Experimental Results

In recent years, experimental economists have discovered that people show different patterns of cooperation behavior. This article presents a face-to-face experiment which analyzes whether the individual willingness to cooperate can truthfully be signaled and identified. The results show that both capabilities depend on individuals’ self-disposition to cooperate.

Jeannette Brosig
Data Mining mit Neuro-Fuzzy-Systemen

Data mining is the application of modeling and discovery techniques to extract new, unexpected, valid, comprehensible, and usable information from databases. Besides methods from the database field (data warehousing) and from statistics (exploratory data analysis), non-standard approaches from a wide variety of areas are employed, such as neural networks, machine learning (e.g., decision trees and inductive logic programming), and fuzzy systems, the latter being particularly popular with users because of their simplicity. In this paper we concentrate on methods from the field of neuro-fuzzy systems, which have been developed and successfully deployed in cooperation with various industrial companies.

Rudolf Kruse, Christian Borgelt, Detlef Nauck
Behavior of the Ant Colony Algorithm for the Set Covering Problem

We develop an Ant Colony approach for the classical set covering problem. As artificial ants we use three randomized greedy heuristics: a GRASP heuristic and two asymptotically exact heuristics, Random Neighborhood and Random Covering Set. We study the influence of the main control parameters and present new ideas for accumulating and exploiting statistical information. Computational results for difficult benchmarks are discussed.
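
A one-pass GRASP-style randomized greedy for set covering might look as follows in Python (an assumed sketch of the kind of heuristic used as an "artificial ant"; the parameter alpha and all names are illustrative, not the authors'):

import random

def grasp_cover(universe, subsets, alpha=0.3, seed=0):
    """Repeatedly pick a random subset from the restricted candidate list
    of near-best choices (by number of newly covered elements) until the
    whole universe is covered."""
    rng = random.Random(seed)
    uncovered, chosen = set(universe), []
    while uncovered:
        scores = {j: len(s & uncovered) for j, s in subsets.items() if s & uncovered}
        best = max(scores.values())
        rcl = [j for j, v in scores.items() if v >= (1 - alpha) * best]
        pick = rng.choice(rcl)
        chosen.append(pick)
        uncovered -= subsets[pick]
    return chosen

subsets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5}, 3: {1, 5}}
print(grasp_cover({1, 2, 3, 4, 5}, subsets))  # e.g. [0, 2]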

Dmitri Alexandrov, Yuri Kochetov
Comparison of Exchange Rate Forecasts by Neural Networks and by Econometric Methods

Neural networks are increasingly used as an empirical method in economics. Since they compete with econometrics, the question of a comparison of the two procedures arises naturally. Whereas neural networks often make very successful predictions without using any economic theory, econometric forecasts rely more on a sound theoretical basis (leaving aside univariate time series analysis). In our first example we present a neural network forecast of the former type and recalculate it econometrically, using the same variables, data set, and forecast period. In the second example we show an econometric standard approach and recalculate it by a neural network technique. In both cases the forecast results of the two methods show small differences, whose significance still has to be investigated.

Lutz Beinsen, Martin Kührer
Bewertung von partiellen Wahrscheinlichkeitsinformationen bei entropieoptimaler Wissensverarbeitung

The central topic of this paper is the acquisition of information to reduce uncertainty in a knowledge domain. To this end, alternative information packages are evaluated with respect to their content for the recipient. The evaluation depends on the recipient's prior knowledge and on his objective. The objective may be the condensation of information on the entire knowledge domain, on parts of it, or with respect to a real-valued utility. All evaluation steps take place in a model that, through the use of the entropy concept, rests on Shannon's information theory. Once the optimal information package has been acquired, decisions can also be made from the point of view of utility information.

W. Rödder, E. Reucher
Reasoning with Probabilities and Maximum Entropy: The System PIT and its Application in LEXMED

We present a theory, a system, and an application for common sense reasoning based on propositional logic, the probability calculus, and the concept of maximum entropy. The task of the system PIT (Probability Induction Tool) is to provide decisions under incomplete knowledge, while keeping the necessary additional assumptions as minimal and clear as possible. We therefore enrich the probability calculus by two principles which have their common source in the concept of model-quantification ([8, 17]) and find their dense representation in the well-known principle of Maximum Entropy (MAXENT [6]). As model-quantification delivers a precise semantics for MAXENT, the corresponding decisions make sense beyond our current project of medical diagnosis in LEXMED.

Manfred Schramm, Wolfgang Ertel
AGA-Konzept und AGAPE-Toolbox und deren Verwendung im Rahmen der prototypischen Entwicklung der DSS-Komponente eines Leitstandes zur Feinsteuerung

The parameterization problem is critical for the integration of genetic algorithms (GA) into standard software. In this paper we briefly present the concept of Adaptive Genetic Algorithms (AGA) developed at WINFORS, as well as the development environment AGAPE based on it. We then report on the application of AGA and AGAPE in a project for integrating GA into a control station for detailed production scheduling.

U. Derigs, M. Heckmann, R. Ploch, O. Ziemek, M. Zils
Simulation im Konfliktmanagement in großen Transportnetzwerken

Growing competitive pressure from the alternatives of road and air transport forces railway companies to orient themselves ever more closely to the needs of their customers. What is demanded is a broad, inexpensive service reaching even small regions, fast connections between conurbations, and links between local and long-distance transport that are as good as possible. The large service offering permits a multitude of individual routes through the network and severely restricts the temporal and spatial leeway for carrying out each individual trip. A mature timetable and its precise execution are all the more important, since even small disturbances can sometimes have grave consequences at places far away in the network. Experienced dispatchers try to ensure operational efficiency and customer satisfaction in this complex system. Their thorough training is therefore of elementary importance for successful adherence to the timetable. Up to now, training was possible only on the real system by "learning by doing" or by watching experienced colleagues. Both have several decisive weaknesses. First, real operations run under high time pressure: precisely when decisions would most need explaining ("rush hour"), no discussion is possible, because the next problems are already at hand. In addition, decisions very often rest on the extensive implicit knowledge of qualified dispatchers, i.e., it is difficult to name all the factors influencing a decision even if enough time were available. Finally, wrong decisions by inexperienced dispatchers in real operations cause high costs. There is therefore an urgent need for a simulation system with which training can take place in a realistic environment. Dispatchers in training can thus gather experience and, if need be, also make wrong decisions, without any intervention in ongoing operations and without impairing the real system.

J. Goecke, C. Biederbick
Computerunterstützte Stundenplanerstellung für die Offizierschule des Heeres

For 40 years it has been investigated whether and how timetable construction can be accomplished by computer. Various approaches and algorithms have been developed for this purpose. The objects of study have frequently been timetables or course schedules at schools or universities, as well as examination schedules [Ju82]. The present paper addresses a new problem: timetable design at the Offizierschule des Heeres (Officer School of the German Army). This timetabling problem is examined and analyzed in detail. So far it is still solved manually, but there are already approaches to a computer-supported solution. To this end, suitable software has been developed, which is also presented.

W. Junginger
Computergestützte Anfragebewertung im industriellen Anlagengeschäft

Months, if not years, pass before a contract for the purchase of an industrial plant is concluded, during which the potential buyer obtains offers from various suppliers. For the bidding company, preparing a corresponding offer involves considerable costs and is fraught with uncertainty about winning the contract. Important and at the same time difficult is the early recognition of possible risks and thus the correct assessment of an offer's prospects of success. Practitioners as well as researchers have recognized the need for an objective inquiry evaluation and have developed numerous methods and systems intended to provide decision support in this matter. This contribution discusses the use of probabilistic knowledge processing for inquiry evaluation. In cooperation with a large German plant manufacturer, an extensive knowledge base was built for the system SPIRIT, and its predictive quality with respect to the target variable "offer success" was compared with conventional methods on the basis of ongoing projects.

Friedhelm Kulmann, Wilhelm Rödder
Datenqualitätsanalyse mit Methoden des Data Mining

Information is increasingly not only proclaimed as a resource but also deployed as one. The basis for the management support systems required for this is formed by ever-growing corporate databases. These are captured by data warehouse projects and transformed to meet the analytical needs of, for example, controlling or marketing departments. An essential part of this work is the merging of data from the most diverse databases, which is burdened with a multitude of problems, since up to that point the databases were developed independently of one another. Particular attention must be paid here to data quality. The effects of poor quality become especially noticeable in this context. After the many technical hurdles of data warehouse projects have been overcome, it often turns out that the desired information cannot be read from the data, because the underlying data quality does not permit its reliable derivation. A data quality analysis as an integral part of data warehouse projects is therefore strongly recommended. At Westdeutsche Landesbank, promising approaches to data quality improvement were delivered, in particular through the use of data mining methods.

U. Windheuser
Aggregation von drei Dimensionen des Downside-Risikos durch einen modifizierten Sen-Index

The probability of falling short of a prescribed minimum return, the average extent of the shortfall, and the inequality of the return distribution in the downside region characterize three dimensions of downside risk. By reformulating the Sen index known from poverty measurement, these three dimensions are captured simultaneously in a single risk measure.
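
For orientation, the classical Sen index from poverty measurement, which the paper reformulates (the modified version itself is not reproduced here), aggregates the headcount ratio H, the average normalized shortfall I, and the Gini coefficient G of the distribution among the poor as

\[
  S \;=\; H \,\bigl[\, I + (1 - I)\, G \,\bigr],
\]

which maps one-to-one onto the three downside-risk dimensions named above: shortfall probability, mean shortfall, and downside inequality.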

Frank Eggers, Andreas Pfingsten, Sven Rieso
Die Ermittlung von Value-at-Risk-Handelslimiten zur Kontrolle und Steuerung von Marktrisiken bei kontinuierlicher Überprüfung

To control and manage market risks in the trading divisions of credit institutions, limit systems based on value-at-risk are commonly employed. Limits constitute bounds on the risk that may be taken by individual traders, trading teams, and entire trading divisions. The use of limit systems with a permanent link to the bank's internal risk measurement procedure is explicitly demanded by banking supervisors for risk management in trading. After an introductory definition of value-at-risk, which is fundamental to the subsequent argument, this contribution discusses known designs of day-based trading limit systems that build on converting annual into daily limits by means of the "square-root-of-T formula" and that alternately credit realized gains and losses against the limits. Subsequently, an approach to limit management derived from portfolio insurance considerations is presented, in which the trading limits are managed continuously by means of the deltas of put options.
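
In its simplest form (assuming i.i.d. daily returns; the refinements are exactly what the contribution discusses), the square-root-of-T formula converts an annual limit into a daily one via

\[
  \mathrm{VaR}_{1\,\mathrm{day}} \;=\; \frac{\mathrm{VaR}_{T\,\mathrm{days}}}{\sqrt{T}},
\]

with T the number of trading days per year.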

Hermann Locarek-Junge, Mario Straßberger, Henning Vollbehr
CAPM und systematisches bzw. unsystematisches Risiko unter Kurssprüngen

Stocks’ risk premiums under combined jump and diffusion risk are a linear combination of the risk premium of the "classical" market portfolio and the premium of a jump-induced correction fund. Moreover, firm-specific jump risk is completely systematic. The unsystematic part of stocks’ return variances stems solely from normal price vibrations.

Bernhard Nietert
Approximation der risikoneutralen Verteilung — Methodenvergleich und Implementierung

In their seminal paper, Black and Scholes (cf. [2]) already made clear that the determination of an option price is independent of the investor's attitude toward risk. Cox and Ross (cf. [5]) exploited this property to develop the risk-neutral valuation method. If the risk preference is irrelevant, then a result obtained by using one particular preference is nevertheless generally valid. Choosing risk neutrality now simplifies, or indeed makes possible in the first place, the determination of fair option prices. According to this method, the value of a path-independent option equals the expected value of its payoffs with respect to the risk-neutral distribution F(S_T) of the stock price S_T at the option's maturity T, discounted at the riskless interest rate r. The value at time t of a European call with strike price K is then given accordingly (cf. [5], p. 154).
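
In standard notation, the discounted expectation described in words above reads (a reconstruction from the surrounding definitions, not the paper's own display):

\[
  C_t \;=\; e^{-r (T - t)} \int_0^{\infty} \max(S_T - K,\, 0) \; dF(S_T).
\]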

Lars Schiefner
Finite Element Modeling of Exotic Options

The Finite Element Method is a well-studied and well-understood method of solving partial differential equations. Its applicability to financial models formulated as PDEs is demonstrated. Its advantage concerning the computation of accurate “Greeks” is delineated. This is demonstrated with some exotic options.

Jürgen Topper
Scheduling Deteriorating Jobs with Ready Times

The paper deals with a single machine scheduling problem where the job processing time depends on the start time and the ready time. The problem is to find a schedule (a job processing order) that minimizes the maximum schedule length (makespan). We show that the problem is NP-complete and present a heuristic algorithm based on the proven problem properties. A computational analysis of the proposed algorithm is also given.

Aleksander Bachman, Adam Janiak
Scheduling Tasks and Vehicles in FMSs — Network Flow Model

A problem of simultaneous task scheduling and vehicle routing in a flexible flow shop is considered. The problem of finding a feasible vehicle schedule for a given machine schedule is reduced to that of finding a maximum flow in a tripartite network. On the other hand, if a minimum number of vehicles is sought, then the problem must be reduced to that of finding a maximum flow of minimum cost in a network with an irregular cost function.

Jacek Błażewicz, Grzegorz Pawlak, Marie-Laure Espinouse, Gerd Finke
Total Late Work Criteria for Shop Scheduling Problems

Scheduling theory is concerned with problems of allocating resources to perform a set of activities in order to achieve a certain goal. The purpose of the scheduling process can be attained by finding a feasible solution of the considered problem, as well as by determining the best solution with reference to a given optimality criterion. A few classical performance measures have been studied broadly, such as schedule length, mean or mean weighted flow time, maximum lateness, etc. However, to cover more realistic problems, new parameters and criteria must be considered. The performance measure based on the amount of late work in the system is an example of such a non-classical criterion involving due dates. Classical criteria such as maximum lateness or mean tardiness calculate the penalty for late tasks with respect to the time of their completion, whereas in some applications the penalty should be determined with reference to the amount of late work, independently of the time of its completion. Criteria based on late work were first proposed in the context of parallel processors and then applied to a one-machine scheduling problem. In the paper, new results are presented concerning the general complexity of scheduling problems with late work criteria and some special cases in the shop environment.
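
For concreteness, the late work of a job j with processing time p_j, completion time C_j, and due date d_j is commonly defined as

\[
  Y_j \;=\; \min\bigl\{\, \max\{0,\; C_j - d_j\},\; p_j \,\bigr\},
\]

i.e., the part of the processing of j performed after its due date, and the criterion is the total late work \(\sum_j Y_j\) (the standard definition from the literature; the paper's notation may differ).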

Jacek Błażewicz, Erwin Pesch, Malgorzata Sterna, Frank Werner
A Parallel B & B Algorithm to Optimize a Combination of Job Tardiness and Machine Utilization in Flexible Flow Lines

The term flexible flow line (FFL) denotes a generalization of the standard flow shop where at least one stage comprises more than one machine. We present a scheduling algorithm for the FFL which minimizes a combination of job tardiness and machine utilization. To our knowledge, this is the first exact method that minimizes a tardiness based objective function for the given production facility. The algorithm makes use of a simple enumeration scheme and applies priority rule based simulation methods in order to compute lower bound values for the considered criteria in an efficient way. The exploration strategy of our algorithm is a combination of best first search and depth first search and connects the advantages of both methods in an adaptive dynamic manner. Based upon an efficient parallelization of the algorithm, it is now possible to solve problem instances of moderate scale in a reasonable amount of time.

Klaus Brockmann, Wilhelm Dangelmaier
Lower and Upper Bounds for the Resource Investment Problem

The resource investment problem deals with the issue of providing resources to a project such that a given deadline can be met. The objective is to make the resources available in the cheapest possible way. For each resource, expenses depend on the maximum amount required during the course of the project. We develop two lower bounds for this NP-hard problem using Lagrangean relaxation and column generation techniques, respectively. Both procedures are capable of yielding feasible solutions as well. Hence, we also have two optimization guided heuristics. Details of what is presented here can be found in [3].

Andreas Drexl, Alf Kimms
Scheduling in Robotic Cells

We consider a robotic cell consisting of a flow-shop in which the machines are served by a single central robot. We concentrate on the case where only one part type is produced and analyze the validity of the so-called One-Cycle Conjecture of Sethi, Sriskandarajah, Sorger, Blazewicz and Kubiak. This conjecture claims that the repetition of the best one-unit production cycle yields the maximum throughput rate in the set of all possible cyclic robot moves. We summarize the current state of the art of the conjecture for general cells and some specialized configurations. We also discuss performance factors for the one-cycles within the set of all k-cycles whenever the one-cycle conjecture is not valid.

Gerd Finke, Nadia Brauner
Scheduling with Deadlines and Nested Processing Intervals for a Single Machine

In this paper, a single machine preemptive scheduling problem to minimize the weighted number of late jobs is considered under the assumption that the release and due date intervals are nested and certain specified jobs have to be completed on time, that is, some of the due dates are in fact deadlines. Sidney [8] was the first to consider the problem of minimizing the number of late jobs when certain specified jobs have to be on time. Unlike [8], we consider this situation in the case of given release dates and weights of jobs, subject to nested release and due date intervals.

V. S. Gordon, F. Werner, O. A. Yanushkevich
On the Solution of 2-Machine Flow Shop Problems With a Common Due Date

In this paper, we study two machine flow shop problems with earliness and tardiness penalties and a common due date. For a job-independent penalty function with some reasonable monotonicity features, we present an enumerative algorithm with branch and bound components. The algorithm can solve problems with up to 20 jobs. Additionally, a beam search algorithm derived from the exact algorithm and two local search heuristics are discussed.

Jatinder N. D. Gupta, Volker Lauff, Frank Werner
Scheduling Jobs with Start Time and Resource Dependent Processing Times

This paper deals with single machine scheduling problems where the job processing times are given as linear functions of the processing start time and the amount of resources allocated to the jobs. We consider two problems: the first deals with makespan minimization when the total amount of resources is limited; in the second, the value of the makespan is limited and the criterion considered is the minimization of total resource consumption. We present optimal algorithms for some special cases of the first problem. For the second problem we present a solution method which can be applied if the symmetrical version of the problem is solvable in polynomial time and some additional conditions are satisfied.
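
A common form of such a linear model (an assumed illustration; the paper's exact coefficients and bounds may differ) gives job j, started at time s_j with resource amount u_j, the processing time

\[
  p_j(s_j, u_j) \;=\; a_j + b_j \, s_j - c_j \, u_j,
  \qquad 0 \le u_j \le \bar{u}_j,
\]

with \(a_j, b_j, c_j \ge 0\), so that later starts lengthen a job while allocated resources compress it.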

Daniel Iwanowski, Adam Janiak, Adrian Rogala
Fully Polynomial Approximation Schemes for Decomposable Partition Problems

A partition type extremum problem (problem PText) is formulated. It is assumed that the objective function of this problem can be represented as a composition of auxiliary functions which can be computed recursively. Properties of the problem PText are established which are sufficient for developing a fully polynomial approximation scheme (FPAS) for this problem. Such a scheme is described. Several known discrete optimization and scheduling problems can be formulated in terms of the problem PText. The suggested approach provides original FPASes for some problems, generalizes several existing FPASes, and gives a clear exposition of their main ideas.

Mikhail Y. Kovalyov, Wieslaw Kubiak
Minimizing Earliness-Tardiness Costs of Resource-Constrained Projects

We consider the scheduling of interdependent subprojects incurring costs for early or tardy completion w.r.t. given milestones. Each subproject consists of several activities between which minimum and maximum start-to-start time lags have to be observed. In addition, the processing of activities takes up scarce shared resources. The problem is to determine an activity schedule complying with the temporal constraints such that the resource requirements can be matched by the capacities at any point in time and the earliness-tardiness costs of the project are minimized. For solving the resource-unconstrained version of this problem, we propose a primal and a dual algorithm which are based on the iterative calculation of locally optimal descent and ascent directions, respectively. An initial (generally resource-infeasible) schedule for the problem with resource constraints is determined by applying the primal method to the resource relaxation. Within a branch-and-bound algorithm, resource conflicts are resolved by enumerating sets of precedence constraints between activities which are executed simultaneously and whose requirements exceed the capacity of at least one resource. At each enumeration node, the corresponding relaxation is solved by the dual algorithm starting with the schedule of the father node.

Christoph Schwindt
Sequencing under Precedence Constraints: Construction of All Optimal Permutations

We propose a technique for constructing all optimal permutations for the problem of minimizing so-called strong priority-generating functions under precedence constraints. A similar problem was considered in [3], where a decomposition scheme for finding all optimal solutions was developed. Nevertheless, it is not clear how to apply this scheme efficiently to concrete problems. Our technique allows us to develop efficient algorithms. The time complexity of the corresponding algorithm for series-parallel precedence constraints, for example, is O(n²).

Yakov M. Shafransky, Alexander V. Tuzikov
Scheduling via Mixed Graph Coloring

Let G = (V, A, E) denote a finite mixed graph with vertex set V = {v_1, v_2, …, v_n}, arc set A, and edge set E. An arc (v_i, v_j) ∈ A is an ordered pair of vertices, and an edge [v_p, v_q] ∈ E is an unordered pair of vertices. A mixed graph coloring ψ may be defined as follows (see [3]).
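
The standard definition the abstract points to (well established in the mixed graph coloring literature; some authors use ≤ instead of < for arcs) is: a coloring c : V → {1, …, t} is feasible if c(v_i) < c(v_j) for every arc (v_i, v_j) ∈ A and c(v_p) ≠ c(v_q) for every edge [v_p, v_q] ∈ E, the aim being to minimize the number of colors t. Arcs then model precedences between unit-time operations and edges model resource conflicts, which is what ties the coloring problem to scheduling.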

Yuri N. Sotskov
Project Scheduling with Discounted Cash Flows and General Temporal Constraints

The net present value is widely regarded as the main financial criterion for evaluating the return on assets and the profitability of large-scale projects. Considering activity-on-arc networks, the positive and negative cash flows that occur throughout the project lifetime are associated with the completion of activities. Emphasizing the financial aspects of project management, the activities of the project have to be scheduled such that the net present value of the project is maximized and all temporal and resource constraints are met. For the resource-unconstrained case, we consider an exact solution procedure based on a first-order steepest ascent approach where the ascent direction is normalized by the supremum norm. Using the concept of minimal delaying modes, we devise a branch-and-bound procedure for solving the resource-constrained project scheduling problem with discounted cash flows, in which a dual method solves the resource relaxation in each node of the enumeration tree. To speed up the procedure, preprocessing, immediate selection, and a generalized subset dominance rule are used. An experimental performance analysis shows that the proposed algorithms clearly outperform the procedures proposed in the open literature.

Jürgen Zimmermann
Decentralized Rail Capacity Allocation: a Model and Applications

The application of Directive 91/440 in European countries implies opening access to railroad infrastructure to competing transport operators (TOs), while the network is to be managed by an infrastructure company (IC). An adequate network access mechanism will have to be devised in order to allocate available capacity and determine access charges. We propose a game-theoretic model where the TOs request tracks from the IC and set service prices with the aim of maximizing their profit, on the basis of demand data. The IC allocates track capacity among the TOs and determines access tariffs through a market-based allocation and pricing mechanism. The presented numerical simulations analyze the effects of increased congestion on a line segment with respect to effective schedules, access tariffs, service prices, and profits.

Anna Bassanini, Alberto Nastasi
Planung von Flugzeugumläufen unter Nebenbedingungen

With rising competitive pressure and increasing globalization, a more effective production planning and control process is gaining more and more importance in the airline industry as well. Because of its high complexity, this process is usually divided into several phases, which are comparable in principle at all airlines (see Fig. 1). In practice, however, the process takes different forms from airline to airline (cf. [GraClaMa98] or [Suhl95]).

Claus Biederbick
Optimal Routing of Traffic Flows with Length Restrictions in Networks with Congestion

When traffic flows are routed through a road network it is desirable to minimize the total road usage. Since a route guidance system can only recommend paths to the drivers, special care has to be taken not to route them over paths they perceive as too long. This leads in a simplified model to a nonlinear multicommodity flow problem with constraints on the available paths. In this article an algorithm for this problem is given, which combines the convex combinations algorithm by Frank and Wolfe with column generation and algorithms for the constrained shortest path problem. Computational results stemming from a cooperation with DaimlerChrysler are presented.
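
As background, one Frank-Wolfe (convex combinations) iteration for minimizing a convex congestion cost f over the feasible flow polytope \(\mathcal{P}\) solves a linearized subproblem and moves toward its solution:

\[
  y^k \in \operatorname*{arg\,min}_{y \in \mathcal{P}} \nabla f(x^k)^\top y,
  \qquad
  x^{k+1} = x^k + \gamma_k \,(y^k - x^k), \quad \gamma_k \in [0, 1].
\]

In the setting above, the subproblem decomposes by commodity into (length-)constrained shortest path problems, which is where the column generation enters; this is a generic sketch, not the authors' exact scheme.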

Olaf Jahn, Rolf H. Möhring, Andreas S. Schulz
Analyse kombinatorischer Auktionen für ein Multi-Agentensystem zur Lösung des Groupage-Problems kooperierender Speditionen

Rising traffic volumes and increasing competitive intensity in the transport industry force transport companies to increase the efficiency with which transport services are produced. Cooperations spanning several freight forwarders, in particular, promise a marked increase in the utilization of production factors in the transport sector. The task of establishing the necessary balance of cargo space and freight between cooperating forwarders, by matching available and required capacities across companies, has been described as the groupage problem [4]. Respecting important aspects of the participating forwarders' economic independence suggests choosing decentralized solution approaches that decompose the groupage problem into a local dispatching problem for each participating forwarder on the one hand and an inter-company coordination problem for order allocation on the other. It is natural to model the groupage cooperation scenario as a multi-agent system (cf. e.g. [13]) in which several forwarders interact as autonomous agents to solve the groupage problem.

Giselher Pankratz
Zur Modellierung und Simulation von Eisenbahnverkehr

This contribution presents a model that permits the simulation of railway traffic in large networks at a multiple of real time. The model is based on a cellular automaton that has already found manifold application in road traffic simulation. Particular weight is placed on the interaction of individual trains in the network. Questions for such a simulation lie in determining the decisive influencing variables and in developing a comprehensive phenomenological understanding. The simulation makes it possible to test new operating concepts and the use of telematics applications, and thus to reveal optimization potential.

André Stebens, Michael Schreckenberg
Ansätze zur Lösung von Stauraumproblemen

A two-stage solution approach is proposed for solving container loading problems. First, a general procedure for a greedy heuristic for their solution is presented. This greedy heuristic can be embedded in a decision tree method. The bounds developed in connection with container loading problems are presented.

Michael Eley
Kostenoptimierung von Kommissioniersystemen mit Hilfe Genetischer Algorithmen

Order picking is a core area of logistics and, according to VDI 3590, pursues "the goal of assembling subsets from a total set of goods (assortment) on the basis of requirements (orders)" [2]. Within a company's logistics chain, the operational function of order picking is as a rule required several times. Order picking processes occur in procurement logistics as well as in production logistics and distribution logistics.

Lothar Grünz, Klaus Heinz
Facility Location Decisions in Supply Chain Management

Facility location decisions play a crucial role in the logistics activities involved in supply chain management. In real-life settings, the optimization of location and allocation decisions is often preceded by an evaluation of the existing distribution network system. Such an evaluation starts by assessing the quality of the current locations of the service facilities and the allocation of customer demands to those facilities. We propose several quick geometric tools for obtaining an initial configuration of the distribution network. In addition, we present different algorithms for strategic decision-making with respect to the construction of additional facilities whose location is determined without altering the sites of the already existing facilities. The proposed algorithms are embedded into the APO business application for supply chain modeling developed by SAP AG, the world’s largest supplier of business application software.

Jörg Kalcsics, Teresa Melo, Stefan Nickel, Veronika Schmid-Lutz
A New Model for Planning Complex Assembly Lines in Support of Efficient Mass Customization

The orientation towards an efficient mass customization confronts most companies with two opposing requirements: While the applied mass production system should be able to produce a huge product variety, the cost must be kept low. Therefore assembly lines have to be planned in a much more flexible way. The inability of traditional models to support efficient mass customization is mainly due to the fact that they define every variant as an individual product. The resulting necessity to record information about all variants for every task leads to an unmanageable amount of data and unrealistic program execution times. In this context, a new problem description is developed which integrates specific complexity management approaches as well as instruments for detailed personnel and process planning. This model has been solved efficiently on a workstation network by using a new distributed Tabu search algorithm.

Stefan Bock
Comparison of Dispatching Rules for Reducing the Mean and Variability of Cycle Times in Semiconductor Manufacturing

Semiconductor wafer manufacturing is among the most complex manufacturing processes and is a capital-intensive endeavor. The most common non-cost-related performance measures for a semiconductor manufacturing facility are machine utilization, production yield, throughput, and, last but not least, cycle time. We define cycle time as the time a lot of wafers needs to travel through the semiconductor wafer manufacturing process. The overall cycle time of a lot is the sum of the total processing times of a wafer, the waiting times for all resources (such as operators, machines, tools, and the transportation system) needed for each processing step to become available, the time elapsed during measurement and quality inspections, and the time the lot spends in transportation.
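
Written as a formula, this decomposition of the overall cycle time CT of a lot over its K processing steps is

    CT = \sum_{k=1}^{K} \left( t_k^{proc} + t_k^{wait} + t_k^{insp} + t_k^{transp} \right),

with t_k^{proc}, t_k^{wait}, t_k^{insp} and t_k^{transp} denoting the processing, waiting, inspection and transport times of step k.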

Manfred Mittler, Alexander K. Schoemig
On Transfer Blocking and Minimal Blocking in Serial Manufacturing Systems and the Impact of Variability

In this paper we investigate the impact of buffer allocation in an unbalanced, asynchronous production system consisting of 12 stages arranged in series. Each job is released to the first stage of the system and undergoes processing at every stage. The performance of such a production system is mainly determined by the variations of the service times at each stage: the higher these variations, the lower the production rate, i.e. the number of products finished in a given time period.
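
A minimal simulation sketch of such a line under transfer blocking (blocking after service), with assumed rather than reported parameters, illustrates the effect of service time variability on the production rate:

    import random

    def production_rate(n_jobs, stages, buffers, scv, seed=1):
        """Tandem line with transfer blocking: a job stays on its machine
        after service until downstream buffer space becomes free.
        Service times are gamma with mean 1 and squared coeff. of variation scv."""
        rng = random.Random(seed)
        shape = 1.0 / scv
        d = [[0.0] * stages for _ in range(n_jobs + 1)]  # departure times
        for i in range(1, n_jobs + 1):
            for j in range(stages):
                arrive = d[i][j - 1] if j > 0 else 0.0
                start = max(arrive, d[i - 1][j])   # machine frees at departure
                done = start + rng.gammavariate(shape, 1.0 / shape)
                if j < stages - 1 and i - buffers[j] - 1 >= 1:
                    done = max(done, d[i - buffers[j] - 1][j + 1])  # blocked
                d[i][j] = done
        return n_jobs / d[n_jobs][stages - 1]

Comparing, e.g., production_rate(5000, 12, [2] * 11, scv=1.0) with the same call at scv=0.25 shows the higher rate of the low-variability line.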

Alexander K. Schoemig
Process Flow Scheduling als ressourcenbeschränktes Projektplanungsproblem

We discuss a project scheduling model for batch scheduling in process industries such as the chemicals or food industry. Various final products are produced from different raw materials by several successive chemical or physical transformation processes on different production stages, e.g. heating, filtration, or packaging. In batch mode production, a given quantity of a substance, called a batch, is processed at each production level using multi-purpose equipment. The complexity of the batch scheduling problem is increased by constraints such as finite intermediate storage, minimal and maximal time lags to be observed between different production steps, sequence-dependent cleaning operations, alternative modes for individual production steps, campaign scheduling, etc. The objective is to determine a start time and a mode for each production step such that all constraints of the problem are observed and a given goal function is minimized. After an analysis of the problem and of objective functions relevant in practice, we present a deterministic resource-constrained project scheduling model and the basic concepts of a solution algorithm.
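
In generic notation (a sketch, not necessarily the authors' exact formulation), such a multi-mode resource-constrained project scheduling model reads: choose a start time S_i and a mode m_i for every production step i so as to

    minimize f(S)  subject to
    \delta_{ij}^{min} \le S_j - S_i \le \delta_{ij}^{max}            (minimal/maximal time lags)
    \sum_{i : S_i \le t < S_i + p_{i,m_i}} r_{i,m_i,k} \le R_k       (resource capacities, all t, k)

where p_{i,m} and r_{i,m,k} are the duration and resource usage of step i in mode m, and R_k is the capacity of resource k.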

Norbert Trautmann
Die Auswirkungen des Neuheitsgrades auf die Qualitätsbeurteilung

Since Schumpeter, the generation and successful marketing of innovations has repeatedly been regarded as the key to lasting entrepreneurial activity. On the one hand, it is often assumed that the probability of success of an innovation is positively correlated with its degree of novelty [Kotzbauer 92, p. 18 ff.]. On the other hand, quality uncertainty constitutes an important information problem of the innovation situation, and for its extent a positive relationship with the degree of novelty can likewise be presumed [Wieandt 94, p. 22].

Gundolf Baier
Die optimale Gestaltung eines Händlernetzes im Automobilbereich

We consider the problem of designing a sales-maximizing dealership network for a car manufacturer, which involves the sales-maximizing solution of the dealer location problem. The problem is formulated as an uncapacitated facility location problem, where the expected sales figures of a region and its sub-regions enter the objective function as coefficients. In an application, we use a Tobit model as an appropriate econometric method for estimating expected sales. Given these estimates, the mathematical model can be solved to optimality within reasonable time using the standard software package CPLEX.
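
The underlying model has the standard structure of an uncapacitated facility location problem; with c_{ij} the expected sales of region i if served by a dealer at site j, f_j the fixed cost of site j, and binary variables y_j (open site j) and x_{ij} (assign region i to site j), a sketch (details such as the treatment of fixed costs may differ in the paper) is

    max  \sum_i \sum_j c_{ij} x_{ij} - \sum_j f_j y_j
    s.t. \sum_j x_{ij} \le 1     for all i,
         x_{ij} \le y_j          for all i, j,
         x_{ij}, y_j \in {0, 1}.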

Knut Haase, Kerstin Lange, Martin Missong
The Impact of Exchange Rate Changes on International Pricing

Different demand conditions in different markets can be a reason for price differentiation. If different currencies are used in such markets, international pricing can be influenced by exchange rates, in particular by exchange rate changes. Empirical studies have shown not only that exchange rate changes influence price decisions (Knetter, M. M., 1993 and 1995; Giovannini, A., 1988), but also that different suppliers use different policies to set their export prices (Irandoust, M., 1998). Knetter (1989, 1991), Ohno (1989), and Marston (1990) showed that, in many branches of industry, Japanese and German exporters hold their price in foreign currency constant if the exchange rate changes (local-currency price stability, LCPS). In addition, Mann (1986) and Knetter (1989) found that US exporters do not tend to use LCPS. The assumption that pricing behavior differs among exporting countries was confirmed by Knetter (1989, 1995), whereas Knetter (1993) identified different pricing behavior of exporters in different branches of industry. Theoretical studies have attempted to show that the differing empirical results on pricing-to-market under exchange rate changes can also be traced back to market concentration (Feinberg, R. M., 1986), imperfect competition and the reputations of the exporting companies (Krugman, P., 1987), as well as other market structure conditions (Dohner, R. S., 1984; Dornbusch, R., 1987; Kreinin, M., Martin, S., and Sheehey, E. J., 1987; Froot, K. A., and Klemperer, P. D., 1989). Sharp (Sharp, D. J., 1994) pointed out that international price management is frequently determined by habits and routines rather than by attempts at optimization. Simon (Simon, H., 1992) identified two alternative kinds of exporter behavior: (1) the price in the foreign currency is held constant, and (2) the revenue per unit in home currency is held constant.
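
The two polar policies can be stated compactly: with e the exchange rate (home currency per unit of foreign currency) and p_f the price in foreign currency, policy (1) fixes p_f = \bar{p}_f, so the home-currency revenue per unit e \cdot p_f moves one-for-one with e, whereas policy (2) fixes e \cdot p_f = \bar{p}_h, so p_f = \bar{p}_h / e falls when the home currency depreciates.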

Helge Löbler
Markenwertmessung auf Basis von Paneldaten

Measuring brand equity has been a major topic in marketing science since the end of the 1980s, and many different approaches to measuring brand equity have been published. After a short summary of theory and measurement purposes, this paper presents a simple method based on panel data. The advantages of this approach are the use of data free of subjective influence, the use of data that has already been collected, and the visual representation of the data produced by the method. An empirical application, benefits, and limitations are discussed.

Jürgen Maretzki
Industrielles Stoffstrommanagement
Modellierung von Stoff- und Energieströmen in Produktions- und Recyclingnetzwerken

In recent years, environmental protection measures have aimed at the isolated protection of the environmental media air, water and soil. In the future, environmental management will focus on integrated approaches taking all environmental media into account, and the implementation of end-of-pipe techniques will be replaced by product- and production-integrated emission reduction techniques. The paper focuses firstly on the development of production management towards an industrial substance flow management, dealing with substance- and energy-based planning and control of industrial production and recycling networks. Secondly, different approaches for the modelling of production and recycling processes, based on technical and scientific laws of material and energy transformation, are discussed. Koopmans' theory of activity analysis is well suited as a modelling framework, because it can be combined both with flowsheeting systems for the computation of stationary mass and energy balances and with Petri net models for the simulation of the dynamic behaviour.
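
In activity-analysis notation, such a production or recycling system is described by a technology matrix A whose columns represent the individual processes; operating them at levels z \ge 0 yields the net substance and energy flows x = A z, with inputs entering as negative and outputs as positive components of x.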

Thomas Spengler
Entwicklung und Anwendung eines optimierenden Energie- und Stoffflußmodells für die langfristige Kapazitätsausbauplanung bei Energieversorgungsunternehmen

The liberalization of the German electricity market, enacted by the German Bundestag through the law revising the energy industry act and transposing the European Union's internal market directive for electricity into national law, has far-reaching consequences for the German energy supply companies (EVU). The abolition of regional monopolies and the opening of market access to foreign suppliers result in markedly intensified competition. In this situation, decision support for strategic questions of capacity expansion and capacity deployment planning is gaining considerably in importance. Owing to the high capital intensity of investments in power plants and the long service lives of these facilities, decisions, once taken, affect the structure of the generation portfolio, and thus the electricity generation costs, in some cases over several decades.

Mathias Göbelt, Wolf Fichtner, Martin Wietschel, Otto Rentz
Entwicklung eines Planungsmodells zur Bestimmung umweltorientierter Investitionsprogramme

In this paper a model for environmentally oriented capital program budgeting is characterised. The model aims to allocate investments in production and emission reduction technologies in an economically and ecologically efficient manner. Based on an activity-analytical model of the investigated production system, the net present value of the payments related to the production system and the connected material and energy flows is maximised, subject to emission limit values and given production requirements, by the selection of optimal capital programs. The model can be applied to capital budgeting in the process industry.
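
In generic notation (a sketch rather than the authors' exact model), the structure is

    max  NPV = \sum_t (1 + i)^{-t} c_t^T y_t - a^T z
    s.t. H y_t \le \bar{E}                 (emission limit values)
         B y_t \ge d_t                     (production requirements)
         y_t \le U z,  z \in {0, 1}^n      (capacity only where invested)

where z selects the capital program, y_t are the activity levels in period t, and the matrices H, B, U follow from the activity-analytical model of the production system.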

Rainer Jochum, Frank Schultmann, Otto Rentz
Planning and Organisation in the Hospital

Two projects dealing with planning and organisation in the hospital are presented. First we describe two different approaches to the hospital layout problem, in which wards and treatment rooms such as operating theatres and laboratories have to be assigned to already existing buildings and rooms, while the given data is not unique. The second problem deals with the transport of patients on the hospital campus, which can be subsumed under the Vehicle Routing Problem. Its basic aspects can be formulated, for example, as a Dial-a-Ride Problem, albeit under hospital-specific constraints. An additional aspect that is addressed briefly is the online nature of the problem.

Stefan Nickel
Implementation of IctNeo: a Decision Support System for Jaundice Management

IctNeo is a complex decision support system (DSS) for the management of a common disease: neonatal jaundice. This article discusses the system implementation from a broad point of view, which has produced a general tool for the construction of DSSs with influence diagrams (IDs). Only the knowledge representation of the problem as an ID and the user interface are modules specific to IctNeo; the other components can be reused for further developments. A grammar is defined for the representation of IDs, separating the specification and the manipulation of this graphical model. The compiler of IDs (expressed in this grammar) produces a compiled version of the diagram, checked for correctness and prepared for evaluation. Finally, various mechanisms are required to reduce the computational complexity of evaluating such large IDs: a genetic algorithm to determine an efficient sequence of node reductions, the incorporation of evidence from patient data, and a recursive computation of the utility function. The evaluator gathers these tools and provides a set of decision tables with proposals for each patient.

S. Ríos-Insua, M. Gómez, C. Bielza, J. A. Fdez del Pozo
Modelling Sustainable Use of Natural Resources

Attempts to operationalize sustainable development in ecosystem models have largely focused on the conditions to maintain the stock of natural capital and to determine the maximum sustainable consumption that respects the inherent dynamics of natural resources. The analysis of interactions aiming at fulfilling human needs requires agent-based models. An integrated model approach combines the dynamics in both ecosphere and sociosphere to analyze conditions for their mutual adaptation and compatibility. The dynamic relation between sustainability strategies and their impact on the equilibrium between human needs and natural resources for various agents is discussed. The general analysis is exemplified by a simulation applicable to fishery and forestry.

Jürgen Scheffran
Backmatter
Title
Operations Research Proceedings 1999
Editors
Prof. Dr. Karl Inderfurth
Prof. Dr. Gerhard Schwödiauer
Prof. Dr. Wolfgang Domschke
Prof. Dr. Friedrich Juhnke
Prof. Dr. Peter Kleinschmidt
Prof. Dr. Gerhard Wäscher
Copyright Year
2000
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-58300-1
Print ISBN
978-3-540-67094-0
DOI
https://doi.org/10.1007/978-3-642-58300-1
