About this Book

This book is a synopsis of basic and applied research done at the various research institutions of the Softwarepark Hagenberg in Austria. Starting with 15 coworkers in my Research Institute for Symbolic Computation (RISC), I initiated the Softwarepark Hagenberg in 1987 on request of the Upper Austrian Government with the objective of creating a scientific, technological, and economic impulse for the region and the international community. In the meantime, in a joint effort, the Softwarepark Hagenberg has grown to the current (2009) size of over 1000 R&D employees and 1300 students in six research institutions, 40 companies and 20 academic study programs on the bachelor, master’s and PhD level. The goal of the Softwarepark Hagenberg is the innovation of the economy in one of the most important current technologies: software. It is the message of this book that this can only be achieved and guaranteed in the long term by “watering the root”, namely by an emphasis on research, both basic and applied. In this book, we summarize what has been achieved in terms of research in the various research institutions in the Softwarepark Hagenberg and what research vision we have for the imminent future. When I founded the Softwarepark Hagenberg, in addition to the “watering the root” principle, I had the vision that such a technology park can only prosper if we realize the “magic triangle”, i.e. the close interaction of research, academic education, and business applications at one site, see Figure 1.
Bruno Buchberger

Table of Contents

Frontmatter

Hagenberg Research: Introduction

Abstract
This book is a synopsis of basic and applied research done at the various research institutions of the Softwarepark Hagenberg in Austria. Starting with 15 coworkers in my Research Institute for Symbolic Computation (RISC), I initiated the Softwarepark Hagenberg in 1987 on request of the Upper Austrian Government with the objective of creating a scientific, technological, and economic impulse for the region and the international community. In the meantime, in a joint effort, the Softwarepark Hagenberg has grown to the current (2009) size of over 1000 R&D employees and 1300 students in six research institutions, 40 companies and 20 academic study programs on the bachelor, master’s and PhD level.
Bruno Buchberger

I. Algorithms in Symbolic Computation

Abstract
The development of computer technology has brought forth a renaissance of algorithmic mathematics, which gave rise to the creation of new disciplines like Computational Mathematics. Symbolic Computation, which constitutes one of its major branches, is the main research focus of the Research Institute for Symbolic Computation (RISC). Section 1, by P. Paule, gives an introduction to the theme, together with comments on its history and on the use of the computer for mathematical discovery and proving. The remaining sections of the chapter present more detailed descriptions of hot research topics currently pursued at RISC. In Section 2 the inventor of Gröbner bases, B. Buchberger, describes basic notions and results, and underlines the fundamental relevance of Gröbner bases by pointing to surprising recent applications. Section 3, by F. Winkler, gives an introduction to algebraic curves and summarizes results in theory and applications (e.g., computer-aided design). Section 4, by M. Kauers, reports on computer-generated progress in the theory of lattice paths, with applications in combinatorics and physics. Section 5, by C. Schneider, describes an interdisciplinary research project with DESY (Deutsches Elektronen-Synchrotron, Berlin/Zeuthen). Section 6, by E. Kartashova, describes the development of Nonlinear Resonance Analysis, a new branch of mathematical physics.
Peter Paule, Bruno Buchberger, Lena Kartashova, Manuel Kauers, Carsten Schneider, Franz Winkler
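
To give a flavor of the Gröbner basis computations mentioned above, here is a minimal sketch (not from the book) using the open-source sympy library, assumed to be installed. A lexicographic Gröbner basis “triangularizes” a polynomial system much as Gaussian elimination triangularizes a linear one:

```python
# Minimal Gröbner basis example with sympy (illustrative sketch only).
from sympy import groebner, symbols

x, y = symbols('x y')

# Two polynomial equations: the unit circle and the hyperbola x*y = 1.
system = [x**2 + y**2 - 1, x*y - 1]

# A lexicographic Gröbner basis eliminates variables step by step,
# analogously to Gaussian elimination for linear systems.
basis = groebner(system, x, y, order='lex')
print(basis)
# The basis elements are x + y**3 - y and y**4 - y**2 + 1: the second
# polynomial is univariate in y, so the system can be solved back to front.
```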

II. Automated Reasoning

Abstract
Observing is the process of obtaining new knowledge, expressed in language, by bringing the senses in contact with reality. Reasoning, in contrast, is the process of obtaining new knowledge from given knowledge, by applying certain general transformation rules that depend only on the form of the knowledge and can be carried out exclusively in the brain, without involving the senses. Observation and reasoning, together, form the basis of the scientific method for explaining reality. Automated reasoning is the science of establishing methods that make it possible to replace human step-wise reasoning by procedures that perform individual reasoning steps mechanically and can automatically find suitable sequences of reasoning steps for deriving new knowledge from given knowledge.
Tudor Jebelean, Bruno Buchberger, Temur Kutsia, Nikolaj Popov, Wolfgang Schreiner, Wolfgang Windsteiger
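
The idea that reasoning steps depend only on the form of knowledge can be illustrated by a toy forward-chaining reasoner, sketched below. This is a hypothetical mini-example, not one of the provers described in the chapter:

```python
# Toy forward chaining: derive new facts purely by the *form* of the
# knowledge (modus ponens), with no appeal to meaning or the senses.
facts = {"socrates_is_human"}
rules = [
    ("socrates_is_human", "socrates_is_mortal"),  # if P holds, conclude Q
    ("socrates_is_mortal", "socrates_has_finite_lifespan"),
]

changed = True
while changed:  # keep applying rules until no new knowledge is derived
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)  # one mechanical reasoning step
            changed = True

print(sorted(facts))
```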

III. Metaheuristic Optimization

Abstract
Economic success frequently depends on a company’s ability to rapidly identify market changes and to adapt to them. Making optimal decisions within tight time constraints and under consideration of influential factors is one of the most challenging tasks in industry and applied computer science. Gaining expertise in solving optimization problems can therefore significantly increase the efficiency and profitability of a company and lead to a competitive advantage. Unfortunately, many real-world optimization problems are notoriously difficult to solve due to their high complexity. Such problems arise frequently, for example, in combinatorial optimization (where the search space tends to grow exponentially) and in nonlinear system identification (especially when no a priori knowledge about the kind of nonlinearity is available). Exact mathematical methods cannot solve these problems at realistic sizes within reasonable time.
Michael Affenzeller, Andreas Beham, Monika Kofler, Gabriel Kronberger, Stefan A. Wagner, Stephan Winkler
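
As a hedged illustration of the metaheuristic idea (accepting occasional worse moves in order to escape local optima), here is a minimal simulated-annealing sketch for a toy travelling-salesman instance; it stands in for the general principle, not for the specific methods developed by the group:

```python
# Simulated annealing on a tiny travelling-salesman instance (sketch).
import math
import random

cities = [(0, 0), (1, 5), (2, 2), (5, 1), (6, 6), (3, 4)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(42)
tour = list(range(len(cities)))
best = tour_length(tour)
temperature = 10.0

while temperature > 0.01:
    i, j = sorted(random.sample(range(len(cities)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
    delta = tour_length(candidate) - tour_length(tour)
    # Always accept improvements; accept deteriorations with a probability
    # that shrinks as the temperature cools (the metaheuristic ingredient).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        tour = candidate
        best = min(best, tour_length(tour))
    temperature *= 0.995  # geometric cooling schedule

print(tour, round(best, 2))
```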

IV. Software Engineering – Processes and Tools

Abstract
Software engineering traditionally plays an important role among the different research directions located in the Softwarepark Hagenberg, as it provides the fundamental concepts, methods, and tools for producing reliable, high-quality software. Software engineering, a quite young profession and engineering discipline, is not limited to the creation of simple software programs; in fact it introduces a complex and often quite costly lifecycle of software and derived products. Some efforts have been made to define software engineering as a profession and to outline the boundaries of this emerging field of research [PP04, Som04]. Several different definitions of the term software engineering have appeared since its first mention at the NATO Software Engineering Conference in 1968.
Gerhard Weiß, Gustav Pomberger, Wolfgang Beer, Georg Buchgeher, Bernhard Dorninger, Josef Pichler, Herbert Prähofer, Rudolf Ramler, Fritz Stallinger, Rainer Weinreich

V. Data-Driven and Knowledge-Based Modeling

Abstract
This chapter describes some highlights of successful research focusing on knowledge-based and data-driven models for industrial and decision processes. This research has been carried out during the last ten years in close cooperation between two research institutions in Hagenberg:
- the Fuzzy Logic Laboratorium Linz-Hagenberg (FLLL), a part of the Department of Knowledge-Based Mathematical Systems of the Johannes Kepler University Linz, which has been located in the Softwarepark Hagenberg since 1993,
- the Software Competence Center Hagenberg (SCCH), initiated by several departments of the Johannes Kepler University Linz as a non-academic research institution under the Kplus Program of the Austrian Government in 1999 and transformed into a K1 Center within the COMET Program (also of the Austrian Government) in 2008.
Erich Peter Klement, Edwin Lughofer, Johannes Himmelbauer, Bernhard Moser

VI. Information and Semantics in Databases and on the Web

Abstract
The world we live in is dominated by information affecting our business as well as our private lives; thus, the time we live in is commonly referred to as the “information age” or “knowledge age”. Information and knowledge, the latter providing the additional potential to infer new knowledge, are contained in databases, ranging from traditional ones storing structured data, via knowledge bases, semantic networks, and ontologies, up to the World Wide Web (WWW), which can be regarded as a huge distributed database following the hypertext paradigm of linked information and containing unstructured or semi-structured data. Information systems enable the retrieval of information and knowledge stored in their database component, e.g., via search engines in the case of the WWW. Current research approaches enable the management of semantics, i.e., the meaning of data; the Semantic Web, for example, aims at making information on the WWW interpretable for machines.
Roland Wagner, Josef Küng, Birgit Pröll, Christina Buttinger, Christina Feilmayr, Bernhard Freudenthaler, Michael Guttenbrunner, Christian Hawel, Melanie Himsl, Daniel Jabornig, Werner Leithner, Stefan Parzer, Reinhard Stumptner, Stefan Wagner, Wolfram Wöß
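
The notion of machine-interpretable semantics can be made concrete with a small sketch: knowledge expressed as subject-predicate-object triples, here using the open-source rdflib library (assumed installed); the chapter’s actual systems and ontologies are of course far richer:

```python
# Representing and querying knowledge as RDF triples (minimal sketch).
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# "Hagenberg is a technology park located in Austria", as triples.
g.add((EX.Hagenberg, RDF.type, EX.TechnologyPark))
g.add((EX.Hagenberg, EX.locatedIn, EX.Austria))
g.add((EX.Hagenberg, EX.name, Literal("Softwarepark Hagenberg")))

# A query a machine can evaluate: which subjects are located in Austria?
for subject in g.subjects(EX.locatedIn, EX.Austria):
    print(subject)  # http://example.org/Hagenberg
```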

VII. Parallel, Distributed, and Grid Computing

Abstract
The core goal of parallel computing is to speed up computations by executing independent computational tasks concurrently (“in parallel”) on multiple units in a processor, on multiple processors in a computer, or on multiple networked computers, which may even be spread across large geographical scales (distributed and grid computing); it is the dominant principle behind “supercomputing” or “high-performance computing”. For several decades, the density of transistors on a computer chip has doubled every 18–24 months (“Moore’s Law”); until recently, this rate could be directly translated into a corresponding increase of a processor’s clock frequency and thus into an automatic performance gain for sequential programs. However, since a processor’s power consumption also increases with its clock frequency, this strategy of “frequency scaling” ultimately became unsustainable: since 2004, clock frequencies have remained essentially stable, and additional transistors have primarily been used to build multiple processors on a single chip (multi-core processors). Today, therefore, every kind of software (not only “scientific” software) must be written in a parallel style to profit from newer computer hardware.
Wolfgang Schreiner, Károly Bósa, Andreas Langegger, Thomas Leitner, Bernhard Moser, Szilárd Páll, Volkmar Wieser, Wolfram Wöß
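
The core idea (independent tasks executed concurrently on multiple cores) can be demonstrated with Python’s standard library, as in the sketch below; production HPC codes would use MPI, OpenMP, or GPU programming instead:

```python
# Measuring the speedup of running independent CPU-bound tasks in parallel.
import time
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """A deliberately CPU-bound task: count primes below `limit`."""
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    tasks = [50_000] * 8  # eight independent work items

    start = time.perf_counter()
    sequential = [count_primes(t) for t in tasks]
    t_seq = time.perf_counter() - start

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        parallel = list(pool.map(count_primes, tasks))
    t_par = time.perf_counter() - start

    assert sequential == parallel  # same results, obtained concurrently
    print(f"sequential: {t_seq:.2f}s, parallel: {t_par:.2f}s, "
          f"speedup: {t_seq / t_par:.1f}x")
```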

VIII. Pervasive Computing

Abstract
Pervasive Computing has developed a vision where the “computer” is no longer associated with a single device or a network of devices, but rather with the entirety of situative services originating in a digital world and perceived through the physical world. It is expected that services with explicit user input and output will be replaced by a computing landscape that senses the physical world via a huge variety of sensors and controls it via a plethora of actuators. The nature and appearance of computing devices will change: they will be hidden in the fabric of everyday life, invisibly networked, and omnipresent. Applications and services will have to be based largely on the notions of context and knowledge and will have to cope with highly dynamic environments and changing resources. “Context” refers to any information describing the situation of an entity, such as a person, a thing, or a place. Interaction with such computing landscapes will presumably be more implicit, at the periphery of human attention, rather than explicit, i.e., at the focus of attention. In this chapter we address some of the research challenges of Pervasive Computing and emerging issues of interaction in Pervasive Computing environments. Once computing devices pervade objects of everyday life, computers will be “invisible”, but physical interfaces will be “omnipresent”, hidden in literally “every thing”. The chapter contrasts implicit and explicit interaction approaches at the frontiers of pervasive, integrated, and thus “hidden” technology. In the outlook, we give a more systematic prospect of emerging lines of research.
Alois Ferscha
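
The definition of “context” above can be sketched as a data structure; the field names below are hypothetical and chosen only to illustrate the notion, since real context models involve ontologies, sensor fusion, and uncertainty:

```python
# "Context": structured information describing the situation of an entity.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    entity: str          # the person, thing, or place being described
    location: str        # where the entity currently is
    activity: str        # what the entity is currently doing
    timestamp: datetime  # when this situation was observed

ctx = Context(entity="alice", location="office_042",
              activity="in_meeting", timestamp=datetime.now())

# A situative service reacts to sensed context instead of explicit input:
if ctx.activity == "in_meeting":
    print(f"Silencing notifications for {ctx.entity} in {ctx.location}")
```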

IX. Interactive Displays and Next-Generation Interfaces

Abstract
Until recently, the limitations of display and interface technologies have restricted the potential for human interaction and collaboration with computers. For example, desktop-computer-style interfaces have not translated well to mobile devices, and static display technologies tend to leave the user one step removed from interacting with content. However, the emergence of interactive whiteboards has pointed to new possibilities for using display technology for interaction and collaboration. A range of emerging technologies and applications could enable more natural and human-centred interfaces, so that interacting with computers and content becomes more intuitive. This will be important as computing moves from the desktop to be embedded in objects, devices, and locations around us, and as our desktop and data are no longer device-dependent but follow us across multiple platforms and locations.
Michael Haller, Peter Brandl, Christoph Richter, Jakob Leitner, Thomas Seifried, Adam Gokcezade, Daniel Leithinger

Backmatter
