
2001 | Book

Web Intelligence: Research and Development

First Asia-Pacific Conference, WI 2001 Maebashi City, Japan, October 23–26, 2001 Proceedings

Edited by: Ning Zhong, Yiyu Yao, Jiming Liu, Setsuo Ohsuga

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Introduction

Web Intelligence (WI) Research Challenges and Trends in the New Information Age

This paper is about a new research field called Web Intelligence (WI for short). We try to explain the need for coining the term as a sub-discipline of computer science for systematic studies on advanced Web-related theories and technologies, as well as the design and implementation of Intelligent Web Information Systems (IWIS). Background information and related topics are discussed in an attempt to demonstrate why we consider WI to be a subject worthy of study and, at the same time, to establish a starting point for the further development of WI.

Y. Y. Yao, Ning Zhong, Jiming Liu, Setsuo Ohsuga

Invited Talks

Knowledge is Power: The Semantic Web Vision

Good science periodically revisits old results in the context of new discoveries and technologies. In this way, new understanding is gained of the earlier results and, sometimes, new insights can be gained into current work. This in turn can lead to new discoveries, and so the process continues. In this paper, we revisit the generalization known as the “knowledge principle,” introduced more than twenty years ago to explain the source of power of expert systems. We show that in a new context, the power of knowledge will come from the distribution and decentralization of knowledge that is ubiquitously developed and applied. In the new semantic web concept, tools are provided to the large population of WWW users that allow those individuals (perhaps millions of them) to encode small bodies of knowledge that can be integrated into an effective large knowledge base. The metaphor of “knowledge is power” thus changes from one of centralized power to one of distributed power.

James Hendler, Edward A. Feigenbaum
From Computational Intelligence to Web Intelligence: An Ensemble from Potpourri

The advent of the internet has changed the world in possibly more significant ways than any other event in the history of humanity. Is internet access and use beyond the reach of ordinary people with ordinary intelligence? Ignoring for the moment economic issues of access for all citizenry, what is it about internet access and use that hinders more widespread acceptability? We explore several issues, not exclusive, that attempt to provoke and poke at answers to these simple questions. Largely speculative, as invited talks ought to be, we explore three topics, well studied but as yet generally unsolved, in computational intelligence and explore their impact on web intelligence. These topics are machine translation, machine learning, and user interface design. Conclusions will be mine; readers will draw general conclusions.

Nick Cercone
Pedagogical Agents for Web-Based Learning

This presentation will discuss techniques for incorporating “guidebots,” or animated pedagogical agents, into Web-based learning environments. Guidebots help keep learning on track; they offer students advice and guidance as appropriate in order to get the most out of on-line learning experiences. Guidebots build on research in intelligent tutoring systems, but go further by engaging the learner in natural face-to-face interaction. Guidebots can stand on the side and discuss learning objectives with the learners; they can also work together with learners as teammates, and can play roles within interactive educational stories. They help bring the aesthetics of animated entertainment to interactive educational experiences. This talk will present recent developments in guidebot technology, and outline challenges for current research.

W. Lewis Johnson
Ontological Engineering: Foundation of the Next Generation Knowledge Processing

Ontological engineering as a key technology of the next generation of knowledge processing is discussed. After a brief introduction to ontological engineering with my speculation about its potential contribution, three major results of the practice of ontological engineering in my lab are presented. Then, the paradigm shift in information processing is discussed, followed by future directions in the Web intelligence context.

Riichiro Mizoguchi
Social Networks on the Web and in the Enterprise

The subject of this talk is the use of ideas from social network theory on the web and in the enterprise. We begin by reviewing a number of empirical observations on the web, concerning various measures of popularity of websites. Next, we describe how these observations can be used in algorithms for searching and mining on the web. We develop mathematical models for these phenomena. Finally, we discuss how these ideas and phenomena change as one goes from the public web to the confines of enterprises.

Prabhakar Raghavan
3D Object Recognition and Visualization on the Web

This research deals with state-of-the-art novel ideas in high-level visualization, understanding, and interpretation of line-drawing images of 3D patterns, including articulated objects. A new structural approach using linear combination and fast two-pass parallel matching techniques is presented. It is aimed at learning, representing, visualizing, and interpreting 2D line drawings as 3D objects, with only very few learning samples. It addresses one of the basic concerns in diffusion tomography complexities, i.e. patterns can be reconstructed through fewer projections, and 3D objects can be recognized from a few learning sample views. It can also strengthen the advantages of current key methods while overcoming their drawbacks. Furthermore, it is able to distinguish objects with very similar properties and is more accurate than other methods in the literature. In addition, an experimental system using JAVA and a user-friendly interactive platform has been established for testing large volumes of image data in a virtual environment on the web, for learning and recognition. Several illustrative examples are demonstrated, including learning, recognizing, visualization, and interpretation of 3D line-drawing polyhedral objects. Finally, future research topics are outlined.

Patrick S. P. Wang, IAPR Fellow

Web Information System Environment and Foundations

A Web Proxy Cache Coherency and Replacement Approach

We propose an adaptive cache coherence-replacement scheme for web proxy cache systems that is based on several criteria about the system and applications, with the objective of optimizing the distributed cache system performance. Our coherence-replacement scheme assigns a replacement priority value to each cache block according to a set of criteria to decide which block to remove. The goal is to provide an effective utilization of the distributed cache memory and a good application performance.

Jose Aguilar, Ernst Leiss
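The abstract above describes assigning each cache block a replacement priority from several criteria. A minimal sketch of that idea follows; the specific criteria (recency, access frequency, object size) and the combining formula are illustrative assumptions, not the authors' scheme.

```python
import time

class PriorityReplacementCache:
    """Toy proxy-cache replacement policy: evict the block with the
    lowest priority, computed from frequency, size, and age.
    The priority formula below is an illustrative assumption."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # key -> (value, size, freq, last_access)

    def _priority(self, key):
        value, size, freq, last = self.store[key]
        age = time.time() - last
        # Lower value = better eviction candidate: old, rarely used, large.
        return freq / (size * (1.0 + age))

    def put(self, key, value, size=1):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self._priority)
            del self.store[victim]
        _, _, freq, _ = self.store.get(key, (None, None, 0, None))
        self.store[key] = (value, size, freq + 1, time.time())

    def get(self, key):
        if key not in self.store:
            return None
        value, size, freq, _ = self.store[key]
        self.store[key] = (value, size, freq + 1, time.time())
        return value
```

A frequently accessed small block then survives eviction while a cold block is replaced first.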
Content Request Markup Language (CRML): A Distributed Framework for XML-Based Content Publishing

Constructing web applications that provide dynamic, personalized web content with high scalability and performance is a challenge to the software industry in the next century. In most available solutions, load balancing and caching mechanisms are introduced in front of web servers to reduce workload. In this paper we present Content Request Markup Language (CRML), an enabling technique for distributed XML processing at the content level. CRML is a language based on the emerging XML standards XSLT and XPath, for publishing XML-based content over the HTTP protocol. It provides hints to construct a distributed framework to support parallel XML-based content publishing. In addition, the content from databases or other sources can be cached before or after processing at the block or page level. With parallel content publishing and the caching mechanism, CRML can provide a high-performance platform for fully customized web services.

Chi-Huang Chiu, Kai-Chih Liang, Shyan-Ming Yuan
A Rough Set-Aided System for Sorting WWW Bookmarks

Most people store “bookmarks” to web pages. These allow the user to return to a web page later on, without having to remember the exact URL address. People attempt to organise their bookmark databases by filing bookmarks under categories, themselves arranged in a hierarchical fashion. As the maintenance of such large repositories is difficult and time-consuming, a tool that automatically categorises bookmarks is required. This paper investigates how rough set theory can help extract information out of this domain, for use in an experimental automatic bookmark classification system. In particular, work on rough set dependency degrees is applied to reduce the otherwise high dimensionality of the feature patterns used to characterize bookmarks. A comparison is made between this approach to data reduction and a conventional entropy-based approach.

Richard Jensen, Qiang Shen
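The rough set dependency degree mentioned in the abstract has a standard definition: gamma_P(Q) = |POS_P(Q)| / |U|, where POS_P(Q) is the union of the P-lower approximations of the Q-equivalence classes. A small sketch of that computation follows; the toy decision table in the usage is an illustrative assumption.

```python
from collections import defaultdict

def partition(table, attrs):
    """Group row indices into equivalence classes by the given attributes."""
    blocks = defaultdict(set)
    for i, row in enumerate(table):
        blocks[tuple(row[a] for a in attrs)].add(i)
    return list(blocks.values())

def dependency_degree(table, cond, dec):
    """gamma_cond(dec) = |POS_cond(dec)| / |U| on a decision table
    given as a list of dict rows."""
    dec_blocks = partition(table, dec)
    pos = set()
    for block in partition(table, cond):
        # A condition class is in the positive region iff it lies
        # wholly inside a single decision class.
        if any(block <= d for d in dec_blocks):
            pos |= block
    return len(pos) / len(table)
```

Attributes whose removal leaves the dependency degree unchanged are candidates for elimination, which is how the degree supports dimensionality reduction.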
Average-Clicks: A New Measure of Distance on the World Wide Web

The pages and hyperlinks of the World Wide Web may be viewed as nodes and edges in a directed graph. In this paper, we propose a new definition of the distance between two pages, called average-clicks. It is based on the probability of clicking a link through random surfing. We compare the average-clicks measure to the classical measure of clicks between two pages, and show that average-clicks fits users’ intuitions of distance better.

Yutaka Matsuo, Yukio Ohsawa, Mitsuru Ishizuka
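One plausible formalization of the abstract's idea, offered only as a sketch: a random surfer on a page with k out-links follows any one link with probability 1/k, so a path's probability is the product of 1/out-degree terms; taking logs turns that into an additive edge weight, and a shortest weighted path gives the "closest" page. The log base alpha below is an illustrative parameter, not necessarily the paper's choice.

```python
import heapq
import math

def average_clicks(graph, source, target, alpha=7.0):
    """Dijkstra over edge weights log_alpha(out-degree), a sketch of
    the average-clicks distance.  graph: dict page -> list of links."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, page = heapq.heappop(heap)
        if page == target:
            return d
        if d > dist.get(page, math.inf):
            continue
        for nxt in graph.get(page, []):
            # One click from `page` costs log_alpha of its out-degree.
            w = d + math.log(len(graph[page]), alpha)
            if w < dist.get(nxt, math.inf):
                dist[nxt] = w
                heapq.heappush(heap, (w, nxt))
    return math.inf  # target unreachable by random surfing
```

Under this weighting, a path through low-fanout pages counts as "closer" than one through link-heavy hub pages, matching the intuition the abstract describes.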
Autonomy Oriented Load Balancing in Proxy Cache Servers

Proxy cache servers are used to deal with the increasing demand for information on the Internet by caching frequently referenced web objects. It is common to have more than one proxy cache server installed in one local network. The problem of load balancing then arises as organizations want to utilise the resources in the best way. This article proposes two methods to tackle the load balancing problem. The two methods are based on the notion of autonomy oriented computation, where entities in the model are allowed to make local decisions and only need to interact with local neighbors.

Kwok Ching Tsui, Jiming Liu, Hiu Lo Liu
Emerging Topic Tracking System

Due to its open nature, the Web is constantly being posted with vast amounts of new information. Consequently, at any time, hot issues may emerge in any information area that interests users. However, it is not practical for users to browse the Web all the time for updates. Thus, we need the Emerging Topic Tracking System (ETTS) as an information agent, to detect changes in the information area of our interest and generate a summary of the changes for us from time to time. This summary of changes captures the latest most-discussed issues and may reveal an emerging topic.

Khoo Khyou Bun, Mitsuru Ishizuka
On Axiomatizing Probabilistic Conditional Independencies in Bayesian Networks

Several researchers have suggested that Bayesian networks (BNs) should be used to manage the inherent uncertainty in information retrieval. However, it has been argued that manually constructing a large BN is a difficult process. In this paper, we obtain the only minimal complete subset of the semi-graphoid axiomatization governing the independency information in a BN. This result may be useful in developing an automated BN construction procedure for information retrieval purposes.

C. J. Butz
Dynamic Expert Group Models for Recommender Systems

Recently many recommender systems have been developed to recommend items in online commerce markets based on a particular user’s preferences, but they have difficulty deriving preferences for users who have not rated many documents. In this paper we use dynamic expert-group models to recommend domain-specific items or documents for unspecified users, while users give feedback in the form of relative ratings over the recommended items or documents. In this system, the group members have dynamic authority weights depending on their performance in the ranking evaluations. We have tested two effectiveness measures on rank order to determine whether the current top-ranked lists recommended by experts are reliable.

DaeEun Kim, Sea Woo Kim
The ABC’s of Online Community

Online community is having growing social and commercial impact on the WWW, but what does “community” mean online? Can the level of community be measured? This article articulates an evidential conceptual model of community, synthesizing earlier definitions drawn from the literature and creating new core conditions. The four conditions (purpose, commitment, context, and infrastructure) are, we believe, necessary and sufficient for modeling and gauging intra-community “glue”; without this glue, sustainable community cannot manifest.

Robert McArthur, Peter Bruza
Sufficient Conditions for Well-Behaved Adaptive Hypermedia Systems

We focus on well-behaved Adaptive Hypermedia Systems, which means the adaptation engine that executes adaptation rules always terminates and produces predictable (confluent) adaptation results. Unfortunately termination and confluence are undecidable in general. In this paper we discuss sufficient conditions to help authors to write adaptation rules that satisfy termination and confluence.

Hongjing Wu, Paul De Bra

Web Human-Media Engineering

Towards Formal Specification of Client-Server Interactions for a Wide Range of Internet Applications

The traditional way of designing Internet applications involves writing code for programming the sequence of pages presented to the client and the associated decision-making logic. This makes the interaction flow of an application unclear and reduces its maintainability. A formal method of expressing interactions in an Internet application, based on the notion of an Interaction Machine, is proposed. This formalism can be mapped to interaction specifications expressed in XML that can be interpreted by an application-neutral universal controller. Additional advantages of the proposed universal controller include full synchronization between the client and the server (a well-known problem for Internet applications) and preservation of the complete history of the application context, which allows for true rollbacks and resumption of long-suspended applications. The approach can be used for implementing a wide range of applications, including standard HTML-based applications, WML-based WAP applications, and Business-to-Business XML-based applications.

Vadim Doubrovski
Collecting, Visualizing, and Exchanging Personal Interests and Experiences in Communities

In this paper, we propose a notion of facilitating encounters and knowledge sharing among people having shared interests and experiences in museums, conferences, etc. In order to show our approach and current status, this paper presents our project to build a communityware system situated in real-world contexts. The aims of the project are to build a tour guidance system personalized according to its user’s individual contexts, and to facilitate knowledge communications among communities by matchmaking users having shared interests and providing real and/or virtual places for their meetings. In this paper, we first show PalmGuide, a hand-held tour guidance system. After that, we show two Web-based systems that increase the level of “community-awareness”. One is Semantic Map, a visual interface for exploring community information, such as exhibits and people (exhibitors and visitors). The other is AgentSalon, a display showing conversations between personal agents based on their users’ profiles and interests.

Yasuyuki Sumi, Kenji Mase
Audio Content Description in Sound Databases

Sound database indexing requires metadata to represent the audio content of the data. If the metadata are not attached to the database by its creator, content information has to be extracted directly from sounds, using descriptors based on sound analysis. In this paper, the authors present a number of sound descriptors based on various forms of signal analysis. Telescope Vector trees (TV-trees) and Frame Segment trees (FS-trees) are applied to represent audio content on the basis of the extracted sound descriptors and metadata provided by the database creator (if available). Such a representation of the audio content of the database is used to speed up the search for audio material in multimedia databases.

Alicja A. Wieczorkowska, Zbigniew W. Raś
Intelligent Interfaces for Distributed Web-Based Product and Service Configuration

This paper focuses on the enhancement of web-based selling technology for complex products. The approach of the EC-funded CAWICOMS project is twofold: provision of technologies both for customer-adaptive Web interfaces for the configuration of mass-customized products and for the integration of configuration systems along the supply chain. Within this paper we first motivate the demand for personalized and adaptive Web interfaces of product configurators as an efficient means for customer relationship management. In addition, we sketch scenarios where product configuration takes place at several stages in the supply chain and the involved configuration systems have to cooperatively solve a distributed configuration task.

L. Ardissono, A. Felfernig, G. Friedrich, D. Jannach, R. Schäfer, M. Zanker
Using Networked Workshop System to Enhance Creative Design

This study examined the usefulness of networked workshop instruction, an instructional method that emphasizes presentation, discussion, evaluation, and knowledge construction. In workshop instruction, web-based peer assessment was used to evaluate students’ performance. Twenty-four computer and information science graduate students enrolled in a course “Web and Database Integration” and were assigned to nine teams. Each team was instructed to design a web-based system capable of performing certain functions. Functioning much as researchers and scientists would in a workshop, participants orally presented their ideas, and web-based peer assessment was conducted to increase critical feedback while each team designed its own product. Creative products and qualitative comments from professors are presented to demonstrate the students’ high-quality achievements.

Sunny S. J. Lin, Eric Z. -F. Liu, M. C. Cheng, S. M. Yuan
Discovering Seeds of New Interest Spread from Premature Pages Cited by Multiple Communities

The World Wide Web is a great source of new topics significant for trend birth and creation. In this paper, we propose a method for discovering topics that stimulate communities of people into earnest communication about the topics’ meaning and grow into a trend of popular interest. Here, what we obtain are web pages that absorb the attention of people from multiple interest communities. It is shown by experiments with a small group of people that topics in such pages can trigger the growth of people’s interests beyond the bounds of existing communities.

Naohiro Matsumura, Yukio Ohsawa, Mitsuru Ishizuka
Personalized Web Knowledge Management

In this paper, we propose a novel approach for personalized Web knowledge management by incorporating dynamic (e.g., news articles) and static (e.g., technical papers) Web content. We integrate these two types of information by using intelligent crawling and meta-data creation, and provide a visual interface to personalize and navigate through the information tailored to an individual user.

Koichi Takeda, Hiroshi Nomiyama

Web Information Management

Event and Rule Services for Achieving a Web-Based Knowledge Network

This paper presents the concept of a Web-based knowledge network. A knowledge model is first described and then the overall architectural framework of the knowledge network is discussed. The implemented prototype system allows providers of information resources and/or application system services to publish not only their data resources and/or services on Web pages, but also their knowledge in the form of events, rules, and triggers associated with the contents and operations of these pages. Internet users can access these Web pages, subscribe in a registration process to some published events on a page, and provide values for event filters and customizable rules. The subscribers can also specify additional triggers and rules of their own to be processed on the subscribers’ sites. At run-time, when an event is posted by a provider (human or automated application system), event filters are processed, the relevant subscribers are notified, and both the provider and subscribers’ triggers and rules are processed on their respective sites. The architectural framework allows both providers and subscribers of information resources and services to contribute their knowledge to the Internet, thus forming a Web-based knowledge network instead of the present data/information network. The knowledge network is constructed by a number of replicable software components, which can be installed at various network sites. They, together with the existing Web servers, form the knowledge Web servers.

Minsoo Lee, Stanley Y. W. Su, Herman Lam
Knowledge-Based Validation, Aggregation, and Visualization of Meta-data: Analyzing a Web-Based Information System

As meta-data become of increasing importance to the Web, we will need to start managing such meta-data. We argue that there is a strong need for meta-data validation and aggregation. We introduce the Spectacle Workbench for verifying semi-structured information and show how it can be used to validate, aggregate and visualize the metadata of an existing Information System. We conclude that the possibility to verify and aggregate meta-data is an added value with respect to contents-based access to information.

Heiner Stuckenschmidt, Frank van Harmelen
Online Handwritten Signature Verification for Electronic Commerce over the Internet

There is a lot of potential in using on-line handwritten signature verification over the Internet. Banks and Government bodies recognize signatures as a legal means of authentication. Compared with other electronic identification methods such as fingerprint scanning and retinal vascular pattern screening, it is easier for people to migrate from using the popular pen-and-paper signature to one where the handwritten signature is captured and verified electronically. In an era where electronic commerce and online banking are gaining world-wide popularity and huge acceptance among the common masses, there has to be a way of verifying the identity of the potential client in cyberspace. A verification model incorporated with an appropriate algorithm is proposed to facilitate on-line handwritten signature verification for electronic commerce over the Internet. The client/server architecture and the verification algorithm used are presented to demonstrate the feasibility of on-line handwritten signatures as an authentication means for e-commerce.

W. Sardha Wijesoma, K. W. Yue, K. L. Chien, T. K. Chow
A Data Model for XML Databases

In the proposed data model for XML databases, an XML element is directly represented as a ground (variable-free) XML expression, a generalization of an XML element that incorporates variables to represent implicit information and enhance its expressive power, while a collection of XML documents is represented as a set of ground expressions, each describing an XML element in the documents. Relationships among elements in the collection as well as integrity constraints are formalized as XML clauses. An XML database, consisting of (i) a document collection (an extensional database), (ii) a set of relationships (an intensional database), and (iii) a set of integrity constraints, is therefore modeled as an XML declarative description comprising a set of ground XML expressions and XML clauses. Its semantics is a set of ground XML expressions, which are explicitly described by the extensional database or implicitly derived from the intensional database, and which satisfy all the specified constraints.

Vilas Wuwongse, Chutiporn Anutariya, Kiyoshi Akama, Ekawit Nantajeewarawat
Conference Information Management System: Towards a Personal Assistant System

We aim to develop a personal assistant system which handles its user’s files stored on his/her computers and information of interest to him/her that can be accessed on the Web. The system categorizes the gathered files and web pages, extracts information specified by the user, and stores it into a structured database for use as the knowledge of its dialogue module. This paper presents a prototype of the system that handles information about conferences of interest to the user. The system extracts conference names, submission deadlines, dates, URIs, and locations from e-mail messages the user has received and from web pages gathered by both crawling and meta-search. Extracted information is stored in a database so that the user can interactively search conference information via a user interface with natural language queries.

Tsunenori Mine, Makoto Amamiya, Teruko Mitamura
Automatic Intelligence Gathering from the Web: A Case Study in Container Traffic

The Internet provides one of the largest public repositories of data available to anyone with a networked computer. The amount of data available is so large, and so easily accessible, that it opens up a wonderful possibility: to extract the data, normalise it for the problem at hand, and package it for users who may not have the time or technical capacity to make full use of the information sources on the web directly. This paper describes a project where large amounts of data on the movement of containers were turned into a consolidated database that could assist members of the anti-fraud community investigating customs or other fraud relating to the transport of goods using containers. The main emphasis was placed on how these vast amounts of data could be harnessed systematically and disseminated over the Internet to those interested.

José Perdigao, Anjula Garg, Thomas Barbas, Stefan Scheer, Giuseppe Mastrangelo, Giovanna Rubino
The Work Concept RBAC Model for the Access Control of the Distributed Web Server Environment

Role-Based Access Control (RBAC), the most suitable access control concept available today for a distributed web server system within a domain, is applied in this paper to address the situation where a user must follow a separate verification process on each server when accessing multiple web servers during a task. Work, a concept that is higher-level, more abstract, and more inclusive than Role, is added to the existing RBAC model so that the user can select a Work instead of a Role; this allows the work to be completed more efficiently using the rights provided by each server based on the selected Work.

Won Bo Shim, Seog Park
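A minimal sketch of the "Work" idea described above: a Work bundles the Roles a task needs on several servers, so selecting one Work grants the permissions of every bundled Role without a separate step per server. The class names and toy permissions below are illustrative assumptions, not the authors' model.

```python
class RBACDomain:
    """Toy multi-server RBAC with a Work layer above Roles."""

    def __init__(self):
        self.role_perms = {}  # (server, role) -> set of permissions
        self.works = {}       # work name -> set of (server, role) pairs

    def add_role(self, server, role, perms):
        self.role_perms[(server, role)] = set(perms)

    def define_work(self, work, roles):
        # A Work bundles roles across several servers for one task.
        self.works[work] = set(roles)

    def permissions_for(self, work):
        # Selecting a Work yields the union of all bundled roles' rights.
        return {p for sr in self.works[work] for p in self.role_perms[sr]}
```

Selecting the Work once then stands in for authenticating a Role on each server separately, which is the efficiency gain the abstract claims.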
A New Conceptual Graph Generated Algorithm for Semi-structured Databases

As the World Wide Web has grown dramatically in recent years, there is increasing interest in semi-structured data on the web. Semi-structured data are usually represented in graph format, and many graph schemas have been proposed to extract schemas from such data graphs. Conceptual graphs, which use an incremental conceptual clustering method to extract schemas, were initially proposed in 2000. In this paper, we revise the original algorithm for generating a conceptual graph by proposing some new operators in the construction process. The results show that with the revised algorithm, the quality of the conceptual graphs is improved for query optimization.

Kam Fai Wong, Yat Fan Su, Dongqing Yang, Shiwei Tang

Web Information Retrieval

A Contextual Term Suggestion Mechanism for Interactive Web Search

This paper presents a novel term suggestion mechanism for interactive web search. The main distinction of the proposed mechanism is that it exploits the contextual information among the series of query terms submitted by the user in a search process. The main objective is to facilitate identifying the exact information need of the user and therefore to make better term suggestions to the user. This paper also discusses the main issues concerning implementation of the proposed term suggestion mechanism and reports some experimental results regarding its effects.

Chien-Kang Huang, Yen-Jen Oyang, Lee-Feng Chien
3DGML: A 3-Dimensional Graphic Information Retrieval System

This paper presents a web-based information retrieval system for 3-D graphic data. We describe a 3-D database system and its web-based user interface supporting semantics of 3-D objects. Our system offers a content-based retrieval for 3-D scenes that few graphic database systems are capable of. The user can pose a visual query involving 3-D shapes and spatial relations on the web interface. The data model underlying the retrieval system models 3-D scenes using domain objects and their spatial relations. An XML-based data modeling language called 3DGML has been designed to support the data model. It offers an object-oriented 3-D image modeling mechanism that separates low level implementation details of 3-D objects from their semantic roles in a 3-D scene. We discuss the retrieval system and the data modeling technique in detail. We believe our work is one of the earliest efforts to take advantage of XML for 3-D graphics.

Jong Ha Hwang, Keung Hae Lee, Soochan Hwang
An Evolutionary Approach to Automatic Web Page Categorization and Updating

Catalogues play an important role in most current Web search engines. The catalogues, which organize documents into hierarchical collections, are maintained manually, increasing difficulty and costs due to the incessant growth of the WWW. This problem has stimulated many researchers to work on automatic categorization of Web documents. In reality, most of these approaches work well either on special types of documents or on restricted sets of documents. This paper presents an evolutionary approach useful for automatically constructing the catalogue as well as performing the classification of a Web document. This functionality relies on a genetic-based fuzzy clustering methodology that applies clustering to the context of the document, as opposed to content-based clustering that works on the complete document information.

Vincenzo Loia, Paolo Luongo
Automatic Web-Page Classification by Using Machine Learning Methods

This paper describes automatic Web-page classification using machine learning methods. Recently, the importance of portal site services, including the search engine function on the World Wide Web, has been increasing. In particular, portal sites such as Yahoo!, which hierarchically classify Web pages into many categories, are becoming popular. However, the classification of Web pages into each category relies on manpower, which costs much time and care. To alleviate this problem, we propose techniques to generate attributes using co-occurrence analysis and to classify Web pages automatically based on machine learning. We apply these techniques to Web pages on Yahoo! JAPAN and construct decision trees that determine the appropriate category for each Web page. The performance of the proposed method is evaluated in terms of error rate, recall, and precision. The experimental evaluation demonstrates that this method provides acceptable accuracy in classifying Web pages into top-level categories on Yahoo! JAPAN.

Makoto Tsukada, Takashi Washio, Hiroshi Motoda
A Theory and Approach to Improving Relevance Ranking in Web Retrieval

The development of the World Wide Web (WWW) makes a huge amount of information available on-line, and the amount of information continues to increase. As of March 2001 the Google search engine searches 1,346,966,000 Web pages. Many search systems have been developed to manage this massive collection of information. Investigation shows that the primary method used by these systems is classification. Unfortunately, classification has an intrinsic restriction. Consider this example. Recently, we sent a query consisting of the word “computer” to Google, and Google found 33,220,000 relevant Web pages. This number far exceeds anything that people can possibly begin to read. This problem is intrinsic to classification, which means it cannot be avoided. The problem is explained by the Pigeonhole Principle (i.e. Dirichlet’s Box Principle) [10]. Suppose we can classify Web pages using all the English words in a dictionary. Given a particular keyword, let us calculate on average how many Web pages will be classified as relevant. Let totalKeywords be the number of all keywords in a vocabulary list. Let averageKeywords be the average number of keywords that a Web document may have. Let the number of all Web pages be n. Let the number of relevant Web pages be numberRelevant. Then we have: $$ numberRelevant \approx \frac{n \times averageKeywords}{totalKeywords}. $$ If n = 1346966000, averageKeywords = 100, and totalKeywords = 10000, then numberRelevant is 13469660.

Z. W. Wang, R. B. Maguire
A Fast Image-Gathering System on the World-Wide Web Using a PC Cluster

Thanks to the recent explosive growth of the WWW (World-Wide Web), we can easily access a large number of images on the WWW. There are, however, no established methods for making use of the WWW as a large image database. In this paper, we describe an automatic image-gathering system for the WWW that uses both keywords and image features. By exploiting existing keyword-based search engines and selecting images by their image features, our system obtains, with high accuracy, images that are strongly related to the query keywords. The system has been implemented on a parallel PC cluster, which enables us to gather more than one hundred images from the WWW in about one minute.

Keiji Yanai, Masaya Shindo, Kohei Noshita
MELISSA: Mobile Electronic LSA Internet Server Search Agent

Searching the World Wide Web for desired information is becoming increasingly challenging with the growing number of web servers and pages. Commonly used search engines often return broken links, out-of-date information, and unrelated pages. This project presents an alternative to these search engines by describing an architecture for a mobile autonomous search agent. MELISSA searches Internet web servers on the user’s behalf and reports up-to-date results, free of broken links, back to the user in real time. In addition, MELISSA suppresses most unrelated pages by applying Latent Semantic Analysis (LSA), a statistical method for extracting and representing meaning in text corpora. An experiment comparing MELISSA to the AltaVista Internet search engine reveals significant improvements in search performance.

Hal D. Brian, Max H. Garzon
Construction of a Fuzzy Multilingual Thesaurus and Its Application to Cross-Lingual Text Retrieval

Cross-lingual text retrieval (CLTR) is a problem of vocabulary mismatch. To allow multilingual term matching, a multilingual thesaurus is used. However, a multilingual thesaurus encoded with exact translation equivalents only is insufficient for effective CLTR, since relevant documents are often indexed by cross-lingual related terms. In this paper, a novel approach for automatically constructing a multilingual thesaurus based on fuzzy set theory is proposed. By introducing a degree of relatedness between multilingual terms using the concept of membership degree, partial matching of cross-lingual related terms is facilitated.
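The matching idea can be sketched as follows; this is an editorial illustration, and the term pairs, membership degrees, and function names below are invented for the example, not taken from the paper:

```python
# Hypothetical sketch of fuzzy cross-lingual term matching: each source
# term maps to target-language terms with a membership degree in [0, 1];
# a query term matches a document to the highest degree of relatedness
# it has with any of the document's index terms.
thesaurus = {
    # (source term, target term) -> degree of relatedness (illustrative)
    ("computer", "ordinateur"): 1.0,    # exact translation equivalent
    ("computer", "informatique"): 0.6,  # related, not an exact equivalent
}

def cross_lingual_match(query_term, doc_terms):
    """Degree to which query_term matches any term indexing the document."""
    return max(
        (deg for (s, t), deg in thesaurus.items()
         if s == query_term and t in doc_terms),
        default=0.0,
    )

print(cross_lingual_match("computer", {"informatique", "retrouvaille"}))  # 0.6
```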

Rowena Chau, Chung-Hsing Yeh
Declustering Web Content Indices for Parallel Information Retrieval

We consider an information retrieval (IR) system on a low-cost, high-performance PC cluster environment. The IR system replicates the Web pages locally, indexes them with an inverted-index file (IIF), and uses the vector space model as its ranking strategy. The IIF is partitioned into pieces using the lexical and the greedy declustering methods, and the pieces are distributed to the cluster nodes and stored on each node’s hard disk. The lexical method assigns the terms in the IIF lexicographically to the processing nodes in turn, while the greedy method is based on the probability of co-occurrence of an arbitrary pair of terms in the IIF. For each incoming user query with multiple terms, the terms are sent to the nodes that contain the relevant pieces of the IIF, to be evaluated in parallel. We study how query performance is affected by the two declustering methods with various-sized IIFs. According to the experiments, the greedy method shows an overall enhancement of about 3.7% compared with the lexical method.
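As an editorial illustration of the lexical method described above (assuming nothing beyond the abstract's description — sort the vocabulary and deal terms out round-robin):

```python
# Sketch of lexical declustering: sort the vocabulary lexicographically
# and assign the terms to the cluster nodes in round-robin order.
def lexical_decluster(terms, num_nodes):
    partitions = [[] for _ in range(num_nodes)]
    for i, term in enumerate(sorted(terms)):
        partitions[i % num_nodes].append(term)
    return partitions

parts = lexical_decluster(["web", "index", "query", "node", "term"], 2)
print(parts)  # [['index', 'query', 'web'], ['node', 'term']]
```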

Yoojin Chung, Hyuk-Chul Kwon, Sang-Hwa Chung, Kwang Ryel Ryu
Indexing a Web Site to Highlight Its Content

This article presents a new approach to indexing a Web site. It uses ontologies and natural language techniques for information retrieval on the Internet. The main goal is to build a structured index of the Web site. This structure is given by a terminology-oriented ontology of a domain, chosen a priori according to the content of the Web site. The indexing process uses improved natural language techniques.

E. Desmontils, C. Jacquin
Using Implicit Relevance Feedback in a Web Search Assistant

The explosive growth of information on the World Wide Web demands effective intelligent search and filtering methods. Consequently, techniques have been developed that extract conceptual information from documents to build domain models automatically. The model we build is a taxonomy of conceptual terms that is used in a search assistant to help the user navigate to the right set of required documents. We monitor the dialogue steps performed by users to get feedback about the quality of choices proposed by the system and to adjust the model without manual intervention. Thus, we employ implicit relevance feedback to improve the domain model. Unlike in traditional relevance feedback and collaborative filtering tasks we do not need explicitly expressed user opinions. Moreover, we aim at improving the domain model as a whole rather than trying to build individual user profiles.

Maria Fasli, Udo Kruschwitz
The Development and Evaluation of an Integrated Imagery Access Engine

Nowadays many retrieval engines have been proposed, but each is based on only one point of view: keywords or features. In this paper, we present an integrated engine that uses both a metadata-type and a feature-type database. It was built on the Web, and all of its components are free software. It accepts keyword queries and allows the results to be grouped by content features. We evaluated the engine with ten evaluators, and the grouping method was rated highly.

Toru Fukumoto, Kanji Akahori
Conceptual Information Extraction with Link-Based Search

Link-based search provides a new vehicle for finding relevant web documents on the WWW. Recently, there is considerable optimism that the use of link information can improve search quality, as in Google. A text-based search engine usually returns the web sites that simply have the highest frequency of the user’s query terms, so the results may differ from the user’s expectations, whereas a hypertextual search engine finds the most authoritative sites. The proposed search engine consists of crawling, storage of the link structure, ranking, and personalization processes. A user profile encodes the different relevances among concepts for each user. For conceptual information extraction from the link-based search engine, a fuzzy concept network is adopted; it can be personalized using the profile information and used to conduct fuzzy information retrieval for each user. By combining personal fuzzy information retrieval and link-based search, the proposed search agent provides high-quality information on the WWW for a user’s query. To show the effectiveness of the proposed search engine, a subjective test with five persons was conducted; the results indicate the usefulness of the proposed system and the possibility of personalized conceptual information extraction.

Kyung-Joong Kim, Sung-Bae Cho
World Wide Web — A Multilingual Language Resource

This paper argues that the World Wide Web can be regarded not only as an information resource but also as a dynamic, multilingual, minimally controlled, easy-to-access and untagged language corpus. To support this idea, we have realized a method that extracts bilingual lexicons from parallel WWW pages by two-stage alignment. The language pairs German, English and Chinese have been selected, but the realization is independent of any natural language, domain or markup.

Fang Li, Huanye Sheng, Wilhelm Weisweber
Query by History Tree Manipulation

This paper describes a novel technique for refinement of search queries: query by history tree manipulation (QBHTM). QBHTM visualizes query histories by means of trees, and allows users to retrieve information by manipulating the trees. The Boolean AND operator is represented by the parent-child relation of nodes in the tree. QBHTM enables users to see a summary of the queries, and to compare them. This advantage is important for understanding characteristics of collections of documents.

Takashi Sakairi, Hiroshi Nomiyama
Web-Based Information Retrieval Using Agent and Ontology

With a notoriously large and ever-increasing number of websites, users face the challenge of searching, filtering and monitoring ever-changing information of astronomical magnitude. This research has engineered a society of agents for: (1) locating a desirable number of URLs, (2) browsing multiple websites simultaneously and (3) monitoring changes in websites. Since the WWW is used across many cultures, text documents may use different terms for the same concept. By using ontological relations of words, (i) a query processing agent (QPA) assists users in selecting a desired number of URLs and (ii) information filtering agents (IFAs) are used to retrieve information. Additionally, information monitoring agents constantly monitor and report changes in websites. Experimental results demonstrated that the QPA can find an appropriate number of URLs and that the IFAs are successful in filtering relevant information in many instances.

Kwang Mong Sim, Pui Tak Wong
Content-Based Sound Retrieval for Web Application

It is both challenging and desirable to be able to retrieve sound files relevant to users’ interests by searching the Internet. Unlike the traditional way of using keywords as input to search for web pages with relevant texts, a query example can be used as input to search for similar sound files. In this paper, content-based technology is applied to automatically retrieve sounds similar to a query example. Features from the time, frequency and coefficient domains are first extracted from each sound file. Next, the Euclidean distances between the feature vectors of the query and the sample audios are measured, and a list in ascending order of distance is returned as the retrieval result. Experiments have been conducted on a sound database with 414 files from 16 classes. Further, we propose to classify the query audio into three classes (speech, music and other sound) using far fewer features, and then to search for relevant files only in that subspace. In this way, retrieval performance can be further increased while saving computing time. Simulations show that our method leads to better results than the Soundfisher software in terms of both retrieval quality and completeness.
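The ranking step can be sketched as follows; this is an editorial illustration, with toy two-dimensional feature vectors and file names invented for the example (the paper's actual features span time, frequency and coefficient domains):

```python
# Sketch of query-by-example retrieval: rank database sounds by the
# Euclidean distance between precomputed feature vectors.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_vec, database):
    """Return file names in ascending order of distance to the query."""
    return sorted(database, key=lambda name: euclidean(query_vec, database[name]))

db = {"bark.wav": [0.9, 0.1], "speech.wav": [0.2, 0.8], "music.wav": [0.5, 0.5]}
print(retrieve([0.85, 0.15], db))  # ['bark.wav', 'music.wav', 'speech.wav']
```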

Chunru Wan, Mingchun Liu, Lipo Wang

Web Agents

Collaborative Filtering Using Principal Component Analysis and Fuzzy Clustering

Automated collaborative filtering is a popular technique for reducing information overload. In this paper, we propose a new approach to collaborative filtering using local principal components. The new method is based on a simultaneous approach to principal component analysis and fuzzy clustering with an incomplete data set including missing values. In the simultaneous approach, we extract local principal components by using a lower-rank approximation of the data matrix. The missing values are predicted using this approximation. In a numerical experiment, we apply the proposed technique to a recommendation system for background designs of word-processor stationery.
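The core idea of predicting missing values from a lower-rank approximation can be sketched as follows; this is an editorial illustration using a plain rank-1 alternating-least-squares fit on the observed entries only, and it omits the paper's fuzzy clustering and local principal components:

```python
# Minimal sketch: complete a ratings matrix with a rank-1 approximation
# fitted only on observed entries (None marks a missing value), then
# read predictions for the missing cells off the approximation.
def rank1_complete(matrix, iters=50):
    m, n = len(matrix), len(matrix[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):  # update row factors against observed entries
            num = sum(matrix[i][j] * v[j] for j in range(n) if matrix[i][j] is not None)
            den = sum(v[j] ** 2 for j in range(n) if matrix[i][j] is not None)
            u[i] = num / den if den else 0.0
        for j in range(n):  # update column factors against observed entries
            num = sum(matrix[i][j] * u[i] for i in range(m) if matrix[i][j] is not None)
            den = sum(u[i] ** 2 for i in range(m) if matrix[i][j] is not None)
            v[j] = num / den if den else 0.0
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# The observed entries follow an exact rank-1 pattern, so the missing
# cell should be predicted close to 1.0.
approx = rank1_complete([[4.0, 2.0], [2.0, None]])
print(round(approx[1][1], 2))
```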

Katsuhiro Honda, Nobukazu Sugiura, Hidetomo Ichihashi, Shoichi Araki
iJADE IWShopper: A New Age of Intelligent Mobile Web Shopping System Based on Fuzzy-Neuro Agent Technology

Owing to the increasing number of mobile e-commerce applications using WAP technology, intelligent agent-based systems are becoming a new trend of development in the new millennium. Traditional web-based agent systems suffer various degrees of deficiency in providing ‘intelligent’ software interfaces and light-weight code that can be implemented on WAP devices. In this paper, we propose a comprehensive intelligent-agent platform known as iJADE (intelligent Java Agent Development Environment) for the development of smart (via the implementation of a ‘Conscious Layer’), compact and highly mobile agent applications. From the implementation point of view, we introduce the iJADE IWShopper, an intelligent mobile web shopping system using fuzzy-neuro agent technology that integrates WAP and Java Servlet technology with our iJADE APIs. Promising results in terms of agent mobility and fuzzy-neural shopping efficiency and effectiveness are obtained.

Raymond S. T. Lee
Wireless Agent Guidance of Remote Mobile Robots: Rough Integral Approach to Sensor Signal Analysis

A rough integral multiple-sensor fusion model for wireless agent guidance of remote mobile robots is presented in this paper. A rough measure of sensor signal values provides a basis for a discrete form of rough integral that offers a means of aggregating sensor values and of estimating, from a sensor signal, how close a robot is to a target region of space. By way of illustration, the actions of a collection of robots are controlled by a wireless system that connects a web agent (called a Guide Agent or GA) written in Java and pairs of Radio Packet Controller (RPC) modules (one attached to a workstation and a second RPC on board a robot). The web GA analyzes robot sensor signals, communicates robot movement commands and assists other web agents in updating parts of a web page that implements a real-time robot traffic control system. This web page displays the current configuration of a society of mobile robots (stopping, direction of movement, avoiding, wandering, mapping, and planning). Only a brief description of the web GA is given in this paper.

J.F. Peters, S. Ramanna, A. Skowron, M. Borkowski
Ontology-Based Information Gathering Agents

A well-known problem of the World Wide Web (WWW) is that it contains an ultra-large amount of information, which often makes it difficult for users to find the information they desire. We propose and implement an architecture of ontology-based information-gathering agents. The ontology is represented by an object-oriented approach with procedural attachments. The information-gathering agents can carry out a search by wrapping search engines of various kinds as their search methods, and can plan and integrate the partial information-gathering results. Users can also express their queries in terms of the shared domain ontology to reduce the conceptual gap between themselves and the agents. We show how information-gathering agents can utilize domain knowledge, integrate information gathered from disparate information resources, and provide much more coherent results for users.

Yi-Jia Chen, Von-Wun Soo
An Effective Conversational Agent with User Modeling Based on Bayesian Network

Conversational agents interact with users through a natural language interface. In Internet space especially, their role as a virtual representative of a web site has recently been highlighted. However, most of them use simple pattern-matching techniques without considering the user’s goal. In this paper, we propose a conversational agent that utilizes a user model constructed on a Bayesian network to produce responses consistent with the user’s goal. The agent is applied as the active guide of a website, which shows that user modeling based on a Bayesian network helps the agent respond to users’ queries appropriately with respect to their goals.

Seung-Ik Lee, Chul Sung, Sung-Bae Cho
Information Fusion for Intelligent Agent-Based Information Gathering

This paper discusses the problem of information fusion for agent-based information-gathering systems. The framework of information gathering in multi-agent environments is presented first, and then a cooperative fusion algorithm is presented for unstructured documents. The performance is evaluated with the traditional measures of precision and recall, which shows that the fusion algorithm is efficient.

Yuefeng Li
An Adaptive Recommendation System with a Coordinator Agent

This paper presents a recommendation system with a coordinator agent that is adaptive to its environment. Recommendation systems that suggest items to users are gaining popularity in the field of electronic commerce. Various methods such as collaborative, content-based, and demographic recommendation have been used to analyze and predict the preferences of users. The performance of each method varies with the characteristics of the application domain. In the proposed system, we introduce a coordinator agent that adaptively changes the weights of each recommendation method to provide a combined recommendation appropriate for the given environment.
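The coordination idea can be sketched as follows; this is an editorial illustration, and the weighting scheme, update rule, and names below are invented for the example (the paper does not specify its adaptation formula here):

```python
# Hypothetical sketch of a coordinator agent: each recommendation method
# scores items; the coordinator combines the scores with weights that it
# shifts toward the methods observed to perform well in the domain.
def combine(scores_by_method, weights):
    """Weighted sum of per-method item scores."""
    combined = {}
    for method, scores in scores_by_method.items():
        for item, s in scores.items():
            combined[item] = combined.get(item, 0.0) + weights[method] * s
    return combined

def adapt(weights, errors, rate=0.5):
    """Shift weight toward methods with lower observed error; renormalize."""
    raw = {m: weights[m] * (1.0 - rate * errors[m]) for m in weights}
    total = sum(raw.values())
    return {m: w / total for m, w in raw.items()}

weights = {"collaborative": 0.5, "content": 0.5}
weights = adapt(weights, {"collaborative": 0.2, "content": 0.8})
scores = combine({"collaborative": {"a": 1.0}, "content": {"a": 0.5, "b": 1.0}}, weights)
print(max(scores, key=scores.get))  # 'a'
```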

Myungeun Lim, Juntae Kim
Interactive Web Page Filtering with Relational Learning

This paper describes a system for collecting Web pages relevant to a particular topic through an interactive approach. Given some relevant pages indicated by a user, the system automatically constructs a set of rules to find new relevant pages. The purpose of the system is to reduce the user’s browsing cost by filtering out non-relevant pages automatically. Such an approach can be useful when users do not know how to describe their requirements to search engines. We describe the representation and the learning algorithm, and also present experiments comparing its performance with a search engine.

Masayuki Okabe, Seiji Yamada
A Fuzzy Rule-Based Agent for Web Retrieval-Filtering

This work proposes an intelligent agent for information retrieval and information filtering in the context of e-learning. The agent is composed of five modules: the Indexing module, the User Profile module, the Information Retrieval module, the Information Filtering module, and the Interface module. The Information Retrieval and Information Filtering modules are based on the same Fuzzy Inference System, which incorporates user profile knowledge.

S. Vrettos, A. Stafylopatis

Web Mining and Farming

Implementation Issues and Paradigms of Visual KDD Systems

We propose an interactive visualization model for knowledge discovery and data mining, with which four visual KDD systems for different targets have been implemented. Implementation issues are considered, and three implementation paradigms, namely the image-based approach, the algorithm-embedded approach, and the interaction-driven approach, are presented and discussed. The issues and paradigms presented in this paper can also be used in the design of Web mining and Web agents.

Jianchao Han, Nick Cercone
Re-Engineering Approach to Build Domain Ontologies

Building ontologies is changing from an art to a science. There have been many attempts to re-engineer the process of building an ontology [4], to categorize the applications that use ontologies for better understanding [23], to study the common features of well-known existing ontologies [18], to provide environments and tools for ontology development [5], and to provide theoretical foundations for ontology [10]. In our experience of building an ontology-based tendering system [15], we faced the problem of building the ontology. Drawing on these efforts and on our experience, we demonstrate how to build ontologies for the tendering-process domain using conceptual graphs. Instead of reinventing the wheel, we reverse-engineer EDI structures to build an abstract ontology for tendering structures. We also use data-mining techniques to build an abstract domain ontology from existing on-line catalogs.

Ahmad Kayed, Robert M. Colomb
Discovery of Emerging Topics Between Communities on WWW

In the real world, discovering new topics covering profitable items and ideas (e.g., mobile phones, global warming, the human genome project) is important and interesting. However, since we cannot completely encode the world surrounding us, it is difficult to detect such topics and their mechanisms in advance. To support this detection, we show a method for revealing the structure of the WWW using the KeyGraph algorithm. Empirical results are reported.

Naohiro Matsumura, Yukio Ohsawa, Mitsuru Ishizuka
Mining Web Logs to Improve Web Caching and Prefetching

Caching and prefetching are well-known strategies for improving the performance of Internet systems. The heart of a caching system is its page replacement policy, which selects the pages to be replaced in a proxy cache when a request arrives. By the same token, the essence of a prefetching algorithm lies in its ability to accurately predict future requests. In this paper, we present a method for caching variable-sized web objects using an n-gram-based prediction of future web requests. Our method mines a prediction model of document access patterns from the web logs and uses the model to extend the well-known GDSF caching policy. In addition, we present a new method to integrate this caching algorithm with a prediction-based prefetching algorithm. We empirically show that system performance is greatly improved using the integrated approach.
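The prediction side can be sketched as follows; this is an editorial illustration using a bigram (n = 2) model over toy log sessions, and it leaves out the GDSF extension and the prefetching integration:

```python
# Sketch of mining an n-gram (here bigram) model from Web logs, then
# predicting the most likely next request as a prefetching candidate.
from collections import Counter, defaultdict

def train_bigrams(sessions):
    """Count, for each page, how often each successor page follows it."""
    model = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, page):
    """Most frequent successor of `page` in the logs, or None."""
    followers = model.get(page)
    return followers.most_common(1)[0][0] if followers else None

logs = [["/", "/news", "/sports"], ["/", "/news", "/weather"], ["/", "/mail"]]
model = train_bigrams(logs)
print(predict_next(model, "/"))  # '/news'
```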

Qiang Yang, Henry Haining Zhang, Ian T. Y. Li, Ye Lu
Mining Crawled Data and Visualizing Discovered Knowledge

This paper presents a challenging project which aims to extend the current features of search and browsing engines. Different methods are integrated to meet the following requirements: (1) Integration of incremental and focused dynamic crawling with meta-search; (2) Free the user from sifting through the long list of documents returned by the search engines; (3) Extract comprehensive patterns and useful knowledge from the documents; (4) Visual-based support to browse dynamic document collections. Finally, a new paradigm is proposed combining the mining and the visualization methods used for search and exploration.

V. Dubois, M. Quafafou, B. Habegger
Categorizing Visitors Dynamically by Fast and Robust Clustering of Access Logs

Clustering plays a central role in segmenting markets. The identification of categories of visitors to a Web site is very useful for improved Web applications. However, the large volumes involved in mining visitation paths demand efficient clustering algorithms that are also resistant to noise and outliers. Moreover, computing the dissimilarity between visitation paths involves sophisticated evaluation and results in attribute vectors of large dimension. We present a randomized, iterative algorithm (a la Expectation Maximization or k-means) based on discrete medoids. We prove that our algorithm converges and that it has subquadratic complexity. We compare it with an implementation of the fastest version of matrix-based clustering for visitor paths and show that our algorithm dramatically outperforms matrix-based methods.
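The medoid-based iteration can be sketched as follows; this is an editorial illustration, not the paper's algorithm: the path dissimilarity (symmetric difference of visited pages) and the non-randomized update are simplifications:

```python
# Sketch of a k-medoids-style iteration (a la k-means, but with discrete
# medoids) over visitation paths.
def dissim(p, q):
    """Illustrative path distance: size of the symmetric difference."""
    return len(set(p) ^ set(q))

def k_medoids(paths, medoids, iters=10):
    for _ in range(iters):
        # assignment step: each path joins its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(paths):
            clusters[min(medoids, key=lambda m: dissim(p, paths[m]))].append(i)
        # update step: each cluster's medoid becomes its most central member
        medoids = [
            min(members, key=lambda i: sum(dissim(paths[i], paths[j]) for j in members))
            for members in clusters.values() if members
        ]
    return sorted(medoids)

paths = [("a", "b"), ("a", "b", "c"), ("x", "y"), ("x", "y", "z")]
print(k_medoids(paths, [0, 2]))  # [0, 2]
```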

Vladimir Estivill-Castro, Jianhua Yang
Online Learning for Web Query Generation: Finding Documents Matching a Minority Concept on the Web

This paper describes an approach for learning to generate web-search queries for collecting documents matching a minority concept. As a case study we use the concept of text documents belonging to Slovenian, a minority natural language on the Web. Individual documents are automatically labeled as relevant or non-relevant using a language filter and the feedback is used to learn what query-lengths and inclusion/exclusion term-selection methods are helpful for finding previously unseen documents in the target language. Our system, CorpusBuilder, learns to select “good” query terms using a variety of term scoring methods. We present empirical results with learning methods that vary the time horizon used when learning from the results of past queries. Our approaches generalize well across several languages regardless of the initial conditions.

Rayid Ghani, Rosie Jones, Dunja Mladenic
A Formal Ontology Discovery from Web Documents

This paper defines a framework of formal ontology that is compatible with the domain-specificity of Web documents and with natural language structures. Furthermore, it investigates how to extract information about the formal ontology of the domain described in Web documents, based on logics, Web technologies such as XML, and natural language processing.

Norihiro Ogata
The Variable Precision Rough Set Model for Web Usage Mining

Web Knowledge Discovery and Data Mining includes discovering and leveraging different kinds of hidden patterns in Web data. In this paper we mine Web user access patterns and classify users using the Variable Precision Rough Set (VPRS) model. Certain user sessions of Web access are positive examples and other sessions are negative examples. Cumulative graphs capture all known positive example sessions and negative example sessions. They are then used to identify the attributes that form an equivalence relation. This equivalence relation is used for the β-probabilistic approximation classification of the VPRS model. An illustrative experiment is presented.

V. Uma Maheswari, Arul Siromoney, K. M. Mehata
Supporting Cooperative Consensus Formation via Ontologies

In this paper, we propose the Discussion Board system to support nebulous communication between users who do not clearly express their intended concepts. This is achieved by indicating the conceptual differences between users, through the users’ direct creation of concepts as ontologies, and by showing other concepts obtained from the World Wide Web.

Kaoru Sumi, Riichiro Mizoguchi

Web-Based Applications

Web-Based Intelligent Call Center for an Intensive Care Unit

This paper presents an intelligent call center, one that is a subsystem of a web-based monitoring system of an intensive care unit. Based on Computer Telephony Integration (CTI) technology, the call center attempts to efficiently and automatically send messages to patients’ families, doctors, and other staff of the hospital via communication media suitable to the occasion. The problem of determining the appropriate media is complicated by the urgency of the message, calling time, and communication media available to the target person. The Dempster-Shafer theory is employed to determine the most suitable communication media in terms of rapid and safe transmission of the message. In addition, the calling process is performed through agent technology without requiring the intervention of the user of the call center. The call center enables message transfer through various communication media in an integrated environment, and relieves the ICU staff from the time-consuming and tedious calling task, which in turn will enable the ICU staff to concentrate better on their primary function, caring for patients.
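Dempster's rule of combination, the mechanism the call center uses to weigh evidence, can be sketched as follows; this is an editorial illustration, and the hypothesis sets (communication media) and mass values are invented for the example:

```python
# Illustrative sketch of Dempster's rule: combine two mass functions
# (e.g., evidence from message urgency and from media availability)
# over sets of candidate communication media.
def combine(m1, m2):
    """Combine mass functions whose focal elements are frozensets."""
    raw, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

urgency = {frozenset({"phone"}): 0.6, frozenset({"phone", "email"}): 0.4}
availability = {frozenset({"phone"}): 0.5, frozenset({"email"}): 0.5}
m = combine(urgency, availability)
best = max(m, key=m.get)
print(sorted(best))  # ['phone']
```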

Kyungsook Han, Dongkyu Lee
Electronic Homework on the WWW

This paper presents the design of an Intelligent Tutoring System (ITS), MathEH, that is a coached problem-solving system, also called “Electronic Homework”. In describing the main modules of the system, we emphasize its novel aspects: (1) the use of Constraint Logic Programming (CLP) as the domain knowledge representation and automatic reasoning mechanism; (2) a method for probability propagation in Bayesian networks that achieves two adversarial requirements, exact probability computation and real-time response; (3) the design decisions about how to deploy MathEH on the WWW.

Chunnian Liu, Lei Zheng, Junzhong Ji, Chengzhong Li, Jingyue Li, Wensheng Yang
The Shopping Gate — Enabling Role- and Preference- Specific E-commerce Shopping Experiences

Customer relationship management, one-to-one marketing, recommendation systems, and real-time click mining are common means to create a personalised customer interaction in today’s e-commerce. However, one major aspect of customisation and adaptation has thus far been neglected: one buyer might assume multiple shopping roles. Searching for a present for an 11-year-old daughter leads to a different preference profile than evaluating workstations for the purchasing department. This paper proposes the shopping gate, a site on the Internet where buyers can create and maintain different roles with specific preference profiles. Before going on an electronic shopping trip, a buyer can pass through the shopping gate, choose the appropriate role and take the corresponding preference profile along to the merchants ‘on the way’, thus entering these shops with a role-specific ‘skin’. Our prototype implementation of the shopping gate server, role representation, and protocol is based on open standards such as the Platform for Privacy Preferences (P3P) and XML Schema.

Markus Stolze, Michael Ströbel
Building Reusable and Adaptable Web-Based Courses

The high costs of producing and maintaining high-quality web-based instructional material require optimizing its use, by means of methods and tools that allow, on the one hand, a variety of courses to be obtained dynamically from a set of contents and pedagogical objectives and, on the other, personalised educational paths to be built from a course and a learner’s needs. On this basis, we present an approach to the design and development of educational content aimed at tackling, in a uniform context, both the reusability and the adaptability of web-based instructional material. The feasibility of our idea has been analysed via an XML-based application oriented to the training of adults (mathematics teachers and university students) on Game Theory. This application is an operative example of the power of XML technology for building educational web systems that satisfy both the author’s need to reuse material and the learner’s need for effective educational resources.

Paola Forcheri, Maria Teresa Molfino, Stefano Moretti, Alfonso Quarati
Group Learning Support System for Software Engineering Education— Web-Based Collaboration Support Between the Teacher Side and the Student Groups—

Software development is knowledge-intensive work. In a software project, various types of problems need to be resolved. One approach to acquiring the necessary knowledge for software development in university education is for the students to experience project-based software development. For this kind of education to succeed, collaboration is important, both between the teacher side (teacher and TAs) and the student groups and within the group members. This paper proposes a collaborative software development learning environment that focuses in particular on supporting collaboration between the student groups and the teacher side.

Atsuo Hazeyama, Akiko Nakako, Sachiko Nakajima, Keiji Osada
The Intelligent Electronic Shopping System Based on Bayesian Customer Modeling

The rapid development of computer network technology has brought electronic shopping into fashion. In this paper, we first analyze the common electronic shopping system and discuss the bottlenecks that restrict its development. To solve these problems, individualized product information should be provided to customers. We then propose a Bayesian customer model and apply it in our intelligent electronic shopping system, which can predict customers’ requirements and actively provide them with individualized product information.

Junzhong Ji, Lei Zheng, Chunnian Liu
Leveraging a Web-Aware Self-Organization Map Tool for Clustering and Visualization

The self-organization map (SOM) neural network has been recognized as a successful paradigm for clustering and visualization in a large variety of real-world applications. A number of useful stand-alone SOM tools exist; however, they cannot be adapted to the new-generation web environment. In addition, the different user interfaces required for operation and the heterogeneity of the platforms the tools run on limit their appeal. In this paper, we propose a web-aware SOM tool which integrates the computationally powerful SOM_PAK and the vivid Nenet tools to augment the advantages of each. The proposed SOM tool is capable of delimiting the desired clusters by adopting a two-level network topology and silhouette coefficients.

Sheng-Tun Li
Experiencing NetPeas: Another Way of Learning

This study implements networked peer assessment for design work and, in doing so, develops a networked peer assessment model as well. Based on the proposed model, a networked peer assessment system is designed as the main framework, with an optional Vee diagram used as its interface to facilitate design work. In this system, students turn in their homework via a friendly web browser. Students assess each other’s homework by offering comments through the Internet. Students then reflect on and modify their homework based on those comments. This procedure is repeated for k (k≥1) consecutive rounds, according to the schedule. In this process, students act as adaptive learners, authors, and reviewers. This learning model allows students to further develop their critical thinking and problem-solving skills. Results revealed that the networked peer assessment model helped students progress continuously when learning to do design work.

Eric Zhi-Feng Liu, Sunny S. J. Lin, Shyan-Ming Yuan
ITMS: Individualized Teaching Material System— Web-Based Exploratory Learning Support System by Adaptive Knowledge Integration—

A problem in Web-based exploratory learning is the learning impasse caused by the content of a page. To enable learners to avoid such impasses, a system must adapt the content. We developed a Web-based exploratory learning support system named ITMS, which belongs to the category of Web-based AES and has the following features: (1) it deals with the open Web, (2) it adaptively integrates the knowledge that learners have acquired into an arbitrary page, and (3) it infers their latest knowledge states from their knowledge references.

Hiroyuki Mitsuhara, Youji Ochi, Yoneo Yano
An Intelligent Sales Assistant for Configurable Products

Some recent proposals for Web-based applications aim to provide advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model, which uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finally, the paper illustrates the description with an example application in the field of photography equipment.

Martin Molina
Reconciling of Disagreeing Data in Web-Based Distributed Systems Using Consensus Methods

In a Web-based distributed system, reconciling disagreeing data is needed when a data conflict exists. A data conflict is a situation in which some sites of the system generate and store different versions of data referring to the same subject (a problem solution, an event scenario, etc.). To solve this problem, one final data version, called a consensus of the given versions, should be determined. In this paper we present a consensus system for representing data conflicts, the definition of consensus, and the postulates for consensus choice functions together with their analysis.

Ngoc Thanh Nguyen
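The consensus choice functions themselves are defined in the paper; as a generic illustration only (not the paper's system), one common consensus choice for set-valued data versions keeps each item that appears in at least half of the versions, which minimizes the total symmetric-difference distance from the consensus to the given versions:

```python
from collections import Counter

def consensus(versions):
    """Majority-vote consensus over set-valued versions.

    Including an item appearing in k of n versions costs n - k
    (symmetric differences with the versions lacking it); excluding it
    costs k.  Including it whenever k >= n/2 therefore minimizes the
    total symmetric-difference distance to all versions.
    """
    counts = Counter(item for v in versions for item in set(v))
    threshold = len(versions) / 2
    return {item for item, c in counts.items() if c >= threshold}
```

For example, three sites reporting {"a", "b"}, {"a", "c"}, and {"a", "b", "d"} about the same event would be reconciled to {"a", "b"}.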
Web Based Digital Resource Library Tracing Author’s Quotation

In recent years, research on Web-based learning environments has been increasing. In the classroom, some teachers prepare Web-based teaching materials and introduce them into their lessons. Moreover, since many Web contents are published on the Internet, some teachers use such contents as teaching materials. However, a teacher may not be aware of the existence of teaching-material resources suitable for his or her needs; likewise, a student is often unable to find suitable Web resources. We focus on a key feature of the Web: easy information exchange in a distributed environment. We developed a digital resource library that supports the sharing of Web teaching materials by tracing authors’ quotations. In this paper, we consider the usage of Web teaching materials from the viewpoints of a teacher and a student. We then describe the outline of our approach and a prototype system.

Youji Ochi, Yoneo Yano, Riko Wakita
Backmatter
Metadata
Title
Web Intelligence: Research and Development
Edited by
Ning Zhong
Yiyu Yao
Jiming Liu
Setsuo Ohsuga
Copyright year
2001
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-45490-8
Print ISBN
978-3-540-42730-8
DOI
https://doi.org/10.1007/3-540-45490-X