
Information Integration and Web Intelligence

27th International Conference, iiWAS 2025, Matsue, Japan, December 8–10, 2025, Proceedings

  • 2026
  • Book

About this book

This book constitutes the refereed proceedings of the 27th International Conference on Information Integration and Web Intelligence, iiWAS 2025, held in Matsue, Japan, during December 8–10, 2025. The 23 full papers, 12 short papers and 1 keynote paper included in this book were carefully reviewed and selected from 79 submissions. They were organized in topical sections as follows: Keynote; Foundations of AI and Data Intelligence; Knowledge, Reasoning, and Human Interaction; Emerging Technologies and Applied Innovation; Creative and Generative AI.

Table of Contents

  1. Creative and Generative AI

    1. Frontmatter

    2. Generating Distinctive Recipe Names via Relative Feature Comparison in Recipe Set

      Maoto Watanabe, Yoshiyuki Shoji
      Abstract
      This paper proposes a method for generating expressive and distinctive recipe names by identifying each recipe’s unique features relative to others in the same collection. For example, when most recipes boil pasta in a pot, our method may generate a descriptive recipe name like “One-Pan Carbonara” for a recipe that completes the dish using a single frying pan only. The method detects ingredients, cooking procedures, and utensils that are statistical outliers, either significantly more or less frequent compared to the rest of the recipe set. These distinguishing features are then passed to a fine-tuned large language model, which generates the final recipe name. A user study showed that the proposed method produces accurate and appealing names that effectively highlight the distinctiveness of each recipe.
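      The relative-feature idea sketched in this abstract can be illustrated with a minimal example, assuming recipes are represented as sets of feature strings; the threshold and representation are hypothetical, not the authors' implementation:

      ```python
      from collections import Counter

      def distinctive_features(target, corpus, ratio=0.5):
          """Flag features of `target` (ingredients, utensils, step
          keywords) that appear in fewer than `ratio` of the recipes
          in `corpus`. Illustrative sketch; the paper's statistical
          outlier test may differ."""
          counts = Counter(f for recipe in corpus for f in set(recipe))
          n = len(corpus)
          return [f for f in set(target) if counts[f] / n < ratio]
      ```

      For instance, in a set where most recipes use a pot, a recipe whose only unusual feature is a frying pan would have that feature surfaced and passed on to the name-generation model.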
    3. Measuring Shape Unexpectedness of Exhibits Based on Similarity and Outlier Detection

      Maho Kinoshita, Wakana Kuwata, Hiroaki Ohshima
      Abstract
      This study proposes a method for finding museum exhibits with visually unexpected shapes, aiming to enhance visitor engagement and memory in museums. The proposed method measures shape-based unexpectedness by combining shape similarity computation with outlier detection. An exhibit is considered unexpected if it is identified as a shape-based outlier. To compute shape similarity, images are first converted into feature vectors using either a Vision Transformer (ViT) or a Convolutional Neural Network (CNN). The study also investigates how converting color images into monochrome or line drawings affects the measurement of shape unexpectedness. For outlier detection, two methods, DBSCAN and a PageRank-based approach, are evaluated. Experiments were conducted using images of exhibits from the National Museum of Ethnology, Japan. Among all tested combinations, the pairing of ConvNeXt, a type of CNN, with the PageRank-based approach achieved the highest performance with an nDCG@3 of 0.794.
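      The similarity-plus-outlier scoring described above can be approximated with a toy sketch: treat each exhibit as a feature vector and score it by its average cosine dissimilarity to the rest. This is a simplification for illustration only, not the DBSCAN or PageRank detectors the paper actually evaluates:

      ```python
      import math

      def cosine(a, b):
          """Cosine similarity between two equal-length vectors."""
          dot = sum(x * y for x, y in zip(a, b))
          na = math.sqrt(sum(x * x for x in a))
          nb = math.sqrt(sum(y * y for y in b))
          return dot / (na * nb)

      def unexpectedness(vectors):
          """Score each exhibit vector by mean dissimilarity to all
          other exhibits; the highest score marks the shape outlier."""
          scores = []
          for i, v in enumerate(vectors):
              sims = [cosine(v, w) for j, w in enumerate(vectors) if j != i]
              scores.append(1 - sum(sims) / len(sims))
          return scores
      ```

      With three near-identical shape vectors and one orthogonal one, the orthogonal exhibit receives the highest unexpectedness score.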
    4. Automatic Facial Mist Application Skincare Based on Skin Condition Analysis

      Natsumi Matsui, Ayumi Ohnishi, Ayaka Uyama, Teizo Sugino, Tsutomu Terada, Masahiko Tsukamoto
      Abstract
      Skin condition changes throughout the day, yet current skincare does not adapt to these changes. Adaptive skincare requires sensing skin condition and applying appropriate products, ideally in an automatic way even during busy periods. This study proposes a new skincare concept that measures the skin condition and automatically applies lotion to maintain a stable condition. We developed a demo system using a glasses-type sensor device that assesses skin condition and applies a suitable facial mist. In an evaluation experiment, participants used the demo system and completed a questionnaire. Results showed that participants understood and valued the new skincare style, and based on these findings we discussed a possible system design to realize the concept.
    5. Enhancing Algorithms with LLMs: A Case Study

      Yashar Talebirad, Amirhossein Nadiri, Osmar R. Zaïane, Christine Largeron
      Abstract
      This paper explores the potential of Large Language Models (LLMs) to enhance community detection algorithms, with a focus on the SIWO (Strong In, Weak Out) algorithm. By integrating LLMs into the algorithm development process, focusing on their multi-disciplinary knowledge as a potential advantage over human expertise, we demonstrate how LLMs (with the possible oversight of a human expert) can generate innovative algorithm modifications that lead to enhanced performance. Our study reveals substantial reductions in execution times by more than 50% for SIWO when utilizing these modifications. Motivated by these promising results within the domain of Social Network Analysis, we briefly introduce the Algorithmic Enhancement Framework (AEF), designed to extend these methodologies for broader algorithm enhancement. AEF employs the collaborative use of LLMs to generate and refine solutions iteratively, offering a novel foundational approach for incorporating LLM capabilities for the refinement of algorithms across a broad range of computational domains.
    6. Automated Instruction Generation via Alternating Evaluation and Creation with LLMs

      Ryo Tanaka, Yu Suzuki
      Abstract
      On crowdsourcing platforms, the quality of collected data depends on the clarity of instructions, but requesters struggle to create instructions that capture their own implicit criteria. To address this issue, we propose a novel framework that uses two Large Language Models (LLMs) – a Creator and an Evaluator – to automatically explore the space of possible instructions. In this iterative process, the Creator LLM generates diverse instruction candidates, and the Evaluator LLM, acting as a proxy for human workers, assesses their performance on a task, providing a fitness score. Our experiments show that this exploratory approach is effective for discovering high-quality instructions, even if the process does not show monotonic improvement. Using the best-performing instruction created by our method with gemma3, we achieved 5.4% higher accuracy and 0.035 lower RMSE than when gemma3 used an instruction created by a requester.
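      The Creator/Evaluator alternation can be sketched as a generic search loop. Here `create` and `evaluate` are stand-ins for the two LLM calls; the function names, the greedy keep-the-best policy, and the fixed round count are illustrative assumptions, not the paper's framework:

      ```python
      def optimize_instruction(create, evaluate, seed, rounds=5):
          """Alternate a Creator (proposes candidate instructions from
          the current best) with an Evaluator (returns a fitness score),
          keeping the best-scoring instruction seen so far."""
          best, best_score = seed, evaluate(seed)
          for _ in range(rounds):
              for candidate in create(best):
                  score = evaluate(candidate)
                  if score > best_score:
                      best, best_score = candidate, score
          return best, best_score
      ```

      In practice both callables would wrap LLM API calls; with toy stand-ins (e.g. `evaluate=len`) the loop's behavior is easy to verify.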
    7. Two-Stage Fine-Tuning for Dialogue Generation with Small Community Prominent Leaders’ Philosophies

      Tetsuya Kitahata, Kazuhiro Seki, Akiyo Nadamoto
      Abstract
      Recent advances in large language models (LLMs) have enabled the replication of speech patterns and philosophies of prominent historical figures. However, generating dialogue that reflects the philosophies of prominent leaders in small communities, such as founders of local universities or small businesses, remains a challenge due to the limited availability of public data. Nevertheless, the philosophies of such leaders often serve as important educational and behavioral foundations for members of these communities. In this study, we propose a dialogue generation method that enables the sharing of a prominent local leader’s philosophy through natural conversation. Specifically, we classify sentences left behind by the leader—such as those in books or diaries—into four types: statements, thoughts, actions, and facts. We then perform two-stage fine-tuning using the statements and thoughts to generate dialogues that faithfully reflect the leader’s values and philosophy.
    8. Can Stable Diffusion Recommend Outfits? Outfit Recommendation from Fashion Item Images via Generative AI

      Yuma Oe, Yoshiyuki Shoji
      Abstract
      This paper proposes a method for generating and recommending fashionable outfit images based on a given image of a fashion item. The system uses image generation AI, specifically Stable Diffusion, to produce images of a person wearing the input item, leveraging inpainting techniques to complete the surrounding area. Two models were prepared: a fashionable model fine-tuned on highly rated outfit images from social media, and a normal model without fine-tuning. Both models generated multiple images featuring the input item, and object detection techniques (YOLO and CLIP) were used to identify and count frequently appearing items. Items that appeared more often in the outputs of the fashionable model were prioritized, and the corresponding images were ranked and presented as outfit recommendations. A subject experiment was conducted to evaluate the system, demonstrating that the proposed method can recommend stylish outfits and reflect query items more effectively than metadata-based recommendations.
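      The frequency-comparison step described above, counting which detected items appear more often in the fine-tuned model's outputs than in the baseline's, might be sketched as follows. The label lists stand in for YOLO/CLIP detections, and the simple count difference is an illustrative assumption, not the paper's exact ranking criterion:

      ```python
      from collections import Counter

      def rank_items(fashionable_detections, normal_detections):
          """Rank detected item labels by how much more often they
          appear in the fine-tuned ('fashionable') model's generated
          outfits than in the baseline model's."""
          f = Counter(fashionable_detections)
          n = Counter(normal_detections)
          labels = set(f) | set(n)
          return sorted(labels, key=lambda lbl: f[lbl] - n[lbl], reverse=True)
      ```

      Items ranked highest by this difference would then have their corresponding generated images surfaced as recommendations.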
    9. Effects of Image Samples on In-Context Learning of Multimodal Large Language Models

      Tomoya Ikeda, Shuhei Yamamoto
      Abstract
      Recently, multimodal large language models (LLMs) have gained attention due to their ability to handle various data types such as text, images, and audio. These models are useful for diverse tasks, especially through in-context learning, where task-specific performance can be improved by including a few examples in the prompt. However, effective methods for selecting few-shot samples in multimodal LLMs remain unclear. This study explores the impact of image samples on one-shot in-context learning using a violent image classification task. We investigate what kinds of image examples with associated labels help improve classification performance. Experimental results demonstrate that image samples can significantly affect the model’s performance, as shown by comparing zero-shot and one-shot settings. Furthermore, we analyze characteristics of image samples that lead to better or worse classification results. Our findings clarify the role of image examples in enhancing multimodal LLM performance in one-shot in-context learning scenarios.
    10. AI-Driven Web Game Development with Gemini 2.5 Pro

      Elena Popp, Helmut Hlavacs, Werner Winiwarter
      Abstract
      The rapid advancement of Large Language Models (LLMs) raises fundamental questions about the limits of AI in software development. This paper investigates whether a complete and playable multi-platform web-based game can be developed using only natural language prompts with an advanced LLM, Gemini 2.5 Pro. To explore this, the study compares two opposing development methodologies without the use of a traditional game engine.
      The study concludes that creating a playable game with prompts alone is possible, but only through a structured, iterative process. This positions the AI not as an autonomous developer, but as a powerful co-pilot that requires skilled, step-by-step guidance from a human expert.
  2. Backmatter

Title
Information Integration and Web Intelligence
Editors
Eric Pardede
Qiang Ma
Gabriele Kotsis
Toshiyuki Amagasa
Akiyo Nadamoto
Ismail Khalil
Copyright Year
2026
Electronic ISBN
978-3-032-11976-6
Print ISBN
978-3-032-11975-9
DOI
https://doi.org/10.1007/978-3-032-11976-6

PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.
