
ICT for Global Innovations and Solutions

International Conference, ICGIS 2025, Virtual Event, April 26-27, 2025, Proceedings

  • 2026
  • Book

About this book

This ACSAR volume constitutes the refereed proceedings of the International Conference ICGIS 2025, held as a virtual event on April 26-27, 2025. ICGIS 2025 emphasized innovation in interdisciplinary research and application, presenting transformative ideas across diverse fields. The volume comprises 49 full papers selected from numerous submissions. The event featured compelling discussions across a range of areas - artificial intelligence, smart infrastructure, climate adaptation, renewable energy, cybersecurity, digital health, and data-driven policy - united by a shared vision: innovation for a more sustainable and secure future.

Table of Contents

Frontmatter
Advancing Vision-Language Models with Generative AI

Generative AI within large vision-language models (LVLMs) has revolutionized multimodal learning, enabling machines to understand and generate visual content from textual descriptions with unprecedented accuracy. This paper explores state-of-the-art advancements in LVLMs, focusing on prominent models such as CLIP for cross-modal retrieval, Flamingo for few-shot video understanding, BLIP for self-supervised learning, CoCa for integrating contrastive and generative learning, and X-CLIP for enhancing video-text retrieval. These models demonstrate the flexibility and scalability of LVLMs across a variety of applications. Through an evaluation based on metrics such as image generation quality, perceptual loss, and CLIP score, we provide insights into their capabilities, limitations, and opportunities for future enhancement. As generative AI continues to evolve, this analysis underscores the importance of developing scalable, efficient multimodal models capable of addressing real-world challenges with minimal fine-tuning.

Rahul Raja, Arpita Vats
Aërlink: Blockchain-Enabled Supply-Chain Transparency for Counterfeit Prevention

In an era where counterfeit pharmaceuticals and healthcare equipment pose serious risks to public safety, ensuring product authenticity is paramount. This paper presents Aërlink, a blockchain-based supply-chain management system designed to prevent counterfeit products by creating a decentralized, tamper-proof digital identity for each item. Leveraging smart contracts, Aërlink enables real-time product authentication through a secure and transparent ledger, empowering consumers with verifiable information about the products they purchase. Experimental evaluations demonstrate that the system offers a reliable, scalable solution to combat counterfeit trade in diverse industries, ultimately aiming to safeguard lives and bolster consumer trust.

Dewank Pant, Manan Wason, Akshat Joshi, Shruti Lohani
Revolutionizing Shift-Left Testing Through an Agentic AI Framework: Enhancing Software Quality and Digital Trust

In today’s fast-paced software development world, where AI agents and large language models (LLMs) write or assist in code development, methods such as Agile and DevOps demand faster, more efficient testing performed early in the lifecycle - commonly known as shift-left testing - to deliver reliable, high-quality software to market sooner. In the traditional software development life cycle (SDLC), testing comes late in the process, leading to late and costly bug fixing; shift-left testing instead identifies and mitigates defects early. In this paper we explore the potential of Agentic AI in the shift-left testing approach: a new wave of artificial intelligence frameworks designed to automate and revolutionize shift-left testing. By employing AI, generative AI, and machine learning (ML) techniques, these frameworks learn directly from human testers, using trained classifiers to recognize application states, natural language processing (NLP) models to learn and automate test workflows, and adaptable test-case generation models. The paper also demonstrates real-world benefits experienced by enterprises adopting Agentic AI and shift-left testing strategies, which not only reduce costs and delays but also provide a next-generation software testing approach and framework. Finally, the paper highlights areas for future research and offers practical insights into optimizing Agentic AI for more effective, efficient, and responsible AI-driven shift-left testing.

Gaurav Sharma
AI in Engineering

Artificial intelligence is revolutionizing engineering practices by fundamentally transforming the software development lifecycle. By automating requirements gathering and enhancing testing, deployment, and maintenance, AI-driven tools are redefining how software is built, delivered, and evolved. This paper explores the deepening integration of intelligent systems in software engineering, demonstrating how machine intelligence can boost productivity, reduce defects, hasten timelines, and facilitate continuous improvement. By tapping natural language processing, machine learning models, and predictive analytics, engineering teams across industries are updating legacy frameworks and embracing digital evolution. Drawing from authentic examples, this paper inspects AI's impact on each phase of the lifecycle, outlines architectural blueprints and platforms, and discusses the broader implications for future-oriented practices. Through case reports and visual workflows, it highlights how AI is shaping the next generation of scalable, adaptive, and efficient software ecosystems.

Nandhakumar Raju, Fardin Quazi
Compliance Automation for Mobile Payment Systems: Ensuring Adherence to Regulatory Standards

This comprehensive article examines the implementation of compliance automation systems in mobile payment platforms, focusing on regulatory adherence and technological solutions. The mobile payments industry is experiencing unprecedented growth, with global revenue reaching $2.1 trillion in 2021 and projected to grow at 7.3% CAGR through 2026, creating complex regulatory challenges that demand sophisticated automation solutions. Through detailed technical analysis and performance assessment of multiple implementation approaches, this article evaluates the effectiveness of AI-driven monitoring, blockchain verification, multi-cloud architectures, and advanced analytics in maintaining regulatory compliance. Our methodology combines quantitative performance analysis across 35 financial institutions with in-depth technical architecture reviews and structured case studies, providing a robust empirical foundation. The findings demonstrate that effective compliance automation reduces operational costs by 35%, improves regulatory reporting accuracy by 40%, and enables organizations to launch new products 40% faster than those relying on manual processes. Beyond operational efficiency, the research reveals the strategic value of compliance automation as organizations with mature capabilities achieve 28% higher customer retention rates and 15% lower customer acquisition costs. This article provides a comprehensive framework for technical implementation, addresses key challenges in scalability and integration, and offers a structured cost-benefit analysis to guide organizational investment decisions. By balancing technical depth with practical implementation guidance, this research contributes to advancing both theoretical understanding and practical application of compliance automation in the digital payment ecosystem.

Prabhu Govindasamy Varadaraj
Enhancing Network Security: Anomaly Detection Using Generalized Isolation Forest and Explainable AI

Network infrastructures face an onslaught of highly sophisticated and stealthy cyber threats in an increasingly complex and connected digital world. Traditional rule-based detection systems rarely succeed in identifying novel or evolving attack vectors, creating the need for intelligent, adaptive anomaly detection frameworks. In this paper, we present a robust and scalable solution for network-based anomaly detection using the Explainable Generalized Isolation Forest (EGIF) algorithm, which is well suited to identifying outliers in the high-dimensional, imbalanced datasets that represent network traffic. The Generalized Isolation Forest (GIF) enhances classical Isolation Forest methods by introducing more advanced candidate-scoring mechanisms that better accommodate subtle anomalies and non-homogeneous data distributions. EGIF extends it with a model-agnostic interpretability technique capable of providing justifications, addressing cybersecurity's pressing need for explainability. EGIF gives security analysts actionable insight into feature-wise contributions to anomaly scores, facilitating rapid Root Cause Analysis (RCA), making incident-response workflows more efficient, and satisfying explanation requirements mandated by regulation. In addition, this paper presents a cloud-native, real-time architecture hosted on Google Cloud Platform (GCP), deployed through Google Kubernetes Engine (GKE) yet cloud-agnostic at the application level. The architecture employs open-source technologies such as Apache Kafka, Apache Spark, Apache Airflow, and Prometheus/Grafana to ensure scalable data ingestion, processing, orchestration, and monitoring. This combination guarantees not only high performance and resilience but also real-time detection of incidents with interpretable reports. The proposed system thus represents a major enhancement to the security posture and trustworthiness of automated network monitoring in the highly dynamic environments characteristic of modern enterprises.

Karan Alang, Anirudh Khanna, Suryaprakash Nalluri
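The isolation-based scoring this abstract builds on can be illustrated with a minimal, stdlib-only sketch. This toy single-split ensemble is not the EGIF algorithm itself (which adds generalized candidate scoring and explainability), only the underlying idea that anomalies are isolated by random splits in fewer steps than normal points:

```python
import math
import random

def _path_length(point, data, depth=0, max_depth=8):
    """Recursively isolate `point` with random axis-parallel splits;
    shorter paths mean the point is easier to isolate (likely anomalous)."""
    if depth >= max_depth or len(data) <= 1:
        return depth
    dim = random.randrange(len(point))
    lo = min(row[dim] for row in data)
    hi = max(row[dim] for row in data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # Keep only the points that fall on the same side of the split as `point`.
    side = [row for row in data if (row[dim] < split) == (point[dim] < split)]
    return _path_length(point, side, depth + 1, max_depth)

def anomaly_scores(data, n_trees=50, seed=7):
    """Average path length over a small ensemble, normalized to (0, 1];
    scores near 1 flag outliers (shorter average isolation paths)."""
    random.seed(seed)
    n = len(data)
    # Expected path length of an unsuccessful BST search, the usual normalizer.
    c = 2 * (math.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n
    scores = []
    for point in data:
        avg = sum(_path_length(point, data) for _ in range(n_trees)) / n_trees
        scores.append(2 ** (-avg / c))
    return scores

# Dense cluster of "normal" traffic features plus one obvious outlier.
traffic = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 0.95], [9.0, 9.5]]
scores = anomaly_scores(traffic)
print(max(range(len(scores)), key=scores.__getitem__))  # index of the most anomalous point
```

In a real deployment the ensemble would be trained on historical traffic and the per-feature split statistics would feed the explainability layer the paper describes.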
Multi-Factor Authentication With Non-Intrusive Confidence Engine (NICE)

Information security is an essential aspect of resources and systems on the internet. Authentication systems must provide access to legitimate users while protecting against unauthorized access. Traditional authentication mechanisms often rely on password-based methods, with additional layers for enhanced security. This paper introduces NICE-MFA, a Multi-Factor Authentication system that leverages hardware, software, and network-based contexts from users’ registered devices to generate a confidence score. Our approach enhances security without sacrificing usability by minimizing the interruptions users face with traditional MFA systems. By collecting and analyzing multiple contextual factors across different devices, NICE-MFA creates a non-intrusive authentication mechanism that balances security and usability. The system can serve as a complementary layer to existing authentication solutions, enhancing security while maintaining a seamless user experience.

Manan Wason, Dewank Pant
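The confidence-score idea can be sketched as a weighted comparison of observed contextual signals against a registered-device baseline. The signal names, weights, and threshold below are hypothetical illustrations, not the paper's actual engine:

```python
def confidence_score(observed, baseline, weights):
    """Weighted fraction of contextual signals that match the user's
    registered-device baseline; 1.0 means full agreement."""
    total = sum(weights.values())
    matched = sum(w for k, w in weights.items()
                  if observed.get(k) == baseline.get(k))
    return matched / total

# Hypothetical hardware/software/network signals with illustrative weights.
baseline = {"device_id": "abc123", "os": "android14", "network": "home-wifi"}
weights  = {"device_id": 0.5, "os": 0.2, "network": 0.3}

trusted = confidence_score(baseline, baseline, weights)
roaming = confidence_score({**baseline, "network": "cafe-wifi"},
                           baseline, weights)

STEP_UP_THRESHOLD = 0.8  # below this, fall back to an explicit MFA prompt
print(trusted >= STEP_UP_THRESHOLD, roaming >= STEP_UP_THRESHOLD)
```

The non-intrusive property comes from the threshold check: a high-confidence session proceeds silently, while a low-confidence one triggers a conventional second factor.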
AI-Enhanced Orchestration in Hybrid Cloud Enterprise Integration: Transforming Enterprise Data Flows

Hybrid cloud enterprise integration presents a formidable challenge as organizations strive to harmonize legacy systems with modern, cloud-native applications. This article investigates the potential of AI-enhanced orchestration to dynamically manage integration workflows across such heterogeneous environments. By embedding artificial intelligence within orchestration platforms, enterprises can achieve real-time optimization of data flows, resource allocation, and security compliance, transforming static integration approaches into adaptive, self-healing systems. The article focuses on three key dimensions: dynamic resource allocation, real-time data flow management, and enhanced security monitoring. Traditional orchestration frameworks often struggle to react to fluctuating workloads and unpredictable network conditions. In contrast, AI algorithms analyze historical and real-time operational metrics to predict bottlenecks and proactively adjust resources across serverless functions, containerized microservices, and legacy infrastructures. AI-enhanced orchestration also improves fault tolerance by continuously monitoring integration pipelines, detecting anomalies, and initiating automated recovery processes. Various implementation approaches are examined, including augmenting existing platforms, leveraging cloud-native frameworks, and developing custom AI integration layers, along with challenges organizations face in the adoption and potential future directions of this transformative technology.

Tejaswi Bharadwaj Katta
AI-Driven DevOps Automation for Cloud-Native Application Modernization

Cloud-native architectures demand rapid, continuous delivery, stretching traditional DevOps workflows. Artificial intelligence (AI) and machine learning (ML) now supply the predictive insight and automation required to meet this pace. This paper introduces a unified AI-driven DevOps framework that optimizes the software-development lifecycle (SDLC) for cloud-native applications. We summarise recent advances in AI-enhanced CI/CD, predictive observability, proactive DevSecOps security, and self-healing infrastructure, and examine practical deployments across Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Finally, we outline future research directions—explainable and generative AI, federated learning, and responsible AI governance—charting a path toward sustainable, resilient, and secure cloud-native modernization.

Akshay Mittal
An Efficient KServe-Based Deep Learning Pipeline for Lung Cancer Detection with Enhanced Observability

Even today, lung cancer continues to be an important health problem around the world, underlining the need for effective detection methods crucial for increasing the survival rate. Previous work on deep-learning-based lung cancer detection from CT scans has been constrained by technical limitations, so this work focuses on achieving model efficiency, observability, scalability, and overall CT scan computation scalability. The architecture is built on the Keras framework with automatic differentiation, supports GPU-based model training, handles deployment and inference through KServe in a Kubernetes cluster, exports models as compressed TensorFlow SavedModels, and serves them remotely via REST/gRPC APIs with auto-scaling and version-controlled model inference. Moreover, an observability layer built with Prometheus, Grafana, Jaeger, and Loki tracks system performance, resource utilization, and latency percentiles (P50, P90, P99) across highly scalable microservices. The presented infrastructure helps bridge experimental deep learning research and real-life clinical practice with the auditability and explainable operational visibility crucial for AI adoption in healthcare.

Anupama Babu, Sudheep Elayidom, M. S. Athiramol, Sheenamol Yousaf, Midhun P. Mathew, K. M. Abubeker
AI-Empowered Healthcare Insurance Fraud Detection: An Analysis, Architecture, and Future Prospects

Healthcare insurance fraud is a growing concern, costing insurers billions of dollars and straining healthcare systems worldwide. Traditional methods, like manual claim reviews and rule-based detection, struggle to keep up with the increasingly sophisticated tactics used by fraudsters. This paper explores how artificial intelligence (AI) is transforming fraud detection, offering smarter and faster ways to identify suspicious claims. By leveraging machine learning and deep learning, AI can analyze vast amounts of data, spot hidden patterns, and adapt to new fraud schemes more effectively than ever before. We examine real-world examples of AI-driven fraud detection, highlighting its successes while also addressing key challenges such as data privacy, ethical concerns, and algorithmic bias. Looking ahead, advancements in AI, blockchain, and predictive analytics promise even greater efficiency in fraud prevention. This study provides insights for researchers and industry professionals seeking to harness AI’s potential to safeguard healthcare insurance from fraud.

Deven Yadav, Vijaykumar Viradia, Harikrishnan Muthukrishnan
Policy as Code: A Paradigm Shift in Infrastructure Security and Governance

Policy as Code represents a transformative approach to infrastructure security and governance in modern cloud environments. By codifying security and compliance policies as machine-readable code, organizations can automate enforcement throughout the development lifecycle. This paradigm shift addresses the velocity gap between rapid development cycles and traditionally slower security processes, enabling consistent policy enforcement without sacrificing agility. The integration with CI/CD pipelines allows for “shifting left” security considerations, identifying and remediating issues before they reach production. Various implementation approaches have emerged, from open-source tools like Open Policy Agent to cloud-native solutions, each with distinct advantages. While implementation challenges exist, including policy language complexity and organizational alignment, established best practices help organizations navigate these hurdles. As infrastructure continues to evolve, Policy as Code emerges as an essential strategy for maintaining security and compliance in dynamic, cloud-native environments, transforming governance from a perceived roadblock into an enabler of innovation.

Sarathe Krisshnan Jutoo Vijayaraghavan
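Policy as Code tools such as Open Policy Agent express rules in their own language (Rego); as a language-neutral sketch, the same idea of machine-readable policies evaluated against a declared infrastructure plan can be shown in plain Python. The policy names and resource fields here are invented for illustration:

```python
# Each policy is (name, predicate, message); any violation blocks the deploy.
POLICIES = [
    ("s3-no-public-read",
     lambda r: not (r["type"] == "s3_bucket" and r.get("acl") == "public-read"),
     "S3 buckets must not be publicly readable"),
    ("encryption-required",
     lambda r: r["type"] != "s3_bucket" or r.get("encrypted", False),
     "S3 buckets must enable encryption at rest"),
]

def evaluate(resources):
    """Return (allowed, violations) for a declared infrastructure plan."""
    violations = [(name, msg, r["name"])
                  for r in resources
                  for name, check, msg in POLICIES
                  if not check(r)]
    return (not violations, violations)

plan = [
    {"name": "logs",   "type": "s3_bucket", "acl": "private",     "encrypted": True},
    {"name": "assets", "type": "s3_bucket", "acl": "public-read", "encrypted": False},
]
allowed, violations = evaluate(plan)
print(allowed, [v[2] for v in violations])
```

Wired into a CI/CD pipeline, a failing `evaluate` result is what "shifts left" the security check: the misconfigured bucket never reaches production.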
The Convergence of CCAI, Chatbots, and RCS Messaging: Redefining Business Communication in the AI Era

This article examines the transformative convergence of Conversational AI (CCAI), intelligent chatbots, and Rich Communication Services (RCS) in modern business communication. The integration of these technologies represents a paradigm shift from traditional messaging systems toward sophisticated, context-aware engagement platforms that deliver personalized customer experiences at scale. As organizations across industries increasingly recognize conversational interfaces as essential components of their digital strategy, this convergence addresses longstanding limitations in customer engagement by enabling consistent interactions across multiple channels. The article analyzes how advanced NLP capabilities, machine learning algorithms, and contextual awareness combine with RCS features like rich media sharing, interactive elements, and verified business profiles to create powerful communication ecosystems. Through case studies spanning retail, financial services, and healthcare sectors, the article demonstrates how this technological integration delivers measurable improvements in customer satisfaction, operational efficiency, and conversion rates. It further explores implementation challenges, ethical considerations, and future trends including multimodal communication, emotional intelligence, and decentralized architectures, providing a comprehensive framework for understanding how these technologies are collectively redefining business communication in the AI era.

Raghu Chukkala
Democratizing AI: How AutoML is Transforming Enterprise Cloud Strategies

Automated Machine Learning (AutoML) is transforming how enterprises develop and implement AI solutions by democratizing access to advanced machine learning capabilities. This paradigm shift enables organizations to overcome traditional barriers to AI adoption by automating complex processes throughout the machine learning lifecycle, from data preprocessing to model deployment and monitoring. By reducing technical complexity and accelerating development cycles, AutoML allows domain experts without specialized data science knowledge to build effective AI solutions that address specific business challenges. Cloud providers have integrated robust AutoML capabilities into their platforms, enabling seamless implementation across various industries, including financial services, manufacturing, and retail. Despite impressive advancements, organizations must remain mindful of limitations regarding specialized applications, model transparency, and data quality requirements as they navigate their AutoML implementation journey.

Swapna Reddy Anugu
Modernizing Higher Education Through Cloud-Based Centralized ERP Systems: Lessons in Security and Efficiency

Cloud-based centralized Enterprise Resource Planning (ERP) systems have revolutionized higher education administration, providing integrated platforms that transform how institutions manage operations, secure data, and optimize costs. These systems offer advantages over traditional fragmented infrastructures, including improved IT flexibility, enhanced cost efficiency, superior educational services, and increased productivity. By implementing a unified data architecture with service-oriented components, institutions can address legacy challenges like data silos, redundant processes, and security vulnerabilities. Case studies demonstrate that large universities and smaller college consortiums benefit from hybrid cloud architectures, containerized microservices, and comprehensive security frameworks. While implementation presents challenges, particularly in data migration and integration with specialized academic systems, modern solutions like quality scoring algorithms, API gateways, and event-driven architectures effectively mitigate these issues. As technology evolves, institutions adopting cloud-based ERP systems are better positioned to harness emerging innovations such as artificial intelligence, blockchain for credential management, and Internet of Things (IoT) applications, paving the way for transformative advancements in education.

Sanjiv Kumar Bhagat
Edge Computing Revolution: Architecting the Future of Distributed Infrastructure

VMware's Edge Computing and Emerging Technologies are transforming modern IT infrastructure by enabling organizations to process data closer to the source, reducing latency and enhancing performance for distributed workloads. This comprehensive article encompasses the Edge Compute Stack (ECS), which provides a lightweight, scalable solution for deploying virtual machines at the edge across industries such as retail, manufacturing, and telecommunications. The integration of AI and machine learning capabilities, powered by GPUs, enables advanced analytics while maintaining data sovereignty in compliance with regional regulations. VMware's multi-cloud strategies strengthen edge deployments by ensuring workload mobility and disaster recovery across environments. Future technological advancements include ARM-based deployments, quantum-safe encryption, and self-healing infrastructure. Real-world applications demonstrate tangible benefits in retail inventory management, manufacturing predictive maintenance, and telecommunications service delivery. VMware's security approach incorporates multiple protection layers through micro-segmentation and zero-trust principles, while the Sovereign Cloud framework addresses data governance systematically across distributed environments.

Sai Prasad Mukala
Breaking Down Data Silos: Leveraging Data Layers and Unification Strategies for AI-Driven Business Intelligence in Healthcare

Data silos remain a key obstacle to business intelligence (BI), constraining holistic analytics and strategic decision-making. Despite advances in cloud computing, artificial intelligence, and data integration techniques, organizations continue to face scattered data environments. In this paper, based on Data Governance Theory, we examine the causes, impacts, and mechanisms of mitigating data silos through an integrative strategic framework. By leveraging centralized data lakes, cloud data warehouse services such as Salesforce Data Cloud, and real-time data virtualization, the proposed model introduces a governance-based decision-making approach. We present comparative case studies in healthcare and retail, demonstrating real-world application and performance estimates. Furthermore, we address concerns related to scalability, implementation challenges, and cost to ensure practical adoption. Original visual models, evaluation matrices, and projections of future trends reinforce this paper's theoretical and practical contributions.

Jagjot Bhardwaj, Sana Zia Hassan, Ashwin Saxena, Sivanagaraju Gadiparthi
Streaming Data at Scale – Ensuring Data Integrity in Kafka with Schema Registry

In the era of big data analytics and streaming platforms, Apache Kafka and Schema Registry based data pipelines have made rapid progress, enabling the implementation of large-scale, high-throughput data pipelines for enterprises. This paper presents a robust approach to managing and validating high-throughput streaming data using Apache Kafka in conjunction with Apache Avro and Confluent Schema Registry. We delve into strategies for effective schema design, serialization, and enforcement of compatibility rules, focusing on how Avro’s compact binary format and Schema Registry’s versioning capabilities work together to safeguard against data corruption and serialization mismatches. We then discuss the challenges with Schema Registry in terms of performance for high-throughput systems and governance for distributed teams. We propose two novel solutions to address these challenges: a gRPC-based schema registry validation to address latency in high-throughput systems, and a multi-layer governance model to maintain change control in Schema Registry for distributed teams implementing Kafka, Avro, and Schema Registry based solutions.

Karan Alang, Dipankar Saha, Jitender Jain
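The compatibility enforcement at the heart of a schema registry can be sketched as a check between schema versions. This is a simplified, Avro-flavored backward-compatibility rule in plain Python (the real Avro schema-resolution rules also cover type promotion, aliases, and unions):

```python
def backward_compatible(old_schema, new_schema):
    """A new (reader) schema is backward compatible if it can still read
    data written with the old one: every field it adds must carry a
    default, while dropping old fields is allowed (simplified rule)."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for f in new_schema["fields"]:
        if f["name"] not in old_fields and "default" not in f:
            return False
    return True

v1 = {"type": "record", "name": "Payment",
      "fields": [{"name": "id", "type": "string"},
                 {"name": "amount", "type": "double"}]}

# Adding an optional field with a default keeps old records readable.
v2_ok = {**v1, "fields": v1["fields"] + [
    {"name": "currency", "type": "string", "default": "USD"}]}

# A new required field without a default breaks backward compatibility.
v2_bad = {**v1, "fields": v1["fields"] + [
    {"name": "currency", "type": "string"}]}

print(backward_compatible(v1, v2_ok), backward_compatible(v1, v2_bad))
```

A registry runs a check like this at registration time and rejects the incompatible version, which is what prevents the serialization mismatches the paper discusses from ever reaching consumers.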
Cloud-Native Financial Intelligence: Distributed AI Architectures for Real-Time Market Analysis

Market complexity has outpaced traditional computing paradigms, creating a growing gap between available information and actionable intelligence in financial services. Drawing from three years of implementation experience, we present a distributed financial AI architecture that leverages containerized microservices alongside edge processing capabilities, enabling unprecedented analytical speed across global markets. Unexpectedly, our multi-tier approach revealed that selective computation reallocation during volatility spikes produced better results than uniform scaling strategies—a counterintuitive finding that contradicted our initial hypotheses. The production deployment processes over 7 million daily transactions and has demonstrated latency reductions of 76% while simultaneously improving predictive accuracy by 23%—figures that surprised even our implementation team. Security concerns initially threatened adoption; however, our zero-trust implementation framework (developed iteratively with compliance teams) has satisfied regulatory requirements across European and Asian markets. Several implementation challenges arose during deployment, including intermittent data consistency issues that required architectural modifications not initially anticipated. This paper offers both theoretical contributions and practical implementation guidance based on hard-won deployment experience across multiple financial institutions. The framework has already been adopted by five major trading firms, with client-reported ROI exceeding initial projections by approximately 40%.

Jitender Jain, Medha Gupta
Attackers Leveraging AI: Challenges and Countermeasures

The rapid advancement of artificial intelligence (AI), together with its swift adoption across different fields of technology, has brought powerful changes to cybersecurity practices. AI provides defenders with capabilities including anomalous-activity identification, threat forecasting, and incident automation, yet it grants robust capabilities to attackers as well. AI allows attackers to automate reconnaissance activities, create polymorphic malware, generate deepfake content, and target machine learning system vulnerabilities through adversarial attacks. This dual use for defense and attack has introduced a fundamental transformation within cybersecurity. The research investigates modern AI-driven threats alongside countermeasures, including adversarial training, AI-augmented threat identification, deepfake detection protocols, and standards for managing ethical AI implementation. The growing intensity of the security competition between attackers and defenders demands the immediate implementation of proactive governance systems, robust AI frameworks, and cross-sector cooperation to protect the digital environment of the future.

Murali Mohan Malyala, Suryaprakash Nalluri, Hemalatha Kandagiri
Cybersecurity for Smart Grids: Resilient Energy Systems in the Digital Era

The digital transformation of energy systems through smart grids has improved efficiency, sustainability, and service delivery. However, integrating IoT devices, AI, and advanced metering infrastructure has also introduced complex cybersecurity challenges. This paper addresses the critical need for resilient cybersecurity measures to protect smart grids from emerging threats, including ransomware, data breaches, and grid manipulation. Focusing on proactive risk management, the paper explores strategies to secure grid communication protocols and mitigate vulnerabilities in Advanced Metering Infrastructure (AMI). Smart grids can quickly identify and neutralize threats by leveraging AI-driven threat detection and response mechanisms. Case studies of cyber incidents impacting smart grids highlight the importance of resilient designs and robust security policies. The paper concludes with actionable recommendations for fortifying smart grids to ensure uninterrupted service delivery and system stability in the face of evolving cyber threats.

Anirudh Khanna, Suryaprakash Nalluri
Data Security and Privacy Through API Gateway

The API ecosystem has advanced significantly in recent years, becoming increasingly sophisticated and, unfortunately, more vulnerable to cyber attacks through various vectors. Middleware solutions like an API Gateway are essential to safeguard these crucial interfaces. These gateways secure the transactions that flow through them and play a vital role in managing and enforcing the policies necessary to protect consumer data and organizational assets. By acting as a robust barrier between external requests and internal resources, API Gateways help ensure that communications remain secure, reliable, and compliant with established security protocols, thereby safeguarding consumer data and ensuring adherence to applicable laws. The API Gateway is also crucial in bolstering transparency surrounding the utilization of consumer data. It ensures that data is managed effectively and aligns with established standards and protocols for sharing this information with a central aggregator. By facilitating seamless data flow, the API Gateway enhances accountability and trust, allowing organizations to demonstrate their commitment to responsible data management practices.

Anoop Gupta
Disruption in Data Engineering – Lakehouse Revolution with Iceberg

The data lakehouse is the latest evolution in big data architecture, combining the reliability of data warehouses with the scalability of data lakes. A key enabler of this paradigm is the open table format, which provides transactional consistency, schema evolution, and efficient data management. Apache Iceberg has emerged as the most advanced and widely adopted open table format solution, addressing critical challenges such as inefficient partitioning, lack of ACID compliance, and the absence of SQL-native analytics in traditional data lakes. It is a disruptive and transformative technology which is redefining the landscape of large-scale data management. Iceberg reintroduces SQL query efficiency at scale, eliminates vendor lock-in by supporting multiple processing engines, and enables time travel for historical data access. Its zero-copy architecture optimizes storage while maintaining a single source of truth for enterprises. This paper provides an in-depth analysis of Iceberg’s technical foundations, its advantages over other table formats, and its transformative role in modern data lakehouse architecture. Through detailed exploration and real-world use cases, we demonstrate how Iceberg is redefining data engineering and driving the future of scalable, cost-effective data architectures in the form of data lakehouse.

Dipankar Saha
Strategic Engineering Approaches for Enterprise System Transformation

This article introduces a novel Enterprise Performance Engineering Maturity Model (EPEMM) for transforming enterprise systems through advanced performance engineering methodologies. Unlike existing approaches, our framework synthesizes industry practices with empirical validation across multiple organizational contexts. Through extensive field research including surveys (n = 128), structured interviews (n = 42), and longitudinal case studies of Fortune 500 implementations, we demonstrate how the proposed EPEMM enables organizations to systematically advance their performance capabilities across five distinct maturity levels. Our comparative analysis reveals significant variations in effectiveness across industry sectors, with financial services achieving 43% greater improvement than manufacturing when implementing Level 4 practices. The research addresses critical gaps in current performance engineering literature by providing a reproducible methodology for capability assessment and progression, architectural reference models for implementation, and empirically-validated decision frameworks for practice selection. Our findings contribute to both theoretical understanding of performance engineering evolution and practical implementation guidance for organizations seeking performance excellence.

Sudhakar Reddy Narra
Dynamic Offer and Payment Personalization (DOPP): Reducing Cart Abandonment in E-commerce Using Random Forest Machine Learning Model

Cart abandonment in e-commerce, with a 70.19% rate in 2025, results in significant revenue losses for retailers. This paper proposes Dynamic Offer and Payment Personalization (DOPP), coined the Durga DOPP Framework: a novel system leveraging machine learning and real-time behavioural data, such as browsing time and cart value, to tailor payment options like digital wallets, buy-now-pay-later, or instant discounts to individual users. By analysing customer preferences dynamically, DOPP enhances checkout efficiency and user satisfaction. A simulation study with 10,000 synthetic transactions modelled on mid-sized retailer data showed a 28% reduction in cart abandonment and a 15% increase in conversion rates compared to static payment systems. This scalable, data-driven approach offers retailers a practical solution to optimize checkout processes, reduce losses, and improve customer engagement. The paper details DOPP’s architecture, implementation, and potential for broader e-commerce applications.

Durga Krishnamoorthy
Edge-Ready GenAI: Optimizing Performance and Efficiency for Resource-Constrained Environments

Generative AI models represent a significant advancement in content creation capabilities but face substantial challenges when deployed at the network edge due to inherent resource constraints. This article examines comprehensive optimization strategies for enabling generative AI functionality on edge devices without requiring cloud connectivity. The exponential growth in model size has created a widening gap between computational requirements and the limited resources available in edge environments. Through systematic model compression, architectural redesign, and hardware-software co-optimization, generative models can achieve dramatic efficiency improvements while maintaining acceptable quality thresholds. The compression techniques examined include pruning methodologies that systematically eliminate redundant parameters, quantization approaches that reduce numerical precision, and knowledge distillation methods that transfer capabilities from larger models to compact alternatives. Architectural innovations such as modified attention mechanisms, conditional computation, and neural architecture search further enhance efficiency by fundamentally rethinking model design for resource-constrained environments. The integration of these techniques with hardware-specific optimizations and specialized software frameworks enables practical deployment across diverse application domains. Real-world implementations in speech processing, computer vision, and industrial IoT demonstrate that properly optimized generative models can operate within edge constraints while delivering near-real-time performance and maintaining high-quality outputs. These advancements empower industries to leverage generative AI capabilities in scenarios where privacy concerns, connectivity limitations, or latency requirements make cloud-based processing impractical.
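Of the compression techniques surveyed, quantization is the easiest to show concretely. The sketch below applies symmetric 8-bit post-training quantization to a handful of weights; real toolchains use per-channel scales and calibration, but the core scale-and-round idea is the same.

```python
# Sketch of post-training 8-bit quantization, one of the compression
# techniques surveyed. Real toolchains add per-channel and asymmetric
# schemes; this shows only the core scale/round/restore idea.

def quantize_int8(weights):
    """Map float weights to int8 codes with one symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Restore approximate float weights from int8 codes."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # int8 codes in [-127, 127]
print(max_err)  # reconstruction error bounded by the scale step
```

Storing each weight in one byte instead of four is what yields the roughly 4x memory reduction that makes edge deployment feasible, at the cost of the small reconstruction error measured above.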

Sai Kalyan Reddy Pentaparthi
Enhancing Cross-VM Covert Channel Communication: Hybrid Approach with Advanced AI–Based Detection

Covert channels in multi-tenant virtualization environments pose a substantial security threat by allowing unauthorized data transfer through shared hardware resources. This paper proposes a novel hybrid approach that integrates timing-based and storage-based covert channel techniques with dynamic rate adaptation and advanced machine learning detection. Our framework leverages real-time hardware profiling and statistical feature extraction, enabling robust covert communication while minimizing detectability. Experimental evaluation in a nested virtualization setup demonstrates throughput rates up to 24.36 bytes per second with minimal error rates, outperforming single-technique baselines. We further present a machine learning–based detection system capable of identifying covert activity with over 90% accuracy and low false positives. These findings highlight the urgency of comprehensive mitigation strategies such as traffic normalization and multi-layer monitoring to secure virtualized cloud infrastructures against covert data exfiltration.
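The timing-based half of the hybrid channel can be illustrated with a toy encoder/decoder: the sender represents bits as long or short gaps between observable events, and the receiver recovers them with a threshold. Real cross-VM channels modulate shared hardware (caches, memory bus) rather than explicit delays; the values below are purely illustrative.

```python
# Conceptual sketch of a timing-based covert channel: bits become
# long/short inter-event gaps, recovered by thresholding. A real
# cross-VM channel would modulate shared hardware state instead.

SHORT, LONG = 0.001, 0.004          # gap lengths in seconds (illustrative)
THRESHOLD = (SHORT + LONG) / 2      # midpoint separates the two symbols

def encode(bits):
    """Turn a bit string into a sequence of inter-event delays."""
    return [LONG if b == "1" else SHORT for b in bits]

def decode(delays):
    """Recover bits by thresholding the observed gaps."""
    return "".join("1" if d > THRESHOLD else "0" for d in delays)

message = "101100"
print(decode(encode(message)))  # prints "101100"
```

The paper's dynamic rate adaptation corresponds to adjusting SHORT/LONG at runtime as hardware noise varies, trading throughput against error rate and detectability.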

Dewank Pant, Manan Wason, Shruti Lohani
Enhancing POS Security: AI-Powered Identity Matching for Transaction Fraud Prevention

Retail fraud at Point of Sale (POS) checkouts, costing $30 billion annually worldwide, stems from weak identity verification, with 1–5% of transactions fraudulent and 60% of losses tied to high-value purchases (>$100). This study proposes an AI-driven system aiming to curb this by matching cardholder names with loyalty profiles in real time, supplemented by biometric (fingerprint) and behavioral biometric (gesture recognition) authentication for high-risk transactions. Leveraging natural language processing (92% accuracy) and anomaly detection (88% precision), the system flags 10% of transactions for secondary checks. In a simulation of 10,000 transactions (20% fraudulent), it detected 90% of frauds (1,800/2,000), including 450/500 high-value cases, far surpassing a baseline’s 30% detection rate (600). The system achieved low-risk transaction latency of 0.3 s and 0.6 s for high-risk cases with biometrics, compared to 1.5 s for traditional methods. It reduced false positives by 80% (500 to 100), chargebacks by 86% (1,400 to 200), and manual reviews by 80% (50 to 10 h/week). Implemented using POS fingerprint pads and gesture sensors, with Python-based simulations, this scalable, privacy-compliant solution enhances checkout efficiency for retailers like supermarkets and electronics chains. Biometrics outperformed behavioral methods in speed, though cost and adoption pose challenges. Future real-world tests will refine these findings, advancing AI and cybersecurity to bolster global retail resilience.

Uttam Kumar, Durga Krishnamoorthy
Harnessing Artificial Intelligence to Revolutionize Education: Personalized Learning and Beyond

Artificial Intelligence (AI) holds transformative potential for education by personalizing learning, enhancing teaching effectiveness, and streamlining administrative tasks. This paper explores key applications of AI in education, focusing on personalized learning platforms, AI-driven tutoring systems, and automation of administrative processes. Through machine learning, AI can tailor content to meet individual student needs, preferences, and abilities, promoting more effective and inclusive learning experiences. Building on this, we introduce the concept of hyper-personalized learning through Agentic AI as a novel advancement. Unlike traditional personalization approaches, Agentic AI enables dynamic, real-time adaptation by treating AI systems as autonomous agents that actively plan, reason, and interact with students. These intelligent agents can continuously adjust learning pathways based on a student’s evolving goals, emotional states, and learning behaviors, offering a deeply individualized and responsive educational experience. AI-powered tutoring systems provide real-time, targeted feedback, helping students overcome challenges and enabling continuous academic growth. For educators, AI reduces administrative burdens by automating grading, scheduling, and record-keeping, freeing up time to focus on student engagement and instruction. Beyond traditional classrooms, AI supports lifelong learning by delivering personalized content that helps individuals remain adaptable in a rapidly changing workforce. This paper also examines the ethical considerations surrounding AI in education, including issues of privacy, bias, and access.
By analyzing current advancements and challenges, the study highlights opportunities for integrating AI, including hyper-personalized, agent-driven approaches, to foster equity, accessibility, and inclusivity in global education systems. Ultimately, this comprehensive analysis aims to demonstrate how AI, particularly through the novel lens of hyper-personalization using Agentic AI, can redefine the educational experience and drive meaningful improvements in learning outcomes for students worldwide.

Dhivya Nagasubramanian
Harnessing the Potential of Unstructured Data (Audio): A New Era for Decision-Making

Unstructured data, encompassing audio, text, images, videos, social media posts, and sensor data, represents a valuable yet underutilized resource in modern business and research. Unlike structured data, unstructured data demands advanced analytical techniques to derive meaningful insights, particularly through Artificial Intelligence (AI) and Machine Learning (ML). This paper focuses on the growing importance of audio data as a form of unstructured information and explores the methods developed to harness its potential, including Natural Language Processing (NLP), speech recognition, and sound analysis. By leveraging these advanced techniques, businesses can analyze audio data from diverse sources—such as customer service calls, podcasts, surveillance systems, and media content—gaining insights into customer sentiment, operational efficiency, and market trends. The paper investigates the applications of audio data across industries including healthcare, finance, retail, and entertainment, emphasizing its role in enhancing patient care through speech analysis, supporting financial decision-making, and optimizing customer experience. Despite its considerable potential, challenges such as data quality, privacy concerns, and scalability continue to pose obstacles. Addressing these challenges will enable organizations to fully exploit audio data, driving innovation and improving decision-making in data-driven environments.

Dhivya Nagasubramanian
AI Chronic Diseases Preventive Care: Integrating Electronic Health Records, Genomic Data, and Real-Time Patient Monitoring with AI for Enhanced Early Detection of Chronic Diseases and Optimization of Peptide Drug Manufacturing

Chronic diseases such as diabetes, cardiovascular conditions, and metabolic syndromes have emerged as the leading causes of mortality and healthcare expenditure globally. Traditional healthcare systems, primarily reactive and fragmented, often fail to detect chronic conditions early or manage them effectively. Simultaneously, the pharmaceutical sector faces challenges in peptide-based therapeutic manufacturing, including variability, inefficiency, and high production costs. This paper proposes a comprehensive, multi-layered AI framework that addresses these dual challenges by integrating Electronic Health Records (EHRs), genomic data, real-time patient monitoring through Internet of Medical Things (IoMT) devices, and pharmaceutical bioreactor telemetry. The architecture leverages cloud-native data integration, predictive analytics through machine learning and deep learning models, conversational AI agents for patient engagement, and digital twins combined with reinforcement learning (RL) to optimize peptide drug manufacturing. Quantitative results demonstrate significant improvements: early disease detection models achieved a ROC-AUC of 0.92 and an F1-score of 0.87, while pharmaceutical optimization reduced cycle times by 40% and improved quality control rates by 17%. The framework is designed with ethical AI principles, including bias mitigation, human-in-the-loop validation, federated learning for privacy preservation, and explainability through SHAP and LIME methods. This research offers a scalable, ethical blueprint for transforming both chronic disease management and pharmaceutical manufacturing, ensuring broader access, greater efficiency, and enhanced patient outcomes.

Ashwin Saxena, Sana Zia Hassan, Jagjot Bhardwaj
HealthVigil: Harnessing Federated AI for Cross-Border Pandemic Intelligence & Preemptive Intervention

The COVID-19 pandemic highlighted flaws in the global public health surveillance infrastructure and unveiled the challenges of timely outbreak detection, efficient cross-border coordination, and quick responses to novel health threats. This paper introduces HealthVigil, a federated AI framework for building new pandemic intelligence and pre-emptive intervention capacities. HealthVigil uses privacy-preserving federated learning so that models train collaboratively without any agency or institution having to centralize sensitive patient data. Using datasets ranging from clinical records, genomic sequences, social media signals, mobility patterns, and environmental factors, HealthVigil provides a thorough early warning system to detect potential outbreaks before they propagate to epidemics. Our framework incorporates three key innovations: (1) a distributed anomaly detection system that detects abnormal disease patterns while ensuring compliance with data sovereignty and privacy regulations; (2) an explainable AI module that provides transparent insights to public health officials, enhancing trust and facilitating timely decision-making; and (3) a cross-border coordination protocol that enables secure information sharing and collaborative response planning between nations while maintaining local governance. Using historical data from past outbreaks, HealthVigil detects emerging pandemics up to 43 days earlier, providing the lead time needed for early intervention. Its federated architecture reduces false alarm rates to 37 percent of those produced by isolated systems, all while addressing the ethical issues and regulatory impediments that have prohibited the international sharing of health data. The implementation challenges we discuss are data standardization, algorithmic fairness for diverse populations, and governance frameworks to enable global adoption.
HealthVigil is a significant step toward a more capable global health infrastructure, one that could help avert future pandemics through earlier, collaborative sensing and response.
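The privacy-preserving training that this kind of framework relies on can be sketched as one federated averaging (FedAvg-style) round: each site contributes only model parameters, weighted by its data size, and patient records never leave the institution. The weight vectors and site sizes below are hypothetical.

```python
# Toy federated-averaging round: each site trains locally and only
# model weights (never patient records) are aggregated centrally.
# A sketch of FedAvg-style aggregation; all numbers are hypothetical.

def federated_average(site_weights, site_sizes):
    """Data-size-weighted average of per-site model parameter vectors."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[j] * n for w, n in zip(site_weights, site_sizes)) / total
        for j in range(dim)
    ]

# Hypothetical weight vectors from three hospitals of different sizes.
weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 100, 200]
print(federated_average(weights, sizes))  # approximately [0.45, 0.75]
```

Weighting by site size keeps large hospitals from being drowned out by small ones while still letting every participant influence the shared model — the aggregation step a real deployment would additionally protect with secure aggregation or differential privacy.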

Shubham Gupta, Swapna Nadakuditi
HyperAutomation in Healthcare: Transforming Operations Through AI, RPA, and Intelligent Workflows

HyperAutomation in healthcare leverages advanced technologies such as AI (Artificial Intelligence), RPA (Robotic Process Automation), and intelligent workflows to comprehensively optimize and automate complex processes. This integration drives efficiency and accuracy and enhances patient care by significantly reducing manual interventions. This study aims to explore the evolution and impact of HyperAutomation in the healthcare sector, analyze the core technologies that enable it, and assess its benefits, challenges, and future directions. Our research findings indicate substantial improvements in operational efficiency and patient care quality, coupled with a significant reduction in errors and operational costs. However, there are noteworthy ethical considerations and barriers to widespread adoption that need to be addressed. The methodologies employed in this study include a combination of qualitative and quantitative research methods, utilizing data from case studies, industry reports, and academic literature to provide a comprehensive analysis. Through this research, we aim to contribute to the understanding of HyperAutomation’s transformative potential in healthcare and offer actionable insights for its effective implementation.

Fardin Quazi, Nandhakumar Raju
Harnessing Artificial Intelligence for Global Sustainability: Innovative Solutions to Climate, Health, and Urban Challenges

Artificial Intelligence (AI) has emerged as a powerful tool in addressing some of the most pressing global challenges, including climate change, public health crises, and rapid urbanization. This paper explores how AI-driven innovations can enhance global sustainability efforts by providing practical and scalable solutions. We examine AI applications in climate modeling, renewable energy optimization, disease prediction, urban planning, and smart infrastructure management, demonstrating their potential to transform policy-making and societal resilience. Case studies illustrate how AI systems are currently being deployed successfully to manage resources more efficiently, predict and mitigate risks, and support decision-making processes in diverse geographical contexts. The analysis also discusses ethical considerations, potential barriers, and recommendations for integrating AI responsibly to ensure equitable outcomes. Ultimately, this paper emphasizes the critical role of interdisciplinary collaboration in leveraging AI to create sustainable and resilient communities worldwide.

Srinivas Reddy Kosna
Machine Learning in Healthcare Mobile Applications: Advancing Patient Care Through Intelligent Systems

This article examines the integration of machine learning technologies in healthcare mobile applications, focusing on their implementation, challenges, and impact across various medical settings. It analyzes the adoption of ML-powered healthcare solutions, exploring real-time diagnostics, patient monitoring systems, and personalized treatment optimization. It covers technical frameworks, including core ML technologies and data processing pipelines, while addressing critical challenges in data privacy, regulatory compliance, and model interpretability. It further evaluates implementation best practices, examining model optimization techniques and validation frameworks, culminating in a comprehensive assessment of healthcare outcomes and economic benefits. Through extensive analysis of multiple healthcare facilities and patient populations, the article demonstrates the transformative potential of ML integration in improving healthcare delivery, patient care, and operational efficiency.

Kamal Gupta
Utilizing the Metaverse: Innovative Approaches for Improved Management and Organisational Effectiveness

The emergence of the metaverse offers numerous innovative improvements in the management practices of enterprises. The current study provides a comprehensive overview of the revolutionary potential of the metaverse in enhancing cooperation, refining training methodologies, and establishing immersive work environments. Organizations like Meta, Microsoft, Mayo Clinic, Osso VR, Epic Games, and Roblox Corporation can transform conventional management strategies by integrating virtual and augmented reality technologies, substantially enhancing employee engagement and productivity. The current study delineates principal uses of the metaverse, including education, vehicle documentation, virtual medical investigation, virtual meetings, remote team-building exercises, and interactive training initiatives, substantiated by contemporary research and case studies. Additionally, drawing on published literature, the current study offers direct and alternative solutions to the challenges associated with implementing these technologies, such as data security, privacy issues, and the necessity for resilient organizational infrastructure. The study concludes that adopting the metaverse can yield a competitive edge by improving operational efficiency and cultivating a culture of innovation.

Ola Al Mari, Felix Velica-Martin, Pedro Brazo
Optimizing Apache Spark Workflows on Kubernetes for Cloud-Native Environments

As enterprises transition toward cloud-native architectures, the integration of Apache Spark with Kubernetes offers a powerful, flexible foundation for scalable distributed data processing. This combination supports a wide range of workloads, from high-throughput batch jobs and real-time streaming pipelines to scalable machine learning (ML) training and inference, all within a unified, containerized environment. However, running Spark on Kubernetes introduces several operational and performance challenges, including resource allocation, pod scheduling, fault tolerance, and cluster optimization. This paper identifies the key differences between traditional Spark deployments that use YARN as the resource manager and Spark deployments on Kubernetes, covering not just the benefits of running Spark on Kubernetes but also the challenges and mitigation strategies. We also investigate architectural considerations, tuning strategies, and deployment best practices for running diverse Spark workloads efficiently in Kubernetes environments. We explore dynamic resource allocation, executor and driver optimization, pod affinity rules, and integration with persistent and object storage solutions. Additionally, we address multi-tenant configurations that require fair scheduling, namespace isolation, and secure resource boundaries to support concurrent data teams and use cases. To operationalize Spark in production, we present DevOps-aligned best practices using Helm charts, GitOps workflows, and CI/CD pipelines to manage versioned, repeatable Spark deployments. Our findings serve as a comprehensive guide for data engineers, ML practitioners, and platform architects seeking to build robust Spark-on-Kubernetes pipelines for batch, streaming, and ML workloads.
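A minimal sketch of what such a deployment looks like in practice, assuming the image name, namespace, API-server address, and application path are placeholders: a `spark-submit` against a Kubernetes master with dynamic allocation enabled (shuffle tracking is required for dynamic allocation on Kubernetes, since there is no external shuffle service as under YARN).

```shell
# Hedged config sketch: submitting a Spark job to Kubernetes with
# dynamic allocation. <k8s-apiserver> and <registry> are placeholders.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name example-etl \
  --conf spark.kubernetes.namespace=data-eng \
  --conf spark.kubernetes.container.image=<registry>/spark:3.5.0 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.executor.memory=4g \
  local:///opt/spark/app/etl_job.py
```

Capping `maxExecutors` and pinning the namespace are the first levers for the multi-tenant fairness and isolation concerns the paper discusses.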

Karan Alang, Ankush Gumber, Praveen Chaitanya Jakku, Saigurudatta Pamulaparthyvenkata
AI-Driven Cloud Optimization for Cost Efficiency

AI-driven cloud optimization represents a transformative approach to addressing the significant challenges of cloud resource management and cost efficiency. As global cloud expenditure continues to grow at a rapid pace, organizations face increasing pressure to optimize their cloud investments while maintaining performance standards. This article examines how artificial intelligence technologies are revolutionizing cloud resource management through dynamic allocation, predictive analytics, and automated workload optimization. The integration of machine learning algorithms with cloud infrastructure enables unprecedented levels of accuracy in resource forecasting, automated scaling, and workload classification. These capabilities allow organizations to significantly reduce both over-provisioning and under-provisioning scenarios that plague traditional threshold-based management approaches. The economic benefits of these technologies are substantial and multifaceted, extending beyond direct cost reduction to include improved application performance, reduced downtime, and decreased operational overhead. As the complexity of cloud environments continues to increase, the strategic value of AI-driven optimization becomes increasingly apparent across diverse industry sectors, from financial services to healthcare and e-commerce.

Tarun Kumar Chatterjee
Preventing Homelessness Before It Happens: A Cloud-Based Risk Prediction Model for Sustainable Cities

Homelessness remains a critical challenge for urban governance, especially in rapidly growing cities such as Seattle. Despite substantial investment in housing assistance and support services, systemic limitations have led to reactive strategies that address homelessness after it has occurred.

Junaith Meeran Haja Mohideen
Revolutionizing Homeopathy: Integrating Data Analytics and AI/ML for Precision Remedy

Homeopathy is a system of medicine based on the principle ‘like cures like’ and serves over 200 million people globally [1]. Traditionally, homeopathic remedy selection is guided primarily by the materia medica [2] (a dataset of symptom-remedy pairs), a process that is subjective, slow, and heavily reliant on practitioner expertise. Existing automated systems also struggle to match complex symptom profiles to appropriate remedies accurately, particularly given variations in patient responses and the difficulty of integrating real-time feedback. The proposed framework applies advanced data analytics, ensemble learning, and natural language processing (NLP) to minimize excessive testing and recommend the right medication in fewer attempts, and it continues to learn and improve from patient feedback and actual treatment outcomes. The materia medica is converted into a structured dataset and matched against patient symptoms, with Apache Spark handling large volumes of patient data and PyTorch powering the predictive models. NLP extracts useful information from free-text symptom descriptions, and ensemble analytics combines this with historical data, making the framework dynamic and focused on real-world results.
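The structured-dataset matching step can be illustrated with a toy version: score each remedy's symptom set from a materia-medica-style table against a patient's symptoms and rank the candidates. The remedies and symptom lists are placeholders, and plain set overlap stands in for the paper's NLP and ensemble models.

```python
# Illustrative sketch of the matching core: rank remedies from a
# structured materia-medica-style dataset against a patient's symptom
# set. Set overlap stands in for the paper's NLP/ensemble pipeline;
# remedies and symptoms below are placeholders.

def jaccard(a, b):
    """Overlap between two symptom sets, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

materia_medica = {
    "remedy_A": {"headache", "nausea", "fatigue"},
    "remedy_B": {"fever", "cough"},
    "remedy_C": {"headache", "insomnia"},
}

def rank_remedies(patient_symptoms):
    """Return remedy names ordered by match quality, best first."""
    scores = {r: jaccard(patient_symptoms, s)
              for r, s in materia_medica.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank_remedies({"headache", "nausea"}))  # remedy_A ranks first
```

In the full framework, the similarity function would be replaced by learned models updated from treatment outcomes, which is what makes the ranking adaptive rather than static.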

Ram Ghadiyaram, Vamshidhar Morusu, Durga Krishnamoorthy, Jaya Eripilla
Safeguarding Personal Finances: A PySpark-Driven Risk Modeling Framework Inspired by Institutional Failures

Improper Value at Risk (VaR) estimation contributed to the collapse of institutions like Lehman Brothers (2008) and Barings Bank (1995), where inadequate risk modeling failed to capture extreme losses. For individuals, similar missteps in underestimating investment risks or unexpected expenses can jeopardize financial stability. This study introduces a PySpark-driven risk modeling framework to safeguard personal finances, adapting institutional-grade techniques such as Monte Carlo simulations, stress testing, VaR, and economic capital estimation. Applied to retirement planning ($500,000–$1,000,000 initial savings, $40,000 annual expenses, 6% return, 15% volatility), the framework estimates a 52.3–94.5% probability of savings lasting 30 years. A refined scenario with a $1,000,000 portfolio, 4% inflation-adjusted withdrawals, and simulated market crashes reveals success rates dropping to 41.5% under stress, underscoring the need for robust risk management. By leveraging PySpark’s scalability, this framework bridges institutional and personal finance, offering a data-driven tool for financial resilience and fintech innovation.
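A pure-Python stand-in for the PySpark Monte Carlo described above: simulate annual returns and withdrawals and estimate the probability that savings survive the horizon. The parameters mirror the abstract's scenario (6% mean return, 15% volatility, $40,000 expenses); modeling annual returns as plain normal draws is a simplification of what the paper's framework would do at scale.

```python
# Pure-Python sketch of the retirement Monte Carlo; PySpark would
# distribute these trials. Normal annual returns are a simplification.

import random

def success_probability(initial, annual_expense, mean_return=0.06,
                        volatility=0.15, years=30, trials=5000, seed=42):
    """Fraction of simulated paths where savings never hit zero."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        balance = initial
        ruined = False
        for _ in range(years):
            # apply a random annual return, then withdraw expenses
            balance = balance * (1 + rng.gauss(mean_return, volatility))
            balance -= annual_expense
            if balance <= 0:
                ruined = True
                break
        if not ruined:
            successes += 1
    return successes / trials

p_high = success_probability(1_000_000, 40_000)
p_low = success_probability(500_000, 40_000)
print(round(p_high, 2), round(p_low, 2))  # larger nest egg, higher odds
```

The gap between the two estimates reproduces the qualitative finding in the abstract: an 8% withdrawal rate ($40k from $500k) is far riskier over 30 years than a 4% rate from $1M.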

Jaya Eripilla, Ram Ghadiyaram, Durga Krishnamoorthy, Vamshidhar Morusu
Smart Grid Enterprise Integration: Security and Analytics Framework

This article presents a comprehensive framework integrating enterprise architecture for smart grid management with fraud detection systems, with particular emphasis on critical security, latency, and bandwidth requirements across diverse grid segments. The framework guarantees mission-critical reliability rates up to 99.999% while facilitating real-time data processing from Phasor Measurement Units operating at 30–120 samples per second with latencies under 100 ms. The architecture’s multi-layered design addresses the communication diversity spanning from Home Area Networks (HANs) operating at 10–100 Kbps to Wide Area Networks (WANs) requiring 2–10 Mbps bandwidth capacity. Advanced analytics capabilities including dimensionality reduction techniques compress PMU data from 500 to 20 dimensions while preserving 98% of variance, enhancing scalability. The security framework efficiently identifies complex false data injection attacks even with access to only 4 meters in a 14-bus system. Enhanced by cloud computing infrastructure and achieving event classification accuracy above 95%, this framework offers a robust, real-time solution for modern grid demands, effectively balancing performance, security, and interoperability requirements.

Gokul Babu Kuttuva Ganesan
Sustainable Backup and Recovery Practices in Cybersecurity

The increasing reliance on digital infrastructure for critical operations has elevated the importance of backup and recovery systems, yet their environmental impact is often overlooked. This paper addresses the intertwined challenges of cybersecurity resilience and sustainability in backup and recovery practices. Traditional data protection approaches consume significant energy and resources, resulting in inefficiencies and increased carbon emissions. This work explores sustainable practices to address these issues, including data deduplication, compression, tiered storage, and energy-efficient cloud-based solutions. The integration of renewable energy sources and optimized lifecycle management further demonstrates the potential to reduce the environmental footprint of backup systems. Through case studies, the paper highlights real-world examples of organizations implementing eco-friendly strategies to achieve robust data protection while aligning with sustainability goals. The paper emphasizes the importance of businesses adopting green data protection strategies that enhance operational resilience and promote environmental stewardship.

Anirudh Khanna, Suryaprakash Nalluri
Smart Textile Circularity: A Hybrid Framework of AI-Enabled Optimization and Blockchain-Based Transparency

The global textile and fashion industry generates 92 million tons of waste annually and is responsible for 20% of industrial water pollution (Ellen MacArthur Foundation). As sustainability concerns rise, AI and machine learning (ML) are emerging as transformative tools to mitigate textile waste, optimize production efficiency, and drive sustainable innovation. The textile industry is among the largest contributors to environmental pollution, responsible for excessive water consumption, chemical waste, and carbon emissions. AI and ML present innovative solutions to optimize manufacturing processes, enable textile recycling, and enhance supply chain transparency. This paper explores AI-driven approaches to sustainable textile production, analyzing key environmental and economic data to evaluate AI’s impact through AI-driven textile sorting, predictive analytics for demand forecasting, and AI-assisted material innovation, all of which can significantly reduce waste and pollution. The study highlights under-researched areas and challenges, and proposes future research directions.

Sana Zia Hassan, Abhaar Gupta
The Convergence of Cloud and Digital Financial Architecture in Enterprise Systems

This paper examines how cloud computing and the digitalization of financial architecture converge in enterprise systems, particularly in terms of scale, efficiency, and innovation. It introduces a novel architectural framework, the Bhatia Digital Finance Reference Architecture (DFRA), designed to accelerate cloud ERP transformation in highly regulated industries using SAP S/4HANA Cloud. The paper addresses how cloud-based ERP systems such as SAP S/4HANA Cloud play a significant role in enabling real-time data processing and compliance, as well as integration with emerging technologies such as blockchain, AI, and predictive analytics. It highlights the advantages of embedded finance and API-driven ecosystems for modularity and operational flexibility. Combining AI and machine learning with blockchain integration improves transaction security and cost efficiency as well as fraud detection and decision making. Still, security, compliance, and multi-cloud management present ongoing challenges. The cloud can be made fit for FinTech purposes by optimizing cloud-FinTech integration, managing security exposures, and improving regulation to support sustainable financial transformation in a rapidly digitalizing environment.

Rahul Bhatia
The Rise of Smart Villages: Connecting Communities Through Data Analytics, AI/ML and Cloud Technology for Sustainable Agriculture and E-commerce

Rapid urbanization is shifting populations toward cities, yet billions in rural areas face challenges in accessing markets, healthcare, and infrastructure. Smart villages address these gaps by deploying IoT-based sensor networks, AI-driven analytics, edge computing, and cloud technology to foster sustainability and resilience. This paper presents a novel Smart Village platform integrating smart farming, healthcare, retail e-commerce, intelligent lighting, parking, traffic management, security, and environmental monitoring. The farming module leverages IoT and AI to enhance agricultural efficiency, achieving a 91% accurate crop prediction model, while healthcare improves medical access through telemedicine, significantly speeding up diagnoses. Retail e-commerce empowers rural economies, boosting farmer incomes by 25% and improving goods access for most villagers. We analyze the platform’s multi-layered architecture, data processing, and open API framework, addressing challenges like cybersecurity, privacy, and interoperability. This paper presents a roadmap to highlight AI, 5G, and blockchain as future enablers, with findings suggesting that predictive analytics and decentralized systems can transform rural communities into connected, economically vibrant ecosystems, aligning with global sustainability goals.
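As a loose illustration of the kind of crop prediction the farming module performs, the sketch below implements a tiny nearest-neighbour recommender. The feature set (soil pH, rainfall in mm, temperature in °C) and the sample data are invented for illustration; the paper's 91%-accurate model is not reproduced or described here.

```python
import math

# Hypothetical historical samples: (soil pH, rainfall mm, temperature C) -> crop.
# A production system would use far more features and normalize their scales.
SAMPLES = [
    ((6.5, 800, 27), "rice"),
    ((7.0, 400, 22), "wheat"),
    ((6.0, 600, 30), "cotton"),
    ((5.5, 1200, 26), "rice"),
]

def predict_crop(features):
    """Return the crop of the closest historical sample (1-nearest neighbour)."""
    return min(SAMPLES, key=lambda s: math.dist(s[0], features))[1]
```

A real deployment would feed the model from IoT soil and weather sensors rather than hand-entered tuples, but the lookup-by-similarity idea is the same.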

Vamshidhar Morusu, Durga Krishnamoorthy, Ram Ghadiyaram, Jaya Eripilla
The Role of AI in Modern Data Engineering: Automating ETL and Beyond

Artificial intelligence is transforming data engineering by enhancing traditional Extract, Transform, Load (ETL) processes with adaptive, self-optimizing systems. As organizations confront growing data volumes and complexity, AI offers solutions that extend beyond conventional approaches, introducing capabilities for automated schema detection, intelligent data quality management, performance optimization, and natural language interfaces. These advancements enable dynamic adaptation to changing data structures, sophisticated anomaly detection, resource allocation optimization, and more intuitive human-system interactions. Across financial services, manufacturing, and healthcare sectors, AI-driven data pipelines demonstrate substantial improvements in fraud detection, IoT data processing, and patient data harmonization. While challenges persist in explainability, training data requirements, governance, and skill transitions, the future points toward augmentation rather than replacement—creating synergistic partnerships between human expertise and machine intelligence that combine strategic thinking with pattern recognition at scale.
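To make the automated schema detection idea concrete, here is a minimal sketch of how an ETL pipeline might infer column types from sampled rows and flag drift against a stored schema. The type vocabulary ("int", "float", "str") and the voting heuristic are illustrative assumptions, not any specific product's behaviour.

```python
from collections import Counter

def infer_type(values):
    """Vote on the most likely type for a column from sampled values."""
    votes = Counter()
    for v in values:
        if v is None or v == "":
            continue  # ignore missing values when voting
        for caster, name in ((int, "int"), (float, "float")):
            try:
                caster(v)
                votes[name] += 1
                break
            except (TypeError, ValueError):
                continue
        else:
            votes["str"] += 1
    return votes.most_common(1)[0][0] if votes else "unknown"

def infer_schema(rows):
    """Collect values per column, then infer each column's type."""
    cols = {}
    for row in rows:
        for k, v in row.items():
            cols.setdefault(k, []).append(v)
    return {k: infer_type(vs) for k, vs in cols.items()}

def detect_drift(expected, rows):
    """Compare a stored schema against freshly inferred column types."""
    actual = infer_schema(rows)
    return {
        "added": set(actual) - set(expected),
        "removed": set(expected) - set(actual),
        "changed": {k for k in set(actual) & set(expected)
                    if actual[k] != expected[k]},
    }
```

A self-optimizing pipeline would act on the drift report, for example by quarantining changed columns or regenerating downstream transforms, rather than failing the whole load.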

Janardhan Reddy Kasireddy
Integrating Zero Trust in CI/CD: A Modern Approach to DevSecOps Security

Modern DevSecOps systems depend heavily on the Zero Trust security framework to protect their environments, particularly CI/CD pipelines. Unlike security models based on perimeter controls, Zero Trust implements "never trust, always verify": it continuously authenticates users, enforces strict access restrictions, and monitors threats in real time. Implementing Zero Trust within DevSecOps improves software security because security measures are distributed across the complete development cycle, from code to deployment. Artificial intelligence and automation strengthen Zero Trust protection through anomaly detection, behavioural pattern analysis, and threat prediction. As organizations transition to cloud-native systems and microservices, Zero Trust becomes a necessary approach for protecting against internal vulnerabilities and supply chain attacks and for minimizing security entry points. Identity-based access control, combined with continuous monitoring and automated security policies, yields a flexible DevSecOps system that enables businesses to build a resilient, future-proof defense (Thopalle, 2024).
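One way to picture "never trust, always verify" inside a pipeline is a short-lived, stage-scoped token that each CI/CD step re-verifies instead of trusting upstream stages. The sketch below is a minimal illustration under invented assumptions (HMAC signing, a 5-minute TTL, a secret injected from a secrets manager); it is not the paper's proposed mechanism.

```python
import hashlib
import hmac

SECRET = b"rotate-me-via-your-secrets-manager"  # assumption: injected at runtime
TOKEN_TTL = 300  # seconds a token stays valid

def issue_token(identity: str, stage: str, now: float) -> str:
    """Sign (identity, stage, issue time) so any stage can verify it later."""
    payload = f"{identity}|{stage}|{int(now)}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, expected_stage: str, now: float) -> bool:
    """Re-verify signature, freshness, and stage scope at every step."""
    try:
        identity, stage, issued, sig = token.split("|")
    except ValueError:
        return False
    payload = f"{identity}|{stage}|{issued}"
    expected_sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected_sig)
            and now - int(issued) <= TOKEN_TTL
            and stage == expected_stage)
```

Scoping the token to a single stage means a credential stolen from the build step cannot authorize a deploy, which is the entry-point minimization the abstract describes.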

Praveen Chaitanya Jakku, Mohammed Shakeer Bandrev, Suryaprakash Nalluri, Murali Mohan Malyala
Anatomy of Modern Cyber Threats: Case Studies on the PowerSchool Breach and DDoS Attacks in Critical Infrastructure

This paper provides a comprehensive case study of two significant cyber incidents in 2024–2025: a major data breach at PowerSchool that compromised the data of over 72 million students and educators, and a record-setting 3.8 Tbps DDoS attack on critical infrastructure. Through forensic investigation and threat actor profiling, the paper reveals systemic weaknesses of cloud-based systems in education and the fragility of current infrastructure against volumetric cyberattacks. It examines key pressure points, including third-party supply chain exposure, credential theft via infostealer malware, and increasingly sophisticated DDoS attack vectors. It also assesses current mitigation methods, such as zero-trust architecture, AI-powered anomaly detection, and decentralized DDoS scrubbing networks. By analysing these disruptive events, the survey derives lessons that can help improve the resilience of both education systems and national infrastructure against evolving cyber threats.

Bhushan Bhimrao Chavan, Vishalkumar Langaliya, Ashish Dhone
Intelligent Identity Orchestration with AI-Driven Policy Reconciliation for Multi-Cloud Security

Intelligent Identity Orchestration with AI-driven policy reconciliation emerges as a comprehensive solution for enterprises navigating the complex security challenges of multi-cloud environments. This article addresses the fundamental limitations of traditional identity and access management systems through a decentralized identity control plane that harmonizes authentication and authorization across disparate cloud platforms while preserving their native capabilities. By leveraging advanced transformer-based models like BERT (Bidirectional Encoder Representations from Transformers) and RoBERTa, the system translates provider-specific IAM configurations into normalized vector representations that capture semantic intent regardless of syntactical differences. Natural language processing facilitates this reconciliation through specialized pipelines that perform entity recognition, dependency parsing, and semantic role labeling to extract core policy components such as principals, actions, resources, and conditions across varying provider terminologies. These capabilities enable organizations to automatically detect and resolve policy conflicts, implement just-in-time (JIT) identity provisioning, and remediate policy misconfigurations across AWS, Azure, GCP, and on-premises infrastructure. The architecture integrates with open standards such as Identity Query Language (IDQL), Open Policy Agent (OPA), and zero trust principles to ensure consistent governance without duplicating infrastructure. This paradigm shift delivers substantial benefits including enhanced security posture through the elimination of policy gaps, operational efficiency via automated management, simplified regulatory compliance across jurisdictions, scalability to accommodate emerging technologies, and comprehensive risk reduction that encompasses privilege escalation, unauthorized access, and compliance violations. 
While implementation challenges exist regarding AI explainability and organizational change management, future advancements in decentralized identity integration and adaptive risk-based authorization promise to further transform multi-cloud security approaches.
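The normalization-then-reconciliation step can be sketched in miniature: map provider-specific statements to a common (principal, action, resource, effect) form, then flag allow/deny conflicts. The field names, the synonym table, and the sample statements below are invented for illustration; real AWS, Azure, and GCP policy schemas differ, and the paper's system uses learned embeddings rather than a lookup table.

```python
# Assumed cross-provider action vocabulary (illustrative, not real schemas).
ACTION_SYNONYMS = {
    "s3:GetObject": "storage.read",
    "storage.objects.get": "storage.read",
    "Microsoft.Storage/blobs/read": "storage.read",
}

def normalize(stmt):
    """Map a provider-specific statement to a normalized form."""
    return {
        "principal": stmt["principal"].lower(),
        "action": ACTION_SYNONYMS.get(stmt["action"], stmt["action"]),
        "resource": stmt["resource"],
        "effect": stmt["effect"].lower(),
    }

def find_conflicts(statements):
    """Flag (principal, action, resource) keys granted both allow and deny."""
    seen = {}
    conflicts = []
    for s in map(normalize, statements):
        key = (s["principal"], s["action"], s["resource"])
        if key in seen and seen[key] != s["effect"]:
            conflicts.append(key)
        seen[key] = s["effect"]
    return conflicts
```

In the described architecture this detection would feed automated remediation; here it simply returns the conflicting keys.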

Aditi Mallesh
Backmatter
Title
ICT for Global Innovations and Solutions
Edited by
Saurav Bhattacharya
Copyright year
2026
Electronic ISBN
978-3-032-02853-2
Print ISBN
978-3-032-02852-5
DOI
https://doi.org/10.1007/978-3-032-02853-2

