Artificial Intelligence Advances
https://journals.bilpubgroup.com/index.php/aia
ISSN: 2661-3220 (Online)
Email: aia@bilpublishing.com

A Novel Framework for Text-Image Pair to Video Generation in Music Anime Douga (MAD) Production
https://journals.bilpubgroup.com/index.php/aia/article/view/6848

The rapid growth of digital media has driven advancements in multimedia generation, notably in Music Anime Douga (MAD), which blends animation with music. Creating MADs currently requires extensive manual labor, particularly for designing critical frames. Existing methods such as GANs and transformers excel at text-to-video synthesis but lack the precision needed for artistic control in MADs, and they often neglect the crucial hand-drawn frames that form the visual foundation of these videos. This paper introduces a novel framework for generating high-quality videos from text-image pairs to address this gap. The multi-modal system interprets narrative and visual inputs, generating seamless video outputs by integrating text-to-video and image-to-video synthesis. This approach enhances artistic control, preserving the creator's intent while streamlining the production process, and it democratizes MAD production, encouraging broader artistic participation and innovation. The paper provides a comprehensive review of existing research, details the model's architecture, and validates its effectiveness through experiments, laying the groundwork for future advancements in AI-assisted MAD creation.

Ziqian Luo, Feiyang Chen, Xiaoyang Chen, Xueting Pan
Copyright © 2024 Ziqian Luo, Feiyang Chen, Xiaoyang Chen, Xueting Pan (https://creativecommons.org/licenses/by-nc/4.0)
Published: Wed, 31 Jul 2024

Double-Compressed Artificial Neural Network for Efficient Model Storage in Customer Churn Prediction
https://journals.bilpubgroup.com/index.php/aia/article/view/6377

In the rapidly evolving field of Artificial Intelligence (AI), efficiently storing and managing AI models is crucial, particularly as their complexity and size increase. This paper examines the strategic importance of AI model storage, focusing on performance, cost-efficiency, and scalability in customer churn prediction, using model compression technologies. Deep learning networks have grown increasingly large, often comprising millions of parameters, which makes the models computationally expensive and demanding to store. To address these issues, the paper applies model compression techniques, specifically pruning and quantization, to mitigate the storage and computational challenges. These techniques reduce the physical footprint of AI models and enhance their processing efficiency, making them suitable for deployment on resource-constrained devices; experimental results demonstrate the effectiveness of the proposed method. Applying these models to customer churn prediction in telecommunications illustrates their potential to improve service delivery and decision-making. By compressing models, telecom companies can better manage and analyze large datasets, enabling more effective customer retention strategies and maintaining a competitive edge in a dynamic market.

Lisang Zhou, Huitao Zhang, Ning Zhou
Copyright © 2024 Lisang Zhou, Huitao Zhang, Ning Zhou (https://creativecommons.org/licenses/by-nc/4.0)
Published: Mon, 29 Apr 2024
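To make the "double compression" pipeline in this abstract concrete, the sketch below applies magnitude pruning followed by dynamic int8 quantization in PyTorch. The network architecture, the 20-feature churn input, and the 50% pruning ratio are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch: magnitude pruning followed by dynamic int8 quantization.
# The layer sizes and 20-feature input are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class ChurnNet(nn.Module):
    def __init__(self, n_features: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # churn logit
        )

    def forward(self, x):
        return self.net(x)

model = ChurnNet()

# Step 1: prune 50% of the smallest-magnitude weights in each linear layer,
# then fold the pruning masks into the weights permanently.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# Step 2: dynamically quantize the remaining weights to int8, shrinking
# storage and speeding up CPU inference.
compressed = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

torch.save(compressed.state_dict(), "churn_model_compressed.pt")
```

Dynamic quantization is applied after pruning is made permanent so that the int8 weights already reflect the sparsified network; on disk, the zeroed weights also compress well under standard archive formats.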
DNN-Based AI-Driven H2/H∞ Filter Design of Nonlinear Stochastic Systems via Two-Coupled HJIEs-Supervised Adam Learning Algorithm
https://journals.bilpubgroup.com/index.php/aia/article/view/7261

This study introduces a new approach that uses supervised-learning deep neural networks (DNNs) to develop an AI-driven filter for nonlinear stochastic signal systems subject to external disturbance and measurement noise. The filter aims at a balanced design between the H2 norm and the H∞ norm of the state estimation error, achieving optimal and robust filtering of the nonlinear signal system simultaneously while accounting for environmental disturbance and measurement noise. Traditionally, this nonlinear filter design requires solving two complex coupled Hamilton-Jacobi-Isaacs equations (HJIEs). To simplify this design process, a novel two-coupled-HJIEs-supervised Adam learning algorithm is proposed for the DNN-based AI-driven filter. The algorithm trains the filter offline using worst-case scenarios of environmental disturbance and measurement noise; the state estimation errors generated in this training phase teach the filter to coordinate the nonlinear system model under worst-case disturbance and noise, the Luenberger-type filter, the estimation-error dynamic model, and the two-coupled-HJIEs-supervised deep Adam learning algorithm so that the mixed H2/H∞ filtering strategy is achieved effectively. The study demonstrates theoretically that the desired mixed H2/H∞ filtering strategy is attained once the Adam learning algorithm converges. Finally, the effectiveness of the proposed DNN-based AI-driven filter design method is validated through simulations involving trajectory estimation and prediction of an incoming ballistic missile detected by a radar system.

Bor-Sen Chen, Jui-Ming Ma, Ruei-Syuan Wu
Copyright © 2024 Bor-Sen Chen, Jui-Ming Ma, Ruei-Syuan Wu (https://creativecommons.org/licenses/by-nc/4.0)
Published: Sun, 20 Oct 2024
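The offline training procedure described here can be illustrated, in highly simplified form, by the sketch below: a DNN produces a Luenberger-type correction and is trained with Adam on simulated disturbance and noise. The dynamics f, measurement map h, all dimensions, and the plain squared-error loss are hypothetical stand-ins; the paper's actual objective is supervised by the two coupled HJIEs, which this sketch does not reproduce.

```python
# Highly simplified sketch of offline Adam training for a DNN-based filter.
# f, h, the dimensions, and the loss are placeholders, not the paper's model.
import torch
import torch.nn as nn

state_dim, meas_dim = 4, 2

# DNN mapping the current estimate and measurement to a Luenberger-type
# correction (the "AI-driven filter gain").
filter_net = nn.Sequential(
    nn.Linear(state_dim + meas_dim, 64), nn.Tanh(),
    nn.Linear(64, state_dim),
)
opt = torch.optim.Adam(filter_net.parameters(), lr=1e-3)

def f(x):  # placeholder nonlinear state transition
    return x - 0.1 * torch.tanh(x)

def h(x):  # placeholder nonlinear measurement map
    return x[..., :meas_dim]

for step in range(1000):
    x = torch.randn(128, state_dim)        # true state samples
    w = 0.5 * torch.randn_like(x)          # worst-case-style disturbance
    v = 0.1 * torch.randn(128, meas_dim)   # measurement noise
    x_next = f(x) + w
    y = h(x_next) + v

    x_hat = torch.randn_like(x)            # current state estimate
    correction = filter_net(torch.cat([x_hat, y], dim=-1))
    x_hat_next = f(x_hat) + correction     # Luenberger-type update

    err = x_hat_next - x_next
    # The paper supervises training with two coupled HJIEs; here the
    # estimation-error energy serves as a crude placeholder objective.
    loss = err.pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```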
Domain Adaptation-Based Deep Learning Framework for Android Malware Detection Across Diverse Distributions
https://journals.bilpubgroup.com/index.php/aia/article/view/6718

This study addresses the challenge of Android malware detection, a critical issue given the pervasive threats affecting mobile devices. As Android malware evolves, conventional detection methods struggle with novel or polymorphic malware that bypasses traditional defenses. This research leverages machine learning (ML) and deep learning (DL) techniques to overcome these limitations, adopting domain adaptation strategies that improve model generalization across different distributions. The approach divides a dataset into distinct distributions and applies domain adaptation techniques to maintain robustness and accuracy despite distribution shifts. Preliminary results demonstrate that domain adaptation significantly improves detection accuracy in target domains not represented in the training data. The paper showcases a domain adaptation-based method for Android malware detection and illustrates its potential to strengthen security measures in dynamic environments. The findings suggest that integrating advanced ML and DL strategies with domain adaptation can substantially improve the efficacy of malware detection systems.

Shuguang Xiong, Xiaoyang Chen, Huitao Zhang, Meng Wang
Copyright © 2024 Shuguang Xiong, Xiaoyang Chen, Huitao Zhang, Meng Wang (https://creativecommons.org/licenses/by-nc/4.0)
Published: Sat, 29 Jun 2024
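As one concrete way to realize the domain adaptation this abstract describes, the sketch below uses a DANN-style gradient reversal layer to learn distribution-invariant features. The abstract does not specify this particular technique, and the 256-dimensional app-feature input and classifier heads are assumptions for illustration.

```python
# Sketch of DANN-style domain adaptation via gradient reversal. The paper
# does not name this exact method; feature_dim and the heads are assumed.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the encoder.
        return -ctx.lambd * grad_output, None

feature_dim = 256  # assumed size of extracted app features
encoder = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU())
malware_head = nn.Linear(128, 2)  # benign vs. malicious
domain_head = nn.Linear(128, 2)   # source vs. target distribution

params = (list(encoder.parameters()) + list(malware_head.parameters())
          + list(domain_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, lambd=0.1):
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    # Malware labels are only available in the source distribution.
    cls_loss = ce(malware_head(z_src), y_src)
    # The domain classifier sees both distributions; gradient reversal
    # pushes the encoder toward distribution-invariant features.
    z_all = torch.cat([GradReverse.apply(z_src, lambd),
                       GradReverse.apply(z_tgt, lambd)])
    d_labels = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                          torch.ones(len(x_tgt), dtype=torch.long)])
    dom_loss = ce(domain_head(z_all), d_labels)
    loss = cls_loss + dom_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random stand-in batches:
# train_step(torch.randn(32, feature_dim),
#            torch.randint(0, 2, (32,)),
#            torch.randn(32, feature_dim))
```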