Artificial Intelligence Advances https://journals.bilpubgroup.com/index.php/aia <p>ISSN: 2661-3220 (Online)</p> <p>Email: aia@bilpublishing.com</p> en-US aia@bilpublishing.com (Managing Editor: Minne) ojs@bilpublishing.com (IT Support) Wed, 30 Oct 2024 00:00:00 +0800 OJS 3.3.0.13 http://blogs.law.harvard.edu/tech/rss 60 A Novel Framework for Text-Image Pair to Video Generation in Music Anime Douga (MAD) Production https://journals.bilpubgroup.com/index.php/aia/article/view/6848 <p>The rapid growth of digital media has driven advancements in multimedia generation, notably in Music Anime Douga (MAD), which blends animation with music. Creating MADs currently requires extensive manual labor, particularly for designing critical frames. Existing methods such as GANs and transformers excel at text-to-video synthesis but lack the precision needed for artistic control in MADs, and they often neglect the crucial hand-drawn frames that form the visual foundation of these videos. This paper introduces a novel framework for generating high-quality videos from text-image pairs to address this gap. Our multi-modal system interprets narrative and visual inputs and generates seamless video outputs by integrating text-to-video and image-to-video synthesis. This approach enhances artistic control, preserving the creator's intent while streamlining the production process. Our framework democratizes MAD production, encouraging broader artistic participation and innovation. 
We provide a comprehensive review of existing research, detail our model's architecture, and validate its effectiveness through experiments. This study lays the groundwork for future advancements in AI-assisted MAD creation.</p> Ziqian Luo, Feiyang Chen, Xiaoyang Chen, Xueting Pan Copyright © 2024 Ziqian Luo, Feiyang Chen, Xiaoyang Chen, Xueting Pan https://creativecommons.org/licenses/by-nc/4.0 https://journals.bilpubgroup.com/index.php/aia/article/view/6848 Wed, 31 Jul 2024 00:00:00 +0800 Double-Compressed Artificial Neural Network for Efficient Model Storage in Customer Churn Prediction https://journals.bilpubgroup.com/index.php/aia/article/view/6377 <p>In the rapidly evolving field of Artificial Intelligence (AI), efficiently storing and managing AI models is crucial, particularly as their complexity and size increase. This paper explores the strategic importance of AI model storage, focusing on performance, cost-efficiency, and scalability in customer churn prediction, using model compression technologies. Deep learning networks, integral to AI models, have grown increasingly large, often comprising millions of parameters that make them computationally expensive and demanding to store. To address these issues, the paper applies model compression techniques, specifically pruning and quantization, to mitigate the storage and computational challenges; experimental results demonstrate the effectiveness of the proposed method. These techniques reduce the physical footprint of AI models and enhance their processing efficiency, making them suitable for deployment on resource-constrained devices. Applying these models to customer churn prediction in telecommunications illustrates their potential to improve service delivery and decision-making processes. 
By compressing models, telecom companies can better manage and analyze large datasets, enabling more effective customer retention strategies and maintaining a competitive edge in a dynamic market.</p> Lisang Zhou, Huitao Zhang, Ning Zhou Copyright © 2024 Lisang Zhou, Huitao Zhang, Ning Zhou https://creativecommons.org/licenses/by-nc/4.0 https://journals.bilpubgroup.com/index.php/aia/article/view/6377 Mon, 29 Apr 2024 00:00:00 +0800 Domain Adaptation-Based Deep Learning Framework for Android Malware Detection Across Diverse Distributions https://journals.bilpubgroup.com/index.php/aia/article/view/6718 <p>This study addresses the challenge of Android malware detection, a critical issue due to the pervasive threats affecting mobile devices. As Android malware evolves, conventional detection methods struggle with novel or polymorphic malware that bypasses traditional defenses. This research leverages machine learning (ML) and deep learning (DL) techniques to overcome these limitations by adopting domain adaptation strategies that enhance model generalization across different distributions. The approach involves dividing a dataset into distinct distributions and applying domain adaptation techniques to ensure robustness and accuracy despite distribution shifts. Preliminary results demonstrate that domain adaptation significantly improves detection accuracy in target domains not represented in the training data. This paper showcases a domain adaptation-based method for Android malware detection, illustrating its potential to enhance security measures in dynamic environments. 
The findings suggest that integrating advanced ML and DL strategies with domain adaptation can substantially improve the efficacy of malware detection systems.</p> Shuguang Xiong, Xiaoyang Chen, Huitao Zhang, Meng Wang Copyright © 2024 Shuguang Xiong, Xiaoyang Chen, Huitao Zhang, Meng Wang https://creativecommons.org/licenses/by-nc/4.0 https://journals.bilpubgroup.com/index.php/aia/article/view/6718 Sat, 29 Jun 2024 00:00:00 +0800