Enhancing cross-domain data analytics through multi-source transfer learning

Jan 1, 2025 · Jiarong Li, Jingge Wang, Jingyang Wang, Chihan Xu, Hao Wang, Chaobo Zhang, Xiaojun Liang, Wenbo Ding, Weihua Gui
Abstract
The rapid evolution of machine learning and computer vision has given rise to numerous new tasks, driven primarily by the ability of deep convolutional neural networks to learn complex mappings between a feature space X and a label space Y. These tasks typically require large-scale, human-labeled datasets for supervised training, yet manual labeling is both costly and time-consuming. To overcome this hurdle, it is imperative to develop algorithms that can leverage existing labeled datasets to extract knowledge about the target domain. While most prior research has focused on single-source transfer learning, our study investigates multi-source transfer across various domains and tasks. The paper presents an improved cross-domain data analytics model that supports multi-source transfer learning: a classifier is trained to re-weight the different sources, adjusting the rich and intricate information among them to boost target learning. Experiments in which two source domains transfer to a single target domain demonstrate the effectiveness of our methodology. Potential applications in image recognition include object recognition and medical imaging diagnosis. Future research could explore the scalability of the model to larger datasets and further refine the transfer learning process for more efficient knowledge extraction.
Type
Publication
International Conference on Cyberspace Simulation and Evaluation
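The abstract describes training a classifier to re-weight different source domains when transferring to a target. The paper's actual method is not detailed on this page, so the following is only a minimal illustrative sketch of source re-weighting: each (hypothetical) source model is weighted by its accuracy on a small labeled target validation set, and target predictions are a weighted combination of the sources' class scores. All function names and data here are invented for illustration.

```python
# Illustrative sketch of multi-source re-weighting (NOT the paper's method).
# Each source model produces class scores on target inputs; sources are
# weighted by their accuracy on a small labeled target validation set.

def source_weights(source_preds, val_labels):
    """Weight each source by its target-validation accuracy, normalized to sum to 1."""
    accs = []
    for preds in source_preds:
        correct = sum(1 for p, y in zip(preds, val_labels) if p == y)
        accs.append(correct / len(val_labels))
    total = sum(accs) or 1.0
    return [a / total for a in accs]

def weighted_predict(source_scores, weights):
    """Combine per-source class-score vectors with the source weights; return argmax class."""
    n_classes = len(source_scores[0])
    combined = [sum(w * scores[c] for w, scores in zip(weights, source_scores))
                for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: combined[c])

# Two hypothetical sources: source A is more reliable on this target.
val_labels = [0, 1, 1, 0]
preds_A = [0, 1, 1, 0]   # 4/4 correct on target validation
preds_B = [0, 0, 1, 1]   # 2/4 correct
w = source_weights([preds_A, preds_B], val_labels)  # A gets weight 2/3, B gets 1/3

# Class scores from each source for one new target sample.
scores = [[0.9, 0.1],    # source A favors class 0
          [0.2, 0.8]]    # source B favors class 1
print(weighted_predict(scores, w))  # prints 0 (weighted toward the stronger source)
```

In practice the paper learns the weighting with a classifier rather than fixing it from validation accuracy, but the structure is the same: per-source reliability estimates gate how much each source contributes to the target prediction.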