DOI: 10.1145/3503161.3548121

Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation

Published: 10 October 2022

Abstract

Domain adaptation (DA) addresses scenarios in which the test data does not follow the same distribution as the training data, and multi-source domain adaptation (MSDA) is particularly attractive for real-world applications. By learning from large-scale unlabeled samples, self-supervised learning has become a prominent trend in deep learning. Notably, self-supervised learning and multi-source domain adaptation share a similar goal: both aim to leverage unlabeled data to learn more expressive representations. Unfortunately, traditional multi-task self-supervised learning faces two challenges: (1) the pretext task may not be strongly related to the downstream task, making it difficult to transfer useful knowledge from the pretext task to the target task; (2) when the pretext and downstream tasks share the same feature extractor and differ only in their prediction heads, inter-task information exchange and knowledge sharing remain ineffective. To address these issues, we propose a novel Self-Supervised Graph Neural Network (SSG), in which a graph neural network serves as a bridge that enables more effective inter-task information exchange and knowledge sharing. A more expressive representation is learned by adopting a mask-token strategy that masks out part of the domain information. Extensive experiments demonstrate that SSG achieves state-of-the-art results on four multi-source domain adaptation datasets, confirming its effectiveness from multiple aspects.
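The two ideas the abstract describes, a graph neural network that lets task representations exchange information, and a mask-token strategy that hides domain information, can be illustrated with a minimal sketch. Everything below (the function names, the mean-aggregation rule, the shapes, and the masking probability) is an illustrative assumption for exposition, not the authors' actual implementation.

```python
import numpy as np

def gnn_bridge(node_feats, adj, weight):
    """One round of mean-aggregation message passing over a task graph.

    node_feats: (n_tasks, d) -- one feature vector per task node
                (pretext or downstream task).
    adj:        (n_tasks, n_tasks) -- 0/1 adjacency; an edge allows two
                tasks to exchange information.
    weight:     (d, d) -- linear map applied after aggregation.
    """
    deg = adj.sum(axis=1, keepdims=True)
    agg = (adj @ node_feats) / np.maximum(deg, 1)  # average over neighbors
    return agg @ weight

def mask_domain_tokens(tokens, mask_token, p, rng):
    """Replace each domain token with a shared mask token with probability p."""
    masked = rng.random(tokens.shape[0]) < p
    out = tokens.copy()
    out[masked] = mask_token  # masked rows all become the mask token
    return out, masked

# Toy usage: 3 task nodes with 4-d features on a fully connected task graph.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
adj = np.ones((3, 3)) - np.eye(3)        # each task attends to the others
w = np.eye(4)                            # identity map for illustration
bridged = gnn_bridge(feats, adj, w)

domain_tokens = rng.normal(size=(5, 4))  # e.g. one token per source domain
mask_tok = np.zeros(4)
masked_tokens, which = mask_domain_tokens(domain_tokens, mask_tok, p=0.5, rng=rng)
```

In this sketch, sharing happens through graph aggregation rather than through a single shared feature extractor with separate heads, which is the contrast the abstract draws; the masking step forces the model to reconstruct or ignore hidden domain information, encouraging domain-invariant features.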

Supplementary Material

MP4 File (MM22-fp1572.mp4)
This video introduces the paper Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation. It contains four sections: introduction, motivation, method, and experiments. In the introduction, we cover domain adaptation and multi-source domain adaptation. In the motivation section, we discuss self-supervised learning and our proposed self-supervised graph neural network. In the method section, we use formulas and figures to make our approach clear. In the experiments section, we present comparisons with state-of-the-art models along with effectiveness studies. Finally, we conclude the paper.




    Published In

    MM '22: Proceedings of the 30th ACM International Conference on Multimedia
    October 2022
    7537 pages
    ISBN:9781450392037
    DOI:10.1145/3503161


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. graph neural network
    2. multi-source domain adaptation
    3. self-supervised learning

    Qualifiers

    • Research-article

    Conference

    MM '22

    Acceptance Rates

    Overall Acceptance Rate 995 of 4,171 submissions, 24%



    Cited By

    • (2024) Domain-Aware Graph Network for Bridging Multi-Source Domain Adaptation. IEEE Transactions on Multimedia 26, 7210-7224. DOI: 10.1109/TMM.2024.3361729. Online publication date: 2-Feb-2024
    • (2024) Subject-Based Domain Adaptation for Facial Expression Recognition. 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG), 1-10. DOI: 10.1109/FG59268.2024.10581958. Online publication date: 27-May-2024
    • (2024) A unified pre-training and adaptation framework for combinatorial optimization on graphs. Science China Mathematics 67, 6, 1439-1456. DOI: 10.1007/s11425-023-2247-0. Online publication date: 7-Feb-2024
    • (2024) Unsupervised Multi-source Adaptive Pedestrian Re-recognition: Based on Target Domain Prioritization and Multi-dimensional Edge Features. Quality, Reliability, Security and Robustness in Heterogeneous Systems, 315-329. DOI: 10.1007/978-3-031-65123-6_23. Online publication date: 20-Aug-2024
    • (2023) Open-Scenario Domain Adaptive Object Detection in Autonomous Driving. Proceedings of the 31st ACM International Conference on Multimedia, 8453-8462. DOI: 10.1145/3581783.3611854. Online publication date: 26-Oct-2023
    • (2023) CoI2A: Collaborative Inter-domain and Intra-domain Alignments for Multisource Domain Adaptation. IEEE Transactions on Geoscience and Remote Sensing 61, 1-8. DOI: 10.1109/TGRS.2023.3326156
    • (2022) Weighted progressive alignment for multi-source domain adaptation. Multimedia Systems 29, 1, 117-128. DOI: 10.1007/s00530-022-00987-7. Online publication date: 8-Aug-2022
