DOI: 10.1145/3650212.3680336

Efficient DNN-Powered Software with Fair Sparse Models

Published: 11 September 2024

Abstract

With the emergence of the Software 3.0 era, there is a growing trend of compressing and integrating large models into software systems, with significant societal implications. Regrettably, in many instances model compression techniques impair the fairness of these models and thus the ethical behavior of DNN-powered software. One of the most notable examples is the Lottery Ticket Hypothesis (LTH), a prevailing model pruning approach. This paper demonstrates that the fairness issue of LTH-based pruning arises from both its subnetwork selection and its training procedure, highlighting the inadequacy of existing remedies. To address this, we propose a novel pruning framework, Ballot, which employs conflict-detection-based subnetwork selection to find accurate and fair subnetworks, coupled with a refined training process to attain a high-performance model, thereby improving the fairness of DNN-powered software. By means of this procedure, Ballot improves the fairness of pruning by 38.00%, 33.91%, 17.96%, and 35.82% compared to state-of-the-art baselines, namely Magnitude Pruning, Standard LTH, SafeCompress, and FairScratch, respectively, based on our evaluation of five popular datasets and three widely used models. Our code is available at https://anonymous.4open.science/r/Ballot-506E.
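For context, the LTH-based pruning the abstract critiques amounts to iterative magnitude pruning with weight rewinding: train, mask out the smallest-magnitude surviving weights, rewind the rest to their initial values, and repeat. Below is a minimal, framework-free sketch of that loop (NumPy only; `train_fn` is a hypothetical stand-in for a real training step, and this is the standard LTH baseline, not the paper's Ballot algorithm):

```python
import numpy as np

def magnitude_mask(weights, mask, prune_fraction):
    """Drop the lowest-magnitude prune_fraction of the still-active weights."""
    active = np.abs(weights[mask])
    threshold = np.quantile(active, prune_fraction)
    return mask & (np.abs(weights) > threshold)

def lottery_ticket_prune(init_weights, train_fn, rounds=3, prune_fraction=0.2):
    """Iterative LTH pruning: train the subnetwork, prune by magnitude,
    then rewind surviving weights to their initial values."""
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask)          # train masked subnetwork
        mask = magnitude_mask(trained, mask, prune_fraction)
    return init_weights * mask, mask                     # rewound sparse "ticket"

# Toy usage: with an identity train_fn, three 20% rounds keep ~0.8**3 = 51%
# of the weights.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
pruned, mask = lottery_ticket_prune(w, lambda x: x)
```

The paper's point is that this selection criterion (pure weight magnitude) and the rewind-and-retrain procedure are exactly where unfairness can creep in, which is what Ballot's conflict-detection-based selection replaces.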


Published In

ISSTA 2024: Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis
September 2024
1928 pages
ISBN:9798400706127
DOI:10.1145/3650212

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. deep neural network
  2. fairness
  3. pruning

Qualifiers

  • Research-article

Funding Sources

  • the National Key Research and Development Program of China
  • the National Natural Science Foundation of China
  • the Shaanxi Province Key Industry Innovation Program

Conference

ISSTA '24

Acceptance Rates

Overall Acceptance Rate 58 of 213 submissions, 27%

