Publications

Research papers in formal verification, trustworthy AI, and security.

Conference Papers

2025 (2 papers)

Convex Hull Approximation for Activation Functions

OOPSLA'25 CCF-A CORE-A

Zhongkui Ma, Zihan Wang, Guangdong Bai · the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'25), co-located with SPLASH'25

Abstract & BibTeX

The wide adoption of deep learning in safety-critical domains has driven the need for formally verifying the robustness of neural networks. A critical challenge in this endeavor lies in addressing the inherent non-linearity of activation functions. The convex hull of the activation function has emerged as a promising solution, as it effectively tightens variable ranges and provides multi-neuron constraints, which together enhance verification precision. Given that constructing exact convex hulls is computationally expensive and even infeasible in most cases, existing research has focused on over-approximating them. Several ad-hoc methods have been devised for specific functions such as ReLU and Sigmoid. Nonetheless, there remains a substantial gap in developing broadly applicable approaches for general activation functions. In this work, we propose WraAct, an approach to efficiently constructing tight over-approximations for activation function hulls. Its core idea is to introduce linear constraints to smooth out the fluctuations in the target function, by leveraging double-linear-piece (DLP) functions to simplify the local geometry. In this way, the problem is reduced to over-approximating DLP functions, which can be efficiently handled. We evaluate WraAct against SBLM+PDDM, the state-of-the-art (SOTA) multi-neuron over-approximation method. WraAct outperforms it on commonly-used functions like Sigmoid, Tanh, and MaxPool, offering superior efficiency (average 400X faster) and precision (average 150X) while constructing fewer constraints (average 50% reduction).
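WraAct's multi-neuron DLP construction is specific to the paper and is not reproduced here. As background, the single-neuron linear relaxation that such multi-neuron hulls tighten can be sketched for a concave region of tanh: on an interval where the function is concave, the chord is a sound lower bound and any tangent is a sound upper bound.

```python
import math

def tanh_relaxation(l, u):
    """Linear lower/upper bounds for tanh on [l, u], assuming 0 <= l < u,
    where tanh is concave: the chord under-approximates, a tangent
    (here at the midpoint) over-approximates."""
    # Chord through (l, tanh(l)) and (u, tanh(u)): sound lower bound.
    k_lo = (math.tanh(u) - math.tanh(l)) / (u - l)
    b_lo = math.tanh(l) - k_lo * l
    # Tangent at the midpoint: sound upper bound on a concave region.
    m = (l + u) / 2
    k_up = 1 - math.tanh(m) ** 2   # derivative of tanh
    b_up = math.tanh(m) - k_up * m
    return (k_lo, b_lo), (k_up, b_up)

# Soundness check by sampling: lower(x) <= tanh(x) <= upper(x) on [0, 2].
(kl, bl), (ku, bu) = tanh_relaxation(0.0, 2.0)
for i in range(101):
    x = 2.0 * i / 100
    assert kl * x + bl <= math.tanh(x) + 1e-9 <= ku * x + bu + 2e-9
```

Multi-neuron methods like WraAct replace these per-neuron bounds with joint constraints over several inputs, which is where the extra precision comes from.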

@article{10.1145/3763086,
  author = {Ma, Zhongkui and Wang, Zihan and Bai, Guangdong},
  title = {Convex Hull Approximation for Activation Functions},
  year = {2025},
  issue_date = {October 2025},
  journal = {Proceedings of the ACM on Programming Languages},
  publisher = {Association for Computing Machinery},
  volume = {9},
  number = {OOPSLA2},
  url = {https://doi.org/10.1145/3763086},
  doi = {10.1145/3763086},
  month = oct,
  articleno = {308},
  numpages = {27}
}

AI Model Modulation with Logits Redistribution

WWW'25 CCF-A CORE-A*

Zihan Wang, Zhongkui Ma, Xinguo Feng, Zhiyang Mei, Zhiyong Ma, Derui Wang, Jason Xue, Guangdong Bai · the ACM Web Conference 2025 (WWW'25)

Abstract & BibTeX

The substantial data and resource consumption of training deep neural networks has rendered large-scale training accessible only to organizations with the necessary infrastructure and massive datasets. We propose AIM, a novel model modulation paradigm that enables a single model to exhibit diverse behaviors meeting the specific needs of stakeholders. AIM introduces a logits redistribution strategy that modulates model behaviors in a training-data-agnostic and retraining-free manner.
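The paper's actual AIM algorithm is not reproduced here; as a toy illustration of what retraining-free logit manipulation can do, the sketch below flattens the logits of disallowed classes at inference time, restricting the model to a subset of its capability without touching any weights. The `modulate` function and its `keep` parameter are hypothetical names for this sketch.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def modulate(logits, keep):
    """Toy logit redistribution (illustrative, NOT the paper's AIM algorithm):
    classes outside `keep` have their logits flattened to the minimum,
    restricting the model's behavior without retraining."""
    floor = min(logits)
    return [z if i in keep else floor for i, z in enumerate(logits)]

logits = [2.0, 0.5, 3.1, -1.0]                       # raw model outputs
full = softmax(logits)                                # unmodulated: class 2 wins
restricted = softmax(modulate(logits, keep={0, 1}))   # modulated: class 0 wins
assert max(range(4), key=lambda i: full[i]) == 2
assert max(range(4), key=lambda i: restricted[i]) == 0
```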

@inproceedings{wang2025ai,
  title={{AI} Model Modulation with Logits Redistribution},
  author={Zihan Wang and Zhongkui Ma and Xinguo Feng and Zhiyang Mei and Zhiyong Ma and Derui Wang and Jason Xue and Guangdong Bai},
  booktitle={The Web Conference 2025},
  year={2025},
  url={https://openreview.net/forum?id=lOSomJvrc5}
}
2024 (3 papers)

Uncovering Gradient Inversion Risks in Practical Language Model Training

CCS'24 CCF-A CORE-A*

Xinguo Feng, Zhongkui Ma, Zihan Wang, Eu Joe Chegne, Mengyao Ma, Alsharif Abuadbba, Guangdong Bai · the 31st ACM SIGSAC Conference on Computer and Communications Security (CCS'24)

Abstract & BibTeX

The gradient inversion attack has been demonstrated as a significant privacy threat to federated learning, particularly in continuous domains such as vision models. We propose GRAB (gradient inversion with hybrid optimization), a domain-specific gradient inversion attack featuring two alternating optimization processes. GRAB can recover a significant portion (up to 92.9% recovery rate) of the private training data.
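GRAB's hybrid optimization is specific to the paper, but the underlying leakage that gradient inversion exploits can be illustrated with the classic dense-layer identity: for `y = Wx + b` and a single example, the weight gradient factors as the bias gradient times the input, so the input is exactly recoverable from shared gradients.

```python
# Classic gradient-leakage identity for a dense layer y = Wx + b:
# dL/dW[i][j] = dL/db[i] * x[j], so any output row with a nonzero bias
# gradient reveals the input exactly. (Illustrates the generic threat
# behind gradient inversion, not the GRAB attack itself.)
def recover_input(grad_W, grad_b):
    for i, gb in enumerate(grad_b):
        if abs(gb) > 1e-12:
            return [gw / gb for gw in grad_W[i]]
    raise ValueError("all bias gradients are zero; not recoverable this way")

# Simulate one training step on a 2-output dense layer with MSE loss.
x = [0.3, -1.2, 2.0]
W = [[0.1, 0.2, -0.4], [0.5, -0.3, 0.7]]
b = [0.05, -0.1]
target = [1.0, 0.0]
y = [sum(W[i][j] * x[j] for j in range(3)) + b[i] for i in range(2)]
err = [2 * (y[i] - target[i]) for i in range(2)]   # dL/dy for MSE
grad_b = err
grad_W = [[err[i] * x[j] for j in range(3)] for i in range(2)]

assert all(abs(a - c) < 1e-9 for a, c in zip(recover_input(grad_W, grad_b), x))
```

Language-model training breaks this clean picture (batching, embeddings, discrete tokens), which is the practical gap the paper addresses.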

@inproceedings{10.1145/3658644.3690292,
  author = {Feng, Xinguo and Ma, Zhongkui and Wang, Zihan and Chegne, Eu Joe and Ma, Mengyao and Abuadbba, Alsharif and Bai, Guangdong},
  title = {Uncovering Gradient Inversion Risks in Practical Language Model Training},
  year = {2024},
  publisher = {Association for Computing Machinery},
  doi = {10.1145/3658644.3690292},
  booktitle = {Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security},
  pages = {3525--3539},
  series = {CCS '24}
}

CoreLocker: Neuron-level Usage Control

S&P'24 CCF-A CORE-A*

Zihan Wang, Zhongkui Ma, Xinguo Feng, Ruoxi Sun, Hu Wang, Minhui Xue, Guangdong Bai · the 45th IEEE Symposium on Security and Privacy (S&P'24)

Abstract & BibTeX

We propose CORELOCKER, employing the strategic extraction of a small subset of significant weights from the neural network. This subset serves as the access key to unlock the model's complete capability. Authorized users with the access key have full access to the model, while unauthorized users can have access to only part of its capability.
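The lock/unlock mechanics can be sketched minimally, assuming a magnitude-based selection of the key weights (the paper's actual extraction criterion may differ): remove a small set of large-magnitude weights, hand them out as the access key, and restore them for authorized users.

```python
# Schematic sketch of the access-key idea (illustrative; not CoreLocker's
# exact extraction criterion): the k largest-magnitude weights become the
# key, and the locked model keeps zeros in their place.
def extract_key(weights, k):
    idx = sorted(range(len(weights)), key=lambda i: abs(weights[i]),
                 reverse=True)[:k]
    key = {i: weights[i] for i in idx}
    locked = [0.0 if i in key else w for i, w in enumerate(weights)]
    return locked, key

def unlock(locked, key):
    return [key.get(i, w) for i, w in enumerate(locked)]

weights = [0.9, -0.05, 1.4, 0.02, -0.8]
locked, key = extract_key(weights, k=2)   # key holds the 2 largest weights
assert locked == [0.0, -0.05, 0.0, 0.02, -0.8]
assert unlock(locked, key) == weights
```

In the real setting the interesting property is that the locked network still runs but with degraded capability, which this toy list of weights cannot show.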

@inproceedings{wang2024corelocker,
  title={CoreLocker: Neuron-level Usage Control},
  author={Wang, Zihan and Ma, Zhongkui and Feng, Xinguo and Sun, Ruoxi and Wang, Hu and Xue, Minhui and Bai, Guangdong},
  booktitle={2024 IEEE Symposium on Security and Privacy (SP)},
  pages={222--222},
  year={2024},
  organization={IEEE Computer Society}
}

ReLU Hull Approximation

POPL'24 CCF-A CORE-A*

Zhongkui Ma, Jiaying Li, Guangdong Bai · the 51st ACM SIGPLAN Symposium on Principles of Programming Languages (POPL'24)

Abstract & BibTeX

Convex hulls are commonly used to tackle the non-linearity of activation functions in the verification of neural networks. We propose a fast and precise approach to over-approximating the convex hull of the ReLU function. Our key insight is to formulate a convex polytope that "wraps" the ReLU hull, by reusing the linear pieces of the ReLU function as the lower faces and constructing upper faces that are adjacent to the lower faces. We implement our approach as WraLU, and evaluate its performance. WraLU outperforms existing methods by generating fewer constraints to achieve tighter approximation in less time. It reduces the number of constraints by up to half, while delivering comparable or even superior results.
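The multi-variable ReLU hull is the paper's contribution; the single-neuron "triangle" relaxation it generalizes is standard and easy to sketch. For pre-activation bounds l < 0 < u, the exact ReLU hull in one variable is the triangle y >= 0, y >= x, y <= u(x - l)/(u - l).

```python
def relu_triangle(l, u):
    """Classic single-neuron ReLU relaxation on [l, u] with l < 0 < u:
    lower faces y >= 0 and y >= x, upper face y <= u*(x - l)/(u - l).
    (WraLU generalizes this picture to multi-variable ReLU hulls.)"""
    assert l < 0 < u
    k = u / (u - l)
    return k, -k * l   # slope and intercept of the upper face

k, b = relu_triangle(-1.0, 2.0)
for i in range(301):
    x = -1.0 + 3.0 * i / 300
    y = max(0.0, x)                 # exact ReLU
    assert y >= 0.0 and y >= x      # lower faces hold trivially
    assert y <= k * x + b + 1e-9    # upper face over-approximates
```

WraLU's "wrapping" keeps the linear pieces of ReLU as lower faces, exactly as in this one-dimensional picture, and the work goes into constructing the adjacent upper faces in higher dimensions.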

@article{ma2024relu,
  title={ReLU Hull Approximation},
  author={Ma, Zhongkui and Li, Jiaying and Bai, Guangdong},
  journal={Proceedings of the ACM on Programming Languages},
  volume={8},
  number={POPL},
  pages={2260--2287},
  year={2024},
  publisher={Association for Computing Machinery}
}
2023 (2 papers)

Verifying Neural Networks by Approximating Convex Hulls

ICFEM'23 CCF-C CORE-C

Zhongkui Ma · International Conference on Formal Engineering Methods (ICFEM'23), Doctoral Symposium

Abstract & BibTeX

The increasing prevalence of neural networks necessitates their verification to ensure security. This work concentrates on approximating the convex hull of activation functions. An approach is proposed that constructs a convex polytope to over-approximate the ReLU hull in the multi-variable setting.

@inproceedings{ma2023verifying,
  title={Verifying Neural Networks by Approximating Convex Hulls},
  author={Ma, Zhongkui},
  booktitle={International Conference on Formal Engineering Methods},
  pages={261--266},
  year={2023},
  organization={Springer}
}

Formalizing Robustness Against Character-Level Perturbations for Neural Network Language Models

ICFEM'23 CCF-C CORE-C

Zhongkui Ma, Xinguo Feng, Zihan Wang, Shuofeng Liu, Mengyao Ma, Hao Guan, Mark Huasong Meng · International Conference on Formal Engineering Methods (ICFEM'23)

Abstract & BibTeX

This work focuses on formalizing robustness specifications against character-level perturbations for neural network language models. We introduce three metrics, namely probability distribution, density, and diversity, to characterize such perturbations in a general way, and formulate the robustness specification against character-level perturbed text inputs.
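The robustness specification quantifies over character-level perturbed inputs. A toy generator of such perturbations might look like the sketch below (a hypothetical illustration; the paper's probability-distribution, density, and diversity metrics are not modeled here).

```python
import random

def perturb(text, n_edits, seed=0):
    """Toy character-level perturbation generator using adjacent swaps and
    substitutions (illustrative only; not the paper's formal model)."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_edits):
        i = rng.randrange(len(chars) - 1)
        if rng.random() < 0.5:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]      # adjacent swap
        else:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")  # substitution
    return "".join(chars)

original = "robustness"
perturbed = perturb(original, n_edits=2)
# Each edit touches at most two positions, so at most 4 characters differ.
assert len(perturbed) == len(original)
assert sum(a != b for a, b in zip(original, perturbed)) <= 4
```

A robustness specification then asks that the model's output agree on `original` and every `perturbed` input within the allowed perturbation budget.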

@inproceedings{ma2023formalizing,
  title={Formalizing Robustness Against Character-Level Perturbations for Neural Network Language Models},
  author={Ma, Zhongkui and Feng, Xinguo and Wang, Zihan and Liu, Shuofeng and Ma, Mengyao and Guan, Hao and Meng, Mark Huasong},
  booktitle={International Conference on Formal Engineering Methods},
  pages={100--117},
  year={2023},
  organization={Springer}
}

Journal Papers

Early work on social simulation and agent-based models, published during the author's undergraduate studies (2014-2018).

2018 (2 papers)

Does Truthfully-Stating Strategy Really Have its Reward? — Research on the Communication Strategies of Innovation Quality

Technology Intelligence Engineering

Haixin Ding, Li Xie, Zhongkui Ma · Technology Intelligence Engineering

Abstract & BibTeX

A multi-phase, multi-state model of innovation diffusion is constructed at the micro level, and typical innovation-quality communication strategies are proposed using agent-based modeling and simulation.

Model of Weibo Negative Public Opinion Communication in Colleges and Universities Based on Double-layer Network

Journal of Jiamusi Vocational Institute

Zhongkui Ma · Journal of Jiamusi Vocational Institute

Abstract & BibTeX

A negative public opinion propagation model is constructed on a mixed two-layer network structure. The research shows that an offline intervention strategy helps reduce the scope and speed of negative public opinion diffusion.