Many AI-based detection systems have been proposed to defend against cybersecurity threats such as malware and complex malicious behaviors. However, even though these systems have achieved excellent detection results on large-scale datasets, a significant gap remains in their reliability and robustness.
Recent studies have shown that many AI-based systems are vulnerable to attacks at both the data and model levels, such as adversarial attacks and backdoor attacks. Hence, there is a pressing demand for research on improving the reliability and robustness of AI-based cybersecurity solutions, such as malware detection and malicious behavior detection, across a variety of deployment scenarios.
We invite the AI-for-cybersecurity community to submit to this special issue research and development on new defense methods for malware classification. The goal is to identify limitations of the current state-of-the-art in AI-based malware classification and to propose new AI defense methods that withstand malicious attacks in both black-box and white-box settings. We believe this special issue will offer a timely collection of research updates to benefit researchers and practitioners working on AI for cybersecurity. Topics of interest include, but are not limited to:
- Data collection, validation, and maintenance (such as malware, goodware, and analysis reports)
- Solving data bottlenecks with AI methods (such as self-supervised, active, and few-shot learning)
- Decision-making for attack and defense (such as adversarial attacks, data poisoning attacks, and backdoor attacks)
- Automated adversarial sample generation
- Provable defense methods against existing white-box/black-box attacks
- Model reliability and robustness enhancement (such as feature engineering, data manipulation, neural network design, and multimodal learning)
- System reliability and robustness evaluation (such as software engineering criteria and interpretation methods)
- AI-based detection of fine-grained factors (such as malicious program behavior)
Important Dates
- Manuscript Submission Deadline: February 1, 2022
- First Round of Review: April 1, 2022
- Revised Papers Due: June 1, 2022
- Final Notification: August 1, 2022
- Final Manuscript Due: September 15, 2022
Submission Guidelines
Papers submitted to this special issue for possible publication must be original and must not be under consideration for publication in any other journal or conference. TDSC requires meaningful technical novelty in submissions that extend previously published conference papers. Extension beyond the conference version(s) is not simply a matter of length. Thus, expanded motivation, expanded discussion of related work, variants of previously reported algorithms, and incremental additional experiments/simulations may provide additional length but will fall below the line for proceeding with review.
For author information and guidelines on submission criteria, visit the TDSC Author Information page. Please submit papers through the ScholarOne system and be sure to select the special-issue name. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.
Guest Editors
Yang Liu, Nanyang Technological University, Singapore
Heng Yin, University of California, Riverside, USA
Sin G. Teo, Agency for Science, Technology and Research, Singapore
Ruitao Feng, Nanyang Technological University, Singapore
Xiaofei Xie, Singapore Management University, Singapore
Damith Ranasinghe, University of Adelaide, Australia
For any queries, please contact tdscspecialeditor@gmail.com.
The post Call for Papers: Special Issue on Reliability and Robustness in AI-Based Cybersecurity Solutions first appeared on IEEE Computer Society.