Data sharing and collaborative model training are promising ways to improve the quality of deep-learning models. However, such settings are usually difficult to implement in practice due to data privacy concerns and relevant regulations such as the GDPR and HIPAA. Federated learning aims to train models collaboratively across distributed data sources without disclosing the private data held at each source, thus enabling privacy-preserving data sharing and collaboration.
However, federated learning also faces multiple challenges that may limit its application in real-world scenarios. For example, federated learning remains vulnerable to various attacks that can leak private data from individual sources or degrade the accuracy of the jointly trained model. Moreover, common federated settings protect each participant's data privacy but cannot detect incorrect inputs or computations from malicious participants, so techniques such as verifiable computing are also needed in federated learning. Furthermore, in critical sectors such as finance and healthcare, not only data privacy but also model interpretability is essential. A model produced by federated training must have behavior and results that are explainable and auditable before it can be used widely and with confidence; only then can federated learning providers gain the trust of federated learning consumers and users.
Federated learning on non-IID distributed data sources usually leads to lower model performance. Designing proper incentive mechanisms for federated systems is also beneficial in practice, as it encourages the active participation of data owners. Social responsibility in federated learning systems is another important topic that needs to be addressed.
We believe this special issue will offer a timely collection of research updates to benefit researchers and practitioners working in federated learning. Topics of interest include, but are not limited to:
- Adversarial Attacks on Federated Learning
- Federated Learning for Non-IID Data
- Incentive Mechanisms in Federated Learning Systems
- Interpretability in Federated Learning
- Social Responsibility in Federated Learning Systems
- Fully Decentralized Federated Learning
- Verifiable Computing in Federated Learning
- Federated Learning with Blockchain
- Privacy-Preserving Techniques in Federated Learning
- Communication Efficiency in Federated Learning
- Federated Learning with Heterogeneous Devices
- Federated Learning with Unreliable Participants
- Systems and Infrastructures for Federated Learning
- Applications of Federated Learning
Important Dates
Submission deadline: 15 January 2022
First notification: 1 March 2022
Revised papers: 1 May 2022
Final notification: 1 August 2022
Publication of special issue: October 2022
Submission Guidelines
Manuscripts must be within the scope of the IEEE Transactions on Big Data and the special issue on “Trustable, Verifiable, and Auditable Federated Learning.” Manuscript preparation guidelines are available on the TBD Author Information webpage. All papers will be handled via ScholarOne Manuscripts. Please select “SI: Trustable, Verifiable, and Auditable Federated Learning” as the article type during the submission process. Submissions that are out of the scope of the journal may be rejected.
Guest Editors
- Qiang Yang, Hong Kong University of Science and Technology, Hong Kong (http://www.cs.ust.hk/~qyang/)
- Sin G. Teo, Agency for Science, Technology, and Research, Singapore
- Chao Jin, Agency for Science, Technology, and Research, Singapore
- Han Yu, Nanyang Technological University, Singapore (http://hanyu.sg/)
- Le Zhang, University of Electronic Science and Technology of China, China (https://zhangleuestc.github.io/)