Anglia Ruskin Research Online (ARRO)

Analyzing the vulnerabilities in split federated learning: assessing the robustness against data poisoning attacks

Journal contribution, posted on 2025-09-30, authored by Aysha-Thahsin Zahir-Ismail and Raj Shukla
Distributed Collaborative Machine Learning (DCML) offers a promising alternative for addressing the privacy concerns of centralized machine learning. Split Learning (SL) and Federated Learning (FL) are two effective learning approaches within DCML, and Split Federated Learning (SFL), which combines elements of both, has recently attracted growing interest. This research presents a comprehensive study and analysis of the impact of data poisoning attacks on SFL. We propose three attack strategies: untargeted attacks, targeted attacks, and distance-based attacks, all of which aim to degrade the performance of the DCML classifier. We evaluate the proposed attack strategies in two case studies: electrocardiogram (ECG) signal classification and automatic handwritten digit recognition (the MNIST dataset). We conducted a series of attack experiments, varying the percentage of malicious clients and the layer at which the model is split between the clients and the server. A comprehensive analysis of the attack strategies reveals that distance-based and untargeted poisoning attacks degrade classifier outcomes more effectively than targeted attacks in SFL.

Item sub-type: research-article, Journal Article

Refereed: Yes

Volume: 15

Publication title: Scientific Reports

ISSN: 2045-2322

Publisher: Springer Science and Business Media LLC

Location: England

File version: Published version

Language: English

Media of output: Electronic

Affiliated with: School of Computing and Information Science Outputs
