Table 5 Analysis of membership inference attacks based on devised examination criteria

From: Machine learning security and privacy: a review of threats and countermeasures

| Reference | Machine learning model/algorithm | Attack type | Exploited vulnerability | Attacker's knowledge | Attacker's goals | Attack severity and impact | Defined threat model | Targeted feature |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Z. Zhu et al. [70], 2023 | Multi-layer perceptron | MIA on a sequential recommendation system | Surrogate and shadow models are designed to extract recommendations | Black-box attack | Infer user recommendations | Inferring sequential recommendations exposes personalized user details | Yes | Dataset inference |
| J. Chen et al. [71], 2021 | Lasso regression, CNN | MIA with a shadow model | A shadow model is used to mimic the ground truth | White-box attack | Retrieve confidential details of the target model | Differential privacy mitigates MIA at the cost of model accuracy | No | Model inference |
| M. Zhang et al. [72], 2021 | Neural network-based recommender system | Inference attack to extract user-level details | An adversarial model is trained on stolen users' private data | Black-box attack | Retrieve private details of the victim model | Popularity randomization is effective against MIA in recommender systems | Yes | Model privacy |
| Y. Zou et al. [73], 2020 | Deep neural networks | Transfer learning-based black-box attack | No privacy preservation in the transfer learning model | Black-box attack | Infer training-model details via three formulated attacks | Transfer learning is under serious threat from MIA | Yes | Model inference |
| J. Jia et al. [74], 2019 | Neural network | MIA against a binary classifier | Output confidence scores are interpreted to extract model details | Black-box attack | Retrieve private training data of the classifier | Existing defenses are sensitive to the dataset used by the classifier | No | Dataset inference |
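
Several of these attacks ([71], [74]) follow the same shadow-model recipe: train a shadow model on data drawn from the same distribution as the target's private training set, label its output confidence vectors as member/non-member, train a binary attack classifier on them, and then query the target model in a black-box fashion, feeding the returned confidence scores to that classifier. Below is a minimal sketch of this recipe using scikit-learn on synthetic data; every model choice, split size, and hyperparameter here is an illustrative assumption, not taken from the cited papers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a sensitive dataset.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

# Target model: trained on a "private" half the attacker never sees.
X_priv, X_rest, y_priv, y_rest = train_test_split(X, y, test_size=0.5, random_state=1)
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_priv, y_priv)

# Shadow model: the attacker trains it on same-distribution data to mimic
# the target's behavior (the "mimic ground truth" step of [71]).
X_in, X_out, y_in, y_out = train_test_split(X_rest, y_rest, test_size=0.5, random_state=2)
shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

# Attack training set: the shadow model's confidence vectors, labeled
# 1 = member of the shadow training set, 0 = non-member.
A_X = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
A_y = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])

# Attack model: a plain binary classifier over confidence vectors.
# (Published attacks usually also condition on the true class label;
# that refinement is omitted here for brevity.)
attack = LogisticRegression().fit(A_X, A_y)

# Black-box evaluation against the target: only predict_proba is queried.
members = attack.predict(target.predict_proba(X_priv))      # true members
non_members = attack.predict(target.predict_proba(X_rest))  # true non-members
balanced_acc = (members.mean() + (1 - non_members).mean()) / 2
print(f"balanced membership-inference accuracy: {balanced_acc:.2f}")
```

A balanced accuracy near 0.5 means the attack cannot distinguish members from non-members; values well above 0.5 indicate membership leakage of the kind that differential privacy [71] and popularity randomization [72] are intended to suppress.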