Computer science
Identifier
Adversarial system
Generative grammar
Artificial intelligence
Generative adversarial network
Data mining
Compatibility (geochemistry)
Machine learning
Theoretical computer science
Deep learning
Geochemistry
Geology
Programming language
Authors
Noseong Park,Mahmoud Mohammadi,Kshitij Gorde,Sushil Jajodia,Hong‐Kyu Park,Youngmin Kim
Source
Journal: Proceedings of the VLDB Endowment
[VLDB Endowment]
Date: 2018-06-01
Volume/Issue: 11 (10): 1071-1083
Citations: 123
Identifiers
DOI:10.14778/3231751.3231757
Abstract
Privacy is an important concern for our society where sharing data with partners or releasing data to the public is a frequent occurrence. Some of the techniques that are being used to achieve privacy are to remove identifiers, alter quasi-identifiers, and perturb values. Unfortunately, these approaches suffer from two limitations. First, it has been shown that private information can still be leaked if attackers possess some background knowledge or other information sources. Second, they do not take into account the adverse impact these methods will have on the utility of the released data. In this paper, we propose a method that meets both requirements. Our method, called table-GAN, uses generative adversarial networks (GANs) to synthesize fake tables that are statistically similar to the original table yet do not incur information leakage. We show that the machine learning models trained using our synthetic tables exhibit performance that is similar to that of models trained using the original table for unknown testing cases. We call this property model compatibility. We believe that anonymization/perturbation/synthesis methods without model compatibility are of little value. We used four real-world datasets from four different domains for our experiments and conducted in-depth comparisons with state-of-the-art anonymization, perturbation, and generation techniques. Throughout our experiments, only our method consistently shows a balance between privacy level and model compatibility.
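The abstract's "model compatibility" property can be illustrated with a small, self-contained sketch: train the same kind of model once on the original table and once on a synthetic stand-in, then compare their accuracy on unseen test records. Everything below is hypothetical — the data generator and the nearest-centroid classifier are illustrative stand-ins, not the paper's table-GAN or its actual evaluation models.

```python
# Sketch of the "model compatibility" check described in the abstract:
# a model trained on a synthetic table should score close to one trained
# on the original table when both are evaluated on unseen real records.
import random

random.seed(0)

def make_table(n, shift):
    # Two-class 2-D rows; class 1 is offset by `shift` on both axes.
    rows = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(label * shift, 1.0)
        y = random.gauss(label * shift, 1.0)
        rows.append((x, y, label))
    return rows

def centroids(rows):
    # "Train" a nearest-centroid classifier: mean point per class.
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for x, y, label in rows:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items()}

def accuracy(model, rows):
    # Predict the class whose centroid is nearest; return fraction correct.
    correct = 0
    for x, y, label in rows:
        pred = min(model, key=lambda c: (x - model[c][0]) ** 2
                                        + (y - model[c][1]) ** 2)
        correct += (pred == label)
    return correct / len(rows)

original = make_table(500, shift=3.0)
synthetic = make_table(500, shift=3.0)  # stands in for a table-GAN sample
test = make_table(500, shift=3.0)       # unseen "real" records

acc_orig = accuracy(centroids(original), test)
acc_synth = accuracy(centroids(synthetic), test)
# Model compatibility: the two accuracies should be close to each other.
compatible = abs(acc_orig - acc_synth) < 0.05
```

In the paper's terms, a synthesis method lacks model compatibility when `acc_synth` falls well below `acc_orig`; the abstract argues that anonymization or perturbation without this property yields data of little practical value.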