Self-adaptive Privacy Concern Detection for User-generated Content
Umeå University, Faculty of Science and Technology, Department of Computing Science (Database and Data Mining Group). ORCID iD: 0000-0001-8820-2405
Umeå University, Faculty of Science and Technology, Department of Computing Science (Database and Data Mining Group)
2018 (English). In: Proceedings of the 19th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing), 2018. Cornell University Library, arXiv.org, 2018. Conference paper, published paper (Other academic).
Abstract [en]

To protect user privacy in data analysis, a state-of-the-art strategy is differential privacy, in which calibrated statistical noise is injected into the true analysis output. The noise masks individuals' sensitive information contained in the dataset. Determining the amount of noise is a key challenge, however: too much noise destroys data utility, while too little increases privacy risk. Although previous research has designed mechanisms to protect data privacy in various scenarios, most existing studies assume uniform privacy concerns across all individuals. Consequently, applying an equal amount of noise to everyone leaves some users insufficiently protected while over-protecting others. To address this issue, we propose a self-adaptive approach to privacy concern detection based on user personality. Our experimental studies demonstrate that the approach effectively provides suitable personalized privacy protection for cold-start users (i.e., users whose privacy-concern information is absent from the training data).
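The noise-calibration trade-off the abstract describes is commonly realized with the Laplace mechanism. A minimal stdlib-only Python sketch, assuming the standard Laplace mechanism rather than the paper's own implementation (function name and parameters are illustrative):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means more noise: stronger privacy, lower utility.
    Larger epsilon means less noise: weaker privacy, higher utility.
    """
    scale = sensitivity / epsilon
    # Laplace(0, scale) sampled as the difference of two exponential draws;
    # 1.0 - random.random() is always in (0, 1], so log() is safe.
    e1 = -math.log(1.0 - random.random())
    e2 = -math.log(1.0 - random.random())
    return true_value + scale * (e1 - e2)
```

Choosing epsilon per user, rather than one global value, is exactly the personalization gap the paper targets: a uniform epsilon over-protects privacy-indifferent users and under-protects privacy-sensitive ones.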

Place, publisher, year, edition, pages
Cornell University Library, arXiv.org, 2018.
Keywords [en]
privacy-guaranteed data analysis, deep learning, multi-layer perceptron
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:umu:diva-146470
OAI: oai:DiVA.org:umu-146470
DiVA id: diva2:1196463
Conference
19th International Conference on Computational Linguistics and Intelligent Text Processing, Hanoi, Vietnam, March 18-24, 2018
Projects
Privacy-aware Data Federation
Available from: 2018-04-10. Created: 2018-04-10. Last updated: 2019-10-29. Bibliographically approved.
In thesis
1. Privacy-awareness in the era of Big Data and machine learning
2019 (English)Licentiate thesis, comprehensive summary (Other academic)
Alternative title [sv]
Integritetsmedvetenhet i eran av Big Data och maskininlärning
Abstract [en]

Social Network Sites (SNS) such as Facebook and Twitter play a major role in our lives. On the one hand, they connect people who would otherwise never have been connected, and many recent breakthroughs in AI, such as facial recognition [49], were achieved thanks to the amount of data available on the Internet via SNS (hereafter Big Data). On the other hand, privacy concerns lead many people to avoid SNS altogether. Much like the early Internet protocols with respect to security, Machine Learning (ML), the core of AI, was not designed with privacy in mind. For instance, Support Vector Machines (SVMs) solve a quadratic optimization problem by deciding which instances of the training dataset become support vectors. This means that the data of the people involved in training is published within the SVM model itself. Privacy guarantees must therefore cover even worst-case outliers, while data utility still has to be preserved.
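The SVM leakage argument above can be made concrete with a related instance-based learner. The sketch below is not an SVM but a simple kernel-perceptron analogue (all names are illustrative): like a published SVM's support vectors, the trained "model" is literally a stored subset of the raw training points.

```python
def kernel_perceptron(X, y, epochs=10):
    """Train a linear-kernel perceptron whose model is a stored list of
    training points, illustrating how instance-based models such as SVMs
    expose verbatim training data when the model is published."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    stored = []  # (training point, label) pairs kept inside the model

    def predict(x):
        score = sum(label * dot(x, point) for point, label in stored)
        return 1 if score >= 0 else -1

    for _ in range(epochs):
        for point, label in zip(X, y):
            if predict(point) != label:  # mistake: memorize this point
                stored.append((point, label))
    return stored, predict
```

Anyone who receives `stored` receives exact rows of the training set, which is the privacy exposure the thesis sets out to control.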

For the above reasons, this thesis studies: (1) how to construct a data federation infrastructure with privacy guarantees in the Big Data era; and (2) how to protect privacy while learning ML models with a good trade-off between data utility and privacy. For (1), we propose frameworks empowered by privacy-aware algorithms that satisfy differential privacy, the state-of-the-art formal privacy guarantee. For (2), we propose neural network architectures that capture the sensitivity of user data, from which the algorithm itself decides how much it should learn from user data to protect privacy while achieving good performance on a downstream task. The current outcomes of the thesis are: (1) a privacy-guaranteed data federation infrastructure for analysis of sensitive data; (2) privacy-guaranteed algorithms for data sharing; and (3) privacy-concern analysis of social network data. The research methods include experiments on a real-life social network dataset to evaluate aspects of the proposed approaches.

Insights and outcomes from this thesis can be used by both academia and industry to guarantee privacy in the analysis and sharing of personal data. They also have the potential to facilitate further research in privacy-aware representation learning and related evaluation methods.

Place, publisher, year, edition, pages
Umeå: Department of computing science, Umeå University, 2019. p. 42
Series
Report / UMINF, ISSN 0348-0542 ; 19.06
Keywords
Differential Privacy, Machine Learning, Deep Learning, Big Data
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-162182
ISBN: 9789178551101
Presentation
2019-09-09, 23:40 (English)
Supervisors
Available from: 2019-08-22. Created: 2019-08-15. Last updated: 2019-08-26. Bibliographically approved.

Open Access in DiVA

fulltext (3508 kB)
File information
File name: FULLTEXT01.pdf
File size: 3508 kB
Checksum (SHA-512):
08bf7c07244a5fcba3edd53d786c28ab4e64d37de9917596e9afbdb97b2add08eac9060cf622d6ad20ae8b62d9cde94c9b5444eb4f114d1e2ba3fc76375abcfa
Type: fulltext. Mimetype: application/pdf


Authority records
Vu, Xuan-Son; Jiang, Lili
