Construct (Python library)
Internet privacy
Computer science
Computer security
Psychology
Data science
Programming language
Authors
Philip Menard, Gregory J. Bott
Abstract
To address various business challenges, organisations are increasingly employing artificial intelligence (AI) to analyse vast amounts of data. One application involves consolidating diverse user data into unified profiles, aggregating consumer behaviours to accurately tailor marketing efforts. Although AI provides more convenience to consumers and more efficient and profitable marketing for organisations, the act of aggregating data into behavioural profiles for use in machine learning algorithms introduces significant privacy implications for users, including unforeseeable personal disclosure, outcomes biased against marginalised population groups and organisations' inability to fully remove data from AI systems on consumer request. Although these implementations of AI are rapidly altering the way consumers perceive information privacy, researchers have thus far lacked an accurate method for measuring consumers' privacy concerns related to AI. In this study, we aim to (1) validate a scale for measuring privacy concerns related to AI misuse (PC‐AIM) and (2) examine the effects that PC‐AIM has on nomologically related constructs under the APCO framework. We provide evidence demonstrating the validity of our newly developed scale. We also find that PC‐AIM significantly increases risk beliefs and personal privacy advocacy behaviour, while decreasing trusting beliefs. Trusting beliefs and risk beliefs do not significantly affect behaviour, which differs from prior privacy findings. We further discuss the implications of our work on both research and practice.