Keywords: Pace, Architecture, Computer science, Cognitive architecture, Salience (auditing), Artificial intelligence, Closed captioning, Cognition, Human-computer interaction, Data science, Image (mathematics), Psychology, Art, Philosophy, Geodesy, Neuroscience, Visual arts, Geography, Aesthetics, Identification
DOI:10.1177/14780771231170272
Abstract
This paper examines the prevalence of bias in artificial intelligence text-to-image models utilized in the architecture and design disciplines. The pace of advancement in machine learning technologies, particularly text-to-image generators, has increased significantly over the past year, making these tools more accessible to the design community. Accordingly, this paper aims to critically document and analyze the collective, computational, and cognitive biases that designers may encounter when working with these tools at this time. The paper delves into three hierarchical levels of operation and investigates the possible biases present at each level. Starting with the training data for large language models (LLMs), the paper explores how these models may create biases privileging English-language users and perspectives. The paper subsequently investigates the digital materiality of models and how their weights generate specific aesthetic results. Finally, the paper concludes by examining user biases through their prompt and image selections and the potential for platforms to perpetuate these biases through the application of user data during training.
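To make the first level of bias concrete, the following is a minimal sketch, not drawn from the paper, of how a designer might probe the English-language privileging the abstract describes: the same prompt is rendered in several languages and the resulting images compared. It assumes the Hugging Face diffusers library and the publicly available runwayml/stable-diffusion-v1-5 checkpoint; both are illustrative choices, not the models audited in the paper.

```python
# Hypothetical bias probe: generate images from semantically equivalent prompts
# in different languages and compare the outputs by eye or with downstream metrics.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any text-to-image model under study could be substituted.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The same prompt expressed in three languages (translations are illustrative).
prompts = {
    "en": "a traditional house in a rural village",
    "es": "una casa tradicional en un pueblo rural",
    "zh": "乡村里的一座传统房屋",
}

for lang, prompt in prompts.items():
    # Re-seed per prompt so differences come from the prompt language,
    # not from sampling noise.
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"probe_{lang}.png")
```

Fixing the random seed across languages is the key design choice here: it isolates the effect of the prompt's language on the generated imagery, which is one simple way to surface the training-data bias the paper discusses.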