Perception
Computer science
Metric (unit)
Urban planning
Artificial intelligence
Psychology
Data science
Civil engineering
Engineering
Operations management
Neuroscience
Authors
Xiangyuan Ma,Chenyan Ma,Chao Wu,Yuliang Xi,Renfei Yang,Ningyezi Peng,Chen Zhang,Fu Ren
Source
Journal: Cities
[Elsevier]
Date: 2021-01-11
Volume/Issue: 110: 103086
Citations: 144
Identifier
DOI:10.1016/j.cities.2020.103086
Abstract
Ubiquitous and up-to-date geotagged data are increasingly employed to uncover the visual traits of the built environment. However, few prior studies link this theoretical knowledge of street appraisals with operable practices that could inform streetscape transformation. This study proposes a proof-of-concept analytical framework that sheds light on the connections between urban renewal and the quantification of streetscape visual traits. Drawing on a million intensively collected panoramic street view images from Shenzhen, China, the image-segmentation technique SegNet automatically extracts pixelwise semantic information and classifies visual elements. Eye-level perception of the street canyon is then quantified through five indices. Additionally, the framework-derived scores (FDSs) are contrasted with subjective rating scores (SRSs) to report the divergence and coherence between the visually experienced and the quantitatively estimated methods. Furthermore, we investigate the spatial heterogeneity of the five perception aspects, discuss variations in perception outcomes across streets with different functions, and analyze the net effect of urban renewal projects (URPs) on streetscape transformation. We conclude that this deep learning-driven approach provides a feasible paradigm to depict high-resolution streetscape perception, to analyze the fine-scale built environment, and to effectively bridge gaps between street semantic metrics and urban renewal.
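The abstract does not give the formulas behind the five perception indices, but indices of this kind are commonly built from the pixel shares of semantic classes in each segmented panorama. The sketch below is an assumption-laden illustration, not the paper's actual method: the class IDs, class names, and the ratio formulation are all hypothetical stand-ins for whatever SegNet labeling scheme the authors used.

```python
import numpy as np

# Hypothetical class-ID mapping; the paper's SegNet label scheme is not
# specified in the abstract.
CLASS_IDS = {"sky": 0, "building": 1, "vegetation": 2, "road": 3, "sidewalk": 4}

def element_ratio(label_map: np.ndarray, class_id: int) -> float:
    """Fraction of image pixels assigned to one semantic class."""
    return float(np.mean(label_map == class_id))

# Toy 4x4 label map standing in for a full-resolution SegNet prediction.
labels = np.array([
    [0, 0, 2, 2],
    [0, 1, 2, 2],
    [1, 1, 3, 3],
    [1, 4, 3, 3],
])

# Two example ratio-style indices (names assumed for illustration).
green_view_index = element_ratio(labels, CLASS_IDS["vegetation"])  # 4/16 = 0.25
sky_view_factor = element_ratio(labels, CLASS_IDS["sky"])          # 3/16 = 0.1875
```

Aggregating such per-image ratios along street segments would yield the kind of high-resolution, street-level perception maps the abstract describes.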