Keywords: Association (psychology), Computer science, Fusion, Nanotechnology, Artificial intelligence, Materials science, Psychology, Linguistics, Philosophy, Psychotherapist
Authors
Sijie Ma,Yue Zhou,Tianqing Wan,Qinqi Ren,Jian‐Min Yan,Lingwei Fan,Huanmei Yuan,Mansun Chan,Yang Chai
Source
Journal: Nano Letters
Publisher: American Chemical Society
Date: 2024-05-28
Volume/Issue: 24 (23): 7091-7099
Cited by: 4
Identifiers
DOI: 10.1021/acs.nanolett.4c01727
Abstract
Multimodal perception can capture more precise and comprehensive information than unimodal approaches. However, current sensory systems typically merge multimodal signals at computing terminals after parallel processing and transmission, which risks losing spatial-association information and requires time stamps to maintain temporal coherence for time-series data. Here we demonstrate bioinspired in-sensor multimodal fusion, which effectively enhances comprehensive perception and reduces data transfer between sensory terminals and computation units. By adopting floating-gate phototransistors with reconfigurable photoresponse plasticity, we realize agile spatial and spatiotemporal fusion under nonvolatile and volatile photoresponse modes. For optimal spatial estimation, we integrate spatial information from visual–tactile signals. For dynamic events, we capture and fuse spatiotemporal information from visual–audio signals in real time, accomplishing a dance–music synchronization recognition task without time stamping. This in-sensor multimodal fusion approach offers the potential to simplify multimodal integration systems, extending the in-sensor computing paradigm.
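The "optimal spatial estimation" from visual–tactile signals mentioned in the abstract is, in the multisensory-fusion literature, commonly formulated as inverse-variance-weighted (maximum-likelihood) combination of independent Gaussian cue estimates. The sketch below illustrates that standard formulation only; it is not the paper's implementation, and the function name fuse_estimates as well as all numeric values are hypothetical.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Inverse-variance-weighted (maximum-likelihood) fusion of
    independent Gaussian cue estimates.

    means, variances: per-modality position estimates and their
    noise variances. Returns the fused estimate and its variance.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Each cue is weighted by its reliability (inverse variance).
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused_mean = np.sum(weights * means)
    # The fused variance is smaller than any single-cue variance.
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused_mean, fused_var

# Hypothetical example: a visual and a tactile estimate of the same
# object position (arbitrary units); values are illustrative only.
visual_pos, visual_var = 10.2, 0.5   # visual cue: more precise
tactile_pos, tactile_var = 9.4, 2.0  # tactile cue: noisier
pos, var = fuse_estimates([visual_pos, tactile_pos],
                          [visual_var, tactile_var])
print(f"fused position = {pos:.2f}, variance = {var:.2f}")
```

Here the fused variance (0.4) is lower than either single-cue variance (0.5 and 2.0), which is the quantitative sense in which multimodal fusion outperforms unimodal estimation.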