Structured
Grounded theory
Computer science
Sociology
Political science
Social science
Qualitative research
Law
Authors
Zeyi Huang, Yuyang Ji, Xiaofang Wang, Nikhil Mehta, Tong Xiao, D. John Lee, Sigmund Vanvalkenburgh, Shengxin Zha, Bolin Lai, Licheng Yu, Qinyu Zhang, Yong Jae Lee, Miao Liu
Source
Journal: Cornell University - arXiv
Date: 2025-01-08
Identifier
DOI: 10.48550/arxiv.2501.04336
Abstract
Long-form video understanding with Large Vision Language Models is challenged by the need to analyze temporally dispersed yet spatially concentrated key moments within limited context windows. In this work, we introduce VideoMindPalace, a new framework inspired by the "Mind Palace", which organizes critical video moments into a topologically structured semantic graph. VideoMindPalace organizes key information through (i) hand-object tracking and interaction, (ii) clustered activity zones representing specific areas of recurring activities, and (iii) environment layout mapping, allowing natural language parsing by LLMs to provide grounded insights on spatio-temporal and 3D context. In addition, we propose the Video MindPalace Benchmark (VMB), to assess human-like reasoning, including spatial localization, temporal reasoning, and layout-aware sequential understanding. Evaluated on VMB and established video QA datasets, including EgoSchema, NExT-QA, IntentQA, and the Active Memories Benchmark, VideoMindPalace demonstrates notable gains in spatio-temporal coherence and human-aligned reasoning, advancing long-form video analysis capabilities in VLMs.
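The abstract describes organizing key video moments into a topologically structured semantic graph with three node types: hand-object interactions, activity zones, and environment layout. As a rough illustration of that idea only, here is a minimal sketch of such a graph; all names (`MomentNode`, `SemanticGraph`, the `occurs_in` relation) are hypothetical and are not the paper's actual API or data model.

```python
from dataclasses import dataclass, field

@dataclass
class MomentNode:
    """One node of the hypothetical semantic graph."""
    node_id: str
    kind: str           # assumed kinds: "hand_object", "activity_zone", "layout"
    timestamps: list    # seconds into the video where the moment occurs
    label: str = ""

@dataclass
class SemanticGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (src, dst, relation) triples

    def add_node(self, node: MomentNode) -> None:
        self.nodes[node.node_id] = node

    def connect(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def moments_in_zone(self, zone_id: str) -> list:
        """All hand-object moments linked to a given activity zone."""
        return [self.nodes[s] for s, d, r in self.edges
                if d == zone_id and r == "occurs_in"
                and self.nodes[s].kind == "hand_object"]

# Toy usage: one hand-object moment grounded in one activity zone.
g = SemanticGraph()
g.add_node(MomentNode("n1", "hand_object", [12.0, 14.5], "pick up mug"))
g.add_node(MomentNode("z1", "activity_zone", [], "kitchen counter"))
g.connect("n1", "z1", "occurs_in")
print([m.label for m in g.moments_in_zone("z1")])  # ['pick up mug']
```

In the paper's pipeline such a graph would presumably be serialized to text so that an LLM can parse it and answer spatio-temporal questions; this sketch only shows the structural idea of grouping moments by zone.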