HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of Large Multimodal Models Through Coding Tasks

Authors
Fengji Zhang, Lisa Y. Wu, Hui‐Yu Bai, Guancheng Lin, Xiao Li, Xiao Yu, Yue Wang, Bei Chen, Jacky Keung
Source
Venue: arXiv (Cornell University)
Identifier
DOI: 10.48550/arxiv.2410.12381
Abstract

Coding tasks have been valuable for evaluating Large Language Models (LLMs), as they demand the comprehension of high-level instructions, complex reasoning, and the implementation of functional programs -- core capabilities for advancing Artificial General Intelligence. Despite the progress in Large Multimodal Models (LMMs), which extend LLMs with visual perception and understanding capabilities, there remains a notable lack of coding benchmarks that rigorously assess these models, particularly in tasks that emphasize visual reasoning. To address this gap, we introduce HumanEval-V, a novel and lightweight benchmark specifically designed to evaluate LMMs' visual understanding and reasoning capabilities through code generation. HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks derived from platforms like CodeForces and Stack Overflow. Each task is adapted by modifying the context and algorithmic patterns of the original problems, with visual elements redrawn to ensure distinction from the source, preventing potential data leakage. LMMs are required to complete the code solution based on the provided visual context and a predefined Python function signature outlining the task requirements. Every task is equipped with meticulously handcrafted test cases to ensure a thorough and reliable evaluation of model-generated solutions. We evaluate 19 state-of-the-art LMMs using HumanEval-V, uncovering significant challenges. Proprietary models like GPT-4o achieve only 13% pass@1 and 36.4% pass@10, while open-weight models with 70B parameters score below 4% pass@1. Ablation studies further reveal the limitations of current LMMs in vision reasoning and coding capabilities. These results underscore key areas for future research to enhance LMMs' capabilities. We have open-sourced our code and benchmark at https://github.com/HumanEval-V/HumanEval-V-Benchmark.
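To make the task format concrete: each HumanEval-V item pairs an image with a predefined Python function signature, and a model must complete the function body so that the handcrafted test cases pass. The sketch below is hypothetical; the function name, docstring, and test are invented for illustration and are not taken from the benchmark:

    def count_regions(grid: list[list[int]]) -> int:
        """Return the number of regions in the grid, following the rule
        illustrated in the task's accompanying diagram (not shown here)."""
        ...  # the model must infer and implement this body from the image

    # Each task ships with handcrafted tests of roughly this shape:
    # assert count_regions([[1, 1, 0], [0, 0, 1]]) == 2

The reported pass@1 and pass@10 scores presumably follow the standard unbiased pass@k estimator introduced with the original HumanEval benchmark (Chen et al., 2021): sample n candidate solutions per task, count the c that pass every test case, and average 1 - C(n-c, k) / C(n, k) over tasks. A minimal sketch, with sample counts chosen for illustration rather than taken from the paper's setup:

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        # Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k)
        if n - c < k:
            return 1.0  # every size-k draw must include a passing solution
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Illustrative counts only (not the paper's actual sampling setup):
    # if 13 of 100 sampled solutions pass, pass@1 is exactly 0.13
    print(pass_at_k(n=100, c=13, k=1))   # 0.13
    print(pass_at_k(n=100, c=13, k=10))  # larger, since any of 10 draws may pass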