Computer Science
Artificial Intelligence
Process (Computing)
Generative Grammar
Semantics (Computer Science)
Margin (Machine Learning)
Natural Language Processing
Coding (Set Theory)
Programming Language
Machine Learning
Set (Abstract Data Type)
Authors
Yang Jin, Kun Xu, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Quzhe Huang, Bin Chen, Chenyi Lei, An Liu, Chengru Song, Xiaoqiang Lei, Di Zhang, Wenwu Ou, Kun Gai, Yadong Mu
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 2
Identifier
DOI:10.48550/arxiv.2309.04669
Abstract
Recently, the remarkable advance of the Large Language Model (LLM) has inspired researchers to transfer its extraordinary reasoning capability to both vision and language data. However, the prevailing approaches primarily regard the visual input as a prompt and focus exclusively on optimizing the text generation process conditioned upon vision content by a frozen LLM. Such an inequitable treatment of vision and language heavily constrains the model's potential. In this paper, we break through this limitation by representing both vision and language in a unified form. Specifically, we introduce a well-designed visual tokenizer that translates a non-linguistic image into a sequence of discrete tokens, like a foreign language that the LLM can read. The resulting visual tokens carry high-level semantics comparable to words and support a dynamic sequence length that varies with the image. Equipped with this tokenizer, the presented foundation model, called LaVIT, can handle image and text indiscriminately under the same generative learning paradigm. This unification empowers LaVIT to serve as a generalist interface that understands and generates multi-modal content simultaneously. Extensive experiments further show that it outperforms existing models by a large margin on a wide range of vision-language tasks. Our code and models are available at https://github.com/jy0205/LaVIT.
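To make the core idea concrete, the sketch below illustrates, in toy form, how a visual tokenizer can quantize image patches into discrete "visual words" that share one autoregressive sequence with text tokens. This is a minimal illustration under stated assumptions, not the official LaVIT implementation (see the repository above): the module sizes, codebook size, special-token layout, and the omission of LaVIT's dynamic-length token selection are all simplifications introduced here.

```python
# Hypothetical sketch of unified image/text tokenization (not the LaVIT code).
import torch
import torch.nn as nn

class ToyVisualTokenizer(nn.Module):
    """Map image patches to indices in a learned codebook (vector quantization)."""
    def __init__(self, patch_dim=3 * 16 * 16, embed_dim=256, codebook_size=1024):
        super().__init__()
        self.encoder = nn.Linear(patch_dim, embed_dim)           # toy patch encoder
        self.codebook = nn.Embedding(codebook_size, embed_dim)   # discrete visual vocabulary

    def forward(self, patches):
        # patches: (num_patches, patch_dim)
        z = self.encoder(patches)                                 # (num_patches, embed_dim)
        dists = torch.cdist(z, self.codebook.weight)              # distance to every codebook entry
        return dists.argmin(dim=-1)                               # one discrete token id per patch

# Illustrative id layout so visual and text tokens live in one shared vocabulary.
TEXT_VOCAB = 32000
BOI, EOI = TEXT_VOCAB, TEXT_VOCAB + 1      # hypothetical <image> / </image> markers
VISUAL_OFFSET = TEXT_VOCAB + 2

tokenizer = ToyVisualTokenizer()
patches = torch.randn(196, 3 * 16 * 16)                           # 14x14 grid of 16x16 RGB patches
visual_ids = tokenizer(patches) + VISUAL_OFFSET                   # shift into the visual id range

text_ids = torch.tensor([101, 2054, 2003, 1999, 1996, 3746])      # placeholder text token ids
# One flat sequence: the LLM predicts the next token, whether it is a word or a
# visual code, under the same generative (next-token) objective.
sequence = torch.cat([torch.tensor([BOI]), visual_ids, torch.tensor([EOI]), text_ids])
print(sequence.shape)  # torch.Size([204])
```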