Multimodal Chain-of-Thought Reasoning in Language Models

Author: Valar_Morghulis | Published 2023-02-24 15:31

    Multimodal Chain-of-Thought Reasoning in Language Models

    Feb 2023

    Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola

    [Shanghai Jiao Tong University, Amazon Web Services]

    https://arxiv.org/abs/2302.00923

    https://github.com/amazon-science/mm-cot (2.2k stars)


    Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have focused on the language modality. We propose Multimodal-CoT, which incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation from answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. With Multimodal-CoT, our model under 1 billion parameters outperforms the previous state-of-the-art LLM (GPT-3.5) by 16 percentage points (75.17% → 91.68% accuracy) on the ScienceQA benchmark and even surpasses human performance. Code is publicly available at https://github.com/amazon-science/mm-cot.
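
    To make the two-stage setup concrete, below is a minimal Python sketch of the pipeline the abstract describes: stage one fuses text and vision inputs to generate an intermediate rationale, and stage two infers the answer conditioned on the same inputs plus that rationale. The `model.generate(text=..., vision=...)` interface and the helper names are illustrative assumptions, not the actual API of the amazon-science/mm-cot repository.

```python
# Minimal sketch of the two-stage Multimodal-CoT pipeline from the abstract.
# All names here are illustrative placeholders, not the repository's real API.

from dataclasses import dataclass


@dataclass
class MultimodalExample:
    question: str
    context: str          # textual context accompanying the question
    image_features: list  # vision features, e.g. from a frozen image encoder


def generate_rationale(model, ex: MultimodalExample) -> str:
    """Stage 1: fuse text and vision inputs to generate a reasoning chain."""
    prompt = f"Question: {ex.question}\nContext: {ex.context}\nRationale:"
    return model.generate(text=prompt, vision=ex.image_features)


def infer_answer(model, ex: MultimodalExample, rationale: str) -> str:
    """Stage 2: condition on the same inputs plus the generated rationale."""
    prompt = (f"Question: {ex.question}\nContext: {ex.context}\n"
              f"Rationale: {rationale}\nAnswer:")
    return model.generate(text=prompt, vision=ex.image_features)


def multimodal_cot(model, ex: MultimodalExample) -> str:
    # Separating the stages lets answer inference leverage a rationale that
    # is already grounded in multimodal information, per the paper's framing.
    rationale = generate_rationale(model, ex)
    return infer_answer(model, ex, rationale)
```

    The design point of the separation is that the answer stage consumes a rationale rather than producing one jointly, so errors in rationale generation can be diagnosed and improved independently of answer inference.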
