Topics
Computer science
Inference
Task (project management)
Language model
Artificial intelligence
Cognition
Process (computing)
Machine learning
Natural language processing
Psychology
Engineering
Programming language
Systems engineering
Neuroscience
Authors
Woojin Lee, J.-J. Lee, Harksoo Kim
Source
Journal: PeerJ Computer Science
Publisher: PeerJ, Inc.
Date: 2024-12-03
Volume: 10, Article: e2585
Identifier
DOI: 10.7717/peerj-cs.2585
Abstract
Stance detection is a critical task in natural language processing that determines an author’s viewpoint toward a specific target, playing a pivotal role in social science research and various applications. Traditional approaches incorporating Wikipedia-sourced data into small language models (SLMs) to compensate for limited target knowledge often suffer from inconsistencies in article quality and length due to the diverse pool of Wikipedia contributors. To address these limitations, we utilize large language models (LLMs) pretrained on expansive datasets to generate accurate and contextually relevant target knowledge. By providing concise, real-world insights tailored to the stance detection task, this approach surpasses the limitations of Wikipedia-based information. Despite their superior reasoning capabilities, LLMs are computationally intensive and challenging to deploy on smaller devices. To mitigate these drawbacks, we introduce a reasoning distillation methodology that transfers the reasoning capabilities of LLMs to more compact SLMs, enhancing their efficiency while maintaining robust performance. Our stance detection model, LOGIC (LLM-Originated Guidance for Internal Cognitive improvement of small language models in stance detection), is built on Bidirectional and Auto-Regressive Transformer (BART) and fine-tuned with auxiliary learning tasks, including reasoning distillation. By incorporating LLM-generated target knowledge into the inference process, LOGIC achieves state-of-the-art performance on the VAried Stance Topics (VAST) dataset, outperforming advanced models like GPT-3.5 Turbo and GPT-4 Turbo in stance detection tasks.
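The abstract describes distilling an LLM's reasoning into a compact BART-based student by pairing each input with LLM-generated target knowledge. As a rough illustration of that general recipe (not the authors' released code), the minimal sketch below fine-tunes a BART model to generate an LLM-written rationale followed by the stance label; the data fields, prompt format, and checkpoint are all illustrative assumptions.

```python
# Hypothetical sketch of LLM-to-SLM reasoning distillation for stance detection.
# The field names, prompt layout, and checkpoint are assumptions for illustration.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# One toy example: "knowledge" stands in for LLM-generated target knowledge,
# and "rationale" for the LLM's reasoning (the distillation signal).
example = {
    "text": "Electric cars still rely on coal-heavy grids in many regions.",
    "target": "electric vehicles",
    "knowledge": "EVs shift emissions to the power sector; their footprint depends on the grid mix.",
    "rationale": "The post questions the climate benefit of EVs, arguing against the target.",
    "stance": "con",
}

# Encoder input: target + text + injected external knowledge.
source = (
    f"target: {example['target']} "
    f"text: {example['text']} "
    f"knowledge: {example['knowledge']}"
)
# Decoder supervision: reproduce the LLM's reasoning, then the label, so the
# small model learns the justification along with the answer.
target = f"rationale: {example['rationale']} stance: {example['stance']}"

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(text_target=target, return_tensors="pt", truncation=True).input_ids

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # one distillation step; wrap in an optimizer loop to train

# At inference, generate and parse the stance token from the decoded string.
model.eval()
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

At test time the knowledge field would itself be produced by prompting an LLM about the target, and the predicted stance is read off the end of the generated sequence; the paper's actual auxiliary-task losses and architecture details are not reproduced here.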