Computer science, Context (archaeology), Prefix, Language model, Process (computing), Popularity, Natural language processing, Artificial intelligence, Semantic memory, Meaning (existential), Machine learning, Linguistics, Psychology, Paleontology, Social psychology, Philosophy, Cognition, Neuroscience, Psychotherapist, Biology, Operating system
Authors
Nusrat Jahan Prottasha,Asif Mahmud,Md. Shohanur Islam Sobuj,Prakash Bhat,Md. Kowsher,Niloofar Yousefi,Özlem Özmen Garibay
Identifier
DOI: 10.1038/s41598-024-75599-4
Abstract
Large Language Models (LLMs) have gained significant popularity in recent years for specialized tasks, where prompting keeps computational cost low. Standard methods such as prefix tuning rely on special, modifiable tokens that lack semantic meaning and require extensive training to perform well, yet often fall short. In this context, we propose a novel method called Semantic Knowledge Tuning (SK-Tuning) for prompt and prefix tuning that employs meaningful words instead of random tokens. A fixed LLM first understands and processes the semantic content of the prompt through its zero-shot capabilities; the processed prompt is then integrated with the input text to improve the model's performance on the target task. Our experimental results show that SK-Tuning achieves faster training, fewer trainable parameters, and superior performance on tasks such as text classification and understanding compared to other tuning methods. This approach offers a promising way to optimize the efficiency and effectiveness of LLMs on language tasks.
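The abstract describes the mechanism only at a high level: a fixed (frozen) LLM encodes a meaningful prompt, and the resulting representation is attached to the task input so that only a small number of parameters need training. The sketch below illustrates one way such a semantic prefix could be wired up in Python with PyTorch and HuggingFace Transformers; it is an assumption-laden reconstruction of the general idea, not the authors' implementation, and the class name SemanticPrefixClassifier, the example prompt text, and the prefix_adapter module are all hypothetical.

```python
# A minimal, hypothetical sketch of the idea behind semantic prefix tuning,
# assuming GPT-2 as the frozen backbone and a binary sentiment task.
# Class, prompt text, and module names are illustrative, not the authors' code.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer


class SemanticPrefixClassifier(nn.Module):
    def __init__(self, model_name="gpt2",
                 prompt_text="Classify the sentiment of this movie review:",
                 num_labels=2):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.tokenizer.pad_token = self.tokenizer.eos_token
        self.llm = AutoModelForCausalLM.from_pretrained(model_name)
        for p in self.llm.parameters():  # the backbone LLM stays frozen
            p.requires_grad = False

        hidden = self.llm.config.hidden_size
        # Encode the meaningful prompt once with the frozen LLM and keep its
        # contextual hidden states as the source of the semantic prefix.
        with torch.no_grad():
            ids = self.tokenizer(prompt_text, return_tensors="pt").input_ids
            states = self.llm(ids, output_hidden_states=True).hidden_states[-1]
        self.register_buffer("prompt_states", states)  # shape (1, P, H)

        # Only this small adapter and the task head are trained.
        self.prefix_adapter = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, texts):
        enc = self.tokenizer(texts, return_tensors="pt",
                             padding=True, truncation=True)
        tok_emb = self.llm.get_input_embeddings()(enc.input_ids)   # (B, T, H)
        prefix = self.prefix_adapter(self.prompt_states)           # (1, P, H)
        prefix = prefix.expand(tok_emb.size(0), -1, -1)
        inputs = torch.cat([prefix, tok_emb], dim=1)               # prepend semantic prefix
        mask = torch.cat([torch.ones(prefix.shape[:2], dtype=enc.attention_mask.dtype),
                          enc.attention_mask], dim=1)
        out = self.llm(inputs_embeds=inputs, attention_mask=mask,
                       output_hidden_states=True)
        h = out.hidden_states[-1]                                  # (B, P+T, H)
        m = mask.unsqueeze(-1).float()
        pooled = (h * m).sum(1) / m.sum(1)                         # mean over real tokens
        return self.head(pooled)


# Usage: only prefix_adapter and head receive gradients during fine-tuning.
model = SemanticPrefixClassifier()
logits = model(["A wonderful, heartfelt film.", "Dull and far too long."])
print(logits.shape)  # torch.Size([2, 2])
```

Under these assumptions the backbone stays frozen, so the trainable footprint is just the linear adapter and the classification head, which is consistent with the abstract's claim of fewer parameters and faster training; the actual SK-Tuning method may differ in how the processed prompt is combined with the input.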