Computer science
Artificial intelligence
Grammar
Natural language processing
Convolutional neural network
Deep learning
Task (project management)
Artificial neural network
Graph
Machine learning
Theoretical computer science
Philosophy
Linguistics
Economics
Management
Authors
Dimmy Magalhães, Ricardo H. R. Lima, Aurora Pozo
Identifier
DOI:10.1016/j.asoc.2023.110009
Abstract
Text classification is one of the core Natural Language Processing (NLP) tasks. Its objective is to label textual elements such as phrases, queries, paragraphs, and documents. Several approaches in NLP have achieved promising results on this task. Deep Learning-based approaches have been widely used in this context, with deep neural networks (DNNs) providing both a representation of the data and a learning model. The increasing scale and complexity of DNN architectures was to be expected, creating new challenges in designing and configuring the models. In this paper, we present a study on the application of a grammar-based evolutionary approach to the design of DNNs, using models based on Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Graph Neural Networks (GNNs). We propose different grammars, defined to capture the features of each type of network, as well as some combinations, and verify their impact on the produced designs and on the performance of the generated models. By modifying Grammatical Evolution (GE), we create a grammar able to generate different networks specialized in text classification; the approach is composed of three main components: the grammar, the mapping, and the search engine. Our results point to promising future research directions, showing that the generated architectures achieve performance comparable to that of their manually designed counterparts and can still be further improved. We were able to improve the results of a manually structured neural network by 8.18% in the best case.
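The grammar–mapping–search pipeline named in the abstract follows the standard Grammatical Evolution recipe: a variable-length list of integer codons (the genotype) is decoded against a BNF-style grammar into an architecture description (the phenotype), which the evolutionary search engine then trains and scores. The following is a minimal, illustrative Python sketch of that genotype-to-phenotype mapping step only; the toy grammar, layer names, and parameter values are assumptions for illustration, not the grammars proposed in the paper.

```python
# Minimal sketch of Grammatical Evolution genotype-to-phenotype mapping
# for neural-architecture strings. The grammar below is a toy example
# mixing convolutional and LSTM blocks (illustrative assumption, not the
# paper's grammars).

TOY_GRAMMAR = {
    "<arch>":  [["<block>"], ["<block>", " -> ", "<arch>"]],
    "<block>": [["<conv>"], ["<lstm>"]],
    "<conv>":  [["Conv1D(filters=", "<n>", ", kernel=", "<k>", ")"]],
    "<lstm>":  [["LSTM(units=", "<n>", ")"]],
    "<n>":     [["64"], ["128"], ["256"]],
    "<k>":     [["3"], ["5"]],
}

def map_genotype(genotype, grammar, start="<arch>", max_wraps=2):
    """Expand the start symbol left to right; each codon, taken modulo the
    number of available productions, selects a rule, as in standard GE."""
    symbols = [start]
    output, idx, wraps = [], 0, 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in grammar:            # terminal symbol: copy to output
            output.append(sym)
            continue
        if idx >= len(genotype):          # wrap around the codon list
            wraps += 1
            if wraps > max_wraps:
                raise ValueError("mapping failed: too many wraps")
            idx = 0
        choices = grammar[sym]
        production = choices[genotype[idx] % len(choices)]
        idx += 1
        symbols = list(production) + symbols   # leftmost expansion
    return "".join(output)

if __name__ == "__main__":
    # A genotype is just a list of integer codons; the evolutionary search
    # engine would normally produce and mutate these. Fixed here for demo.
    genotype = [1, 0, 0, 2, 1, 0, 1, 0]
    print(map_genotype(genotype, TOY_GRAMMAR))
```

With the fixed codon list above, the sketch prints a single architecture string, Conv1D(filters=256, kernel=5) -> LSTM(units=128); in the full approach, the search engine evolves populations of such codon lists and the decoded networks are trained on the text-classification task to obtain their fitness.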