1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology; 2. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology
Abstract: Document-level event extraction is generally divided into three subtasks: candidate entity recognition, event detection, and argument recognition. The conventional approach performs these subtasks sequentially in a cascade, which leads to error propagation. Moreover, most existing models predict the number of events only implicitly during decoding and extract event arguments in a predefined event and role order, so earlier predictions cannot take later extraction results into account. To address these issues, this paper proposes a multi-task joint and parallel event extraction framework. First, a pre-trained language model encodes the document sentences. On this basis, the framework detects the event types present in the document, applies a structured self-attention mechanism to obtain pseudo-trigger features, and predicts the number of events for each event type. The pseudo-trigger features then interact with candidate argument features, and the arguments of every event are predicted in parallel, which significantly reduces model training time while achieving performance comparable to the baseline model. The framework attains an overall event-extraction F1 score of 78%, with F1 scores of 98.7% on the event type detection subtask, 90.1% on the event quantity prediction subtask, and 90.3% on the entity recognition subtask.
Key words: document-level event extraction; multi-task joint; pre-trained language model; structured self-attention mechanism; parallel prediction
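The structured self-attention mechanism mentioned in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the hidden size, attention dimension, and number of pseudo-trigger heads (`hidden`, `da`, `r`) are assumptions chosen for illustration. It computes a small set of attention distributions over the encoded tokens and pools them into fixed-size pseudo-trigger feature vectors.

```python
import torch
import torch.nn as nn


class StructuredSelfAttention(nn.Module):
    """Pool a sequence of token encodings into r pseudo-trigger vectors.

    Hypothetical dimensions: hidden = encoder width, da = attention
    projection size, r = number of attention heads / pseudo-triggers.
    """

    def __init__(self, hidden: int = 768, da: int = 256, r: int = 4):
        super().__init__()
        self.w1 = nn.Linear(hidden, da, bias=False)   # project tokens
        self.w2 = nn.Linear(da, r, bias=False)        # score per head

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden) token features from the encoder
        scores = self.w2(torch.tanh(self.w1(h)))      # (batch, seq_len, r)
        attn = torch.softmax(scores, dim=1)           # normalize over tokens
        # Weighted sums of token features -> (batch, r, hidden)
        return attn.transpose(1, 2) @ h


# Usage sketch: pool a batch of two 10-token documents.
pooled = StructuredSelfAttention()(torch.randn(2, 10, 768))
print(pooled.shape)  # torch.Size([2, 4, 768])
```

In the framework described above, vectors of this kind would feed the event-quantity prediction head and interact with candidate argument features for parallel argument decoding.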