Abstract
Scientific relation extraction plays a crucial role in constructing scientific knowledge graphs that can contextually integrate knowledge from the scientific literature. However, the large majority of existing efforts do not support human guidance, which hinders refinement of scientific knowledge graph construction and, thus, the natural cycle of scientific knowledge integration. Human–machine collaboration therefore needs to be grounded in learned mechanisms, the prerequisite of which is quantifying the contribution of candidate mechanisms. To address this, we introduce an efficient summation-node architecture that applies a graph neural network (GNN) to semantic patterns in dependency graphs. We then quantify the potential of different levels of semantic invariance to serve as semantic interfaces for the flexible construction of scientific knowledge graphs. Specifically, we posit that collocation-level patterns can enhance both extraction accuracy and F1 scores. Our proposed solutions exhibit promising performance for certain relations under binary-classification configurations, facilitating the learning of more semantic invariance from the word level to the collocation level. In conclusion, we assert that the flexible and robust construction of scientific knowledge graphs will necessitate continual improvements to augment learned semantic invariance. This can be achieved through the development of more integrated and extended input graphs and transformer-based GNN architectures.
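To make the summation-node idea concrete, the following is a minimal NumPy sketch, not the paper's actual implementation: a single-head graph-attention layer runs over a toy dependency graph, a global "summation" node is connected to every token node, and its post-attention embedding is read out as the relation representation for a binary classifier head. All dimensions, the dependency arcs, and the weight initialization are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gat_layer(H, adj, W, a):
    """One single-head graph-attention layer (GAT-style).
    H: (n, d_in) node features; adj: (n, n) 0/1 adjacency with self-loops;
    W: (d_in, d_out) projection; a: (2*d_out,) attention vector."""
    Z = H @ W                                    # project node features
    out = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        nbrs = np.nonzero(adj[i])[0]
        # attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
        e = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU, slope 0.2
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                     # softmax over neighbors
        out[i] = np.tanh(alpha @ Z[nbrs])        # attention-weighted aggregation
    return out

# Toy sentence of 4 tokens; node 4 is the summation node.
n_tok, d = 4, 8
adj = np.eye(n_tok + 1)                          # self-loops
for i, j in [(0, 1), (1, 2), (2, 3)]:            # hypothetical dependency arcs
    adj[i, j] = adj[j, i] = 1
adj[n_tok, :n_tok] = adj[:n_tok, n_tok] = 1      # summation node sees every token

H = rng.standard_normal((n_tok + 1, d))          # placeholder token embeddings
W = rng.standard_normal((d, d)) * 0.1
a = rng.standard_normal(2 * d) * 0.1

H1 = gat_layer(H, adj, W, a)
rel_repr = H1[n_tok]                             # summation-node embedding = relation repr.
logits = rel_repr @ (rng.standard_normal((d, 2)) * 0.1)  # binary relation head
```

The summation node acts as a learned pooling mechanism: instead of averaging token states, attention decides how much each token on the dependency graph contributes to the relation decision.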
Original language | English |
---|---|
Article number | 2276 |
Journal | Electronics (Switzerland) |
Volume | 14 |
Issue number | 11 |
DOIs | |
Publication status | Published - Jun 2025 |
Keywords
- dependency path
- graph attention networks
- scientific relation extraction
- semantic invariance
- summation node
ASJC Scopus subject areas
- Control and Systems Engineering
- Signal Processing
- Hardware and Architecture
- Computer Networks and Communications
- Electrical and Electronic Engineering