References
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv Preprint arXiv:1810.04805.
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–78.
Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, et al. 2020. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” arXiv Preprint arXiv:2005.11401.
Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” arXiv Preprint arXiv:1301.3781.
Paulheim, Heiko, Jan Portisch, and Petar Ristoski. 2023. Embedding Knowledge Graphs with RDF2vec. Springer Nature. https://doi.org/10.1007/978-3-031-30387-6.
Reimers, Nils, and Iryna Gurevych. 2019. “Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks.” arXiv Preprint arXiv:1908.10084.
Ristoski, Petar, and Heiko Paulheim. 2016. “RDF2Vec: RDF Graph Embeddings for Data Mining.” In The Semantic Web – ISWC 2016: 15th International Semantic Web Conference, 498–514. Springer International Publishing.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” Advances in Neural Information Processing Systems 30.