Data Set Formation Method for Checking How Well Language Models Learn the Transitive Relation in the Context of the Natural Language Inference Task

2023; pp. 46–60

¹ Lviv Polytechnic National University, Information Systems and Networks Department
² Lviv Polytechnic National University, Information Systems and Networks Department
³ Lviv Polytechnic National University, Information Systems and Networks Department; Osnabrück University, Institute of Computer Science

A method for data set formation has been developed to verify the ability of pre-trained language models to learn the transitivity relation. The generated data set was used to test how well transitive dependencies are learned in the natural language inference (NLI) task: a set of 10,000 samples derived from MultiNLI was used to test the RoBERTa model. The model was found to learn transitive dependencies well in the logical inference task, since it correctly classified all samples from the formed data set into the similar, contradiction, and neutral classes. It was also found that in this task the similar class is more directional than the contradiction and neutral classes: when the premise and hypothesis in the data set are swapped, the accuracy of the RoBERTa model decreases by factors of 2.97, 1.17, and 1.26 for the similar ($0.98 \rightarrow 0.33$), neutral ($0.90 \rightarrow 0.77$), and contradiction ($0.98 \rightarrow 0.78$) classes, respectively. One iteration of the study takes 0.0028 seconds, so processing even half of the data set requires approximately 84 hours. This research is relevant because the ability of natural language models to capture dependencies such as transitivity, which are not explicitly specified in the training data set, is an important element of a model's ability to generalize.
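The core of the data set formation idea can be sketched as follows: chain labelled NLI pairs so that if (A, B) and (B, C) are both entailments, (A, C) becomes a derived test sample whose label is implied by transitivity but never stated explicitly in the source data; swapping premise and hypothesis then probes how directional a class is. This is a minimal illustration under our own assumptions — the function names, the "entailment" label string, and the toy sentences are ours, not the paper's:

```python
def form_transitive_dataset(pairs):
    """Chain labelled NLI pairs: for every pair of entailments
    (A, B) and (B, C), emit (A, C) as a derived entailment sample."""
    entailments = [(p, h) for p, h, label in pairs if label == "entailment"]
    derived = []
    for a, b1 in entailments:
        for b2, c in entailments:
            if b1 == b2 and a != c:
                derived.append((a, c, "entailment"))
    return derived

def swap_premise_hypothesis(samples):
    """Reverse each (premise, hypothesis) pair to probe how
    directional a relation class is."""
    return [(h, p, label) for p, h, label in samples]

pairs = [
    ("A man is playing a guitar.",
     "A person is playing an instrument.", "entailment"),
    ("A person is playing an instrument.",
     "Music is being made.", "entailment"),
]
derived = form_transitive_dataset(pairs)
# derived now contains the chained sample
# ("A man is playing a guitar.", "Music is being made.", "entailment")
```

Each derived pair (in both the original and the swapped direction) would then be fed to the NLI classifier, and per-class accuracy compared between the two directions.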
