
Assessing the Quality of Scientific Publications: A Thorough Analysis of Citation-Based and Content-Oriented Metrics for Evaluating Research Impact and Scholarly Contribution

The evaluation of scientific publications is a cornerstone of scholarly research, providing essential insights into the impact, significance, and intellectual contribution of research outputs. Traditional bibliometric indicators, including the Impact Factor (IF), the h-index, and citation counts, have historically been the dominant measures of research quality. However, with the rapid evolution of Artificial Intelligence (AI) and its increasing integration into scientific disciplines, these conventional evaluation methodologies are being reevaluated.

A Dataset Formation Method for Evaluating How Well Pre-trained Language Models Learn the Transitive Relation in the Natural Language Inference Task

A dataset formation method has been developed to verify the ability of pre-trained language models to learn the transitive relation. The generated dataset was used to test how well this relation is learned in the natural language inference (NLI) task. A dataset of 10,000 samples drawn from MultiNLI was used to evaluate the RoBERTa model.
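
The probe behind such a dataset can be illustrated directly: given sentence pairs where A entails B and B entails C, a model that has learned the transitive relation should also predict that A entails C. The sketch below shows one way to run this check with the HuggingFace `transformers` library; the `roberta-large-mnli` checkpoint and the sentence triple are illustrative assumptions, not the paper's actual code or data.

```python
# A minimal sketch of a transitivity probe for an NLI model.
# Assumes the `transformers` library and the public "roberta-large-mnli"
# checkpoint; the example triple is a hypothetical stand-in.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entails(premise: str, hypothesis: str) -> bool:
    """True if the model labels the (premise, hypothesis) pair ENTAILMENT."""
    out = nli({"text": premise, "text_pair": hypothesis})
    pred = out[0] if isinstance(out, list) else out  # wrapping varies by version
    return pred["label"] == "ENTAILMENT"

# If the model predicts A |= B and B |= C, a model that has learned the
# transitive relation should also predict A |= C.
a = "A man is playing an acoustic guitar on stage."
b = "A man is playing a guitar."
c = "A person is playing a musical instrument."

if entails(a, b) and entails(b, c):
    print("Transitive closure A |= C predicted:", entails(a, c))
else:
    print("Premise chain not predicted as entailment; triple is uninformative.")
```

Aggregating this check over many such triples (e.g., the 10,000-sample set described above) would yield an accuracy score for how consistently the model respects transitivity.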