Large Language Models

Information Technology for Text Classification Tasks Using Large Language Models

The article addresses the problem of text classification in the context of growing information flows and the need for automated content analysis. A universal information technology is proposed that combines classical machine learning methods with the capabilities of Large Language Models for processing news, scientific, literary, journalistic, and legal texts. On the BBC News corpus (2225 texts), k-means clustering over TF-IDF features produced clear thematic groupings.
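
As an illustration of the clustering pipeline described above, the following is a minimal sketch of TF-IDF vectorization followed by k-means, assuming the BBC News texts are available as a list of strings (the `documents` placeholder and the chosen parameters are illustrative assumptions, not details taken from the article).

```python
# Minimal sketch: TF-IDF + k-means thematic clustering with scikit-learn.
# `documents` is a placeholder for the 2225 BBC News texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = ["..."]  # replace with the loaded BBC News corpus

# Convert texts into TF-IDF feature vectors.
vectorizer = TfidfVectorizer(stop_words="english", max_features=10000)
X = vectorizer.fit_transform(documents)

# Cluster into 5 thematic groups (BBC News covers business,
# entertainment, politics, sport, and tech).
kmeans = KMeans(n_clusters=5, random_state=42, n_init=10)
labels = kmeans.fit_predict(X)

# Inspect the most characteristic terms of each cluster.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top = centroid.argsort()[-10:][::-1]
    print(f"Cluster {i}:", ", ".join(terms[j] for j in top))
```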

Research on the Organic Traffic Optimization System for E-Commerce Platforms Using Large Language Models

The paper explores the use of large language models (LLMs) to optimize SEO processes and increase organic traffic for e-commerce platforms. It considers how LLM-based tools can scalably adapt large volumes of content to the requirements of search algorithms. A comparative analysis of automated SEO optimization methods and traditional manual tuning is conducted on an e-commerce platform with a wide product range and a high level of traffic.
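
A hedged sketch of what such LLM-based content adaptation might look like in practice, assuming access to a chat-completion API; the model name, prompt wording, and the `generate_meta_description` helper are assumptions for illustration, not the implementation described in the paper.

```python
# Illustrative sketch (not the paper's implementation): batch-generating
# SEO meta descriptions for product pages with a chat-completion API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_meta_description(product_title: str, keywords: list[str]) -> str:
    """Ask the model for a ~155-character meta description built around given keywords."""
    prompt = (
        f"Write an SEO meta description (max 155 characters) for the product "
        f"'{product_title}'. Naturally include the keywords: {', '.join(keywords)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# Usage over a hypothetical product catalogue:
catalogue = [("Wireless Ergonomic Mouse", ["ergonomic mouse", "wireless", "office"])]
for title, kw in catalogue:
    print(title, "->", generate_meta_description(title, kw))
```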

SED-UA-Small: Ukrainian Synthetic Dataset for Text Embedding Models

This paper presents the Small Synthetic Embedding Dataset (SED-UA-Small), a fully synthetic Ukrainian dataset designed for training, fine-tuning, and evaluating text embedding models. Using large language models (LLMs) makes it possible to control the diversity of the generated data along dimensions such as NLP task, asymmetry between queries and documents, the presence of instructions, support for multiple languages, and the avoidance of social biases. A zero-shot generation approach was used to create a set of Ukrainian query-document pairs with corresponding similarity scores.
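
To make the zero-shot generation step concrete, below is a hedged sketch of producing one Ukrainian query-document pair with a similarity score; the prompt text, JSON schema, and model name are assumptions for illustration and do not reproduce the dataset's actual pipeline.

```python
# Illustrative sketch: zero-shot generation of a Ukrainian query-document
# pair with a similarity score, returned as JSON.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Generate, in Ukrainian, a search query, a short document that answers it, "
    "and a similarity score between 0 and 1. "
    'Return JSON with the keys "query", "document", and "score".'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},
)
pair = json.loads(response.choices[0].message.content)
print(pair["query"], "|", pair["document"], "|", pair["score"])
```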

Capabilities and Limitations of Large Language Models

The work is dedicated to the study of large language models (LLMs) and approaches to improving their efficiency when integrated into a new service. The rapid development of LLMs based on the transformer architecture has opened up new possibilities in natural language processing and the automation of various tasks. However, realizing the full potential of these models requires a thorough approach and consideration of numerous factors.

Prompting Techniques for Enhancing the Use of Large Language Models

The work is dedicated to the study of fundamental prompting techniques for improving the efficiency of large language models (LLMs). Significant attention is given to prompt engineering. Various techniques are examined in detail: zero-shot prompting, feedback prompting, few-shot prompting, chain-of-thought, tree of thoughts, and instruction tuning. Special emphasis is placed on ReAct (Reason and Act) prompting and Retrieval-Augmented Generation (RAG) as critical factors in ensuring effective interaction with LLMs.
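
As a small illustration of two of the listed techniques, the sketch below assembles a few-shot chain-of-thought prompt as plain text; the worked examples and the task are invented for demonstration and are not taken from the paper.

```python
# Minimal sketch: assembling a few-shot chain-of-thought prompt.
# The worked examples are invented for illustration only.
FEW_SHOT_COT_EXAMPLES = """\
Q: A shop sells pens at 3 UAH each. How much do 4 pens cost?
A: Each pen costs 3 UAH, so 4 pens cost 4 * 3 = 12 UAH. The answer is 12.

Q: A train covers 60 km in 1 hour. How far does it travel in 2.5 hours?
A: The speed is 60 km/h, so in 2.5 hours it covers 60 * 2.5 = 150 km. The answer is 150.
"""

def build_cot_prompt(question: str) -> str:
    """The few-shot examples demonstrate the step-by-step reasoning format;
    the model is prompted to continue the same pattern for a new question."""
    return f"{FEW_SHOT_COT_EXAMPLES}\nQ: {question}\nA:"

print(build_cot_prompt("A box holds 12 eggs. How many eggs are in 7 boxes?"))
```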

Understanding Large Language Models: The Future of Artificial Intelligence

The article examines one of the newest directions in artificial intelligence: Large Language Models, which open a new era in natural language processing by enabling more flexible and adaptive systems. They achieve a high level of contextual understanding, which enriches the user experience and expands the fields of application of artificial intelligence. Large language models have enormous potential to redefine human interaction with technology and change the way we think about machine learning.