EFFICIENCY OF LLM INSTRUCTION FORMATS FOR CLASS IMBALANCE PROBLEMS IN TRAINING DATA FOR PREDICTIVE MONITORING SYSTEMS

2025, pp. 75–81

Lviv Polytechnic National University

The article examines approaches to formatting tabular data (HTML, XML, Markdown, CSV) for the subsequent generation of synthetic samples with large language models (LLMs) in predictive monitoring tasks. Because real-world data are often characterized by class imbalance, generating additional samples helps balance training datasets and thereby improves model performance. At the same time, processing speed and query cost become important concerns, since both depend largely on how many input tokens the chosen tabular representation requires. The study analyzes computational resource consumption and query processing time for LLMs as a function of the tabular data format. Although HTML provides the highest accuracy according to [1], it also requires substantially more tokens because of its verbose markup, which considerably increases the volume of input data and the overall query processing time. In contrast, more compact formats (Markdown and CSV) require significantly fewer tokens, speeding up processing and reducing the cost of interaction with the model. A slight loss of accuracy relative to HTML may be an acceptable trade-off, especially when the goal is to substantially expand the training dataset to compensate for the scarcity of examples of non-standard conditions. This approach proves effective in predictive monitoring systems, where response time and the volume of processed data directly affect the speed of anomaly detection and overall system resilience. The results confirm that Markdown and CSV, owing to their smaller input volume, reduce query processing time and the cost of generating synthetic training samples, while HTML and XML remain potentially useful in tasks where preserving complex structure and additional metadata is paramount, although these formats demand significantly more resources. The choice of a tabular data representation format should therefore take into account the specific system requirements and operating environment, ranging from hardware limitations and token-based pricing to the required query processing time.
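To make the token-footprint comparison concrete, the sketch below serializes one small table into the four formats discussed and counts the resulting input tokens. This is a minimal illustration, not the study's measurement procedure: the feature names and rows are hypothetical (loosely modeled on network-flow records such as those in the LUFlow dataset [4]), and the cl100k_base encoding from the tiktoken library serves as a proxy tokenizer, since actual counts depend on the target model.

# Minimal sketch: token footprint of one table in CSV, Markdown, HTML, XML.
# Sample rows and feature names are hypothetical; tiktoken's cl100k_base
# encoding is used as a proxy for a production model's tokenizer.
import tiktoken

HEADERS = ["src_port", "dest_port", "bytes_in", "bytes_out", "label"]
ROWS = [
    [443, 52114, 1204, 88213, "benign"],
    [22, 51433, 96, 412, "malicious"],
]

def to_csv(headers, rows):
    # Plain comma-separated values: one header line plus one line per row.
    lines = [",".join(headers)]
    lines += [",".join(str(v) for v in row) for row in rows]
    return "\n".join(lines)

def to_markdown(headers, rows):
    # Pipe-delimited Markdown table with the required separator line.
    lines = ["| " + " | ".join(headers) + " |",
             "|" + "---|" * len(headers)]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)

def to_html(headers, rows):
    # HTML table: every cell wrapped in <th>/<td> tags, hence more tokens.
    head = "<tr>" + "".join(f"<th>{h}</th>" for h in headers) + "</tr>"
    body = "".join(
        "<tr>" + "".join(f"<td>{v}</td>" for v in row) + "</tr>"
        for row in rows)
    return f"<table>{head}{body}</table>"

def to_xml(headers, rows):
    # XML: column names repeated as element tags in every record.
    records = "".join(
        "<row>" + "".join(f"<{h}>{v}</{h}>" for h, v in zip(headers, row))
        + "</row>" for row in rows)
    return f"<table>{records}</table>"

enc = tiktoken.get_encoding("cl100k_base")
for name, render in [("CSV", to_csv), ("Markdown", to_markdown),
                     ("HTML", to_html), ("XML", to_xml)]:
    text = render(HEADERS, ROWS)
    print(f"{name:9s} {len(enc.encode(text)):4d} tokens")

On a representative run of this sketch, the HTML and XML renderings come out several times longer in tokens than the CSV one, because column names are repeated as markup in every record; the gap widens with row count, which is precisely the cost and latency effect described above.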

[1].    Sui, Y., Zhou, M., Zhou, M., Han, S. and Zhang, D. (2024), “Table Meets LLM: Can Large Language Models Understand Structured Table Data? A Benchmark and Empirical Study”, Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM '24), 4–8 March, Mérida, Yucatán, Mexico. ACM.

[2].    Lutsiuk, A.V. (2024) “Predictive monitoring of information and communication systems using a specialized machine learning model”, Scientific Notes of V.I. Vernadsky Taurida National University. Series: Technical Sciences, 35(74)(6, part 1), pp. 129–135 (in Ukrainian).

[3].    Aghajanyan, A., Okhonko, D., Lewis, M., Joshi, M., Xu, H., Ghosh, G. and Zettlemoyer, L. (2022) “HTLM: Hyper-Text Pre-Training and Prompting of Language Models”, 10th International Conference on Learning Representations (ICLR 2022), 25–29 April.

[4].    Mills, R. (2025) “LUFlow Network Intrusion Detection Data Set”, Kaggle [Data set]. Available at: https://doi.org/10.34740/KAGGLE/DSV/11027911 (Accessed: 15 February 2025).

[5].    Chen, W. (2023) “Large Language Models Are Few(1)-Shot Table Reasoners”, Findings of the Association for Computational Linguistics: EACL 2023, 2–6 May, Dubrovnik, Croatia.

[6].    Dong, H., Cheng, Z., He, X., Zhou, M., Zhou, A., Zhou, F., Liu, A., Han, S. and Zhang, D. (2022) “Table Pre-training: A Survey on Model Architectures, Pre-training Objectives, and Downstream Tasks”, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), 23–29 July, Vienna, Austria.

[7].    Eisenschlos, J.M., Gor, M., Müller, T. and Cohen, W.W. (2021) “MATE: Multi-view Attention for Table Transformer Efficiency”, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 7–11 November, Punta Cana, Dominican Republic. Association for Computational Linguistics.

[8].    Herzig, J., Nowak, P.K., Müller, T., Piccinno, F. and Eisenschlos, J. (2020) “TaPas: Weakly Supervised Table Parsing via Pre-training”, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 5–10 July, pp. 4320–4333. Association for Computational Linguistics.

[9].    Hulsebos, M., Demiralp, Ç. and Groth, P. (2023) “GitTables: A Large-Scale Corpus of Relational Tables”, Proceedings of the ACM on Management of Data, 1(1), pp. 1–17.

[10].    Iida, H., Thai, D., Manjunatha, V. and Iyyer, M. (2021) “TABBIE: Pretrained Representations of Tabular Data”, arXiv preprint. Available at: https://doi.org/10.48550/arXiv.2105.02584 (Accessed: 15 February 2025).