
Contrastive language-image pre-training (CLIP) in e-commerce: applications, methodologies, and performance

This article thoroughly examines the architecture and applications of the Contrastive Language-Image Pre-training (CLIP) model within the e-commerce domain, focusing on key tasks such as visual search, product recommendation, and attribute extraction. The article also provides an in-depth analysis of the methodologies used to adapt CLIP to e-commerce tasks and of the datasets employed.
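As an illustration of the kind of zero-shot product classification the article surveys, the sketch below uses the Hugging Face transformers implementation of CLIP to score an image against candidate product descriptions. This is a minimal sketch under stated assumptions: the checkpoint name, image path, and candidate labels are illustrative and are not taken from the article.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the article does not prescribe a specific model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical product image and candidate category prompts.
image = Image.open("product.jpg")
candidate_labels = [
    "a photo of a sneaker",
    "a photo of a handbag",
    "a photo of a wristwatch",
]

# Encode both modalities and compute image-text similarity scores.
inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the similarities gives zero-shot class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidate_labels, probs[0].tolist())))
```

The same similarity scores can be reused for visual search by ranking catalogue images against a text query instead of ranking labels against a single image.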

Comprehensive Analysis of Few-shot Image Classification Method Using Triplet Loss

Image classification is a fundamental problem in computer vision. The earliest approaches relied on classical, hand-crafted algorithms. Despite some successful applications of such algorithms, many image classification tasks remained unsolved until machine learning methods were introduced to computer vision. Early successes of machine learning enabled researchers to classify extracted features, which was not feasible without learned models.
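For reference, the triplet loss named in the title trains an embedding so that an anchor sample lies closer to a positive example of the same class than to a negative example of a different class, by at least a margin: L = max(d(a, p) - d(a, n) + margin, 0). The following is a minimal PyTorch sketch; the encoder, input shapes, and margin are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Hypothetical embedding network; the paper's actual encoder is not specified here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))

# Triplet loss: max(d(anchor, positive) - d(anchor, negative) + margin, 0)
triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

# Dummy batch of anchor / positive / negative images (e.g., 28x28 grayscale).
anchor_img = torch.randn(8, 1, 28, 28)
positive_img = torch.randn(8, 1, 28, 28)   # same class as the anchor
negative_img = torch.randn(8, 1, 28, 28)   # different class

loss = triplet_loss(encoder(anchor_img),
                    encoder(positive_img),
                    encoder(negative_img))
loss.backward()  # gradients pull same-class embeddings together, push different-class apart
```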