The goal of this repo is to evaluate CLIP-like models on a standard set of datasets across tasks such as zero-shot classification, zero-shot retrieval, and captioning.
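To make the zero-shot classification task concrete, here is a minimal sketch of the scoring step, assuming image and text-prompt embeddings have already been produced by a CLIP-like model; the function name, the toy 4-d embeddings, and the logit scale of 100 are illustrative, not this repo's actual implementation.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    # L2-normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb            # one similarity per class prompt
    logits = sims * 100.0                   # CLIP-style temperature scaling (assumed)
    probs = np.exp(logits - logits.max())   # stable softmax over class prompts
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

# Toy example: one image embedding scored against three class prompts
image_emb = np.array([0.1, 0.9, 0.2, 0.0])
text_embs = np.array([
    [0.0, 1.0, 0.0, 0.0],   # e.g. "a photo of a dog"
    [1.0, 0.0, 0.0, 0.0],   # e.g. "a photo of a car"
    [0.0, 0.0, 1.0, 0.0],   # e.g. "a photo of a tree"
])
pred, probs = zero_shot_classify(image_emb, text_embs)
print(pred)  # index of the best-matching prompt
```

Dataset-level accuracy then follows by comparing the predicted prompt index against the ground-truth label for each image.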
fmeval is a library for evaluating Large Language Models (LLMs) to help select the best LLM for your use case. The library evaluates LLMs on tasks including open-ended generation.