TruLens is an open-source framework for evaluating and tracking LLM applications. Feedback functions assess…
Phoenix by Arize is an open-source AI observability library for ML engineers. Traces LLM…
Braintrust is an enterprise AI evaluation platform for measuring, improving, and shipping AI applications.…
Helicone provides one-line LLM observability — add a single line to your OpenAI calls…
Opik by Comet is an open-source LLM evaluation framework for testing AI application quality…
Langfuse is an open-source LLM engineering platform for observability, testing, and prompt management. Debug…
PromptLayer is a platform for tracking, managing, and evaluating LLM prompts in production. Log…
Guardrails AI adds input/output validation to LLM applications. Define rules for what the LLM…
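The idea of input/output validation can be sketched in a few lines of plain Python. This is an illustrative sketch of the concept only, not the Guardrails AI API (the library itself defines validators declaratively); the function name, rules, and limits here are hypothetical.

```python
import re

def validate_output(text: str, max_len: int = 500) -> str:
    """Reject an LLM response that is too long or leaks an email address.

    Conceptual stand-in for output validation; real Guardrails rules are
    configured through the library, not ad-hoc checks like these.
    """
    if len(text) > max_len:
        raise ValueError("response exceeds length limit")
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        raise ValueError("response contains an email address")
    return text
```

A validated response passes through unchanged; a violating one raises, letting the application retry or block the output.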
LiteLLM provides a unified API for 100+ LLM providers using the OpenAI format. Switch…
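What a "unified API" buys you can be sketched as one call shape with the provider inferred from the model string. The prefix map and function below are hypothetical, for illustration; they are not LiteLLM's internals (LiteLLM exposes a single OpenAI-style `completion()` call).

```python
# Hypothetical routing sketch: one model string, provider chosen by prefix.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini/": "google",
}

def infer_provider(model: str) -> str:
    """Map a model name to a backend, so call sites stay identical."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"unknown model: {model}")
```

Because every backend is reached through the same call shape, switching providers is a one-string change rather than a client rewrite.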
Instructor makes it easy to get structured outputs from LLMs using Python type hints.…
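"Structured outputs via type hints" means asking the model for JSON matching a typed schema, then parsing and checking the reply against that type. A minimal stdlib sketch of the idea (Instructor itself builds on Pydantic and patches the OpenAI client; the `parse_as` helper and `User` type here are illustrative only):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class User:
    name: str
    age: int

def parse_as(cls, raw: str):
    """Parse a JSON string (e.g. an LLM reply) into a typed object,
    raising if any field has the wrong runtime type."""
    obj = cls(**json.loads(raw))
    for f in fields(cls):
        if not isinstance(getattr(obj, f.name), f.type):
            raise TypeError(f"expected {f.type.__name__} for {f.name}")
    return obj
```

The payoff is that downstream code works with `user.age` as an `int`, not with a raw string it must re-validate.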