Page 15 of 80 · 1,324 tools
Arize Phoenix
Phoenix by Arize is an open-source AI observability library for ML engineers. Traces LLM and embedding applications, visualizes…
Braintrust
Braintrust is an enterprise AI evaluation platform for measuring, improving, and shipping AI applications. Logging, evaluation datasets, prompt…
Helicone
Helicone provides one-line LLM observability — add a single line to your OpenAI calls and get full logging,…
Opik
Opik by Comet is an open-source LLM evaluation framework for testing AI application quality at scale. Automated evaluation…
Langfuse
Langfuse is an open-source LLM engineering platform for observability, testing, and prompt management. Debug production AI issues, evaluate…
PromptLayer
PromptLayer is a platform for tracking, managing, and evaluating LLM prompts in production. Log every prompt and completion,…
Guardrails AI
Guardrails AI adds input/output validation to LLM applications. Define rules for what the LLM can and cannot say,…
LiteLLM
LiteLLM provides a unified API for 100+ LLM providers using the OpenAI format. Switch between GPT-4, Claude, Gemini,…
Instructor
Instructor makes it easy to get structured outputs from LLMs using Python type hints. Define a Pydantic model…
LlamaIndex
LlamaIndex is a data framework for building LLM-powered applications over your data. Simple connectors for 160+ data sources,…
Don't see your tool?
We review every submission within 24–48 hours. Free listing, no strings attached.
Submit Your Tool