
External Evaluation Pipelines

Learn more

  • External Evaluation Pipeline Example
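An external evaluation pipeline typically runs outside the instrumented application: it fetches recent traces, scores each one with a custom evaluator, and writes the scores back to the observability platform. The sketch below shows that three-step loop in plain Python with stand-in functions; the trace fields, function names, and the toy evaluator are illustrative assumptions, not the actual Litefuse SDK API.

```python
# Minimal sketch of an external evaluation pipeline (fetch -> score -> push).
# All names here are hypothetical stand-ins, not Litefuse SDK calls.

def fetch_traces():
    # Stand-in for an SDK/API call that pages through recent traces.
    return [
        {"id": "t1", "input": "What is RAG?", "output": "Retrieval-augmented generation combines search with an LLM."},
        {"id": "t2", "input": "Ping", "output": ""},
    ]

def evaluate(trace):
    # Toy evaluator: flag empty completions. In a real pipeline this could be
    # an LLM-as-a-judge call or any custom metric.
    return 1.0 if trace["output"].strip() else 0.0

def push_score(trace_id, name, value):
    # Stand-in for writing the score back via the platform's API.
    return {"traceId": trace_id, "name": name, "value": value}

def run_pipeline():
    # Score every fetched trace and report the scores back.
    return [push_score(t["id"], "non_empty_output", evaluate(t)) for t in fetch_traces()]
```

Because the pipeline only reads traces and writes scores, it can run on any schedule (cron job, CI step, notebook) without touching the application itself.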