Langtrace adds support for Guardrails AI

Karthik Kalyanaraman

Cofounder and CTO

Nov 13, 2024

Introduction

We are excited to announce that Langtrace now supports tracing Guardrails AI natively. Langtrace will automatically capture traces from Guardrails, including useful information about the validators and their metadata, and surface it in the Langtrace dashboard. This gives you visibility into your model's performance based on the validators you are using and the corresponding hits Guardrails captures.

Setup

  1. Sign up for Langtrace, create a project, and get a Langtrace API key

  2. Install the Langtrace SDK

pip install -U langtrace-python-sdk
  3. Set the LANGTRACE_API_KEY environment variable

export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY
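If you prefer configuring the key in code rather than in your shell, you can set it with Python's standard `os.environ` before initializing Langtrace. A minimal sketch, where `"YOUR_LANGTRACE_API_KEY"` is a placeholder for the key from your Langtrace project:

```python
import os

# Equivalent to the shell export above; set this before calling langtrace.init().
# "YOUR_LANGTRACE_API_KEY" is a placeholder for your actual project key.
os.environ["LANGTRACE_API_KEY"] = "YOUR_LANGTRACE_API_KEY"
```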
  4. Initialize Langtrace in your code

import os

from langtrace_python_sdk import langtrace
from guardrails import Guard, OnFailAction
from guardrails.hub import ProfanityFree, ToxicLanguage

# Set your OpenAI API key before making any model calls
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

# Initialize Langtrace; traces from Guardrails are captured automatically
langtrace.init()

# Build a guard that validates model output with two validators
guard = Guard()
guard.name = 'ChatBotGuard'
guard.use_many(
    ProfanityFree(on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(on_fail=OnFailAction.EXCEPTION),
    on="output",
)

try:
    result = guard(
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        model="gpt-4o-mini",
        stream=False,
    )
    print(result.validated_output)
except Exception as e:
    # Validators configured with on_fail=OnFailAction.EXCEPTION raise
    # when the model output fails validation
    print(f"Validation failed: {e}")

  5. See the traces in Langtrace

Useful Resources

Ready to try Langtrace?

Try out the Langtrace SDK with just 2 lines of code.


Want to learn more?

Check out our documentation to learn more about how Langtrace works.

Join the Community

Check out our Discord community to ask questions and connect with other users.