Introduction

Previously, we explored the theoretical foundations of guardrails and their importance in ensuring AI systems operate safely and effectively. We now take a practical approach, walking through the guardrails implementation process with code examples and best practices.

Using the guardrails.ai framework, we will demonstrate how to take guardrails from concept to reality, providing actionable steps to enhance the reliability, safety, and compliance of AI applications. These techniques can be used to establish robust boundaries for AI systems while maintaining functionality and performance.

Overview of the Guardrails Implementation Process

Implementing guardrails involves a structured, step-by-step approach:

Step 1: Setting Up Your Environment

Before implementing guardrails, install the required packages. The guardrails.ai framework provides a flexible foundation that works with popular LLM interfaces, such as LiteLLM, LangChain, and direct API calls to providers like OpenAI.

Installing Required Packages

To get started, install the core guardrails package and any validators you’ll need:

Fig 1. Installing Required Packages (Source: Persistent)

Each validator provides specific functionality:

  • restricttotopic: Ensures responses stay relevant to defined topics
  • toxicity: Filters out harmful, offensive, or inappropriate content
  • factchecking: Verifies factual accuracy of generated information
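The installation step shown in Fig 1 can be sketched as shell commands along the following lines. The Guardrails Hub slugs below are assumptions on my part, not taken from the figure — confirm the exact identifiers (including which fact-checking validator fits your use case) on the Guardrails Hub before running them:

```shell
# Install the core guardrails package
pip install guardrails-ai

# One-time CLI setup, then pull validators from the Guardrails Hub.
# Slugs are illustrative -- verify the exact identifiers on the Hub.
guardrails configure
guardrails hub install hub://tryolabs/restricttotopic
guardrails hub install hub://guardrails/toxic_language
```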

Step 2: Defining Validation Rules

The next step is to define what “good” inputs and outputs look like using validation schemas. For schema details, refer to the official Guardrails validators documentation.

Creating a Basic Schema

Here’s how to create a schema that ensures responses are non-toxic and stay within the defined valid_topic list:

Fig 2. Creating a Basic Schema (Source: Persistent)

This schema enforces two rules:

  1. Responses must relate to AI, ML, or data science (throwing an exception if not)
  2. Content cannot exceed a toxicity score of 0.8 (automatically fixing it if it does)
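Since the schema code itself lives in Fig 2, here is a minimal library-free sketch of the same two rules in plain Python. Every name here is hypothetical, and the naive keyword matching stands in for the real topic classifier — this is not the guardrails.ai API:

```python
# Library-free sketch of the two rules above (hypothetical names; the
# actual schema in Fig 2 uses guardrails.ai validators).

VALID_TOPICS = {"ai", "machine learning", "data science"}

def check_topic(text: str) -> str:
    """Rule 1: raise if the text mentions no approved topic (naive match)."""
    if not any(topic in text.lower() for topic in VALID_TOPICS):
        raise ValueError("Response is off-topic")
    return text

def check_toxicity(text: str, score: float, threshold: float = 0.8) -> str:
    """Rule 2: 'fix' over-threshold content instead of raising."""
    if score > threshold:
        return "[content removed: toxicity threshold exceeded]"
    return text

draft = "Machine learning models need validation."
safe = check_toxicity(check_topic(draft), score=0.1)
print(safe)  # the draft passes both rules unchanged
```

The key design point mirrors the schema above: rule 1 fails hard (exception), while rule 2 repairs the output and lets the request continue.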

Step 3: Integrating with Your LLM

With your validation rules defined, you can now integrate guardrails with your LLM calls.

Basic Integration Example

Fig 3. Basic Integration Example – Creating Validator Schema (Source: Persistent)
Fig 4. Basic Integration Example – Validation (Source: Persistent)

This example demonstrates:

  1. Configuring the LLM client
  2. Creating a guard with topic restrictions
  3. Generating content with the LLM
  4. Validating the output against the guard’s rules
  5. Handling validation failures with try/except
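The five steps above can be sketched end to end without the library, using a stubbed LLM call. All names are illustrative — the real wiring in Figs 3–4 uses a configured LLM client and a guardrails.ai guard object:

```python
# Library-free sketch of the integration flow in Figs 3-4.

def fake_llm(prompt: str) -> str:
    """Stand-in for a configured LLM client (steps 1 and 3)."""
    return f"AI systems benefit from guardrails. (prompt: {prompt[:20]}...)"

def guard_validate(text: str) -> str:
    """Stand-in guard with a topic restriction (step 2)."""
    if "ai" not in text.lower():
        raise ValueError("Validation failed: off-topic response")
    return text

prompt = "Explain why guardrails matter for AI systems."
raw = fake_llm(prompt)                   # step 3: generate
try:
    validated = guard_validate(raw)      # step 4: validate
    print("validated:", validated)
except ValueError as err:                # step 5: handle failure
    print("blocked:", err)
```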

Step 4: Handling Validation Failures

When content fails validation, having appropriate response strategies is crucial.

Implementing Multiple Failure Strategies

Fig 5. Validation Failure Strategies – Creating Multiple Validator Schema (Source: Persistent)
Fig 6. Validation Failure Strategies – Validation (Source: Persistent)

This example implements three different failure strategies:

  1. Fix: Automatically correct toxic content
  2. Exception: Block responses containing personally identifiable information (PII)
  3. Noop: Allow potentially inaccurate information through but log it for review
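The three on-fail strategies can be sketched as a single dispatch function. This is a plain-Python illustration with hypothetical names, not the guardrails.ai `on_fail` machinery shown in Figs 5–6:

```python
# Sketch of the fix / exception / noop failure strategies above.

audit_log = []  # noop failures are recorded here for later review

def apply_strategy(text: str, failed: bool, strategy: str) -> str:
    if not failed:
        return text
    if strategy == "fix":
        return "[redacted]"                              # auto-correct
    if strategy == "exception":
        raise ValueError("blocked: validation failed")   # hard stop
    if strategy == "noop":
        audit_log.append(text)                           # pass through, but log
        return text
    raise ValueError(f"unknown strategy: {strategy}")

print(apply_strategy("toxic text", failed=True, strategy="fix"))
print(apply_strategy("possibly inaccurate", failed=True, strategy="noop"))
print("logged for review:", len(audit_log))
```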

Advanced Implementation Patterns

As your AI applications grow in complexity, you can leverage more advanced guardrails patterns.

Implementing Multi-Stage Validation

Fig 7. Multi-Stage Validation – Creating Multiple Validation Guard (Source: Persistent)
Fig 8. Multi-Stage Validation – Multiple Validation (Source: Persistent)

Multi-stage validation creates a defense-in-depth approach that:

  • Protects your system from malicious inputs
  • Ensures response quality meets standards
  • Provides final safety checks before delivery to users
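The defense-in-depth idea can be sketched as three chained guards: an input guard, a response-quality check, and a final output guard. All names and rules are illustrative, not the guardrails.ai API from Figs 7–8:

```python
# Library-free sketch of a three-stage validation pipeline.

def input_guard(prompt: str) -> str:
    """Stage 1: reject obviously malicious inputs."""
    if "ignore previous instructions" in prompt.lower():
        raise ValueError("blocked: prompt-injection attempt")
    return prompt

def quality_guard(response: str) -> str:
    """Stage 2: ensure the response meets a minimal quality bar."""
    if len(response.split()) < 3:
        raise ValueError("blocked: response too short")
    return response

def output_guard(response: str) -> str:
    """Stage 3: final safety check before delivery."""
    if "password" in response.lower():
        raise ValueError("blocked: sensitive content")
    return response

def answer(prompt: str) -> str:
    prompt = input_guard(prompt)                       # stage 1
    response = f"Here is a safe answer to: {prompt}"   # stand-in LLM call
    return output_guard(quality_guard(response))       # stages 2 and 3

print(answer("What are guardrails?"))
```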
Creating Custom Validators

For domain-specific requirements, you can easily create custom validators:

Fig 9. Custom Validators – Creating Custom Validator (Source: Persistent)
Fig 10. Custom Validators – Defining Validation Rule (Source: Persistent)
Fig 11. Custom Validators – Validation (Source: Persistent)

Custom validators enable you to enforce organization-specific standards, industry requirements, or specialized knowledge domains.
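The custom-validator pattern — register a named rule, then apply it by name — can be sketched in plain Python. The registry, decorator, and codename rule below are all hypothetical; guardrails.ai provides its own registration mechanism, shown in Figs 9–11:

```python
# Library-free sketch of a custom-validator registry.

VALIDATORS = {}

def register_validator(name):
    """Decorator that files a validation function under a name."""
    def wrap(fn):
        VALIDATORS[name] = fn
        return fn
    return wrap

@register_validator("no-internal-codenames")
def no_internal_codenames(text: str) -> str:
    """Org-specific rule: block responses that leak project codenames."""
    for codename in ("project-zephyr", "project-kraken"):  # illustrative list
        if codename in text.lower():
            raise ValueError(f"blocked: mentions {codename}")
    return text

def validate(name: str, text: str) -> str:
    return VALIDATORS[name](text)

print(validate("no-internal-codenames", "Our public roadmap is on the website."))
```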
Implementing Streaming Validation for Real-Time Applications

For chat applications or other real-time interfaces, streaming validation is essential:

Fig 12. Streaming Validation (Source: Persistent)

Streaming validation ensures each token is checked as it’s generated, maintaining low latency while still enforcing safety guardrails.
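The token-by-token idea can be sketched with a generator that checks each chunk before forwarding it, failing fast on the first unsafe token. The token stream, blocklist, and names are illustrative, not the guardrails.ai streaming API from Fig 12:

```python
# Library-free sketch of streaming validation.

BLOCKLIST = {"badword"}  # illustrative; a real check would be a classifier

def stream_llm():
    """Stand-in for a token stream from an LLM."""
    yield from ["Guardrails ", "keep ", "streams ", "safe."]

def validated_stream(token_iter):
    """Check each token as it arrives; forward it only if it passes."""
    for token in token_iter:
        if token.strip().lower() in BLOCKLIST:
            raise ValueError("blocked: unsafe token in stream")
        yield token  # token passed validation; deliver immediately

print("".join(validated_stream(stream_llm())))
```

Because validation happens inside the generator, safe tokens reach the user without waiting for the full response, which is what keeps latency low.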

Beyond these implementations, organizations often need to optimize performance and strengthen AI risk management as their AI applications scale. This typically involves caching validation results, using tiered validation approaches, and implementing robust monitoring systems. Additionally, as AI systems handle more requests, proper observability becomes critical for detecting emerging issues and maintaining system reliability.

Conclusion

Implementing guardrails transforms AI systems from potentially unpredictable tools into reliable, trustworthy solutions. Effective implementation requires systematic validation, strategic handling of failures, and multi-layered protections tailored to specific domains and use cases.

For instance, in collaboration with a financial services company, we successfully implemented robust guardrails through our GenAI Hub product. This included validators such as PII detection, topic restriction, and regex-based filtering. These measures not only ensured compliance with industry standards but also enhanced the reliability and security of their AI systems, enabling them to confidently leverage AI for critical business operations.

By following the patterns outlined in this guide, organizations can responsibly deploy AI while effectively managing AI security risks. As LLM technology evolves, those who invest in robust guardrails today will be well-positioned to meet tomorrow’s standards for safe, reliable AI.

Authors’ Profile

Shivam Gupta


Software Engineer, Corporate CTO Organization BU

Shivam Gupta, an IIT BHU Varanasi graduate, is part of the Generative AI team within Persistent’s Corporate CTO R&D organization. He contributes to the design and development of practical AI solutions, supporting research-driven initiatives and helping integrate emerging technologies into real-world applications.


Abdul Aziz Barkat


Lead Software Engineer, Corporate CTO Organization BU

Abdul Aziz Barkat is part of the Generative AI team within Persistent’s Corporate CTO R&D organization, where he focuses on the development of innovative solutions.