guardrails-ai/web_sanitization

Overview

| Developed by | Guardrails AI |
| Date of development | Feb 15, 2024 |
| Validator type | Format |
| Blog | |
| License | Apache 2 |
| Input/Output | Output |

Description

Scans LLM outputs for strings that could cause browser script execution downstream. Uses the bleach library to detect and escape suspect characters.
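
Conceptually, the check amounts to sanitizing the text and comparing the result with the original. The sketch below illustrates that idea with bleach.clean; it is a simplified illustration of the approach, not the validator's exact implementation.

import bleach

def contains_web_injection(text: str) -> bool:
    # bleach.clean escapes any markup it does not explicitly allow,
    # so a difference after cleaning signals suspect content.
    return bleach.clean(text) != text

print(contains_web_injection("Plain model output."))               # False
print(contains_web_injection("Hi <script>alert('XSS')</script>"))  # True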

Intended Use

Use this validator when you are passing the results of your LLM requests directly to a browser or other HTML-rendering environment. It's a good idea to also implement other XSS and code-injection prevention techniques; one example is sketched below.
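
One complementary defence, outside the scope of this validator, is to escape the text again at render time. A minimal sketch using Python's standard html module (the variable names are illustrative):

import html

llm_output = "It is a powerful language model. <b>bold claim</b>"

# Escape HTML-significant characters before interpolating into a page,
# so any residual markup is displayed as text rather than interpreted.
safe_fragment = html.escape(llm_output)
print(safe_fragment)  # It is a powerful language model. &lt;b&gt;bold claim&lt;/b&gt;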

Requirements

  • Dependencies:
    • bleach
    • guardrails-ai>=0.4.0

Installation

$ guardrails hub install hub://guardrails/web_sanitization

Usage Examples

Validating string output via Python

In this example, we apply the validator to a string output generated by an LLM.

# Import Guard and Validator
from guardrails import Guard
from guardrails.hub import WebSanitization

# Use the Guard with the validator
guard = Guard().use(WebSanitization, on_fail="exception")

# Test passing response
guard.validate(
    """MetaAI's Llama2 is the latest in their open-source LLM series. 
    It is a powerful language model."""
)

try:
    # Test failing response
    guard.validate(
        """MetaAI's Llama2 is the latest in their open-source LLM series. 
        It is a powerful language model. <script>alert('XSS')</script>"""
    )
except Exception as e:
    print(e)

Output:

Validation failed for field with errors: The output contains a web injection attack.

API Reference

__init__(self, on_fail="noop")

    Initializes a new instance of the WebSanitization validator class.

    Parameters:

    • on_fail (str, Callable): The policy to enact when a validator fails. If str, must be one of reask, fix, filter, refrain, noop, exception or fix_reask. Otherwise, must be a function that is called when the validator fails.
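
For example, the validator can be registered with a different string policy; "filter" below is one of the accepted values listed above:

from guardrails import Guard
from guardrails.hub import WebSanitization

# Drop failing outputs instead of raising an exception
guard = Guard().use(WebSanitization, on_fail="filter")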

validate(self, value, metadata={}) -> ValidationResult

    Validates the given value using the rules defined in this validator. This method is automatically invoked by guard.parse(...), ensuring the validation logic is applied to the input data.

    Note:

    1. This method should not be called directly by the user. Instead, invoke guard.parse(...) where this method will be called internally for each associated Validator.
    2. When invoking guard.parse(...), ensure to pass the appropriate metadata dictionary that includes keys and values required by this validator. If guard is associated with multiple validators, combine all necessary metadata into a single dictionary.

    Parameters:

    • value (Any): The input value to validate.
    • metadata (dict): A dictionary containing metadata required for validation. Keys and values must match the expectations of this validator.

    Metadata is not used in this validator.
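
A minimal sketch of running this validator through guard.parse(...); since metadata is not used here, no metadata dictionary needs to be supplied. The validation_passed attribute on the returned outcome is assumed to behave as in recent guardrails releases.

from guardrails import Guard
from guardrails.hub import WebSanitization

guard = Guard().use(WebSanitization, on_fail="exception")

# guard.parse(...) invokes validate(...) internally on the supplied output
outcome = guard.parse("MetaAI's Llama2 is the latest in their open-source LLM series.")
print(outcome.validation_passed)  # True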
