Introducing the Patronus + Portkey Integration
Today, we are excited to announce that we are partnering with Portkey to help LLM product builders catch hallucinations and other LLM failures in production!
Portkey is the leading open-source AI gateway. It’s blazing fast and supports over 200 LLMs through one simple, universal API. Developers around the world use Portkey to manage the operations of their AI products far more easily. Building and deploying LLM products to production comes with plenty of operational challenges: many LLMs to choose from, various frameworks to integrate, and costs that are hard to track. But the biggest challenge of all is the lack of highly reliable LLM guardrails.
Powerful guardrails are the biggest missing piece in confidently developing and deploying LLM applications in production. Examples of LLM failures include:
- Outputs can be hallucinated or factually inaccurate
- Outputs can be biased
- Outputs can violate privacy or data-protection norms
- Outputs can harm the company, for example by violating brand policies
- Outputs can fail to follow the structural formats needed for downstream calls
… and more!
These failures represent critical risks to AI product builders.
With this integration, Portkey and Patronus AI users can now access state-of-the-art guardrail models directly on Portkey, such as Lynx for hallucination detection!
The Portkey Gateway integrates with 10+ Patronus evaluators from day one. It's easy to integrate Patronus with Portkey and start running these checks on your Portkey requests (a short code sketch of the finished flow follows the steps below):
- Grab your Patronus API Key from here and add it to Portkey
- Create Guardrail Checks by selecting the Patronus evaluators you want
- Set up actions on the Guardrails and then add the Guardrail to a request Config
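Once the guardrail is attached to a Config, requests routed through that Config are evaluated automatically. Here is a minimal sketch using Portkey's Python SDK; it assumes you have already created a Patronus-backed guardrail and saved a Config in the Portkey dashboard, and the config ID, virtual key, and model name below are placeholders rather than values from this post.

```python
# Minimal sketch: send a chat request through a Portkey Config that has a
# Patronus-backed guardrail attached. The guardrail and Config are assumed to
# have been created in the Portkey dashboard; the IDs below are placeholders.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",          # your Portkey API key
    virtual_key="OPENAI_VIRTUAL_KEY",   # virtual key for the underlying LLM provider
    config="pc-guardrails-example",     # saved Config that references the Patronus guardrail
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)

# If the guardrail's action is set to deny, a failing Patronus check surfaces
# as an error instead of a normal completion; otherwise the check results are
# logged alongside the request in Portkey.
print(response.choices[0].message.content)
```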
Read the Portkey docs on how to set this up: https://docs.portkey.ai/docs/product/guardrails/list-of-guardrail-checks/patronus-ai
Read Portkey’s blog post: https://portkey.ai/blog/patronus-ai-on-portkey-gateway-guardrails/
We are excited to empower LLM builders with the tools needed to evaluate and improve the performance and accuracy of their AI systems.