With Portkey, you can confidently take your Instructor pipelines to production, with complete observability over all of your calls and built-in reliability, all with a 2 LOC change!
Instructor is a framework for extracting structured outputs from LLMs, available in Python & JS.
Let's now bring down the cost of running your Instructor pipeline with Portkey caching. Just create a Config object that defines your cache setting:
{ "cache": { "mode": "simple" }}
You can write this config inline, or build it with Portkey's Config builder and get a corresponding Config ID. Then, just pass it while instantiating your OpenAI client:
```python
import instructor
from pydantic import BaseModel
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

cache_config = {"cache": {"mode": "simple"}}

portkey = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="@OPENAI_PROVIDER",
        api_key="PORTKEY_API_KEY",
        config=cache_config  # Or pass your Config ID saved from the Portkey app
    )
)

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(portkey)

user_info = client.chat.completions.create(
    model="gpt-4-turbo",
    max_tokens=1024,
    response_model=User,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
print(user_info.age)
```
Similarly, you can add Fallback, Load Balancing, Timeout, or Retry settings to your Configs and make your Instructor requests robust and reliable. For example, a fallback Config with retries could look like the sketch below.
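As a rough sketch, here is a Config that retries the primary provider and falls back to a second one on failure. The provider slugs (`@openai-provider`, `@anthropic-provider`) and the fallback model name are placeholders for providers you've saved in the Portkey app; check Portkey's Config documentation for the exact schema supported by your account.

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    {
      "provider": "@openai-provider",
      "retry": { "attempts": 3 }
    },
    {
      "provider": "@anthropic-provider",
      "override_params": { "model": "claude-3-opus-20240229" }
    }
  ]
}
```

You'd pass this Config the same way as the cache config above: inline via the `config` parameter of `createHeaders`, or as a saved Config ID.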