Documentation ¶
Overview ¶
Package gai provides a unified interface for interacting with various large language model (LLM) providers.
The package abstracts away provider-specific implementations, allowing you to write code that works with multiple AI providers (OpenAI, Anthropic, Google Gemini) without changing your core logic.
Features ¶
- Unified API across different LLM providers
- Support for text, image, audio, and PDF modalities (provider dependent)
- Tool integration with JSON Schema-based parameters
- Callback-based tool execution
- Automatic fallback strategies for reliability
- Standardized error types for better error handling
- Detailed usage metrics
- Model Context Protocol (MCP) client support
Installation ¶
go get github.com/spachava753/gai
Core Concepts ¶
Generator: The core interface that all providers implement. It takes a Dialog and generates a Response.
type Generator interface {
Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
}
Each LLM provider (OpenAI, Anthropic, Gemini) has its own implementation of the Generator interface.
Dialog: A conversation with a language model, represented as a slice of Message objects.
type Dialog []Message
Message: A single exchange in the conversation, with a Role (User, Assistant, or ToolResult) and a collection of Blocks.
type Message struct {
Role Role
Blocks []Block
ToolResultError bool
ExtraFields map[string]interface{}
}
Block: A self-contained piece of content within a message, which can be text, image, audio, or a tool call.
type Block struct {
ID string
BlockType string
ModalityType Modality
MimeType string
Content fmt.Stringer
ExtraFields map[string]interface{}
}
Common block types include:
- Content - Regular content like text or images
- Thinking - Reasoning/thinking from the model
- ToolCall - A request to call a tool
Modalities: gai supports multiple modalities for input and output.
type Modality uint

const (
Text Modality = iota
Image
Audio
Video
)
Support for specific modalities depends on the underlying model provider.
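To make modalities print readably (in logs, for example), the constants above can be given a String method. This standalone sketch mirrors the Modality declaration rather than importing gai; the String method is illustrative and not part of the package's documented API.

```go
package main

import "fmt"

// Modality mirrors the gai declaration above.
type Modality uint

const (
	Text Modality = iota
	Image
	Audio
	Video
)

// String gives each modality a human-readable name.
func (m Modality) String() string {
	switch m {
	case Text:
		return "text"
	case Image:
		return "image"
	case Audio:
		return "audio"
	case Video:
		return "video"
	default:
		return "unknown"
	}
}

func main() {
	for _, m := range []Modality{Text, Image, Audio, Video} {
		fmt.Println(m)
	}
}
```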
Tool: A function that can be called by the language model during generation.
type Tool struct {
Name string
Description string
InputSchema *jsonschema.Schema
}
The InputSchema defines the parameters the tool accepts using JSON Schema conventions:
&jsonschema.Schema{
Type: "object",
Properties: map[string]*jsonschema.Schema{...},
Required: []string{...},
}
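To see the JSON document such a schema serializes to, here is a self-contained sketch that builds the same shape with a plain map and encoding/json. The "location" parameter is a hypothetical example for illustration, not part of gai.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolParamsSchema builds, as a plain map, the JSON Schema document that a
// *jsonschema.Schema like the one above ultimately serializes to and sends
// to the model. The "location" parameter is hypothetical.
func toolParamsSchema() map[string]any {
	return map[string]any{
		"type": "object",
		"properties": map[string]any{
			"location": map[string]any{
				"type":        "string",
				"description": "City name, e.g. Paris",
			},
		},
		"required": []string{"location"},
	}
}

func main() {
	out, err := json.MarshalIndent(toolParamsSchema(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```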
Basic Usage Examples ¶
Basic usage with OpenAI:
package main
import (
"context"
"fmt"
"github.com/openai/openai-go/v3"
"github.com/spachava753/gai"
)
func main() {
// Create an OpenAI client
client := openai.NewClient()
// Create a generator with a specific model
generator := gai.NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4,
"You are a helpful assistant.",
)
// Create a dialog with a user message
dialog := gai.Dialog{
{
Role: gai.User,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str("What is the capital of France?"),
},
},
},
}
// Generate a response
response, err := generator.Generate(context.Background(), dialog, &gai.GenOpts{
Temperature: gai.Ptr(0.7),
})
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Print the response
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println(response.Candidates[0].Blocks[0].Content)
}
// Get usage metrics
if inputTokens, ok := gai.InputTokens(response.UsageMetadata); ok {
fmt.Printf("Input tokens: %d\n", inputTokens)
}
if outputTokens, ok := gai.OutputTokens(response.UsageMetadata); ok {
fmt.Printf("Output tokens: %d\n", outputTokens)
}
}
Tool Usage Example ¶
Using tools with a language model:
package main
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/openai/openai-go/v3"
"github.com/spachava753/gai"
)
// Define a tool callback for getting the current time
type TimeToolCallback struct{}
func (t TimeToolCallback) Call(ctx context.Context, parametersJSON json.RawMessage, toolCallID string) (gai.Message, error) {
return gai.ToolResultMessage(toolCallID, gai.TextBlock(time.Now().Format(time.RFC1123))), nil
}
func main() {
client := openai.NewClient()
// Create an OpenAI generator
baseGen := gai.NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4,
"You are a helpful assistant.",
)
// Create a tool generator that wraps the base generator
toolGen := &gai.ToolGenerator{
G: &baseGen,
}
// Define a time tool
timeTool := gai.Tool{
Name: "get_current_time",
Description: "Get the current server time",
}
// Register the tool with its callback
if err := toolGen.Register(timeTool, &TimeToolCallback{}); err != nil {
fmt.Printf("Error registering tool: %v\n", err)
return
}
// Create a dialog
dialog := gai.Dialog{
{
Role: gai.User,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str("What time is it now?"),
},
},
},
}
// Generate a response with tool usage
completeDialog, err := toolGen.Generate(context.Background(), dialog, func(d gai.Dialog) *gai.GenOpts {
return &gai.GenOpts{
ToolChoice: gai.ToolChoiceAuto,
}
})
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Print the final result
finalMsg := completeDialog[len(completeDialog)-1]
if len(finalMsg.Blocks) > 0 {
fmt.Println(finalMsg.Blocks[0].Content)
}
}
Fallback Strategy Example ¶
Implementing a fallback strategy between providers:
package main
import (
"context"
"fmt"
"github.com/anthropics/anthropic-sdk-go"
"github.com/openai/openai-go/v3"
"github.com/spachava753/gai"
)
func main() {
// Create clients for both providers
openaiClient := openai.NewClient()
anthropicClient := anthropic.NewClient()
// Create generators for each provider
openaiGen := gai.NewOpenAiGenerator(
&openaiClient.Chat.Completions,
openai.ChatModelGPT4,
"You are a helpful assistant.",
)
anthropicGen := gai.NewAnthropicGenerator(
&anthropicClient.Messages,
"claude-3-opus-20240229",
"You are a helpful assistant.",
)
// Create a fallback generator that tries OpenAI first, then falls back to Anthropic
fallbackGen, err := gai.NewFallbackGenerator(
[]gai.Generator{&openaiGen, &anthropicGen},
&gai.FallbackConfig{
// Custom fallback condition: fall back on rate limits and 5xx errors
ShouldFallback: gai.NewHTTPStatusFallbackConfig(429, 500, 502, 503, 504).ShouldFallback,
},
)
if err != nil {
fmt.Printf("Error creating fallback generator: %v\n", err)
return
}
// Create a dialog
dialog := gai.Dialog{
{
Role: gai.User,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str("What is the meaning of life?"),
},
},
},
}
// Generate a response using the fallback strategy
response, err := fallbackGen.Generate(context.Background(), dialog, &gai.GenOpts{
Temperature: gai.Ptr(0.7),
})
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Print the response
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println(response.Candidates[0].Blocks[0].Content)
}
}
Working with Thinking Blocks ¶
Many LLM providers support "thinking" or "reasoning" output, where the model shows its internal reasoning process. gai normalizes these into Thinking blocks (BlockType == Thinking).
To identify which generator produced a thinking block, check the ThinkingExtraFieldGeneratorKey in the block's ExtraFields. This allows you to handle provider-specific features:
for _, block := range message.Blocks {
if block.BlockType == gai.Thinking {
generator := block.ExtraFields[gai.ThinkingExtraFieldGeneratorKey]
fmt.Printf("Thinking from %s: %s\n", generator, block.Content)
// Handle provider-specific fields
switch generator {
case gai.ThinkingGeneratorAnthropic:
// Anthropic requires signatures for extended thinking
if sig, ok := block.ExtraFields[gai.AnthropicExtraFieldThinkingSignature]; ok {
fmt.Printf("Signature: %s\n", sig)
}
case gai.ThinkingGeneratorGemini:
// Gemini may include thought signatures
if sig, ok := block.ExtraFields[gai.GeminiExtraFieldThoughtSignature]; ok {
fmt.Printf("Thought signature: %s\n", sig)
}
case gai.ThinkingGeneratorOpenRouter:
// OpenRouter includes reasoning metadata
reasonType := block.ExtraFields[gai.OpenRouterExtraFieldReasoningType]
fmt.Printf("Reasoning type: %s\n", reasonType)
}
}
}
Available generator constants:
- ThinkingGeneratorAnthropic - Anthropic Claude models with extended thinking
- ThinkingGeneratorCerebras - Cerebras models with reasoning
- ThinkingGeneratorGemini - Google Gemini models with thinking
- ThinkingGeneratorOpenRouter - OpenRouter with reasoning models
- ThinkingGeneratorResponses - OpenAI Responses API with reasoning
- ThinkingGeneratorZai - Zai generator with reasoning
Note: The OpenAI Chat Completions generator (OpenAiGenerator) does not support thinking blocks.
Working with PDFs ¶
gai supports PDF documents as a special case of the Image modality. PDFs are automatically converted to images at the model provider's API level:
package main
import (
"context"
"fmt"
"os"
"github.com/openai/openai-go/v3"
"github.com/spachava753/gai"
)
func main() {
// Read a PDF file
pdfData, err := os.ReadFile("document.pdf")
if err != nil {
fmt.Printf("Error reading PDF: %v\n", err)
return
}
// Create an OpenAI client and generator
client := openai.NewClient()
generator := gai.NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4o,
"You are a helpful document analyst.",
)
// Create a dialog with PDF content
dialog := gai.Dialog{
{
Role: gai.User,
Blocks: []gai.Block{
gai.TextBlock("Please summarize this PDF document:"),
gai.PDFBlock(pdfData, "document.pdf"),
},
},
}
// Generate a response
response, err := generator.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Print the response
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println(response.Candidates[0].Blocks[0].Content)
}
}
PDF support notes:
- OpenAI: PDF token counting is not supported; the TokenCounter interface returns an error for dialogs that contain PDFs
- When creating a PDF block, you must provide both the PDF data and a filename, e.g. PDFBlock(data, "paper.pdf")
- All providers: PDFs are converted to images server-side, so exact page dimensions are not known
Provider Support ¶
The package supports multiple LLM providers with varying capabilities:
OpenAI: The OpenAI implementation supports text generation, image inputs (including PDFs), audio inputs, and tool calling.
import (
"github.com/openai/openai-go/v3"
"github.com/spachava753/gai"
)
client := openai.NewClient()
generator := gai.NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4,
"System instructions here.",
)
Anthropic: The Anthropic implementation supports text generation, image inputs (including PDFs with special handling), and tool calling.
import (
"github.com/anthropics/anthropic-sdk-go"
"github.com/spachava753/gai"
)
client := anthropic.NewClient()
generator := gai.NewAnthropicGenerator(
&client.Messages,
"claude-3-opus-20240229",
"System instructions here.",
)
Gemini: The Gemini implementation supports text generation, image inputs (including PDFs), audio inputs, and tool calling.
import (
"google.golang.org/genai"
"github.com/spachava753/gai"
)
client, err := genai.NewClient(ctx, &genai.ClientConfig{
APIKey: "your-api-key",
})
generator, err := gai.NewGeminiGenerator(
client,
"gemini-1.5-pro",
"System instructions here.",
)
Error Handling ¶
The package provides standardized error types for consistent error handling across providers:
- MaxGenerationLimitErr - Maximum token generation limit reached
- UnsupportedInputModalityErr - Model doesn't support the requested input modality
- UnsupportedOutputModalityErr - Model doesn't support the requested output modality
- InvalidToolChoiceErr - Invalid tool choice specified
- InvalidParameterErr - Invalid generation parameter
- ContextLengthExceededErr - Input dialog exceeds model's context length
- ContentPolicyErr - Content violates usage policies
- EmptyDialogErr - No messages provided
- AuthenticationErr - Authentication/authorization issues
- RateLimitErr - API request rate limits exceeded
- ApiErr - Other API errors with status code, type, and message
Example error handling:
response, err := generator.Generate(ctx, dialog, options)
if err != nil {
var rateLimitErr gai.RateLimitErr
var policyErr gai.ContentPolicyErr
var apiErr gai.ApiErr
switch {
case errors.Is(err, gai.MaxGenerationLimitErr):
fmt.Println("Maximum generation limit reached")
case errors.Is(err, gai.ContextLengthExceededErr):
fmt.Println("Context length exceeded")
case errors.Is(err, gai.EmptyDialogErr):
fmt.Println("Empty dialog provided")
// Type-specific errors: errors.As extracts the typed value for inspection
case errors.As(err, &rateLimitErr):
fmt.Println("Rate limit exceeded:", err)
case errors.As(err, &policyErr):
fmt.Println("Content policy violation:", err)
case errors.As(err, &apiErr):
fmt.Printf("API error: %d %s - %s\n", apiErr.StatusCode, apiErr.Type, apiErr.Message)
default:
fmt.Println("Unexpected error:", err)
}
return
}
Advanced Usage ¶
Tool Generator: The ToolGenerator provides advanced functionality for working with tools. It automatically handles registering tools with the underlying generator, executing tool callbacks when tools are called, managing the conversation flow during tool use, and handling parallel tool calls.
type ToolGenerator struct {
G ToolCapableGenerator
toolCallbacks map[string]ToolCallback
}
Example:
// Create a base generator (OpenAI or Anthropic)
baseGen := gai.NewOpenAiGenerator(...)
// Create a tool generator
toolGen := &gai.ToolGenerator{
G: &baseGen,
}
// Register tools with callbacks
toolGen.Register(weatherTool, &WeatherAPI{})
toolGen.Register(stockPriceTool, &StockAPI{})
// Generate with tool support
completeDialog, err := toolGen.Generate(ctx, dialog, func(d gai.Dialog) *gai.GenOpts {
return &gai.GenOpts{
ToolChoice: gai.ToolChoiceAuto,
Temperature: gai.Ptr(0.7),
}
})
Fallback Generator: The FallbackGenerator provides automatic fallback between different providers. It automatically tries each generator in sequence, falls back based on configurable conditions, and preserves the original error if all generators fail.
type FallbackGenerator struct {
generators []Generator
config FallbackConfig
}
Configuration options:
- NewHTTPStatusFallbackConfig() - Fallback on specific HTTP status codes
- NewRateLimitOnlyFallbackConfig() - Fallback only on rate limit errors
- Custom fallback logic via ShouldFallback function
Example:
primaryGen := gai.NewOpenAiGenerator(...)
backupGen := gai.NewAnthropicGenerator(...)
fallbackGen, err := gai.NewFallbackGenerator(
[]gai.Generator{&primaryGen, &backupGen},
&gai.FallbackConfig{
ShouldFallback: func(err error) bool {
// Custom fallback logic
return gai.IsRateLimitError(err) || gai.IsServerError(err)
},
},
)
Model Context Protocol (MCP) ¶
The package includes MCP (Model Context Protocol) client support for connecting to external tools and data sources. The MCP client allows you to connect to MCP servers via stdio, HTTP, or other transports and use their tools within the gai framework.
Note: This MCP implementation does not support JSON-RPC batch requests/responses. All messages are sent and received individually for simplicity and forward compatibility with planned protocol changes.
Example MCP usage:
import "github.com/spachava753/gai/mcp"
// Create MCP client
transport := mcp.NewStdio(mcp.StdioConfig{
Command: "python",
Args: []string{"mcp_server.py"},
})
client, err := mcp.NewClient(ctx, transport, mcp.ClientInfo{
Name: "gai-client",
Version: "1.0.0",
}, mcp.ClientCapabilities{}, mcp.DefaultOptions())
// Register MCP tools with a tool generator
err = mcp.RegisterMCPToolsWithGenerator(ctx, client, toolGen)
For more information and examples, see the example files in the repository.
License ¶
This project is licensed under the MIT License.
Example (CreatingAWrapper) ¶
This example shows the recommended pattern for creating a reusable wrapper.
package main
import (
"fmt"
)
func main() {
fmt.Println("To create a middleware wrapper:")
fmt.Println("")
fmt.Println("1. Define a struct that embeds gai.GeneratorWrapper")
fmt.Println("2. Override only the methods you want to intercept")
fmt.Println("3. Call GeneratorWrapper.Method() to delegate to the next in chain")
fmt.Println("4. Create a WithXxx() function that returns gai.WrapperFunc")
fmt.Println("")
fmt.Println("Methods you DON'T override pass through automatically.")
}
Output:

To create a middleware wrapper:

1. Define a struct that embeds gai.GeneratorWrapper
2. Override only the methods you want to intercept
3. Call GeneratorWrapper.Method() to delegate to the next in chain
4. Create a WithXxx() function that returns gai.WrapperFunc

Methods you DON'T override pass through automatically.
Example (MiddlewareCallFlow) ¶
This example shows the complete call flow through a middleware stack, demonstrating the "onion" pattern where calls flow in and responses flow out.
package main
import (
"context"
"fmt"
"strings"
"github.com/spachava753/gai"
)
// trackingMockGen is a simple generator for examples that records calls via a callback.
type trackingMockGen struct {
record func(string)
tokenCount uint
}
func (m *trackingMockGen) Generate(ctx context.Context, dialog gai.Dialog, opts *gai.GenOpts) (gai.Response, error) {
m.record("base:Generate")
return gai.Response{
Candidates: []gai.Message{{Role: gai.Assistant}},
FinishReason: gai.EndTurn,
}, nil
}
func (m *trackingMockGen) Count(ctx context.Context, dialog gai.Dialog) (uint, error) {
m.record("base:Count")
return m.tokenCount, nil
}
func main() {
// CallTracker records the order of calls to visualize the flow
var calls []string
record := func(s string) { calls = append(calls, s) }
// Create wrappers that record before/after
withAlpha := func(g gai.Generator) gai.Generator {
return &alphaWrapper{
GeneratorWrapper: gai.GeneratorWrapper{Inner: g},
record: record,
}
}
withBeta := func(g gai.Generator) gai.Generator {
return &betaWrapper{
GeneratorWrapper: gai.GeneratorWrapper{Inner: g},
record: record,
}
}
// Base generator also uses the same record function
base := &trackingMockGen{record: record, tokenCount: 42}
// Stack: Alpha (outer) → Beta (inner) → base
gen := gai.Wrap(base, withAlpha, withBeta)
// Call Generate
_, _ = gen.Generate(context.Background(), gai.Dialog{}, nil)
fmt.Println("Generate call flow:")
fmt.Println(" " + strings.Join(calls, " → "))
// Reset and call Count
calls = nil
_, _ = gen.(gai.TokenCounter).Count(context.Background(), gai.Dialog{})
fmt.Println("\nCount call flow:")
fmt.Println(" " + strings.Join(calls, " → "))
}
// alphaWrapper and betaWrapper are helpers for Example_middlewareCallFlow
type alphaWrapper struct {
gai.GeneratorWrapper
record func(string)
}
func (a *alphaWrapper) Generate(ctx context.Context, d gai.Dialog, o *gai.GenOpts) (gai.Response, error) {
a.record("alpha:before")
resp, err := a.GeneratorWrapper.Generate(ctx, d, o)
a.record("alpha:after")
return resp, err
}
func (a *alphaWrapper) Count(ctx context.Context, d gai.Dialog) (uint, error) {
a.record("alpha:before")
count, err := a.GeneratorWrapper.Count(ctx, d)
a.record("alpha:after")
return count, err
}
type betaWrapper struct {
gai.GeneratorWrapper
record func(string)
}
func (b *betaWrapper) Generate(ctx context.Context, d gai.Dialog, o *gai.GenOpts) (gai.Response, error) {
b.record("beta:before")
resp, err := b.GeneratorWrapper.Generate(ctx, d, o)
b.record("beta:after")
return resp, err
}
func (b *betaWrapper) Count(ctx context.Context, d gai.Dialog) (uint, error) {
b.record("beta:before")
count, err := b.GeneratorWrapper.Count(ctx, d)
b.record("beta:after")
return count, err
}
Output:

Generate call flow:
 alpha:before → beta:before → base:Generate → beta:after → alpha:after

Count call flow:
 alpha:before → beta:before → base:Count → beta:after → alpha:after
Example (MixGenerators) ¶
ExampleMixGenerators demonstrates how to mix different AI model providers in a single conversation, switching between Anthropic and OpenAI models.
// Initialize clients for both providers
anthropicClient := a.NewClient()
openaiClient := openai.NewClient()
// Create generators for each provider
anthropicGen := NewAnthropicGenerator(
&anthropicClient.Messages,
string(a.ModelClaudeHaiku4_5),
"You are Claude, a helpful AI assistant from Anthropic. Always mention you are Claude in your responses.",
)
openaiGen := NewOpenAiGenerator(
&openaiClient.Chat.Completions,
openai.ChatModelGPT4oMini,
"You are GPT-4o Mini, a helpful AI assistant from OpenAI. Always mention you are GPT-4o Mini in your responses.",
)
// Start a conversation with a user message
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Can you tell me something interesting about quantum computing?"),
},
},
},
}
// First turn: Use Anthropic's Claude model
fmt.Println("Generating response with Claude...")
claudeResp, err := anthropicGen.Generate(
context.Background(),
dialog,
&GenOpts{MaxGenerationTokens: Ptr(1024)}, // Claude requires MaxGenerationTokens
)
if err != nil {
panic(err)
}
// Add Claude's response to the conversation
dialog = append(dialog, claudeResp.Candidates[0])
// User asks a follow-up question
dialog = append(dialog, Message{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Can you explain how quantum entanglement works in simple terms?"),
},
},
})
// Second turn: Use OpenAI's GPT model for the follow-up
fmt.Println("Generating response with GPT-4o Mini...")
gptResp, err := openaiGen.Generate(
context.Background(),
dialog,
&GenOpts{MaxGenerationTokens: Ptr(1024)},
)
if err != nil {
panic(err)
}
// Add GPT's response to the conversation
dialog = append(dialog, gptResp.Candidates[0])
// Example with tool usage between different models
// Register the same tool with both generators
stockTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := anthropicGen.Register(stockTool); err != nil {
panic(err)
}
if err := openaiGen.Register(stockTool); err != nil {
panic(err)
}
// Start a new conversation about stocks
stockDialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What's the current price of Apple stock?"),
},
},
},
}
// First turn: Use OpenAI's GPT model with tool choice
fmt.Println("Using GPT with tool...")
gptToolResp, err := openaiGen.Generate(
context.Background(),
stockDialog,
&GenOpts{
ToolChoice: "get_stock_price",
MaxGenerationTokens: Ptr(1024),
},
)
if err != nil {
panic(err)
}
// Add GPT's tool call to the conversation
stockDialog = append(stockDialog, gptToolResp.Candidates[0])
// Add mock tool result
stockDialog = append(stockDialog, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: gptToolResp.Candidates[0].Blocks[0].ID,
ModalityType: Text,
Content: Str("185.92"),
},
},
})
// Switch to Claude for final response
fmt.Println("Using Claude to interpret tool result...")
claudeToolResp, err := anthropicGen.Generate(
context.Background(),
stockDialog,
&GenOpts{MaxGenerationTokens: Ptr(1024)},
)
if err != nil {
panic(err)
}
// Add Claude's response to the conversation
stockDialog = append(stockDialog, claudeToolResp.Candidates[0])
// For the example output, we'll just print success messages
// In a real application, you would process the full conversation content
fmt.Println("\nSuccessfully completed conversation with mixed models")
fmt.Println("Successfully completed stock price conversation with mixed models")
Output:

Generating response with Claude...
Generating response with GPT-4o Mini...
Using GPT with tool...
Using Claude to interpret tool result...

Successfully completed conversation with mixed models
Successfully completed stock price conversation with mixed models
Example (SelectiveOverride) ¶
This example demonstrates how wrappers that override different methods create independent call chains for each method.
package main
import (
"context"
"fmt"
"log/slog"
"os"
"time"
"github.com/spachava753/gai"
)
// LoggingGenerator logs Generate calls. It does NOT override Count, Stream, or
// Register, so those methods pass through to Inner automatically via GeneratorWrapper.
type LoggingGenerator struct {
gai.GeneratorWrapper
Logger *slog.Logger
}
// Generate logs before and after delegating to the next generator in the chain.
func (l *LoggingGenerator) Generate(ctx context.Context, dialog gai.Dialog, opts *gai.GenOpts) (gai.Response, error) {
l.Logger.Info("generate: starting", "messages", len(dialog))
start := time.Now()
resp, err := l.GeneratorWrapper.Generate(ctx, dialog, opts)
l.Logger.Info("generate: finished", "duration", time.Since(start), "error", err)
return resp, err
}
// WithLogging returns a WrapperFunc for use with gai.Wrap.
func WithLogging(logger *slog.Logger) gai.WrapperFunc {
return func(g gai.Generator) gai.Generator {
return &LoggingGenerator{
GeneratorWrapper: gai.GeneratorWrapper{Inner: g},
Logger: logger,
}
}
}
// MetricsGenerator collects timing metrics for both Generate and Count operations.
// This demonstrates how a single wrapper can intercept multiple interface methods.
type MetricsGenerator struct {
gai.GeneratorWrapper
RecordMetric func(operation string, duration time.Duration, err error)
}
// Generate records metrics for generation calls.
func (m *MetricsGenerator) Generate(ctx context.Context, dialog gai.Dialog, opts *gai.GenOpts) (gai.Response, error) {
start := time.Now()
resp, err := m.GeneratorWrapper.Generate(ctx, dialog, opts)
m.RecordMetric("generate", time.Since(start), err)
return resp, err
}
// Count records metrics for token counting calls.
// By overriding this, MetricsGenerator participates in the Count call chain.
func (m *MetricsGenerator) Count(ctx context.Context, dialog gai.Dialog) (uint, error) {
start := time.Now()
count, err := m.GeneratorWrapper.Count(ctx, dialog)
m.RecordMetric("count", time.Since(start), err)
return count, err
}
// WithMetrics returns a WrapperFunc for use with gai.Wrap.
func WithMetrics(record func(string, time.Duration, error)) gai.WrapperFunc {
return func(g gai.Generator) gai.Generator {
return &MetricsGenerator{
GeneratorWrapper: gai.GeneratorWrapper{Inner: g},
RecordMetric: record,
}
}
}
// simpleMockGen is a minimal generator for examples that don't need call tracking.
type simpleMockGen struct {
tokenCount uint
}
func (m *simpleMockGen) Generate(ctx context.Context, dialog gai.Dialog, opts *gai.GenOpts) (gai.Response, error) {
return gai.Response{
Candidates: []gai.Message{{Role: gai.Assistant}},
FinishReason: gai.EndTurn,
}, nil
}
func (m *simpleMockGen) Count(ctx context.Context, dialog gai.Dialog) (uint, error) {
return m.tokenCount, nil
}
func main() {
// LoggingGenerator only overrides Generate
// MetricsGenerator overrides both Generate AND Count
base := &simpleMockGen{tokenCount: 100}
// Stack: Logging (outer) → Metrics (inner) → base
gen := gai.Wrap(base,
WithLogging(slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
// Remove time for reproducible output
if a.Key == slog.TimeKey {
return slog.Attr{}
}
// Simplify duration
if a.Key == "duration" {
return slog.String("duration", "Xms")
}
return a
},
}))),
WithMetrics(func(op string, d time.Duration, err error) {
fmt.Printf("metric: %s took some time\n", op)
}),
)
fmt.Println("=== Calling Generate ===")
fmt.Println("Flow: Logging.Generate → Metrics.Generate → base.Generate")
_, _ = gen.Generate(context.Background(), gai.Dialog{}, nil)
fmt.Println("\n=== Calling Count ===")
fmt.Println("Flow: Metrics.Count → base.Count (Logging has no Count override)")
_, _ = gen.(gai.TokenCounter).Count(context.Background(), gai.Dialog{})
}
Output:

=== Calling Generate ===
Flow: Logging.Generate → Metrics.Generate → base.Generate
level=INFO msg="generate: starting" messages=0
metric: generate took some time
level=INFO msg="generate: finished" duration=Xms error=<nil>

=== Calling Count ===
Flow: Metrics.Count → base.Count (Logging has no Count override)
metric: count took some time
Index ¶
- Constants
- Variables
- func CacheReadTokens(m Metadata) (int, bool)
- func CacheWriteTokens(m Metadata) (int, bool)
- func EnableMultiTurnCaching(_ context.Context, params *a.MessageNewParams) error
- func EnableSystemCaching(_ context.Context, params *a.MessageNewParams) error
- func GenerateSchema[T any]() (*jsonschema.Schema, error)
- func GetMetric[T any](m Metadata, key string) (T, bool)
- func InputTokens(m Metadata) (int, bool)
- func MarshalJSONToolUseInput(t ToolCallInput) ([]byte, error)
- func NewAnthropicGenerator(client AnthropicSvc, model, systemInstructions string) interface{ ... }
- func NewGeminiGenerator(client *genai.Client, modelName, systemInstructions string) (interface{ ... }, error)
- func OutputTokens(m Metadata) (int, bool)
- func Ptr[T any](v T) *T
- type AnthropicGenerator
- func (g *AnthropicGenerator) Count(ctx context.Context, dialog Dialog) (uint, error)
- func (g *AnthropicGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
- func (g *AnthropicGenerator) Register(tool Tool) error
- func (g *AnthropicGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
- type AnthropicServiceParamModifierFunc
- type AnthropicServiceWrapper
- func (svc AnthropicServiceWrapper) CountTokens(ctx context.Context, params a.MessageCountTokensParams, ...) (res *a.MessageTokensCount, err error)
- func (svc AnthropicServiceWrapper) New(ctx context.Context, params a.MessageNewParams, opts ...option.RequestOption) (res *a.Message, err error)
- func (svc AnthropicServiceWrapper) NewStreaming(ctx context.Context, params a.MessageNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[a.MessageStreamEventUnion])
- type AnthropicSvc
- type ApiErr
- type AudioConfig
- type AuthenticationErr
- type Block
- func AudioBlock(data []byte, mimeType string) Block
- func ImageBlock(data []byte, mimeType string) Block
- func MetadataBlock(metadata Metadata) Block
- func PDFBlock(data []byte, filename string) Block
- func TextBlock(text string) Block
- func ToolCallBlock(id, toolName string, parameters map[string]any) (Block, error)
- type CallbackExecErr
- type CerebrasGenerator
- type ContentPolicyErr
- type Dialog
- type FallbackConfig
- type FallbackGenerator
- type FinishReason
- type GeminiGenerator
- func (g *GeminiGenerator) Count(ctx context.Context, dialog Dialog) (uint, error)
- func (g *GeminiGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
- func (g *GeminiGenerator) Register(tool Tool) error
- func (g *GeminiGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
- type GenOpts
- type GenOptsGenerator
- type Generator
- type GeneratorWrapper
- func (w *GeneratorWrapper) Count(ctx context.Context, dialog Dialog) (uint, error)
- func (w *GeneratorWrapper) Generate(ctx context.Context, dialog Dialog, opts *GenOpts) (Response, error)
- func (w *GeneratorWrapper) Register(tool Tool) error
- func (w *GeneratorWrapper) Stream(ctx context.Context, dialog Dialog, opts *GenOpts) iter.Seq2[StreamChunk, error]
- type InvalidParameterErr
- type InvalidToolChoiceErr
- type Message
- type Metadata
- type Modality
- type OpenAICompletionService
- type OpenAiGenerator
- func (g *OpenAiGenerator) Count(ctx context.Context, dialog Dialog) (uint, error)
- func (g *OpenAiGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
- func (g *OpenAiGenerator) Register(tool Tool) error
- func (g *OpenAiGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
- type OpenRouterGenerator
- type PreprocessingGenerator
- type RateLimitErr
- type Response
- type ResponsesGenerator
- type ResponsesService
- type RetryGenerator
- type Role
- type Str
- type StreamChunk
- type StreamingAdapter
- type StreamingGenerator
- type TokenCounter
- type Tool
- type ToolCallBackFunc
- type ToolCallInput
- type ToolCallback
- type ToolCapableGenerator
- type ToolGenerator
- type ToolRegister
- type ToolRegistrationErr
- type UnsupportedInputModalityErr
- type UnsupportedOutputModalityErr
- type Validator
- type WrapperFunc
- type ZaiCompletionService
- type ZaiGenerator
- type ZaiGeneratorOption
Examples ¶
- Package (CreatingAWrapper)
- Package (MiddlewareCallFlow)
- Package (MixGenerators)
- Package (SelectiveOverride)
- AnthropicGenerator.Count
- AnthropicGenerator.Generate
- AnthropicGenerator.Generate (Image)
- AnthropicGenerator.Generate (Pdf)
- AnthropicGenerator.Generate (Thinking)
- AnthropicGenerator.Register
- AnthropicGenerator.Register (ParallelToolUse)
- AnthropicGenerator.Stream
- AnthropicGenerator.Stream (ParallelToolUse)
- CerebrasGenerator.Generate
- CerebrasGenerator.Generate (Reasoning_gptoss)
- CerebrasGenerator.Generate (Reasoning_zai)
- CerebrasGenerator.Register
- FallbackGenerator
- FallbackGenerator.Generate
- FallbackGenerator.Generate (CustomFallbackConfig)
- GeminiGenerator.Count
- GeminiGenerator.Generate
- GeminiGenerator.Generate (Audio)
- GeminiGenerator.Generate (Image)
- GeminiGenerator.Generate (Pdf)
- GeminiGenerator.Register
- GeminiGenerator.Register (ParallelToolUse)
- GeminiGenerator.Register (ParallelToolUse_multimedia)
- GeminiGenerator.Stream
- GeminiGenerator.Stream (ParallelToolUse)
- OpenAiGenerator.Count
- OpenAiGenerator.Generate
- OpenAiGenerator.Generate (Audio)
- OpenAiGenerator.Generate (Image)
- OpenAiGenerator.Generate (OpenRouter)
- OpenAiGenerator.Generate (Pdf)
- OpenAiGenerator.Generate (Thinking)
- OpenAiGenerator.Register
- OpenAiGenerator.Register (OpenRouter)
- OpenAiGenerator.Register (OpenRouterParallelToolUse)
- OpenAiGenerator.Register (ParallelToolUse)
- OpenAiGenerator.Stream
- OpenAiGenerator.Stream (ParallelToolUse)
- OpenRouterGenerator.Generate
- OpenRouterGenerator.Generate (Image)
- OpenRouterGenerator.Generate (InvalidModel)
- OpenRouterGenerator.Generate (ReasoningModel)
- OpenRouterGenerator.Register
- ResponsesGenerator.Generate
- ResponsesGenerator.Generate (Image)
- ResponsesGenerator.Generate (Pdf)
- ResponsesGenerator.Generate (Thinking)
- ResponsesGenerator.Register
- ResponsesGenerator.Register (ParallelToolUse)
- ResponsesGenerator.Stream (Thinking)
- StreamingAdapter
- StreamingAdapter (CustomUsage)
- StreamingAdapter (ErrorHandling)
- StreamingAdapter (MultipleBlocks)
- StreamingAdapter (ParallelToolCalls)
- StreamingAdapter (Responses)
- StreamingAdapter (Responses_toolUse)
- StreamingAdapter (WithToolGenerator)
- StreamingAdapter (WithTools)
- ToolCallBackFunc
- ToolGenerator.Generate
- ToolGenerator.Generate (Responses)
- ZaiGenerator (DisableThinking)
- ZaiGenerator.Generate
- ZaiGenerator.Generate (InterleavedThinking)
- ZaiGenerator.Generate (MultiTurn)
- ZaiGenerator.Generate (Thinking)
- ZaiGenerator.Register
- ZaiGenerator.Stream
- ZaiGenerator.Stream (ToolCalling)
Constants ¶
const (
	// GeminiExtraFieldThoughtSignature stores the thought signature for thinking blocks.
	// Present in Block.ExtraFields for Thinking blocks from Gemini responses.
	// This signature is required when sending thinking blocks back to the API.
	GeminiExtraFieldThoughtSignature = "gemini_thought_signature"

	// GeminiExtraFieldFunctionName stores the function name for tool call blocks.
	// Present in Block.ExtraFields for ToolCall blocks from Gemini responses.
	GeminiExtraFieldFunctionName = "function_name"
)
const (
	ToolChoiceAuto          = "auto"
	ToolChoiceToolsRequired = "required"
)
const (
	// Content represents unstructured content of a single Modality, like text, images and audio.
	Content = "content"

	// Thinking represents the thinking/reasoning content a Generator produced.
	// When a block has this type, its ExtraFields will contain ThinkingExtraFieldGeneratorKey
	// to identify which generator produced the thinking content. This allows consumers to
	// handle thinking blocks differently based on their source (e.g., accessing Anthropic-specific
	// signature fields via AnthropicExtraFieldThinkingSignature).
	Thinking = "thinking"

	// ToolCall represents a tool call request by the model.
	ToolCall = "tool_call"

	// MetadataBlockType represents a block containing usage metadata.
	MetadataBlockType = "metadata"

	// ThinkingExtraFieldGeneratorKey is set in Block.ExtraFields for Thinking blocks to identify
	// which generator produced the thinking content. All generators that support thinking blocks
	// set this field automatically.
	//
	// The value is one of the ThinkingGenerator* constants (e.g., ThinkingGeneratorAnthropic).
	//
	// Example usage:
	//
	//	for _, block := range message.Blocks {
	//		if block.BlockType == gai.Thinking {
	//			if gen, ok := block.ExtraFields[gai.ThinkingExtraFieldGeneratorKey]; ok {
	//				switch gen {
	//				case gai.ThinkingGeneratorAnthropic:
	//					// Access Anthropic-specific fields like AnthropicExtraFieldThinkingSignature
	//				case gai.ThinkingGeneratorGemini:
	//					// Access Gemini-specific fields like GeminiExtraFieldThoughtSignature
	//				}
	//			}
	//		}
	//	}
	ThinkingExtraFieldGeneratorKey = "thinking_generator"

	// ThinkingGeneratorAnthropic identifies thinking blocks from the Anthropic generator.
	// Anthropic thinking blocks may also contain AnthropicExtraFieldThinkingSignature.
	ThinkingGeneratorAnthropic = "anthropic"

	// ThinkingGeneratorCerebras identifies thinking blocks from the Cerebras generator.
	ThinkingGeneratorCerebras = "cerebras"

	// ThinkingGeneratorGemini identifies thinking blocks from the Gemini generator.
	// Gemini thinking blocks may also contain GeminiExtraFieldThoughtSignature.
	ThinkingGeneratorGemini = "gemini"

	// ThinkingGeneratorOpenRouter identifies thinking blocks from the OpenRouter generator.
	// OpenRouter thinking blocks may also contain OpenRouterExtraFieldReasoningType,
	// OpenRouterExtraFieldReasoningFormat, OpenRouterExtraFieldReasoningIndex, and
	// OpenRouterExtraFieldReasoningSignature.
	ThinkingGeneratorOpenRouter = "openrouter"

	// ThinkingGeneratorResponses identifies thinking blocks from the OpenAI Responses generator.
	ThinkingGeneratorResponses = "responses"

	// ThinkingGeneratorZai identifies thinking blocks from the Zai generator.
	ThinkingGeneratorZai = "zai"
)
const (
	// UsageMetricInputTokens is a metric key representing the number of tokens in the input Dialog.
	// The value associated with this key is expected to be of type int.
	UsageMetricInputTokens = "input_tokens"

	// UsageMetricGenerationTokens is a metric key representing the number of tokens generated
	// in the Response. The value associated with this key is expected to be of type int.
	UsageMetricGenerationTokens = "gen_tokens"

	// UsageMetricCacheReadTokens is a metric key representing the number of tokens read from cache.
	// This applies to providers that support prompt caching (e.g., Anthropic, OpenAI).
	// The value associated with this key is expected to be of type int.
	UsageMetricCacheReadTokens = "cache_read_tokens"

	// UsageMetricCacheWriteTokens is a metric key representing the number of tokens written to cache.
	// This applies to providers that support prompt caching (e.g., Anthropic).
	// The value associated with this key is expected to be of type int.
	UsageMetricCacheWriteTokens = "cache_write_tokens"

	// UsageMetricReasoningTokens is a metric key representing the number of reasoning tokens
	// in the output. This applies to providers that support reasoning/thinking models
	// (e.g., OpenAI Responses API with reasoning enabled).
	// The value associated with this key is expected to be of type int.
	UsageMetricReasoningTokens = "reasoning_tokens"
)
const (
	// OpenAIExtraFieldImageWidth stores the image width in pixels.
	// Can be set in Block.ExtraFields for Image blocks to specify dimensions
	// when they cannot be determined from the image data.
	OpenAIExtraFieldImageWidth = "width"

	// OpenAIExtraFieldImageHeight stores the image height in pixels.
	// Can be set in Block.ExtraFields for Image blocks to specify dimensions
	// when they cannot be determined from the image data.
	OpenAIExtraFieldImageHeight = "height"

	// OpenAIExtraFieldImageDetail stores the detail level for image processing.
	// Can be set in Block.ExtraFields for Image blocks to "low" or "high".
	// Defaults to "high" if not specified.
	OpenAIExtraFieldImageDetail = "detail"
)
const (
	// OpenRouterExtraFieldReasoningType stores the reasoning detail type (e.g., "reasoning.summary", "reasoning.text", "reasoning.encrypted").
	// Present in Block.ExtraFields for Thinking blocks from OpenRouter responses.
	OpenRouterExtraFieldReasoningType = "reasoning_type"

	// OpenRouterExtraFieldReasoningFormat stores the reasoning detail format (e.g., "anthropic-claude-v1").
	// Present in Block.ExtraFields for Thinking blocks from OpenRouter responses.
	OpenRouterExtraFieldReasoningFormat = "reasoning_format"

	// OpenRouterExtraFieldReasoningIndex stores the zero-based index of the reasoning detail in the response.
	// Present in Block.ExtraFields for Thinking blocks from OpenRouter responses.
	OpenRouterExtraFieldReasoningIndex = "reasoning_index"

	// OpenRouterExtraFieldReasoningSignature stores the signature for encrypted reasoning details.
	// Present in Block.ExtraFields for Thinking blocks with type "reasoning.text" when a signature is provided.
	OpenRouterExtraFieldReasoningSignature = "reasoning_signature"

	// OpenRouterUsageMetricReasoningDetailsAvailable indicates whether reasoning_details were present in the response.
	// Stored in Response.UsageMetadata as a boolean value.
	OpenRouterUsageMetricReasoningDetailsAvailable = "reasoning_details_available"
)
const (
	// AnthropicExtraFieldThinkingSignature stores the thinking signature for extended thinking blocks.
	// Present in Block.ExtraFields for Thinking blocks from Anthropic responses.
	// This signature is required when sending thinking blocks back to the API.
	AnthropicExtraFieldThinkingSignature = "anthropic_thinking_signature"
)
const BlockFieldFilenameKey = "filename"
const ResponsesExtraFieldEncryptedContent = "responses_encrypted_content"
ResponsesExtraFieldEncryptedContent is the key used in Block.ExtraFields for Thinking blocks to store the encrypted reasoning content from the Responses API. When using the API in stateless mode (store=false), encrypted reasoning items must be passed back to the API during multi-turn function-calling conversations so the model can continue its reasoning.
Per the OpenAI docs: reasoning items should be included in the input for subsequent turns during ongoing function call chains (i.e., between the last user message and the current request). Once the assistant produces a non-tool-call response and a new user message begins a new turn, previous encrypted reasoning items are no longer needed. The API will automatically ignore reasoning items that aren't relevant to the current context.
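The retention rule above can be sketched as a plain filter: reasoning belonging to turns before the last user message may be dropped, while reasoning in the ongoing chain is kept. This is an illustrative, self-contained sketch only (not gai's API; messages and blocks are reduced to plain strings for brevity), and pruning is strictly optional since the API ignores irrelevant reasoning items anyway:

```go
package main

import "fmt"

type message struct {
	role   string // "user", "assistant", or "tool_result"
	blocks []string
}

// pruneStaleReasoning drops "thinking" blocks from any turn that precedes the
// last user message: per the rule above, encrypted reasoning items only matter
// for the ongoing function-call chain that follows the latest user turn.
func pruneStaleReasoning(dialog []message) []message {
	lastUser := -1
	for i, m := range dialog {
		if m.role == "user" {
			lastUser = i
		}
	}
	out := make([]message, len(dialog))
	for i, m := range dialog {
		out[i] = m
		if i >= lastUser {
			continue // keep reasoning in the current chain
		}
		var kept []string
		for _, b := range m.blocks {
			if b == "thinking" {
				continue // stale reasoning from a finished turn
			}
			kept = append(kept, b)
		}
		out[i].blocks = kept
	}
	return out
}

func main() {
	dialog := []message{
		{role: "user", blocks: []string{"content"}},
		{role: "assistant", blocks: []string{"thinking", "content"}},
		{role: "user", blocks: []string{"content"}},
		{role: "assistant", blocks: []string{"thinking", "tool_call"}},
		{role: "tool_result", blocks: []string{"content"}},
	}
	for _, m := range pruneStaleReasoning(dialog) {
		fmt.Println(m.role, m.blocks)
	}
}
```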
const ResponsesExtraFieldReasoningID = "responses_reasoning_id"
ResponsesExtraFieldReasoningID is the key used in Block.ExtraFields for Thinking blocks to store the reasoning item's unique ID from the Responses API. This is needed to reconstruct reasoning input items when passing back in multi-turn conversations.
const ResponsesThoughtSummaryDetailParam = "responses_thought_summary_detail"
ResponsesThoughtSummaryDetailParam is a key used for storing the thought summary detail level in GenOpts.ExtraArgs. Setting this parameter sets the level of detail of thought summaries returned from the OpenAI Responses API. One of `auto`, `concise`, or `detailed`.
Variables ¶
var ContextLengthExceededErr = errors.New("context length exceeded")
ContextLengthExceededErr is returned when the total number of tokens in the input Dialog exceeds the maximum context length supported by the Generator. Different Generator implementations may have different context length limits.
var EmptyDialogErr = errors.New("empty dialog: at least one message required")
EmptyDialogErr is returned when an empty Dialog is provided to Generate. At least one Message must be present in the Dialog.
var MaxGenerationLimitErr = errors.New("maximum generation limit reached")
MaxGenerationLimitErr is returned when a Generator has generated the maximum number of tokens specified by GenOpts.MaxGenerationTokens. This error indicates that the generation was terminated due to reaching the token limit rather than natural completion.
Functions ¶
func CacheReadTokens ¶ added in v0.30.0
CacheReadTokens returns the number of tokens read from cache from the metrics. This metric is populated by providers that support prompt caching (e.g., Anthropic, OpenAI). The first return value is the number of cache read tokens, and the second indicates whether the metric was present in the map.
If the metric is not present, returns (0, false). If the metric is present, returns (tokens, true).
Panics if the value in the metrics map cannot be type asserted to int.
func CacheWriteTokens ¶ added in v0.30.0
CacheWriteTokens returns the number of tokens written to cache from the metrics. This metric is populated by providers that support prompt caching (e.g., Anthropic). The first return value is the number of cache write tokens, and the second indicates whether the metric was present in the map.
If the metric is not present, returns (0, false). If the metric is present, returns (tokens, true).
Panics if the value in the metrics map cannot be type asserted to int.
func EnableMultiTurnCaching ¶ added in v0.4.0
func EnableMultiTurnCaching(_ context.Context, params *a.MessageNewParams) error
EnableMultiTurnCaching modifies Anthropic API parameters to enable caching for multi-turn conversations. This can significantly improve response time and reduce costs when having extended conversations with an Anthropic model.
When applied, this modifier adds an "ephemeral" cache control directive to the last content block of the last message in the conversation, enabling caching for various types of content including text, images, tool use, tool results, and documents.
Example:
// Create a wrapped client with multi-turn conversation caching
wrappedClient := NewAnthropicServiceWrapper(
	client.Messages,
	EnableMultiTurnCaching,
)

// Use the wrapped client with your generator
generator := NewAnthropicGenerator(
	wrappedClient,
	"claude-3-opus-20240229",
	"You are a helpful assistant.",
)
Note: This has no effect if the request doesn't include any messages. For models prior to Claude Opus 4.5, caching is skipped when extended thinking is enabled because thinking blocks are stripped from prior turns, invalidating the cache. For Claude Opus 4.5 and later, thinking blocks are preserved by default, so caching works normally even with extended thinking.
It is particularly useful for applications with interactive, multi-turn conversations.
func EnableSystemCaching ¶ added in v0.4.0
func EnableSystemCaching(_ context.Context, params *a.MessageNewParams) error
EnableSystemCaching modifies Anthropic API parameters to enable caching of system instructions. This can improve performance and reduce costs when making multiple requests with the same system instructions.
When applied, this modifier adds an "ephemeral" cache control directive to the last system instruction block, indicating to Anthropic's API that the system instruction can be cached.
Example:
// Create a wrapped client with system instruction caching
wrappedClient := NewAnthropicServiceWrapper(
	client.Messages,
	EnableSystemCaching,
)

// Use the wrapped client with your generator
generator := NewAnthropicGenerator(
	wrappedClient,
	"claude-3-opus-20240229",
	"You are a helpful assistant.",
)
Note: This has no effect if the request doesn't include system instructions. System prompts remain cached even with extended thinking enabled.
func GenerateSchema ¶ added in v0.8.0
func GenerateSchema[T any]() (*jsonschema.Schema, error)
GenerateSchema is a helper function that generates the JSON Schema definition for Tool.InputSchema.
func GetMetric ¶
GetMetric is a generic function that retrieves a metric value of type T from the metrics map. The first return value is the metric value of type T, and the second indicates whether the metric was present in the map.
If the metric is not present, returns (zero value of T, false). If the metric is present, returns (metric value, true).
Panics if the value in the metrics map cannot be type asserted to T.
Example usage:
// Get a float64 metric
if cost, ok := GetMetric[float64](metrics, "cost"); ok {
	fmt.Printf("Request cost: $%.2f\n", cost)
}

// Get a string metric
if model, ok := GetMetric[string](metrics, "model"); ok {
	fmt.Printf("Model used: %s\n", model)
}
func InputTokens ¶
InputTokens returns the number of tokens in the input Dialog from the metrics. The first return value is the number of input tokens, and the second indicates whether the metric was present in the map.
If the metric is not present, returns (0, false). If the metric is present, returns (tokens, true).
Panics if the value in the metrics map cannot be type asserted to int.
func MarshalJSONToolUseInput ¶ added in v0.4.0
func MarshalJSONToolUseInput(t ToolCallInput) ([]byte, error)
MarshalJSONToolUseInput marshals a ToolCallInput; it never panics.
func NewAnthropicGenerator ¶
func NewAnthropicGenerator(client AnthropicSvc, model, systemInstructions string) interface { ToolCapableGenerator StreamingGenerator TokenCounter }
NewAnthropicGenerator creates a new Anthropic generator with the specified model. It returns a ToolCapableGenerator that preprocesses dialog for parallel tool use compatibility. This generator fully supports the anyOf JSON Schema feature.
Parameters:
- client: An Anthropic service client
- model: The Anthropic model to use (e.g., "claude-3-5-sonnet-20241022")
- systemInstructions: Optional system instructions that set the model's behavior
Supported modalities:
- Text: Both input and output
- Image: Input only (base64 encoded, including PDFs with MIME type "application/pdf")
PDF documents are handled specially using Anthropic's NewDocumentBlock function, which provides optimized PDF processing. Use the PDFBlock helper function to create PDF content blocks.
The returned generator also implements the TokenCounter interface for token counting.
func NewGeminiGenerator ¶ added in v0.4.0
func NewGeminiGenerator(client *genai.Client, modelName, systemInstructions string) (interface { ToolCapableGenerator StreamingGenerator TokenCounter }, error)
NewGeminiGenerator creates a new Gemini generator with the specified client, model name, and system instructions. Returns a ToolCapableGenerator that preprocesses dialog for parallel tool use compatibility. The returned generator also implements the TokenCounter interface for token counting.
Parameters:
- client: A properly initialized genai.Client instance with API key configured
- modelName: The Gemini model to use (e.g., "gemini-1.5-pro", "gemini-1.5-flash")
- systemInstructions: Optional system instructions that set the model's behavior
Supported modalities:
- Text: Both input and output
- Image: Input only (base64 encoded, including PDFs with MIME type "application/pdf")
- Audio: Input only (base64 encoded)
PDF documents are supported as part of the Image modality. The PDF content is sent with the appropriate MIME type and processed by Gemini's multimodal capabilities. Use the PDFBlock helper function to create PDF content blocks.
Note on JSON Schema support limitations:
- The anyOf property has limited support in Gemini. It only supports the pattern [Type, null] to indicate nullable fields, which is implemented using Schema.Nullable=true.
- If you use anyOf with multiple non-null types or with only the null type, this generator will return errors, as the Gemini SDK doesn't support these patterns.
- For maximum compatibility across all generators, restrict usage of anyOf to the nullable pattern: e.g., "anyOf": [{"type": "string"}, {"type": "null"}]
Returns a ToolCapableGenerator that also implements TokenCounter, or an error if initialization fails.
func OutputTokens ¶
OutputTokens returns the number of tokens generated in the Response from the metrics. The first return value is the number of generated tokens, and the second indicates whether the metric was present in the map.
If the metric is not present, returns (0, false). If the metric is present, returns (tokens, true).
Panics if the value in the metrics map cannot be type asserted to int.
Types ¶
type AnthropicGenerator ¶
type AnthropicGenerator struct {
// contains filtered or unexported fields
}
AnthropicGenerator implements the gai.Generator interface using Anthropic's API
func (*AnthropicGenerator) Count ¶ added in v0.4.8
Count implements the TokenCounter interface for AnthropicGenerator. It converts the dialog to Anthropic's format and uses Anthropic's dedicated CountTokens API.
Unlike the OpenAI implementation which uses a local tokenizer, this method makes an API call to the Anthropic service. This provides the most accurate token count as it uses exactly the same tokenization logic as the actual generation.
The method accounts for:
- System instructions (if set during generator initialization)
- All messages in the dialog with their respective blocks
- Multi-modal content like images
- Tool definitions registered with the generator
The context parameter allows for cancellation of the API call.
Returns:
- The total token count as uint, representing input tokens only
- An error if the API call fails or if dialog conversion fails
Note: Anthropic's CountTokens API returns only input token count. For an estimate of output tokens, you would need to perform a separate calculation.
Example ¶
// Create an Anthropic client
client := a.NewClient()

// Create a generator with system instructions
generator := NewAnthropicGenerator(
	&client.Messages,
	string(a.ModelClaudeHaiku4_5),
	"You are a helpful assistant.",
)

// Create a dialog with a user message
dialog := Dialog{
	{
		Role: User,
		Blocks: []Block{
			{
				BlockType:    Content,
				ModalityType: Text,
				Content:      Str("What is the capital of France?"),
			},
		},
	},
}

// Count tokens in the dialog
tokenCount, err := generator.Count(context.Background(), dialog)
if err != nil {
	fmt.Printf("Error counting tokens: %v\n", err)
	return
}
fmt.Printf("Dialog contains approximately %d tokens\n", tokenCount)

// Add a response to the dialog
dialog = append(dialog, Message{
	Role: Assistant,
	Blocks: []Block{
		{
			BlockType:    Content,
			ModalityType: Text,
			Content:      Str("The capital of France is Paris. It's a beautiful city known for its culture, art, and cuisine."),
		},
	},
})

// Count tokens in the updated dialog
tokenCount, err = generator.Count(context.Background(), dialog)
if err != nil {
	fmt.Printf("Error counting tokens: %v\n", err)
	return
}
fmt.Printf("Dialog with response contains approximately %d tokens\n", tokenCount)

Output:

Dialog contains approximately 20 tokens
Dialog with response contains approximately 42 tokens
func (*AnthropicGenerator) Generate ¶
func (g *AnthropicGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements gai.Generator
Example ¶
// Create an Anthropic client
client := a.NewClient()

// Demonstration of how to enable system prompt caching
svc := NewAnthropicServiceWrapper(&client.Messages, EnableSystemCaching)

// Instantiate an Anthropic Generator
gen := NewAnthropicGenerator(svc, string(a.ModelClaude3_5HaikuLatest), "You are a helpful assistant")

dialog := Dialog{
	{
		Role: User,
		Blocks: []Block{
			{
				BlockType:    Content,
				ModalityType: Text,
				Content:      Str("Hi!"),
			},
		},
	},
}

// Generate a response
// Note that the Anthropic generator requires that the max generation tokens param be set
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(1024)})
if err != nil {
	panic(err.Error())
}

// The exact response text may vary, so we'll just print a placeholder
fmt.Println("Response received")

// Customize generation parameters
opts := GenOpts{
	Temperature:         Ptr(0.7),
	MaxGenerationTokens: Ptr(1024),
}
resp, err = gen.Generate(context.Background(), dialog, &opts)
if err != nil {
	panic(err.Error())
}
fmt.Println(len(resp.Candidates))

Output:

Response received
1
Example (Image) ¶
apiKey := os.Getenv("ANTHROPIC_API_KEY")
if apiKey == "" {
	fmt.Println("[Skipped: set ANTHROPIC_API_KEY env]")
	return
}

// This example assumes that sample.jpg is present in the current directory.
imgBytes, err := os.ReadFile("sample.jpg")
if err != nil {
	fmt.Println("[Skipped: could not open sample.jpg]")
	return
}
imgBase64 := Str(base64.StdEncoding.EncodeToString(imgBytes))

client := a.NewClient()
gen := NewAnthropicGenerator(
	&client.Messages,
	string(a.ModelClaudeHaiku4_5),
	"You are a helpful assistant.",
)

dialog := Dialog{
	{
		Role: User,
		Blocks: []Block{
			{
				BlockType:    Content,
				ModalityType: Image,
				MimeType:     "image/jpeg",
				Content:      imgBase64,
			},
			{
				BlockType:    Content,
				ModalityType: Text,
				Content:      Str("What is in this image? (Hint, it's a character from The Croods, a DreamWorks animated movie.)"),
			},
		},
	},
}

resp, err := gen.Generate(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(512)})
if err != nil {
	fmt.Println("Error:", err)
	return
}
if len(resp.Candidates) != 1 {
	panic("Expected 1 candidate, got " + fmt.Sprint(len(resp.Candidates)))
}
if len(resp.Candidates[0].Blocks) != 1 {
	panic("Expected 1 block, got " + fmt.Sprint(len(resp.Candidates[0].Blocks)))
}
fmt.Println(strings.Contains(resp.Candidates[0].Blocks[0].Content.String(), "Crood"))

Output:

true
Example (Pdf) ¶
apiKey := os.Getenv("ANTHROPIC_API_KEY")
if apiKey == "" {
	fmt.Println("[Skipped: set ANTHROPIC_API_KEY env]")
	return
}

// This example assumes that sample.pdf is present in the current directory.
pdfBytes, err := os.ReadFile("sample.pdf")
if err != nil {
	fmt.Println("[Skipped: could not open sample.pdf]")
	return
}

client := a.NewClient()
gen := NewAnthropicGenerator(
	&client.Messages,
	string(a.ModelClaudeSonnet4_0),
	"You are a helpful assistant.",
)

// Create a dialog with PDF content
dialog := Dialog{
	{
		Role: User,
		Blocks: []Block{
			TextBlock("What is the title of this PDF? Just output the title and nothing else"),
			PDFBlock(pdfBytes, "paper.pdf"),
		},
	},
}

// Generate a response
ctx := context.Background()
response, err := gen.Generate(ctx, dialog, &GenOpts{MaxGenerationTokens: Ptr(1024)})
if err != nil {
	fmt.Printf("Error: %v\n", err)
	return
}

// The response would contain the model's analysis of the PDF
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
	fmt.Println(response.Candidates[0].Blocks[0].Content)
}

Output:

Attention Is All You Need
Example (Thinking) ¶
// Create an Anthropic client
client := a.NewClient()

// Instantiate an Anthropic Generator
gen := NewAnthropicGenerator(&client.Messages, string(a.ModelClaudeSonnet4_0), "You are a helpful assistant")

dialog := Dialog{
	{
		Role: User,
		Blocks: []Block{
			{
				BlockType:    Content,
				ModalityType: Text,
				Content:      Str("Hi!"),
			},
		},
	},
}

// Use thinking
opts := GenOpts{
	Temperature:         Ptr(1.0),
	MaxGenerationTokens: Ptr(9000),
	ThinkingBudget:      "5000",
}
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
	panic(err.Error())
}
fmt.Println(len(resp.Candidates))

dialog = append(dialog, resp.Candidates[0], Message{
	Role: User,
	Blocks: []Block{
		{
			BlockType:    Content,
			ModalityType: Text,
			Content:      Str("What can you do?"),
		},
	},
})
resp, err = gen.Generate(context.Background(), dialog, &opts)
if err != nil {
	panic(err.Error())
}
fmt.Println(len(resp.Candidates))

Output:

1
1
func (*AnthropicGenerator) Register ¶
func (g *AnthropicGenerator) Register(tool Tool) error
Register implements gai.ToolRegister
Example ¶
// Create an Anthropic client
client := a.NewClient()

// Demonstration of how to enable system and multi-turn message prompt caching
svc := NewAnthropicServiceWrapper(&client.Messages, EnableSystemCaching, EnableMultiTurnCaching)

// Instantiate an Anthropic Generator
gen := NewAnthropicGenerator(
	svc,
	string(a.ModelClaudeSonnet4_5),
	`You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`,
)

// Register tools
tickerTool := Tool{
	Name:        "get_stock_price",
	Description: "Get the current stock price for a given ticker symbol.",
	InputSchema: func() *jsonschema.Schema {
		schema, err := GenerateSchema[struct {
			Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
		}]()
		if err != nil {
			panic(err)
		}
		return schema
	}(),
}
if err := gen.Register(tickerTool); err != nil {
	panic(err.Error())
}

dialog := Dialog{
	{
		Role: User,
		Blocks: []Block{
			{
				BlockType:    Content,
				ModalityType: Text,
				Content:      Str("What is the price of Apple stock?"),
			},
		},
	},
}

// Customize generation parameters
opts := GenOpts{
	ToolChoice:          "get_stock_price", // Can specify a specific tool to force invoke
	MaxGenerationTokens: Ptr(8096),
}

// Generate a response
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
	panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)

dialog = append(dialog, resp.Candidates[0], Message{
	Role: ToolResult,
	Blocks: []Block{
		{
			ID:           resp.Candidates[0].Blocks[0].ID,
			ModalityType: Text,
			Content:      Str("123.45"),
		},
	},
})
resp, err = gen.Generate(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(8096)})
if err != nil {
	panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)

Output:

{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
123.45
Example (ParallelToolUse) ¶
// Create an Anthropic client
client := a.NewClient()
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Instantiate an Anthropic Generator
gen := NewAnthropicGenerator(
&client.Messages,
string(a.ModelClaudeSonnet4_0),
`You are a helpful assistant that compares the price of two stocks and returns the ticker of whichever is greater.
Only mention one of the stock tickers and nothing else.
Only output the price, like
<example>
User: Which one is more expensive? Apple or NVidia?
Assistant: calls get_stock_price for both Apple and Nvidia
Tool Result: Apple: 123.45; Nvidia: 345.65
Assistant: Nvidia
</example>
<example>
User: Which one is more expensive? Microsft or Netflix?
Assistant: calls get_stock_price for both Apple and Nvidia
Tool Result: MSFT: 876.45; NFLX: 345.65
Assistant: MSFT
</example>
`,
)
// Register tools
tickerTool.Description += "\nYou can call this tool in parallel"
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Which stock, Apple vs. Microsoft, is more expensive?"),
},
},
},
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{
MaxGenerationTokens: Ptr(8096),
ThinkingBudget: "4000",
})
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[1].Content)
fmt.Println(resp.Candidates[0].Blocks[2].Content)
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[1].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
}, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[2].ID,
ModalityType: Text,
Content: Str("678.45"),
},
},
})
resp, err = gen.Generate(context.Background(), dialog, &GenOpts{
MaxGenerationTokens: Ptr(8096),
ThinkingBudget: "4000",
})
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
{"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
MSFT
func (*AnthropicGenerator) Stream ¶ added in v0.6.0
func (g *AnthropicGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
Example ¶
// Create an Anthropic client
client := a.NewClient()
// Demonstration of how to enable system prompt caching
svc := NewAnthropicServiceWrapper(&client.Messages, EnableSystemCaching)
// Instantiate an Anthropic Generator
gen := NewAnthropicGenerator(svc, string(a.ModelClaude3_5HaikuLatest), "You are a helpful assistant")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hi!"),
},
},
},
}
// Stream a response
var blocks []Block
for chunk, err := range gen.Stream(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(1024)}) {
if err != nil {
fmt.Println(err.Error())
return
}
blocks = append(blocks, chunk.Block)
}
if len(blocks) > 0 {
fmt.Println("Response received")
}
Output: Response received
Example (ParallelToolUse) ¶
// Create an Anthropic client
client := a.NewClient()
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Instantiate an Anthropic Generator
gen := NewAnthropicGenerator(
&client.Messages,
string(a.ModelClaudeSonnet4_0),
`You are a helpful assistant that compares the price of two stocks and returns the ticker of whichever is greater.
Only mention one of the stock tickers and nothing else.
Only output the ticker, like
<example>
User: Which one is more expensive? Apple or NVidia?
Assistant: calls get_stock_price for both Apple and Nvidia
Tool Result: Apple: 123.45; Nvidia: 345.65
Assistant: Nvidia
</example>
<example>
User: Which one is more expensive? Microsoft or Netflix?
Assistant: calls get_stock_price for both Microsoft and Netflix
Tool Result: MSFT: 876.45; NFLX: 345.65
Assistant: MSFT
</example>
`,
)
// Register tools
tickerTool.Description += "\nYou can call this tool in parallel"
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Which stock, Apple vs. Microsoft, is more expensive?"),
},
},
},
}
// Stream a response
var blocks []Block
for chunk, err := range gen.Stream(context.Background(), dialog, &GenOpts{
MaxGenerationTokens: Ptr(32000),
ThinkingBudget: "10000",
}) {
if err != nil {
fmt.Println(err.Error())
return
}
blocks = append(blocks, chunk.Block)
}
if len(blocks) > 1 {
fmt.Println("Response received")
}
// collect the blocks
var prevToolCallId string
var toolCalls []Block
var toolcallArgs string
var toolCallInput ToolCallInput
thinking := Block{
BlockType: Thinking,
ModalityType: Text,
MimeType: "text/plain",
ExtraFields: make(map[string]interface{}),
}
thinkingStr := ""
for _, block := range blocks {
if block.BlockType == Thinking {
if block.Content != nil {
thinkingStr += block.Content.String()
}
maps.Copy(thinking.ExtraFields, block.ExtraFields)
continue
}
// Skip metadata blocks
if block.BlockType == MetadataBlockType {
continue
}
if block.ID != "" && block.ID != prevToolCallId {
if toolcallArgs != "" {
// Parse the arguments string into a map
if err := json.Unmarshal([]byte(toolcallArgs), &toolCallInput.Parameters); err != nil {
panic(err.Error())
}
// Marshal back to JSON for consistent representation
toolUseJSON, err := json.Marshal(toolCallInput)
if err != nil {
panic(err.Error())
}
toolCalls[len(toolCalls)-1].Content = Str(toolUseJSON)
toolCallInput = ToolCallInput{}
toolcallArgs = ""
}
prevToolCallId = block.ID
toolCalls = append(toolCalls, Block{
ID: block.ID,
BlockType: ToolCall,
ModalityType: Text,
MimeType: "text/plain",
})
toolCallInput.Name = block.Content.String()
} else {
toolcallArgs += block.Content.String()
}
}
thinking.Content = Str(thinkingStr)
if toolcallArgs != "" {
// Parse the arguments string into a map
if err := json.Unmarshal([]byte(toolcallArgs), &toolCallInput.Parameters); err != nil {
panic(err.Error())
}
// Marshal back to JSON for consistent representation
toolUseJSON, err := json.Marshal(toolCallInput)
if err != nil {
panic(err.Error())
}
toolCalls[len(toolCalls)-1].Content = Str(toolUseJSON)
toolCallInput = ToolCallInput{}
}
fmt.Println(len(toolCalls))
assistantMsg := make([]Block, 0, len(toolCalls)+1)
assistantMsg = append(assistantMsg, thinking)
assistantMsg = append(assistantMsg, toolCalls...)
dialog = append(dialog, Message{
Role: Assistant,
Blocks: assistantMsg,
},
Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCalls[0].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
}, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCalls[1].ID,
ModalityType: Text,
Content: Str("678.45"),
},
},
})
// Stream a response
blocks = nil
for chunk, err := range gen.Stream(context.Background(), dialog, &GenOpts{
MaxGenerationTokens: Ptr(32000),
ThinkingBudget: "10000",
}) {
if err != nil {
fmt.Println(err.Error())
return
}
blocks = append(blocks, chunk.Block)
}
if len(blocks) > 0 {
fmt.Println("Response received")
}
Output:
Response received
2
Response received
type AnthropicServiceParamModifierFunc ¶ added in v0.4.0
type AnthropicServiceParamModifierFunc func(ctx context.Context, params *a.MessageNewParams) error
AnthropicServiceParamModifierFunc is a function type that modifies Anthropic API parameters before they are sent to the API. This allows for intercepting and modifying request parameters such as enabling caching, adding headers, or transforming the content.
The function receives a context and a pointer to the message parameters, and returns an error if the modification cannot be completed successfully. Multiple modifier functions can be chained together in a middleware-like pattern.
Example:
// Create a custom modifier that adds system context
addWeatherContext := func(_ context.Context, params *a.MessageNewParams) error {
params.System = append(params.System, a.SystemParam{Text: "Current weather: 72°F and sunny"})
return nil
}
// Wrap the Anthropic client with multiple modifiers
wrappedClient := NewAnthropicServiceWrapper(
client.Messages,
EnableSystemCaching,
EnableMultiTurnCaching,
addWeatherContext,
)
type AnthropicServiceWrapper ¶ added in v0.4.0
type AnthropicServiceWrapper struct {
// contains filtered or unexported fields
}
AnthropicServiceWrapper wraps an Anthropic API client with parameter modifier functions. This allows for intercepting and modifying requests before they are sent to the Anthropic API, enabling features like caching, request transformation, and dynamic context management.
The wrapper implements the AnthropicSvc interface, making it a drop-in replacement for the standard Anthropic client in the context of this library.
Common use cases include:
- Enabling API response caching for reduced latency and costs
- Adding dynamic system prompts based on runtime conditions
- Transforming or filtering message content
- Adding consistent metadata or parameters across all requests
func NewAnthropicServiceWrapper ¶ added in v0.4.0
func NewAnthropicServiceWrapper(wrapped AnthropicSvc, funcs ...AnthropicServiceParamModifierFunc) *AnthropicServiceWrapper
NewAnthropicServiceWrapper creates a new wrapper around an Anthropic API client with the provided parameter modifier functions.
The wrapper intercepts API calls, applies the modifier functions in sequence, and then forwards the modified parameters to the actual Anthropic API client.
This pattern is useful for consistently applying transformations or middleware-like functionality to all Anthropic API calls without modifying client code.
Example:
// Create a wrapped client with caching enabled
wrappedClient := NewAnthropicServiceWrapper(
client.Messages,
EnableSystemCaching,
EnableMultiTurnCaching,
)
// Use the wrapped client with AnthropicGenerator
generator := NewAnthropicGenerator(
wrappedClient,
"claude-3-opus-20240229",
"You are a helpful assistant.",
)
func (AnthropicServiceWrapper) CountTokens ¶ added in v0.4.8
func (svc AnthropicServiceWrapper) CountTokens(ctx context.Context, params a.MessageCountTokensParams, opts ...option.RequestOption) (res *a.MessageTokensCount, err error)
CountTokens forwards token counting requests to the wrapped service. This method simply passes the request through without applying any modifiers.
func (AnthropicServiceWrapper) New ¶ added in v0.4.0
func (svc AnthropicServiceWrapper) New(ctx context.Context, params a.MessageNewParams, opts ...option.RequestOption) (res *a.Message, err error)
New implements the AnthropicSvc interface by applying all registered parameter modifier functions to the request parameters before passing them to the wrapped service.
Each modifier function is called in the order they were registered. If any modifier returns an error, the request is aborted and the error is returned.
After all modifiers have been successfully applied, the modified parameters are passed to the wrapped service's New method.
func (AnthropicServiceWrapper) NewStreaming ¶ added in v0.4.13
func (svc AnthropicServiceWrapper) NewStreaming(ctx context.Context, params a.MessageNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[a.MessageStreamEventUnion])
NewStreaming implements the AnthropicSvc interface by applying all registered parameter modifier functions to the request parameters before passing them to the wrapped service.
Each modifier function is called in the order they were registered. If any modifier returns an error, the request is aborted and the error is returned.
After all modifiers have been successfully applied, the modified parameters are passed to the wrapped service's NewStreaming method.
type AnthropicSvc ¶ added in v0.4.8
type AnthropicSvc interface {
// New generates a new message using the Anthropic API
New(ctx context.Context, body a.MessageNewParams, opts ...option.RequestOption) (res *a.Message, err error)
// NewStreaming generates a new streaming message using the Anthropic API
NewStreaming(ctx context.Context, body a.MessageNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[a.MessageStreamEventUnion])
// CountTokens counts tokens for a message without generating a response
CountTokens(ctx context.Context, body a.MessageCountTokensParams, opts ...option.RequestOption) (res *a.MessageTokensCount, err error)
}
AnthropicSvc defines the interface for interacting with the Anthropic API. It requires the methods needed for both generation and token counting.
This interface is implemented by the Anthropic SDK's MessageService, allowing for direct use or wrapping with additional functionality (such as caching via AnthropicServiceWrapper).
type ApiErr ¶
type ApiErr struct {
// StatusCode is the HTTP status code returned by the API
StatusCode int `json:"status_code" yaml:"status_code"`
// Type is the error type returned by the API (e.g., "invalid_request_error")
Type string `json:"type" yaml:"type"`
// Message is the error message returned by the API
Message string `json:"message" yaml:"message"`
}
ApiErr is returned when the API returns a non-success status code that doesn't fall into more specific error categories. This can include:
- 400 Bad Request errors (invalid_request_error)
- 404 Not Found errors (not_found_error)
- 413 Request Too Large errors (request_too_large)
- 500 Internal Server errors (api_error)
- 529 Service Overloaded errors (overloaded_error)
The struct contains the HTTP status code, error type, and message to provide detailed information about the API error.
type AudioConfig ¶
type AudioConfig struct {
// VoiceName specifies which voice to use when generating audio output,
// as a Generator usually offers an option to generate speech using a specific built-in voice
VoiceName string `json:"voice_name,omitempty" yaml:"voice_name,omitempty"`
// Format specifies the output audio format. Must be a valid audio file format, such as wav or mp3.
// A Generator's supported file formats will be specified in its docs
Format string `json:"format,omitempty" yaml:"format,omitempty"`
}
type AuthenticationErr ¶
type AuthenticationErr string
AuthenticationErr is returned when there are issues with authentication or authorization. This can include:
- Invalid or expired API keys
- Insufficient permissions
- Account suspension
The string value contains details about the specific authentication issue.
func (AuthenticationErr) Error ¶
func (a AuthenticationErr) Error() string
type Block ¶
type Block struct {
// ID is optional, it is commonly set for ToolCall block types,
// and sometimes for Content type blocks. An empty string means that the ID field
// is not set
ID string `json:"id,omitempty" yaml:"id,omitempty"`
// BlockType is required, and if not set explicitly, the default value is of type Content.
// - A Content BlockType represents unstructured content of single Modality, like text, images and audio
// - A Thinking BlockType represents the thinking/reasoning a Generator produced
// - A ToolCall BlockType represents a tool call by the model
//
// Note that a Generator can support more block types than the ones listed above,
// the above block types are simply a common set of block types that a Generator can return.
BlockType string `json:"block_type" yaml:"block_type"`
// ModalityType represents the Modality of the content
ModalityType Modality `json:"modality_type" yaml:"modality_type"`
// MimeType represents the MIME type of the content.
// Common values include "text/plain", "image/jpeg", "image/png", "audio/mp3", "video/mp4", etc.
// If empty, defaults to "text/plain"
MimeType string `json:"mime_type,omitempty" yaml:"mime_type,omitempty"`
// Content represents the content of the block. It can be any type that implements fmt.Stringer.
// For non-text modalities like images, audio, or video, the Content's String() method should
// return base64 encoded data. The MimeType field should be set appropriately to indicate the
// content type.
Content fmt.Stringer `json:"content,omitempty" yaml:"content,omitempty"`
// ExtraFields allows a Generator to store Generator-specific extra information that can be used
// in a later invocation or for handling provider-specific features.
//
// Common fields include:
// - ThinkingExtraFieldGeneratorKey: Always set on Thinking blocks to identify the source generator
// - AnthropicExtraFieldThinkingSignature: Signature for Anthropic extended thinking blocks
// - GeminiExtraFieldThoughtSignature: Signature for Gemini thinking blocks
// - OpenRouterExtraFieldReasoningType/Format/Index/Signature: OpenRouter reasoning metadata
// - OpenAIExtraFieldImageWidth/Height/Detail: Image processing hints for OpenAI
// - BlockFieldFilenameKey: Filename for PDF blocks
//
// See each generator's documentation for provider-specific fields.
ExtraFields map[string]interface{} `json:"extra_fields,omitempty" yaml:"extra_fields,omitempty"`
}
Block represents a self-contained piece of a Message, meant to represent a "part" of a message. For example, if a message returned by a model contains audio and a tool call, the audio would be represented as one block and the tool call as another. Similarly, if a response generated by a model contains multiple tool calls, each tool call would be represented by a single Block.
func AudioBlock ¶ added in v0.5.0
AudioBlock creates an audio content block with the given base64-encoded data and MIME type. This is a convenience function for creating audio blocks.
Example:
block := AudioBlock(audioData, "audio/mp3")
func ImageBlock ¶ added in v0.4.0
ImageBlock creates an image content block with the given base64-encoded data and MIME type. This is a convenience function for creating image blocks.
Example:
block := ImageBlock(base64EncodedJpeg, "image/jpeg")
func MetadataBlock ¶ added in v0.16.0
MetadataBlock creates a Block containing usage metadata. The metadata parameter should contain metric information such as token counts, typically using keys like UsageMetricInputTokens and UsageMetricGenerationTokens.
This block type is primarily used internally by streaming generators to emit usage information as the final block in a stream.
func PDFBlock ¶ added in v0.5.0
PDFBlock creates a PDF content block with the given base64-encoded data and filename. This is a convenience function for creating PDF blocks compatible with all providers.
PDFs are treated as a special type of image modality by model providers. The PDF is converted to a series of images at the provider API level. For OpenAI, supplying a filename is required for PDF file input.
Example:
pdfData, _ := os.ReadFile("paper.pdf")
block := PDFBlock(pdfData, "paper.pdf")
func TextBlock ¶ added in v0.4.0
TextBlock creates a simple text content block. This is a convenience function for creating basic text blocks.
Example:
block := TextBlock("Hello, world!")
func ToolCallBlock ¶ added in v0.4.0
ToolCallBlock creates a tool call block with the given ID, tool name, and parameters. The parameters are automatically marshaled to JSON.
Example:
block := ToolCallBlock("call_123", "get_weather", map[string]any{
"location": "New York",
"units": "fahrenheit",
})
type CallbackExecErr ¶ added in v0.4.1
type CallbackExecErr struct {
Err error `json:"err,omitempty" yaml:"err,omitempty"`
}
CallbackExecErr is an error type that wraps a real callback execution error. When returned from a callback (wrapped in this type), it signals to the caller that the error is a hard failure and execution should terminate, rather than being returned as an erroneous tool result.
func (CallbackExecErr) Error ¶ added in v0.4.1
func (c CallbackExecErr) Error() string
func (CallbackExecErr) Unwrap ¶ added in v0.4.1
func (c CallbackExecErr) Unwrap() error
Unwrap allows errors.Unwrap and errors.As to extract the underlying error.
type CerebrasGenerator ¶ added in v0.10.0
type CerebrasGenerator struct {
// contains filtered or unexported fields
}
CerebrasGenerator implements the Generator interface using the Cerebras Chat Completions HTTP API (endpoint: POST {baseURL}/v1/chat/completions). It does not support streaming or token counting.
func NewCerebrasGenerator ¶ added in v0.10.0
func NewCerebrasGenerator(httpClient *http.Client, baseURL, model, systemInstructions string, apiKey string) *CerebrasGenerator
NewCerebrasGenerator creates a new Cerebras generator. If httpClient is nil, http.DefaultClient is used. If baseURL is empty, "https://api.cerebras.ai" is used. apiKey is read from CEREBRAS_API_KEY if empty.
func (*CerebrasGenerator) Generate ¶ added in v0.10.0
func (g *CerebrasGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements Generator
Example ¶
apiKey := os.Getenv("CEREBRAS_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set CEREBRAS_API_KEY env]")
return
}
gen := NewCerebrasGenerator(nil, "", "qwen-3-32b", "You are a helpful assistant.", apiKey)
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("Hello!")},
},
}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 1 && len(resp.Candidates[0].Blocks) >= 1 {
fmt.Println("Response received")
}
Output: Response received
Example (Reasoning_gptoss) ¶
apiKey := os.Getenv("CEREBRAS_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set CEREBRAS_API_KEY env]")
return
}
// Use gpt-oss-120b model which supports reasoning with reasoning_effort parameter
gen := NewCerebrasGenerator(
nil,
"",
"gpt-oss-120b",
"You are a helpful assistant that explains your reasoning step by step.",
apiKey,
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the square root of 144?"),
},
},
},
}
// Generate response with reasoning enabled (medium effort)
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{
ThinkingBudget: "medium",
})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
// Check if we have thinking blocks (reasoning)
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
fmt.Println("Reasoning found")
}
}
if hasThinking {
fmt.Println("Thinking blocks found")
}
// Find the main content block (not thinking)
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
content := block.Content.String()
if strings.Contains(content, "12") {
fmt.Println("Correct answer found")
}
break
}
}
}
// Append the previous response and ask a follow-up question to test reasoning retention
dialog = append(dialog, resp.Candidates[0], Message{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the square root of 225?"),
},
},
})
// Generate response with reasoning (the previous reasoning should be retained)
resp, err = gen.Generate(context.Background(), dialog, &GenOpts{
ThinkingBudget: "medium",
})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
// Check if we have thinking blocks
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
}
}
if hasThinking {
fmt.Println("Thinking blocks found")
}
// Find the main content block
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
content := block.Content.String()
if strings.Contains(content, "15") {
fmt.Println("Correct answer found")
}
break
}
}
}
Output:
Reasoning found
Thinking blocks found
Correct answer found
Thinking blocks found
Correct answer found
Example (Reasoning_zai) ¶
apiKey := os.Getenv("CEREBRAS_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set CEREBRAS_API_KEY env]")
return
}
// Use zai-glm-4.6 model which supports reasoning with disable_reasoning parameter
gen := NewCerebrasGenerator(
nil,
"",
"zai-glm-4.6",
"You are a helpful assistant that explains your reasoning step by step.",
apiKey,
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is 15 * 12?"),
},
},
},
}
// Generate response with reasoning enabled (disable_reasoning: false)
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
// Check if we have thinking blocks (reasoning)
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
fmt.Println("Reasoning found")
}
}
if hasThinking {
fmt.Println("Thinking blocks found")
}
// Find the main content block (not thinking)
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
content := block.Content.String()
if strings.Contains(content, "180") {
fmt.Println("Correct answer found")
}
break
}
}
}
// Append the previous response and ask a follow-up question to test reasoning retention
dialog = append(dialog, resp.Candidates[0], Message{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Now what is 20 * 15?"),
},
},
})
// Generate response with reasoning (the previous reasoning should be retained)
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
// Check if we have thinking blocks
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
}
}
if hasThinking {
fmt.Println("Thinking blocks found")
}
// Find the main content block
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
content := block.Content.String()
if strings.Contains(content, "300") {
fmt.Println("Correct answer found")
}
break
}
}
}
Output:
Reasoning found
Thinking blocks found
Correct answer found
Thinking blocks found
Correct answer found
func (*CerebrasGenerator) Register ¶ added in v0.10.0
func (g *CerebrasGenerator) Register(tool Tool) error
Register implements ToolRegister
Example ¶
apiKey := os.Getenv("CEREBRAS_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set CEREBRAS_API_KEY env]")
return
}
cgen := NewCerebrasGenerator(nil, "", "qwen-3-235b-a22b-instruct-2507", `You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`, apiKey)
// Register a tool
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := cgen.Register(tickerTool); err != nil {
fmt.Println("Error:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{TextBlock("What is the price of Apple stock?")}},
}
// Force the tool call
resp, err := cgen.Generate(context.Background(), dialog, &GenOpts{ToolChoice: "get_stock_price"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 0 || len(resp.Candidates[0].Blocks) == 0 {
fmt.Println("Error: empty response")
return
}
// Find and print the tool call JSON
var toolCall Block
for _, b := range resp.Candidates[0].Blocks {
if b.BlockType == ToolCall {
toolCall = b
break
}
}
fmt.Println(toolCall.Content)
// Append tool result and continue the conversation
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{ID: toolCall.ID, BlockType: Content, ModalityType: Text, MimeType: "text/plain", Content: Str("123.45")},
},
})
// Ask model to answer now without calling tools
resp, err = cgen.Generate(context.Background(), dialog, &GenOpts{ToolChoice: "none"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
fmt.Println(resp.Candidates[0].Blocks[0].Content)
}
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
123.45
type ContentPolicyErr ¶
type ContentPolicyErr string
ContentPolicyErr is returned when the input or generated content violates the Generator's content policy. This can include:
- Unsafe or inappropriate content
- Prohibited topics or language
- Content that violates usage terms
The string value contains details about the specific policy violation.
func (ContentPolicyErr) Error ¶
func (c ContentPolicyErr) Error() string
type FallbackConfig ¶
type FallbackConfig struct {
// ShouldFallback is a function that determines whether to fallback to another generator
// based on the error returned by the current generator.
// If nil, the default behavior is used, which falls back on rate limit errors and 5xx status codes.
ShouldFallback func(err error) bool
}
FallbackConfig represents the configuration for when to fallback to another generator
func NewHTTPStatusFallbackConfig ¶
func NewHTTPStatusFallbackConfig(statusCodes ...int) FallbackConfig
NewHTTPStatusFallbackConfig creates a FallbackConfig that falls back on specific HTTP status codes. It will fall back on rate limit errors and the specified status codes.
func NewRateLimitOnlyFallbackConfig ¶
func NewRateLimitOnlyFallbackConfig() FallbackConfig
NewRateLimitOnlyFallbackConfig creates a FallbackConfig that only falls back on rate limit errors.
type FallbackGenerator ¶
type FallbackGenerator struct {
// contains filtered or unexported fields
}
FallbackGenerator implements the Generator interface by composing multiple generators. If one generator returns an error that meets the fallback criteria, it tries the next generator.
Example ¶
// This is just an example, in a real case you would use actual generators
openAIGen := &mockGenerator{
response: Response{
Candidates: []Message{
{
Role: Assistant,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Response from OpenAI"),
},
},
},
},
},
}
anthropicGen := &mockGenerator{
response: Response{
Candidates: []Message{
{
Role: Assistant,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Response from Anthropic"),
},
},
},
},
},
}
// Create a fallback generator that will try OpenAI first, then fallback to Anthropic
// This example makes it fallback on 400 errors too, not just 500s
fallbackGen, _ := NewFallbackGenerator(
[]Generator{openAIGen, anthropicGen},
&FallbackConfig{
ShouldFallback: NewHTTPStatusFallbackConfig(400, 429, 500, 502, 503, 504).ShouldFallback,
},
)
// Now we can use the fallback generator just like any other generator
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Tell me about AI fallback strategies"),
},
},
},
}
resp, err := fallbackGen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
fmt.Println(resp.Candidates[0].Blocks[0].Content)
}
Output: Response from OpenAI
func NewFallbackGenerator ¶
func NewFallbackGenerator(generators []Generator, config *FallbackConfig) (*FallbackGenerator, error)
NewFallbackGenerator creates a new FallbackGenerator with the provided generators and configuration. It returns an error if fewer than 2 generators are provided.
func (*FallbackGenerator) Generate ¶
func (f *FallbackGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements the Generator interface. It tries each generator in order, falling back to the next one if the current one returns an error that meets the fallback criteria.
Example ¶
package main
import (
"context"
"fmt"
"github.com/spachava753/gai"
)
func main() {
// This example shows how to create a fallback generator that first tries a primary generator,
// and if that fails with rate limiting or 5xx errors, falls back to a secondary generator.
// Create mock generators for example purposes
primaryGen := &MockGenerator{name: "Primary Generator"}
secondaryGen := &MockGenerator{name: "Secondary Generator"}
// Create the fallback generator
// By default, it will fallback on rate limits and 5xx errors
fallbackGen, err := gai.NewFallbackGenerator(
[]gai.Generator{primaryGen, secondaryGen},
nil, // Use default config
)
if err != nil {
fmt.Println("Error creating fallback generator:", err)
return
}
// Create a dialog
dialog := gai.Dialog{
{
Role: gai.User,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str("What are the best practices for implementing fallback strategies in AI systems?"),
},
},
},
}
// Generate a response
// The fallback generator will try the primary generator first, and if that fails with a rate limit or 5xx error,
// it will automatically try the secondary generator instead.
response, err := fallbackGen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Generation failed:", err)
return
}
// Print the response
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println("Response:", response.Candidates[0].Blocks[0].Content)
}
}
// MockGenerator is a simple mock implementation of the Generator interface for example purposes
type MockGenerator struct {
name string
}
func (m *MockGenerator) Generate(ctx context.Context, dialog gai.Dialog, options *gai.GenOpts) (gai.Response, error) {
return gai.Response{
Candidates: []gai.Message{
{
Role: gai.Assistant,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str(fmt.Sprintf("Response from %s", m.name)),
},
},
},
},
FinishReason: gai.EndTurn,
}, nil
}
Example (CustomFallbackConfig) ¶
package main
import (
"context"
"fmt"
"github.com/spachava753/gai"
)
func main() {
// This example shows how to create a fallback generator with a custom configuration
// that falls back on specific HTTP status codes including 400 errors.
// Create mock generators for example purposes
mockGen1 := &MockGenerator{name: "Primary Generator"}
mockGen2 := &MockGenerator{name: "Fallback Generator"}
// Create a fallback config that also falls back on 400 errors
customConfig := gai.NewHTTPStatusFallbackConfig(400, 429, 500, 502, 503)
// Create the fallback generator with the custom config
fallbackGen, err := gai.NewFallbackGenerator(
[]gai.Generator{mockGen1, mockGen2},
&customConfig,
)
if err != nil {
fmt.Println("Error creating fallback generator:", err)
return
}
// Use the fallback generator
dialog := gai.Dialog{
{
Role: gai.User,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str("Hello"),
},
},
},
}
response, err := fallbackGen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Generation failed:", err)
return
}
// Print the response
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println("Response:", response.Candidates[0].Blocks[0].Content)
}
}
// MockGenerator is a simple mock implementation of the Generator interface for example purposes
type MockGenerator struct {
name string
}
func (m *MockGenerator) Generate(ctx context.Context, dialog gai.Dialog, options *gai.GenOpts) (gai.Response, error) {
return gai.Response{
Candidates: []gai.Message{
{
Role: gai.Assistant,
Blocks: []gai.Block{
{
BlockType: gai.Content,
ModalityType: gai.Text,
Content: gai.Str(fmt.Sprintf("Response from %s", m.name)),
},
},
},
},
FinishReason: gai.EndTurn,
}, nil
}
type FinishReason ¶
type FinishReason uint8
FinishReason represents the reason why a Generator stopped generating and returned a Response
const (
	// Unknown represents an invalid FinishReason, likely only seen with a zero value Response
	Unknown FinishReason = iota
	// EndTurn represents the end of the Generator generating an output.
	// Note that this is different from the ToolUse reason,
	// in which the Generator waits for a tool call result
	EndTurn
	// StopSequence represents the Generator generating one of the [GenOpts.StopSequences]
	// and stopping generation
	StopSequence
	// MaxGenerationLimit represents the Generator generating too many tokens and
	// reaching the specified [GenOpts.MaxGenerationTokens]
	MaxGenerationLimit
	// ToolUse represents the Generator pausing generation after
	// calling a tool, to wait for a tool call result.
	ToolUse
)
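A typical caller branches on [Response.FinishReason] to decide what to do next. The sketch below is self-contained: the local FinishReason type and constants mirror the package's declarations above, and the handling strings are illustrative, not part of the package.

```go
package main

import "fmt"

// Local mirror of the package's FinishReason declarations, so this
// sketch compiles on its own.
type FinishReason uint8

const (
	Unknown FinishReason = iota
	EndTurn
	StopSequence
	MaxGenerationLimit
	ToolUse
)

// describe sketches how a caller might branch on a finish reason.
func describe(r FinishReason) string {
	switch r {
	case EndTurn:
		return "model finished its turn"
	case StopSequence:
		return "a stop sequence was generated"
	case MaxGenerationLimit:
		return "hit MaxGenerationTokens; consider raising the limit"
	case ToolUse:
		return "model is waiting for tool call results"
	default:
		return "unknown finish reason"
	}
}

func main() {
	fmt.Println(describe(ToolUse)) // model is waiting for tool call results
}
```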
type GeminiGenerator ¶ added in v0.4.0
type GeminiGenerator struct {
// contains filtered or unexported fields
}
func (*GeminiGenerator) Count ¶ added in v0.4.8
Count implements the TokenCounter interface for GeminiGenerator. It converts the dialog to Gemini's format and uses Google's official CountTokens API.
Like the Anthropic implementation, this method makes an API call to obtain accurate token counts directly from Google's tokenizer. This ensures the count matches exactly what would be used in actual generation.
The method accounts for:
- System instructions (if set during generator initialization)
- All messages in the dialog with their respective blocks
- Multi-modal content including text and images
- Tool definitions registered with the generator
Special considerations:
- For multi-turn conversations, all dialog turns are included in the count
- The system instructions are prepended to the dialog for accurate counting
- Image tokens are counted based on Google's own token calculation
The context parameter allows for cancellation of the API call.
Returns:
- The total token count as uint, representing the combined input tokens
- An error if the API call fails or if dialog conversion fails
Note: Gemini's CountTokens API returns the total tokens for the entire dialog, including system instructions, unlike some other providers that break this down into more detailed metrics.
Example ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
fmt.Println("Dialog contains approximately 15 tokens")
fmt.Println("Dialog with image contains approximately 270 tokens")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
// Create a generator
g, err := NewGeminiGenerator(client, "gemini-2.5-pro", "You are a helpful assistant.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
// Create a dialog with a user message
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the capital of France?"),
},
},
},
}
// Count tokens in the dialog
tokenCount, err := g.Count(context.Background(), dialog)
if err != nil {
fmt.Printf("Error counting tokens: %v\n", err)
return
}
fmt.Printf("Dialog contains approximately %d tokens\n", tokenCount)
// Try to load an image to add to the dialog
imgPath := "sample.jpg"
imgBytes, err := os.ReadFile(imgPath)
if err != nil {
fmt.Printf("Image file not found, skipping image token count example\n")
return
}
// Add an image to the dialog
dialog = Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Image,
MimeType: "image/jpeg",
Content: Str(base64.StdEncoding.EncodeToString(imgBytes)),
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("Describe this image."),
},
},
},
}
// Count tokens with the image included
tokenCount, err = g.Count(context.Background(), dialog)
if err != nil {
fmt.Printf("Error counting tokens: %v\n", err)
return
}
fmt.Printf("Dialog with image contains approximately %d tokens\n", tokenCount)
Output:

Dialog contains approximately 15 tokens
Dialog with image contains approximately 270 tokens
func (*GeminiGenerator) Generate ¶ added in v0.4.0
func (g *GeminiGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements gai.Generator
Example ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "models/gemini-3-pro-preview", "You are a helpful assistant. You respond to the user with plain text format.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{{BlockType: Content, ModalityType: Text, Content: Str("What is Bloom's taxonomy, and how does it relate to the psychology of child development?")}}},
}
response, err := g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println("Got text")
}
Output: Got text
Example (Audio) ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
audioBytes, err := os.ReadFile("sample.wav")
if err != nil {
fmt.Println("[Skipped: could not open sample.wav]")
return
}
// Encode as base64 for inline audio usage
audioBase64 := Str(base64.StdEncoding.EncodeToString(audioBytes))
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "gemini-2.5-pro", "You are a helpful assistant.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
// Using inline audio data
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Audio,
MimeType: "audio/wav",
Content: audioBase64,
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the name of the person in the greeting in this audio? Return a one-word response of the name"),
},
},
},
}
response, err := g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println(strings.ToLower(response.Candidates[0].Blocks[0].Content.String()))
}
Output: friday
Example (Image) ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
// This example assumes that sample.jpg is present in the current directory.
// Place a JPEG image named sample.jpg in the same directory as this file (or adjust the path).
imgBytes, err := os.ReadFile("sample.jpg")
if err != nil {
fmt.Println("[Skipped: could not open sample.jpg]")
return
}
// Encode as base64 for API usage
imgBase64 := Str(
// Use standard encoding, as required for image MIME input.
// NOTE: the Blob part in Google Gemini Go SDK accepts raw bytes, but our gai.Block expects base64 encoded string.
// The actual Gemini implementation will decode as needed, see gai.go.
// This mirrors how other examples do it.
base64.StdEncoding.EncodeToString(imgBytes),
)
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "gemini-2.5-pro", "You are a helpful assistant.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Image,
MimeType: "image/jpeg",
Content: imgBase64,
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is in this image? (Hint, it's a character from The Croods, a DreamWorks animated movie.)"),
},
},
},
}
response, err := g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(response.Candidates) != 1 {
panic("Expected 1 candidate, got " + fmt.Sprint(len(response.Candidates)))
}
if len(response.Candidates[0].Blocks) != 1 {
panic("Expected 1 block, got " + fmt.Sprint(len(response.Candidates[0].Blocks)))
}
fmt.Println(strings.Contains(response.Candidates[0].Blocks[0].Content.String(), "Crood"))
Output: true
Example (Pdf) ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "models/gemini-3-pro-preview", "You are a helpful assistant.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
// This example assumes that sample.pdf is present in the current directory.
pdfBytes, err := os.ReadFile("sample.pdf")
if err != nil {
fmt.Println("[Skipped: could not open sample.pdf]")
return
}
// Create a dialog with PDF content
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What is the title of this PDF? Just output the title and nothing else"),
PDFBlock(pdfBytes, "paper.pdf"),
},
},
}
// Generate a response
response, err := g.Generate(ctx, dialog, &GenOpts{MaxGenerationTokens: Ptr(1024)})
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// The response would contain the model's analysis of the PDF
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println(response.Candidates[0].Blocks[0].Content)
}
Output: Attention Is All You Need
func (*GeminiGenerator) Register ¶ added in v0.4.0
func (g *GeminiGenerator) Register(tool Tool) error
Register implements gai.ToolRegister for GeminiGenerator
Example ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(
client,
"models/gemini-3-pro-preview",
`You are a helpful assistant. You can call tools in parallel.
When a user asks for the server time, always call the server time tool, don't use previously returned results`,
)
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
stockTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
getServerTimeTool := Tool{
Name: "get_server_time",
Description: "Get the current server time in UTC.",
}
err = g.Register(stockTool)
if err != nil {
fmt.Println("Error registering tool:", err)
return
}
err = g.Register(getServerTimeTool)
if err != nil {
fmt.Println("Error registering tool:", err)
return
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the stock price for AAPL, and also tell me the server time?"),
}},
},
}
// Expect tool call for both tools
response, err := g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
fmt.Println("tool calling response:")
for _, block := range response.Candidates[0].Blocks {
fmt.Printf("Block type: %s | ID: %s | Content: %s\n", block.BlockType, block.ID, block.Content)
}
dialog = append(dialog, response.Candidates[0])
// Simulate tool result for tool calls
dialog = append(dialog,
Message{
Role: ToolResult,
Blocks: []Block{{
ID: response.Candidates[0].Blocks[0].ID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str("AAPL is $200.00"),
}},
},
Message{
Role: ToolResult,
Blocks: []Block{{
ID: response.Candidates[0].Blocks[1].ID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(time.Time{}.String()),
}},
},
)
response, err = g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
toolResult := response.Candidates[0].Blocks[0].Content.String()
fmt.Println("Response has tool results:", strings.Contains(toolResult, "AAPL") &&
strings.Contains(toolResult, "200.00") &&
strings.Contains(toolResult, time.Time{}.String()),
)
dialog = append(dialog, response.Candidates[0], Message{
Role: User,
Blocks: []Block{{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the stock price for MSFT, and also tell me the server time again?"),
}},
})
response, err = g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
fmt.Println("tool calling response:")
for _, block := range response.Candidates[0].Blocks {
fmt.Printf("Block type: %s | ID: %s | Content: %s\n", block.BlockType, block.ID, block.Content)
}
dialog = append(dialog, response.Candidates[0])
// Simulate tool result for tool calls
dialog = append(dialog,
Message{
Role: ToolResult,
Blocks: []Block{{
ID: response.Candidates[0].Blocks[0].ID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str("MSFT is $300.00"),
}},
},
Message{
Role: ToolResult,
Blocks: []Block{{
ID: response.Candidates[0].Blocks[1].ID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(time.Time{}.Add(1 * time.Minute).String()),
}},
},
)
response, err = g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
fmt.Println("Response has tool results:", strings.Contains(
response.Candidates[0].Blocks[0].Content.String(),
"MSFT",
) && strings.Contains(
response.Candidates[0].Blocks[0].Content.String(),
"300",
) && strings.Contains(
response.Candidates[0].Blocks[0].Content.String(),
"UTC",
))
Output:

tool calling response:
Block type: tool_call | ID: toolcall-1 | Content: {"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
Block type: tool_call | ID: toolcall-2 | Content: {"name":"get_server_time","parameters":{}}
Response has tool results: true
tool calling response:
Block type: tool_call | ID: toolcall-3 | Content: {"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
Block type: tool_call | ID: toolcall-4 | Content: {"name":"get_server_time","parameters":{}}
Response has tool results: true
Example (ParallelToolUse) ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "models/gemini-3-pro-preview", "You are a helpful assistant.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
// Register the get_stock_price tool
stockTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
err = g.Register(stockTool)
if err != nil {
fmt.Println("Error registering tool:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{{BlockType: Content, ModalityType: Text, Content: Str("Give me the current prices for AAPL, MSFT, and TSLA.")}}},
}
response, err := g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
for _, cand := range response.Candidates {
for _, block := range cand.Blocks {
fmt.Printf("Block type: %s | ID: %s | Content: %s\n", block.BlockType, block.ID, block.Content)
}
}
Output:

Block type: tool_call | ID: toolcall-1 | Content: {"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
Block type: tool_call | ID: toolcall-2 | Content: {"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
Block type: tool_call | ID: toolcall-3 | Content: {"name":"get_stock_price","parameters":{"ticker":"TSLA"}}
Example (ParallelToolUse_multimedia) ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "models/gemini-3-pro-preview", "You are a helpful assistant that can view files.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
// Register a tool to view files
viewFileTool := Tool{
Name: "view_file",
Description: "View the contents of a file. Can handle text files, images, and other media types.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
FilePath string `json:"file_path" jsonschema:"required" jsonschema_description:"The path to the file to view"`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
err = g.Register(viewFileTool)
if err != nil {
fmt.Println("Error registering tool:", err)
return
}
// User asks to view multiple files
dialog := Dialog{
{Role: User, Blocks: []Block{{BlockType: Content, ModalityType: Text, Content: Str("Please view sample.jpg and README.md, and tell me what character is in the image, and what is gai from the README")}}},
}
// Model makes parallel tool calls
response, err := g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
fmt.Println("Tool calls made:")
for _, block := range response.Candidates[0].Blocks {
if block.BlockType == Thinking {
continue
}
fmt.Printf("Block type: %s | ID: %s | Content: %s\n", block.BlockType, block.ID, block.Content)
}
dialog = append(dialog, response.Candidates[0])
// Find tool call blocks (skip thinking blocks)
toolCallBlocks := []Block{}
for _, block := range response.Candidates[0].Blocks {
if block.BlockType == ToolCall {
toolCallBlocks = append(toolCallBlocks, block)
}
}
if len(toolCallBlocks) < 2 {
fmt.Println("Error: Expected at least 2 tool calls")
return
}
// Simulate tool results - first for sample.jpg (image)
imgBytes, err := os.ReadFile("sample.jpg")
if err != nil {
fmt.Println("[Skipped: could not open sample.jpg]")
return
}
// Simulate tool results - for README.md (text)
readmeBytes, err := os.ReadFile("README.md")
if err != nil {
fmt.Println("[Skipped: could not open README.md]")
return
}
// Add both tool results in parallel
dialog = append(dialog,
Message{
Role: ToolResult,
Blocks: []Block{{
ID: toolCallBlocks[0].ID, // First tool call
BlockType: Content,
ModalityType: Image,
MimeType: "image/jpeg",
Content: Str(base64.StdEncoding.EncodeToString(imgBytes)),
}},
},
Message{
Role: ToolResult,
Blocks: []Block{{
ID: toolCallBlocks[1].ID, // Second tool call
BlockType: Content,
ModalityType: Text,
MimeType: "text/markdown",
Content: Str(string(readmeBytes)),
}},
},
)
// Get final response with tool results
response, err = g.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
fmt.Println("Response received with tool results")
fmt.Println("Response contains image content:", strings.Contains(response.Candidates[0].Blocks[0].Content.String(), "Crood"))
fmt.Println("Response contains README content:", strings.Contains(response.Candidates[0].Blocks[0].Content.String(), "gai"))
Output:

Tool calls made:
Block type: tool_call | ID: toolcall-1 | Content: {"name":"view_file","parameters":{"file_path":"sample.jpg"}}
Block type: tool_call | ID: toolcall-2 | Content: {"name":"view_file","parameters":{"file_path":"README.md"}}
Response received with tool results
Response contains image content: true
Response contains README content: true
func (*GeminiGenerator) Stream ¶ added in v0.6.0
func (g *GeminiGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
Example ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "models/gemini-3-pro-preview", "You are a helpful assistant. You respond to the user with plain text format.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{{BlockType: Content, ModalityType: Text, Content: Str("What is the capital of France?")}}},
}
for chunk, err := range g.Stream(context.Background(), dialog, nil) {
if err != nil {
fmt.Println("Error:", err)
return
}
// Skip metadata blocks
if chunk.Block.BlockType == MetadataBlockType {
continue
}
fmt.Println(chunk.Block.Content.String())
}
Output: The capital of France is Paris.
Example (ParallelToolUse) ¶
apiKey := os.Getenv("GEMINI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set GEMINI_API_KEY env]")
return
}
ctx := context.Background()
client, err := genai.NewClient(
ctx,
&genai.ClientConfig{
APIKey: apiKey,
Backend: genai.BackendGeminiAPI,
},
)
if err != nil {
fmt.Println("Error creating client:", err)
return
}
g, err := NewGeminiGenerator(client, "models/gemini-3-pro-preview", "You are a helpful assistant.")
if err != nil {
fmt.Println("Error creating GeminiGenerator:", err)
return
}
// Register the get_stock_price tool
stockTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
err = g.Register(stockTool)
if err != nil {
fmt.Println("Error registering tool:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{{BlockType: Content, ModalityType: Text, Content: Str("Give me the current prices for AAPL, MSFT, and TSLA.")}}},
}
for chunk, err := range g.Stream(context.Background(), dialog, nil) {
if err != nil {
fmt.Println("Error:", err)
return
}
// Skip metadata blocks
if chunk.Block.BlockType == MetadataBlockType {
continue
}
fmt.Printf("Block type: %s | ID: %s | Content: %s\n", chunk.Block.BlockType, chunk.Block.ID, chunk.Block.Content)
}
Output:

Block type: tool_call | ID: toolcall-1 | Content: get_stock_price
Block type: tool_call | ID: | Content: {"ticker":"AAPL"}
Block type: tool_call | ID: toolcall-2 | Content: get_stock_price
Block type: tool_call | ID: | Content: {"ticker":"MSFT"}
Block type: tool_call | ID: toolcall-3 | Content: get_stock_price
Block type: tool_call | ID: | Content: {"ticker":"TSLA"}
type GenOpts ¶
type GenOpts struct {
// Temperature is a parameter that controls the randomness of a Generator when calling [Generator.Generate].
// Higher temperatures lead to more creative and diverse outputs, while lower temperatures result in more
// conservative and deterministic outputs
//
// Must be between 0.0 and 1.0. When nil, the Generator uses its default value.
Temperature *float64 `json:"temperature,omitempty" yaml:"temperature,omitempty"`
// TopP is a parameter that uses nucleus sampling. The API computes the cumulative distribution over all
// options for each subsequent token in decreasing probability order and cuts it off once it reaches the
// specified probability. You should either alter Temperature or TopP, but not both
//
// Must be between 0.0 and 1.0. When nil, the Generator uses its default value.
TopP *float64 `json:"top_p,omitempty" yaml:"top_p,omitempty"`
// TopK is used to only sample from the top k options for each subsequent token, and generally
// used to remove "long tail" low probability responses.
//
// When nil, the Generator uses its default value.
//
// Recommended for advanced use cases only - you usually only need to use temperature
TopK *uint `json:"top_k,omitempty" yaml:"top_k,omitempty"`
// FrequencyPenalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on their
// existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
//
// When nil, the Generator uses its default value.
//
// Note that this parameter is not supported by every Generator, in which case this parameter will be ignored
FrequencyPenalty *float64 `json:"frequency_penalty,omitempty" yaml:"frequency_penalty,omitempty"`
// PresencePenalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on whether
// they appear in the text so far, increasing the model's likelihood to talk about new topics
//
// When nil, the Generator uses its default value.
//
// Note that this parameter is not supported by every Generator, in which case this parameter will be ignored
PresencePenalty *float64 `json:"presence_penalty,omitempty" yaml:"presence_penalty,omitempty"`
// N represents how many [Response.Candidates] to generate.
//
// When nil, the default value of 1 is used
//
// IMPORTANT: Note that you will be charged based on the number of generated tokens across all the choices.
// Keep N as 1 to minimize costs
N *uint `json:"n,omitempty" yaml:"n,omitempty"`
// MaxGenerationTokens is the maximum number of tokens to generate before stopping for each [Response.Candidates].
//
// Note that a Generator may stop before reaching this maximum.
// This parameter only specifies the absolute maximum number of tokens to generate.
// When nil, the Generator uses its default value.
MaxGenerationTokens *int `json:"max_generation_tokens,omitempty" yaml:"max_generation_tokens,omitempty"`
// ToolChoice represents how the Generator should use the provided tools.
// The Generator can use a specific tool, any available tool, or decide by itself
//
// Setting ToolChoice to a specific value enables different behavior:
// - If set to ToolChoiceAuto, the Generator decides for itself whether it should call tools
// - If set to ToolChoiceToolsRequired, the Generator is required to generate a response with tool calls
// - If set to any other non-empty value, it is interpreted as a tool name,
//   and requires that the Generator call the specific tool provided by name
ToolChoice string `json:"tool_choice,omitempty" yaml:"tool_choice,omitempty"`
// StopSequences are custom text sequences that will cause the model to stop generating
StopSequences []string `json:"stop_sequences,omitempty" yaml:"stop_sequences,omitempty"`
// OutputModalities is an optional parameter that represents what type of outputs a Generator can generate.
// If OutputModalities is nil or empty, then a default of only Text Modality is used.
// OutputModalities only needs to be specified when generating modalities other than Text.
OutputModalities []Modality `json:"output_modalities,omitempty" yaml:"output_modalities,omitempty"`
// AudioConfig are parameters for audio output.
// Required when audio output is requested with Modality Audio in OutputModalities
AudioConfig AudioConfig `json:"audio_config,omitempty" yaml:"audio_config,omitempty"`
// ThinkingBudget is an optional parameter used for a Generator that can perform reasoning.
//
// Note that if a Generator does not support this parameter, it will simply be ignored, even if set
ThinkingBudget string `json:"thinking_budget,omitempty" yaml:"thinking_budget,omitempty"`
// ExtraArgs is an optional parameter used to pass Generator-specific generation parameters not
// already supported by any of the above fields
ExtraArgs map[string]any `json:"extra_args,omitempty" yaml:"extra_args,omitempty"`
}
GenOpts represents the parameters that customize how a response is generated by a Generator
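Most fields are pointers so that nil means "use the Generator's default". A minimal construction sketch follows; the Ptr helper and the trimmed-down GenOpts struct are local stand-ins mirroring the package's declarations (the package's own Ptr appears in the PDF example above), so the snippet compiles on its own.

```go
package main

import "fmt"

// Ptr mirrors the package's generic pointer helper.
func Ptr[T any](v T) *T { return &v }

// GenOpts here carries only the fields this sketch uses; the real
// struct is defined above.
type GenOpts struct {
	Temperature         *float64
	MaxGenerationTokens *int
	StopSequences       []string
}

func main() {
	// nil pointer fields mean "use the Generator's default value"
	opts := &GenOpts{
		Temperature:         Ptr(0.2),  // lower temperature for more deterministic output
		MaxGenerationTokens: Ptr(1024), // hard cap on generated tokens
		StopSequences:       []string{"\n\n"},
	}
	fmt.Println(*opts.Temperature, *opts.MaxGenerationTokens, len(opts.StopSequences))
}
```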
type GenOptsGenerator ¶
GenOptsGenerator is a function that takes a dialog and returns generation options. This allows customizing the options based on the current state of the dialog.
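The idea can be sketched as a function that inspects the dialog and returns options; the Dialog and GenOpts types below are local stand-ins so the snippet is self-contained, and the budget numbers are illustrative.

```go
package main

import "fmt"

// Local stand-ins for gai.Dialog and gai.GenOpts, for illustration only.
type Message struct{ Role int }
type Dialog []Message
type GenOpts struct{ MaxGenerationTokens *int }

// optsForDialog sketches a GenOptsGenerator: grant a larger generation
// budget once the conversation grows past ten messages.
func optsForDialog(d Dialog) *GenOpts {
	budget := 1024
	if len(d) > 10 {
		budget = 4096
	}
	return &GenOpts{MaxGenerationTokens: &budget}
}

func main() {
	fmt.Println(*optsForDialog(make(Dialog, 2)).MaxGenerationTokens)  // 1024
	fmt.Println(*optsForDialog(make(Dialog, 20)).MaxGenerationTokens) // 4096
}
```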
type Generator ¶
type Generator interface {
Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
}
A Generator takes a Dialog and optional GenOpts and generates a Response or an error. A context.Context is provided to the Generator not only for cancellation and request-specific values, but also to pass Generator-implementation-specific parameters if needed.
An example would be an implementation of Generator offering a beta feature not yet offered as common functionality, or utilizing a special feature specific to an implementation of Generator.
A Generator implementation may return several types of errors:
- MaxGenerationLimitErr when the maximum token generation limit is exceeded
- UnsupportedInputModalityErr when encountering an unsupported input modality
- UnsupportedOutputModalityErr when requested to generate an unsupported output modality
- InvalidToolChoiceErr when an invalid tool choice is specified
- InvalidParameterErr when generation parameters are invalid or out of range
- ContextLengthExceededErr when input dialog is too long
- ContentPolicyErr when content violates usage policies
- EmptyDialogErr when no messages are provided in the dialog
- AuthenticationErr when there are authentication or authorization issues
func Wrap ¶ added in v0.27.0
func Wrap(gen Generator, wrappers ...WrapperFunc) Generator
Wrap applies wrappers to a generator, creating a middleware stack. Wrappers are applied in order: the first wrapper becomes the outermost layer (first to receive calls, last to return).
Example:
gen := Wrap(baseGen,
WithLogging(logger), // 1st: outermost - receives call first
WithMetrics(collector),// 2nd: middle
WithRetry(nil), // 3rd: innermost - closest to baseGen
)
This creates the structure: Logging{Metrics{Retry{baseGen}}}
When gen.Generate() is called:
- Logging.Generate runs (before logic)
- Metrics.Generate runs (before logic)
- Retry.Generate runs (with retry loop calling...)
- baseGen.Generate runs
- Retry.Generate returns
- Metrics.Generate runs (after logic)
- Logging.Generate runs (after logic)
type GeneratorWrapper ¶ added in v0.27.0
type GeneratorWrapper struct {
Inner Generator
}
GeneratorWrapper is a base type for creating middleware-style generator wrappers. Embed it in your custom wrapper struct to get automatic delegation for all generator interfaces, then override only the methods where you need custom behavior.
The Middleware Pattern ¶
When you stack multiple wrappers using Wrap, calls flow through them like an onion:
gen := Wrap(base, WithA(), WithB(), WithC())

// Structure: A wraps B wraps C wraps base
//
// Call flow for gen.Generate():
//   A.Generate (before) →
//     B.Generate (before) →
//       C.Generate (before) →
//         base.Generate
//       C.Generate (after) ←
//     B.Generate (after) ←
//   A.Generate (after) ←
Each interface method (Generate, Count, Stream, Register) flows through the stack independently. If a wrapper doesn't override a method, GeneratorWrapper passes the call straight through to Inner.
Selective Override ¶
You choose which methods each wrapper intercepts by overriding them:
- Override a method → your wrapper participates in that method's call chain
- Don't override → GeneratorWrapper delegates directly to Inner (transparent pass-through)
For example, a logging wrapper might override both Generate and Count to log both operations, while a retry wrapper only overrides Generate (retrying Count doesn't make sense for most use cases).
Supported Interfaces ¶
GeneratorWrapper implements all standard generator interfaces:
- Generator: Generate() delegates to Inner.Generate()
- TokenCounter: Count() delegates to Inner if it implements TokenCounter
- ToolCapableGenerator: Register() delegates to Inner if it implements ToolCapableGenerator
- StreamingGenerator: Stream() delegates to Inner if it implements StreamingGenerator
If Inner doesn't implement an optional interface (TokenCounter, ToolCapableGenerator, StreamingGenerator), the corresponding method returns an appropriate error.
Example: Creating a Wrapper ¶
// TimingGenerator measures how long Generate and Count take.
type TimingGenerator struct {
gai.GeneratorWrapper // Embed for automatic delegation
Observer func(method string, duration time.Duration)
}
// Override Generate to add timing
func (t *TimingGenerator) Generate(ctx context.Context, d Dialog, o *GenOpts) (Response, error) {
start := time.Now()
resp, err := t.GeneratorWrapper.Generate(ctx, d, o) // Delegate to next in chain
t.Observer("Generate", time.Since(start))
return resp, err
}
// Override Count to add timing
func (t *TimingGenerator) Count(ctx context.Context, d Dialog) (uint, error) {
start := time.Now()
count, err := t.GeneratorWrapper.Count(ctx, d) // Delegate to next in chain
t.Observer("Count", time.Since(start))
return count, err
}
// Stream is NOT overridden - calls pass through to Inner automatically
// WrapperFunc for use with Wrap()
func WithTiming(observer func(string, time.Duration)) gai.WrapperFunc {
return func(g gai.Generator) gai.Generator {
return &TimingGenerator{
GeneratorWrapper: gai.GeneratorWrapper{Inner: g},
Observer: observer,
}
}
}
Example: Stacking Multiple Wrappers ¶
gen := gai.Wrap(baseGenerator,
WithLogging(logger), // Outermost: logs all calls
WithMetrics(collector), // Middle: collects metrics
WithRetry(nil), // Innermost: retries failed Generate calls
)
// Now gen.Generate() flows: Logging → Metrics → Retry → base
// And gen.Count() flows: Logging → Metrics → base (Retry doesn't override Count)
func (*GeneratorWrapper) Count ¶ added in v0.27.0
func (w *GeneratorWrapper) Count(ctx context.Context, dialog Dialog) (uint, error)
Count delegates to Inner.Count if Inner implements TokenCounter. Override this method in your wrapper to intercept Count calls. Returns an error if Inner does not implement TokenCounter.
func (*GeneratorWrapper) Generate ¶ added in v0.27.0
func (w *GeneratorWrapper) Generate(ctx context.Context, dialog Dialog, opts *GenOpts) (Response, error)
Generate delegates to Inner.Generate. Override this method in your wrapper to intercept Generate calls.
func (*GeneratorWrapper) Register ¶ added in v0.27.0
func (w *GeneratorWrapper) Register(tool Tool) error
Register delegates to Inner.Register if Inner implements ToolCapableGenerator. Override this method in your wrapper to intercept Register calls. Returns an error if Inner does not implement ToolCapableGenerator.
func (*GeneratorWrapper) Stream ¶ added in v0.27.0
func (w *GeneratorWrapper) Stream(ctx context.Context, dialog Dialog, opts *GenOpts) iter.Seq2[StreamChunk, error]
Stream delegates to Inner.Stream if Inner implements StreamingGenerator. Override this method in your wrapper to intercept Stream calls. Returns an error-yielding iterator if Inner does not implement StreamingGenerator.
type InvalidParameterErr ¶
type InvalidParameterErr struct {
// Parameter is the name of the invalid parameter
Parameter string `json:"parameter" yaml:"parameter"`
// Reason describes why the parameter is invalid
Reason string `json:"reason" yaml:"reason"`
}
InvalidParameterErr is returned when a generation parameter in GenOpts is invalid. This can occur in several scenarios:
- [GenOpts.Temperature], [GenOpts.TopP], or [GenOpts.TopK] values are out of valid range
- [GenOpts.FrequencyPenalty] or [GenOpts.PresencePenalty] are out of valid range
- [GenOpts.MaxGenerationTokens] is negative or zero
- Invalid combination of parameters (e.g., both [GenOpts.Temperature] and [GenOpts.TopP] set)
func (InvalidParameterErr) Error ¶
func (i InvalidParameterErr) Error() string
type InvalidToolChoiceErr ¶
type InvalidToolChoiceErr string
InvalidToolChoiceErr is returned when an invalid tool choice is specified in GenOpts.ToolChoice. This can occur in several scenarios:
- When a specific tool is requested but doesn't exist
- When tools are required (ToolChoiceToolsRequired) but no tools are provided
The string value of this error contains details about why the tool choice was invalid.
func (InvalidToolChoiceErr) Error ¶
func (i InvalidToolChoiceErr) Error() string
type Message ¶
type Message struct {
// Role is required; note that the zero value of Role is User. For readability,
// it is recommended to always set Role explicitly to User or Assistant rather than
// relying on the zero value, so the reader can tell what type of Message it is
Role Role `json:"role" yaml:"role"`
// Blocks represents the collection of different blocks produced by the User or Assistant
Blocks []Block `json:"blocks" yaml:"blocks"`
// ToolResultError indicates whether the tool execution resulted in an error.
// When true, the message content represents an error response from a tool call.
// This is used by providers to properly format error responses in the API request.
ToolResultError bool `json:"tool_result_error,omitempty" yaml:"tool_result_error,omitempty"`
// ExtraFields allows storing additional message-level information that can be used
// for provider-specific features or custom metadata. Unlike Block.ExtraFields which
// stores block-specific data, this field is for information that applies to the
// entire message.
ExtraFields map[string]interface{} `json:"extra_fields,omitempty" yaml:"extra_fields,omitempty"`
}
Message represents a collection of blocks produced by the user or meant for the assistant.
func ToolResultMessage ¶ added in v0.4.0
func ToolResultMessage(id string, blocks ...Block) Message
ToolResultMessage creates a message representing the result of a tool execution. This function constructs a Message with the ToolResult role containing one or more content blocks. The tool call ID is automatically set on all provided blocks.
Parameters:
- id: The identifier for the tool call, should match the original tool call ID
- blocks: One or more content blocks (use TextBlock, ImageBlock, PDFBlock, etc.)
Returns a Message configured with ToolResult role and the provided blocks.
Examples:
// Single text result
result := ToolResultMessage("call_123", TextBlock("Temperature: 72°F"))
// PDF with explanation
result := ToolResultMessage("call_123",
TextBlock("Here's the generated report:"),
PDFBlock(pdfData, "report.pdf"),
)
// Multiple images
result := ToolResultMessage("call_123",
TextBlock("Found 3 matching charts:"),
ImageBlock(chart1, "image/png"),
ImageBlock(chart2, "image/png"),
)
type Metadata ¶ added in v0.10.2
Metadata represents a collection of metrics returned by a Generator in a Response. The map's keys are metric names, and values can be of any type, though typically they are numeric types like int or float64.
Two common metrics that a Generator typically returns are:
- UsageMetricInputTokens: The number of tokens in the input Dialog
- UsageMetricGenerationTokens: The number of tokens generated in the Response
A Generator may return additional implementation-specific metrics.
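Reading a metric therefore requires a type switch, since values are untyped. The sketch below uses a local Metadata stand-in and illustrative key strings ("input_tokens", "generation_tokens"); real code should use the package's exported UsageMetric* constants.

```go
package main

import "fmt"

// Metadata as documented: metric name → value of any type.
type Metadata map[string]any

// metricAsInt reads a numeric metric, tolerating int or float64 values.
func metricAsInt(m Metadata, key string) (int, bool) {
	switch v := m[key].(type) {
	case int:
		return v, true
	case float64:
		return int(v), true
	}
	return 0, false
}

func main() {
	// Key names here are illustrative stand-ins for the exported constants.
	usage := Metadata{"input_tokens": 13, "generation_tokens": float64(48)}
	if n, ok := metricAsInt(usage, "generation_tokens"); ok {
		fmt.Println("generated tokens:", n) // generated tokens: 48
	}
}
```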
type Modality ¶
type Modality uint
Modality represents the type of modality that a Block holds. The default for Modality is the Text type.
type OpenAICompletionService ¶
type OpenAICompletionService interface {
New(ctx context.Context, body oai.ChatCompletionNewParams, opts ...option.RequestOption) (res *oai.ChatCompletion, err error)
NewStreaming(ctx context.Context, body oai.ChatCompletionNewParams, opts ...option.RequestOption) (stream *oaissestream.Stream[oai.ChatCompletionChunk])
}
type OpenAiGenerator ¶
type OpenAiGenerator struct {
// contains filtered or unexported fields
}
OpenAiGenerator implements the gai.Generator interface using OpenAI's API
func NewOpenAiGenerator ¶
func NewOpenAiGenerator(client OpenAICompletionService, model, systemInstructions string) OpenAiGenerator
NewOpenAiGenerator creates a new OpenAI generator with the specified model. The returned generator implements the Generator, ToolRegister, and TokenCounter interfaces.
Parameters:
- client: An OpenAI completion service (typically &client.Chat.Completions)
- model: The OpenAI model to use (e.g., "gpt-4o", "gpt-4o-audio-preview")
- systemInstructions: Optional system instructions that set the model's behavior
Supported modalities:
- Text: Both input and output
- Image: Input only (base64 encoded, including PDFs with MIME type "application/pdf")
- Audio: Input only (base64 encoded, WAV and MP3 formats)
For audio input, use models with audio support like:
- openai.ChatModelGPT4oAudioPreview
- openai.ChatModelGPT4oMiniAudioPreview
PDF documents are supported as a special case of the Image modality. Use the PDFBlock helper function to create PDF content blocks.
This generator fully supports the anyOf JSON Schema feature.
func (*OpenAiGenerator) Count ¶ added in v0.4.8
func (g *OpenAiGenerator) Count(ctx context.Context, dialog Dialog) (uint, error)
Count implements the TokenCounter interface for OpenAiGenerator. It uses the tiktoken-go library to count tokens based on the model without making an API call.
The method accounts for:
- System instructions (if set during generator initialization)
- All messages in the dialog with their respective blocks
- Images in the dialog (with accurate token calculation based on dimensions)
- Tool definitions registered with the generator
For images, the token count depends on the model and follows OpenAI's token calculation rules:
- For "minimal" models (gpt-4.1-mini, gpt-4.1-nano, o4-mini), tokens are calculated based on 32px patches
- For other models (GPT-4o, GPT-4.1, etc.), tokens depend on image dimensions and detail level
Image dimensions are extracted directly from the image data when possible, or from ExtraFields. If dimensions cannot be determined, an error is returned.
Note: PDF token counting is not supported and will return an error. This is because PDFs are converted to images server-side and exact dimensions cannot be determined.
The context parameter allows for cancellation of long-running counting operations.
Returns:
- The total token count as uint
- An error if token counting fails (e.g., unsupported modality, image dimension extraction failure, PDF input)
Example ¶
// Create an OpenAI client
client := openai.NewClient()
// Create a generator
generator := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4o,
"You are a helpful assistant.",
)
// Create a dialog with a user message
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the capital of France?"),
},
},
},
}
// Count tokens in the dialog
tokenCount, err := generator.Count(context.Background(), dialog)
if err != nil {
fmt.Printf("Error counting tokens: %v\n", err)
return
}
fmt.Printf("Dialog contains %d tokens\n", tokenCount)
// Add a response to the dialog
dialog = append(dialog, Message{
Role: Assistant,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("The capital of France is Paris. It's known as the 'City of Light' and is famous for landmarks like the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral."),
},
},
})
// Count tokens in the updated dialog
tokenCount, err = generator.Count(context.Background(), dialog)
if err != nil {
fmt.Printf("Error counting tokens: %v\n", err)
return
}
fmt.Printf("Dialog with response contains %d tokens\n", tokenCount)
Output:
Dialog contains 13 tokens
Dialog with response contains 48 tokens
func (*OpenAiGenerator) Generate ¶
func (g *OpenAiGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements gai.Generator
Example ¶
// Create an OpenAI client
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(
option.WithAPIKey(apiKey),
)
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(&client.Chat.Completions, openai.ChatModelGPT4oMini, "You are a helpful assistant")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hi!"),
},
},
},
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
// The exact response text may vary, so we'll just print a placeholder
fmt.Println("Response received")
// Customize generation parameters
opts := GenOpts{
TopK: Ptr[uint](10),
N: Ptr[uint](2), // Set N to a value higher than 1 to generate multiple responses in a single request
MaxGenerationTokens: Ptr(1024),
}
resp, err = gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println(len(resp.Candidates))
Output:
Response received
2
Example (Audio) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
audioBytes, err := os.ReadFile("sample.wav")
if err != nil {
fmt.Println("[Skipped: could not open sample.wav]")
return
}
// Encode as base64 for inline audio usage
audioBase64 := Str(base64.StdEncoding.EncodeToString(audioBytes))
client := openai.NewClient(
option.WithAPIKey(apiKey),
)
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oAudioPreview,
"You are a helpful assistant.",
)
// Using inline audio data
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Audio,
MimeType: "audio/wav",
Content: audioBase64,
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("In this audio, a person is introducing themselves. What is the name of person in the greeting in this audio? Return a one word response of the name"),
},
},
},
}
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{
MaxGenerationTokens: Ptr(128),
})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
fmt.Println(strings.ToLower(resp.Candidates[0].Blocks[0].Content.String()))
}
Output: friday
Example (Image) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
imgBytes, err := os.ReadFile("sample.jpg")
if err != nil {
fmt.Println("[Skipped: could not open sample.jpg]")
return
}
imgBase64 := Str(base64.StdEncoding.EncodeToString(imgBytes))
client := openai.NewClient(
option.WithAPIKey(apiKey),
)
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4o,
"You are a helpful assistant.",
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Image,
MimeType: "image/jpeg",
Content: imgBase64,
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is in this image? (Hint, it's a character from The Croods, a DreamWorks animated movie.)"),
},
},
},
}
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(512)})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) != 1 {
panic("Expected 1 candidate, got " + fmt.Sprint(len(resp.Candidates)))
}
if len(resp.Candidates[0].Blocks) != 1 {
panic("Expected 1 block, got " + fmt.Sprint(len(resp.Candidates[0].Blocks)))
}
fmt.Println(strings.Contains(resp.Candidates[0].Blocks[0].Content.String(), "Crood"))
Output: true
Example (OpenRouter) ¶
// Create an OpenAI client for OpenRouter
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1/"),
option.WithAPIKey(os.Getenv("OPENROUTER_API_KEY")),
)
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
"google/gemini-2.5-pro-preview-03-25",
"You are a helpful assistant",
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hi!"),
},
},
},
}
// Customize generation parameters
opts := GenOpts{
MaxGenerationTokens: Ptr(1024),
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
// The exact response text may vary, so we'll just print a placeholder
fmt.Println("Response received")
fmt.Println(len(resp.Candidates))
Output:
Response received
1
Example (Pdf) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
pdfBytes, err := os.ReadFile("sample.pdf")
if err != nil {
fmt.Println("[Skipped: could not open sample.pdf]")
return
}
client := openai.NewClient(
option.WithAPIKey(apiKey),
)
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4_1,
"You are a helpful assistant.",
)
// Create a dialog with PDF content
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What is the title of this PDF? Just output the title and nothing else"),
PDFBlock(pdfBytes, "sample.pdf"),
},
},
}
// Generate a response
ctx := context.Background()
response, err := gen.Generate(ctx, dialog, nil)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// The response would contain the model's analysis of the PDF
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println(response.Candidates[0].Blocks[0].Content)
}
Output: Attention Is All You Need
Example (Thinking) ¶
// Create an OpenAI client
client := openai.NewClient()
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(&client.Chat.Completions, openai.ChatModelO3Mini, "You are a helpful assistant")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hi!"),
},
},
},
}
// Customize generation parameters
opts := GenOpts{
MaxGenerationTokens: Ptr(4096),
ThinkingBudget: "low",
Temperature: Ptr(1.0),
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
// The exact response text may vary, so we'll just print a placeholder
fmt.Println("Response received")
dialog = append(dialog, resp.Candidates[0], Message{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What can you do?"),
},
},
})
resp, err = gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println(len(resp.Candidates))
Output:
Response received
1
func (*OpenAiGenerator) Register ¶
func (g *OpenAiGenerator) Register(tool Tool) error
Register implements gai.ToolRegister
Example ¶
// Create an OpenAI client
client := openai.NewClient(option.WithBaseURL("https://gateway.ai.cloudflare.com/v1/4eee6dd2fdc8cebc7802c5a638f460fe/cpe/openai/"))
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
`You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`,
)
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the price of Apple stock?"),
},
},
},
}
// Customize generation parameters
opts := GenOpts{
ToolChoice: "get_stock_price", // Can specify a specific tool to force invoke
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[0].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
123.45
Example (OpenRouter) ¶
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Create an OpenAI client for OpenRouter
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1/"),
option.WithAPIKey(os.Getenv("OPENROUTER_API_KEY")),
)
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
"google/gemini-2.5-pro-preview-03-25",
`You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`,
)
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the price of Apple stock?"),
},
},
},
}
// Customize generation parameters
opts := GenOpts{
ToolChoice: "get_stock_price", // Can specify a specific tool to force invoke
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[0].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
123.45
Example (OpenRouterParallelToolUse) ¶
// Create an OpenAI client
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1/"),
option.WithAPIKey(os.Getenv("OPENROUTER_API_KEY")),
)
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
"google/gemini-2.5-pro-preview-03-25",
`You are a helpful assistant that compares the price of two stocks and returns the ticker of whichever is greater.
Only mention the ticker and nothing else.
Only output the ticker, like
<example>
User: Which one is more expensive? Apple or NVidia?
Assistant: calls get_stock_price for both Apple and Nvidia
Tool Result: Apple: 123.45; Nvidia: 345.65
Assistant: Nvidia
</example>
`,
)
// Register tools
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Which stock, Apple vs. Microsoft, is more expensive?"),
},
},
},
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
fmt.Println(resp.Candidates[0].Blocks[1].Content)
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[0].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
}, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[1].ID,
ModalityType: Text,
Content: Str("678.45"),
},
},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
{"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
MSFT
Example (ParallelToolUse) ¶
// Create an OpenAI client
client := openai.NewClient()
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
`You are a helpful assistant that compares the price of two stocks and returns the ticker of whichever is greater.
Only mention the ticker and nothing else.
Only output the ticker, like
<example>
User: Which one is more expensive? Apple or NVidia?
Assistant: calls get_stock_price for both Apple and Nvidia
Tool Result: Apple: 123.45; Nvidia: 345.65
Assistant: Nvidia
</example>
`,
)
// Register tools
tickerTool.Description += "\nYou can call this tool in parallel"
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Which stock, Apple vs. Microsoft, is more expensive?"),
},
},
},
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
fmt.Println(resp.Candidates[0].Blocks[1].Content)
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[0].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
}, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: resp.Candidates[0].Blocks[1].ID,
ModalityType: Text,
Content: Str("678.45"),
},
},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(resp.Candidates[0].Blocks[0].Content)
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
{"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
MSFT
func (*OpenAiGenerator) Stream ¶ added in v0.6.0
func (g *OpenAiGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
Example ¶
// Create an OpenAI client
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(
option.WithAPIKey(apiKey),
)
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(&client.Chat.Completions, openai.ChatModelGPT4oMini, "You are a helpful assistant")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hi!"),
},
},
},
}
// Stream a response
blocks := make([][]Block, 2)
for chunk, err := range gen.Stream(context.Background(), dialog, &GenOpts{N: Ptr[uint](2)}) {
if err != nil {
fmt.Println(err.Error())
return
}
blocks[chunk.CandidatesIndex] = append(blocks[chunk.CandidatesIndex], chunk.Block)
}
if len(blocks) == 2 && len(blocks[0]) > 1 && len(blocks[1]) > 1 {
fmt.Println("Response received")
}
Output: Response received
Example (ParallelToolUse) ¶
// Create an OpenAI client
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(
option.WithAPIKey(apiKey),
)
// Register tools
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(&client.Chat.Completions, openai.ChatModelGPT4oMini, "You are a helpful assistant")
// Register tools
tickerTool.Description += "\nYou can call this tool in parallel"
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Which stock, Apple vs. Microsoft, is more expensive?"),
},
},
},
}
// Stream a response
var blocks []Block
for chunk, err := range gen.Stream(context.Background(), dialog, nil) {
if err != nil {
fmt.Println(err.Error())
return
}
blocks = append(blocks, chunk.Block)
}
if len(blocks) > 1 {
fmt.Println("Response received")
}
// collect the blocks
var prevToolCallId string
var toolCalls []Block
var toolcallArgs string
var toolCallInput ToolCallInput
for _, block := range blocks {
// Skip metadata blocks
if block.BlockType == MetadataBlockType {
continue
}
if block.ID != "" && block.ID != prevToolCallId {
if toolcallArgs != "" {
// Parse the arguments string into a map
if err := json.Unmarshal([]byte(toolcallArgs), &toolCallInput.Parameters); err != nil {
panic(err.Error())
}
// Marshal back to JSON for consistent representation
toolUseJSON, err := json.Marshal(toolCallInput)
if err != nil {
panic(err.Error())
}
toolCalls[len(toolCalls)-1].Content = Str(toolUseJSON)
toolCallInput = ToolCallInput{}
toolcallArgs = ""
}
prevToolCallId = block.ID
toolCalls = append(toolCalls, Block{
ID: block.ID,
BlockType: ToolCall,
ModalityType: Text,
MimeType: "text/plain",
})
toolCallInput.Name = block.Content.String()
} else {
toolcallArgs += block.Content.String()
}
}
if toolcallArgs != "" {
// Parse the arguments string into a map
if err := json.Unmarshal([]byte(toolcallArgs), &toolCallInput.Parameters); err != nil {
panic(err.Error())
}
// Marshal back to JSON for consistent representation
toolUseJSON, err := json.Marshal(toolCallInput)
if err != nil {
panic(err.Error())
}
toolCalls[len(toolCalls)-1].Content = Str(toolUseJSON)
toolCallInput = ToolCallInput{}
}
fmt.Println(len(toolCalls))
dialog = append(dialog, Message{
Role: Assistant,
Blocks: toolCalls,
}, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCalls[0].ID,
ModalityType: Text,
Content: Str("123.45"),
},
},
}, Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCalls[1].ID,
ModalityType: Text,
Content: Str("678.45"),
},
},
})
// Stream a response
blocks = nil
for chunk, err := range gen.Stream(context.Background(), dialog, nil) {
if err != nil {
fmt.Println(err.Error())
return
}
blocks = append(blocks, chunk.Block)
}
if len(blocks) > 1 {
fmt.Println("Response received")
}
Output:
Response received
2
Response received
type OpenRouterGenerator ¶ added in v0.19.0
type OpenRouterGenerator struct {
// contains filtered or unexported fields
}
OpenRouterGenerator implements the Generator interface using OpenRouter's API, which is largely compatible with OpenAI's API but includes additional features like reasoning tokens and extended error information.
OpenRouter is a unified API that provides access to multiple LLM providers (OpenAI, Anthropic, Google, Meta, etc.) through a single interface. This generator leverages the OpenAI SDK since OpenRouter's API is a superset of OpenAI's API.
Reasoning Support: OpenRouter supports reasoning tokens via the "reasoning" parameter with effort levels ("low", "medium", "high") or max_tokens (as a string). This generator:
1. Sets reasoning config in requests via ThinkingBudget in GenOpts
2. Extracts reasoning_details from responses as Thinking blocks with extra fields:
- OpenRouterExtraFieldReasoningType
- OpenRouterExtraFieldReasoningFormat
- OpenRouterExtraFieldReasoningIndex
- OpenRouterExtraFieldReasoningSignature (when applicable)
3. Passes reasoning_details back in assistant messages (as recommended by OpenRouter)
4. Sets OpenRouterUsageMetricReasoningDetailsAvailable in Response.UsageMetadata when reasoning_details are present
Note: Streaming is not yet implemented for this generator.
func NewOpenRouterGenerator ¶ added in v0.19.0
func NewOpenRouterGenerator(client OpenAICompletionService, model string, systemInstructions string) *OpenRouterGenerator
NewOpenRouterGenerator creates a new OpenRouter generator that uses the OpenAI SDK with OpenRouter-specific configuration. Configure the underlying OpenAI client with the base URL "https://openrouter.ai/api/v1" and your OpenRouter API key.
Example:
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1"),
option.WithAPIKey(os.Getenv("OPENROUTER_API_KEY")),
)
gen := NewOpenRouterGenerator(&client.Chat.Completions, "anthropic/claude-3.5-sonnet", "You are helpful")
func (*OpenRouterGenerator) Generate ¶ added in v0.19.0
func (g *OpenRouterGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements Generator
Example ¶
// Create an OpenAI client configured for OpenRouter
apiKey := os.Getenv("OPENROUTER_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENROUTER_API_KEY env]")
return
}
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1"),
option.WithAPIKey(apiKey),
)
// Instantiate an OpenRouter Generator
gen := NewOpenRouterGenerator(&client.Chat.Completions, "z-ai/glm-4.6:exacto", "You are a helpful assistant")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hi!"),
},
},
},
}
// Generate a response
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
// The exact response text may vary, so we'll just print a placeholder
fmt.Println("Response received")
// Customize generation parameters
opts := GenOpts{
MaxGenerationTokens: Ptr(10000),
}
resp, err = gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println(len(resp.Candidates))
Output:
Response received
1
Example (Image) ¶
apiKey := os.Getenv("OPENROUTER_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENROUTER_API_KEY env]")
return
}
imgBytes, err := os.ReadFile("sample.jpg")
if err != nil {
fmt.Println("[Skipped: could not open sample.jpg]")
return
}
imgBase64 := Str(base64.StdEncoding.EncodeToString(imgBytes))
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1"),
option.WithAPIKey(apiKey),
)
// Use a vision-capable model through OpenRouter
gen := NewOpenRouterGenerator(
&client.Chat.Completions,
"qwen/qwen3-vl-235b-a22b-instruct",
"You are a helpful assistant.",
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Image,
MimeType: "image/jpeg",
Content: imgBase64,
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is in this image? (Hint, it's a character from The Croods, a DreamWorks animated movie.)"),
},
},
},
}
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(512)})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) != 1 {
panic("Expected 1 candidate, got " + fmt.Sprint(len(resp.Candidates)))
}
if len(resp.Candidates[0].Blocks) < 1 {
panic("Expected at least 1 block, got " + fmt.Sprint(len(resp.Candidates[0].Blocks)))
}
fmt.Println(strings.Contains(resp.Candidates[0].Blocks[0].Content.String(), "Crood"))
Output: true
Example (InvalidModel) ¶
// This example demonstrates handling of invalid model IDs with OpenRouter.
// OpenRouter returns a 400 status code with error details in the response body
// for invalid requests like nonsense model IDs.
apiKey := os.Getenv("OPENROUTER_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENROUTER_API_KEY env]")
return
}
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1"),
option.WithAPIKey(apiKey),
)
// Use a nonsense model ID to trigger an error
gen := NewOpenRouterGenerator(&client.Chat.Completions, "invalid/model-does-not-exist", "You are helpful")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("Hello"),
},
},
},
}
_, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
var apiErr ApiErr
if errors.As(err, &apiErr) {
fmt.Println("Handled error")
} else {
fmt.Println("Unexpected error type")
}
return
}
panic("unreachable")
Output: Handled error
Example (ReasoningModel) ¶
apiKey := os.Getenv("OPENROUTER_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENROUTER_API_KEY env]")
return
}
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1"),
option.WithAPIKey(apiKey),
)
// Use a reasoning model through OpenRouter
// NOTE: Models that support reasoning (like those with extended thinking)
// will automatically return reasoning_details which are extracted as Thinking blocks
gen := NewOpenRouterGenerator(
&client.Chat.Completions,
"z-ai/glm-4.6:exacto",
"You are a helpful assistant.",
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the square root of 144?"),
},
},
},
}
// Generate response - reasoning models may return thinking blocks automatically
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{ThinkingBudget: "low"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
// Check if we have thinking blocks (from reasoning_details)
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
// Thinking blocks have reasoning metadata in ExtraFields
if reasoningType, ok := block.ExtraFields["reasoning_type"].(string); ok {
_ = reasoningType // reasoning.text, reasoning.summary, or reasoning.encrypted
}
}
}
if hasThinking {
fmt.Println("Thinking blocks found")
}
// Find the main content block (not thinking)
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
content := block.Content.String()
if strings.Contains(content, "12") {
fmt.Println("Correct answer found")
}
break
}
}
}
dialog = append(dialog, resp.Candidates[0], Message{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the square root of 225?"),
},
},
})
// Generate response - reasoning models may return thinking blocks automatically
resp, err = gen.Generate(context.Background(), dialog, &GenOpts{ThinkingBudget: "low"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
// Check if we have thinking blocks (from reasoning_details)
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
// Thinking blocks have reasoning metadata in ExtraFields
if reasoningType, ok := block.ExtraFields["reasoning_type"].(string); ok {
_ = reasoningType // reasoning.text, reasoning.summary, or reasoning.encrypted
}
}
}
if hasThinking {
fmt.Println("Thinking blocks found")
}
// Find the main content block (not thinking)
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
content := block.Content.String()
if strings.Contains(content, "15") {
fmt.Println("Correct answer found")
}
break
}
}
}
Output:
Thinking blocks found
Correct answer found
Thinking blocks found
Correct answer found
func (*OpenRouterGenerator) Register ¶ added in v0.19.0
func (g *OpenRouterGenerator) Register(tool Tool) error
Register implements ToolRegister
Example ¶
apiKey := os.Getenv("OPENROUTER_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENROUTER_API_KEY env]")
return
}
client := openai.NewClient(
option.WithBaseURL("https://openrouter.ai/api/v1"),
option.WithAPIKey(apiKey),
)
gen := NewOpenRouterGenerator(
&client.Chat.Completions,
"moonshotai/kimi-k2-0905:exacto",
"You are a helpful assistant that returns the price of a stock and nothing else.",
)
// Register a tool
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(tickerTool); err != nil {
fmt.Println("Error:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{TextBlock("What is the price of Apple stock?")}},
}
// Force the tool call
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{ToolChoice: "get_stock_price"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 0 || len(resp.Candidates[0].Blocks) == 0 {
fmt.Println("Error: empty response")
return
}
// Find and print the tool call JSON
var toolCall Block
for _, b := range resp.Candidates[0].Blocks {
if b.BlockType == ToolCall {
toolCall = b
break
}
}
fmt.Println(toolCall.Content)
// Append tool result and continue the conversation
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{ID: toolCall.ID, BlockType: Content, ModalityType: Text, MimeType: "text/plain", Content: Str("123.45")},
},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) > 0 && len(resp.Candidates[0].Blocks) > 0 {
fmt.Println(resp.Candidates[0].Blocks[0].Content)
}
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
123.45
type PreprocessingGenerator ¶ added in v0.4.0
type PreprocessingGenerator struct {
GeneratorWrapper
}
PreprocessingGenerator is a transparent wrapper for any ToolCapableGenerator that automatically preprocesses the dialog before every Generate call.
Specifically, it consolidates parallel tool result messages into the format required by LLM providers such as Anthropic and Gemini, which expect parallel tool results to be delivered in a single message with multiple blocks, whereas OpenAI-style dialogs use separate messages for each. This wrapper ensures the dialog structure fed into the underlying generator is always in the correct, provider-specific format.
This helps keep generator implementations clean, centralizes parallel tool result normalization, and can be easily composed with future generators needing the same behavior.
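The consolidation this wrapper performs can be sketched with plain slices. The `message` and `block` types below are simplified stand-ins for gai's Message and Block, not the real API; the point is only the transformation of consecutive tool-result messages into one message with multiple blocks:

```go
package main

import "fmt"

// Simplified stand-ins for gai's Block and Message types, for illustration only.
type block struct {
	ID      string
	Content string
}

type message struct {
	Role   string // "assistant", "tool_result", ...
	Blocks []block
}

// consolidateToolResults merges runs of consecutive tool-result messages into a
// single message with multiple blocks, mirroring the normalization that
// PreprocessingGenerator applies before calling the wrapped generator.
func consolidateToolResults(dialog []message) []message {
	var out []message
	for _, m := range dialog {
		if m.Role == "tool_result" && len(out) > 0 && out[len(out)-1].Role == "tool_result" {
			last := &out[len(out)-1]
			last.Blocks = append(last.Blocks, m.Blocks...)
			continue
		}
		out = append(out, m)
	}
	return out
}

func main() {
	// OpenAI-style dialog: one tool result message per parallel tool call.
	dialog := []message{
		{Role: "assistant", Blocks: []block{{ID: "call_1"}, {ID: "call_2"}}},
		{Role: "tool_result", Blocks: []block{{ID: "call_1", Content: "123.45"}}},
		{Role: "tool_result", Blocks: []block{{ID: "call_2", Content: "678.45"}}},
	}
	merged := consolidateToolResults(dialog)
	fmt.Println(len(merged))           // 2
	fmt.Println(len(merged[1].Blocks)) // 2
}
```

After consolidation, the two parallel tool results travel in one message, which is the shape Anthropic- and Gemini-style providers expect.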
type RateLimitErr ¶
type RateLimitErr string
RateLimitErr is returned when the API request exceeds the allowed rate limits. This can include:
- Too many requests in a short time period
- Quota limits being reached
- Per-minute, per-hour, or per-day limits exceeded
The string value contains details about the specific rate limit issue.
func (RateLimitErr) Error ¶
func (r RateLimitErr) Error() string
type Response ¶
type Response struct {
// Candidates represents the list of possible generations that a Generator generates,
// equal to the number specified in [GenOpts.N]. Since the default value of [GenOpts.N] is 1,
// you can expect at least one Message to be present
Candidates []Message `json:"candidates" yaml:"candidates"`
// FinishReason represents the reason why a Generator stopped generating
FinishReason FinishReason `json:"finish_reason" yaml:"finish_reason"`
// UsageMetadata represents some arbitrary metrics and values that a Generator can return.
// The metric UsageMetricInputTokens and UsageMetricGenerationTokens is most commonly returned by an
// implementation of a Generator, representing the total input tokens and output tokens consumed, however
// it is not guaranteed to have those metrics be present. In addition, a Generator may return additional metrics
// specific to the implementation.
UsageMetadata Metadata `json:"usage_metadata,omitempty" yaml:"usage_metadata,omitempty"`
}
Response is what is returned by a Generator
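Since usage metrics are not guaranteed to be present, read them with an ok-check. The sketch below treats Metadata as a string-keyed map and uses hypothetical key constants; the concrete shape of gai's Metadata type and metric keys may differ:

```go
package main

import "fmt"

// Assumed shapes for illustration only: Metadata as a string-keyed map and
// hypothetical metric key constants standing in for gai's exported names.
type Metadata map[string]any

const (
	UsageMetricInputTokens      = "input_tokens"
	UsageMetricGenerationTokens = "generation_tokens"
)

func main() {
	usage := Metadata{
		UsageMetricInputTokens:      12,
		UsageMetricGenerationTokens: 34,
	}
	// A Generator is not required to report these metrics, so check presence
	// before using them.
	if in, ok := usage[UsageMetricInputTokens]; ok {
		fmt.Println("input tokens:", in)
	}
	if out, ok := usage[UsageMetricGenerationTokens]; ok {
		fmt.Println("generation tokens:", out)
	}
}
```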
type ResponsesGenerator ¶ added in v0.10.2
type ResponsesGenerator struct {
// contains filtered or unexported fields
}
ResponsesGenerator is a stateless generator that calls OpenAI models via the Responses API.
This generator operates in fully stateless mode: it sets store=false on every request and includes "reasoning.encrypted_content" so that encrypted reasoning tokens are returned in API responses. These encrypted tokens are stored in Thinking block ExtraFields and automatically reconstructed as reasoning input items when the dialog is passed back for subsequent turns (e.g., during multi-step function calling).
func NewResponsesGenerator ¶ added in v0.10.2
func NewResponsesGenerator(client ResponsesService, model, systemInstructions string) ResponsesGenerator
NewResponsesGenerator creates a new OpenAI Responses API generator with the specified model. The returned generator implements the Generator, StreamingGenerator, and ToolRegister interfaces.
Parameters:
- client: An OpenAI Responses service (typically &client.Responses)
- model: The OpenAI model to use (e.g., "gpt-5")
- systemInstructions: Optional system instructions that set the model's behavior
Supported modalities:
- Text: Both input and output
- Image: Input only (base64 encoded, including PDFs with MIME type "application/pdf")
- Audio: Input only (base64 encoded, WAV and MP3 formats)
For audio input, use models with audio support like:
- openai.ChatModelGPT4oAudioPreview
- openai.ChatModelGPT4oMiniAudioPreview
PDF documents are supported as a special case of the Image modality. Use the PDFBlock helper function to create PDF content blocks.
This generator fully supports the anyOf JSON Schema feature.
func (*ResponsesGenerator) Generate ¶ added in v0.10.2
func (r *ResponsesGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Example ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Mini, "You are a helpful assistant")
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("Hi!")}}}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println("Response received")
fmt.Println(len(resp.Candidates))
Output:
Response received
1
Example (Image) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
imgBytes, err := os.ReadFile("sample.jpg")
if err != nil {
fmt.Println("[Skipped: could not open sample.jpg]")
return
}
imgBase64 := Str(base64.StdEncoding.EncodeToString(imgBytes))
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Mini, "You are a helpful assistant.")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Image,
MimeType: "image/jpeg",
Content: imgBase64,
},
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is in this image? (Hint, it's a character from The Croods, a DreamWorks animated movie.) Answer with only the name of the character"),
},
},
},
}
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{MaxGenerationTokens: Ptr(512), ThinkingBudget: "high"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) != 1 {
panic("Expected 1 candidate, got " + fmt.Sprint(len(resp.Candidates)))
}
if len(resp.Candidates[0].Blocks) == 0 {
panic("Expected at least 1 block")
}
// Find the first Content block (skip Thinking blocks from reasoning)
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Content {
fmt.Println(strings.Contains(blk.Content.String(), "Guy"))
break
}
}
Output: true
Example (Pdf) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
pdfBytes, err := os.ReadFile("sample.pdf")
if err != nil {
fmt.Println("[Skipped: could not open sample.pdf]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Mini, "You are a helpful assistant.")
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What is the title of this PDF? Just output the title and nothing else"),
PDFBlock(pdfBytes, "sample.pdf"),
},
},
}
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{ThinkingBudget: "low"})
if err != nil {
fmt.Println("Error:", err)
return
}
// Find the first Content block (skip Thinking blocks from reasoning)
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Content {
fmt.Println(blk.Content)
break
}
}
Output: Attention Is All You Need
Example (Thinking) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5, "You are a helpful assistant")
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("Are LLMs conscious? Think it through and give a comprehensive answer")}}}
opts := GenOpts{ThinkingBudget: "medium", Temperature: Ptr(1.0), ExtraArgs: map[string]any{
ResponsesThoughtSummaryDetailParam: responses.ReasoningSummaryDetailed,
}}
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println("Response received")
// The generator is stateless: just append the assistant response and continue.
// Reasoning blocks with encrypted content are automatically reconstructed as
// input reasoning items on the next call.
dialog = append(dialog, resp.Candidates[0], Message{Role: User, Blocks: []Block{TextBlock("What can you do?")}})
resp, err = gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
fmt.Println(len(resp.Candidates))
Output:
Response received
1
func (*ResponsesGenerator) Register ¶ added in v0.10.2
func (r *ResponsesGenerator) Register(tool Tool) error
Example ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Mini, `You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`)
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("What is the price of Apple stock?")}}}
opts := GenOpts{ToolChoice: "get_stock_price"}
resp, err := gen.Generate(context.Background(), dialog, &opts)
if err != nil {
panic(err.Error())
}
// Find the first ToolCall block (reasoning models may produce Thinking blocks before tool calls)
var toolCallBlock Block
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == ToolCall {
toolCallBlock = blk
break
}
}
fmt.Println(toolCallBlock.Content)
// Append the assistant's response and the tool result. The generator is stateless
// and manages conversation context through the dialog.
dialog = append(dialog, resp.Candidates[0], Message{Role: ToolResult, Blocks: []Block{{ID: toolCallBlock.ID, ModalityType: Text, MimeType: "text/plain", Content: Str("123.45")}}})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
// Find the first Content block in the final response
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Content {
fmt.Println(blk.Content)
break
}
}
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
123.45
Example (ParallelToolUse) ¶
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Mini, `You are a helpful assistant that compares the price of two stocks and returns the ticker of whichever is greater.
Only mention the ticker and nothing else.
Only output the ticker, like
<example>
User: Which one is more expensive? Apple or NVidia?
Assistant: calls get_stock_price for both Apple and Nvidia
Tool Result: Apple: 123.45; Nvidia: 345.65
Assistant: Nvidia
</example>
`)
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.\nYou can call this tool in parallel",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("Which stock, Apple vs. Microsoft, is more expensive?")}}}
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{ThinkingBudget: "medium"})
if err != nil {
panic(err.Error())
}
// Collect ToolCall blocks (reasoning models may produce Thinking blocks before tool calls)
var toolCallBlocks []Block
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == ToolCall {
toolCallBlocks = append(toolCallBlocks, blk)
}
}
fmt.Println(toolCallBlocks[0].Content)
fmt.Println(toolCallBlocks[1].Content)
// Append the assistant's response and tool results. The generator is stateless
// and manages conversation context through the dialog.
dialog = append(dialog, resp.Candidates[0], Message{Role: ToolResult, Blocks: []Block{{ID: toolCallBlocks[0].ID, ModalityType: Text, MimeType: "text/plain", Content: Str("123.45")}}}, Message{Role: ToolResult, Blocks: []Block{{ID: toolCallBlocks[1].ID, ModalityType: Text, MimeType: "text/plain", Content: Str("678.45")}}})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
// Find the first Content block in the final response
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Content {
fmt.Println(blk.Content)
break
}
}
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
{"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
MSFT
func (*ResponsesGenerator) Stream ¶ added in v0.15.0
func (r *ResponsesGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
Example (Thinking) ¶
ExampleResponsesGenerator_Stream_thinking demonstrates consuming the raw streaming iterator with a reasoning model. The stream yields thinking chunks (reasoning deltas) interleaved with content chunks. At the end, a metadata block carries usage information. This example also shows how to build a dialog-ready assistant message from the streamed blocks using compressStreamingBlocks (via StreamingAdapter) for a follow-up turn.
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Nano, "You are a helpful assistant")
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("What is the capital of France? Reply with just the city name.")}}}
opts := &GenOpts{
ThinkingBudget: "low",
ExtraArgs: map[string]any{
ResponsesThoughtSummaryDetailParam: responses.ReasoningSummaryDetailed,
},
}
// Use StreamingAdapter so the streamed output is automatically compressed
// into a proper Response with Thinking blocks carrying ExtraFields (including
// encrypted reasoning content for stateless multi-turn conversations).
adapter := &StreamingAdapter{S: &gen}
resp, err := adapter.Generate(context.Background(), dialog, opts)
if err != nil {
panic(err.Error())
}
// The compressed response preserves Thinking blocks from the reasoning model.
hasThinking := false
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Thinking {
hasThinking = true
break
}
}
fmt.Println("Has thinking blocks:", hasThinking)
// Append the full assistant message to the dialog. Thinking blocks with
// encrypted content are included, so the next call can reconstruct reasoning
// input items automatically.
dialog = append(dialog, resp.Candidates[0], Message{Role: User, Blocks: []Block{TextBlock("And what country is that in?")}})
resp, err = adapter.Generate(context.Background(), dialog, opts)
if err != nil {
panic(err.Error())
}
// Find the content block in the follow-up response.
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Content {
fmt.Println(strings.Contains(blk.Content.String(), "France"))
break
}
}
Output:
Has thinking blocks: true
true
type ResponsesService ¶ added in v0.10.2
type ResponsesService interface {
New(ctx context.Context, body responses.ResponseNewParams, opts ...option.RequestOption) (res *responses.Response, err error)
NewStreaming(ctx context.Context, body responses.ResponseNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[responses.ResponseStreamEventUnion])
}
type RetryGenerator ¶ added in v0.4.9
type RetryGenerator struct {
GeneratorWrapper // Embed for default Count/Register/Stream delegation
// contains filtered or unexported fields
}
RetryGenerator is a Generator that wraps another Generator and retries the Generate call according to a specified base backoff policy and retry options.
It retries on specific errors:
- context.DeadlineExceeded (from the Generate call itself, not the overall context)
- gai.RateLimitErr
- gai.ApiErr with HTTP status code 429 (Too Many Requests)
- gai.ApiErr with HTTP status codes 5xx (Server Errors)
func NewRetryGenerator ¶ added in v0.4.9
func NewRetryGenerator(generator Generator, baseBo backoff.BackOff, opts ...backoff.RetryOption) *RetryGenerator
NewRetryGenerator creates a new RetryGenerator.
Parameters:
- generator: The underlying Generator to wrap.
- baseBo: The base backoff.BackOff policy to use (e.g., an instance of *ExponentialBackOff). If nil, a default *ExponentialBackOff with standard intervals (Initial: 500ms, Max: 15s) is created.
- opts: Optional backoff.RetryOption(s) to apply to each Retry call. These can configure aspects like max elapsed time, max retries, or notification functions. If no opts are provided, a default MaxElapsedTime (1 minute) will be applied. If opts are provided, they are used directly; ensure they are comprehensive for your needs (e.g., if you provide WithMaxTries, consider if you also need WithMaxElapsedTime). It is recommended NOT to include backoff.WithBackOff() in opts, as `baseBo` is always applied as the primary backoff strategy.
func (*RetryGenerator) Generate ¶ added in v0.4.9
func (rg *RetryGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate calls the underlying Generator's Generate method, retrying on specific errors according to the configured backoff policy and options. The provided context (ctx) is respected by the retry loop: if ctx is cancelled, retries will stop.
type Role ¶
type Role uint
Role represents what type a Message is
const (
	// User represents the user role in a list of messages
	User Role = iota
	// Assistant represents the assistant role in a list of messages.
	// A Message that has an Assistant role represents content generated
	// by the model
	Assistant
	// ToolResult represents the result of a tool execution.
	// A Message with this role contains the output from a tool that
	// was called during generation. This allows tool results to support
	// multiple Blocks of different Modalities
	ToolResult
)
type StreamChunk ¶ added in v0.6.0
type StreamChunk struct {
Block Block `json:"block" yaml:"block"`
CandidatesIndex int `json:"candidates_index" yaml:"candidates_index"`
}
StreamChunk represents a single chunk of content yielded during streaming generation. Each chunk contains a partial Block that will be combined with other chunks to form complete blocks in the final response.
The Block field contains partial content that depends on the BlockType:
- For "content" blocks: partial text fragments
- For "thinking" blocks: partial reasoning fragments
- For "tool_call" blocks: either a header (with ID and tool name) or parameter fragments
- For MetadataBlockType blocks: usage metrics (always the last chunk)
CandidatesIndex indicates which candidate this chunk belongs to when N>1 is used. Currently only CandidatesIndex=0 is supported by the StreamingAdapter.
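How partial chunks combine into complete blocks can be sketched with plain slices. The `chunk` type below is a simplified stand-in for StreamChunk's Block (not the real API), and the merge rule is the one described above: consecutive chunks of the same block type are concatenated:

```go
package main

import "fmt"

// Simplified stand-in for a streamed block, for illustration only.
type chunk struct {
	BlockType string // "content", "thinking", "tool_call", ...
	Content   string
}

// mergeConsecutive concatenates runs of chunks that share a BlockType,
// mirroring how partial text fragments from a stream are combined into
// complete blocks in the final response.
func mergeConsecutive(chunks []chunk) []chunk {
	var out []chunk
	for _, c := range chunks {
		if n := len(out); n > 0 && out[n-1].BlockType == c.BlockType {
			out[n-1].Content += c.Content
			continue
		}
		out = append(out, c)
	}
	return out
}

func main() {
	stream := []chunk{
		{BlockType: "thinking", Content: "Consider "},
		{BlockType: "thinking", Content: "the question."},
		{BlockType: "content", Content: "The capital "},
		{BlockType: "content", Content: "is Paris."},
	}
	for _, b := range mergeConsecutive(stream) {
		fmt.Println(b.BlockType+":", b.Content)
	}
}
```

StreamingAdapter (below) performs this compression for you when you want a complete Response instead of individual chunks.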
type StreamingAdapter ¶ added in v0.6.0
type StreamingAdapter struct {
S StreamingGenerator
}
StreamingAdapter converts a StreamingGenerator to a Generator by collecting all chunks and compressing them into a complete Response. This adapter handles the conversion from streaming chunks to the standard Response format expected by the Generator interface.
The adapter:
1. Collects all chunks from the StreamingGenerator
2. Uses compressStreamingBlocks to merge consecutive chunks of the same type
3. Constructs a Response with the compressed blocks
4. Sets FinishReason based on whether tool calls are present
Note: This adapter currently only supports single candidate responses (N=1). If the streaming generator yields chunks with CandidatesIndex > 0, an error is returned.
Example ¶
ExampleStreamingAdapter demonstrates how to use StreamingAdapter to convert a StreamingGenerator to a regular Generator. This is useful when you want to use streaming internally but present a non-streaming interface to users.
// Create an OpenAI client
client := openai.NewClient()
// Create a generator with streaming support
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
"You are a helpful assistant.",
)
// Wrap the generator with StreamingAdapter
// This converts the streaming interface to a regular Generate interface
adapter := StreamingAdapter{S: &gen}
// Create a simple dialog
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What is the capital of France?"),
},
},
}
// Use the adapter's Generate method - it will internally stream
// and compress the chunks into a complete response
response, err := adapter.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Print the response
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
fmt.Println("Assistant:", response.Candidates[0].Blocks[0].Content)
}
Output: Assistant: The capital of France is Paris.
Example (CustomUsage) ¶
ExampleStreamingAdapter_customUsage shows how to create a custom generator that implements StreamingGenerator and use it with StreamingAdapter.
// This example shows how someone might implement their own StreamingGenerator
// and use it with StreamingAdapter
// Create a custom implementation
customGen := &customStreamingGenerator{
systemPrompt: "You are a helpful assistant.",
}
// Wrap with StreamingAdapter to get a regular Generator interface
adapter := StreamingAdapter{S: customGen}
// Now you can use it as a regular generator
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("Hello!"),
},
},
}
response, err := adapter.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
// Remove the "Mock response: " prefix for consistent output
content := response.Candidates[0].Blocks[0].Content.String()
content = strings.TrimPrefix(content, "Mock response: ")
fmt.Println("Response:", content)
}
Output: Response: Hello! How can I help you today?
Example (ErrorHandling) ¶
ExampleStreamingAdapter_errorHandling demonstrates how StreamingAdapter handles errors that occur during streaming.
// Create an OpenAI client
client := openai.NewClient()
// Create a generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
"You are a helpful assistant.",
)
// Wrap with StreamingAdapter
adapter := StreamingAdapter{S: &gen}
// Create an empty dialog (which should cause an error)
dialog := Dialog{}
// Try to generate - this should return an error
_, err := adapter.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Printf("Got expected error: %v\n", err)
}
Output: Got expected error: empty dialog: at least one message required
Example (MultipleBlocks) ¶
ExampleStreamingAdapter_multipleBlocks demonstrates how StreamingAdapter handles responses with multiple blocks of different types, showing the compression of consecutive blocks of the same type.
// This example demonstrates the internal behavior of StreamingAdapter
// by showing how it would handle a mock streaming generator
// Create a mock streaming generator that yields multiple chunks
mockGen := &mockStreamingGenerator{
chunks: []StreamChunk{
// First content chunk
{
Block: Block{
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str("The weather in "),
},
CandidatesIndex: 0,
},
// Second content chunk (will be concatenated)
{
Block: Block{
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str("Paris is "),
},
CandidatesIndex: 0,
},
// Third content chunk (will be concatenated)
{
Block: Block{
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str("sunny today."),
},
CandidatesIndex: 0,
},
},
}
// Wrap with StreamingAdapter
adapter := StreamingAdapter{S: mockGen}
// Generate
response, err := adapter.Generate(context.Background(), nil, nil)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// The adapter should have compressed the three chunks into one block
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) == 1 {
fmt.Println("Compressed content:", response.Candidates[0].Blocks[0].Content)
fmt.Println("Finish reason:", response.FinishReason == EndTurn)
}
Output:
Compressed content: The weather in Paris is sunny today.
Finish reason: true
Example (ParallelToolCalls) ¶
ExampleStreamingAdapter_parallelToolCalls demonstrates how StreamingAdapter handles parallel tool calls, showing the compression of multiple tool call chunks.
// Create an OpenAI client
client := openai.NewClient()
// Create a generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
"You are a helpful stock price assistant.",
)
// Register a stock price tool
stockTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(stockTool); err != nil {
fmt.Printf("Error registering tool: %v\n", err)
return
}
// Wrap with StreamingAdapter
adapter := StreamingAdapter{S: &gen}
// Ask about multiple stocks
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What are the current prices of Apple and Microsoft stocks?"),
},
},
}
// Generate with tool use
response, err := adapter.Generate(context.Background(), dialog, &GenOpts{
ToolChoice: ToolChoiceAuto,
})
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Count the number of tool calls
toolCallCount := 0
for _, block := range response.Candidates[0].Blocks {
if block.BlockType == ToolCall {
toolCallCount++
var toolCall ToolCallInput
if err := json.Unmarshal([]byte(block.Content.String()), &toolCall); err == nil {
fmt.Printf("Tool call %d: %s with ticker=%v\n",
toolCallCount, toolCall.Name, toolCall.Parameters["ticker"])
}
}
}
fmt.Printf("Finish reason: %v\n", response.FinishReason == ToolUse)
Output:
Tool call 1: get_stock_price with ticker=AAPL
Tool call 2: get_stock_price with ticker=MSFT
Finish reason: true
Example (Responses) ¶
ExampleStreamingAdapter_responses demonstrates using StreamingAdapter with the ResponsesGenerator for stateless multi-turn conversation. The adapter compresses streaming chunks into complete Response objects, making it easy to append the assistant's response to the dialog for subsequent turns.
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
// Create the generator and wrap it with StreamingAdapter.
// StreamingAdapter.Generate streams internally, then compresses chunks into
// a standard Response — identical to what ResponsesGenerator.Generate returns.
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Nano, "You are a helpful assistant")
adapter := &StreamingAdapter{S: &gen}
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("Hi!")}}}
resp, err := adapter.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println("Response received")
// The adapter produces a complete Response with Candidates, just like Generate.
// Append the assistant's message and continue the conversation statelessly.
dialog = append(dialog, resp.Candidates[0], Message{Role: User, Blocks: []Block{TextBlock("What can you help me with?")}})
resp, err = adapter.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
fmt.Println(len(resp.Candidates))
Output:
Response received
1
Example (Responses_toolUse) ¶
ExampleStreamingAdapter_responses_toolUse demonstrates using StreamingAdapter with tool calling on the Responses API. The adapter compresses streaming tool call chunks into complete blocks, preserving IDs and Thinking block ExtraFields so the dialog can be passed back for subsequent turns without any manual chunk reconstruction.
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set OPENAI_API_KEY env]")
return
}
client := openai.NewClient(option.WithAPIKey(apiKey))
gen := NewResponsesGenerator(&client.Responses, openai.ChatModelGPT5Mini, `You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`)
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.\nYou can call this tool in parallel",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(tickerTool); err != nil {
panic(err.Error())
}
// StreamingAdapter wraps the generator so we get compressed Response objects
// instead of raw streaming chunks.
adapter := &StreamingAdapter{S: &gen}
dialog := Dialog{{Role: User, Blocks: []Block{TextBlock("Which stock, Apple vs. Microsoft, is more expensive?")}}}
// Turn 1: the model should call get_stock_price for both tickers.
resp, err := adapter.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
// Collect the tool call blocks from the compressed response.
var toolCallBlocks []Block
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == ToolCall {
toolCallBlocks = append(toolCallBlocks, blk)
}
}
fmt.Println(toolCallBlocks[0].Content)
fmt.Println(toolCallBlocks[1].Content)
// Append the full assistant message (including any Thinking blocks with encrypted
// reasoning content) and tool results. This is the key advantage of StreamingAdapter:
// the compressed Candidates[0] is directly usable in the dialog.
dialog = append(dialog, resp.Candidates[0],
Message{Role: ToolResult, Blocks: []Block{{ID: toolCallBlocks[0].ID, ModalityType: Text, MimeType: "text/plain", Content: Str("123.45")}}},
Message{Role: ToolResult, Blocks: []Block{{ID: toolCallBlocks[1].ID, ModalityType: Text, MimeType: "text/plain", Content: Str("678.45")}}},
)
// Turn 2: the model responds with the answer.
resp, err = adapter.Generate(context.Background(), dialog, nil)
if err != nil {
panic(err.Error())
}
for _, blk := range resp.Candidates[0].Blocks {
if blk.BlockType == Content {
fmt.Println(blk.Content)
break
}
}
Output:
{"name":"get_stock_price","parameters":{"ticker":"AAPL"}}
{"name":"get_stock_price","parameters":{"ticker":"MSFT"}}
MSFT
Example (WithToolGenerator) ¶
ExampleStreamingAdapter_withToolGenerator demonstrates using StreamingAdapter together with ToolGenerator to create a complete tool-using assistant that internally uses streaming but presents a non-streaming interface.
// Create an OpenAI client
client := openai.NewClient()
// Create a generator with streaming support
baseGen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
"You are a helpful assistant that can check weather and time.",
)
// Create a ToolGenerator
toolGen := &ToolGenerator{
G: &baseGen,
}
// Register weather tool with callback
weatherTool := Tool{
Name: "get_weather",
Description: "Get the current weather in a location",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Location string `json:"location" jsonschema:"required" jsonschema_description:"The city and state, e.g. San Francisco, CA"`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Simple weather callback
weatherCallback := ToolCallBackFunc[struct {
Location string `json:"location"`
}](func(ctx context.Context, params struct {
Location string `json:"location"`
}) (string, error) {
// Mock weather data
return fmt.Sprintf("The weather in %s is sunny and 72°F", params.Location), nil
})
if err := toolGen.Register(weatherTool, weatherCallback); err != nil {
fmt.Printf("Error registering tool: %v\n", err)
return
}
// Create a dialog
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What's the weather in San Francisco?"),
},
},
}
// Use ToolGenerator's Generate method which will handle tool calls
// The underlying OpenAI generator uses streaming internally
completeDialog, err := toolGen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// Print the final response (skipping tool calls and results)
foundWeatherResponse := false
for _, msg := range completeDialog {
if msg.Role == Assistant && len(msg.Blocks) > 0 {
block := msg.Blocks[0]
if block.BlockType == Content {
content := block.Content.String()
// Check if the response mentions weather in San Francisco
if strings.Contains(content, "San Francisco") &&
strings.Contains(content, "sunny") &&
strings.Contains(content, "72°F") {
foundWeatherResponse = true
fmt.Println("Found weather response for San Francisco")
}
}
}
}
if !foundWeatherResponse {
fmt.Println("No weather response found")
}
Output: Found weather response for San Francisco
Example (WithTools) ¶
ExampleStreamingAdapter_withTools demonstrates using StreamingAdapter with tool calls. The adapter handles the compression of streaming tool call chunks into complete tool calls.
// Create an OpenAI client
client := openai.NewClient()
// Create a generator with streaming support
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
"You are a helpful weather assistant.",
)
// Register a weather tool
weatherTool := Tool{
Name: "get_weather",
Description: "Get the current weather in a given location",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Location string `json:"location" jsonschema:"required" jsonschema_description:"The city and state, e.g. San Francisco, CA"`
Unit string `json:"unit,omitempty" jsonschema_description:"The unit of temperature"`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(weatherTool); err != nil {
fmt.Printf("Error registering tool: %v\n", err)
return
}
// Wrap with StreamingAdapter
adapter := StreamingAdapter{S: &gen}
// Create a dialog asking about weather
dialog := Dialog{
{
Role: User,
Blocks: []Block{
TextBlock("What's the weather like in New York?"),
},
},
}
// Generate with tool use enabled
response, err := adapter.Generate(context.Background(), dialog, &GenOpts{
ToolChoice: ToolChoiceAuto,
})
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
// The response should contain a tool call
if len(response.Candidates) > 0 && len(response.Candidates[0].Blocks) > 0 {
block := response.Candidates[0].Blocks[0]
if block.BlockType == ToolCall {
// Parse the tool call
var toolCall ToolCallInput
if err := json.Unmarshal([]byte(block.Content.String()), &toolCall); err == nil {
fmt.Printf("Tool called: %s\n", toolCall.Name)
fmt.Printf("Location: %v\n", toolCall.Parameters["location"])
}
}
}
Output:
Tool called: get_weather
Location: New York, NY
func (*StreamingAdapter) Register ¶ added in v0.6.0
func (s *StreamingAdapter) Register(tool Tool) error
type StreamingGenerator ¶ added in v0.6.0
type StreamingGenerator interface {
Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
}
StreamingGenerator is an interface for generators that support streaming responses. It takes a Dialog and optional GenOpts and returns an iterator that yields chunks of content as they are generated by the underlying model.
Usage Metadata in Streaming ¶
StreamingGenerator provides usage metrics as the final block in the stream. The last StreamChunk emitted will have a Block with BlockType MetadataBlockType containing token usage information in the Metadata map.
When using StreamingAdapter.Generate(), the metadata block is automatically extracted and populated into Response.UsageMetadata, providing the same experience as non-streaming generators.
If consuming the stream directly without StreamingAdapter, you can identify metadata blocks by checking BlockType == MetadataBlockType, then parsing the JSON content to extract usage metrics.
Streaming Chunk Patterns ¶
Implementations of StreamingGenerator should follow these patterns when yielding chunks:
1. Content Blocks (Text):
- Yield multiple chunks with BlockType="content", ModalityType=Text
- Each chunk contains a partial text fragment
- Consecutive content chunks will be concatenated during compression
- Must set MimeType="text/plain" for text content
2. Thinking Blocks:
- Yield multiple chunks with BlockType="thinking", ModalityType=Text
- Each chunk contains a partial reasoning fragment
- Consecutive thinking chunks will be concatenated during compression
- Must set MimeType="text/plain"
3. Tool Call Blocks:
- First chunk: BlockType="tool_call", ID=<unique_id>, Content=<tool_name>
- Subsequent chunks: BlockType="tool_call", ID="" (empty), Content=<JSON_fragment>
- JSON fragments are concatenated to form complete parameters
- The final concatenated JSON must be valid and parse to map[string]any
Chunk Compression ¶
The StreamingAdapter uses compressStreamingBlocks to convert streaming chunks into canonical blocks for the final Response. The compression follows these rules:
- Consecutive blocks of the same type are merged:
- Multiple "content" chunks -> Single content block with concatenated text
- Multiple "thinking" chunks -> Single thinking block with concatenated text
- Tool calls are reconstructed:
- Header chunk (with ID) marks the start of a new tool call
- Parameter chunks (no ID) are concatenated to form complete JSON
- Final block contains ToolCallInput{Name, Parameters} as JSON
Constraints and Requirements ¶
1. Modality Constraints:
- Content and Thinking blocks MUST have ModalityType=Text
- Tool call blocks MUST have ModalityType=Text
- Non-text modalities in streaming are not currently supported
2. Tool Call Structure:
- Tool calls must start with a header chunk (ID set, content = tool name)
- Parameter chunks must have empty ID
- Parameter chunks when concatenated must form valid JSON
3. Error Handling:
- Yield errors through the iterator's error return value
- Once an error is yielded, no further chunks should be yielded
- Common errors include rate limits, content policy violations, etc.
4. Candidate Support:
- Currently only CandidatesIndex=0 is supported
- Implementations should error if N>1 is requested
Example Implementation Pattern ¶
func (g *MyGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error] {
return func(yield func(StreamChunk, error) bool) {
// Validate inputs
if len(dialog) == 0 {
yield(StreamChunk{}, EmptyDialogErr)
return
}
// Start streaming from API
stream := g.api.StartStream(convertDialog(dialog))
defer stream.Close()
for event := range stream.Events() {
switch event.Type {
case "text_delta":
if !yield(StreamChunk{
Block: Block{
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(event.Text),
},
CandidatesIndex: 0,
}, nil) {
return // User stopped iteration
}
case "tool_call_start":
if !yield(StreamChunk{
Block: Block{
ID: event.ToolID,
BlockType: ToolCall,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(event.ToolName),
},
CandidatesIndex: 0,
}, nil) {
return
}
case "tool_call_delta":
if !yield(StreamChunk{
Block: Block{
BlockType: ToolCall,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(event.JSONDelta),
},
CandidatesIndex: 0,
}, nil) {
return
}
case "error":
yield(StreamChunk{}, event.Error)
return
}
}
}
}
A Generator implementation may return several types of errors:
- MaxGenerationLimitErr when the maximum token generation limit is exceeded
- UnsupportedInputModalityErr when encountering an unsupported input modality
- UnsupportedOutputModalityErr when requested to generate an unsupported output modality
- InvalidToolChoiceErr when an invalid tool choice is specified
- InvalidParameterErr when generation parameters are invalid or out of range
- ContextLengthExceededErr when input dialog is too long
- ContentPolicyErr when content violates usage policies
- EmptyDialogErr when no messages are provided in the dialog
- AuthenticationErr when there are authentication or authorization issues
type TokenCounter ¶ added in v0.4.8
TokenCounter is an interface for a generator that can count the number of tokens in a Dialog. This is useful for:
- Estimating costs before sending a request to the API
- Checking if a dialog exceeds the context window limits of a model
- Optimizing prompt design by analyzing token usage
- Managing rate limits that are based on token counts
The exact method of token counting varies by provider:
- OpenAI uses tiktoken to count tokens without making an API call
- Anthropic calls a dedicated counting API endpoint
- Gemini calls a dedicated counting API endpoint
In all cases, the Count method takes a context for cancellation and a Dialog to analyze. The number of tokens is returned as a uint.
Note that some providers count system instructions separately, but this interface will include them in the returned count if the generator was initialized with them.
type Tool ¶
type Tool struct {
// Name is the identifier used to reference this tool.
// It should be unique among all tools provided to a Generator.
Name string `json:"name" yaml:"name"`
// Description explains what the tool does.
// This helps the Generator understand when and how to use the tool.
Description string `json:"description,omitempty" yaml:"description,omitempty"`
// InputSchema defines the parameters this tool accepts using JSON Schema.
// A nil value indicates no parameters are accepted.
// The schema should typically be of type "object" for parameter definitions.
InputSchema *jsonschema.Schema `json:"input_schema,omitempty" yaml:"input_schema,omitempty"`
}
Tool represents a tool that can be called by a Generator during generation. Each tool has a name, description, and a schema defining its input parameters.
Example tools:
A simple tool with a single required string parameter:
{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: &jsonschema.Schema{
Type: "object",
Properties: map[string]*jsonschema.Schema{
"ticker": {
Type: "string",
Description: "The stock ticker symbol, e.g. AAPL for Apple Inc.",
},
},
Required: []string{"ticker"},
},
}
A tool with both required and optional parameters:
{
Name: "get_weather",
Description: "Get the current weather in a given location",
InputSchema: &jsonschema.Schema{
Type: "object",
Properties: map[string]*jsonschema.Schema{
"location": {
Type: "string",
Description: "The city and state, e.g. San Francisco, CA",
},
"unit": {
Type: "string",
Enum: []interface{}{"celsius", "fahrenheit"},
Description: "The unit of temperature, either 'celsius' or 'fahrenheit'",
},
},
Required: []string{"location"},
},
}
A tool with an array parameter:
{
Name: "get_batch_stock_prices",
Description: "Get the current stock prices for a list of ticker symbols.",
InputSchema: &jsonschema.Schema{
Type: "object",
Properties: map[string]*jsonschema.Schema{
"tickers": {
Type: "array",
Description: "List of stock ticker symbols, e.g. ['AAPL', 'GOOGL', 'MSFT']",
Items: &jsonschema.Schema{
Type: "string",
Description: "A stock ticker symbol",
},
},
},
Required: []string{"tickers"},
},
}
A tool with no parameters:
{
Name: "get_server_time",
Description: "Get the current server time in UTC.",
InputSchema: nil, // or omit the field entirely
}
type ToolCallBackFunc ¶ added in v0.4.0
ToolCallBackFunc is a generic function type that wraps a callback function with a strongly-typed parameter struct, implementing the ToolCallback interface.
The type parameter T represents the struct type that will be unmarshaled from the tool's JSON parameters. This allows for type-safe tool callbacks without the need to manually handle JSON unmarshaling or message creation.
Callback error handling:
- If the callback returns an error value of type CallbackExecErr (or wraps one), this signals a true callback execution error (panic, cancellation, etc.), and execution will terminate.
- Any other non-nil error is treated as a tool result error: the error message will be sent as a textual tool result message to the generator, not treated as fatal.
Example usage:
type WeatherParams struct {
Location string `json:"location"`
Unit string `json:"unit,omitempty"`
}
func getWeather(ctx context.Context, params WeatherParams) (string, error) {
if params.Location == "" {
return "", fmt.Errorf("location is required") // Erroneous tool result
}
// Simulate a callback execution error
// return "", CallbackExecErr{Err: fmt.Errorf("panic occurred")}
return fmt.Sprintf("Weather in %s: 72°F", params.Location), nil
}
// Register the tool
weatherTool := Tool{
Name: "get_weather",
Description: "Get the current weather for a location",
// InputSchema definition...
}
toolGen.Register(weatherTool, ToolCallBackFunc(getWeather))
Example ¶
ExampleToolCallBackFunc demonstrates how to use ToolCallBackFunc to easily create tool callbacks with strongly-typed parameters.
package main
import (
"context"
"fmt"
"slices"
"github.com/google/jsonschema-go/jsonschema"
"github.com/spachava753/gai"
)
// Define a parameter struct for our weather tool
type WeatherParams struct {
Location string `json:"location"`
Unit string `json:"unit,omitempty"`
}
func (w WeatherParams) Validate() error {
knownLocs := []string{"San Francisco", "New York", "London"}
if !slices.Contains(knownLocs, w.Location) {
return fmt.Errorf("unknown location: %s", w.Location)
}
return nil
}
// ExampleToolCallBackFunc demonstrates how to use ToolCallBackFunc to easily create
// tool callbacks with strongly-typed parameters.
func main() {
// Create a simple weather function that will be wrapped by ToolCallBackFunc
getWeather := func(ctx context.Context, params WeatherParams) (string, error) {
unit := "celsius"
if params.Unit == "fahrenheit" {
unit = "fahrenheit"
}
// In a real implementation, you would call an external weather API here
temp := 22.5
if unit == "fahrenheit" {
temp = temp*9/5 + 32
}
return fmt.Sprintf("Weather in %s: %.1f°%s",
params.Location,
temp,
unit[0:1]), // "c" or "f"
nil
}
// Create a tool
weatherTool := gai.Tool{
Name: "get_weather",
Description: "Get the current weather for a location",
InputSchema: func() *jsonschema.Schema {
schema, err := gai.GenerateSchema[struct {
Location string `json:"location" jsonschema:"required" jsonschema_description:"The city and state, e.g. San Francisco, CA"`
Unit string `json:"unit" jsonschema:"enum=celsius,enum=fahrenheit" jsonschema_description:"The unit of temperature"`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
// Create an instance of the ToolGenerator
// In a real application, you would use a real generator like OpenAiGenerator
toolGen := &gai.ToolGenerator{
G: &ExampleMockGenerator{},
}
// Register the tool with the wrapped callback function
_ = toolGen.Register(weatherTool, gai.ToolCallBackFunc[WeatherParams](getWeather))
// The tool is now registered and ready to use
fmt.Println("Weather tool registered successfully")
}
// ExampleMockGenerator is a simple mock implementation of the ToolCapableGenerator interface
type ExampleMockGenerator struct{}
func (m *ExampleMockGenerator) Generate(ctx context.Context, dialog gai.Dialog, options *gai.GenOpts) (gai.Response, error) {
return gai.Response{}, nil
}
func (m *ExampleMockGenerator) Register(tool gai.Tool) error {
return nil
}
Output: Weather tool registered successfully
func (ToolCallBackFunc[T]) Call ¶ added in v0.4.0
func (f ToolCallBackFunc[T]) Call(ctx context.Context, parametersJSON json.RawMessage, toolCallID string) (Message, error)
Call implements the ToolCallback interface, handling JSON unmarshaling and message creation automatically.
It unmarshals the JSON parameters into the type T, optionally validates them if T implements the Validator interface, calls the wrapped function with the parsed parameters, and constructs a properly formatted ToolResult message from the result.
Error handling:
- If the callback returns a non-nil error of type CallbackExecErr (or wrapping one), this signals a real callback execution failure (e.g., panic, context cancellation), and the underlying error is returned. This will typically terminate execution in ToolGenerator.Generate.
- If the callback returns any other non-nil error, it is treated as an erroneous tool result, and a text ToolResult message containing the error message is returned instead of terminating execution.
type ToolCallInput ¶ added in v0.6.0
type ToolCallInput struct {
Name string `json:"name" yaml:"name"`
Parameters map[string]any `json:"parameters" yaml:"parameters"`
}
ToolCallInput represents a standardized format for tool use in all generators. It contains the name of the tool to use and the parameters to pass to it.
type ToolCallback ¶
type ToolCallback interface {
// Call executes the tool with the given parameters and returns a tool result message.
// The context should be used for cancellation and timeouts.
// The parametersJSON contains the tool's parameters as raw JSON as defined by its InputSchema.
// The toolCallID is the ID of the tool call block that initiated this tool execution.
//
// The returned message must have the ToolResult role and at least one block.
// Each block must have:
// - ID matching the provided toolCallID
// - Non-nil Content
// - A valid BlockType (usually "content")
// - A valid ModalityType (Text, Image, Audio, or Video)
// - A MimeType appropriate for the modality (e.g., "text/plain" for text, "image/jpeg" for images)
//
// The second return value should only be non-nil if the callback itself fails to execute
// (e.g., network errors, panics, context cancellation).
Call(ctx context.Context, parametersJSON json.RawMessage, toolCallID string) (Message, error)
}
ToolCallback represents a function that can be automatically executed by a ToolGenerator when a specific tool is called during generation.
The callback should return a message with role ToolResult containing the result of the tool execution. The message will be validated to ensure it has the correct role, at least one block, and that all blocks have:
- The correct ID matching the tool call ID
- Non-nil content
- A valid block type
- A valid modality type
- A MimeType appropriate for the modality
Example implementation for a stock price tool:
type StockAPI struct{}
func (s *StockAPI) Call(ctx context.Context, parametersJSON json.RawMessage, toolCallID string) (Message, error) {
// Context can be used for timeouts and cancellation
if ctx.Err() != nil {
return Message{}, fmt.Errorf("context cancelled: %w", ctx.Err())
}
// Parse parameters from JSON
var params struct {
Ticker string `json:"ticker"`
}
if err := json.Unmarshal(parametersJSON, &params); err != nil {
return Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCallID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(fmt.Sprintf("Error parsing parameters: %v", err)),
},
},
}, nil
}
price, err := s.fetchPrice(ctx, params.Ticker)
if err != nil {
// Example of expected error - fed back to Generator as a message
return Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCallID, // Must match the tool call ID
BlockType: Content, // Must specify a block type
ModalityType: Text,
MimeType: "text/plain", // Required for all blocks
Content: Str(fmt.Sprintf("Error: failed to get price for %s: %v", params.Ticker, err)),
},
},
}, nil
}
// Return a successful result as a message
return Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCallID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(fmt.Sprintf("$%.2f", price)),
},
},
}, nil
}
// Example of a tool returning an image
type ImageGeneratorTool struct{}
func (t *ImageGeneratorTool) Call(ctx context.Context, parametersJSON json.RawMessage, toolCallID string) (Message, error) {
// Parse parameters
var params struct {
Prompt string `json:"prompt"`
}
if err := json.Unmarshal(parametersJSON, &params); err != nil {
return Message{}, fmt.Errorf("failed to parse parameters: %w", err)
}
imageData, err := t.generateImage(ctx, params.Prompt)
if err != nil {
return Message{}, err
}
// Base64 encode the image data
encodedImage := base64.StdEncoding.EncodeToString(imageData)
return Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCallID,
BlockType: Content,
ModalityType: Image, // Image modality
MimeType: "image/jpeg", // MimeType is required for all modalities
Content: Str(encodedImage),
},
},
}, nil
}
type ToolCapableGenerator ¶
type ToolCapableGenerator interface {
Generator
ToolRegister
}
type ToolGenerator ¶
type ToolGenerator struct {
G ToolCapableGenerator
// contains filtered or unexported fields
}
ToolGenerator represents a Generator that can use tools during generation. It extends the basic Generator interface with the ability to register tools with callbacks for automatic tool execution.
When a tool is called during generation, ToolGenerator automatically executes the registered callback and includes both the tool call and its result in the returned dialog. If the callback returns a non-nil error other than a CallbackExecErr, it is treated as an erroneous tool result and fed back into the underlying Generator; a CallbackExecErr terminates execution.
Tools can be registered with nil callbacks, in which case execution will be terminated immediately when the tool is called. This is useful for tools like "finish_execution" that are meant to interrupt generation and return the dialog.
The behavior of tool usage is controlled via GenOpts.ToolChoice:
- ToolChoiceAuto: Generator decides when to use tools
- ToolChoiceToolsRequired: Generator must use at least one tool
- "<tool-name>": Generator must use the specified tool
Example usage:
// Create a ToolGenerator with an underlying generator
toolGen := &ToolGenerator{
G: myGenerator,
}
// Register a tool with automatic execution via callback
toolGen.Register(stockPriceTool, &StockAPI{})
// Register a tool that terminates execution when called
toolGen.Register(Tool{Name: "finish_execution"}, nil)
func (*ToolGenerator) Generate ¶
func (t *ToolGenerator) Generate(ctx context.Context, dialog Dialog, optsGen GenOptsGenerator) (Dialog, error)
Generate executes the given dialog with the underlying ToolCapableGenerator, handling any tool calls by executing their registered callbacks and feeding the results back into the generator. It returns the complete dialog including all intermediate tool calls, tool results, and the final response.
The optsGen parameter is a function that generates generation options based on the current state of the dialog. This allows customizing options like temperature, tool choice, or modalities based on the conversation context. If optsGen is nil, a default function that returns nil options will be used.
Error Handling: If an error occurs during the looped generation process (e.g., tool callback execution fails, invalid tool calls, context cancellation), the dialog accumulated up to that point is returned along with the error. This partial dialog includes all successfully processed messages, tool calls, and tool results that occurred before the error, allowing callers to inspect the conversation state when the error occurred.
Example usage with dynamic options:
dialog, err := toolGen.Generate(ctx, dialog, func(d Dialog) *GenOpts {
// Increase temperature after each tool use
toolUses := 0
for _, msg := range d {
if msg.Role == ToolResult {
toolUses++
}
}
return &GenOpts{
Temperature: 0.2 * float64(toolUses),
ToolChoice: ToolChoiceAuto,
}
})
Example usage with static options:
// Always use the same options
dialog, err := toolGen.Generate(ctx, dialog, func(d Dialog) *GenOpts {
return &GenOpts{
ToolChoice: ToolChoiceToolsRequired,
}
})
Example usage with no options:
// Use default options (nil)
dialog, err := toolGen.Generate(ctx, dialog, nil)
The returned dialog will contain:
1. The original input dialog
2. Any tool call messages from the generator
3. Tool result messages from callback execution
4. The final response from the generator
For example, if the generator first calls a location tool and then a weather tool, the returned dialog might look like:
[0] User: "What's the weather where I am?"
[1] Assistant: Tool call to get_location
[2] ToolResult: "New York"
[3] Assistant: Tool call to get_weather with location="New York"
[4] ToolResult: "72°F and sunny"
[5] Assistant: "The weather in New York is 72°F and sunny"
Example ¶
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
client := openai.NewClient()
// Instantiate an OpenAI Generator
gen := NewOpenAiGenerator(
&client.Chat.Completions,
openai.ChatModelGPT4oMini,
`You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`,
)
tg := ToolGenerator{
G: &gen,
}
// Register tools
if err := tg.Register(
tickerTool,
&TickerTool{
ticketPrices: map[string]float64{
"AAPL": 435.56,
},
},
); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the price of Apple stock?"),
},
},
},
}
// Generate a response
newDialog, err := tg.Generate(context.Background(), dialog, func(d Dialog) *GenOpts {
return nil
})
if err != nil {
panic(err.Error())
}
fmt.Printf("len of the new dialog: %d\n", len(newDialog))
fmt.Printf("%s\n", newDialog[len(newDialog)-1].Blocks[0].Content)
Output:
len of the new dialog: 4
435.56
Example (Responses) ¶
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
client := openai.NewClient()
// Instantiate a Responses Generator (stateless, no adapter needed)
gen := NewResponsesGenerator(
&client.Responses,
openai.ChatModelGPT5Mini,
`You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like
<example>
435.56
</example>
<example>
3235.55
</example>
`,
)
tg := ToolGenerator{
G: &gen,
}
// Register tools
if err := tg.Register(
tickerTool,
&TickerTool{
ticketPrices: map[string]float64{
"AAPL": 435.56,
},
},
); err != nil {
panic(err.Error())
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{
{
BlockType: Content,
ModalityType: Text,
Content: Str("What is the price of Apple stock?"),
},
},
},
}
// Generate a response
newDialog, err := tg.Generate(context.Background(), dialog, func(d Dialog) *GenOpts {
return nil
})
if err != nil {
panic(err.Error())
}
fmt.Printf("len of the new dialog: %d\n", len(newDialog))
// Find the first Content block (reasoning models may produce Thinking blocks first)
lastMsg := newDialog[len(newDialog)-1]
for _, blk := range lastMsg.Blocks {
if blk.BlockType == Content {
fmt.Printf("%s\n", blk.Content)
break
}
}
Output:
len of the new dialog: 4
435.56
func (*ToolGenerator) Register ¶
func (t *ToolGenerator) Register(tool Tool, callback ToolCallback) error
Register adds a tool to the ToolGenerator's available tools with an optional callback. If a callback is provided, it will be automatically executed when the tool is called during generation. If the callback is nil, no automatic execution will occur. This is useful for tools that are meant to interrupt or terminate execution, such as a "finish_execution" tool that should end the generation process.
Returns an error if:
- Tool name is empty
- Tool name conflicts with an already registered tool
- Tool name matches special values ToolChoiceAuto or ToolChoiceToolsRequired
- The underlying ToolCapableGenerator's Register method returns an error
type ToolRegister ¶
type ToolRegister interface {
// Register adds a tool to the Generator's available tools.
//
// Some Generator implementations may have built-in tools. In such cases, only
// the Tool.Name needs to match a built-in tool's name to enable its use. The rest
// of the Tool fields (Description, InputSchema) will be ignored in favor of the
// built-in tool's definition. The callback behavior remains the same - you can
// optionally provide a callback for automatic execution.
//
// JSON Schema compatibility note:
// Different generators have different levels of support for the anyOf JSON Schema feature:
// - OpenAI and Anthropic: Full support for anyOf properties
// - Gemini: Limited support for anyOf - only supports [Type, null] pattern for nullable fields.
// Will error on multiple non-null types in anyOf or null-only anyOf.
//
// When using the anyOf property, the most portable approach is to restrict its usage to
// nullable fields following the pattern: anyOf: [{type: "string"}, {type: "null"}]
//
// Returns an error if:
// - Tool name is empty
// - Tool name conflicts with an already registered tool
// - Tool name conflicts with a built-in tool that's already registered
// - Tool name matches special values ToolChoiceAuto or ToolChoiceToolsRequired
// - Tool schema is invalid (e.g., Array type without Items field)
// - Tool schema uses unsupported JSON Schema features for the specific generator
Register(tool Tool) error
}
type ToolRegistrationErr ¶
type ToolRegistrationErr struct {
// Tool is the name of the tool that failed to register
Tool string `json:"tool" yaml:"tool"`
// Cause is the underlying error that caused the registration to fail
Cause error `json:"cause,omitempty" yaml:"cause,omitempty"`
}
ToolRegistrationErr is returned when registering a tool fails. This can occur in several scenarios:
- Empty tool name
- Tool name conflicts with an existing or built-in tool
- Tool name matches special values (ToolChoiceAuto, ToolChoiceToolsRequired)
- Invalid tool schema (e.g., Array type without Items field)
The Cause field contains the underlying error that caused the registration to fail.
func (ToolRegistrationErr) Error ¶
func (t ToolRegistrationErr) Error() string
func (ToolRegistrationErr) Unwrap ¶
func (t ToolRegistrationErr) Unwrap() error
Unwrap returns the underlying cause of the tool registration failure
type UnsupportedInputModalityErr ¶
type UnsupportedInputModalityErr string
UnsupportedInputModalityErr is returned when a Generator encounters an input Message with a Block that contains an unsupported Modality. The string value of this error contains the name of the unsupported modality.
For example, if a Generator only supports text input but receives an audio input, it will return this error with details about the unsupported audio modality.
func (UnsupportedInputModalityErr) Error ¶
func (u UnsupportedInputModalityErr) Error() string
type UnsupportedOutputModalityErr ¶
type UnsupportedOutputModalityErr string
UnsupportedOutputModalityErr is returned when a Generator is requested to generate a response in a Modality that it does not support via GenOpts.OutputModalities. The string value of this error contains the name of the unsupported modality.
For example, if a Generator only supports text output but is asked to generate audio content, it will return this error with details about the unsupported audio modality.
func (UnsupportedOutputModalityErr) Error ¶
func (u UnsupportedOutputModalityErr) Error() string
type Validator ¶ added in v0.4.0
type Validator interface {
// Validate checks if the struct's field values are valid.
// It returns nil if validation passes, or an error describing the validation failure.
Validate() error
}
Validator is an interface that can be implemented by tool parameter types to validate their contents after being unmarshaled from JSON.
This interface allows parameter types to perform custom validation that goes beyond what JSON Schema validation can provide, such as:
- Cross-field validations (e.g., field A must be present if field B has a certain value)
- Range or format validations (e.g., dates must be in a specific format or range)
- Business rule validations (e.g., certain combinations of values are invalid)
Example implementation:
type WeatherParams struct {
Location string `json:"location"`
Unit string `json:"unit,omitempty"`
}
func (p *WeatherParams) Validate() error {
if p.Location == "" {
return fmt.Errorf("location is required")
}
if p.Unit != "" && p.Unit != "celsius" && p.Unit != "fahrenheit" {
return fmt.Errorf("unit must be either 'celsius' or 'fahrenheit'")
}
return nil
}
type WrapperFunc ¶ added in v0.27.0
type WrapperFunc func(Generator) Generator
WrapperFunc is a function that wraps a Generator, returning a new Generator. Use with Wrap to compose multiple wrappers into a middleware stack.
Convention: define a WithXxx function that returns a WrapperFunc for your wrapper:
func WithLogging(logger *slog.Logger) gai.WrapperFunc {
return func(g gai.Generator) gai.Generator {
return &LoggingGenerator{
GeneratorWrapper: gai.GeneratorWrapper{Inner: g},
Logger: logger,
}
}
}
func WithPreprocessing ¶ added in v0.27.0
func WithPreprocessing() WrapperFunc
WithPreprocessing returns a WrapperFunc that wraps a ToolCapableGenerator with preprocessing logic. Panics if the inner generator is not a ToolCapableGenerator.
func WithRetry ¶ added in v0.27.0
func WithRetry(baseBo backoff.BackOff, opts ...backoff.RetryOption) WrapperFunc
WithRetry returns a WrapperFunc that wraps a generator with retry logic. See NewRetryGenerator for parameter details.
type ZaiCompletionService ¶ added in v0.25.0
type ZaiCompletionService interface {
New(ctx context.Context, body oai.ChatCompletionNewParams, opts ...option.RequestOption) (*oai.ChatCompletion, error)
NewStreaming(ctx context.Context, body oai.ChatCompletionNewParams, opts ...option.RequestOption) *oaissestream.Stream[oai.ChatCompletionChunk]
}
ZaiCompletionService defines the interface for Z.AI chat completions
type ZaiGenerator ¶ added in v0.25.0
type ZaiGenerator struct {
// contains filtered or unexported fields
}
ZaiGenerator implements the Generator and StreamingGenerator interfaces for Z.AI API. Z.AI provides OpenAI-compatible endpoints with extended thinking/reasoning capabilities.
Key features:
- OpenAI-compatible chat completions API
- Interleaved thinking: the model can reason between tool calls
- Preserved thinking: reasoning context can be retained across turns
- Streaming with Server-Sent Events (SSE)
Supported models include glm-4.7, glm-4.6, glm-4.5, and variants.
Example (DisableThinking) ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
// Create generator with thinking disabled
gen := NewZaiGenerator(
nil, "glm-4.7",
"You are a helpful assistant. Be concise.",
apiKey,
WithZaiThinking(false),
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("What is 2 + 2?")},
},
}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 0 || len(resp.Candidates[0].Blocks) == 0 {
fmt.Println("Error: empty response")
return
}
// Verify no thinking blocks exist
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
fmt.Println("Error: thinking block found when thinking is disabled")
return
}
}
fmt.Println("No thinking blocks")
// Verify we got a content block with the answer
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
if strings.Contains(block.Content.String(), "4") {
fmt.Println("Direct answer received")
return
}
fmt.Printf("Error: expected '4' in answer, got: %s\n", block.Content.String())
return
}
}
fmt.Println("Error: no content block found")
Output:
No thinking blocks
Direct answer received
func NewZaiGenerator ¶ added in v0.25.0
func NewZaiGenerator(client ZaiCompletionService, model, systemInstructions, apiKey string, opts ...ZaiGeneratorOption) *ZaiGenerator
NewZaiGenerator creates a new Z.AI generator using the OpenAI SDK. If client is nil, a new client is created with the Z.AI base URL. If apiKey is empty, it is read from the Z_API_KEY environment variable.
By default, thinking is enabled and clearThinking is true.
func (*ZaiGenerator) Generate ¶ added in v0.25.0
func (g *ZaiGenerator) Generate(ctx context.Context, dialog Dialog, options *GenOpts) (Response, error)
Generate implements Generator
Example ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
gen := NewZaiGenerator(nil, "glm-4.7", "You are a helpful assistant.", apiKey)
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("Hello!")},
},
}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 0 {
fmt.Println("Error: no candidates returned")
return
}
if len(resp.Candidates[0].Blocks) == 0 {
fmt.Println("Error: no blocks in response")
return
}
fmt.Println("Response received")
Output: Response received
Example (InterleavedThinking) ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
// Create generator with preserved thinking (clearThinking=false)
gen := NewZaiGenerator(
nil, "glm-4.7",
"You are a helpful assistant.",
apiKey,
WithZaiClearThinking(false), // Enable preserved thinking
)
// Register a weather tool
weatherTool := Tool{
Name: "get_weather",
Description: "Get the current weather for a city",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
City string `json:"city" jsonschema:"required" jsonschema_description:"The city name"`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(weatherTool); err != nil {
fmt.Println("Error registering tool:", err)
return
}
// First turn: ask about weather
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("What's the weather like in Beijing?")},
},
}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
// Print block types from first turn
fmt.Print("First turn:")
var toolCallBlock Block
for _, block := range resp.Candidates[0].Blocks {
fmt.Printf(" %s", block.BlockType)
if block.BlockType == ToolCall {
toolCallBlock = block
}
}
fmt.Println()
if toolCallBlock.BlockType != ToolCall {
fmt.Println("Error: no tool call found")
return
}
// Append assistant response and provide tool result
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{
ID: toolCallBlock.ID,
BlockType: Content,
ModalityType: Text,
MimeType: "text/plain",
Content: Str(`{"weather": "Sunny", "temperature": "25°C", "humidity": "40%"}`),
},
},
})
// Second turn: model reasons about the tool result
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
// Print block types from second turn
fmt.Print("Second turn:")
for _, block := range resp.Candidates[0].Blocks {
fmt.Printf(" %s", block.BlockType)
}
fmt.Println()
Output:
First turn: thinking content tool_call
Second turn: thinking content
Example (MultiTurn) ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
gen := NewZaiGenerator(nil, "glm-4.7", "You are a helpful math tutor.", apiKey)
// First turn
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("What is 5 + 3?")},
},
}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
found := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content && strings.Contains(block.Content.String(), "8") {
found = true
break
}
}
if !found {
fmt.Println("Error: Turn 1 expected '8' in response")
return
}
fmt.Println("Turn 1: correct")
// Second turn: continue conversation
dialog = append(dialog, resp.Candidates[0], Message{
Role: User,
Blocks: []Block{TextBlock("Now multiply that result by 2")},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
found = false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content && strings.Contains(block.Content.String(), "16") {
found = true
break
}
}
if !found {
fmt.Println("Error: Turn 2 expected '16' in response")
return
}
fmt.Println("Turn 2: correct")
// Third turn
dialog = append(dialog, resp.Candidates[0], Message{
Role: User,
Blocks: []Block{TextBlock("Divide that by 4")},
})
resp, err = gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
found = false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content && strings.Contains(block.Content.String(), "4") {
found = true
break
}
}
if !found {
fmt.Println("Error: Turn 3 expected '4' in response")
return
}
fmt.Println("Turn 3: correct")
Output:
Turn 1: correct
Turn 2: correct
Turn 3: correct
Example (Thinking) ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
gen := NewZaiGenerator(
nil, "glm-4.7",
"You are a helpful assistant that explains your reasoning step by step.",
apiKey,
)
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("What is the square root of 144?")},
},
}
resp, err := gen.Generate(context.Background(), dialog, nil)
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 0 || len(resp.Candidates[0].Blocks) == 0 {
fmt.Println("Error: empty response")
return
}
// Check for thinking block
hasThinking := false
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Thinking {
hasThinking = true
break
}
}
if !hasThinking {
fmt.Println("Error: no thinking block found")
return
}
fmt.Println("Thinking block found")
// Check for correct answer in content
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
if strings.Contains(block.Content.String(), "12") {
fmt.Println("Correct answer found")
return
}
fmt.Printf("Error: expected '12' in content, got: %s\n", block.Content.String())
return
}
}
fmt.Println("Error: no content block found")
Output:
Thinking block found
Correct answer found
func (*ZaiGenerator) Register ¶ added in v0.25.0
func (g *ZaiGenerator) Register(tool Tool) error
Register implements ToolRegister
Example ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
gen := NewZaiGenerator(nil, "glm-4.7", `You are a helpful assistant that returns the price of a stock and nothing else.
Only output the price, like:
<example>
435.56
</example>`, apiKey)
// Register a stock price tool
tickerTool := Tool{
Name: "get_stock_price",
Description: "Get the current stock price for a given ticker symbol.",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Ticker string `json:"ticker" jsonschema:"required" jsonschema_description:"The stock ticker symbol, e.g. AAPL for Apple Inc."`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(tickerTool); err != nil {
fmt.Println("Error:", err)
return
}
dialog := Dialog{
{Role: User, Blocks: []Block{TextBlock("What is the price of Apple stock?")}},
}
// Force the tool call
resp, err := gen.Generate(context.Background(), dialog, &GenOpts{ToolChoice: "get_stock_price"})
if err != nil {
fmt.Println("Error:", err)
return
}
if len(resp.Candidates) == 0 || len(resp.Candidates[0].Blocks) == 0 {
fmt.Println("Error: empty response")
return
}
// Find the tool call
var toolCall Block
for _, b := range resp.Candidates[0].Blocks {
if b.BlockType == ToolCall {
toolCall = b
break
}
}
if toolCall.BlockType != ToolCall {
fmt.Println("Error: no tool call found")
return
}
var tc ToolCallInput
if err := json.Unmarshal([]byte(toolCall.Content.String()), &tc); err != nil {
fmt.Println("Error parsing tool call:", err)
return
}
if tc.Name != "get_stock_price" {
fmt.Printf("Error: expected tool 'get_stock_price', got '%s'\n", tc.Name)
return
}
fmt.Println("Tool call received")
// Append tool result and continue
dialog = append(dialog, resp.Candidates[0], Message{
Role: ToolResult,
Blocks: []Block{
{ID: toolCall.ID, BlockType: Content, ModalityType: Text, MimeType: "text/plain", Content: Str("189.45")},
},
})
// Get final answer without calling tools
resp, err = gen.Generate(context.Background(), dialog, &GenOpts{ToolChoice: "none"})
if err != nil {
fmt.Println("Error:", err)
return
}
// Check if final response contains the price from tool result
for _, block := range resp.Candidates[0].Blocks {
if block.BlockType == Content {
if strings.Contains(block.Content.String(), "189.45") || strings.Contains(block.Content.String(), "189") {
fmt.Println("Final answer contains tool result")
return
}
fmt.Printf("Error: expected '189.45' in answer, got: %s\n", block.Content.String())
return
}
}
fmt.Println("Error: no content block in final response")
Output:
Tool call received
Final answer contains tool result
func (*ZaiGenerator) Stream ¶ added in v0.25.0
func (g *ZaiGenerator) Stream(ctx context.Context, dialog Dialog, options *GenOpts) iter.Seq2[StreamChunk, error]
Stream implements StreamingGenerator
Example ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
gen := NewZaiGenerator(nil, "glm-4.7", "You are a helpful assistant.", apiKey)
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("Count from 1 to 5")},
},
}
var contentChunks int
var thinkingChunks int
for chunk, err := range gen.Stream(context.Background(), dialog, nil) {
if err != nil {
fmt.Println("Error:", err)
return
}
switch chunk.Block.BlockType {
case Content:
contentChunks++
case Thinking:
thinkingChunks++
case MetadataBlockType:
// ignore usage metadata
}
}
if contentChunks == 0 {
fmt.Println("Error: no content chunks received")
return
}
fmt.Println("Content chunks received")
if thinkingChunks == 0 {
fmt.Println("Error: no thinking chunks received")
return
}
fmt.Println("Thinking chunks received")
Output:
Content chunks received
Thinking chunks received
Example (ToolCalling) ¶
apiKey := os.Getenv("Z_API_KEY")
if apiKey == "" {
fmt.Println("[Skipped: set Z_API_KEY env]")
return
}
gen := NewZaiGenerator(nil, "glm-4.7", "You are a helpful assistant.", apiKey)
// Register a calculator tool
calcTool := Tool{
Name: "calculate",
Description: "Perform a mathematical calculation",
InputSchema: func() *jsonschema.Schema {
schema, err := GenerateSchema[struct {
Expression string `json:"expression" jsonschema:"required" jsonschema_description:"The mathematical expression to evaluate"`
}]()
if err != nil {
panic(err)
}
return schema
}(),
}
if err := gen.Register(calcTool); err != nil {
fmt.Println("Error registering tool:", err)
return
}
dialog := Dialog{
{
Role: User,
Blocks: []Block{TextBlock("What is 123 * 456? Use the calculator tool.")},
},
}
var hasToolCall bool
for chunk, err := range gen.Stream(context.Background(), dialog, &GenOpts{ToolChoice: ToolChoiceToolsRequired}) {
if err != nil {
fmt.Println("Error:", err)
return
}
if chunk.Block.BlockType == ToolCall {
hasToolCall = true
}
}
if !hasToolCall {
fmt.Println("Error: no tool call received in stream")
return
}
fmt.Println("Tool call streamed")
Output: Tool call streamed
type ZaiGeneratorOption ¶ added in v0.25.0
type ZaiGeneratorOption func(*ZaiGenerator)
ZaiGeneratorOption is a functional option for configuring the ZaiGenerator.
func WithZaiClearThinking ¶ added in v0.25.0
func WithZaiClearThinking(clear bool) ZaiGeneratorOption
WithZaiClearThinking controls whether to clear reasoning_content from previous turns. Set to false to enable preserved thinking (retain reasoning across turns).
func WithZaiThinking ¶ added in v0.25.0
func WithZaiThinking(enabled bool) ZaiGeneratorOption
WithZaiThinking enables or disables thinking mode. When enabled, the model will perform chain-of-thought reasoning.