AI Infrastructure

Noundry.AIG

Unified AI Gateway for .NET developers - Access OpenAI, Anthropic, Google Gemini, and more through a single, consistent API

dotnet add package Noundry.AIG.Client

What is Noundry.AIG?

Noundry AI Gateway (AIG) is a feature-complete AI Gateway for .NET/C# developers, similar to Vercel's AI Gateway but built specifically for the .NET ecosystem. It provides a unified interface to access multiple AI providers through a single, consistent API.

Whether you're building chatbots, content generation tools, or AI-powered applications, AIG removes the complexity of integrating with different AI providers while providing advanced features like automatic failover, streaming support, and prompt chaining.

Multi-Provider Support

Access OpenAI, Anthropic (Claude), Google (Gemini), and more through a unified interface.

Automatic Failover

Try multiple providers in order with automatic fallback if one fails.

Streaming Support

Real-time response streaming from all supported providers.

Fluent Prompt Builder

Intuitive API for building complex prompts with the builder pattern.

Prompt Chaining

Chain multiple prompts where output feeds into the next input seamlessly.

Thread-Safe

Built with HttpClientFactory for production-ready, concurrent operations.

Supported Providers

OpenAI

GPT-4, GPT-3.5

Anthropic

Claude 3.5, 3

Google

Gemini Pro

More

Extensible

Quick Start Guide

Get started with Noundry.AIG in minutes. Follow these steps to integrate AI capabilities into your .NET application.

1. Install the Package

dotnet add package Noundry.AIG.Client

2. Configure Providers

using Noundry.AIG.Client;
using Noundry.AIG.Client.Configuration;
using Noundry.AIG.Core.Models;
using Noundry.AIG.Providers;

var httpClientFactory = new SimpleHttpClientFactory();
var providerFactory = new ProviderFactory(httpClientFactory);

var options = new AigClientOptions
{
    UseLocalProviders = true,
    EnableRetries = true,
    MaxRetries = 3,
    ProviderConfigs = new Dictionary<string, ProviderConfig>
    {
        ["openai"] = new ProviderConfig
        {
            ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        },
        ["anthropic"] = new ProviderConfig
        {
            ApiKey = Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")
        }
    }
};

var aigClient = new AigClient(providerFactory, options);

3. Send Your First Request

using Noundry.AIG.Client.Builders;
using Noundry.AIG.Core.Extensions;

var prompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithTemperature(0.7f)
    .AddSystemMessage("You are a helpful AI assistant.")
    .AddUserMessage("Explain quantum computing in simple terms.")
    .Build();

var response = await aigClient.SendAsync(prompt);
Console.WriteLine(response.GetTextContent());

💡 Pro Tip

Store your API keys in environment variables or use Azure Key Vault for production applications. Never hardcode API keys in your source code.
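
Following that tip, the sketch below reads a key from an environment variable and fails fast when it is missing, instead of silently passing a null key into ProviderConfig. The RequireApiKey helper is illustrative, not part of Noundry.AIG; the placeholder value is only set so the demo lookup succeeds.

```csharp
using System;

// Illustrative helper: resolve an API key from the environment and fail
// fast when it is missing, rather than configuring a provider with null.
static string RequireApiKey(string variableName)
{
    var key = Environment.GetEnvironmentVariable(variableName);
    if (string.IsNullOrWhiteSpace(key))
        throw new InvalidOperationException(
            $"Environment variable '{variableName}' is not set.");
    return key;
}

// Demo only: set a placeholder so the lookup succeeds.
Environment.SetEnvironmentVariable("OPENAI_API_KEY", "sk-demo");
Console.WriteLine(RequireApiKey("OPENAI_API_KEY"));
```

The returned value can then be assigned to ProviderConfig.ApiKey in the configuration step above.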

PromptBuilder

PromptBuilder provides a fluent, intuitive API for constructing complex AI prompts with full type safety and IntelliSense support.

Basic Usage

var prompt = new PromptBuilder()
    .WithModel("openai/gpt-4")
    .AddUserMessage("What are the three primary colors?")
    .Build();

var response = await aigClient.SendAsync(prompt);
Console.WriteLine(response.GetTextContent());

Advanced Configuration

var prompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithTemperature(0.8f)
    .WithMaxTokens(2000)
    .WithTopP(0.9f)
    .WithStopSequences("END", "STOP")
    .WithRepetitionPenalty(1.15f)
    .AddSystemMessage("You are a creative writing assistant.")
    .AddUserMessage("Write a sci-fi story opening.")
    .Build();

var response = await aigClient.SendAsync(prompt);

Multi-Turn Conversations

var conversationPrompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithTemperature(0.8f)
    .AddSystemMessage("You are a creative storyteller.")
    .AddUserMessage("Start a story about a robot learning to paint.")
    .AddAssistantMessage("In a world of circuits and steel, there lived a robot named Canvas-7...")
    .AddUserMessage("What happens next?")
    .Build();

var response = await aigClient.SendAsync(conversationPrompt);
Console.WriteLine(response.GetTextContent());

Available Methods

  • WithModel(string model) - Set the model (format: "provider/model")
  • WithModels(params string[] models) - Set multiple models for fallback
  • WithTemperature(float temperature) - Control randomness (0.0 to 2.0)
  • WithMaxTokens(int maxTokens) - Maximum tokens to generate
  • WithStreaming(bool stream) - Enable streaming responses
  • AddSystemMessage(string content) - Add system message
  • AddUserMessage(string content) - Add user message
  • AddAssistantMessage(string content) - Add assistant message
  • Build() - Build the AI request

ChainPromptBuilder

ChainPromptBuilder enables you to create complex AI workflows where the output of one prompt automatically feeds into the next, creating powerful multi-step AI pipelines.

Basic Chain Example

var chain = new ChainPromptBuilder()
    .WithDefaultModel("anthropic/claude-sonnet-4")

    // Step 1: Generate an idea
    .AddStep("Generate Idea", _ =>
        new PromptBuilder()
            .AddUserMessage("Give me a random creative writing topic."))

    // Step 2: Write about it
    .AddStep("Write Content", previousOutput =>
        new PromptBuilder()
            .AddUserMessage($"Write a short paragraph about: {previousOutput}"))

    // Step 3: Translate
    .AddStep("Translate", previousOutput =>
        new PromptBuilder()
            .WithModel("openai/gpt-4")
            .AddUserMessage($"Translate to Spanish:\n{previousOutput}"));

var result = await chain.ExecuteAsync(aigClient);

if (result.Success)
{
    Console.WriteLine($"Final output: {result.FinalOutput}");

    // Inspect each step
    foreach (var step in result.Steps)
    {
        Console.WriteLine($"{step.StepName}: {step.Output}");
    }
}

Advanced Chain with Different Models

var chainBuilder = new ChainPromptBuilder()
    .WithDefaultModel("anthropic/claude-sonnet-4")

    // Generate a random topic
    .AddStep("Generate Topic", previousOutput =>
        new PromptBuilder()
            .WithModel("anthropic/claude-sonnet-4")
            .AddUserMessage("Give me a random interesting topic in one word."))

    // Write a haiku about the topic
    .AddStep("Write Haiku", previousOutput =>
        new PromptBuilder()
            .WithModel("anthropic/claude-sonnet-4")
            .AddUserMessage($"Write a haiku about: {previousOutput}"))

    // Translate to Spanish
    .AddStep("Translate", previousOutput =>
        new PromptBuilder()
            .WithModel("openai/gpt-4")
            .AddUserMessage($"Translate this haiku to Spanish:\n{previousOutput}"));

var chainResult = await chainBuilder.ExecuteAsync(aigClient);

Console.WriteLine($"Chain completed: {chainResult.Success}");
Console.WriteLine($"Steps executed: {chainResult.Steps.Count}");
Console.WriteLine($"Final Output: {chainResult.FinalOutput}");

Chain Result Structure

The ChainResult object provides comprehensive information about the execution:

  • Steps - List of all executed steps with their results
  • FinalOutput - The output from the last step
  • Success - Whether the entire chain succeeded
  • FailedAtStep - Index of the step that failed (if any)
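
If a chain stops partway, the properties above make it possible to recover partial work. A minimal sketch, reusing the chain and aigClient from the example above:

```csharp
// Handle a partial chain failure using ChainResult's properties
// (Success, FailedAtStep, Steps, FinalOutput).
var result = await chain.ExecuteAsync(aigClient);

if (!result.Success)
{
    Console.WriteLine($"Chain failed at step index {result.FailedAtStep}");

    // Steps that completed before the failure still carry their outputs,
    // so partial work can be logged or retried.
    foreach (var step in result.Steps)
    {
        Console.WriteLine($"{step.StepName}: {step.Output}");
    }
}
else
{
    Console.WriteLine(result.FinalOutput);
}
```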

✨ Use Cases

  • Content generation pipelines (research → outline → write → edit)
  • Translation workflows (detect language → translate → verify)
  • Data processing (extract → transform → summarize)
  • Multi-step analysis (analyze → categorize → report)

Streaming Responses

Stream responses in real-time for better user experience and reduced perceived latency. All providers support streaming through a unified API.

Basic Streaming

var streamingPrompt = new PromptBuilder()
    .WithModel("openai/gpt-4")
    .WithStreaming(true)
    .AddUserMessage("Write a short story about a time traveler.")
    .Build();

Console.Write("Response: ");

await foreach (var chunk in aigClient.SendStreamAsync(streamingPrompt))
{
    if (chunk.IsSuccess())
    {
        var text = chunk.GetTextContent();
        if (!string.IsNullOrEmpty(text))
        {
            Console.Write(text);
        }
    }
}

Console.WriteLine();

Streaming with Error Handling

using System.Text;

var prompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithStreaming(true)
    .AddUserMessage("Explain AI in simple terms.")
    .Build();

var fullResponse = new StringBuilder();

try
{
    await foreach (var chunk in aigClient.SendStreamAsync(prompt))
    {
        if (chunk.IsSuccess())
        {
            var text = chunk.GetTextContent();
            fullResponse.Append(text);
            Console.Write(text);
        }
        else
        {
            Console.WriteLine($"\nError: {chunk.Error?.Message}");
            break;
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine($"Streaming error: {ex.Message}");
}

Console.WriteLine($"\n\nFull response: {fullResponse}");

⚡ Performance Tip

Streaming provides a better user experience by showing content as it's generated. It's especially useful for longer responses where waiting for the complete response would create a poor experience.

Multi-Provider & Fallback

Try multiple providers automatically by listing models in order of preference (WithModels in the client, or the order parameter in the Web API). The gateway uses the first successful response, providing automatic failover for high availability.

Single Request, Multiple Models

var multiProviderPrompt = new PromptBuilder()
    .WithModels("openai/gpt-4", "anthropic/claude-sonnet-4", "google/gemini-pro")
    .AddUserMessage("What are the three primary colors?")
    .Build();

var multiResponse = await aigClient.SendMultiAsync(multiProviderPrompt);

Console.WriteLine($"Received {multiResponse.Responses.Count} responses");
Console.WriteLine($"Time taken: {multiResponse.TotalDurationMs}ms");
Console.WriteLine($"Has success: {multiResponse.HasSuccess}");

if (multiResponse.FirstSuccess != null)
{
    Console.WriteLine($"\nFirst successful response from {multiResponse.FirstSuccess.Provider}:");
    Console.WriteLine(multiResponse.FirstSuccess.GetTextContent());
}

// Show all responses
Console.WriteLine("\nAll responses:");
foreach (var resp in multiResponse.Responses)
{
    var status = resp.IsSuccess() ? "Success" : $"Failed - {resp.Error?.Message}";
    Console.WriteLine($"- {resp.Model}: {status}");
}

Using PromptBuilder for Fallback

// Try providers in order of preference
var prompt = new PromptBuilder()
    .WithModels(
        "openai/gpt-4",           // Try OpenAI first
        "anthropic/claude-sonnet-4",  // Fall back to Anthropic
        "google/gemini-pro"        // Finally try Google
    )
    .WithTemperature(0.7f)
    .AddSystemMessage("You are a helpful assistant.")
    .AddUserMessage("Generate a creative story idea.")
    .Build();

var response = await aigClient.SendMultiAsync(prompt);

// Get the first successful response
if (response.HasSuccess && response.FirstSuccess != null)
{
    Console.WriteLine($"Got response from: {response.FirstSuccess.Provider}");
    Console.WriteLine(response.FirstSuccess.GetTextContent());
}
else
{
    Console.WriteLine("All providers failed!");
}

Analyzing All Responses

var prompt = new PromptBuilder()
    .WithModels("openai/gpt-4", "anthropic/claude-sonnet-4")
    .AddUserMessage("What is the meaning of life?")
    .Build();

var multiResponse = await aigClient.SendMultiAsync(prompt);

// Compare responses from different providers
foreach (var response in multiResponse.Responses)
{
    Console.WriteLine($"\n--- {response.Provider}/{response.Model} ---");

    if (response.IsSuccess())
    {
        Console.WriteLine($"Response: {response.GetTextContent()}");
        Console.WriteLine($"Tokens: {response.GetTotalTokens()}");
    }
    else
    {
        Console.WriteLine($"Error: {response.Error?.Message}");
    }
}

🎯 When to Use Multi-Provider

  • High availability applications requiring automatic failover
  • A/B testing different models for the same prompt
  • Cost optimization by trying cheaper models first
  • Comparing outputs from different AI providers

Web API

Deploy the AI Gateway as a Web API to provide a centralized AI service for your organization or applications.

API Endpoint

POST /v1/chat/completions

Compatible with OpenAI API format

cURL Example - Single Model

curl -X POST "https://aigw.noundry.ai/v1/chat/completions" \
  -H "Authorization: Bearer demo-key-12345" \
  -H "X-API-KEY: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [
      {
        "role": "user",
        "content": "Why is the sky blue?"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1000,
    "stream": false
  }'

cURL Example - Multi-Provider Fallback

curl -X POST "https://aigw.noundry.ai/v1/chat/completions" \
  -H "Authorization: Bearer demo-key-12345" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "order": [
      "openai/gpt-4",
      "anthropic/claude-sonnet-4",
      "google/gemini-pro"
    ],
    "messages": [
      {
        "role": "user",
        "content": "What are the three primary colors?"
      }
    ]
  }'
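
HttpClient Example (.NET)

Since the endpoint is OpenAI-compatible, plain HttpClient works too. A hedged sketch mirroring the first cURL example; the URL and demo key come from the examples above, so swap in your own gateway host and key. The send call is left as a comment so the snippet only builds and prints the payload.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Build the same request body as the cURL example above.
var payload = new
{
    model = "openai/gpt-4",
    messages = new[]
    {
        new { role = "user", content = "Why is the sky blue?" }
    },
    temperature = 0.7,
    max_tokens = 1000,
    stream = false
};

var json = JsonSerializer.Serialize(payload);

var request = new HttpRequestMessage(
    HttpMethod.Post, "https://aigw.noundry.ai/v1/chat/completions")
{
    Content = new StringContent(json, Encoding.UTF8, "application/json")
};
request.Headers.Add("Authorization", "Bearer demo-key-12345");

// To actually send: var response = await new HttpClient().SendAsync(request);
Console.WriteLine(json);
```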

Configuration (appsettings.json)

{
  "ConnectionStrings": {
    "Analytics": "Data Source=aigw_analytics.db"
  },
  "ApiKeys": [
    "demo-key-12345",
    "your-api-key-here"
  ],
  "Providers": {
    "openai": {
      "ApiKey": "YOUR_OPENAI_API_KEY",
      "TimeoutSeconds": "120"
    },
    "anthropic": {
      "ApiKey": "YOUR_ANTHROPIC_API_KEY",
      "ApiVersionHeaderValue": "2023-06-01",
      "TimeoutSeconds": "120"
    },
    "google": {
      "ApiKey": "YOUR_GOOGLE_API_KEY",
      "TimeoutSeconds": "120"
    }
  }
}

Analytics Endpoints

Get Recent Logs

GET /v1/analytics/logs?count=100

Get Usage Statistics

GET /v1/analytics/usage?daysSince=7
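
The analytics endpoints can be queried the same way as the chat endpoint. A small sketch building the usage request from .NET; the base address and bearer key mirror the chat-completions examples and should be adjusted for your deployment. The send call is commented out so the snippet only constructs the request.

```csharp
using System;
using System.Net.Http;

// Build GET requests for the analytics endpoints listed above.
var baseUrl = "https://aigw.noundry.ai";
var usageUrl = $"{baseUrl}/v1/analytics/usage?daysSince=7";

var request = new HttpRequestMessage(HttpMethod.Get, usageUrl);
request.Headers.Add("Authorization", "Bearer demo-key-12345");

// To actually fetch: var response = await new HttpClient().SendAsync(request);
Console.WriteLine(request.RequestUri);
```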