One OpenAI-compatible API. Six providers. The router picks the best model for every request.
Run models locally with zero API costs. GPU-accelerated inference on your own hardware. Full privacy — data never leaves your network.
Full support for the GPT family. Automatic failover and rate limit management. Function calling and vision support.
Claude models with long-context support up to 1M tokens. Excellent for reasoning, analysis, and code generation.
Gemini models with native multimodal support. Strong on multilingual tasks and long documents.
Grok models with real-time knowledge access and web search capabilities built in.
Ultra-fast inference on custom LPU hardware. Sub-100ms latency for time-sensitive applications.
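The function calling mentioned above uses the standard OpenAI tools schema, so a request through the router looks the same as one sent to OpenAI directly. A minimal sketch of the request body — the `get_weather` tool is a made-up illustration, not part of Dream-Weaver's API:

```python
import json

# Standard OpenAI-style function-calling request body.
# "get_weather" is a hypothetical tool used only for illustration.
payload = {
    "model": "auto",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)  # what the SDK serializes and sends
```

Because the schema is unchanged, existing function-calling code should work regardless of which provider the router selects.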
Dream-Weaver is OpenAI-compatible. Change your base_url and you're done.
```python
from openai import OpenAI

# Before: direct to OpenAI
# client = OpenAI()

# After: through Dream-Weaver (memory, personality, reasoning — free)
client = OpenAI(
    base_url="https://dev.dream-weaver.ai/v1",
    api_key="dw_live_your_key_here",
)

# Same API. Smarter responses.
response = client.chat.completions.create(
    model="auto",  # Router picks the best model
    messages=[{"role": "user", "content": "Hello!"}],
)
```
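No SDK required, either. Assuming the standard OpenAI `/chat/completions` path under the base URL above, the same call as a plain HTTPS request looks like this (shown built but not sent):

```python
import json
import urllib.request

# Same request the SDK makes, assembled by hand. The path
# /chat/completions is the standard OpenAI-compatible route.
req = urllib.request.Request(
    "https://dev.dream-weaver.ai/v1/chat/completions",
    data=json.dumps({
        "model": "auto",
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={
        "Authorization": "Bearer dw_live_your_key_here",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it.
```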