ChatChain

Talk to any LLM from your terminal.


Features

🌐 Multi-Provider

OpenAI, Anthropic, Gemini, Vertex AI, OpenAI Responses API, and OpenClaw, all with tool-calling support.

Streaming Responses

Tokens appear in real time as the model generates them, with a loading spinner while you wait.

📎 File Attachments

Send images, PDFs, and code files alongside messages, with Tab completion for file paths.

💬 Interactive Chat

Full conversation history, interactive model selection, and color-coded terminal output.

🔧 Script-Friendly

Non-interactive mode with the -m flag; use -m - to read from stdin, making it pipe-friendly.

🛠 Config File

Persistent API keys, default models, and custom provider aliases via ~/.chatchain.yaml.

🔌 MCP Tool Support

Connect external MCP servers (filesystem, GitHub, databases) and let the model use them as tools during chat.

🎨 Markdown Highlighting

Headings, bold, italic, code, and tables are highlighted with ANSI colors in streaming output.

🤖 Claude Code Plugin

Use ChatChain as a Claude Code plugin to call other LLMs directly within your coding workflow.

Providers

OpenAI, Anthropic, Gemini, Vertex AI, OpenAI Responses, OpenClaw

Each provider uses its official SDK. Pass your API key via the -k flag or an environment variable (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, OPENCLAW_GATEWAY_TOKEN).
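For example, exporting the key once per shell session lets you drop -k from every call (a sketch; the model name is illustrative):

```shell
# Set the key once instead of passing -k on each invocation
export OPENAI_API_KEY=sk-xxx

# Later calls pick the key up from the environment
chatchain openai -M gpt-4o -m "Explain quicksort"
```

A CLI flag still wins if both are set, per the precedence rules in the Config File section.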

File Attachments

Attach files in interactive mode with the /file command. Tab-completion helps you navigate paths. Files are sent with your next message, then cleared automatically.

You> /file ~/photos/sunset.png
Attached: sunset.png (image/png, 245760 bytes)
You> /file report.pdf
Attached: report.pdf (application/pdf, 102400 bytes)
You> Summarize the report and describe the photo

Supported file types:

.jpg .png .gif .webp .pdf .txt .md .go .py .js .ts .json .yaml .html .css .sql .csv

Install

# Homebrew (macOS)
$ brew tap joyqi/tap
$ brew install chatchain
# Or via Go
$ go install github.com/joyqi/chatchain@latest

Claude Code Plugin

ChatChain works as a Claude Code plugin. Install it to call other LLMs directly within Claude Code.

# Add the marketplace and install
$ /plugin marketplace add joyqi/chatchain
$ /plugin install chatchain@chatchain-marketplace
# Use the slash command
$ /chatchain:ask openai gpt-4o "What is 1+1?"
# Or let Claude auto-detect when to use it
> Use chatchain to ask Gemini to explain quicksort

Usage

# Interactive chat with model selection
$ chatchain openai -k sk-xxx
# Specify a model directly
$ chatchain anthropic -M claude-sonnet-4-20250514
# Non-interactive one-shot
$ chatchain openai -M gpt-4o -m "Explain quicksort"
# System prompt
$ chatchain openai -M gpt-4o -s 'You are a translator' -m "Hello"
# Custom API endpoint
$ chatchain openai -u https://your-proxy.com/v1 -k sk-xxx
# With MCP tools
$ chatchain openai -M gpt-4o --mcp "npx -y @modelcontextprotocol/server-filesystem /tmp"
# List all providers and config aliases
$ chatchain -l
# List available models for a provider
$ chatchain -l openai

Flags

Flag           Short  Description
--key          -k     API key (or set via env var)
--url          -u     Custom base URL
--model        -M     Model name (skip interactive selection)
--temperature  -t     Sampling temperature (0.0-2.0)
--message      -m     Non-interactive: send a single message (use - to read from stdin)
--system       -s     System prompt (omit value for interactive input)
--list         -l     List configured providers, or models for a given provider
--mcp                 MCP server (command or URL, repeatable)
--config       -c     Path to config file (default: ~/.chatchain.yaml)
--verbose      -v     Verbose: print HTTP request/response bodies
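Several of these flags combine naturally in scripts, for instance pinning a low temperature alongside a system prompt while the message arrives on stdin (a sketch; model and prompt are illustrative):

```shell
# Low temperature plus a system prompt, message piped via -m -
echo "Translate to German: Good morning" \
  | chatchain openai -M gpt-4o -t 0.2 -s 'You are a translator' -m -
```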

Config File

Save API keys, default models, and custom provider aliases in ~/.chatchain.yaml. Priority: CLI flag > env var > config file.

# ~/.chatchain.yaml
providers:
  openai:
    key: sk-official
    model: gpt-4o
  deepseek:            # custom alias
    type: openai       # underlying provider
    key: sk-deepseek-xxx
    url: https://api.deepseek.com/v1
    model: deepseek-chat
    system: You are a helpful coding assistant
mcp_servers:           # MCP tool servers
  filesystem:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]

With this config, chatchain deepseek -m "hello" uses the OpenAI provider with DeepSeek's key, URL, and model. MCP servers are connected automatically on startup. Config files are loaded from ~/ (global) and ./ (project-local), or via -c <path>.
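A project-local file can override only the fields that differ from the global one. A sketch, assuming the project-local file keeps the same .chatchain.yaml name and that its values take precedence over ~/.chatchain.yaml:

```yaml
# ./.chatchain.yaml (project-local sketch; model choice is illustrative)
providers:
  openai:
    model: gpt-4o-mini
    system: You answer with this repository's Go conventions in mind
```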

Chat Commands

Command         Description
/file <path>    Attach a file (with Tab completion for paths)
/files          List currently attached files
/clear          Remove all attached files
/save [path]    Save the conversation to Markdown (default: history.md)
/import [path]  Import a conversation from a saved Markdown file (default: history.md)
/mcp            Show connected MCP servers and their tools