Introduction
Welcome to Faster Chat
A blazingly fast, privacy-first chat interface for AI that works with any LLM provider, cloud or completely offline.
Faster Chat is a self-hosted web application that gives you complete control over your AI conversations. Connect to OpenAI, Anthropic, Groq, Mistral, or run completely offline with Ollama, LM Studio, or llama.cpp.
Why Faster Chat?
Privacy-First, Always
- 🗄️ Conversations stay on your machine - Local-first IndexedDB storage (see the sketch after this list)
- 🔐 Encrypted API key storage - Server-side encryption for provider credentials
- 🚫 No tracking, no analytics - Your privacy is paramount
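The local-first storage above lives in the browser via Dexie.js, the IndexedDB wrapper listed under Tech Stack. Here is a minimal sketch with a hypothetical `conversations` schema; Faster Chat's real tables may differ:

```ts
// Minimal Dexie.js sketch: conversations live in the browser's IndexedDB,
// so chat history never has to leave the machine. The schema is hypothetical.
import Dexie, { type Table } from "dexie";

interface Conversation {
  id?: number; // auto-incremented primary key
  title: string;
  createdAt: number;
}

class ChatDB extends Dexie {
  conversations!: Table<Conversation, number>;

  constructor() {
    super("faster-chat"); // database name is an assumption
    this.version(1).stores({
      // "++id" auto-increments the key; "createdAt" is indexed for sorting.
      conversations: "++id, createdAt",
    });
  }
}

export const db = new ChatDB();

// Reads and writes go straight to IndexedDB; no server round-trip involved.
await db.conversations.add({ title: "First chat", createdAt: Date.now() });
const recent = await db.conversations.orderBy("createdAt").reverse().toArray();
```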
Blazingly Fast
- ⚡ 3KB Preact runtime - Zero SSR overhead, instant responses
- 💬 Real-time streaming - Powered by Vercel AI SDK
- 📱 Responsive design - Works on desktop, tablet, and mobile
Provider-Agnostic
- 🤖 Multi-provider support - OpenAI, Anthropic, Ollama, Groq, Mistral, custom APIs
- 🔌 Auto-discover models - Integration with models.dev
- 🌐 Works completely offline - Run local models with Ollama or LM Studio (sketched below)
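One reason a single code path can cover both cloud and offline use: local servers like Ollama and LM Studio expose OpenAI-compatible HTTP APIs. A hedged sketch (not necessarily Faster Chat's actual wiring) using the Vercel AI SDK's OpenAI client:

```ts
// Sketch: swap providers by swapping the base URL. The endpoint shown is
// Ollama's default OpenAI-compatible address; the model ids are examples.
import { createOpenAI } from "@ai-sdk/openai";

// Cloud: the official OpenAI API.
const cloud = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Offline: Ollama's OpenAI-compatible endpoint on localhost.
const local = createOpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // local servers typically ignore the key, but one is required
});

// The rest of the app can stay provider-agnostic.
const model = process.env.OFFLINE ? local("llama3.1") : cloud("gpt-4o-mini");
```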
Key Features
Core Chat Experience
- Real-time streaming responses with the Vercel AI SDK (see the client-side sketch after this list)
- Markdown rendering with syntax highlighting and LaTeX support
- File attachments with preview and download
- Conversation history and search
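Streaming means tokens are rendered as they arrive rather than after the whole completion finishes. A minimal client-side sketch, assuming a hypothetical `/api/chat` route that returns a plain text stream:

```ts
// Sketch: read a streamed response chunk by chunk and hand each fragment
// to the UI immediately. The /api/chat route is a hypothetical example.
async function streamReply(
  messages: unknown[],
  onToken: (text: string) => void,
): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  if (!res.body) throw new Error("response is not streamable");

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk is a partial token sequence; render it as it lands.
    onToken(decoder.decode(value, { stream: true }));
  }
}
```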
Administration
- Multi-user authentication with role-based access (admin/member/readonly; see the middleware sketch after this list)
- Provider Hub for managing AI connections and API keys
- Admin panel for user management (CRUD, password reset, role changes)
- Auto-discovery of available models from configured providers
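Role checks like these are typically enforced as route middleware. A minimal sketch in Hono (the backend framework listed under Tech Stack), with a hypothetical session helper standing in for real auth:

```ts
// Sketch: gate routes by role. getSessionUser is a hypothetical stand-in
// for cookie/JWT session resolution; Faster Chat's real wiring may differ.
import { Hono } from "hono";
import { createMiddleware } from "hono/factory";

type Role = "admin" | "member" | "readonly";

// Hypothetical session lookup; replace with real cookie/JWT handling.
async function getSessionUser(req: Request): Promise<{ role: Role } | null> {
  const role = req.headers.get("x-role") as Role | null;
  return role ? { role } : null;
}

const requireRole = (...allowed: Role[]) =>
  createMiddleware(async (c, next) => {
    const user = await getSessionUser(c.req.raw);
    if (!user || !allowed.includes(user.role)) {
      return c.json({ error: "forbidden" }, 403);
    }
    await next();
  });

const app = new Hono();
// Admins manage users; members and admins chat; readonly users only view.
app.use("/api/admin/*", requireRole("admin"));
app.use("/api/chat/*", requireRole("admin", "member"));
```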
Deployment Options
- One-command Docker deployment with optional HTTPS via Caddy
- Development mode with Bun for fast iteration
- Production builds optimized for Node.js compatibility
Tech Stack
Frontend
- Preact - 3KB React alternative
- TanStack Router - Type-safe routing
- TanStack Query - Server state management
- Dexie.js - IndexedDB wrapper for local storage
- Tailwind CSS 4.1 - Utility-first styling
Backend
- Hono - Ultrafast web framework
- Vercel AI SDK - Multi-provider LLM streaming (sketched after this list)
- SQLite - Embedded database for users and configuration
- Argon2 - Password hashing
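To show how the backend pieces fit together, here is a hedged sketch (not Faster Chat's actual source) of a Hono app that verifies a password hash with Argon2 and streams a completion through the Vercel AI SDK. Routes, model id, and the in-memory user store are illustrative only:

```ts
// Sketch: Argon2 password verification plus AI SDK streaming in one Hono app.
// Assumes a recent AI SDK release where streamText returns synchronously.
import { Hono } from "hono";
import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import argon2 from "argon2";

const app = new Hono();
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical in-memory stand-in for the SQLite users table.
const users = new Map<string, { passwordHash: string }>();
users.set("admin@example.com", { passwordHash: await argon2.hash("change-me") });

app.post("/api/login", async (c) => {
  const { email, password } = await c.req.json();
  const user = users.get(email);
  // argon2.verify checks the submitted password against the stored hash.
  if (!user || !(await argon2.verify(user.passwordHash, password))) {
    return c.json({ error: "invalid credentials" }, 401);
  }
  return c.json({ ok: true });
});

app.post("/api/chat", async (c) => {
  const { messages } = await c.req.json();
  const result = streamText({ model: openai("gpt-4o-mini"), messages });
  // Hono handlers may return a standard Response, so the token stream
  // flows straight through to the client.
  return result.toTextStreamResponse();
});

export default app;
```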
What’s Next?
- Installation - Get Faster Chat running locally
- Quick Start - Configure providers and start chatting
Community & Contributing
Faster Chat is MIT licensed and welcomes contributions.
Built with ❤️ by 1337Hero for developers who value privacy, speed, and control.