
Installation

Choose your installation method based on your needs:

  • Development Mode - Fast iteration with hot reload (recommended for development)
  • Docker - Quick deployment with one command
  • Docker + HTTPS - Production deployment with automatic SSL certificates

Prerequisites

What you need depends on the installation method you choose:

Development Mode:

  • Bun (recommended) or Node.js 20+
  • Git

Docker:

  • Docker and Docker Compose
  • Git

For Offline AI (optional):

  • Ollama - For running local models
  • At least 8GB RAM recommended for local models
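Before starting, a quick round of version checks confirms the tooling is available (read-only commands; only the pieces relevant to your chosen method matter):

bun --version    # or: node --version (should report v20 or newer)
git --version
docker --version # only needed for the Docker methods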

Development Mode

Perfect for contributing, customizing, or exploring the codebase.

git clone https://github.com/1337hero/faster-chat.git
cd faster-chat
bun install # or npm install

bun run dev # Starts both frontend and backend

On first run, the server will automatically:

  • ✅ Generate a secure encryption key for API key storage (server/.env)
  • ✅ Create required data directories
  • ✅ Initialize the SQLite database
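If you want to confirm the result, the generated key lands in server/.env under the same variable name the Docker setup below writes by hand (the rest of the file's contents are not documented here):

cat server/.env
# API_KEY_ENCRYPTION_KEY=<64 hex characters>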

The application will be available at http://localhost:3000. To create your first account:

  1. Navigate to http://localhost:3000/login
  2. Click “Register” to create an account
  3. Your first account is automatically promoted to admin

Docker

Quick deployment with a single command.

git clone https://github.com/1337hero/faster-chat.git
cd faster-chat
echo "API_KEY_ENCRYPTION_KEY=$(node -e "console.log(require('crypto').randomBytes(32).toString('hex'))")" > server/.env
docker compose up -d

Access at: http://localhost:8787

Default Settings:

  • Port: 8787 (configurable via APP_PORT in server/.env)
  • Storage: SQLite database in chat-data volume
  • Runtime: Node.js 22 on Debian (chosen for native module compatibility)

Environment Variables (server/.env):

# Required: Encryption key for API keys
API_KEY_ENCRYPTION_KEY=... # Auto-generated above
# Optional: Configure port
APP_PORT=8787 # Internal port (default: 8787)
# For local Ollama access from Docker
OLLAMA_BASE_URL=http://host.docker.internal:11434
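After editing server/.env, recreate the container so the new values take effect (this assumes the compose file passes server/.env through to the app container, as the key-generation step above implies):

docker compose up -d --force-recreate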

Useful Commands:

docker compose up -d # Start
docker compose logs -f # View logs
docker compose down # Stop
docker compose up -d --build # Rebuild
# Reset database
docker compose down
docker volume rm faster-chat_chat-data
docker compose up -d

Docker + HTTPS

Production deployment with automatic HTTPS certificates via Caddy.

Edit Caddyfile and replace localhost with your domain:

yourdomain.com {
    reverse_proxy app:8787
}
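Caddy provisions certificates for any domain listed in this file. If you prefer the compression and security headers to be explicit rather than relying on the shipped defaults, the block can be extended with standard Caddy directives (a sketch, not the project’s stock Caddyfile):

yourdomain.com {
    encode zstd gzip # response compression
    header Strict-Transport-Security "max-age=31536000" # HSTS
    reverse_proxy app:8787
}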

Create an A record pointing your domain to your server’s IP address.
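Before bringing the stack up, confirm the record has propagated, since certificate validation will fail otherwise:

dig +short yourdomain.com # should print your server's IP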

docker compose -f docker-compose.yml -f docker-compose.caddy.yml up -d
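Certificate issuance happens on first start. If HTTPS does not come up, the Caddy logs show the ACME exchange (assuming the service is named caddy in docker-compose.caddy.yml):

docker compose -f docker-compose.yml -f docker-compose.caddy.yml logs -f caddy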

Caddy Features:

  • Automatic HTTPS with Let’s Encrypt
  • Certificate auto-renewal
  • HTTP/2 and HTTP/3 support
  • Compression and security headers
  • Only 13MB overhead (Alpine-based)

Access at: https://yourdomain.com

Offline AI with Ollama

Run AI models completely offline on your local machine.

macOS / Linux:

curl -fsSL https://ollama.ai/install.sh | sh

Windows: Download from ollama.ai
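Either way, a version check confirms the CLI is on your PATH before pulling models:

ollama --version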

# Fast, general-purpose model
ollama pull llama3.2
# Larger, more capable model
ollama pull llama3.1:70b
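To verify the downloads, ollama list shows installed models and ollama run gives a quick smoke test:

ollama list
ollama run llama3.2 "Say hello"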
ollama serve # Usually runs automatically

Ollama runs on http://localhost:11434 by default.
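A plain HTTP request confirms the server is reachable; the same check works from a Docker container against the OLLAMA_BASE_URL configured earlier:

curl http://localhost:11434/api/tags # returns installed models as JSON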