# Quick Start Guide

This guide walks you through setting up Faster Chat and starting your first conversation.
## Step 1: Create Your Account

After installing Faster Chat, navigate to the login page:
- Development: `http://localhost:3000/login`
- Docker: `http://localhost:8787/login`
- Production: `https://yourdomain.com/login`
- Click Register to create a new account
- Enter your username and password
- Submit the registration form
## Step 2: Configure AI Providers

Access the Admin Panel to add your AI providers:
- Navigate to `/admin` (click your avatar → Admin Panel)
- Go to the Providers tab
- Click Add Provider
### Option A: Cloud Provider (OpenAI, Anthropic, etc.)

Configure a cloud-based AI service:
OpenAI Example:

```
Provider Name: OpenAI
Base URL: https://api.openai.com/v1
API Key: sk-proj-... (your OpenAI API key)
```

Anthropic Example:

```
Provider Name: Anthropic
Base URL: https://api.anthropic.com/v1
API Key: sk-ant-... (your Anthropic API key)
```

Other Cloud Providers:
- Groq: `https://api.groq.com/openai/v1` + API key
- Mistral: `https://api.mistral.ai/v1` + API key
- OpenRouter (100+ models): `https://openrouter.ai/api/v1` + API key
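Before saving a provider, you can sanity-check that the Base URL and API key work. For any OpenAI-compatible endpoint, listing the available models is a quick test (the URL and key below are placeholders; note that Anthropic's native API authenticates with an `x-api-key` header rather than a Bearer token):

```sh
# List models from an OpenAI-compatible endpoint to verify the key and URL
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-proj-..."
```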
### Option B: Local Provider (Ollama, LM Studio)

Configure a local AI service running on your machine:
Ollama Example:

```
Provider Name: Ollama
Base URL: http://localhost:11434
API Key: (leave empty)
```

LM Studio Example:

```
Provider Name: LM Studio
Base URL: http://localhost:1234/v1
API Key: (leave empty)
```
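Both local servers expose an endpoint you can query to confirm they are reachable before adding them as providers:

```sh
# Ollama: returns the locally installed models as JSON
curl http://localhost:11434/api/tags

# LM Studio: OpenAI-compatible model listing (start the local server in LM Studio first)
curl http://localhost:1234/v1/models
```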
## Step 3: Enable Models

After adding a provider, discover and enable available models:
- Stay in the Providers tab of the Admin Panel
- Find your newly added provider
- Click Refresh Models to auto-discover available models
- Toggle models ON to make them available in the chat interface
- Optionally set a default model for new conversations
## Step 4: Start Chatting

Now you’re ready to chat!
- Navigate to `/` (home page)
- Click New Chat or use an existing conversation
- Select a model from the dropdown
- Type your message and press Enter or click Send
- Watch as the AI streams its response in real-time
## Example Prompts

Try these to get started:
```
Explain how Faster Chat's local-first storage works
Write a Python script that analyzes log files
Help me debug this JavaScript error: [paste your error]
```

## Understanding the Interface

**Main Chat View:**
- Model Selector - Switch between enabled models
- Message Input - Type your prompts (supports Shift+Enter for newlines)
- Conversation List - Access your chat history (stored locally in IndexedDB)
**Admin Panel (`/admin`):**
- Providers Tab - Manage AI connections and API keys
- Models Tab - Enable/disable models from all providers
- Users Tab - Manage user accounts and permissions (admins only)
## Working Offline

Faster Chat is built for offline-first operation:
- Configure Ollama or LM Studio as a provider
- Pull models locally (e.g., `ollama pull llama3.2`)
- Disconnect from the internet
- Continue chatting - everything runs on your machine
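For example, a minimal offline setup with Ollama looks like this (the model name is just an example; any pulled model works):

```sh
# Download a model while still online
ollama pull llama3.2

# Confirm it is available locally; after this, no internet connection is needed
ollama list
```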
Your conversations are stored locally in IndexedDB and never sent to the server unless you upload files.
## File Attachments

Upload files to provide context to the AI:
- Click the 📎 attachment icon in the message input
- Select a file (images, PDFs, text files)
- The file content is sent with your message
- Files are stored server-side in `server/data/uploads/`
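If you self-host and want to confirm attachments are landing on disk, check that directory on the server (the path is relative to the project root):

```sh
# Inspect stored attachments on the server
ls -lh server/data/uploads/
```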
## Troubleshooting

**No models showing up?**
- Ensure you’ve added at least one provider in `/admin` → Providers
- Click Refresh Models after adding a provider
- Check that the provider’s Base URL is correct and accessible
**Connection errors with Ollama?**
- Verify Ollama is running: `ollama serve`
- Check Ollama is accessible: `curl http://localhost:11434`
- If using Docker, use `http://host.docker.internal:11434` as the Base URL
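To confirm the Faster Chat container can actually reach Ollama on the host, you can run the check from inside it (the container name `faster-chat` is a placeholder for whatever yours is called, and the image must include `curl`):

```sh
# Reachability check from inside the container ("faster-chat" is a placeholder name)
docker exec faster-chat curl -s http://host.docker.internal:11434
# A running Ollama instance responds with "Ollama is running"
```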
**API key errors?**
- Verify your API key is correct
- Check for proper formatting (no extra spaces)
- Ensure the key has sufficient quota/credits
**Slow responses?**
- Use a smaller, faster model (e.g., `llama3.2:3b` instead of `llama3.1:70b`)
- Check system resources (RAM/CPU usage)
- For cloud providers, check your network connection
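With Ollama, you can also check which models are currently loaded and how much memory they consume:

```sh
# Show loaded models, their size, and whether they run on CPU or GPU
ollama ps
```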
## Next Steps

- Provider Configuration - Deep dive into provider setup
- Admin Guide - User management and permissions
- Architecture - Learn how Faster Chat works under the hood