# Chat UI

A modern, open-source chat interface for interacting with large language models (LLMs). Built with SvelteKit, this application provides a clean, intuitive interface for chat-based AI interactions.
## Overview

Chat UI is a SvelteKit application that provides a web-based interface for chatting with various LLMs. It supports multiple backends, including OpenAI, Ollama, Hugging Face, and custom API endpoints.
## Features
- Modern UI: Clean, responsive chat interface
- Multiple Model Support: Connect to various LLM providers
- Real-time Streaming: Support for streaming responses
- Message History: Persistent chat history (with MongoDB)
- Customizable: Easy to customize and extend
- TypeScript Support: Type-safe development
## Installation

### Prerequisites
- Node.js 18+ and npm 9+
- MongoDB (for chat history, optional)
### Setup
- Install dependencies:

  ```bash
  npm install
  ```

- Copy the example environment file:

  ```bash
  cp .env.example .env.local
  ```

- Edit `.env.local` and configure your environment variables:

  ```bash
  VITE_API_BASE_URL=http://localhost:3000
  MONGODB_URL=mongodb://localhost:27017/chatui
  HF_TOKEN=your_huggingface_token
  OPENAI_API_KEY=your_openai_key
  OLLAMA_BASE_URL=http://localhost:11434
  ```

- Start the development server:

  ```bash
  npm run dev
  ```

  The application will be available at http://localhost:5173.
## Usage

### Development

```bash
# Start dev server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Run tests
npm test

# Lint code
npm run lint

# Format code
npm run format
```

## 🏭 Production Deployment
### 1. Docker Deployment (Recommended)

Build and run the Docker container:

```bash
# Build
docker build -t chatui:latest .

# Run
docker run -d \
  -p 3000:3000 \
  --env-file .env.production \
  --name chatui \
  chatui:latest
```

### 2. Vercel Deployment
Chat UI is optimized for Vercel deployment:

- Install the Vercel CLI:

  ```bash
  npm i -g vercel
  ```

- Deploy:

  ```bash
  vercel
  ```

- Set environment variables in the Vercel dashboard.
### 3. Node.js Hosting (PM2)

For VPS deployment (DigitalOcean, EC2):

```bash
npm run build
pm2 start build/index.js --name "chatui"
```
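You can also manage the process with a PM2 ecosystem file. This is a minimal sketch, not part of the project; the settings are assumptions to adapt to your environment. Note that cluster mode requires the sticky-session setup described in the next section if you use WebSockets:

```js
// ecosystem.config.js (hypothetical PM2 config; adjust to your setup)
module.exports = {
  apps: [
    {
      name: 'chatui',
      script: 'build/index.js', // SvelteKit adapter-node output
      instances: 'max',         // one worker per CPU core
      exec_mode: 'cluster',     // needs sticky sessions for WebSockets
      env: {
        NODE_ENV: 'production',
        PORT: 3000
      }
    }
  ]
};
```

Start it with `pm2 start ecosystem.config.js`.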
### WebSocket Scaling

To scale WebSocket connections across multiple instances:

- Redis Adapter: Use Redis to broadcast events between instances.
- Sticky Sessions: Configure your load balancer (Nginx/HAProxy) to use IP-hash or sticky sessions so that a client stays connected to the same server (see the sketch after this list).
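As a sketch, an Nginx setup using `ip_hash` for sticky sessions might look like the following. The server addresses are placeholders, and the `Upgrade`/`Connection` headers are what Nginx needs to proxy WebSocket upgrades:

```nginx
upstream chatui_backend {
    ip_hash;                 # route each client IP to the same instance
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://chatui_backend;
        proxy_http_version 1.1;                  # required for WebSockets
        proxy_set_header Upgrade $http_upgrade;  # pass the upgrade header
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```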
### CDN Configuration

Serve static assets (`_app/immutable/`) via a CDN (Cloudflare/CloudFront) by configuring `svelte.config.js`:

```js
kit: {
  paths: {
    assets: 'https://cdn.yourdomain.com'
  }
}
```

### Performance Optimization
- Tree Shaking: Enabled by default in the Vite build.
- Image Optimization: Use `@sveltejs/enhanced-img` (see the sketch after this list).
- Lazy Loading: Code-split routes are lazy-loaded automatically.
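A sketch of wiring up `@sveltejs/enhanced-img`, based on that package's documented usage (this config is an illustration, not part of the project):

```js
// vite.config.js (sketch): add the enhanced-img plugin before sveltekit()
import { sveltekit } from '@sveltejs/kit/vite';
import { enhancedImages } from '@sveltejs/enhanced-img';
import { defineConfig } from 'vite';

export default defineConfig({
  plugins: [enhancedImages(), sveltekit()]
});
```

Images are then referenced in markup as `<enhanced:img src="./hero.png" alt="..." />` (the path is a placeholder).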
### Monitoring

- Error Tracking: Integrate Sentry in `hooks.server.js` (see the sketch after this list).
- Performance: Monitor Core Web Vitals using Vercel Analytics or Google Analytics.
- Server Metrics: Monitor CPU/RAM usage of the Node.js process.
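A minimal Sentry sketch, assuming the `@sentry/sveltekit` SDK and a `SENTRY_DSN` environment variable (both are assumptions; adapt to your project):

```js
// hooks.server.js (hypothetical Sentry setup with @sentry/sveltekit)
import * as Sentry from '@sentry/sveltekit';
import { sequence } from '@sveltejs/kit/hooks';

Sentry.init({
  dsn: process.env.SENTRY_DSN, // assumed env var
  tracesSampleRate: 0.1        // sample 10% of transactions
});

// Instrument requests and report unhandled errors to Sentry
export const handle = sequence(Sentry.sentryHandle());
export const handleError = Sentry.handleErrorWithSentry();
```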
## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `VITE_API_BASE_URL` | Base URL for API | `http://localhost:3000` |
| `MONGODB_URL` | MongoDB connection string | - |
| `HF_TOKEN` | Hugging Face API token | - |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `OLLAMA_BASE_URL` | Ollama server URL | `http://localhost:11434` |
| `ORIGIN` | Allowed origin for CORS | `http://localhost:3000` |
## Connecting to LLM Backends

### OpenAI

Set your OpenAI API key in `.env.local`:

```bash
OPENAI_API_KEY=sk-...
```

### Ollama

Set the Ollama base URL:

```bash
OLLAMA_BASE_URL=http://localhost:11434
```
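To verify the Ollama server is reachable, you can query its model list via Ollama's standard `/api/tags` endpoint:

```bash
# Lists the models available on the local Ollama server
curl http://localhost:11434/api/tags
```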
### Hugging Face

Set your Hugging Face token:

```bash
HF_TOKEN=hf_...
```
## Project Structure

```
ChatUi/
├── src/
│   ├── components/              # Svelte components
│   │   └── ChatInterface.svelte
│   ├── lib/                     # Utilities and libraries
│   │   └── api.js               # API client
│   ├── routes/                  # SvelteKit routes
│   │   ├── api/                 # API endpoints
│   │   │   └── chat/            # Chat API
│   │   └── +page.svelte         # Main page
│   └── app.html                 # HTML template
├── static/                      # Static assets
├── package.json
├── svelte.config.js
├── vite.config.js
└── README.md
```

## Components
### ChatInterface

The main chat component. It handles the following (a usage sketch follows the list):
- Message display
- User input
- Message sending
- Loading states
- Typing indicators
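A minimal usage sketch; the component's props aren't documented here, so none are assumed:

```svelte
<script>
  // src/routes/+page.svelte: render the chat interface
  import ChatInterface from '../components/ChatInterface.svelte';
</script>

<ChatInterface />
```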
### API Client

Located in `src/lib/api.js`, the client provides:

- `sendMessage()`: Send a message and get a response
- `streamMessage()`: Stream responses in real time
- `getModels()`: Fetch available models
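A usage sketch. The argument shapes and return types below are assumptions; check `src/lib/api.js` for the actual signatures:

```js
import { sendMessage, streamMessage, getModels } from '$lib/api.js';

// Hypothetical usage: argument shapes are assumed, not documented
const models = await getModels();

const reply = await sendMessage({ message: 'Hello!', model: models[0] });
console.log(reply.content);

// Stream tokens as they arrive (assumes streamMessage returns an async iterable)
for await (const chunk of streamMessage({ message: 'Tell me a story' })) {
  process.stdout.write(chunk);
}
```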
## API Endpoints

### POST /api/chat

Send a chat message and get a response.

Request:

```json
{
  "message": "Hello, how are you?",
  "model": "gpt-3.5-turbo",
  "temperature": 0.7
}
```

Response:

```json
{
  "role": "assistant",
  "content": "I'm doing well, thank you!",
  "timestamp": "2024-01-15T10:30:00Z",
  "model": "gpt-3.5-turbo"
}
```
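For example, calling the endpoint from client-side code with `fetch`, using the request and response shapes shown above:

```js
// POST a chat message and read the assistant's reply
const res = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Hello, how are you?',
    model: 'gpt-3.5-turbo',
    temperature: 0.7
  })
});

const data = await res.json();
console.log(data.content); // e.g. "I'm doing well, thank you!"
```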
## Customization

### Styling

Modify styles in component `<style>` blocks or create a global stylesheet in `src/app.css`.
### Adding Features

- New Components: Add to `src/components/`
- API Routes: Add to `src/routes/api/` (see the example after this list)
- Utilities: Add to `src/lib/`
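In SvelteKit, a new API route is a `+server.js` file exporting HTTP method handlers. The `/api/models` route below is hypothetical, for illustration only:

```js
// src/routes/api/models/+server.js (hypothetical example route)
import { json } from '@sveltejs/kit';

// GET /api/models: return a static list of model names
export function GET() {
  return json({ models: ['gpt-3.5-turbo', 'llama3', 'mistral'] });
}
```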
## Deployment

### Build for Production

```bash
npm run build
```

### Deploy to Vercel

```bash
npm install -g vercel
vercel
```

### Deploy to Netlify

```bash
npm install -g netlify-cli
netlify deploy
```

### Docker

```dockerfile
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

EXPOSE 3000
CMD ["node", "build"]
```
## Troubleshooting

### Port Already in Use

Change the port in `vite.config.js`:

```js
server: {
  port: 5174 // use a different port
}
```

### MongoDB Connection Issues
Ensure MongoDB is running:

```bash
# Using Docker
docker run -d -p 27017:27017 --name mongo mongo:latest

# Or check if it's already running
docker ps | grep mongo
```

### API Errors
Check the browser console and server logs for detailed error messages, and ensure your API keys are set correctly in `.env.local`.
## Additional Resources
- 📚 Project Wiki - Comprehensive documentation
- 📖 ChatUi Wiki Page - Detailed project documentation
- 🔧 Development Setup - Development environment setup
- 🐛 Troubleshooting - Common issues and solutions
## Contributing

Contributions are welcome! Please see the main repository's Contributing Guidelines.
For issues, questions, or suggestions, please use the GitHub Issues page.
## License

See the main repository's LICENSE file.