# Terminal Agents: Production-Ready AI Coding Assistant

A comprehensive terminal-based AI agent for code assistance, similar to OpenCode. It provides AI-powered code analysis, generation, explanation, and debugging directly from your terminal.
## Features
- **Multi-Provider LLM Support**: OpenAI, Anthropic Claude, Ollama (free/local), Google, Azure
- **Code Analysis**: Analyze code files for issues, security vulnerabilities, and improvements
- **Code Explanation**: Get detailed explanations of code functionality
- **Code Generation**: Generate code from natural language descriptions
- **Code Fixing**: Fix bugs and improve code quality
- **Code Refactoring**: Refactor code for better maintainability
- **Interactive Chat**: Real-time chat interface with conversation history
- **Rich Terminal UI**: Terminal interface with colors, markdown, and syntax highlighting
- **File Operations**: Read, analyze, and work with code files
- **Configuration Management**: YAML config files and environment variables
## Prerequisites

- Python 3.8+
- At least one LLM provider configured:
  - **Ollama** (recommended: free, local): install Ollama
  - **OpenAI**: API key
  - **Anthropic**: API key
  - **Google**: API key
  - **Azure OpenAI**: configured Azure endpoint
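When no provider is set explicitly, the agent auto-detects one from the environment. A minimal sketch of how that detection might work, checking API-key variables in priority order and falling back to the free, local Ollama default. The function name `detect_provider`, the priority order, and the `GOOGLE_API_KEY` variable are assumptions for illustration, not the agent's actual logic (which lives in `config.py`):

```python
# Hypothetical sketch of provider auto-detection; the real logic in
# config.py may check different variables or in a different order.
PROVIDER_ENV_KEYS = [
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("google", "GOOGLE_API_KEY"),  # assumed variable name
]

def detect_provider(env: dict) -> str:
    """Return the first provider whose API key is present, else 'ollama'."""
    for provider, key in PROVIDER_ENV_KEYS:
        if env.get(key):
            return provider
    # Ollama needs no key; assume a local server as the free default.
    return "ollama"
```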
## Installation

### Quick Setup

```bash
cd projects/terminal_agents
./setup.sh
```

### Manual Setup

1. Navigate to the project:

   ```bash
   cd projects/terminal_agents
   ```

2. Create a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Make the agent executable:

   ```bash
   chmod +x agent.py
   ```
## Configuration

### Option 1: Environment Variables (Recommended)

```bash
# For OpenAI
export OPENAI_API_KEY=your_api_key_here

# For Anthropic
export ANTHROPIC_API_KEY=your_api_key_here

# For Ollama (default, no key needed if running locally)
export OLLAMA_BASE_URL=http://localhost:11434
export OLLAMA_MODEL=llama3.1:8b
```

### Option 2: Config File

Create `~/.terminal_agents/config.yaml`:
```yaml
# Provider selection (auto-detect if not set)
provider: ollama  # Options: ollama, openai, anthropic, google, azure

# Ollama (Free, Local)
ollama_base_url: http://localhost:11434
ollama_model: llama3.1:8b

# OpenAI
openai_api_key: your_key_here
openai_model: gpt-4o-mini

# Anthropic
anthropic_api_key: your_key_here
anthropic_model: claude-3-5-sonnet-20241022
```

### Option 3: Command Line Arguments

```bash
python agent.py --api-key your_key --provider openai --model gpt-4 chat "Hello"
```

## Usage
### Interactive Mode (Recommended)

Start an interactive chat session:

```bash
python agent.py interactive
# or
./agent.py interactive
```

**Interactive Commands:**

- `@analyze <file>` - Analyze a code file
- `@explain <file>` - Explain a code file
- `@generate <description>` - Generate code
- `@fix <file>` - Fix code issues
- `@refactor <file>` - Refactor code
- `clear` - Clear conversation history
- `save <file>` - Save the conversation
- `help` - Show help
- `exit` - Exit interactive mode
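The interactive loop has to distinguish `@`-commands and built-ins from plain chat input. A sketch of how that routing might look; the function name `parse_command` and the exact dispatch rules are illustrative, not the agent's actual implementation:

```python
# Hypothetical sketch of interactive-command routing; agent.py's
# actual dispatch may be structured differently.
def parse_command(line: str):
    """Split an interactive input line into (command, argument)."""
    line = line.strip()
    if line.startswith("@"):
        # @-commands: "@analyze app.py" -> ("analyze", "app.py")
        cmd, _, arg = line[1:].partition(" ")
        return cmd, arg.strip()
    if line in {"clear", "help", "exit"} or line.startswith("save "):
        cmd, _, arg = line.partition(" ")
        return cmd, arg.strip()
    # Anything else is treated as a plain chat message.
    return "chat", line
```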
### Command-Line Commands

#### Chat

Send a message to the agent:

```bash
python agent.py chat "Explain Python decorators"
```

#### Analyze Code

Analyze a code file:

```bash
python agent.py analyze app.py
```

#### Explain Code

Explain a piece of code:

```bash
python agent.py explain "def fibonacci(n): return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)"
# or
python agent.py explain app.py
```

#### Generate Code

Generate code from a description:

```bash
python agent.py generate "A function to calculate factorial"
```

#### Fix Code

Fix code issues:

```bash
python agent.py fix "def broken_function(x): return x / 0"
# or
python agent.py fix buggy_code.py
```

#### Refactor Code

Refactor code for improvement:

```bash
python agent.py refactor app.py
```

#### Help

View all available commands:

```bash
python agent.py help
```

## Production Considerations
### CLI Distribution

To distribute this tool to a team:

1. **PyPI Package**: Package the agent as a Python package and publish it to a private PyPI repository.

   ```bash
   python -m build
   twine upload dist/*
   ```

2. **Standalone Binary**: Use PyInstaller to create a single-file executable.

   ```bash
   pyinstaller --onefile agent.py
   ```

3. **Docker Image**: Distribute as a Docker image for consistent environments.

   ```bash
   docker run -it -v $(pwd):/app/code terminal-agent:latest
   ```
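A Dockerfile for the image above might look like the following sketch. The base image, paths, and entrypoint are assumptions; adjust to match the repository layout:

```dockerfile
# Hypothetical Dockerfile sketch; base image and paths are assumptions.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Mount your code at /app/code (as in the docker run example) and pass
# agent commands as container arguments.
ENTRYPOINT ["python", "agent.py"]
```

Build it with `docker build -t terminal-agent:latest .` before running the `docker run` command shown above.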
### Configuration Management

For team-wide configuration:

- **Shared Config**: Distribute a standard `config.yaml` to `~/.terminal_agents/` via configuration management tools (Ansible, Chef).
- **Environment Variables**: Enforce API keys via environment variables in CI/CD pipelines.

### Security Hardening

- **API Key Storage**: Never commit `config.yaml` with API keys to version control. Use a secrets manager (e.g., the `keyring` Python package) for local storage.
- **Input Sanitization**: The agent executes within the user's shell context. Ensure prompts do not contain malicious shell commands if piping input.
- **Audit Logging**: Enable logging to a file to audit agent usage and generated code.
## Use Cases

### Code Review

```bash
python agent.py analyze src/main.py
```

### Learning New Code

```bash
python agent.py explain "$(cat complex_algorithm.py)"
```

### Quick Code Generation

```bash
python agent.py generate "A REST API endpoint for user authentication"
```

### Debugging

```bash
python agent.py fix "$(cat buggy_code.py)"
```

### General Questions

```bash
python agent.py chat "What is the difference between async and await in Python?"
```

## Project Structure
```
terminal_agents/
├── agent.py           # Main agent application
├── config.py          # Configuration management
├── llm_providers.py   # LLM provider implementations
├── requirements.txt   # Python dependencies
├── setup.sh           # Setup script
├── DESIGN.md          # Design documentation
└── README.md          # This file
```
## Customization

### Changing the Default Model

Edit the default model in `config.py` or set environment variables:

```bash
export OLLAMA_MODEL=mistral:7b
export OPENAI_MODEL_NAME=gpt-4
```

### Adding New Commands
Add new command handlers in the `main()` function in `agent.py`.
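If the CLI is built on `argparse` subcommands, a new handler could be registered like the sketch below. The `summarize` command and `build_parser` helper are hypothetical examples, not part of the shipped agent:

```python
import argparse

# Hypothetical sketch of registering a new subcommand; agent.py's
# actual CLI wiring may differ.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="agent.py")
    sub = parser.add_subparsers(dest="command")

    # Existing commands would be registered here (chat, analyze, ...)
    chat = sub.add_parser("chat", help="Send a message to the agent")
    chat.add_argument("message")

    # A new (hypothetical) 'summarize' command follows the same pattern.
    summ = sub.add_parser("summarize", help="Summarize a code file")
    summ.add_argument("file")
    return parser
```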
### Custom Prompts
Modify prompt templates in the agent methods.
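A prompt template in this style is typically just a format string filled in per request. The wording below is an illustrative placeholder, not the agent's actual template:

```python
# Hypothetical prompt template; the real templates live in the agent
# methods and may be worded differently.
ANALYZE_PROMPT = (
    "You are a code reviewer. Analyze the following {language} code for "
    "bugs, security issues, and improvements:\n\n{code}"
)

def build_analyze_prompt(code: str, language: str = "Python") -> str:
    """Fill the analysis template with the code under review."""
    return ANALYZE_PROMPT.format(code=code, language=language)
```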
## Terminal UI

The agent uses the `rich` library for polished terminal output:

- **Colors**: Syntax highlighting and colored output
- **Markdown**: Renders markdown in the terminal
- **Panels**: Bordered panels for help text
- **Progress**: Progress indicators for long operations
- **Syntax Highlighting**: Code blocks with syntax highlighting

If `rich` is not available, the agent falls back to plain text output.
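The fallback described above is a common pattern: attempt the `rich` import and degrade to plain `print` when it fails. A sketch, with the function name `show` chosen for illustration:

```python
# Graceful-degradation sketch: prefer rich output, fall back to print.
try:
    from rich.console import Console

    _console = Console()

    def show(text: str) -> None:
        # rich handles colors, markup, and width-aware wrapping.
        _console.print(text)
except ImportError:
    def show(text: str) -> None:
        # Plain-text fallback when rich is not installed.
        print(text)
```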
## Security & Safety

- File write operations require explicit confirmation
- API keys are never logged or displayed
- Error messages are sanitized
- File paths are handled safely
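Sanitizing error messages usually means redacting anything that looks like a credential before it is printed or logged. A sketch of the idea; the `sanitize` function and regex are illustrative assumptions, not the agent's actual implementation:

```python
import re

# Hypothetical key-redaction sketch; the agent's real sanitization
# may cover more patterns.
KEY_PATTERN = re.compile(
    r"(sk-[A-Za-z0-9_-]{8,}|api[_-]?key\s*[:=]\s*\S+)",
    re.IGNORECASE,
)

def sanitize(message: str) -> str:
    """Replace anything resembling an API key with a placeholder."""
    return KEY_PATTERN.sub("[REDACTED]", message)
```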
## Troubleshooting

### "No LLM provider available"

**Solution**: Configure at least one provider:

- Install Ollama:

  ```bash
  curl -fsSL https://ollama.ai/install.sh | sh
  ```

- Or set API keys:

  ```bash
  export OPENAI_API_KEY=your_key
  ```

### "Module not found" errors

**Solution**: Install dependencies:

```bash
pip install -r requirements.txt
```

### Ollama connection errors

**Solution**: Ensure Ollama is running:

```bash
ollama serve
# In another terminal:
ollama pull llama3.1:8b
```

### Rich library not working

**Solution**: The agent will fall back to plain text. To fix:

```bash
pip install rich pygments
```

## Examples
### Example 1: Analyze a Python File

```bash
python agent.py analyze my_script.py
```

### Example 2: Generate Code

```bash
python agent.py generate "A function to sort a list of dictionaries by a key"
```

### Example 3: Interactive Session

```bash
python agent.py interactive
> @analyze app.py
> @generate "A REST API with FastAPI"
> @fix buggy_function.py
> exit
```

### Example 4: With an API Key

```bash
python agent.py --api-key your_key_here chat "Hello"
```

### Example 5: Pipe Code

```bash
cat code.py | python agent.py explain
```

## License
See main repository LICENSE file.
## Contributing
Contributions welcome! Please read the main repository contributing guidelines.
## Acknowledgments
Inspired by OpenCode and similar terminal-based AI coding assistants.
*Made with ❤️ for developers who love the terminal*