Terminal Agents - Production-Ready AI Coding Assistant

A comprehensive terminal-based AI agent for code assistance, similar to OpenCode. Provides AI-powered code analysis, generation, explanation, and debugging directly from your terminal.

πŸš€ Features

  • Multi-Provider LLM Support: OpenAI, Anthropic Claude, Ollama (free/local), Google, Azure
  • Code Analysis: Analyze code files for issues, security vulnerabilities, and improvements
  • Code Explanation: Get detailed explanations of code functionality
  • Code Generation: Generate code from natural language descriptions
  • Code Fixing: Fix bugs and improve code quality
  • Code Refactoring: Refactor code for better maintainability
  • Interactive Chat: Real-time chat interface with conversation history
  • Rich Terminal UI: Beautiful terminal interface with colors, markdown, and syntax highlighting
  • File Operations: Read, analyze, and work with code files
  • Configuration Management: YAML config files and environment variables

πŸ“‹ Prerequisites

  • Python 3 and pip
  • At least one LLM provider: a local Ollama installation (free) or an API key for OpenAI, Anthropic, Google, or Azure

πŸ› οΈ Installation

Quick Setup

cd projects/terminal_agents
./setup.sh

Manual Setup

  1. Navigate to the project:

    cd projects/terminal_agents
  2. Create virtual environment:

    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Make agent executable:

    chmod +x agent.py

βš™οΈ Configuration

Option 1: Environment Variables

Set API keys (and, optionally, model names) as environment variables:

export OPENAI_API_KEY=your_key_here
export OLLAMA_MODEL=llama3.1:8b

Option 2: Config File

Create ~/.terminal_agents/config.yaml:

# Provider selection (auto-detect if not set)
provider: ollama  # Options: ollama, openai, anthropic, google, azure

# Ollama (Free, Local)
ollama_base_url: http://localhost:11434
ollama_model: llama3.1:8b

# OpenAI
openai_api_key: your_key_here
openai_model: gpt-4o-mini

# Anthropic
anthropic_api_key: your_key_here
anthropic_model: claude-3-5-sonnet-20241022
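
For reference, here is a minimal sketch of how a loader in config.py could merge this file with environment variables; the key-to-variable mapping and precedence shown are assumptions for illustration, not the actual implementation:

# hypothetical config loader sketch: environment variables override the YAML file
import os
from pathlib import Path

import yaml  # PyYAML (assumed dependency)

CONFIG_PATH = Path.home() / ".terminal_agents" / "config.yaml"

ENV_OVERRIDES = {
    "openai_api_key": "OPENAI_API_KEY",
    "ollama_model": "OLLAMA_MODEL",
}

def load_config():
    config = {}
    if CONFIG_PATH.exists():
        config.update(yaml.safe_load(CONFIG_PATH.read_text()) or {})
    for key, env_var in ENV_OVERRIDES.items():
        if os.environ.get(env_var):
            config[key] = os.environ[env_var]
    return config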

Option 3: Command Line Arguments

python agent.py --api-key your_key --provider openai --model gpt-4 chat "Hello"

πŸš€ Usage

Command-Line Commands

Chat

Send a message to the agent:

python agent.py chat "Explain Python decorators"

Analyze Code

Analyze a code file:

python agent.py analyze app.py

Explain Code

Explain a piece of code:

python agent.py explain "def fibonacci(n): return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)"
# or
python agent.py explain app.py

Generate Code

Generate code from a description:

python agent.py generate "A function to calculate factorial"

Fix Code

Fix code issues:

python agent.py fix "def broken_function(x): return x / 0"
# or
python agent.py fix buggy_code.py

Refactor Code

Refactor a code file for better maintainability:

python agent.py refactor app.py

Help

View all available commands:

python agent.py help

🏭 Production Considerations

CLI Distribution

To distribute this tool to a team:

  1. PyPI Package: Package the agent as a Python package and publish to a private PyPI repository.

    python -m build
    twine upload dist/*
  2. Standalone Binary: Use PyInstaller to create a single-file executable.

    pyinstaller --onefile agent.py
  3. Docker Image: Distribute as a Docker image for consistent environments.

    docker run -it -v $(pwd):/app/code terminal-agent:latest

Configuration Management

For team-wide configuration:

  1. Shared Config: Distribute a standard config.yaml to ~/.terminal_agents/ via configuration management tools (Ansible, Chef).
  2. Environment Variables: Supply API keys through environment variables in CI/CD pipelines rather than committing them to config files.

Security Hardening

  1. API Key Storage: Never commit config.yaml with API keys to version control. For local storage, prefer the operating system keychain (e.g., via the keyring Python package) or a dedicated secrets manager; a minimal sketch follows this list.
  2. Input Sanitization: The agent executes within the user’s shell context. Ensure prompts do not contain malicious shell commands if piping input.
  3. Audit Logging: Enable logging to a file to audit agent usage and generated code.
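
A minimal sketch of the keyring approach (assumes the keyring package is installed; the service name and helper functions are illustrative, not part of the agent):

# store API keys in the OS keychain instead of a plaintext config.yaml
import keyring

SERVICE = "terminal_agents"  # arbitrary service name for this sketch

def store_api_key(provider, api_key):
    keyring.set_password(SERVICE, provider, api_key)

def load_api_key(provider):
    return keyring.get_password(SERVICE, provider)

# usage: store_api_key("openai", "sk-..."), then export the value at runtime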

🎯 Use Cases

Code Review

python agent.py analyze src/main.py

Learning New Code

python agent.py explain "$(cat complex_algorithm.py)"

Quick Code Generation

python agent.py generate "A REST API endpoint for user authentication"

Debugging

python agent.py fix "$(cat buggy_code.py)"

General Questions

python agent.py chat "What is the difference between async and await in Python?"

πŸ“¦ Project Structure

terminal_agents/
β”œβ”€β”€ agent.py             # Main agent application
β”œβ”€β”€ config.py            # Configuration management
β”œβ”€β”€ llm_providers.py     # LLM provider implementations
β”œβ”€β”€ requirements.txt     # Python dependencies
β”œβ”€β”€ setup.sh             # Setup script
β”œβ”€β”€ DESIGN.md            # Design documentation
└── README.md            # This file

πŸ”§ Customization

Changing the Default Model

Edit the default model in config.py or set environment variables:

export OLLAMA_MODEL=mistral:7b
export OPENAI_MODEL_NAME=gpt-4

Adding New Commands

Add new command handlers in the main() function in agent.py.
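
If main() dispatches on argparse subcommands, a new handler might look roughly like this; the function names, the agent.chat() call, and the subparser wiring are illustrative assumptions, not the actual agent.py API:

# hypothetical "docstring" subcommand wired into an argparse-based main()
def handle_docstring(args, agent):
    # read the target file and ask the agent to add docstrings
    with open(args.path) as f:
        source = f.read()
    print(agent.chat(f"Add docstrings to this code:\n\n{source}"))

def register_docstring_command(subparsers):
    parser = subparsers.add_parser("docstring", help="Add docstrings to a file")
    parser.add_argument("path", help="Path to the code file")
    parser.set_defaults(handler=handle_docstring)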

Custom Prompts

Modify prompt templates in the agent methods.
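
For example, an analysis prompt could be a plain format string inside the corresponding method (the template text below is illustrative; the real templates live in agent.py):

# illustrative analysis prompt template; edit the wording to change the focus
ANALYZE_PROMPT = (
    "You are a senior code reviewer. Analyze the following {language} code for "
    "bugs, security vulnerabilities, and possible improvements. Be concise.\n\n"
    "{code}"
)

# prompt = ANALYZE_PROMPT.format(language="Python", code=source)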

🎨 Terminal UI

The agent uses the rich library for beautiful terminal output:

  • Colors: Syntax highlighting and colored output
  • Markdown: Renders markdown in terminal
  • Panels: Beautiful bordered panels for help text
  • Progress: Progress indicators for long operations
  • Syntax Highlighting: Code blocks with syntax highlighting

If rich is not available, the agent falls back to plain text output.
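
The fallback pattern looks roughly like this (a sketch of the technique, not the agent's exact code):

# degrade gracefully when rich is not installed
try:
    from rich.console import Console
    from rich.markdown import Markdown

    console = Console()

    def show(text):
        console.print(Markdown(text))  # colored, rendered markdown
except ImportError:
    def show(text):
        print(text)  # plain-text fallback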

πŸ” Security & Safety

  • File write operations require explicit confirmation (see the sketch after this list)
  • API keys are never logged or displayed
  • Error messages are sanitized
  • Safe file path handling
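
In spirit, the confirmation and path checks look like this (a minimal sketch, not the actual implementation):

# sketch: confirm before writing, and refuse paths outside the working directory
from pathlib import Path

def safe_write(path_str, content, base_dir="."):
    base = Path(base_dir).resolve()
    path = (base / path_str).resolve()
    if base != path and base not in path.parents:
        raise ValueError(f"Refusing to write outside {base}: {path}")
    if input(f"Write {len(content)} bytes to {path}? [y/N] ").strip().lower() == "y":
        path.write_text(content)
        return True
    return False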

πŸ› Troubleshooting

β€œNo LLM provider available”

Solution: Configure at least one provider:

  • Install Ollama: curl -fsSL https://ollama.ai/install.sh | sh
  • Or set API keys: export OPENAI_API_KEY=your_key

β€œModule not found” errors

Solution: Install dependencies:

pip install -r requirements.txt

Ollama connection errors

Solution: Ensure Ollama is running:

ollama serve
# In another terminal:
ollama pull llama3.1:8b
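
You can also confirm the server is reachable (a quick check that assumes the requests package is installed):

# quick connectivity check against the local Ollama API
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
print("Ollama is up; local models:", [m["name"] for m in resp.json().get("models", [])])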

Rich library not working

Solution: The agent will fall back to plain text. To fix:

pip install rich pygments

πŸ“š Examples

Example 1: Analyze Python File

python agent.py analyze my_script.py

Example 2: Generate Code

python agent.py generate "A function to sort a list of dictionaries by a key"

Example 3: Interactive Session

python agent.py interactive
> @analyze app.py
> @generate "A REST API with FastAPI"
> @fix buggy_function.py
> exit

Example 4: With API Key

python agent.py --api-key your_key_here chat "Hello"

Example 5: Pipe Code

cat code.py | python agent.py explain
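
Handling piped input usually comes down to checking whether stdin is a TTY (a sketch of the pattern, not the agent's exact code):

# sketch: fall back to piped stdin when no argument is given
import sys

def read_input(arg=None):
    if arg:
        return arg
    if not sys.stdin.isatty():  # something was piped in
        return sys.stdin.read()
    raise SystemExit("No input: pass an argument or pipe code via stdin")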

πŸ“ License

See main repository LICENSE file.

🀝 Contributing

Contributions welcome! Please read the main repository contributing guidelines.

πŸ™ Acknowledgments

Inspired by OpenCode and similar terminal-based AI coding assistants.


Made with ❀️ for developers who love the terminal