Noundry.AndyAI 1.0.19
by Noundry
AndyAI - Noundry AI Development Assistant
An AI-powered CLI tool for the Noundry platform that works like Claude Code - with conversational mode, tool execution, and MCP server integration. Supports both local models (CodeLlama) and cloud providers (Anthropic, OpenAI, Google, Nebius, and custom OpenAI-compatible APIs).
    _      _   _   ____   __   __       _      ___
   / \    | \ | | |  _ \  \ \ / /      / \    |_ _|
  / _ \   |  \| | | | | |  \ V /      / _ \    | |
 / ___ \  | |\  | | |_| |   | |      / ___ \   | |
/_/   \_\ |_| \_| |____/    |_|     /_/   \_\ |___|
AndyAI - Noundry AI Development Assistant
Like Claude Code, but for Noundry
What AndyAI Does
AndyAI is an AI coding assistant that can:
- Chat conversationally about your code and project
- Read and write files in your project
- Execute shell commands and analyze output
- Search your codebase for files and patterns
- Generate code using Noundry patterns and TagHelpers
- Connect to MCP servers for extended tool capabilities
Two modes available:
- Local Mode: 100% private with CodeLlama - no API keys, no cloud
- Hosted Mode (BYOK): Use your own API keys for Anthropic, OpenAI, Google, Nebius, or any OpenAI-compatible endpoint
Quick Start
Option A: Use Hosted Providers (Recommended for best quality)
# Build
dotnet build
# Configure your API key (choose one)
dotnet run -- config set anthropic.apikey sk-ant-your-key
dotnet run -- config set openai.apikey sk-your-key
dotnet run -- config set nebius.apikey your-nebius-key
# Set active provider
dotnet run -- config set provider anthropic
# Start chatting
dotnet run -- chat
Option B: Run 100% Local
1. Download the Model (~4GB)
# Windows
.\scripts\download-model.ps1
# Linux/Mac
./scripts/download-model.sh
2. Build and Run
dotnet build
dotnet run -- config set mode Local
dotnet run -- chat
3. Start Chatting
> Create a contact form with validation
Andy: I'll create a contact form for you...
──────────────────── Tool: write_file ────────────────────
╭─ ✓ Result ────────────────────────────────────────────────╮
│ File written successfully: ContactForm.cshtml │
╰────────────────────────────────────────────────────────────╯
See the Getting Started Guide for a detailed walkthrough with examples.
Verified Features
All features have been tested with dotnet run -- test:
| Feature | Status | Notes |
|---|---|---|
| File read/write/edit | ✅ Working | Full CRUD operations |
| Command execution | ✅ Working | Shell commands with output capture |
| Tool call parsing | ✅ Working | 7/7 parsing tests pass |
| Agentic loop | ✅ Working | Model→Tools→Results→Model cycle |
| MCP client | ✅ Working | Graceful connection handling |
| Streaming responses | ✅ Working | Token-by-token output |
| Slash commands | ✅ Working | /file, /run, /ls, /tools, etc. |
| Configuration | ✅ Working | appsettings.json management |
Commands
CLI Commands
andy chat # Start interactive chat
andy chat --no-mcp # Chat without MCP server
andy chat -c ./src # Start with specific context path
andy run "prompt" # Run single prompt and exit
andy config list # View all configuration
andy config providers # View AI provider status
andy config set key value # Set configuration value
andy config get key # Get configuration value
andy config validate # Validate all API keys
andy config validate <provider> # Validate specific provider
andy config add-provider <name> <key> --baseurl <url> --model <model>
# Add custom OpenAI-compatible provider
andy config remove <provider> # Remove a provider's API key
andy test # Run validation tests
Slash Commands (in chat)
| Command | Description |
|---|---|
| /file <path> | Read and display a file |
| /run <command> | Execute a shell command |
| /search <pattern> | Search for files |
| /ls [path] | List directory contents |
| /pwd | Show current working directory |
| /tools | List available tools |
| /mcp | Show MCP server status |
| /history | Show recent conversation |
| /save | Save conversation to disk |
| /clear | Clear conversation context |
| /help | Show all commands |
| /exit | Exit chat |
Available Tools
AndyAI has 6 built-in tools that work offline:
| Tool | Description |
|---|---|
| read_file | Read contents of a file |
| write_file | Write/create a file |
| edit_file | Replace text in a file |
| execute_command | Run shell commands |
| search_files | Find files by pattern |
| list_directory | List directory contents |
When connected to the Noundry MCP server, additional platform-specific tools become available.
How It Works
AndyAI implements a Claude Code-like agentic loop:
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ User Input  │────▶│  LLM Model  │────▶│  Response   │
└─────────────┘     └──────┬──────┘     └─────────────┘
                           │
                           ▼  (if tool calls detected)
                    ┌──────────────┐
                    │ Extract Tool │
                    │    Calls     │
                    └──────┬───────┘
                           │
                           ▼
                    ┌──────────────┐
                    │   Execute    │
                    │    Tools     │
                    └──────┬───────┘
                           │
                           ▼
                    ┌──────────────┐
                    │ Feed Results │──────┐
                    │ Back to LLM  │      │
                    └──────────────┘      │
                           ▲              │
                           └──────────────┘
                        (loop until done)
The model outputs structured tool calls:
<tool_call>
{"name": "read_file", "arguments": {"path": "Program.cs"}}
</tool_call>
AndyAI extracts these, executes the tools, and feeds results back.
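The loop described above can be summarized in a short C# sketch. This is a minimal illustration, not AndyAI's actual implementation: ILlmClient, IToolExecutor, and the <tool_result> wrapper are placeholder names for the internal services, and the real code handles streaming, history, and error cases.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
// Illustrative placeholders for AndyAI's internal services.
public interface ILlmClient    { Task<string> CompleteAsync(IReadOnlyList<string> turns); }
public interface IToolExecutor { Task<string> ExecuteAsync(string name, JsonElement arguments); }
public sealed record ToolCall(string Name, JsonElement Arguments);
public static class AgenticLoop
{
    // Matches <tool_call>{ ... }</tool_call> blocks emitted by the model.
    private static readonly Regex ToolCallPattern = new(
        @"<tool_call>\s*(?<json>\{[\s\S]*?\})\s*</tool_call>", RegexOptions.Compiled);
    public static async Task<string> RunAsync(ILlmClient llm, IToolExecutor tools, string userInput)
    {
        var turns = new List<string> { userInput };
        while (true)
        {
            string response = await llm.CompleteAsync(turns);
            // Extract structured tool calls from the model's response.
            var calls = ToolCallPattern.Matches(response)
                .Select(m => JsonSerializer.Deserialize<ToolCall>(
                    m.Groups["json"].Value,
                    new JsonSerializerOptions { PropertyNameCaseInsensitive = true }))
                .Where(c => c is not null)
                .Cast<ToolCall>()
                .ToList();
            if (calls.Count == 0)
                return response;   // No tool calls: this is the final answer.
            turns.Add(response);
            // Execute each requested tool and feed the result back to the model.
            foreach (var call in calls)
            {
                string result = await tools.ExecuteAsync(call.Name, call.Arguments);
                turns.Add($"<tool_result name=\"{call.Name}\">{result}</tool_result>");
            }
        }
    }
}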
Configuration
AI Providers (BYOK - Bring Your Own Key)
AndyAI supports multiple AI providers. Configure using CLI commands or appsettings.json.
Built-in Providers
| Provider | Config Key | Environment Variable | Default Model |
|---|---|---|---|
| Anthropic | anthropic.apikey | ANTHROPIC_API_KEY | claude-opus-4-5-20251101 |
| OpenAI | openai.apikey | OPENAI_API_KEY | gpt-4o |
| Google | google.apikey | GOOGLE_API_KEY | gemini-pro |
| Noundry | noundry.apikey | NOUNDRY_API_KEY | noundry-andy-v1 |
| Nebius | nebius.apikey | NEBIUS_API_KEY | openai/gpt-oss-120b |
Configure via CLI
# Set API key
andy config set anthropic.apikey sk-ant-api03-xxx
andy config set openai.apikey sk-xxx
# Set active provider
andy config set provider anthropic
# View all providers
andy config providers
# Validate API keys
andy config validate # Validate all
andy config validate anthropic # Validate specific
# Remove invalid/expired key
andy config remove anthropic
Configure via Environment Variables
export ANTHROPIC_API_KEY=sk-ant-xxx
export OPENAI_API_KEY=sk-xxx
export NEBIUS_API_KEY=your-key
Adding Custom OpenAI-Compatible Providers
Any API that follows the OpenAI /chat/completions format can be added via CLI:
# Add Nebius
andy config add-provider nebius $NEBIUS_API_KEY \
--baseurl https://api.tokenfactory.nebius.com/v1 \
--model openai/gpt-oss-120b
# Add Together AI
andy config add-provider together $TOGETHER_KEY \
-b https://api.together.xyz/v1 \
-m meta-llama/Llama-3-70b-chat-hf
# Add local Ollama
andy config add-provider ollama not-needed \
-b http://localhost:11434/v1 \
-m llama3
# Then activate it
andy config set provider nebius
See Provider Configuration Guide for detailed setup.
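For context, "OpenAI-compatible" means the endpoint accepts the standard /chat/completions request shape. The C# sketch below sends a single request to the Nebius endpoint from the example above; it is independent of AndyAI and only illustrates the wire format a custom provider must support.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Text.Json;
// Base URL, model, and key mirror the Nebius example above; any compatible endpoint works.
var http = new HttpClient { BaseAddress = new Uri("https://api.tokenfactory.nebius.com/v1/") };
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("NEBIUS_API_KEY"));
var request = new
{
    model = "openai/gpt-oss-120b",
    messages = new[]
    {
        new { role = "system", content = "You are AndyAI, a Noundry coding assistant." },
        new { role = "user",   content = "Create a contact form with validation." }
    }
};
using var response = await http.PostAsJsonAsync("chat/completions", request);
response.EnsureSuccessStatusCode();
using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
// In the OpenAI format, the assistant reply lives at choices[0].message.content.
Console.WriteLine(doc.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString());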
API Key Validation
Validate your API keys to ensure they're working:
# Validate all configured providers
andy config validate
# Output:
# ╭───────────┬───────────┬──────────────────────────┬───────────────────────────╮
# │ Provider │ Status │ Model │ Message │
# ├───────────┼───────────┼──────────────────────────┼───────────────────────────┤
# │ anthropic │ ✓ Valid │ claude-opus-4-5-20251101 │ API key is valid │
# │ openai │ ✗ Invalid │ gpt-4o │ Invalid API key │
# ╰───────────┴───────────┴──────────────────────────┴───────────────────────────╯
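Conceptually, validating a key amounts to making a cheap authenticated request and checking the status code. The C# sketch below shows the idea for OpenAI only; the exact requests AndyAI issues are an implementation detail and may differ.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
// GET /v1/models returns 200 for a valid key and 401 for an invalid one.
using var response = await http.GetAsync("https://api.openai.com/v1/models");
Console.WriteLine(response.IsSuccessStatusCode ? "✓ Valid" : "✗ Invalid");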
If a key is invalid or expired, remove it:
andy config remove openai
General Configuration
Config file: appsettings.json
{
"ModelProvider": "Hosted",
"LocalModel": {
"Path": "models/codellama-7b-instruct.Q4_K_M.gguf",
"ContextSize": 4096,
"Temperature": 0.7,
"GpuLayers": -1
},
"Mcp": {
"ServerUrl": "https://mcp.noundry.ai/mcp",
"Enabled": true
}
}
View all settings:
andy config list
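If you want to read the same settings from your own C# code (for example in a build script), a file in this shape binds with the standard Microsoft.Extensions.Configuration APIs. The LocalModelOptions class below is illustrative only and is not AndyAI's actual settings type; the .Json and .Binder configuration packages are required.
using System;
using Microsoft.Extensions.Configuration;
var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: false)
    .Build();
// Bind the LocalModel section to a plain options object.
var local = config.GetSection("LocalModel").Get<LocalModelOptions>();
Console.WriteLine($"Provider: {config["ModelProvider"]}, GPU layers: {local?.GpuLayers}");
public sealed class LocalModelOptions
{
    public string Path { get; set; } = "";
    public int ContextSize { get; set; }
    public double Temperature { get; set; }
    public int GpuLayers { get; set; }
}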
GPU Acceleration
AndyAI automatically uses GPU acceleration when:
- NVIDIA GPU is installed with 6GB+ VRAM
- CUDA Toolkit 12.x is installed (Download)
- GpuLayers is set to -1 in appsettings.json (the default)
Verify the GPU is active by checking the startup output for:
Model initialized successfully - Backend: CUDA, GPU Layers: -1
Configure GPU layers:
{
"LocalModel": {
"GpuLayers": -1 // -1 = all layers on GPU, 0 = CPU only
}
}
See System Requirements for detailed GPU setup.
Requirements
Minimum
- .NET 9.0 SDK
- 8GB+ RAM
- ~8GB disk space (model + application)
For GPU Acceleration (Recommended)
- NVIDIA GPU with 6GB+ VRAM
- CUDA Toolkit 12.x (Download)
- NVIDIA Driver 525+
See System Requirements for detailed setup instructions.
Project Structure
AndyAI/
├── Commands/ # CLI commands (chat, run, config, test)
├── Services/ # Core services (LLM, MCP, file, process)
├── UI/ # Terminal UI (Spectre.Console)
├── Tests/ # Validation tests
├── scripts/ # Model download scripts
└── docs/ # Documentation
Validation
Run all tests to verify your installation:
dotnet run -- test
Expected output:
=== Tool Call Parsing Test Results ===
[PASS] SingleToolCall
[PASS] MultipleToolCalls
[PASS] ComplexArguments
[PASS] SpecialCharacters
[PASS] NoToolCalls
[PASS] MalformedToolCall
[PASS] NewlinesInJson
7/7 tests passed
=== Validation Test Results ===
[PASS] FileSystemService
[PASS] ProcessService
[PASS] ToolExecutor
[PASS] McpService
[PASS] AgenticConversationService
[PASS] ToolCallParsing
6/6 tests passed
Documentation
- Getting Started Guide - Detailed walkthrough with examples
- Provider Configuration - AI provider setup and custom endpoints
- System Requirements - Hardware, GPU, and CUDA setup
License
MIT License
Release Notes
v1.0.19: Noundry-first code generation, comprehensive NUnit test suite, MCP-first architecture, expanded template library (20 templates + component gallery)