@llm-dev-ops/connector-hub-cli
Command-line interface for LLM Connector Hub - a unified CLI for interacting with multiple Large Language Model providers.
Install globally:
npm install -g @llm-dev-ops/connector-hub-cli
Initialize configuration:
llm-hub config init
Set API keys:
llm-hub config set providers.openai.apiKey "your-api-key"
llm-hub config set providers.anthropic.apiKey "your-api-key"
llm-hub config set providers.google.apiKey "your-api-key"
Test provider connectivity:
llm-hub providers test openai
Get a completion:
llm-hub complete "What is TypeScript?" --provider openai
Start interactive chat:
llm-hub chat --provider anthropic --model claude-3-opus-20240229
complete
Get a single completion from an LLM provider.
llm-hub complete <prompt> [options]
Options:
-p, --provider <provider> LLM provider (openai, anthropic, google) (default: "openai")
-m, --model <model> Model to use
-t, --temperature <number> Temperature (0-2)
--max-tokens <number> Maximum tokens to generate
--stream Stream the response
--json Output as JSON
Examples:
# Basic completion
llm-hub complete "Explain quantum computing"
# With specific provider and model
llm-hub complete "Write a haiku" --provider anthropic --model claude-3-sonnet-20240229
# Stream response
llm-hub complete "Tell me a story" --stream
# JSON output
llm-hub complete "What is AI?" --json
chat
Start an interactive chat session.
llm-hub chat [options]
Options:
-p, --provider <provider> LLM provider (openai, anthropic, google) (default: "openai")
-m, --model <model> Model to use
-t, --temperature <number> Temperature (0-2) (default: 0.7)
--max-tokens <number> Maximum tokens to generate (default: 1000)
--system <message> System message
Examples:
# Start chat with default provider
llm-hub chat
# Chat with specific provider and model
llm-hub chat --provider anthropic --model claude-3-opus-20240229
# Chat with system message
llm-hub chat --system "You are a helpful coding assistant"
config
Manage CLI configuration.
llm-hub config <subcommand>
Subcommands:
show Show current configuration
set <key> <value> Set a configuration value
get <key> Get a configuration value
init Initialize configuration interactively
Examples:
# Show current configuration
llm-hub config show
# Set API key
llm-hub config set providers.openai.apiKey "sk-..."
# Get default provider
llm-hub config get defaultProvider
# Interactive setup
llm-hub config init
providers
Manage LLM providers.
llm-hub providers <subcommand>
Subcommands:
list List available providers
test <provider> Test provider connectivity
models <provider> List available models for a provider
Examples:
# List all providers
llm-hub providers list
# Test OpenAI connectivity
llm-hub providers test openai
# List Anthropic models
llm-hub providers models anthropic
Configuration is stored in ~/.llm-hub/config.json.
{
"defaultProvider": "openai",
"providers": {
"openai": {
"apiKey": "sk-..."
},
"anthropic": {
"apiKey": "sk-ant-..."
},
"google": {
"apiKey": "..."
}
},
"defaults": {
"temperature": 0.7,
"maxTokens": 1000
}
}
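Because the configuration is plain JSON, it can be inspected with standard shell tools. The sketch below reads defaultProvider from a self-contained copy of the file (the real file lives at ~/.llm-hub/config.json; the temporary path and the sed one-liner are illustrative, not part of the CLI):

```shell
# Write a minimal copy of the config so the snippet is self-contained.
mkdir -p /tmp/llm-hub-demo
cat > /tmp/llm-hub-demo/config.json <<'EOF'
{
  "defaultProvider": "openai",
  "defaults": { "temperature": 0.7, "maxTokens": 1000 }
}
EOF

# Extract the "defaultProvider" value with sed (no jq dependency).
sed -n 's/.*"defaultProvider": *"\([^"]*\)".*/\1/p' /tmp/llm-hub-demo/config.json
# prints: openai
```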
You can also set API keys via environment variables:
OPENAI_API_KEY - OpenAI API key
ANTHROPIC_API_KEY - Anthropic API key
GOOGLE_AI_API_KEY - Google AI API key
# Quick completion
llm-hub complete "What is the capital of France?"
# Stream a longer response
llm-hub complete "Write a short story about a robot" --stream --max-tokens 2000
# Use different provider
llm-hub complete "Explain machine learning" --provider anthropic
# Start chat
llm-hub chat
You: Hello!
Assistant: Hi! How can I help you today?
You: What's the weather like?
Assistant: I don't have access to real-time weather data...
You: exit
Goodbye!
# Initialize config
llm-hub config init
# Set default provider
llm-hub config set defaultProvider anthropic
# Set default temperature
llm-hub config set defaults.temperature 0.8
# View configuration
llm-hub config show
# Test all providers
llm-hub providers test openai
llm-hub providers test anthropic
llm-hub providers test google
# List available models
llm-hub providers models openai
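As noted above, API keys can also be supplied through environment variables instead of the config file. A minimal sketch with placeholder values (substitute real keys before use):

```shell
# Export placeholder API keys for the current shell session.
# These values are examples only; replace them with real keys.
export OPENAI_API_KEY="sk-example"
export ANTHROPIC_API_KEY="sk-ant-example"
export GOOGLE_AI_API_KEY="example"

# Child processes (such as llm-hub) inherit the exported variables.
printenv OPENAI_API_KEY
# prints: sk-example
```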
License: MIT OR Apache-2.0