Janitor AI Proxy Setup — Complete API Configuration Guide
Janitor AI's killer feature is the ability to route chats through a language model of your choosing, billed to your own API key. It is also the platform's biggest source of confusion. This guide walks through the three main approaches (OpenAI, OpenRouter, and reverse-proxy) and helps you decide when proxy setup is worth the hassle.
What is a Janitor AI proxy and why use one?
"Proxy" is a loose term in this community. It usually means one of three things: routing chats through your own OpenAI API key, using OpenRouter's multi-model API, or pointing Janitor AI at a third-party reverse-proxy URL. All three replace the platform's shared default model with something faster or more flexible.
The reasons users do it: speed (no queue), response quality (better models), content flexibility (different filters), and resilience (the chat keeps working when the shared default model is down). The cost: you pay the provider per token used, you take on a small amount of configuration, and you accept the risks of whichever provider you choose.
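To make the per-token cost concrete, here is a back-of-envelope estimate. Every number in it is a hypothetical placeholder; check your provider's current pricing page for real rates.

```python
# Back-of-envelope token-cost estimate. All numbers are hypothetical --
# substitute your provider's actual per-token rate and your own usage.
price_per_million_tokens = 1.00   # USD per 1M tokens (placeholder rate)
tokens_per_exchange = 1_500       # prompt + response, rough guess
exchanges_per_day = 40

daily_cost = exchanges_per_day * tokens_per_exchange / 1_000_000 * price_per_million_tokens
print(f"~${daily_cost:.2f}/day, ~${daily_cost * 30:.2f}/month")
# At these placeholder numbers: ~$0.06/day, ~$1.80/month.
```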
If those benefits do not matter to you, the default free model is fine and the proxy setup is not worth it. Read on if any of them do.
Proxy options: OpenAI vs OpenRouter vs reverse-proxy
| Option | Setup difficulty | Cost | Model selection | Risk profile |
|---|---|---|---|---|
| OpenAI direct | Easy | Per-token to OpenAI | OpenAI models only | Low — first-party provider |
| OpenRouter | Easy | Per-token, prepaid credit | Many models in one key | Low — single trusted aggregator |
| Reverse-proxy | Variable | Often "free" with risks | Depends on operator | High — unknown operator handles your traffic |
For most users, the choice is between OpenAI and OpenRouter. OpenAI gives access to the latest flagship models with clear billing. OpenRouter gives a single API key that unlocks dozens of models — including more permissive options — with prepaid credit. The Janitor AI community leans toward OpenRouter for model variety; OpenAI is simpler if you just want one reliable model.
Reverse-proxies are a separate animal. They are operated by unknown third parties, can read every chat you send through them, and routinely shut down without warning. We do not recommend specific reverse-proxy URLs and we do not link to them.
Step-by-step: OpenAI API setup
- Create an OpenAI account. Sign up on the OpenAI platform site, verify your email and phone, and add a payment method. Usage is billed per token, so set a usage limit to avoid surprises.
- Generate an API key. In the API keys section of your dashboard, create a new secret key. Copy it immediately — OpenAI will not show the full key again.
- Open Janitor AI settings. In the chat settings or API panel on Janitor AI, select OpenAI as your provider.
- Paste your key and pick a model. Paste the secret key and choose a model — usually a GPT-4-class model for quality or a GPT-3.5-class model for cheaper, faster chats.
- Test with a message. Send a message to any character. A working configuration returns a response in seconds. An error message usually points to an invalid key, missing credit, or the wrong model identifier; to rule out the platform itself, you can also test the key directly, as in the sketch after this list.
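If Janitor AI returns an error and you want to confirm whether the key and model are good, a direct call to the OpenAI chat-completions endpoint bypasses the platform entirely. A minimal sketch, assuming Python with the `requests` package; the model name is an example, not a recommendation:

```python
# Minimal direct test of an OpenAI API key, independent of Janitor AI.
# Requires: pip install requests. Replace API_KEY with your secret key.
import requests

API_KEY = "sk-..."  # your OpenAI secret key (never commit or share this)

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # example model id; use whichever you selected
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 20,
    },
    timeout=30,
)
resp.raise_for_status()  # raises on 401/429/etc. so failures are obvious
print(resp.json()["choices"][0]["message"]["content"])
```

If this call succeeds but Janitor AI still errors, the problem is in the platform settings (provider selection or pasted key), not the key itself.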
Step-by-step: OpenRouter setup
- Create an OpenRouter account. Sign up at openrouter.ai, then top up your credit balance with a payment method.
- Generate an OpenRouter API key. In your account, create a new key. Copy and store it securely.
- Pick a model. Browse the model catalogue and pick one with the price-per-token and policy you want. Note the model identifier exactly as listed.
- Paste key and model into Janitor AI. Open the model settings and select OpenRouter (or a custom OpenAI-compatible endpoint, depending on the UI version). Paste your key and enter the model identifier.
- Test and tune. Send a test message. If it works, adjust temperature and max tokens as you normally would. The sketch after this list shows the same request made directly against OpenRouter.
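OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so the direct test looks almost identical to the OpenAI one above; only the base URL, the key, and the model identifier change. A sketch, again assuming Python and `requests`, with a placeholder model id:

```python
# Minimal direct test of an OpenRouter key. Swapping models is just a
# matter of changing MODEL -- the key and URL stay the same.
import requests

API_KEY = "sk-or-..."  # your OpenRouter key
MODEL = "openai/gpt-4o-mini"  # example id; copy yours exactly from the catalogue

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```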
OpenRouter's appeal is that you can swap models without changing the key. If one model gets too expensive or too filtered, switch in the dropdown.
Reverse-proxy: how it works
A reverse-proxy in this context is a third-party server that accepts your Janitor AI traffic, forwards it to an LLM provider on your behalf, and returns the response. People turn to them to share an API key, to bypass regional restrictions, or to reach models they do not have credit for.
The mechanics are simple: you point Janitor AI at the proxy URL instead of OpenAI's URL. The platform sends requests there. The operator handles authentication, routing, rate-limiting, and the actual model call. They see every message you send and every response you receive.
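To make the trust problem concrete, here is a hypothetical sketch of what the operator's side could look like. Every name in it is made up, and it is an illustration, not a recommendation; the point is the `print` line, where the operator holds your traffic in plaintext.

```python
# Hypothetical sketch of a reverse-proxy operator's server, for illustration
# only. Requires: pip install flask requests.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
UPSTREAM = "https://api.openai.com/v1/chat/completions"  # the real provider
OPERATOR_KEY = "sk-..."  # the operator's key, not yours

@app.post("/v1/chat/completions")
def forward():
    payload = request.get_json()
    # At this point the operator has your entire conversation in plaintext.
    # Nothing but their goodwill stops them from logging or selling it.
    print(payload.get("messages"))
    upstream = requests.post(
        UPSTREAM,
        headers={"Authorization": f"Bearer {OPERATOR_KEY}"},
        json=payload,
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code
```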
This is why we do not recommend or link to specific reverse-proxies. The trust assumption — that you trust a stranger with all your conversations and potentially your API key — is incompatible with the level of privacy most users want. If you do not have a strong technical reason to use one, either pay for your own OpenAI or OpenRouter key, or use one of the alternative platforms we recommend.
Common errors and troubleshooting
- "Invalid API key" or 401: wrong key, expired key, or missing billing. Regenerate and try again.
- "Insufficient quota" or 429: you are out of credit, or hitting a per-minute rate limit. Top up or wait.
- "Model not found": the model identifier in your settings does not match what your provider accepts. Re-check the exact string.
- Timeouts: the model is overloaded or the endpoint URL is wrong. Test the endpoint independently (see the diagnostic sketch after this list).
- Garbage responses on a known-good model: the temperature or max-tokens setting is misconfigured. Reset both to defaults and retry.
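A small diagnostic sketch that ties these symptoms back to HTTP status codes. The endpoint, key, and model below are placeholders; substitute your provider's values:

```python
# Quick endpoint diagnostic: call the provider directly and translate the
# status code into the likely cause from the list above.
import requests

ENDPOINT = "https://api.openai.com/v1/chat/completions"  # or your provider's URL
API_KEY = "sk-..."     # placeholder key
MODEL = "gpt-4o-mini"  # placeholder model id

HINTS = {
    401: "invalid, expired, or revoked key",
    404: "wrong endpoint URL or unknown model identifier",
    429: "out of credit or rate-limited",
    500: "provider-side problem; retry later",
}

try:
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": "ping"}]},
        timeout=30,
    )
    print(resp.status_code, HINTS.get(resp.status_code, "ok" if resp.ok else "check response body"))
    print(resp.text[:300])  # the provider's error message is usually specific
except requests.exceptions.Timeout:
    print("timeout: endpoint overloaded or URL unreachable")
```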
When proxy setup is not worth it
If you chat occasionally, if you are not picky about response speed, or if you do not want to manage billing in two places, proxy setup is overkill. The same is true if you are still evaluating whether Janitor AI is the right platform at all.
In all of those cases, the cleaner path is to use a Janitor-AI-style platform that handles the model server-side, has no queue, and asks for no API keys. The alternatives page ranks the options. The current top pick is the closest match for the experience Janitor AI users want without the configuration burden.