FAQ

Common questions.

Straight answers about how Aorka works, what it costs, and how we handle your data.

Pricing & Plans

What counts as a user?

A user is any IT professional who logs into the Aorka console: technicians, admins, engineers. Endpoints are unlimited on every plan, so you're never penalized for connecting more machines.

Is there a free trial?

We offer a guided pilot program. We'll deploy agents in your environment and walk you through the platform over two weeks. No commitment, no credit card.

What's the difference between the plans?

Every plan includes the full platform — same features, same safety pipeline, unlimited endpoints and users. The difference is the AI models available and how you pay. Starter and Professional include AI usage at a flat rate. Enterprise has a lower base price with metered AI usage, access to all Claude models, and MCP server access for connecting external AI tools.

AI & Models

What AI model powers the chat?

Aorka uses Anthropic's Claude by default. Starter gets Haiku and Sonnet. Professional gets Sonnet and Opus. Enterprise gets all three Claude models — Haiku, Sonnet, and Opus — plus MCP server access so external AI tools like Claude Code, Codex CLI, and Gemini CLI can connect directly.

Do I have to generate my own scripts?

No. Aorka includes a library of thousands of parameterized scripts — every one security-evaluated and scored by real execution history. When a script succeeds, its score rises. When it fails, it drops. You can see what works before you run it. The library grows constantly as new solutions are added, and every script is searchable by describing the problem in plain English.
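The rise-and-fall scoring described above can be sketched as a simple running estimate. This is an illustrative model only, assuming an exponential moving average toward each outcome; Aorka's actual scoring algorithm, scale, and weights are not published here.

```python
class ScriptScore:
    """Hypothetical sketch: score rises on success, drops on failure."""

    def __init__(self, initial=50.0, weight=0.1):
        self.score = initial   # 0-100, higher = more reliable
        self.weight = weight   # how strongly each outcome moves the score

    def record(self, succeeded: bool) -> float:
        # Nudge the score toward 100 on success, toward 0 on failure
        target = 100.0 if succeeded else 0.0
        self.score += self.weight * (target - self.score)
        return self.score

s = ScriptScore()
for _ in range(5):
    s.record(True)   # a run of successes pulls the score up
s.record(False)      # a single failure pulls it back down
```

The point of a scheme like this is that the score reflects real execution history rather than a one-time review: a script that keeps working keeps climbing, and a script that starts failing loses trust quickly.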

Can I use my own AI tools with Aorka?

Enterprise tier includes MCP server access. Claude Code, Codex CLI, Gemini CLI, and any MCP-compatible tool can connect directly to your Aorka instance with full safety pipeline enforcement.

Security & Safety

Does the AI have unrestricted access to my machines?

No. Every command passes through a three-layer safety pipeline: regex command filter, AI risk evaluation, and human approval. Dangerous operations are blocked outright — there is no admin override. You approve every system change before it runs.
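The three layers above can be sketched as a gate that every command must pass in order. This is a minimal illustration, not Aorka's real implementation: the blocked patterns, the risk scorer, and the approval hook are all hypothetical stand-ins.

```python
import re

# Illustrative blocklist -- Aorka's real regex filter is far more extensive
BLOCKED = [re.compile(p, re.IGNORECASE) for p in (
    r"\bFormat-Volume\b",
    r"\bRemove-Item\b.*-Recurse",
)]

def layer1_regex(command: str) -> bool:
    """Deterministic filter: block if any pattern matches. No prompt can change a string match."""
    return not any(p.search(command) for p in BLOCKED)

def layer2_risk(command: str) -> int:
    """Stand-in for the AI risk evaluation (0 = safe, 100 = dangerous)."""
    return 90 if "Stop-Service" in command else 10

def layer3_human(command: str) -> bool:
    """Stand-in for browser-based human approval: nothing runs until a person clicks Approve."""
    return False

def safety_pipeline(command: str) -> bool:
    # All three layers must pass; any single failure blocks execution
    return layer1_regex(command) and layer2_risk(command) < 51 and layer3_human(command)
```

Note that even a command that passes the first two layers still returns False here, because the human approval layer defaults to "not approved" rather than "allowed".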

What if someone compromises my Aorka account?

Multiple layers limit the blast radius. Scripts with a risk score above 51 are hard-blocked — no admin can override this. Every write operation requires a separate browser-based approval, so a stolen API token alone can't trigger changes. Jobs are cryptographically signed (HMAC-SHA256) and verified independently at dispatch. New client units start read-only and must be explicitly unlocked. MFA protects sensitive operations. An attacker with a stolen session still can't run destructive commands through Aorka.
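HMAC-SHA256 job signing can be sketched in a few lines with Python's standard library. The key name and payload fields below are illustrative, not Aorka's actual wire format; the mechanism — sign at creation, verify independently at dispatch — is the point.

```python
import hashlib
import hmac
import json

# Illustrative signing key; in practice this would be a managed secret
SECRET = b"per-tenant-signing-key"

def sign_job(job: dict) -> str:
    # Canonical serialization so the same job always produces the same bytes
    payload = json.dumps(job, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_job(job: dict, signature: str) -> bool:
    # compare_digest is constant-time, resisting timing attacks
    return hmac.compare_digest(sign_job(job), signature)

job = {"target": "SRV-01", "action": "restart-spooler"}
sig = sign_job(job)
```

Because verification happens independently at dispatch, a job that was altered after signing — even by someone with API access — fails the signature check and never reaches an endpoint.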

Can the AI be tricked into running dangerous commands?

Prompt injection can make an AI suggest a dangerous command — it cannot make the safety pipeline approve it. Layer 1 is deterministic regex: it classifies every cmdlet before the AI sees it, and no prompt can change a string match. Layer 3 is a human clicking "Approve" in a separate browser session. The AI has no ability to click that button. The layers that matter most are not AI.

How are agent updates secured?

Agent updates are signed with an RSA-2048 private key that never leaves the development machine. The agent verifies the signature before applying any update — a compromised update server cannot produce a valid signature. Updates are pull-based (the agent decides whether to trust the payload) and rolled out in stages, never to the entire fleet at once.

What privileges does the agent need?

The agent runs as LocalSystem — the same privilege level as every other RMM agent (ConnectWise, Datto, NinjaRMM). It needs this to manage services, query AD, and install updates. The difference is what controls those privileges: outbound-only connections (no listening ports), temp-file execution (EDR-visible), DPAPI-encrypted credentials, device fingerprinting, and the full three-layer safety pipeline.

What about data privacy?

Your data is tenant-isolated at every layer. Facts, understanding, credentials, and conversations are scoped to your organization and stored on our infrastructure — nothing is stored with Anthropic, OpenAI, Google, or any AI provider. AI requests are stateless API calls. Your data is never used to train models.

Do I need to open firewall ports?

No. Agents initiate outbound WebSocket connections only. No inbound rules, no VPN, no port forwarding. If the machine can reach the internet over HTTPS, it can run Aorka.

Is there a detailed security document?

Yes. Our security architecture page covers each threat model in detail: control plane compromise, prompt injection, supply chain integrity, agent privileges, and data isolation — with an honest assessment of residual risk.

Deployment

How do I deploy agents?

Agents are packaged as a standard Windows MSI. Deploy through GPO, Intune, NinjaRMM, JumpCloud, or anything else that can push an MSI. You can also push directly from the Aorka console or run a one-line PowerShell installer.

What platforms are supported?

The agent currently supports Windows (Server and Desktop). The platform also integrates with FortiGate network devices and Microsoft 365 environments.

Still have questions?

Request a demo and we'll answer everything live.

Request a demo