Security
Any management agent with execution privileges is a security commitment. We don't minimize this — we engineered for it. Here's exactly what stands between the AI and your machines.
Every remote management tool — ConnectWise, Datto, NinjaRMM, Kaseya, and now Aorka — creates the same fundamental trade-off: you gain centralized control, but you also create a high-value target. If the control plane is compromised, every connected endpoint is at risk.
This is not unique to AI-powered tools. It's inherent to the RMM model. What is unique to Aorka is that the AI layer adds security constraints that traditional RMM tools lack entirely. In a conventional RMM, an operator types a command and it executes — no analysis, no scoring, no second opinion. Aorka's AI pipeline introduces three independent safety layers before anything runs.
We built Aorka assuming the control plane would be targeted. Every design decision below reflects that assumption.
"If an attacker compromises Aorka's central control plane, they could run commands on everything at once."
Scripts scoring 51 or higher are blocked for all users — including admins. There is no override, no emergency bypass, no "god mode." Destructive operations are caught by deterministic pattern matching before the AI evaluator even runs. An attacker with full admin access still cannot execute these commands through Aorka.
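To make the shape of that gate concrete, here is a minimal sketch in Python. The names and the 0–100 score scale are our illustrative assumptions, not Aorka's production code; the point is that the caller's role never enters the decision.

```python
BLOCK_THRESHOLD = 51  # assumed 0-100 scale; 51 and above is rejected for everyone

def may_proceed(classification: str, risk_score: int, actor_role: str) -> bool:
    """Decide whether a script may move on to the approval stage."""
    if classification == "blocked":    # layer 1: deterministic pattern match
        return False
    if risk_score >= BLOCK_THRESHOLD:  # hard ceiling; actor_role is never consulted
        return False
    return True
```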
Every write operation requires human approval in the browser — a separate authenticated session, not the same context that generated the script. Even if an API session is compromised, the attacker needs a valid browser session to approve execution. MCP (external AI tool) requests require approval at a unique, expiring URL with session authentication. MCP tokens are IP-bound at creation — requests from a different IP are rejected, and rebinding requires MFA verification.
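A minimal sketch of the IP-binding rule, with hypothetical field and function names:

```python
from dataclasses import dataclass

@dataclass
class McpToken:
    token_id: str
    bound_ip: str  # captured when the token is created

def authorize_request(token: McpToken, request_ip: str) -> bool:
    # A request from any other IP is rejected outright.
    return request_ip == token.bound_ip

def rebind(token: McpToken, new_ip: str, mfa_verified: bool) -> bool:
    # Rebinding is a separate, MFA-gated operation; no request triggers it implicitly.
    if not mfa_verified:
        return False
    token.bound_ip = new_ip
    return True
```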
Every job is cryptographically signed (HMAC-SHA256) at creation and verified at dispatch. The signing key is separate from the API credentials. A compromised API layer cannot forge jobs that pass signature verification at the WebSocket dispatch layer — the two systems must independently agree.
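In outline, the signing flow reduces to a few lines. This is a sketch assuming a JSON job payload and a 32-byte key; the actual payload format and key management are internal to Aorka.

```python
import hashlib
import hmac
import json
import secrets

SIGNING_KEY = secrets.token_bytes(32)  # held by the dispatch layer, not the API layer

def sign_job(job: dict) -> str:
    payload = json.dumps(job, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_job(job: dict, signature: str) -> bool:
    # Constant-time comparison at the WebSocket dispatch layer.
    return hmac.compare_digest(sign_job(job), signature)
```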
All agent connections use TLS (WSS) — the same baseline as any RMM. Aorka stacks additional layers on top. Agents verify the server's identity via HMAC-SHA256 challenge-response on every connection — if the server fails the challenge, the agent disconnects. The server verifies the agent via trusted hash registry and device fingerprint — if an agent's hash is unrecognized or its fingerprint doesn't match the registered endpoint, the connection is rejected. Both sides must prove identity before any commands are exchanged. Proof values auto-refresh during key rotation.
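The challenge-response half can be sketched as follows, assuming a shared per-agent proof key provisioned at enrollment (the real protocol details are Aorka's):

```python
import hashlib
import hmac
import os

proof_key = os.urandom(32)  # assumed: shared secret provisioned at agent enrollment

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Agent side: challenge the server before accepting any commands.
challenge = os.urandom(16)
server_proof = respond(challenge, proof_key)  # computed by the server
if not hmac.compare_digest(server_proof, respond(challenge, proof_key)):
    raise ConnectionError("server failed identity proof; disconnecting")
# The server runs the mirror-image check against the agent's trusted hash
# and device fingerprint before accepting the agent.
```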
Aorka authenticates through your Azure AD tenant — your conditional access policies, your MFA requirements, your session controls all apply first. Aorka then adds its own TOTP-based MFA on top for sensitive operations: credential access, MCP token management, and security-critical settings. Your security posture is the floor, not the ceiling. Session management includes active session listing, remote revocation, and security audit views. Rate limiting on authentication endpoints prevents brute-force attacks.
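TOTP itself is a standard construction (RFC 6238); a self-contained sketch of code generation and verification:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_totp(secret_b32, submitted_code):
    return hmac.compare_digest(totp(secret_b32), submitted_code)
```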
Every new client unit starts in read-only mode. Even with full admin access, an attacker cannot run write scripts against locked units. The tenant admin must explicitly unlock a unit before any state-changing commands are allowed — and if a parent unit is locked, all descendants are locked regardless of their individual setting.
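The inheritance rule is easy to state precisely. A sketch with hypothetical data shapes:

```python
def effectively_locked(unit_id, parent_of, locked):
    """A unit is writable only if it and every ancestor are explicitly unlocked."""
    node = unit_id
    while node is not None:
        if locked.get(node, True):  # unknown/new units default to locked (read-only)
            return True
        node = parent_of.get(node)
    return False

# Example: the client unit is unlocked, but its parent is locked,
# so the client and all its descendants stay read-only.
parent_of = {"client-a": "msp-root", "msp-root": None}
locked = {"client-a": False, "msp-root": True}
assert effectively_locked("client-a", parent_of, locked)
```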
"An attacker could trick the AI into bypassing safety filters and executing unauthorized commands."
This is the most common concern about AI-powered tools, and it reflects a real risk in systems where the AI has direct execution authority. Aorka's architecture is specifically designed so that the AI never has direct execution authority.
Regex command filter
Deterministic pattern matching. Every operation is classified as read-only, write, or blocked before the AI sees the command. No prompt can bypass a pattern match — it's string comparison, not language understanding.
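A toy version of such a classifier, with made-up patterns (Aorka's actual rule set is larger and not public):

```python
import re

BLOCKED_PATTERNS = [r"\brm\s+-rf\s+/", r"\bmkfs\b", r"\bdd\s+if=", r"-Recurse\s+-Force"]
WRITE_PATTERNS = [r"\b(Set|New|Stop|Restart|Remove)-\w+",
                  r"\bsystemctl\s+(stop|restart|disable)\b"]

def classify(command: str) -> str:
    """Deterministic first pass; runs before any AI sees the command."""
    if any(re.search(p, command, re.IGNORECASE) for p in BLOCKED_PATTERNS):
        return "blocked"
    if any(re.search(p, command, re.IGNORECASE) for p in WRITE_PATTERNS):
        return "write"  # will require human approval downstream
    return "read-only"
```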
AI risk evaluation
Multiple independent AI evaluations run from different model contexts and their analyses are compared. This is not the chat model scoring its own output — it's separate models evaluating the same script independently. They only score — they cannot approve or execute. Even if one evaluation were manipulated to return a low score, the others would flag the discrepancy, Layer 1's classification still holds, and Layer 3's human approval still applies for any write operation.
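One plausible way to combine independent evaluations conservatively (the spread threshold is an illustrative assumption):

```python
def combine_evaluations(scores, max_spread=15):
    """Return (effective_score, discrepancy_flag) for independent evaluations.

    The effective score is the highest, i.e. most cautious, of the set, so a
    single manipulated low score cannot drag the result down; a wide spread
    between evaluators is itself flagged for review.
    """
    effective = max(scores)
    discrepancy = (effective - min(scores)) > max_spread
    return effective, discrepancy
```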
Human approval gate
Every write operation requires explicit human approval in the browser. The AI cannot click "Approve" — that's a separate authenticated action by a human user. This is not a soft guardrail the AI can talk its way around; it's a UI gate in a different security context.
The key insight: prompt injection can make an AI suggest a dangerous command. It cannot make a regex filter misclassify it. It cannot make a human click "Approve." It cannot forge a cryptographic job signature. The layers that matter most are not AI.
"If Aorka's update server is hijacked, a malicious agent version could be pushed to your entire fleet."
Agent updates are signed with an RSA-2048 private key that never leaves the development machine. The agent verifies the signature against an embedded public key before applying any update. Even if the update server is fully compromised, the attacker cannot produce a valid signature without the offline private key.
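The verification step maps onto standard library calls. A sketch using Python's `cryptography` package, with a freshly generated key pair standing in for the real offline private key and embedded public key; the padding scheme here is an assumption.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-ins: in the real flow the private key never leaves the offline
# development machine, and the public key is compiled into the agent.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def sign_update(payload: bytes) -> bytes:  # done offline, at release time
    return private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

def verify_update(payload: bytes, signature: bytes) -> bool:  # done by the agent
    try:
        public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```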
Every agent version's hash is registered in the database. Agents report their running hash on connection. The server verifies the hash against the trusted registry — unrecognized hashes are flagged and can be blocked. This provides a second independent check beyond RSA signature verification.
The server never pushes arbitrary code to agents. It sets a flag indicating an update is available. The agent polls for this flag, downloads the update, verifies the RSA signature, and applies it. This is a fundamentally different model from "server pushes code directly" — the agent decides whether to trust the payload.
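The pull model reduces to a short loop on the agent side. The callables here are hypothetical stand-ins; `verify_update` is the RSA check sketched above.

```python
import time

def update_loop(check_flag, fetch_update, verify_update, apply_update, poll_interval=300):
    """The server can only raise a flag; the agent decides what to trust."""
    while True:
        if check_flag():                           # 1. agent polls for the flag
            payload, signature = fetch_update()    # 2. agent downloads the update
            if verify_update(payload, signature):  # 3. embedded-key RSA check
                apply_update(payload)              # 4. apply only if the signature holds
        time.sleep(poll_interval)
```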
Updates are never pushed to the entire fleet at once. New versions are deployed to test endpoints first, verified, then gradually rolled out. A bad update affects a handful of machines, not your entire infrastructure.
"The agent requires elevated privileges. If it's compromised, the attacker inherits those permissions."
This is accurate and unavoidable. The agent requires elevated privileges because it needs to manage services, query directories, install updates, and perform the operations IT teams require. Every RMM agent — ConnectWise, Datto, NinjaRMM — has this same requirement.
The mitigation is not reducing privileges (which would make the tool useless) but controlling what those privileges are used for: every command that reaches the agent has already passed the signed-job, deterministic-filter, and human-approval layers described above, and it executes as a plaintext script your EDR can inspect and block.
"How do you prevent a single vulnerability from cascading into full compromise?"
The Rule of Two, established by the Chromium security team, states that a component must never combine more than two of three dangerous properties. If all three are present, a single exploit can cascade into full system compromise. Security-critical systems are designed so that at least one property is always absent.
The original Rule of Two addresses code execution: can crafted input exploit a memory-unsafe language to hijack a privileged process?
Untrustworthy input. Commands arrive from the server over a network connection. Input is filtered through the deterministic safety pipeline and server identity is verified via HMAC challenge-response before any commands are accepted.
Unsafe implementation language. The agent runs on a managed, memory-safe runtime (.NET CLR for PowerShell, or equivalent on other platforms). No C/C++ in the execution path. Memory corruption exploits do not apply.
High privilege. The agent runs as SYSTEM (Windows), root (Linux/macOS), or equivalent. This is required for IT management operations.
Two of three. The absent property — unsafe language — means that even if an attacker could deliver malicious input, the managed runtime prevents memory corruption from escalating into arbitrary code execution.
The classic Rule of Two addresses memory safety. AI orchestration introduces a different class of risk: non-deterministic processing. An AI model interpreting untrusted data can be manipulated in ways that are difficult to predict — analogous to how memory-unsafe code can be exploited by crafted input. For AI orchestration, the three properties become:
Untrustworthy input. Agent-reported data — job results, diagnostics, status reports — originates from endpoints that could be compromised.
Non-deterministic processing. An AI model reasons about the input, interprets meaning, and makes decisions. Unlike deterministic code, its behavior under adversarial input is not fully predictable.
High privilege. Write access to the knowledge base, credential dispatch, and job execution across the managed infrastructure.
If all three combine in a single component, a compromised endpoint can feed crafted data to the AI, manipulate its reasoning, and use its privileged access to poison knowledge, harvest credentials, or influence actions on other endpoints. This is the AI orchestration equivalent of a buffer overflow leading to privilege escalation.
Aorka's architecture prevents this by ensuring deterministic gates stand between the AI and every privileged infrastructure action. Every write operation to managed infrastructure passes through multiple independent filters: deterministic command classification, AI risk scoring, human approval, and EDR policy enforcement on the endpoint itself. Knowledge mutations are logged with full provenance and are reversible. Credential dispatch and command execution are gated by deterministic validation layers that the AI cannot bypass.
Aorka plays its cards face up. Every layer of the system is designed so that both sides — Aorka and your environment — can independently verify the other. You don't have to take our word for it.
The Aorka agent is plaintext source code — not a compiled binary. Open the agent file on any endpoint and read every line of code that runs on your machine. Your EDR can read it. Your security team can audit it. There is nothing hidden. If someone edits it, the hash changes and the server refuses the connection — the trusted hash registry catches the mismatch immediately.
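Conceptually, the integrity check is a hash comparison (illustrative names; the trusted registry lives server-side):

```python
import hashlib

def agent_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def connection_allowed(reported_hash: str, trusted_hashes: set) -> bool:
    # Any local edit to the plaintext agent changes its hash, so an
    # unrecognized hash is rejected before any commands flow.
    return reported_hash in trusted_hashes
```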
Commands execute as readable script files, not encoded or obfuscated blobs. Your endpoint protection sees every command that runs — the full text, in the clear. This is effectively another safety layer, one held by your endpoints rather than by us. If your EDR policy blocks a command, it blocks it — Aorka doesn't bypass endpoint security.
This principle runs through the entire system. Your IDP enforces your security policies — we add TOTP on top. You can verify us, we can verify you. The agent verifies the server's identity. The server verifies the agent's identity. Scripts are visible to your EDR. Audit logs are visible to your team. No layer depends solely on trust in Aorka.
Every data layer is tenant-scoped: facts, understanding, credentials, conversations, endpoints, and job history. Multi-tenant SSO auto-discovery assigns users to their Azure AD tenant. Cross-tenant data access is structurally impossible — queries are scoped at the database level, not the application level.
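The structural idea, sketched against SQLite for brevity. Aorka's actual store and schema will differ, and the production enforcement sits in the database layer itself; this sketch only shows the shape of the guarantee, that the tenant predicate is part of every query by construction rather than an optional filter.

```python
import sqlite3

class TenantScopedStore:
    """No method exists that queries without the tenant predicate."""

    def __init__(self, conn: sqlite3.Connection, tenant_id: str):
        self._conn = conn
        self._tenant_id = tenant_id

    def endpoints(self):
        return self._conn.execute(
            "SELECT id, hostname FROM endpoints WHERE tenant_id = ?",
            (self._tenant_id,),
        ).fetchall()
```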
AI requests are stateless API calls. Your infrastructure data, knowledge base, and credentials are never stored by Anthropic, OpenAI, Google, or any AI provider. Conversation history is stored in Aorka's database under your tenant, not in the AI provider's systems. Your data is never used to train models.
Stored credentials use AES-256-GCM encryption at rest. Viewing a credential requires MFA verification — not just session authentication. Break-glass access for emergencies is logged and alerted. Credential access controls are independent of unit access.
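The primitive is standard. A sketch with Python's `cryptography` package; binding the ciphertext to its tenant via associated data is our illustrative assumption, not a documented Aorka detail.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in production: from a secrets manager

def encrypt_credential(plaintext: bytes, tenant_id: str) -> bytes:
    nonce = os.urandom(12)    # unique per encryption, never reused with this key
    aad = tenant_id.encode()  # assumed: binds the ciphertext to its tenant
    return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt_credential(blob: bytes, tenant_id: str) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, tenant_id.encode())
```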
Application secrets (API keys, signing keys, encryption keys) are managed via AWS Secrets Manager with IAM-scoped access policies. No secrets are stored in code, configuration files, or environment variables on the application server.
No security architecture eliminates all risk. Here's what we consider the honest residual:
Like every SaaS tool, Aorka requires trust in the vendor. We mitigate this with cryptographic verification (RSA signatures, HMAC signing, mutual authentication), staged rollouts, and comprehensive audit logging — but the trust relationship exists. That's true of ConnectWise, Datto, and every other RMM.
An attacker who compromises a tenant admin's session can approve scripts within Aorka's allowed risk range (0–50). MFA, session management, and the hard ceiling at 51 limit this, but it's the most realistic attack vector. Treat your Aorka console access with the same seriousness as your RMM — because it is one.
| Threat | Traditional RMM | Aorka |
|---|---|---|
| Operator runs destructive command | Executes immediately | Blocked if score >50; write approval required |
| Compromised admin session | Full execution access | Hard ceiling at score 50; unit locks; MFA on sensitive ops |
| Update server hijacked | Varies by vendor | RSA signature + trusted hash verification |
| MITM on agent connection | TLS only | TLS + HMAC challenge-response + trusted hash + device fingerprint |
| AI manipulation | N/A (no AI layer) | Deterministic filter + separate evaluator + human gate |
| Agent transparency | Compiled binary (opaque) | Plaintext source code, EDR-visible execution, hash-verified integrity |
We'll walk you through the safety pipeline with your own infrastructure. Break it if you can.
Request a demo