
NEAR AI Cloud
Pre-launchMost InnovativeOpenClaw in Trusted Execution Environments with cryptographic privacy
Security Score: 36/100 — Good
NEAR AI Cloud stands out with a genuinely differentiated security architecture: hardware-enforced Trusted Execution Environments (Intel TDX + NVIDIA Confidential Compute) that provide cryptographic privacy guarantees rather than trust-based promises. The platform scores well on data isolation, credential protection, and platform security due to the TEE foundation. However, the OpenClaw hosting offering is still in private beta ('early access by invitation only') and the documentation focuses almost entirely on the inference privacy story. Critical agent-specific security concerns — rogue agent controls, kill switches, behavioral monitoring, approval workflows, backup/export, and output trustworthiness — are either undocumented or absent. The company is well-funded (NEAR Foundation, $542M raised), has a named leadership team including Illia Polosukhin (co-author of 'Attention Is All You Need'), and maintains comprehensive legal documentation (Privacy Policy, ToS, DPA, AUP). IronClaw (their Rust-based OpenClaw alternative) shows promising security-first design with WASM sandboxing and leak detection, but it is a separate product from the hosted OpenClaw offering. The gap between NEAR AI's excellent infrastructure-level security and the limited agent-level safety documentation is the primary concern.
10 risk categories scored 1-10 × evidence weight. Based on our methodology, grounded in OWASP Agentic Security, NIST CSF 2.0, and CIS Controls.
NEAR AI Cloud provides strong documented data isolation through hardware-enforced TEEs. Their documentation states 'Your OpenClaw runs in an encrypted execution environment that no external party can inspect, including NEAR AI' and 'We do not retain plaintext Content outside the enclave.' The privacy policy explicitly states 'NEAR AI will not use Customer Data to train any generative AI models' and confirms 'Chat Content, including prompts and outputs are encrypted in transit and at rest. During inference, they are decrypted only inside an attested hardware enclave (e.g., Intel TDX) designed to prevent access by our personnel and cloud operators.' The Cloud page also specifies '100% TLS 1.3 encryption, AES-256 at rest.' Employee access is architecturally prevented by TEE design rather than just policy.
The IronClaw project (NEAR AI's Rust-based OpenClaw alternative) documents hardware-enforced sandboxing with 'WASM containers for tool execution with capability-based permissions' and 'pattern-based prompt injection detection.' NEAR AI Cloud runs workloads inside TEEs where 'Code and data inside the TEE cannot be modified or intercepted.' However, for the OpenClaw hosting specifically (still in private beta), there is limited documentation about prompt injection defenses, human-in-the-loop controls, or memory integrity protection specific to the OpenClaw deployment. The TEE provides strong container escape prevention, but agent-level hijacking mitigations are less clearly documented.
TEE architecture provides strong credential protection: 'long-term memory, credentials, and tool access can persist without ever leaving encrypted memory—even while running in the cloud.' IronClaw documents 'Secrets injected at host boundary, never exposed to WASM code' with 'Leak detection - Scans requests and responses for secret exfiltration attempts' and 'AES-256-GCM encryption for stored secrets.' The privacy policy confirms 'We discard IP addresses after routing.' However, specific credential lifecycle management (rotation, revocation) for OpenClaw deployments is not documented, and there is no mention of credential leak detection in agent outputs for the hosted OpenClaw product specifically.
No documentation was found on the OpenClaw hosting offering regarding agent guardrails such as rate limiting on agent actions, kill switches, spending caps on agent behavior, least-privilege tool access, or behavioral monitoring. The OpenClaw page focuses entirely on the privacy/TEE value proposition. IronClaw mentions 'Rate limiting and resource constraints per tool' but this is a separate product not yet integrated into the hosted offering. The OpenClaw hosting is described as an 'always-on AI agent' with 'deep access' but without documented controls on what that agent can do autonomously.
No documentation was found about backup procedures, data export capabilities, or disaster recovery for OpenClaw deployments on NEAR AI Cloud. The Cloud page claims '99.5% monthly uptime for confidential enclaves' which is relatively modest. The Terms of Service state 'ALL PAYMENTS ARE NONREFUNDABLE' and 'NEAR AI will not be liable for any change to or any suspension or discontinuation of the Services.' The product is still in private beta ('Early access is by invitation only'), which increases risk of service changes. NEAR AI is backed by NEAR Foundation with $542M raised, providing some stability signal, but there is no documented data portability plan.
The Cloud page publishes specific per-token pricing for multiple models (e.g., 'GLM-4.6 FP8: $0.75/M input tokens, $2/M output tokens'). The Terms mention 'Usage Credits' that are prepaid and non-refundable, with access suspended when credits are exhausted — this acts as a natural spending cap. However, the Terms also state 'Any use of the Services in excess of the usage limits set forth in a Plan will be billed in arrears' and NEAR AI reserves the right to 'change our available Plans, or the fees for a Plan, at any time.' There is no documented alerting system for usage spikes specific to OpenClaw agent workloads.
The privacy policy describes a comprehensive data handling framework with GDPR compliance, CCPA compliance, and multiple jurisdiction coverage. It states 'We maintain layered controls for data confidentiality, integrity, and availability' including 'continuous monitoring and audit logging.' The Cloud page claims 'Real-time monitoring with immutable audit logs.' Attestation artifacts include 'Per-request CPU/GPU enclave quotes, container hash, policy digest, timestamp, account ID.' Contact channels are documented (privacy@near.ai, legal@near.ai). However, no specific incident response timeline or breach notification SLA is published, and there is no public status page referenced.
NEAR AI maintains 'written agreements with the providers of the Third-Party Generative AI Services prohibiting third parties from using Customer Data to train their AI models.' The verification system provides 'cryptographic proofs that verify the integrity of the execution environment' and 'All AI outputs are cryptographically signed inside the TEE.' IronClaw is open-source (Apache 2.0 / MIT). However, there is no published SBOM, no documented dependency scanning process, and no mention of MCP server or tool vetting for the OpenClaw deployment. The attestation covers model integrity but not the broader supply chain of plugins and integrations that OpenClaw uses.
The Cloud platform supports authentication via Google, GitHub, and NEAR Wallet (SSO). The privacy policy describes 'strong identity and access management with SSO/MFA/RBAC, network and application hardening.' TLS termination occurs 'inside the TEE, not at an external load balancer' which is a notably strong architectural choice. The policy mentions 'independent reviews and penetration testing' though no specific audit reports are published. The Cloud page specifies 'HSM-backed key rotation every 90 days.' Use restrictions in the Terms prohibit users from attempting to 'bypass or disable rate limits, security, or attestation/verification mechanisms.' Inter-agent communication security is not specifically documented.
No documentation was found about OpenClaw-specific mitigations for hallucinations, output manipulation, approval workflows, undo capabilities, or transparency about AI uncertainty. The Terms disclaim liability for outputs: 'OUTPUTS MAY BE INACCURATE, INCOMPLETE, MISLEADING, OFFENSIVE, OR OTHERWISE UNSUITABLE FOR ANY PARTICULAR PURPOSE' and place responsibility entirely on the customer. IronClaw mentions 'Content sanitization with policy enforcement' but this relates to input safety, not output trustworthiness. The OpenClaw hosting offering does not document any human-in-the-loop or verification workflows for high-impact agent decisions.
Key Features
- ✓24/7 operation via Telegram
- ✓Persistent memory
- ✓Gmail, Calendar, Notion integration
Integrations
Strengths
- +Only genuinely novel security technology
- +Hardware-level guarantees others can't match
- +Well-funded ($500M+ NEAR Protocol)
Weaknesses
- −Pre-launch with limited access (application required)
- −NEAR token association limits mainstream appeal
- −TEE protects from host, not from the agent itself
- −Crypto/Web3 branding may hurt credibility
Verdict
Most innovative security approach, but limited availability and crypto-native focus. Watch this space.