
3 posts tagged with "AI Agent"



vArmor v0.10.1: AI Agent Traffic Inspection, Key Injection, and CVE-2026-31431 Mitigation

Danny Wei (ByteDance) · 10 min read

In vArmor v0.10.0, we introduced the NetworkProxy enforcer — a sidecar-based transparent proxy that brings L4/L7 network access control to Kubernetes workloads. While v0.10.0 could already enforce allow/deny policies on plaintext HTTP and TLS SNI, HTTPS encrypted traffic remained a black box: the proxy could see the destination domain via SNI, but could not inspect request paths, headers, or response bodies.

vArmor v0.10.1 completes Phase 2 of the NetworkProxy enforcer by adding TLS Man-in-the-Middle (MITM) capabilities, unlocking deep HTTPS inspection, automatic header injection, and anti-Domain-Fronting protection. The release also introduces IPv6 dual-stack support, configurable sidecar resource quotas, and a ConfigMap-to-Secret migration for improved security, and demonstrates rapid CVE response through a CVE-2026-31431 mitigation case study.
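At its core, anti-Domain-Fronting protection becomes possible once MITM decryption lets the proxy see the inner HTTP request: the (plaintext) TLS SNI hostname can then be compared against the Host header of the decrypted request. The sketch below illustrates the idea only; it is not vArmor's actual implementation, and the function name is hypothetical.

```python
def is_domain_fronting(sni_hostname: str, host_header: str) -> bool:
    """Return True when the inner HTTP Host disagrees with the TLS SNI.

    Domain fronting hides the real destination by presenting a benign
    hostname in the plaintext SNI while the encrypted HTTP request
    targets a different host. After MITM decryption, a proxy can simply
    compare the two values.
    """
    # Strip an optional port from the Host header, e.g. "api.example.com:443".
    host = host_header.rsplit(":", 1)[0] if ":" in host_header else host_header
    return sni_hostname.lower() != host.lower()
```

For example, a request carrying SNI `cdn.example.com` but an inner Host of `evil.internal` would be flagged, while a request whose SNI and Host agree passes the check.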

vArmor v0.10.0: Network Access Control for AI Agents

Danny Wei (ByteDance) · 10 min read

With the explosive growth of AI Agents, more and more enterprises are deploying Agents in Kubernetes clusters as containerized workloads. These Agents typically need to call external LLM APIs (e.g., OpenAI, Anthropic), execute code, access tool plugins, and even connect to external services through MCP (Model Context Protocol). This high degree of autonomy also brings new security challenges: how can we ensure that an Agent only accesses authorized network resources?

vArmor v0.10.0 introduces the new NetworkProxy enforcer, which implements L4/L7 network traffic interception and access control through a sidecar proxy architecture, providing fine-grained network security protection for AI Agent workloads. This article focuses on this core feature and its application in AI Agent protection scenarios.
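Conceptually, an L7 egress policy reduces each outbound request to an allow/deny decision based on the destination host and path, with deny as the default. The sketch below uses a hypothetical rule format for illustration; it is not vArmor's policy syntax.

```python
from fnmatch import fnmatch

# Hypothetical allowlist: (host pattern, path prefix) pairs an Agent may reach.
ALLOW_RULES = [
    ("api.openai.com", "/v1/"),
    ("api.anthropic.com", "/v1/"),
    ("*.internal.example.com", "/"),  # wildcard host match
]

def is_allowed(host: str, path: str) -> bool:
    """Default-deny: permit a request only if it matches an allow rule."""
    return any(
        fnmatch(host.lower(), pattern) and path.startswith(prefix)
        for pattern, prefix in ALLOW_RULES
    )
```

Under these rules, an Agent calling `api.openai.com/v1/chat/completions` is permitted, while a request to an unlisted host is blocked; in a real deployment the proxy would make this decision on intercepted traffic rather than in application code.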

AI Application Development Platform Security Hardening Practices

Danny Wei (ByteDance) · 7 min read

With the advent of large language models, LLM-based AI applications have emerged in rapid succession. This has given rise to AI application development platforms represented by Coze, Dify, Camel, and others. These platforms provide visual design and orchestration tools, enabling users to quickly build various AI applications on top of LLMs using no-code or low-code approaches, thus meeting personalized needs and realizing business value.

An AI application development platform is essentially a SaaS platform on which different users develop and host AI applications. The platform therefore needs to guard against the risk of cross-tenant attacks and take corresponding preventive measures. Taking the real-world risk of the "code execution plugin" as an example, this article demonstrates the necessity of isolation and hardening, and shows how to use vArmor to harden plugins, thereby ensuring the security of the platform and its tenants.
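The cross-tenant risk of a code execution plugin is easy to see: without isolation, tenant-submitted code runs with the plugin service's own privileges and can reach its environment and filesystem. The deliberately naive sketch below is an illustration of the risk, not any platform's real plugin API.

```python
import os

def run_plugin(user_code: str) -> dict:
    """Naively execute tenant-supplied code -- DO NOT do this in production.

    Without sandboxing, the code runs with the host process's full
    privileges and can read credentials, environment variables, or
    another tenant's files.
    """
    scope: dict = {}
    exec(user_code, scope)
    return scope

# A malicious "plugin" harvesting the service's environment variable names
# (in a real attack: API keys, DB passwords, other tenants' secrets).
leaked = run_plugin(
    "import os\n"
    "stolen = sorted(os.environ)[:3]\n"
)
```

This is exactly why such plugins need to run in a hardened, isolated sandbox rather than inside the platform's own process.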