Agents broke the security stack
and it's costing you a lot
We made three security investments at Array Ventures this year in these areas of data security, network security, and employee security testing. We are looking for more.
The security stack we built for the last 20 years assumed a human was somewhere in the loop. Agents are breaking the stack and humans can’t make sense of it. Worse of it insurance companies can’t make sense of it.
Investment 1: Agents broke data security
The first thing AI agents broke is data security. It’s already happening. Most security teams can’t see it.
April 2026. Vercel got breached without anyone touching Vercel. The attackers compromised Context.ai, a third-party AI assistant a Vercel employee had connected to their Google Workspace. They stole the OAuth tokens, took over the Google account, pivoted into Vercel’s internal systems, and walked out with 580 employee records. The database is on a hacker forum right now. No exploit. No phishing. Just an AI assistant, three companies removed from where the breach started.
2025. Researchers disclosed EchoLeak, a vulnerability in Microsoft 365 Copilot. An attacker sends an email with prompt injection hidden in the markdown. Copilot reads it automatically, follows the hidden instructions, and exfiltrates SharePoint and Teams data through approved Microsoft URLs. No human ever opens the email. No alert fires.
Both attacks share one property the old stack can’t handle: the actor isn’t a person. It’s a process. It has tokens. It has admin permissions. It moves data through syscalls.
There are now roughly 100 non-human identities inside the average enterprise for every human user. Most security teams can’t even inventory their own.
Endpoint security watches keyboards and screens. DLP writes rules per app. SaaS security tracks login flows. None of it catches an agent. Write a new rule and the agent has already found ten ways around it.
IBM’s 2025 report puts the average breach at $10.22M and the average detection time at 241 days. By the time you find out, your data has been moving for eight months.
Investment 2: Agents broke trust at the network layer
Your website doesn’t run on one thing. It runs on a stack of internet plumbing most people never see. DNS maps your domain to an IP. BGP routes traffic to that IP. TLS proves the server is really you. JavaScript is the actual code running in your customers’ browsers.
Each layer was secured separately. Each layer mostly works. But attackers figured out years ago that you don’t need to break a layer if you can compromise the trust between layers.
2024. Polyfill.io. A Chinese company bought the domain. 100,000+ websites loaded JavaScript from it, including Intuit, Mercedes-Benz, and the World Economic Forum. Months later, the new owners started serving malware. The TLS was valid. The DNS resolved correctly. The script came from the same trusted endpoint everyone had used for years. No single-layer tool was watching the gap between who owned the domain and what the script did.
For years, cross-layer attacks were rare enough that most companies got away with not thinking about them. That’s changing fast. Enterprise traffic is now mostly machine-to-machine. AI agents call APIs and load third-party code without a human in the loop. The blast radius of a single chain attack is bigger than it’s ever been.
Attackers are also using AI to probe for these gaps. A category of attack that used to require expert craftsmanship now runs at scale.
Crypto exchanges. Banks. Healthcare portals. Tax filing services. Anyone whose business depends on customers reaching the right server through the right code. This is where impersonation attacks become regulatory incidents and lost customer trust at scale.
Investment 3: The breach isn’t what kills you. The response does.
The most expensive part of a breach isn’t the technical exploit. It’s the chaos inside the company afterward.
70% of security leaders say internal chaos during a breach causes more damage than the attack itself (Cytactic CIRM Report 2025). 57% say their most recent incident was something they had never seen before. The runbooks and the dry runs do not survive contact with the actual fight.
MGM, 2023. A 10-minute social engineering call to the IT help desk. The response cost over $100M. Slot machines, hotel keys, ATMs, and websites went down across 30+ properties. Staff reverted to pen and paper. Guests waited hours in lines. Nobody could tell them whether the outage would last a day or three weeks.
Here’s the structural problem: 150,000 security teams worldwide run the same compliance-mandated crisis exercise every year. A static narrative, written months earlier. No surprises. No adaptation. Teams check the box and move on. When a real attack arrives that nobody has seen before, nobody has rehearsed for it.
Crisis preparation is the part of security that has never scaled. The technical layer keeps getting better every year. Detection. Prevention. Endpoint. Network. The human and organizational layer of incident response has barely moved. Putting a team through realistic crisis pressure used to require flying in expensive specialists, which only the largest enterprises could afford.
AI is the first technology that makes adaptive crisis preparation tractable for everyone else.
What we already backed
One bet in each layer this year:
Data movement governance for agents. A team rebuilding DLP at the kernel level, treating agents and humans as the same kind of process moving data through syscalls.
Web infrastructure security. Catching impersonation attacks that exploit gaps between the layers of the web stack. The kind no single tool sees.
Adaptive crisis training. Replacing the static compliance tabletop with AI adversaries that probe, adapt, and force teams to rehearse for the attacker they’ll actually face.
Where we’re still looking
We’re actively writing checks in:
Agent identity and authority management. Non-human identities outnumber humans 100 to 1. Most companies can’t inventory theirs.
Autonomous SOC and incident response. Defenders need to match offense’s clock speed.
AI red-teaming and pre-deployment testing. Models ship faster than testing tools can keep up. We want the team that fixes this.
Vibe coding security. Agents are shipping production code with security holes baked in. Nobody has solved it.
How are you thinking about security in your stack? What’s the gap that worries you most?
If you’re building a company in any of these areas, email us deals@array.vc.


