Traditional perimeter security assumes everything inside the network is safe. AI agents obliterate that assumption by operating across the open internet with no fixed perimeter. Zero trust — verify every request, never trust by default — is the only security model that works for autonomous agents. Our 102 million domain categorization database provides the verification layer that makes zero trust enforceable at every navigation event.
Every autonomous agent today follows a trust-everything model by default. It treats a phishing site identically to a Fortune 500 corporate page — because it has no mechanism to differentiate.
When a human employee browses the web, the corporate network applies firewall rules, CASB policies, and web proxy filters at the network edge. The employee operates within a defined security perimeter. AI agents shatter this model entirely. An agent executing a research task may visit 50 domains in 30 seconds, crossing organizational boundaries, jurisdictions, and trust zones with every click. There is no network perimeter to enforce — the agent IS the perimeter.
Zero trust for AI agents requires a verification oracle — a system that can answer the question "should this agent be allowed to visit this URL?" in under a millisecond, for every single navigation event, without exception. Our 102 million domain database serves as that oracle. Every domain is pre-classified with IAB categories, page types, reputation scores, and popularity rankings. The agent's harness queries this database before every navigation — no URL gets implicit trust.
The zero trust model maps directly to three database fields: IAB category determines whether the content is within the agent's approved scope. Page type determines whether the page function is safe for automated interaction. Reputation score determines whether the domain itself is trustworthy. All three checks must pass before the agent is allowed to proceed — verify everything, trust nothing.
How URL categorization enables verify-first, least-privilege, continuous-validation browsing for AI agents
Every URL the agent targets is verified against the 102M domain database before the HTTP request fires. The verification returns the domain's IAB category, page type, reputation score, and popularity ranking. No URL bypasses this check — not even URLs the agent has visited before in the same session. Re-verification on every request prevents trust escalation attacks where a legitimate domain redirects to a malicious one mid-session.
Instead of giving agents blanket internet access, zero trust restricts each agent to the minimum set of domain categories required for its task. A financial research agent gets access to "Business and Finance" and "News" categories only. A product comparison agent gets "Shopping" and "Technology" categories. The database's 700+ IAB categories enable granular privilege scoping that matches the agent's actual task requirements — nothing more.
Zero trust does not stop at the first verification. As the agent navigates from page to page, every link click, redirect, and JavaScript-triggered navigation is re-validated against the database. Session context is maintained to detect drift — if an agent starts in "Technology" domains and gradually migrates toward "Adult" or "Gambling" content through link chains, the continuous validation catches the category boundary crossing and halts navigation.
Production-ready snippets implementing verify-first navigation for autonomous agents
import http.client
import json
from datetime import datetime, timezone
from urllib.parse import urlencode


class ZeroTrustAgentGateway:
    """Every URL is verified before agent navigation. No implicit trust."""

    SENSITIVE_PAGE_TYPES = ["login", "checkout", "settings", "admin", "signup"]

    def __init__(self, api_key, allowed_categories=None):
        self.api_key = api_key
        self.allowed_categories = allowed_categories or []
        self.conn = http.client.HTTPSConnection(
            "www.websitecategorizationapi.com"
        )
        self.audit_log = []

    def verify_url(self, target_url):
        """Zero trust verification: classify, evaluate, decide."""
        # urlencode() escapes the target URL so query characters
        # like '&' or '=' cannot corrupt the request body
        payload = urlencode({
            "query": target_url,
            "api_key": self.api_key,
            "data_type": "url",
            "expanded_categories": 1,
        })
        headers = {
            "Content-Type": "application/x-www-form-urlencoded"
        }
        self.conn.request(
            "POST",
            "/api/iab/iab_web_content_filtering.php",
            payload,
            headers,
        )
        res = self.conn.getresponse()
        return json.loads(res.read().decode("utf-8"))

    def enforce_zero_trust(self, target_url, agent_id):
        """Three-check verification: category, page type, reputation."""
        data = self.verify_url(target_url)
        # Check 1: Category within allowed scope
        categories = [
            c[0].split("Category name: ")[1]
            for c in data.get("iab_classification", [])
        ]
        category_approved = any(
            cat in self.allowed_categories for cat in categories
        )
        # Check 2: Page type is not sensitive
        page_type = data.get("page_type", "unknown")
        page_type_safe = page_type not in self.SENSITIVE_PAGE_TYPES
        # Check 3: Reputation score above threshold
        reputation = data.get("open_page_rank", 0)
        reputation_ok = float(reputation) >= 2.0
        # All three must pass — zero trust means zero exceptions
        trust_decision = (
            category_approved and page_type_safe and reputation_ok
        )
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "url": target_url,
            "categories": categories,
            "page_type": page_type,
            "reputation": reputation,
            "decision": "allow" if trust_decision else "deny",
        })
        return trust_decision, self.audit_log[-1]


# Usage
gateway = ZeroTrustAgentGateway(
    api_key="your_api_key",
    allowed_categories=["Technology & Computing", "Business and Finance"],
)
allowed, audit = gateway.enforce_zero_trust(
    "https://example.com/admin/settings", agent_id="agent-007"
)
print(f"Decision: {audit['decision']} | Reason: page_type={audit['page_type']}")
class ZeroTrustAgentWrapper {
  constructor(apiKey, allowedCategories, minReputation = 2.0) {
    this.apiKey = apiKey;
    this.allowedCategories = allowedCategories;
    this.minReputation = minReputation;
    this.sessionHistory = [];
  }

  async verifyNavigation(targetURL, agentContext) {
    const response = await fetch(
      "https://www.websitecategorizationapi.com" +
        "/api/iab/iab_web_content_filtering.php",
      {
        method: "POST",
        headers: {
          "Content-Type": "application/x-www-form-urlencoded"
        },
        body: new URLSearchParams({
          query: targetURL,
          api_key: this.apiKey,
          data_type: "url",
          expanded_categories: "1"
        })
      }
    );
    const classification = await response.json();
    const pageType = classification.page_type || "unknown";
    const reputation = parseFloat(classification.open_page_rank || 0);
    const category =
      classification.filtering_taxonomy?.[0]?.[0]
        ?.replace("Category name: ", "") || "Unknown";
    // Zero trust: all three checks must pass
    const categoryOk = this.allowedCategories.includes(category);
    const pageTypeOk = !["login", "admin", "checkout", "settings"]
      .includes(pageType);
    const reputationOk = reputation >= this.minReputation;
    const decision = {
      url: targetURL,
      category,
      pageType,
      reputation,
      checks: { categoryOk, pageTypeOk, reputationOk },
      action: categoryOk && pageTypeOk && reputationOk ? "allow" : "deny",
      timestamp: new Date().toISOString()
    };
    this.sessionHistory.push(decision);
    return decision;
  }
}
Purpose-built domain databases for AI agent filtering. Includes IAB categories, 20+ page types, reputation scores, and popularity rankings. One-time purchase with perpetual license.
10 Million Domains with Page-Type Intelligence
One-time purchase: Perpetual license | Optional Updates: $1,599/year
20 Million Domains with Full Intelligence Suite
One-time purchase: Perpetual license | Optional Updates: $2,999/year
50 Million Domains with Complete Intelligence Suite
One-time purchase: Perpetual license | Optional Updates: $4,999/year
Also available: Enterprise URL Database up to 102M domains from $2,499. View all database tiers →
How 102 million domains from our main Enterprise Database are distributed across IAB v3 taxonomy classifications, spanning Tier 1 through Tier 4. Charts display domain counts for the top 50 out of 700+ categories — the same data your zero trust verification layer will reference.
Traditional network security operates on a castle-and-moat model: everything inside the perimeter is trusted, everything outside is blocked. This model was already crumbling under the pressure of remote work, SaaS adoption, and cloud migration. AI agents deliver the final blow. An autonomous web-browsing agent has no fixed location, no corporate network to anchor to, and no predictable traffic pattern. It is a roaming entity that interacts with arbitrary internet endpoints based on its task context. The only security architecture that accommodates this reality is zero trust.
Zero trust, as formalized by NIST SP 800-207, operates on three principles: verify explicitly, use least-privilege access, and assume breach. Applied to AI agents, these principles translate directly into concrete technical requirements. Verify explicitly means every URL must be classified and evaluated before the agent navigates to it. Least-privilege access means each agent receives only the minimum set of domain categories needed for its assigned task. Assume breach means the system logs every navigation event, monitors for anomalous category transitions, and has the ability to terminate agent sessions in real time.
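The mapping from NIST's three principles to agent configuration can be sketched as a small policy object. This is an illustrative sketch only; the class and field names below are our own, not part of NIST SP 800-207 or any agent framework.

```python
from dataclasses import dataclass


@dataclass
class AgentZeroTrustPolicy:
    """Illustrative mapping of NIST SP 800-207 principles to agent settings."""
    # Verify explicitly: classify every URL before the agent navigates
    verify_every_request: bool = True
    # Least privilege: only these IAB categories are reachable
    allowed_categories: frozenset = frozenset()
    # Assume breach: log all navigation and keep a real-time kill switch
    audit_all_events: bool = True
    kill_switch_enabled: bool = True


policy = AgentZeroTrustPolicy(
    allowed_categories=frozenset({"Business and Finance", "News"})
)
```

A policy like this is defined once per agent role and enforced at every navigation event.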
A zero trust verification layer for AI agents draws on five distinct signals from the URL categorization database, each contributing a different dimension of trust assessment. The first signal is the IAB content category, which tells you what the domain is about. A financial research agent should be visiting "Business and Finance" domains — if it suddenly appears in "Adult" or "Gambling," the category signal triggers a block. The second signal is the web filtering category, which provides a security-oriented classification. Categories like "Malware," "Phishing," and "Spam" are hard blocks regardless of the agent's task scope.
The third signal is page type. Even within an approved category, certain page functions are off-limits for autonomous agents. Login pages, admin panels, settings pages, and checkout flows should never be accessed by an agent without explicit human authorization. The fourth signal is the OpenPageRank score, which serves as a proxy for domain authority and legitimacy. A domain with a PageRank of 0 and no web presence is significantly riskier than a well-established domain with a score of 6+. The fifth signal is global popularity ranking. Domains in the top 100K are well-known, heavily monitored entities. Domains ranked beyond 10M are obscure and may warrant additional scrutiny.
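Combining the five signals into a single decision can be sketched as a short function. The thresholds below (reputation of 2.0, top-10M popularity) and the "review" outcome are illustrative assumptions, not recommendations from the database itself.

```python
HARD_BLOCK_FILTERS = {"Malware", "Phishing", "Spam"}
SENSITIVE_PAGE_TYPES = {"login", "admin", "settings", "checkout"}


def assess_trust(iab_category, filtering_category, page_type,
                 page_rank, popularity_rank, allowed_categories):
    """Fold the five database signals into allow / deny / review."""
    if filtering_category in HARD_BLOCK_FILTERS:
        return "deny"    # signal 2: security category is a hard block
    if iab_category not in allowed_categories:
        return "deny"    # signal 1: content outside the agent's task scope
    if page_type in SENSITIVE_PAGE_TYPES:
        return "deny"    # signal 3: page function unsafe for automation
    if page_rank < 2.0:
        return "deny"    # signal 4: insufficient domain authority
    if popularity_rank is None or popularity_rank > 10_000_000:
        return "review"  # signal 5: obscure domain warrants extra scrutiny
    return "allow"
```

Note the ordering: security categories are evaluated first, so a phishing domain is blocked even if its content category happens to match the agent's scope.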
The most powerful application of zero trust to AI agents is least-privilege category scoping. Instead of deploying an agent with unrestricted internet access, you define a category allowlist that matches the agent's task requirements. A travel booking agent receives access to "Travel," "Maps," and "Weather" categories — nothing else. A competitive intelligence agent receives "Business and Finance," "Technology & Computing," and "News" categories. A customer service agent receives access only to domains that belong to the company and its known partners.
This scoping is possible because the 102M domain database provides granular IAB categorization at four taxonomy tiers. You can scope broadly at Tier 1 (all of "Technology & Computing") or narrowly at Tier 4 (only "Artificial Intelligence > Machine Learning > Natural Language Processing"). The granularity of the taxonomy directly maps to the precision of your zero trust policy. More specific categories mean tighter least-privilege boundaries, which mean lower risk.
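Tiered scoping reduces to prefix matching on taxonomy paths. The helper below is a sketch under the assumption that tier paths use the " > " separator shown in this article; it is not part of the database API.

```python
def within_scope(domain_taxonomy_path, allowed_prefixes):
    """True if the domain's tier path equals or falls under an allowed prefix."""
    return any(
        domain_taxonomy_path == prefix
        or domain_taxonomy_path.startswith(prefix + " > ")
        for prefix in allowed_prefixes
    )


# Broad Tier 1 scope: everything under Technology & Computing
broad = ["Technology & Computing"]
# Narrow Tier 4 scope: only one NLP leaf category
narrow = [
    "Technology & Computing > Artificial Intelligence"
    " > Machine Learning > Natural Language Processing"
]
```

The same function enforces both policies; only the prefix list changes, which is how taxonomy granularity translates directly into least-privilege precision.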
Zero trust is not a one-time check at session start — it is continuous validation throughout the agent's entire operation. Each navigation event is logged with its classification result, building a real-time session profile. This profile enables category drift detection: if an agent starts its session visiting "Technology" domains but gradually migrates toward "Entertainment" or "Social Media" through link chains, the drift detection system flags the anomaly and can pause the session for human review.
Category drift is a particularly insidious risk because it can happen organically. A legitimate technology blog may link to a social media discussion, which links to a user profile, which links to unrelated content. Each individual hop may seem reasonable, but the cumulative drift takes the agent far outside its approved scope. Zero trust continuous validation catches this by evaluating every hop against the original category allowlist, not just the previous hop.
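A drift detector that evaluates every hop against the original allowlist can be sketched as follows. The window size and out-of-scope threshold are hypothetical tuning knobs, not values prescribed by the database.

```python
from collections import deque


class DriftDetector:
    """Flag category drift by checking each hop against the ORIGINAL
    allowlist, plus a rolling window of recent out-of-scope hops."""

    def __init__(self, allowed_categories, window=10, max_out_of_scope=3):
        self.allowed = set(allowed_categories)
        self.recent = deque(maxlen=window)   # True = in scope, False = not
        self.max_out_of_scope = max_out_of_scope

    def record_hop(self, category):
        """Return (in_scope, drifting) for the latest navigation hop."""
        in_scope = category in self.allowed
        self.recent.append(in_scope)
        drifting = self.recent.count(False) >= self.max_out_of_scope
        return in_scope, drifting
```

Because each hop is compared to the original scope rather than the previous page, a chain of individually plausible links still trips the detector once enough hops land outside the allowlist.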
Many organizations start their agent security journey with blocklists — lists of known-bad domains that the agent is prohibited from visiting. While blocklists are a necessary component of any security stack, they are fundamentally insufficient for zero trust. A blocklist only protects against known threats. It does nothing for the millions of legitimate but inappropriate domains that an agent should not visit based on its task scope. Zero trust inverts the model: instead of blocking known-bad, you explicitly allow known-good. The 102M domain database enables this inversion by classifying every domain with enough metadata to make allow/deny decisions at the category level.
The practical difference is dramatic. A blocklist of 100K known-malicious domains blocks 100K domains and allows everything else — including millions of domains that are not malicious but are still inappropriate for the agent's task. A zero trust allowlist based on IAB categories permits only the specific categories the agent needs, which might represent 5M domains out of 102M. The remaining 97M domains are implicitly blocked — not because they are malicious, but because they are outside the agent's authorized scope. This is least-privilege access in action.
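The inversion fits in a few lines. The domain names and category sets below are made up for illustration; the point is that an obscure but non-malicious domain passes a blocklist yet is still denied by a category allowlist.

```python
def blocklist_decision(domain, blocklist):
    """Blocklist model: deny known-bad, implicitly allow everything else."""
    return "deny" if domain in blocklist else "allow"


def allowlist_decision(domain_category, allowed_categories):
    """Zero trust model: allow approved categories, implicitly deny the rest."""
    return "allow" if domain_category in allowed_categories else "deny"


blocklist = {"known-phish.example"}
# Not on any threat feed, so the blocklist lets it through
print(blocklist_decision("obscure-site.example", blocklist))
# Out of the agent's approved scope, so the allowlist denies it
print(allowlist_decision("Hobbies & Interests", {"Business and Finance"}))
```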
Zero trust architectures generate comprehensive audit trails by design, since every access decision is logged. For AI agents, this audit capability is critical for regulatory compliance. When a regulator asks "what websites did your AI agent visit, and why was each visit authorized?" you need a deterministic answer. The URL categorization database provides that answer: the agent visited domain X, which is classified as "Business and Finance > Financial Services," which matches the agent's approved category scope, and the page type is "pricing" which is an allowed page function.
Forensic analysis also benefits from zero trust logging. If an agent causes an incident — triggering a security alert, exposing sensitive data, or making an unauthorized purchase — the audit trail shows exactly which domains the agent visited, what categories those domains belong to, what page types were encountered, and which policy decisions were made at each step. This level of detail enables root cause analysis and policy refinement that would be impossible without the categorization layer.
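Turning the audit trail into a regulator-ready answer is a straightforward log query. The sketch below assumes entries shaped like the gateway's audit records shown earlier; the justification wording is our own.

```python
def compliance_report(audit_log, agent_id):
    """Answer 'which URLs did this agent visit, and why was each
    visit authorized?' from per-navigation audit entries."""
    report = []
    for entry in audit_log:
        if entry["agent_id"] != agent_id:
            continue
        if entry["decision"] == "allow":
            why = (f"categories {entry['categories']} within approved scope; "
                   f"page_type '{entry['page_type']}' is an allowed function")
        else:
            why = (f"denied: category, page_type '{entry['page_type']}', "
                   f"or reputation check failed")
        report.append({
            "url": entry["url"],
            "decision": entry["decision"],
            "justification": why,
        })
    return report
```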
Enterprise organizations are not deploying one agent — they are deploying hundreds or thousands of agents, each with different task scopes, different risk profiles, and different compliance requirements. Zero trust with URL categorization scales naturally to this model because policies are defined per agent role, not per agent instance. A "financial analyst" agent role has its category allowlist, a "marketing researcher" agent role has a different allowlist, and a "customer support" agent role has yet another. Each role inherits its zero trust policy from a centralized configuration, which the security team manages alongside their existing identity and access management infrastructure.
The 102M domain database supports this multi-tenant model because the data is the same for all agents — what changes is the policy layer on top. One database, many policy configurations, zero trust enforced consistently across every agent instance in the fleet. This is the same principle that drives zero trust for human users, extended to autonomous AI agents navigating the open web.
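Centralized role-based policy can be sketched as a single mapping from role to category allowlist. The role names and category choices below are illustrative examples drawn from this article, not a prescribed configuration format.

```python
# One database, many policy configurations: each agent role inherits
# its allowlist from this centrally managed mapping.
ROLE_POLICIES = {
    "financial-analyst": {"Business and Finance", "News"},
    "marketing-researcher": {"Business and Finance",
                             "Technology & Computing", "News"},
    "travel-booking": {"Travel", "Maps", "Weather"},
}


def policy_for(role):
    """Resolve an agent instance's category allowlist from its role."""
    try:
        return ROLE_POLICIES[role]
    except KeyError:
        raise ValueError(f"No zero trust policy defined for role {role!r}")
```

New agent instances never carry their own ad-hoc scopes; they resolve a role at startup, so the security team updates one mapping to retune an entire fleet.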
Any organization deploying autonomous web-browsing agents in production needs zero trust controls. This includes enterprises using Anthropic Computer Use, OpenAI Operator, Google Project Mariner, or custom agent frameworks built on LangChain, CrewAI, or AutoGen. It includes managed service providers operating agents on behalf of clients who demand provable security controls. It includes platform vendors building agent orchestration tools who need to offer built-in governance to win enterprise contracts.
The regulatory pressure is intensifying. The EU AI Act requires transparency and accountability for autonomous AI systems. NIST's AI Risk Management Framework calls for continuous monitoring and risk assessment. SOC 2 auditors are beginning to ask about AI agent controls. Zero trust with URL categorization provides the technical foundation to meet these requirements — not as a future roadmap item, but as a deployable solution that works with the 102M domain database available today.
102 million pre-classified domains provide the verification layer that makes zero trust enforceable at every navigation event. One-time purchase, perpetual license, zero implicit trust.