
MCP Real-Time API Server

Connect Claude Desktop, Claude Code, Cursor, and any MCP-compatible AI assistant to live website categorization. Classify any URL in real time, including subpages, across every major industry taxonomy. Your AI gets instant access to website intelligence without leaving the conversation.

What Is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard created by Anthropic that provides a universal way to connect AI assistants to external tools, services, and data sources. Think of it as a standardized bridge between AI models and the outside world.

Before MCP, every AI integration required custom code, proprietary plugins, or complicated middleware. Each AI assistant had its own way of connecting to external services, which meant developers had to build and maintain separate integrations for every tool and every AI platform. MCP changes that by providing a single, open protocol that any AI assistant can use to communicate with any compatible server.

The protocol works through a simple but powerful architecture. An MCP server runs as a lightweight process on your local machine. It communicates with the AI assistant (the MCP client) using a standardized transport layer, typically stdio for local connections. When the AI determines that it needs external data, such as classifying a website URL, it sends a structured request to the MCP server. The server processes that request, which in this case means making a secure API call to the WebsiteCategorizationAPI.com endpoint, and returns the results directly to the AI in a format it can understand and reason about.
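To make the request/response flow concrete, here is a minimal sketch of the JSON-RPC 2.0 message an MCP client sends when it invokes this server's categorize_url tool. The envelope shape (method "tools/call" with a tool name and arguments) comes from the MCP specification; the id value and the example URL are illustrative:

```python
import json

# When the AI decides it needs a classification, the MCP client sends a
# JSON-RPC 2.0 request over the stdio transport. For categorize_url it
# might look like this ("id" is an arbitrary request identifier):
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "categorize_url",
        "arguments": {"url": "https://nytimes.com/politics"},
    },
}

# Messages travel as serialized JSON on stdin/stdout of the server process.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["params"]["name"])  # categorize_url
```

The server's reply follows the same JSON-RPC envelope, with the classification results in the response body, which the AI then reasons over directly.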

This entire process is seamless. You simply ask your AI assistant a question like "What category is nytimes.com/politics?" and the AI automatically invokes the right MCP tool, fetches the classification, and presents the results within the same conversation. There is no copy-pasting API responses, no switching between applications, and no manual data formatting.

MCP is currently supported by several leading AI platforms. Claude Desktop and Claude Code, both built by Anthropic, provide native MCP support. Cursor, the AI-powered code editor, also supports MCP servers natively. And because MCP is an open standard, more clients are being added regularly. Any tool that implements the MCP client specification can connect to this server.

The security model of MCP is designed with privacy in mind. The MCP server runs on your own machine, which means your API keys and credentials never leave your local environment. The server only makes outbound requests to the APIs you have configured, and it never sends your data to any third party. This local-first architecture gives you full control over what data is shared and when.

For developers and power users, MCP opens up a new paradigm of AI-assisted workflows. Rather than treating your AI assistant as an isolated chatbot, you can give it direct access to the tools and data sources it needs to provide genuinely useful, real-time answers. The WebsiteCategorizationAPI MCP server is one example of this: it gives your AI the ability to understand and classify any website on the internet in seconds.

How the MCP Architecture Works

AI Assistant (Claude Desktop, Claude Code, Cursor)
    -> stdio transport ->
Local MCP Server (Python process on your machine)
    -> HTTPS API call ->
WebsiteCategorizationAPI (classification engine)
    -> JSON response ->
Classification Results (IAB, IPTC, Google Shopping, etc.)

What This Server Does

The Real-Time API MCP server classifies any publicly accessible URL on demand. Unlike database lookup approaches that are limited to homepage-level data, this server analyzes URLs in real time, which means it can classify individual subpages with the same depth and accuracy as homepages.

When you ask your AI assistant to classify a URL like stripe.com/pricing or nytimes.com/section/politics, the MCP server fetches that specific page, analyzes its content, and returns multi-taxonomy classifications in seconds. This is fundamentally different from domain-level classification because individual pages on a website can cover vastly different topics. The homepage of a news site might be categorized under "News & Media," but its technology section would correctly be classified under "Technology & Computing" when analyzed at the page level.

The server returns classifications across all major industry taxonomy standards simultaneously. Each request gives you a comprehensive understanding of the URL across multiple classification frameworks, so you never need to make separate calls for different taxonomies. Below are the supported taxonomies and the depth of coverage each one provides.

In addition to taxonomy classifications, every response includes a rich set of enriched metadata. This supplementary data provides context that goes far beyond simple category labels. You receive buyer persona profiles that describe the likely audience of the page, sentiment analysis scores that capture the emotional tone of the content, extracted topics and keywords, named entity recognition results, a list of competitor websites, detected web technologies, and malware screening results. This combination of structured taxonomy data and unstructured enrichment data gives your AI assistant a 360-degree understanding of any URL.
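The sketch below shows the kind of combined payload described above as a Python dict. The field names are hypothetical and chosen for illustration only; consult the API documentation for the actual response schema:

```python
# Illustrative response shape: taxonomy labels plus enrichment data in
# one payload. Field names here are hypothetical, not the real schema.
response = {
    "url": "https://stripe.com/pricing",
    "classifications": {
        "iab_v3": [{"category": "Business & Finance", "tier": 1}],
    },
    "enrichment": {
        "buyer_personas": ["SaaS founders evaluating payment providers"],
        "sentiment": {"label": "neutral", "score": 0.12},
        "topics": ["payments", "pricing"],
        "keywords": ["transaction fees", "checkout"],
        "entities": ["Stripe"],
        "competitors": ["adyen.com"],
        "technologies": ["React"],
        "malware_detected": False,
    },
}

# An AI assistant can reason over both halves together, e.g. combining
# a category label with a malware screen for a quick safety verdict.
is_safe = not response["enrichment"]["malware_detected"]
print(is_safe)  # True
```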

Because the analysis happens in real time, the data is always fresh. There is no stale cache or outdated database to worry about. If a website changed its content five minutes ago, the MCP server will classify the current version. This makes it the right choice for use cases where accuracy and freshness matter more than raw speed.

Supported Taxonomy Standards

IAB Content Taxonomy v2: 698 categories across 4 tiers
IAB Content Taxonomy v3: 703 categories across 4 tiers
IPTC NewsCodes: 1,124 categories across 3 tiers
Google Shopping / Product: 5,474 categories
Shopify Product Taxonomy: 10,560 categories
Amazon Product Taxonomy: 39,004 categories
Web Content Filtering: 44 categories for safety filtering

Enriched Data Included with Every Response

Beyond taxonomy classifications, each API response is packed with supplementary intelligence that gives your AI assistant deep context about any URL.

Buyer Personas, Sentiment Analysis, Topics, Keywords, Named Entities, Competitors, Web Technologies, Malware Detection

Available MCP Tools

The MCP server exposes four tools that your AI assistant can call. Each tool is designed for a specific workflow, from single URL classification to batch processing and account management.

categorize_url

Single URL Classification

This is the primary tool for classifying a single URL. Pass any publicly accessible URL and the server will fetch the page, analyze its content, and return taxonomy classifications along with all enrichment data. This tool works with both root domains (like example.com) and full paths (like example.com/blog/article-title). The response includes classifications from all supported taxonomies in a single call.

Use this tool when you need to understand what a specific webpage is about, verify its content category for ad placement decisions, or gather intelligence about a particular piece of online content. The AI assistant will automatically parse the response and present the most relevant classification data based on the context of your conversation.

Example prompt
# Ask your AI assistant:
"Classify the URL https://stripe.com/payments and tell me its IAB categories."

# The AI calls categorize_url with url="https://stripe.com/payments"
# and returns structured IAB, IPTC, and Google Shopping categories.
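Under the hood, the local MCP server translates that tool call into a single outbound HTTPS request. The sketch below builds (but does not send) such a request using only the standard library; the endpoint path and query parameter name are assumptions for illustration, so check the API documentation for the real ones:

```python
import os
import urllib.parse
import urllib.request

# Hypothetical endpoint path, for illustration only.
API_ENDPOINT = "https://www.websitecategorizationapi.com/api/v1/categorize"

def build_request(url: str) -> urllib.request.Request:
    """Build, without sending, the kind of outbound HTTPS request the
    local MCP server makes for a categorize_url call."""
    params = urllib.parse.urlencode({"url": url})
    req = urllib.request.Request(f"{API_ENDPOINT}?{params}")
    # The API key stays local; it only appears in this outbound header.
    key = os.environ.get("WEBSITE_CATEGORIZATION_API_KEY", "")
    req.add_header("Authorization", f"Bearer {key}")
    return req

req = build_request("https://stripe.com/payments")
print(req.full_url.split("?")[0])
# https://www.websitecategorizationapi.com/api/v1/categorize
```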

categorize_url_with_options

Classification with Expanded Options

This tool extends the basic classification with additional parameters. You can request expanded category details, which include confidence scores for each taxonomy assignment, as well as supplementary data fields. When you need more granular control over the output, such as requesting only specific taxonomies or including confidence thresholds, this is the tool to use.

The expanded output includes confidence scores expressed as percentages for each category assignment, making it possible to assess how certain the classification engine is about each label. This is particularly valuable for automated workflows where you need to set a confidence threshold before taking an action, such as only blocking ads on pages where the brand-safety risk confidence exceeds 80%.

Example prompt
# Ask your AI assistant:
"Categorize https://techcrunch.com/2024/01/ai-startup with expanded categories
 and show me confidence scores for each classification."

# The AI calls categorize_url_with_options with expanded=true
# Returns IAB Tier 1-4 categories each with confidence percentages.
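The confidence-threshold workflow described above can be sketched in a few lines. The dict shape is illustrative, not the exact API schema:

```python
# Filter expanded classifications by a minimum confidence, as in the
# "only act above 80%" workflow described above. Data is illustrative.
classifications = [
    {"taxonomy": "iab_v3", "category": "Technology & Computing", "confidence": 92.5},
    {"taxonomy": "iab_v3", "category": "Business & Finance", "confidence": 41.0},
    {"taxonomy": "iptc", "category": "science and technology", "confidence": 87.3},
]

def above_threshold(items, min_confidence=80.0):
    """Keep only the labels the engine is sufficiently certain about."""
    return [c for c in items if c["confidence"] >= min_confidence]

confident = above_threshold(classifications)
print([c["category"] for c in confident])
# ['Technology & Computing', 'science and technology']
```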

batch_categorize

Batch Classification (Up to 10 URLs)

When you need to classify multiple URLs at once, the batch tool accepts up to 10 URLs in a single request. This is significantly more efficient than calling the single-URL tool repeatedly, because the server processes the batch concurrently and returns all results together. Each URL in the batch receives the same comprehensive classification output as a single-URL request.

Batch classification is ideal for comparative analysis. You might ask your AI assistant to classify a set of competitor homepages and summarize the differences in their content profiles. Or you might provide a list of URLs from an ad campaign report and ask the AI to identify which ones fall into brand-safe categories. The AI receives all classification results at once and can reason across them to provide synthesized insights.

Example prompt
# Ask your AI assistant:
"Classify these 5 URLs and compare their content categories:
 - https://shopify.com
 - https://bigcommerce.com
 - https://woocommerce.com
 - https://squarespace.com
 - https://wix.com/ecommerce"

# The AI calls batch_categorize with all 5 URLs.
# Each URL costs 1 API credit. Total cost: 5 credits.
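The efficiency of batching comes from concurrency: the server classifies the URLs in parallel rather than one after another. The sketch below illustrates the pattern with a thread pool and a stand-in stub; classify() here is not the real API call:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for a real classification call, to illustrate why a
# batch of 10 does not take 10x as long as a single request.
def classify(url: str) -> dict:
    return {"url": url, "category": "stub"}

urls = [
    "https://shopify.com",
    "https://bigcommerce.com",
    "https://woocommerce.com",
]

# All URLs are processed concurrently; results come back in input order.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(classify, urls))

print(len(results))  # 3
```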

check_credits

Monitor Your API Credit Balance

This utility tool returns your current API credit balance and account status. It is useful for keeping track of your usage without leaving the AI conversation. You can ask your assistant to check your remaining credits before running a large batch classification, or set up a habit of checking your balance at the start of each session.

The tool requires no parameters. It simply reads the API key from your local configuration and queries the account endpoint to return your current credit count, account tier, and any usage limits that apply to your plan.

Example prompt
# Ask your AI assistant:
"How many API credits do I have left?"

# The AI calls check_credits and reports your balance.

Step-by-Step Setup

Follow these steps to install and configure the MCP Real-Time API server on your machine. The process takes about 10 minutes and requires no server infrastructure of your own.

1. Create an Account

Visit websitecategorizationapi.com and create a free account. You will need to provide a valid email address and set a password. After submitting the registration form, check your inbox for a verification email and click the confirmation link to activate your account. Your account must be verified before you can generate an API key. Free accounts come with a starter credit allocation so you can test the MCP server immediately.

2. Get Your API Key

Once logged in, navigate to your Profile page. You will find your API key displayed in the account settings section. Copy this key and store it in a secure location. You will need it during the configuration step. Treat your API key like a password. Do not share it publicly, and do not commit it to version control repositories.

3. Install Python 3.10 or Higher

The MCP server is written in Python and requires Python 3.10 or later. Check your current Python version by running python3 --version in your terminal. If you need to install or upgrade Python, visit python.org for download links, or use a package manager like Homebrew on macOS (brew install python), apt on Ubuntu/Debian (sudo apt install python3), or the official installer on Windows.

4. Download the MCP Server Files

Clone or download the MCP server repository to your local machine. Choose a permanent directory for the files, as the AI client will need to reference this path in its configuration. A recommended location is your home directory or a dedicated tools folder.

Terminal
git clone https://github.com/websitecategorizationapi/mcp-realtime-api.git
cd mcp-realtime-api

5. Install Dependencies

Install the required Python packages using pip. The server depends on three main libraries: mcp (the Model Context Protocol SDK), httpx (for making async HTTP requests to the API), and python-dotenv (for loading environment variables from a configuration file). All dependencies are listed in the requirements file for convenience.

Terminal
pip install -r requirements.txt

# Or install individually:
pip install mcp httpx python-dotenv

6. Configure Your API Key

The server reads your API key from a local environment file. Create a .env file in the MCP server directory and add your key. This file stays on your machine and is never transmitted to any third party.

.env
WEBSITE_CATEGORIZATION_API_KEY=your_api_key_here

Alternatively, you can set the environment variable directly in your shell session or system environment variables. On macOS and Linux, add the line export WEBSITE_CATEGORIZATION_API_KEY=your_api_key_here to your ~/.bashrc or ~/.zshrc file.
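Whether the key comes from a .env file (loaded by python-dotenv) or from your shell profile, the server ultimately reads it from the process environment. A stdlib-only sketch of that lookup, with an early, clear failure when the key is missing:

```python
import os

def load_api_key() -> str:
    """Read the API key from the local environment, the same place a
    .env file loader or shell export would put it. Fail early and
    clearly if it is missing."""
    key = os.environ.get("WEBSITE_CATEGORIZATION_API_KEY")
    if not key:
        raise RuntimeError(
            "WEBSITE_CATEGORIZATION_API_KEY is not set. "
            "Add it to your .env file or shell profile."
        )
    return key

# Demo value for illustration only; use your real key in practice.
os.environ.setdefault("WEBSITE_CATEGORIZATION_API_KEY", "demo_key")
print(load_api_key())
```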

7. Configure in Claude Desktop

Open your Claude Desktop configuration file and add the MCP server entry. The configuration file location depends on your operating system. Add the following JSON block to the mcpServers section of your config file.

macOS config file: ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "website-categorization-realtime": {
      "command": "python3",
      "args": ["/Users/yourname/mcp-realtime-api/server.py"],
      "env": {
        "WEBSITE_CATEGORIZATION_API_KEY": "your_api_key_here"
      }
    }
  }
}

On Windows, the config file is located at %APPDATA%\Claude\claude_desktop_config.json. On Linux, check ~/.config/Claude/claude_desktop_config.json. Replace /Users/yourname/mcp-realtime-api/server.py with the actual path where you downloaded the MCP server files. After saving the file, restart Claude Desktop for the changes to take effect.

8. Configure in Claude Code

Claude Code supports adding MCP servers via the command line or through a project configuration file. The fastest approach is the CLI command, which registers the server in a single step.

Terminal - CLI method
claude mcp add website-categorization-realtime \
  -e WEBSITE_CATEGORIZATION_API_KEY=your_api_key_here \
  -- python3 /path/to/mcp-realtime-api/server.py

Alternatively, create or edit the .mcp.json file in your project root directory for project-scoped configuration:

.mcp.json
{
  "mcpServers": {
    "website-categorization-realtime": {
      "command": "python3",
      "args": ["/path/to/mcp-realtime-api/server.py"],
      "env": {
        "WEBSITE_CATEGORIZATION_API_KEY": "your_api_key_here"
      }
    }
  }
}

9. Configure in Cursor

Cursor provides a built-in MCP server management interface. Open Cursor and navigate to Settings (gear icon or Cmd+, on macOS, Ctrl+, on Windows/Linux). Look for the MCP Servers section in the sidebar. Click "Add MCP Server" and fill in the following fields: set the name to website-categorization-realtime, the command to python3, and the arguments to the full path to server.py. Add the WEBSITE_CATEGORIZATION_API_KEY environment variable with your API key value. Save the configuration and the server will start automatically.

10. Test Your Setup

Open your configured AI assistant and try these example prompts to verify everything is working correctly. If the server is properly connected, the AI will invoke the MCP tool automatically and return classification results within a few seconds.

Example prompts to try
# Basic classification
"What category is https://www.bbc.com/news?"

# Subpage classification
"Classify https://stripe.com/pricing and describe the buyer personas."

# Batch comparison
"Compare the content categories of amazon.com, ebay.com, and etsy.com."

# Credit check
"How many API credits do I have remaining?"

# Brand safety check
"Is this URL brand-safe for a children's toy advertiser: https://example.com/article?"

Use Cases

The MCP Real-Time API server unlocks a wide range of workflows by giving your AI assistant the ability to understand website content on demand. Here are some of the most common applications.

Ad-Tech Contextual Targeting

Programmatic advertising platforms need to understand the content of publisher pages before placing ads. Use the MCP server to classify publisher URLs in real time and match them against advertiser targeting requirements. Ask your AI to evaluate whether a specific article page is contextually relevant for a product campaign, or batch-classify a list of placement URLs to identify the strongest contextual matches. The multi-taxonomy output lets you target using IAB categories for standard programmatic workflows or IPTC codes for news-specific campaigns.

Brand Safety Verification

Before placing an ad or forming a partnership, brands need to verify that the destination content aligns with their values. The MCP server returns both content categories and web content filtering labels, which flag potentially harmful or controversial content. Ask your AI assistant to evaluate a list of URLs against your brand safety guidelines and generate a risk report. The sentiment analysis data adds another layer, helping you avoid pages with strongly negative emotional tones even when the topic category itself appears safe.

Content Moderation

Platforms that accept user-submitted URLs, whether in forums, social networks, link aggregators, or messaging apps, need to understand what those links point to. Use the MCP server to classify submitted URLs and flag content that violates your platform policies. The web content filtering taxonomy includes 44 categories specifically designed for safety screening, covering categories like adult content, malware, gambling, violence, and hate speech. Combined with malware detection, you get a comprehensive safety profile for any link.

Competitive Intelligence

Understanding how competitors position their websites and individual pages gives you a strategic advantage. Use the MCP server to classify competitor URLs and compare their content strategies. Ask your AI to analyze a competitor's product pages, blog posts, and landing pages to identify what topics they are investing in, what audience segments they are targeting, and how their content strategy differs from yours. The competitor data field in the API response also surfaces related companies that the classification engine identifies as similar.

SEO and Content Strategy

Content teams can use the MCP server to analyze how search engines and classification systems see their pages. Classify your own URLs to verify that your content is being categorized the way you intend. If your technology blog post is being classified under "Business" rather than "Technology," that signals a content optimization opportunity. Compare your page classifications against top-ranking competitor pages for the same keywords to identify content gaps and alignment opportunities.

Compliance and Regulatory Monitoring

Regulated industries like finance, healthcare, and pharmaceuticals need to monitor the content context where their brand appears. Use the MCP server to classify pages where your ads are running or where your brand is mentioned. Generate compliance reports that document the content categories of every page in an advertising campaign. For financial services companies, this helps demonstrate that ads were not placed alongside prohibited content categories, supporting regulatory audit requirements.

Real-Time API vs. Database Lookup

WebsiteCategorizationAPI.com offers two MCP servers that serve different needs. Understanding when to use each one will help you build the right workflow for your use case.

The Real-Time API server, which is this page's focus, fetches and analyzes URLs live. Every request goes to the actual webpage, scrapes its current content, and runs it through the classification engine. This produces the freshest possible results and supports full URL paths including subpages. The tradeoff is speed: each classification takes 2 to 10 seconds depending on the complexity of the page and how quickly the target server responds.

The Database Lookup MCP server queries a pre-built database of over 100 million categorized domains. Lookups are essentially instant, returning results in milliseconds. However, this approach only works at the domain level. It cannot classify subpages, and the data reflects the last time the domain was crawled and categorized rather than the current live content of the page.

Feature              | Real-Time API (This Server)                                              | Database Lookup
Response time        | 2-10 seconds per URL                                                     | Instant (milliseconds)
URL coverage         | Full URL paths + subpages                                                | Domain-level only
Data freshness       | Always live, real-time                                                   | Last crawl date (periodic updates)
Enrichment data      | Full: personas, sentiment, entities, tech, etc.                          | Categories and basic metadata only
Taxonomies supported | IAB v2, IAB v3, IPTC, Google Shopping, Shopify, Amazon, Web Filtering    | IAB v2, IAB v3, IPTC, Google Shopping, Shopify, Amazon, Web Filtering
Batch support        | Up to 10 URLs per request                                                | Up to 100 domains per request
Cost per lookup      | 1 API credit per URL                                                     | Free (included with database subscription)
Offline operation    | Requires internet                                                        | Local database, no internet needed for lookups
Best for             | Subpage analysis, fresh data, enrichment, ad-hoc research                | Bulk domain classification, speed-critical pipelines, offline use

Not sure which to choose? You can configure both MCP servers simultaneously. Your AI assistant will learn when to use each one based on the context of your request. If you ask about a specific subpage, it will use the Real-Time API. If you ask about a list of domains, it may prefer the Database Lookup for speed.
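The routing heuristic the AI applies can be sketched explicitly: a URL with a path needs the Real-Time API, while a bare domain can use the faster Database Lookup. The server labels returned here are illustrative names, not the actual MCP tool identifiers:

```python
from urllib.parse import urlparse

def pick_server(url: str) -> str:
    """Route subpage URLs to real-time classification and bare domains
    to the database lookup. Illustrative labels, not real tool names."""
    parsed = urlparse(url if "://" in url else f"https://{url}")
    has_path = parsed.path not in ("", "/")
    return "realtime_api" if has_path else "database_lookup"

print(pick_server("example.com"))                    # database_lookup
print(pick_server("https://example.com/blog/post"))  # realtime_api
```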

Simple, Credit-Based Pricing

The MCP Real-Time API server uses the same credit system as the standard WebsiteCategorizationAPI. There are no separate fees, no MCP surcharges, and no hidden costs.

How Credits Work

Each URL classification consumes 1 API credit, regardless of which MCP tool you use. A single-URL classification costs 1 credit. A batch of 10 URLs costs 10 credits. Checking your credit balance is always free and costs 0 credits. Credits are shared across all API access methods: whether you call the API directly via HTTP, use the MCP server through Claude, or access it from any other integration, they all draw from the same credit pool.

New accounts receive a starter credit allocation for testing. When you need more credits, you can purchase them through the pricing page. Volume discounts are available for larger packages.

Frequently Asked Questions

Common questions about the MCP Real-Time API server, its setup, security, and functionality.

Do I need to install anything on my own server or cloud infrastructure?

No. The MCP server runs entirely on your local machine, right alongside your AI assistant. There is no server-side deployment, no Docker containers to manage, and no cloud infrastructure to provision. You install a small Python script on your laptop or desktop, configure it with your API key, and point your AI client (Claude Desktop, Claude Code, or Cursor) to that script. The only external connection the server makes is HTTPS API calls to websitecategorizationapi.com when your AI assistant requests a URL classification. Everything else stays local.

Is my API key secure when using the MCP server?

Yes. Your API key is stored locally on your machine, either in a .env file in the MCP server directory or as a system environment variable. The key is only used to authenticate requests from your local MCP server to the WebsiteCategorizationAPI endpoint. It is never sent to Anthropic, Cursor, or any other third party. The AI assistant itself does not see or store your API key; it only sees the classification results that the MCP server returns. As a best practice, do not commit your .env file to version control, and do not share your API key in chat conversations.

What happens when I am offline or my internet connection drops?

The Real-Time API MCP server requires an active internet connection to function. When your AI assistant calls one of the classification tools, the MCP server makes an outbound HTTPS request to the WebsiteCategorizationAPI. If your machine is offline or the connection is interrupted, the API call will fail and the AI assistant will inform you that the tool could not complete the request. The server will not crash or enter a bad state; it will simply return an error message. Once your connection is restored, you can retry the request. If you need offline classification capabilities, consider using the Database Lookup MCP server instead, which can operate from a locally stored database.

Can I use this with MCP clients other than Claude Desktop and Cursor?

Yes. The MCP server implements the standard Model Context Protocol specification using stdio transport. Any MCP-compatible client can connect to it. While Claude Desktop, Claude Code, and Cursor are the most common clients today, the protocol is open and growing. As new AI assistants and development tools add MCP support, they will be able to connect to this server without any changes to the server code. The configuration process for each client may differ slightly, but the general pattern is the same: point the client to the Python script and provide the environment variable for the API key.

What types of URLs can I classify?

You can classify any publicly accessible URL on the internet. This includes standard website homepages, specific subpages and article URLs, blog posts, product pages, news articles, documentation pages, and landing pages. The URL must be reachable over HTTP or HTTPS. Private pages that require authentication (such as pages behind a login wall), intranet pages, and localhost addresses cannot be classified because the API server needs to fetch and render the page content. URLs that return server errors (like 404 or 500 status codes) will return an error indicating the page could not be analyzed.
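A quick pre-flight check mirroring those constraints can save credits before a request is even made. This is a heuristic sketch, not an exhaustive validator, and the hostname rules here are assumptions about what the API server cannot reach:

```python
from urllib.parse import urlparse

def is_classifiable(url: str) -> bool:
    """Heuristic pre-flight check: public http(s) URLs only. Localhost
    and local-network-looking hosts cannot be fetched by the API."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname or ""
    if host in ("localhost", "127.0.0.1") or host.endswith(".local"):
        return False
    # A public hostname should contain at least one dot.
    return "." in host

print(is_classifiable("https://www.bbc.com/news"))   # True
print(is_classifiable("http://localhost:8000/app"))  # False
```

Note this check cannot catch pages behind a login wall or URLs that return 404/500 errors; those only surface as errors from the API itself.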

How fast are the classification results?

Real-time classification typically takes between 2 and 10 seconds per URL. The exact time depends on several factors: how quickly the target website responds to the fetch request, how much content is on the page, and the current load on the classification engine. Simple pages with lightweight content tend to classify faster, while content-heavy pages or slow-loading sites take longer. Batch requests process URLs concurrently, so a batch of 10 URLs does not take 10 times longer than a single URL. In most cases, a batch of 10 completes within 15 to 25 seconds. If speed is your primary concern and you only need domain-level classification, the Database Lookup MCP server offers instant results.

Can I use both MCP servers at the same time?

Absolutely. You can configure both the Real-Time API server and the Database Lookup server in the same AI client. They register as separate MCP servers with distinct tool names, so there are no conflicts. Your AI assistant will have access to all tools from both servers and can choose the most appropriate one based on your request. For example, if you ask "What categories does example.com fall into?" the AI might use the faster database lookup. But if you ask "Classify this specific article page at example.com/blog/some-article," it will use the real-time API because the database server cannot handle subpage URLs. This gives you the best of both worlds in a single conversation.

What Python version do I need, and does it work on all operating systems?

The MCP server requires Python 3.10 or later. It is tested and supported on macOS, Windows, and Linux. The Python dependencies (mcp, httpx, and python-dotenv) are all cross-platform libraries with no operating-system-specific requirements. If you are on macOS, Python 3 can be installed via Homebrew. On Windows, download the installer from python.org and make sure to check the "Add Python to PATH" option during installation. On Linux distributions, Python 3.10+ is available through your system package manager on most modern distributions, or you can use pyenv for version management.

Ready to Give Your AI Real-Time Website Intelligence?

Create a free account, grab your API key, and have your AI assistant classifying URLs in under 10 minutes. No server setup, no infrastructure, no hassle.

Create Free Account | API Documentation | Database Lookup MCP