Rate Limit Overview
Rate limits ensure fair API access for all users and protect our infrastructure from abuse. Limits apply to both requests per second (RPS) and monthly API calls. Understanding these limits helps you design integrations that work reliably at scale.
| Plan | Monthly Calls | Rate Limit (RPS) | Batch Size |
| --- | --- | --- | --- |
| Free | 1,000 | 10 | 100 |
| Starter | 50,000 | 50 | 500 |
| Professional | 500,000 | 200 | 1,000 |
| Business | 2,000,000 | 500 | 1,000 |
| Enterprise | Custom | 1,000+ | Custom |
Rate Limit Headers
Every API response includes headers indicating your current rate limit status:
X-RateLimit-Limit: 200 // Your plan's RPS limit
X-RateLimit-Remaining: 195 // Requests remaining in current window
X-RateLimit-Reset: 1640000000 // Unix timestamp when limit resets
X-Monthly-Limit: 500000 // Your monthly call allocation
X-Monthly-Remaining: 485230 // Calls remaining this month
Monitor these headers to track your usage and implement proactive throttling before hitting limits.
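For example, a small wrapper can pause automatically when the remaining allowance runs low. This is a minimal sketch: the endpoint you pass in and the threshold of 5 remaining requests are illustrative choices, not part of the API.

// Sketch: pause when X-RateLimit-Remaining runs low, resuming after the
// reset timestamp. The threshold (5) is an illustrative value.
async function throttledFetch(url, options = {}) {
  const response = await fetch(url, options);

  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = response.headers.get('X-RateLimit-Reset');

  if (remaining !== null && Number(remaining) < 5) {
    // Wait until the current window resets before allowing further requests
    const waitMs = Math.max(0, Number(reset) * 1000 - Date.now());
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }

  return response;
}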
Handling Rate Limit Errors
When you exceed rate limits, the API returns a 429 status code with a Retry-After header indicating when to retry:
HTTP/1.1 429 Too Many Requests
Retry-After: 2
Content-Type: application/json
{
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Retry after 2 seconds.",
  "retry_after": 2
}
Best Practice
Always implement exponential backoff when handling 429 errors. Start with the Retry-After value, then double the wait time for subsequent retries up to a maximum delay. See our best practices guide for implementation examples.
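A minimal retry helper along these lines (the function name, retry count, and maximum delay are illustrative, not part of our SDK):

// Sketch of exponential backoff for 429 responses. The first retry honors
// Retry-After; each subsequent retry doubles the wait, capped at maxDelayMs.
async function requestWithBackoff(url, options = {}, maxRetries = 5, maxDelayMs = 60000) {
  let delayMs = 0;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (delayMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    const retryAfter = Number(response.headers.get('Retry-After')) || 1;
    delayMs = delayMs === 0 ? retryAfter * 1000 : Math.min(delayMs * 2, maxDelayMs);
  }
  throw new Error('Rate limit retries exhausted');
}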
Strategies for High-Volume Usage
For applications requiring high throughput, consider these strategies:
- Implement caching: Cache categorization results locally to reduce redundant API calls. Most website categories remain stable for 24-48 hours (see the caching sketch after this list).
- Use batch endpoints: Process multiple domains in single requests using our batch API for better efficiency.
- Request queuing: Implement a request queue that respects rate limits, processing requests at your plan's maximum RPS.
- Off-peak processing: For non-time-sensitive batch jobs, process during off-peak hours when system load is lower.
- Upgrade your plan: Higher-tier plans offer significantly increased rate limits for demanding applications.
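As an example of the caching strategy, here is a simple in-memory cache with a 24-hour TTL. The helper name and TTL are assumptions to tune for your workload; it wraps the client's categorize call shown in the implementation example below.

// Sketch: skip the API entirely when a fresh result for the same domain
// is already cached. Swap in Redis or similar for multi-process deployments.
const CACHE_TTL_MS = 24 * 60 * 60 * 1000;
const cache = new Map();

async function categorizeWithCache(client, domain) {
  const cached = cache.get(domain);
  if (cached && Date.now() - cached.fetchedAt < CACHE_TTL_MS) {
    return cached.result;
  }
  const result = await client.categorize(domain); // falls through to the API
  cache.set(domain, { result, fetchedAt: Date.now() });
  return result;
}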
Rate Limit Implementation Example
The client below queues requests and processes them at your plan's maximum RPS. The categorization endpoint URL is a placeholder; substitute the endpoint used by your integration.

class RateLimitedClient {
  constructor(apiKey, maxRPS = 50) {
    this.apiKey = apiKey;
    this.maxRPS = maxRPS;
    this.queue = [];
    this.processing = false;
  }

  // Queue a domain lookup; it resolves once its batch has been processed
  async categorize(domain) {
    return new Promise((resolve, reject) => {
      this.queue.push({ domain, resolve, reject });
      this.processQueue();
    });
  }

  // Drain the queue in batches of maxRPS, one batch per second
  async processQueue() {
    if (this.processing || !this.queue.length) return;
    this.processing = true;
    while (this.queue.length) {
      const batch = this.queue.splice(0, this.maxRPS);
      await this.processBatch(batch);
      await this.delay(1000); // Wait 1 second between batches
    }
    this.processing = false;
  }

  // Send one batch in parallel; the endpoint URL below is a placeholder
  async processBatch(batch) {
    await Promise.all(batch.map(async ({ domain, resolve, reject }) => {
      try {
        const url = `https://api.example.com/v1/categorize?domain=${encodeURIComponent(domain)}`;
        const response = await fetch(url, { headers: { Authorization: `Bearer ${this.apiKey}` } });
        resolve(await response.json());
      } catch (err) {
        reject(err);
      }
    }));
  }

  delay(ms) {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }
}
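Usage might look like the following; the API key and domains are placeholders.

// Example usage: queue several lookups through the rate-limited client
const client = new RateLimitedClient('YOUR_API_KEY', 50);
const domains = ['example.com', 'example.org', 'example.net'];

Promise.all(domains.map((d) => client.categorize(d)))
  .then((results) => console.log(results))
  .catch((err) => console.error(err));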