Rate Limits
Understand and work with Zaits API rate limits to build reliable applications.
Overview
The Zaits API implements rate limiting to ensure fair usage and maintain service quality for all users. Rate limits vary by subscription tier.
Rate Limit Headers
All API responses include rate limit information:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1705317600

X-RateLimit-Limit
Maximum requests allowed in the current window
X-RateLimit-Remaining
Requests remaining in the current window
X-RateLimit-Reset
Unix timestamp when the window resets
Rate Limit Tiers
Rate limits by subscription tier:
Tier         Requests/min   Requests/day   Concurrent requests
Free         50             200            2
Basic        100            2,500          5
Pro          500            20,000         10
Enterprise   1,000          Unlimited      25
Rate Limit Response
When you exceed the rate limit, you'll receive a 429 response:
{
  "success": false,
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please wait before retrying.",
    "details": {
      "limit": 100,
      "retry_after": 30,
      "reset_time": "2024-01-15T14:35:00Z"
    }
  }
}

Headers included:
HTTP/1.1 429 Too Many Requests
Retry-After: 30
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705317600

Handling Rate Limits
Basic Retry Logic
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function callWithRetry(apiCall, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await apiCall();
    } catch (error) {
      if (error.status === 429 && i < maxRetries - 1) {
        // Prefer the server-supplied Retry-After; fall back to
        // exponential backoff (1s, 2s, 4s, ...)
        const retryAfter = error.retryAfter || Math.pow(2, i);
        await sleep(retryAfter * 1000);
        continue;
      }
      throw error;
    }
  }
}

Python Example
import time
import requests
def call_with_retry(api_call, max_retries=3):
    for attempt in range(max_retries):
        try:
            return api_call()
        except requests.HTTPError as e:
            if e.response.status_code == 429 and attempt < max_retries - 1:
                # Prefer the server's Retry-After value; fall back to
                # exponential backoff (1s, 2s, 4s, ...)
                retry_after = int(e.response.headers.get('Retry-After', 2 ** attempt))
                time.sleep(retry_after)
                continue
            raise

Best Practices
1. Monitor Rate Limits
Always check the rate limit headers in responses:
const response = await fetch(apiUrl, options);
const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
const reset = Number(response.headers.get('X-RateLimit-Reset'));

if (remaining < 10) {
  console.warn(`Only ${remaining} requests remaining. Resets at ${new Date(reset * 1000)}`);
}

2. Implement Exponential Backoff
Wait progressively longer between retries:
1st retry: 1 second
2nd retry: 2 seconds
3rd retry: 4 seconds
etc.
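The schedule above can be sketched as a small helper. This is a hypothetical function (not part of any Zaits SDK) that also adds "full jitter", a common refinement so that many clients retrying at once don't all hit the API at the same instant:

```javascript
// Hypothetical helper: capped exponential backoff delay with full jitter.
function backoffDelay(attempt, baseMs = 1000, maxMs = 30000) {
  // Exponential growth: 1s, 2s, 4s, ... capped at maxMs.
  const capped = Math.min(baseMs * 2 ** attempt, maxMs);
  // Full jitter: a random delay in [0, capped) spreads retries out.
  return Math.floor(Math.random() * capped);
}
```

Combined with the retry loop above, `await sleep(backoffDelay(i))` replaces the fixed `Math.pow(2, i)` fallback.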
3. Use Request Queues
For high-volume applications, implement a queue to manage request rate:
Track requests per minute
Queue excess requests
Process queue at sustainable rate
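A minimal sketch of such a queue, assuming a fixed per-minute budget (e.g. 100 requests/min on the Basic tier); the class and method names are illustrative, not part of any Zaits SDK:

```javascript
// Serial request queue that spaces calls out to stay under a budget.
class RequestQueue {
  constructor(requestsPerMinute) {
    this.intervalMs = 60000 / requestsPerMinute; // gap between calls
    this.chain = Promise.resolve();              // serializes queued tasks
  }

  enqueue(task) {
    // Each task runs after the previous one plus the spacing delay.
    const result = this.chain.then(task);
    this.chain = result
      .catch(() => {}) // a failed task must not stall the queue
      .then(() => new Promise(resolve => setTimeout(resolve, this.intervalMs)));
    return result;
  }
}
```

Usage: `const queue = new RequestQueue(100); queue.enqueue(() => callApi(item));` — excess calls simply wait their turn instead of triggering 429s.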
4. Cache Results
Cache API responses when appropriate:
OCR results for static documents
Face verification results (short TTL)
Analysis results for known images
Don't cache:
Real-time verification requests
Liveness detection
Time-sensitive operations
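A simple in-memory TTL cache is enough for many of the cacheable cases above (the class is an illustrative sketch, not a Zaits API): a long TTL suits OCR results for static documents, while face verification results should get a short one.

```javascript
// Minimal in-memory cache where every entry expires after ttlMs.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expires }
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) { // expired: evict and report a miss
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

Check the cache before calling the API, and fall through to a real request only on a miss.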
5. Batch When Possible
If processing multiple items, spread them out:
async function processItems(items) {
  for (const item of items) {
    await processItem(item);
    await sleep(100); // 100 ms delay between requests
  }
}

6. Handle 429 Gracefully
Always handle rate limit errors:
try {
  const result = await apiCall();
} catch (error) {
  if (error.status === 429) {
    // Queue for retry or notify user
    console.log('Rate limited. Please try again in a moment.');
  } else {
    throw error;
  }
}

Endpoint-Specific Limits
Some endpoints have different computational costs:
Heavy Processing (50% of standard limit):
Face Analysis (age, gender, emotion)
Face Landmarks detection
Liveness Detection
Light Operations (2x standard limit):
Usage Analytics
Webhook Management
API Key Management
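A sketch of how these cost factors translate into an effective per-minute budget. The multipliers come from this page; the constant and function names are hypothetical:

```javascript
// Effective request budget per endpoint class, relative to the tier's
// standard limit (heavy endpoints count at 50%, light ones at 2x).
const COST_FACTOR = {
  heavy: 0.5,    // Face Analysis, Face Landmarks, Liveness Detection
  standard: 1.0,
  light: 2.0,    // Usage Analytics, Webhook and API Key Management
};

function effectiveLimit(baseLimit, kind = 'standard') {
  return Math.floor(baseLimit * COST_FACTOR[kind]);
}
```

For example, on the Basic tier a heavy endpoint would allow roughly `effectiveLimit(100, 'heavy')` = 50 requests per window.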
Monitoring Usage
Track your API usage in the dashboard:
Current rate limit status
Historical usage patterns
Peak usage times
Requests remaining
Check programmatically:
curl https://api.zaits.net/v1/usage/summary \
  -H "Authorization: Bearer YOUR_API_KEY"

Common Issues
Issue: Hitting Rate Limits Frequently
Solutions:
Implement caching
Use request queuing
Spread requests over time
Upgrade to higher tier
Issue: Burst Traffic
Solutions:
Implement request queue
Use exponential backoff
Consider Enterprise tier for burst allowance
Issue: Multiple Services Sharing Key
Solutions:
Use distributed rate limiting (Redis)
Create separate API keys per service
Implement centralized API gateway
Rate Limit Tips
Start conservative - Don't use all your limit at once
Monitor headers - Track remaining requests
Implement retries - Always handle 429 errors
Add delays - Space out batch operations
Cache results - Reduce duplicate requests
Upgrade when needed - Don't let limits block your growth
Getting Help
If you need custom rate limits:
Contact support through your dashboard
Enterprise plans include custom limits
Rate limits can be adjusted based on use case