Rate Limits
Understanding API rate limits and best practices for staying within them.
The Moqup API implements rate limiting to ensure fair usage and maintain service quality. This guide explains the limits and how to work within them.
Rate Limit Overview
Limits by Plan
| Plan | Requests per Minute |
|---|---|
| Free | 60 |
| Solo | 300 |
| Team | 1,000 |
Rate limits are calculated using a sliding 1-minute window based on your API key.
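A sliding window means the server counts requests over the trailing 60 seconds rather than resetting a counter at fixed clock boundaries. As a rough client-side illustration (not the server's actual implementation), a sliding-window counter can be sketched as:

```javascript
// Minimal sliding-window counter sketch; timestamps are in milliseconds.
class SlidingWindowLimiter {
  constructor(limit, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request at time `now` would be allowed.
  allow(now = Date.now()) {
    // Drop timestamps that have fallen out of the trailing window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the window slides, capacity frees up gradually as old requests age out, rather than all at once at the top of the minute.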
Rate Limit Headers
Every API response includes rate limit headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 287
X-RateLimit-Reset: 1641825600
```
Header Descriptions
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests per minute |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (in seconds) when the window resets |
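A small helper can turn these headers into a convenient object. `rateLimitInfo` is an illustrative name, not part of any official SDK; it works with anything exposing a `get(name)` method, such as a fetch `Response`'s `headers`:

```javascript
// Pull rate-limit state out of a response's headers.
function rateLimitInfo(headers) {
  return {
    limit: Number(headers.get('X-RateLimit-Limit')),
    remaining: Number(headers.get('X-RateLimit-Remaining')),
    // The reset header is a Unix timestamp in seconds; convert to a Date.
    resetsAt: new Date(Number(headers.get('X-RateLimit-Reset')) * 1000),
  };
}
```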
Rate Limit Exceeded
Response
When you exceed the limit, you receive a 429 Too Many Requests response:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1641825645

{
  "error": {
    "message": "Rate limit exceeded. Please try again later.",
    "code": 429,
    "retryAfter": 45
  }
}
```
Handling 429 Responses
```javascript
// Retry on 429, honoring the server's Retry-After header (in seconds).
async function apiRequest(url, options, retries = 3) {
  const response = await fetch(url, options);
  if (response.status === 429 && retries > 0) {
    // Fall back to 60 seconds if the header is missing or unparsable.
    const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;
    console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return apiRequest(url, options, retries - 1);
  }
  return response;
}
```
Best Practices
1. Implement Exponential Backoff
```javascript
// Retry with exponentially growing delays. Assumes `fn` throws an error
// carrying the HTTP status (e.g. a wrapper that rejects on 429).
async function requestWithBackoff(fn, maxRetries = 5) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s, 8s, 16s
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
```
2. Use Caching
Cache responses to reduce API calls:
```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function cachedFetch(url) {
  const cached = cache.get(url);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data; // served from cache, no API call
  }
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` }
  });
  const data = await response.json();
  cache.set(url, { data, timestamp: Date.now() });
  return data;
}
```
3. Use Larger Page Sizes
Reduce the number of requests by fetching more items per page:
```javascript
// Instead of many small requests:
// const page = await fetch('/v1/projects?per_page=10');

// Use fewer, larger requests:
const page = await fetch('/v1/projects?per_page=100');
```
4. Monitor Rate Limit Headers
Track your usage proactively:
```javascript
async function trackedRequest(url, options) {
  const response = await fetch(url, options);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  // Warn when less than 20% of the window remains.
  if (remaining < limit * 0.2) {
    console.warn(`Rate limit warning: ${remaining}/${limit} remaining`);
  }
  return response;
}
```
5. Limit Concurrent Requests
Control parallelism to avoid bursts:
```javascript
async function processWithLimit(items, fn, concurrency = 5) {
  const results = [];
  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    const batchResults = await Promise.all(batch.map(fn));
    results.push(...batchResults);
    // Small delay between batches to avoid bursts
    if (i + concurrency < items.length) {
      await new Promise(r => setTimeout(r, 1000));
    }
  }
  return results;
}
```
Rate Limit Increases
Upgrade Plan
For sustained higher limits, upgrade your plan:
- Solo: 5x the Free limit (300/min)
- Team: ~17x the Free limit (1,000/min)
Temporary Increases
For migrations or one-time bulk operations:
- Contact support in advance
- Explain the use case and expected volume
- We can arrange temporary limit increases
Troubleshooting
Constantly Hitting Limits
- Review request patterns - are you polling too frequently?
- Implement client-side caching
- Use larger page sizes
- Upgrade plan if needed
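The "use larger page sizes" advice combines naturally with a simple drain loop. This sketch assumes the endpoint accepts `page` and `per_page` query parameters and returns a JSON array per page (check the API reference for the actual response shape); `fetchPage` is an injected helper so the loop itself stays transport-agnostic:

```javascript
// Drain a paginated endpoint using the largest page size.
// `fetchPage(page, perPage)` should resolve to the array of items for that page.
async function fetchAllPages(fetchPage, perPage = 100) {
  const all = [];
  for (let page = 1; ; page++) {
    const items = await fetchPage(page, perPage);
    all.push(...items);
    if (items.length < perPage) break; // a short page means we're done
  }
  return all;
}
```

At `per_page=100`, fetching 1,000 items costs 10 requests instead of 100 at the default page size.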
Unexpected 429 Errors
- Multiple applications sharing the same API key
- Background jobs running simultaneously
- Retry loops that don't respect `Retry-After`
Debugging Rate Limits
```javascript
async function debugRequest(url) {
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` }
  });
  console.log('Rate Limit Status:', {
    limit: response.headers.get('X-RateLimit-Limit'),
    remaining: response.headers.get('X-RateLimit-Remaining'),
    // The header is a Unix timestamp in seconds; convert to a Date
    reset: new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000)
  });
  return response;
}
```
Summary
| Do | Don't |
|---|---|
| Cache responses | Poll frequently |
| Use large page sizes | Make many small requests |
| Implement backoff | Retry immediately on 429 |
| Monitor headers | Ignore rate limit warnings |
| Use separate API keys | Share one key across apps |
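Several rows of this table can be combined into one "polite" request helper that caches responses, honors `Retry-After`, and warns when headroom runs low. This is a sketch, not an official client; `politeFetch` and its option names are illustrative, and the injectable `fetchImpl` just makes the helper easy to test:

```javascript
// A request helper that caches, backs off on 429, and monitors headroom.
async function politeFetch(url, {
  fetchImpl = fetch,
  cache = new Map(),
  cacheTtlMs = 60_000,
  maxRetries = 3,
} = {}) {
  // 1. Serve fresh cache hits without touching the API.
  const hit = cache.get(url);
  if (hit && Date.now() - hit.at < cacheTtlMs) return hit.response;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetchImpl(url);

    // 2. Back off on 429, preferring the server's Retry-After (seconds).
    if (response.status === 429 && attempt < maxRetries) {
      const waitSec = Number(response.headers.get('Retry-After')) || 2 ** attempt;
      await new Promise(r => setTimeout(r, waitSec * 1000));
      continue;
    }

    // 3. Warn when less than 20% of the window remains.
    const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
    const limit = Number(response.headers.get('X-RateLimit-Limit'));
    if (limit && remaining / limit < 0.2) {
      console.warn(`Rate limit headroom low: ${remaining}/${limit}`);
    }

    // 4. Cache successful responses.
    if (response.ok) cache.set(url, { response, at: Date.now() });
    return response;
  }
}
```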
Next Steps
- Authentication - API access
- API Overview - Getting started