# Rate Limits

Understanding API rate limits and best practices for staying within them.

The Moqup API implements rate limiting to ensure fair usage and maintain service quality. This guide explains the limits and how to work within them.

## Rate Limit Overview

### Limits by Plan

| Plan | Requests per Minute |
| ---- | ------------------- |
| Free | 60 |
| Solo | 300 |
| Team | 1,000 |

Rate limits are calculated using a sliding 1-minute window based on your API key.
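To stay under the limit proactively, you can mirror this sliding-window accounting on the client. The sketch below is illustrative only (the server's count is authoritative), and `SlidingWindowTracker` is a hypothetical helper, shown here configured for the Solo plan's 300 requests per minute:

```javascript
// Client-side sliding-window tracker (illustrative sketch; the server
// enforces the real limit).
class SlidingWindowTracker {
  constructor(limit = 300, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // send times of recent requests, in ms
  }

  // Returns true if another request fits in the current window,
  // recording it; returns false (recording nothing) otherwise.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Checking `tryAcquire()` before each request lets you queue or delay work locally instead of waiting for the server to reject it.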

## Rate Limit Headers

Every API response includes rate limit headers:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 287
X-RateLimit-Reset: 1641825600
```

### Header Descriptions

| Header | Description |
| ------ | ----------- |
| `X-RateLimit-Limit` | Maximum requests per minute |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when limit resets |
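As a sketch, the three headers can be folded into a single status object. `parseRateLimitHeaders` below is a hypothetical helper (not part of the API) that takes a plain object of lowercased header names:

```javascript
// Turn the raw rate limit headers into useful numbers.
// `headers` is a plain object keyed by lowercased header name.
function parseRateLimitHeaders(headers, now = Date.now()) {
  const limit = parseInt(headers['x-ratelimit-limit'], 10);
  const remaining = parseInt(headers['x-ratelimit-remaining'], 10);
  const resetAtMs = parseInt(headers['x-ratelimit-reset'], 10) * 1000;
  return {
    limit,
    remaining,
    // Seconds until the window resets, never negative.
    secondsUntilReset: Math.max(0, Math.ceil((resetAtMs - now) / 1000)),
  };
}
```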

## Rate Limit Exceeded

### Response

When you exceed the limit, you receive a `429 Too Many Requests` response:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1641825645

{
  "error": {
    "message": "Rate limit exceeded. Please try again later.",
    "code": 429,
    "retryAfter": 45
  }
}
```

### Handling 429 Responses

```javascript
async function apiRequest(url, options, retries = 3) {
  const response = await fetch(url, options);

  if (response.status === 429 && retries > 0) {
    // Respect the server's Retry-After hint; fall back to 60 seconds.
    const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;
    console.log(`Rate limited. Waiting ${retryAfter} seconds...`);

    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return apiRequest(url, options, retries - 1);
  }

  return response;
}
```

## Best Practices

### 1. Implement Exponential Backoff

```javascript
async function requestWithBackoff(fn, maxRetries = 5) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s, 8s, 16s
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
```
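Note that `fetch` resolves rather than rejects on a 429, so `requestWithBackoff` only works if the wrapped function throws an error carrying a `status` property. A minimal wrapper that produces that error shape (`throwOn429` is a hypothetical helper, not part of the API) might look like:

```javascript
// Wrap fetch so that a 429 becomes a thrown error with a `status`
// property, matching what requestWithBackoff checks for.
async function throwOn429(url, options) {
  const response = await fetch(url, options);
  if (response.status === 429) {
    const error = new Error('Rate limit exceeded');
    error.status = 429;
    throw error;
  }
  return response;
}

// Usage:
// const response = await requestWithBackoff(() => throwOn429('/v1/projects'));
```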

### 2. Use Caching

Cache responses to reduce API calls:

```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function cachedFetch(url) {
  const cached = cache.get(url);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` }
  });
  const data = await response.json();

  cache.set(url, { data, timestamp: Date.now() });
  return data;
}
```

### 3. Use Larger Page Sizes

Reduce the number of requests by fetching more items per page:

```javascript
// Instead of:
const smallPage = await fetch('/v1/projects?per_page=10');  // many requests

// Use:
const largePage = await fetch('/v1/projects?per_page=100'); // fewer requests
```
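If you need every item, the larger page size combines naturally with a simple loop. The sketch below assumes the endpoint accepts `page`/`per_page` query parameters and returns a plain JSON array (empty past the last page); adjust it to the API's actual pagination shape if it differs:

```javascript
// Collect all items, fetching perPage at a time. fetchPage(page, perPage)
// should return the array of items for that page.
async function fetchAllProjects(fetchPage, perPage = 100) {
  const all = [];
  for (let page = 1; ; page++) {
    const items = await fetchPage(page, perPage);
    all.push(...items);
    if (items.length < perPage) break; // short page = last page
  }
  return all;
}

// Usage with fetch (API_KEY as defined in your environment):
// const projects = await fetchAllProjects(async (page, perPage) => {
//   const res = await fetch(`/v1/projects?page=${page}&per_page=${perPage}`, {
//     headers: { Authorization: `Bearer ${API_KEY}` }
//   });
//   return res.json();
// });
```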

### 4. Monitor Rate Limit Headers

Track your usage proactively:

```javascript
async function trackedRequest(url, options) {
  const response = await fetch(url, options);

  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

  // Warn when fewer than 20% of requests remain in the window.
  if (remaining < limit * 0.2) {
    console.warn(`Rate limit warning: ${remaining}/${limit} remaining`);
  }

  return response;
}
```

### 5. Limit Concurrent Requests

Control parallelism to avoid bursts:

```javascript
async function processWithLimit(items, fn, concurrency = 5) {
  const results = [];
  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    const batchResults = await Promise.all(batch.map(fn));
    results.push(...batchResults);

    // Small delay between batches
    if (i + concurrency < items.length) {
      await new Promise(r => setTimeout(r, 1000));
    }
  }
  return results;
}
```

## Rate Limit Increases

### Upgrade Plan

For higher limits, upgrade to a higher plan:

- Solo: 5x the Free limit (300/min)
- Team: ~17x the Free limit (1,000/min)

### Temporary Increases

For migrations or one-time bulk operations:

1. Contact support in advance
2. Explain the use case and expected volume
3. We can arrange temporary limit increases

## Troubleshooting

### Constantly Hitting Limits

1. Review request patterns - are you polling too frequently?
2. Implement client-side caching
3. Use larger page sizes
4. Upgrade your plan if needed

### Unexpected 429 Errors

Common causes include:

1. Multiple applications sharing the same API key
2. Background jobs running simultaneously
3. Retry loops that don't respect `Retry-After`

### Debugging Rate Limits

```javascript
async function debugRequest(url) {
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` }
  });

  console.log('Rate Limit Status:', {
    limit: response.headers.get('X-RateLimit-Limit'),
    remaining: response.headers.get('X-RateLimit-Remaining'),
    // X-RateLimit-Reset is a Unix timestamp in seconds.
    reset: new Date(parseInt(response.headers.get('X-RateLimit-Reset'), 10) * 1000)
  });

  return response;
}
```

## Summary

| Do | Don't |
| -- | ----- |
| Cache responses | Poll frequently |
| Use large page sizes | Make many small requests |
| Implement backoff | Retry immediately on 429 |
| Monitor headers | Ignore rate limit warnings |
| Use separate API keys | Share one key across apps |

## Next Steps

1. Authentication - API access
2. API Overview - Getting started