Rate limits

To ensure fair usage and protect the Emogg REST API from excessive requests, we implement rate limiting using a Token Bucket algorithm. This mechanism helps control the rate at which requests are processed, ensuring a consistent and reliable service for all users.

How it works

The Token Bucket algorithm allows for flexible and efficient control of the request flow, ensuring fair and equitable use of the service.

  1. The Bucket: Each user is given a bucket that represents the limit of requests you can make to the API. This bucket can hold a maximum number of "tokens".
  2. Making Requests: Each endpoint is assigned a points value (the default points value per request for an endpoint is 1). When you make a request to an endpoint, if enough tokens are available in the bucket, the request is allowed and the endpoint's points value is subtracted from the bucket.
  3. Filling with Tokens: Over time, the bucket refills with tokens at a predefined constant rate. If the bucket is full, additional tokens are discarded; you can't accumulate more tokens than the bucket can hold.
  4. Handling Excesses: If you try to make a request but the bucket does not have enough tokens available, the request is not permitted. You must wait until more tokens have been added to the bucket before you can make another request.
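The steps above can be sketched as a small Python class. This is an illustrative model of a token bucket, not the Emogg implementation; the capacity and window values are placeholders.

```python
import time

class TokenBucket:
    """Illustrative token bucket (a sketch, not the Emogg implementation)."""

    def __init__(self, capacity: int, refill_window: float):
        self.capacity = capacity                       # maximum tokens the bucket holds
        self.refill_rate = capacity / refill_window    # tokens added per second
        self.tokens = float(capacity)                  # bucket starts full
        self.updated = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        # Add tokens at a constant rate; overflow past capacity is discarded.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.refill_rate)
        self.updated = now

    def allow(self, points: int = 1) -> bool:
        """Spend `points` tokens if available; otherwise reject the request."""
        self._refill()
        if self.tokens >= points:
            self.tokens -= points
            return True
        return False
```

A request is permitted only while `allow()` returns True; once the bucket is drained, it starts returning False until enough time has passed for the refill to catch up.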

Keeping track of your usage

Each response from the Emogg REST API includes headers that provide information about your current rate limit status:

| Header | Description |
| --- | --- |
| RateLimit-Limit | Indicates the maximum number of tokens in the bucket, representing your maximum request quota in a given timeframe. |
| RateLimit-Remaining | Shows how many requests you can still make before hitting your rate limit. |
| RateLimit-Reset | Indicates how many seconds are left until your bucket is fully refilled. |
| RateLimit-Policy | Provides details about the current rate limit policy in the format {bucket_capacity};w={time_window_in_seconds}. |
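These headers can be read into a small structure for use by your client. The helper below is a sketch; the header names match the table above, and the sample values are hypothetical.

```python
def parse_rate_limit_headers(headers: dict) -> dict:
    """Extract rate limit state from a response's headers (all values are strings)."""
    # RateLimit-Policy has the form "{bucket_capacity};w={time_window_in_seconds}".
    capacity, _, window = headers["RateLimit-Policy"].partition(";w=")
    return {
        "limit": int(headers["RateLimit-Limit"]),
        "remaining": int(headers["RateLimit-Remaining"]),
        "reset_seconds": int(headers["RateLimit-Reset"]),
        "policy_capacity": int(capacity),
        "policy_window": int(window),
    }
```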

Current rate limit

The current rate limit is set to 80 tokens every 60 seconds, though this may change without prior notice. It's important for applications to monitor these headers to adapt to the current rate limit and plan subsequent requests accordingly.

Rate limit exceedance

If you exceed the rate limit, the API will respond with a 429 status code (Too Many Requests) and include a Retry-After header indicating how many seconds you need to wait before making another request.

Best practices

Graceful Handling of 429 Status

Implement logic to handle 429 status codes gracefully, using the Retry-After header to wait before sending new requests.
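One way to sketch this in Python: wrap the request in a retry loop that honors Retry-After. The `send` callable and the response object's `status_code`/`headers` attributes are assumptions about your HTTP client, not part of the Emogg API.

```python
import time

def request_with_retry(send, max_attempts: int = 3):
    """Call `send()` (any function returning a response-like object with
    .status_code and .headers) and retry on 429, waiting per Retry-After."""
    for attempt in range(max_attempts):
        response = send()
        if response.status_code != 429:
            return response
        # Honor the server's Retry-After header; default to 1 second if absent.
        wait = int(response.headers.get("Retry-After", 1))
        if attempt < max_attempts - 1:
            time.sleep(wait)
    return response  # still rate limited after all attempts
```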

Optimize Request Strategy

If possible, optimize your application's request patterns to minimize the risk of hitting the rate limit. This could involve caching responses or aggregating requests.
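As a sketch of the caching idea, a minimal time-to-live cache lets you reuse recent responses instead of spending tokens on identical requests. The cache key and TTL value here are hypothetical choices for your application.

```python
import time

class TTLCache:
    """Minimal time-based cache to avoid repeating identical API calls."""

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_time, value)

    def get(self, key):
        """Return the cached value for `key`, or None if missing or expired."""
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None

    def put(self, key, value) -> None:
        """Store `value` under `key`, valid for the configured TTL."""
        self._store[key] = (time.monotonic() + self.ttl, value)
```

Checking the cache before issuing a request means repeated lookups within the TTL cost zero tokens.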

Monitor Rate Limit Headers

Regularly check the rate limit headers to understand your current quota and adjust your request strategy accordingly.
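One simple adjustment strategy, sketched under the assumption that you have already parsed RateLimit-Remaining and RateLimit-Reset into integers: spread the remaining quota evenly over the time left until the bucket refills.

```python
def pacing_delay(remaining: int, reset_seconds: int, safety_margin: float = 0.1) -> float:
    """Seconds to wait before the next request so the remaining quota
    lasts until the bucket refills. `safety_margin` adds slack."""
    if remaining <= 0:
        return float(reset_seconds)  # out of tokens: wait for the refill
    return (reset_seconds / remaining) * (1 + safety_margin)
```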