
Spike Arrest policy


The Spike Arrest policy protects against traffic spikes. It throttles the number of requests processed by an API proxy and sent to a backend, protecting against performance lags and downtime. See the <Rate> element for a more detailed behavior description. See also "How spike arrest works", below. 

Need help deciding which rate limiting policy to use? See Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies.


While you can attach this policy anywhere in the flow, we recommend attaching it at the immediate entry point of your API proxy: the Request PreFlow of the ProxyEndpoint. That way, traffic is throttled before Edge spends any further processing on it.

[Diagram omitted: the request flow runs through the ProxyEndpoint PreFlow, conditional Flows, and PostFlow, then the TargetEndpoint PreFlow, Flows, and PostFlow; the response flow runs back through the same stages in reverse order.]
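For example, attaching the policy in the ProxyEndpoint Request PreFlow looks like this (a sketch; the step name Spike-Arrest-1 is illustrative):

```xml
<ProxyEndpoint name="default">
  <PreFlow name="PreFlow">
    <Request>
      <!-- Throttle before any other processing happens -->
      <Step>
        <Name>Spike-Arrest-1</Name>
      </Step>
    </Request>
  </PreFlow>
  <!-- ... conditional Flows, PostFlow, HTTPProxyConnection, RouteRule ... -->
</ProxyEndpoint>
```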




<SpikeArrest name="SpikeArrest">
  <Rate>5ps</Rate>
</SpikeArrest>

5 per second. The policy smooths the rate to 1 request allowed every 200 milliseconds (1000 / 5).

<SpikeArrest name="SpikeArrest">
  <Rate>12pm</Rate>
</SpikeArrest>

12 per minute. The policy smooths the rate to 1 request allowed every 5 seconds (60 / 12).

<SpikeArrest name="SpikeArrest">
  <Rate>12pm</Rate>
  <Identifier ref="client_id" />
  <MessageWeight ref="request.header.weight" />
</SpikeArrest>

12 per minute (1 request allowed every 5 seconds, 60 / 12), with a message weight that provides additional throttling for specific clients or apps (captured by the Identifier).

<SpikeArrest name="SpikeArrest">
  <Rate ref="request.header.rate" />
</SpikeArrest>

Setting the rate with a variable in the request. The variable value must be in the form {int}pm or {int}ps.

Check out this Apigee Community post that explains how to set the spike arrest rate using custom variables set in an API product.
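As a sketch of that approach: after a Verify API Key policy runs, custom attributes defined on the API product are exposed as flow variables, which the <Rate> element can reference. The policy name (Verify-API-Key) and attribute name (spikearrest-rate) below are assumptions for illustration:

```xml
<!-- Assumes a VerifyAPIKey policy named Verify-API-Key has already executed,
     and the API product defines a custom attribute named spikearrest-rate
     with a value such as 10pm -->
<SpikeArrest name="SpikeArrest">
  <Rate ref="verifyapikey.Verify-API-Key.apiproduct.spikearrest-rate"/>
</SpikeArrest>
```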

Element reference

Following are elements and attributes you can configure on this policy.

<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-1">
    <DisplayName>Custom label used in UI</DisplayName>
    <Rate>30pm</Rate>
    <Identifier ref="request.header.some-header-name"/>
    <MessageWeight ref="request.header.weight"/>
</SpikeArrest>

<SpikeArrest> attributes

<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-1">

The following attributes are common to all policy parent elements.

name

The internal name of the policy. Characters you can use in the name are restricted to: A-Z0-9._\-$ %. However, the Edge management UI enforces additional restrictions, such as automatically removing characters that are not alphanumeric.

Optionally, use the <DisplayName> element to label the policy in the management UI proxy editor with a different, natural-language name.

Default: N/A
Presence: Required

continueOnError

Set to false to return an error when a policy fails. This is the expected behavior for most policies.

Set to true to have flow execution continue even after a policy fails.

Default: false
Presence: Optional

enabled

Set to true to enforce the policy.

Set to false to "turn off" the policy. The policy is not enforced even if it remains attached to a flow.

Default: true
Presence: Optional

async

This attribute is deprecated.

Default: false
Presence: Deprecated

<DisplayName> element

Use in addition to the name attribute to label the policy in the management UI proxy editor with a different, natural-language name.

<DisplayName>Policy Display Name</DisplayName>


If you omit this element, the value of the policy's name attribute is used.

Presence: Optional
Type: String


<Rate> element

Specifies the rate at which to limit traffic spikes (or bursts). Specify a number of requests that are allowed in per minute or per second intervals. However, keep reading for a description of how the policy behaves at runtime to smoothly throttle traffic. See also "How spike arrest works", below. 

<Rate ref="request.header.rate" />
Default: N/A
Presence: Required
Valid values
  • {int}ps (number of requests per second, smoothed into intervals of milliseconds)
  • {int}pm (number of requests per minute, smoothed into intervals of seconds)

    In rate smoothing, the number of requests is always a whole number greater than zero. Smoothing never involves calculating fractions of requests.


Attributes:

ref

A reference to the variable containing the rate setting, in the form {int}pm or {int}ps.

Default: N/A
Presence: Optional

<Identifier> element

Use the <Identifier> element to uniquely identify and apply spike arrest against individual apps or developers. You can use a variety of variables to indicate a unique developer or app, whether you're using custom variables or predefined variables, such as those available with the Verify API Key policy. See also the Variables reference.

Use in conjunction with <MessageWeight> for more fine-grained control over request throttling.

If you don't use this element, all calls made to the API proxy are counted for spike arrest.
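For example, to count spike arrest per caller IP address rather than per app, you could reference the built-in client.ip flow variable (the rate value here is illustrative):

```xml
<SpikeArrest name="SpikeArrest-PerIP">
  <Rate>30pm</Rate>
  <!-- Each distinct client IP gets its own 30pm allowance -->
  <Identifier ref="client.ip"/>
</SpikeArrest>
```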

This element is also discussed in an Apigee Community post.

<Identifier ref="client_id"/>
Default: N/A
Presence: Optional


Attributes:

ref

A reference to the variable containing the data that identifies the app or developer.

Default: N/A
Presence: Required

<MessageWeight> element

Use in conjunction with <Identifier> to further throttle requests by specific clients or apps.

Specifies the weighting defined for each message. Message weight is used to modify the impact of a single request on the calculation of the Spike Arrest limit. Message weight can be set by variables based on HTTP headers, query parameters, or message body content. For example, if the Spike Arrest Rate is 10pm, and an app submits requests with weight 2, then only 5 messages per minute are permitted from that app.

<MessageWeight ref="request.header.weight"/>
Default: N/A
Presence: Optional


Attributes:

ref

A reference to the variable containing the message weight for the specific app or client.

Default: N/A
Presence: Required

How spike arrest works

Think of Spike Arrest as a way to generally protect against traffic spikes rather than as a way to limit traffic to a specific number of requests. Your APIs and backend can handle a certain amount of traffic, and the Spike Arrest policy helps you smooth traffic to the general amounts you want.

The runtime Spike Arrest behavior differs from what you might expect to see from the literal per-minute or per-second values you enter.

For example, say you enter a rate of 30pm (30 requests per minute). In testing, you might think you could send 30 requests in 1 second, as long as they came within a minute. But that's not how the policy enforces the setting. If you think about it, 30 requests inside a 1-second period could be considered a mini spike in some environments.

What actually happens, then? To prevent spike-like behavior, Spike Arrest smooths the number of full requests allowed by dividing your settings into smaller intervals:

  • Per-minute rates get smoothed into full requests allowed in intervals of seconds.
    For example, 30pm gets smoothed like this:
    60 seconds (1 minute) / 30pm = 2-second intervals, or 1 request allowed every 2 seconds. A second request inside of 2 seconds will fail. Also, a 31st request within a minute will fail.
  • Per-second rates get smoothed into full requests allowed in intervals of milliseconds.
    For example, 10ps gets smoothed like this:
    1000 milliseconds (1 second) / 10ps = 100-millisecond intervals, or 1 request allowed every 100 milliseconds. A second request inside of 100ms will fail. Also, an 11th request within a second will fail.

In rate smoothing, the number of requests is always a whole number greater than zero. Smoothing never involves calculating fractions of requests.

There's more: 1 request * number of message processors

Spike Arrest is not distributed, so request counts are not synchronized across message processors. With more than one message processor, especially those with a round-robin configuration, each handles its own Spike Arrest throttling independently. With one message processor, a 30pm rate smooths traffic to 1 request every 2 seconds (60 / 30). With two message processors (the default for Edge cloud), that number doubles to 2 requests every 2 seconds. So multiply your calculated number of full requests per interval by the number of message processors to get your overall arrest rate.

What is the difference between spike arrest and quota?

Quota policies configure the number of request messages that a client app is allowed to submit to an API over the course of an hour, day, week, or month. The quota policy enforces consumption limits on client apps by maintaining a distributed counter that tallies incoming requests.

Use a quota policy to enforce business contracts or SLAs with developers and partners, rather than for operational traffic management. Use spike arrest to protect against sudden spikes in API traffic. See also Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies.
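For contrast, a minimal Quota policy enforcing a business-level monthly limit might look like this (the count and identifier are illustrative):

```xml
<Quota name="Quota-Monthly">
  <!-- Allow each app 10,000 calls per interval of 1 month -->
  <Allow count="10000"/>
  <Interval>1</Interval>
  <TimeUnit>month</TimeUnit>
  <Identifier ref="client_id"/>
</Quota>
```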

Usage notes

  • In general, you should use Spike Arrest to set a limit that throttles traffic to what your backend services can handle.
  • See also "How spike arrest works". 


Schemas

See our GitHub repository samples for the most recent schemas.

Flow variables

When a Spike Arrest policy executes, the following Flow variable is populated.

For more information about Flow variables, see Variables reference.

Variable: ratelimit.{policy_name}.failed
Type: Boolean
Permission: Read-only
Description: Indicates whether or not the policy failed (true or false).
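If the policy has continueOnError="true", you can branch on this variable later in the flow; for example (the policy names here are illustrative):

```xml
<Step>
  <!-- Runs only when the Spike Arrest policy named Spike-Arrest-1 failed -->
  <Name>AM-Build-Throttle-Response</Name>
  <Condition>ratelimit.Spike-Arrest-1.failed = true</Condition>
</Step>
```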

Error reference

This section describes the error messages and flow variables that are set when this policy triggers an error. This information is important to know if you are developing fault rules for a proxy. To learn more, see What you need to know about policy errors and Handling faults.

Error code prefix

policies.ratelimit

Runtime errors

These errors can occur when the policy executes.

Error name HTTP status Occurs when
SpikeArrestViolation 500 The rate limit is exceeded.
InvalidMessageWeight 500 The message weight value specified via a variable isn't an integer.
FailedToResolveSpikeArrestRate 500 The referenced variable used to specify the rate can't be resolved.

Deployment errors

These errors can occur when you deploy a proxy containing this policy.

Error name Occurs when
InvalidAllowedRate The rate specified in the Spike Arrest policy isn't an integer, or doesn't include ps or pm as a suffix.

Other errors

Error name Occurs when
ErrorLoadingProperties See fault string.

Fault variables

These variables are set when a runtime error occurs. For more information, see What you need to know about policy errors.

Variables set:

[prefix].[policy_name].failed
    [prefix] is ratelimit. [policy_name] is the name of the policy that threw the error.
    Example: ratelimit.SA-SpikeArrestPolicy.failed = true

fault.name Matches "[error_name]"
    [error_name] is the specific error name to check for, as listed in the table above.
    Example: fault.name Matches "SpikeArrestViolation"

Example error response

For error handling, the best practice is to trap the errorcode part of the error response. Do not rely on the text in the faultstring, because it could change.

{
   "fault":{
      "detail":{
         "errorcode":"policies.ratelimit.SpikeArrestViolation"
      },
      "faultstring":"Spike arrest violation. Allowed rate : 10ps"
   }
}

Example fault rule

<FaultRule name="Spike Arrest Errors">
    <Step>
        <!-- AM-SpikeArrestFault is an illustrative AssignMessage policy
             that builds the custom error response -->
        <Name>AM-SpikeArrestFault</Name>
    </Step>
    <Condition>(fault.name Matches "SpikeArrestViolation")</Condition>
</FaultRule>

The current HTTP status code for exceeding the rate limit is 500, but it will soon be changed to 429. Until the change occurs, if you want the status code to be 429, you need to set a property on your organization (features.isHTTPStatusTooManyRequestEnabled). If you're a cloud customer, contact Apigee Support to have the property enabled. See this community article for guidance on the upcoming change.

If you're an Edge for Private Cloud customer, set this property with the following API call:

curl -u email:password -X POST -H "Content-type:application/xml" http://host:8080/v1/o/myorg -d \
'<Organization type="trial" name="MyOrganization">
    <Properties>
        <Property name="features.isHTTPStatusTooManyRequestEnabled">true</Property>
    </Properties>
</Organization>'


For working samples of API proxies, see the Samples list.

