Struct governor::state::RateLimiter
pub struct RateLimiter<K, S, C, MW = NoOpMiddleware> { /* private fields */ }
A rate limiter.
This is the structure that ties together the parameters (how many cells to allow in what time period) and the concrete state of rate limiting decisions. This crate ships in-memory state stores, but it’s possible (by implementing the StateStore trait) to make others.
Implementations§
impl<S, C, MW> RateLimiter<NotKeyed, S, C, MW>
where
    S: DirectStateStore,
    C: ReasonablyRealtime,
    MW: RateLimitingMiddleware<C::Instant, NegativeOutcome = NotUntil<C::Instant>>,
§Direct rate limiters - async/await
pub async fn until_ready(&self) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it.
When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
If multiple futures are dispatched against the rate limiter, it is advisable to use until_ready_with_jitter to avoid thundering herds.
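A minimal sketch of a direct limiter driven by until_ready; it assumes the tokio runtime (any async executor works) and the crate’s default features, and the quota and loop count are purely illustrative:

```rust
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

#[tokio::main]
async fn main() {
    // Direct (un-keyed) limiter: at most 10 cells per second.
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(10).unwrap()));

    for i in 0..20 {
        // Resolves immediately if a cell is available; otherwise waits
        // until the limiter would allow one and re-checks.
        limiter.until_ready().await;
        println!("sending request {i}");
    }
}
```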
pub async fn until_ready_with_jitter(&self, jitter: Jitter) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
This method allows for a randomized additional delay between polls of the rate limiter, which can help reduce the likelihood of thundering herd effects if multiple tasks try to wait on the same rate limiter.
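A sketch of the jittered variant, again assuming tokio; the 50 ms jitter bound and quota are arbitrary choices:

```rust
use std::{num::NonZeroU32, time::Duration};
use governor::{Jitter, Quota, RateLimiter};

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(5).unwrap()));

    for i in 0..10 {
        // Each wait adds up to 50 ms of random delay on top of the limiter's
        // own delay, so concurrent tasks don't all wake at the same instant.
        limiter
            .until_ready_with_jitter(Jitter::up_to(Duration::from_millis(50)))
            .await;
        println!("step {i}");
    }
}
```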
pub async fn until_n_ready(&self, n: NonZeroU32) -> Result<MW::PositiveOutcome, InsufficientCapacity>
Asynchronously resolves as soon as the rate limiter allows it.
This is similar to until_ready, except it waits for an arbitrary number n of cells to be available.
Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
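A sketch of waiting for a whole batch at once, assuming tokio; the batch size and quota are invented for illustration:

```rust
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

#[tokio::main]
async fn main() {
    // Burst capacity of 10 cells per second.
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(10).unwrap()));

    // Wait until 4 cells are available together (e.g. a 4-message batch).
    match limiter.until_n_ready(NonZeroU32::new(4).unwrap()).await {
        Ok(_) => println!("batch admitted"),
        // Returned only when `n` exceeds the quota's burst capacity, so the
        // batch could never be admitted no matter how long we wait.
        Err(e) => eprintln!("batch can never be admitted: {e}"),
    }
}
```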
pub async fn until_n_ready_with_jitter(
    &self,
    n: NonZeroU32,
    jitter: Jitter,
) -> Result<MW::PositiveOutcome, InsufficientCapacity>
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
This is similar to until_ready_with_jitter, except it waits for an arbitrary number n of cells to be available.
Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
impl RateLimiter<NotKeyed, InMemoryState, DefaultClock, NoOpMiddleware>
§Direct in-memory rate limiters - Constructors
Here we construct an in-memory rate limiter that makes direct (un-keyed) rate-limiting decisions. Direct rate limiters can be used to e.g. regulate the transmission of packets on a single connection, or to ensure that an API client stays within a service’s rate limit.
pub fn direct(quota: Quota) -> RateLimiter<NotKeyed, InMemoryState, DefaultClock, NoOpMiddleware>
Constructs a new in-memory direct rate limiter for a quota with the default real-time clock.
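A small usage sketch; the quota of 50 cells per second is arbitrary:

```rust
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

fn main() {
    // 50 cells per second, i.e. a burst capacity of 50 that refills continuously.
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(50).unwrap()));

    // Ready to use immediately; the first cell conforms.
    assert!(limiter.check().is_ok());
}
```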
impl<C> RateLimiter<NotKeyed, InMemoryState, C, NoOpMiddleware<C::Instant>>
where
    C: Clock,
pub fn direct_with_clock(quota: Quota, clock: &C) -> Self
Constructs a new direct rate limiter for a quota with a custom clock.
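A sketch of a custom clock used for deterministic tests; it assumes the crate’s FakeRelativeClock test clock and its advance method:

```rust
use std::{num::NonZeroU32, time::Duration};
use governor::clock::FakeRelativeClock;
use governor::{Quota, RateLimiter};

fn main() {
    // A clock that only moves when told to, handy in unit tests.
    let clock = FakeRelativeClock::default();
    let limiter = RateLimiter::direct_with_clock(
        Quota::per_second(NonZeroU32::new(1).unwrap()),
        &clock,
    );

    assert!(limiter.check().is_ok());  // the single cell for this second
    assert!(limiter.check().is_err()); // quota exhausted until time advances
    clock.advance(Duration::from_secs(1));
    assert!(limiter.check().is_ok());  // replenished
}
```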
impl<S, C, MW> RateLimiter<NotKeyed, S, C, MW>
§Direct rate limiters - Manually checking cells
pub fn check(&self) -> Result<MW::PositiveOutcome, MW::NegativeOutcome>
Allow a single cell through the rate limiter.
If the rate limit is reached, check returns information about the earliest time that a cell might be allowed through again.
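A sketch of checking cells manually and printing the error via its Display implementation; the quota and loop count are illustrative:

```rust
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

fn main() {
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(2).unwrap()));

    for i in 0..4 {
        match limiter.check() {
            Ok(_) => println!("cell {i} allowed"),
            // The error carries the earliest time at which a cell could
            // conform again.
            Err(not_until) => println!("cell {i} rejected: {not_until}"),
        }
    }
}
```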
pub fn check_n(
    &self,
    n: NonZeroU32,
) -> Result<Result<MW::PositiveOutcome, MW::NegativeOutcome>, InsufficientCapacity>
Allow only all n cells through the rate limiter: either the whole batch of n cells is admitted, or none of it is.
This method can succeed in only one way and fail in two ways:
- Success: If all n cells can be accommodated, it returns Ok(Ok(())).
- Failure (but ok): Not all cells can make it through at the current time. The result is Ok(Err(NotUntil)), which can be interrogated about when the batch might next conform.
- Failure (the batch can never go through): The rate limit quota’s burst size is too low for the given number of cells to ever be allowed through. The result is Err(InsufficientCapacity).
§Performance
This method diverges a little from the GCRA algorithm, using multiplication to determine the next theoretical arrival time, and so is not as fast as checking a single cell.
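A sketch distinguishing the three outcomes of check_n under an assumed burst capacity of 5:

```rust
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

fn main() {
    // Burst capacity of 5 cells.
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(5).unwrap()));

    // Either all 3 cells are admitted now, or none are.
    match limiter.check_n(NonZeroU32::new(3).unwrap()) {
        Ok(Ok(_)) => println!("batch of 3 admitted"),
        Ok(Err(not_until)) => println!("batch not yet allowed: {not_until}"),
        Err(insufficient) => println!("batch can never fit: {insufficient}"),
    }

    // A batch larger than the burst capacity can never be admitted.
    assert!(limiter.check_n(NonZeroU32::new(6).unwrap()).is_err());
}
```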
impl<K, C> RateLimiter<K, HashMapStateStore<K>, C, NoOpMiddleware<C::Instant>>
pub fn hashmap_with_clock(quota: Quota, clock: &C) -> Self
Constructs a new rate limiter with a custom clock, backed by a HashMap.
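A sketch of a keyed limiter with a test clock, again assuming FakeRelativeClock; the keys here are plain &str values chosen for illustration:

```rust
use std::{num::NonZeroU32, time::Duration};
use governor::clock::FakeRelativeClock;
use governor::{Quota, RateLimiter};

fn main() {
    let clock = FakeRelativeClock::default();
    // Each key gets its own independent rate-limiting state.
    let limiter = RateLimiter::hashmap_with_clock(
        Quota::per_second(NonZeroU32::new(1).unwrap()),
        &clock,
    );

    assert!(limiter.check_key(&"alice").is_ok());
    assert!(limiter.check_key(&"alice").is_err()); // alice's quota is used up
    assert!(limiter.check_key(&"bob").is_ok());    // bob has separate state

    clock.advance(Duration::from_secs(1));
    assert!(limiter.check_key(&"alice").is_ok());  // replenished
}
```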
impl<K, C> RateLimiter<K, DashMapStateStore<K>, C, NoOpMiddleware<C::Instant>>
pub fn dashmap_with_clock(quota: Quota, clock: &C) -> Self
Constructs a new rate limiter with a custom clock, backed by a DashMap.
impl<K, S, C, MW> RateLimiter<K, S, C, MW>
where
    K: Hash + Eq + Clone,
    S: KeyedStateStore<K>,
    C: ReasonablyRealtime,
    MW: RateLimitingMiddleware<C::Instant, NegativeOutcome = NotUntil<C::Instant>>,
§Keyed rate limiters - async/await
pub async fn until_key_ready(&self, key: &K) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it.
When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
If multiple futures are dispatched against the rate limiter, it is advisable to use until_key_ready_with_jitter to avoid thundering herds.
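A sketch of awaiting per-key capacity, assuming tokio and reusing the hashmap_with_clock constructor shown above with the default clock; the key name and quota are illustrative:

```rust
use std::num::NonZeroU32;
use governor::clock::DefaultClock;
use governor::{Quota, RateLimiter};

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::hashmap_with_clock(
        Quota::per_second(NonZeroU32::new(20).unwrap()),
        &DefaultClock::default(),
    );

    for request_id in 0..5u32 {
        // Waits until the "tenant-a" key may emit another cell.
        limiter.until_key_ready(&"tenant-a").await;
        println!("handled request {request_id} for tenant-a");
    }
}
```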
pub async fn until_key_ready_with_jitter(&self, key: &K, jitter: Jitter) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
This method allows for a randomized additional delay between polls of the rate limiter, which can help reduce the likelihood of thundering herd effects if multiple tasks try to wait on the same rate limiter.
pub async fn until_key_n_ready(
    &self,
    key: &K,
    n: NonZeroU32,
) -> Result<MW::PositiveOutcome, InsufficientCapacity>
Asynchronously resolves as soon as the rate limiter allows it.
This is similar to until_key_ready, except it waits for an arbitrary number n of cells to be available.
Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
pub async fn until_key_n_ready_with_jitter(
    &self,
    key: &K,
    n: NonZeroU32,
    jitter: Jitter,
) -> Result<MW::PositiveOutcome, InsufficientCapacity>
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
This is similar to until_key_ready_with_jitter, except it waits for an arbitrary number n of cells to be available.
Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
impl<K> RateLimiter<K, DefaultKeyedStateStore<K>, DefaultClock>
§Keyed rate limiters - default constructors
impl<K> RateLimiter<K, HashMapStateStore<K>, DefaultClock>
impl<K, S, C, MW> RateLimiter<K, S, C, MW>
§Keyed rate limiters - Manually checking cells
pub fn check_key(&self, key: &K) -> Result<MW::PositiveOutcome, MW::NegativeOutcome>
Allow a single cell through the rate limiter for the given key.
If the rate limit is reached, check_key returns information about the earliest time that a cell might be allowed through again under that key.
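A sketch of per-key manual checks, here throttling login attempts per account; the key naming scheme and quota are invented for illustration:

```rust
use std::num::NonZeroU32;
use governor::clock::DefaultClock;
use governor::{Quota, RateLimiter};

fn main() {
    // At most 3 attempts per minute per key.
    let limiter = RateLimiter::hashmap_with_clock(
        Quota::per_minute(NonZeroU32::new(3).unwrap()),
        &DefaultClock::default(),
    );

    for attempt in 1..=5u32 {
        match limiter.check_key(&"login:alice") {
            Ok(_) => println!("attempt {attempt}: allowed"),
            // The error describes the earliest time this key could conform again.
            Err(not_until) => println!("attempt {attempt}: denied ({not_until})"),
        }
    }
}
```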
pub fn check_key_n(
    &self,
    key: &K,
    n: NonZeroU32,
) -> Result<Result<MW::PositiveOutcome, MW::NegativeOutcome>, InsufficientCapacity>
Allow only all n cells through the rate limiter for the given key.
This method can succeed in only one way and fail in two ways:
- Success: If all n cells can be accommodated, it returns Ok(Ok(())).
- Failure (but ok): Not all cells can make it through at the current time. The result is Ok(Err(NotUntil)), which can be interrogated about when the batch might next conform.
- Failure (the batch can never go through): The rate limit is too low for the given number of cells. The result is Err(InsufficientCapacity).
§Performance
This method diverges a little from the GCRA algorithm, using multiplication to determine the next theoretical arrival time, and so is not as fast as checking a single cell.
impl<K, S, C, MW> RateLimiter<K, S, C, MW>
§Keyed rate limiters - Housekeeping
As the inputs to a keyed rate limiter can be arbitrary keys, the set of retained keys grows over time, while the number of active keys may stay smaller. To save space, a keyed rate limiter allows removing keys that are “stale”, i.e., whose state is indistinguishable from that of keys which aren’t present in the rate limiter’s state store at all.
pub fn retain_recent(&self)
Retains all keys in the rate limiter that were used recently enough.
Any key whose rate limiting state is indistinguishable from a “fresh” state (i.e., the theoretical arrival time lies in the past) is removed from the state store.
pub fn shrink_to_fit(&self)
Shrinks the capacity of the rate limiter’s state store, if possible.
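A sketch of periodic housekeeping on a HashMap-backed keyed limiter; the key count and the idea of running this from a maintenance task are assumptions:

```rust
use std::num::NonZeroU32;
use governor::clock::DefaultClock;
use governor::{Quota, RateLimiter};

fn main() {
    let limiter = RateLimiter::hashmap_with_clock(
        Quota::per_second(NonZeroU32::new(100).unwrap()),
        &DefaultClock::default(),
    );

    // Touch many short-lived keys (e.g. client IDs).
    for client_id in 0..10_000u32 {
        let _ = limiter.check_key(&client_id);
    }

    // Later, e.g. from a periodic maintenance task: drop state for keys whose
    // quota has fully replenished, then release the now-unused capacity.
    limiter.retain_recent();
    limiter.shrink_to_fit();
}
```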
impl<K, S, C, MW> RateLimiter<K, S, C, MW>
pub fn new(quota: Quota, state: S, clock: &C) -> Self
Creates a new rate limiter from components.
This is the most generic way to construct a rate limiter; most users should prefer direct or other methods instead.
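A sketch of assembling a limiter from explicit components; it assumes InMemoryState and DefaultClock both implement Default, and the annotated binding pins the default NoOpMiddleware:

```rust
use std::num::NonZeroU32;
use governor::clock::DefaultClock;
use governor::state::{InMemoryState, NotKeyed};
use governor::{Quota, RateLimiter};

fn main() {
    let quota = Quota::per_minute(NonZeroU32::new(30).unwrap());
    let clock = DefaultClock::default();

    // Equivalent in spirit to RateLimiter::direct_with_clock(quota, &clock),
    // but with the state store supplied explicitly.
    let limiter: RateLimiter<NotKeyed, InMemoryState, DefaultClock> =
        RateLimiter::new(quota, InMemoryState::default(), &clock);

    assert!(limiter.check().is_ok());
}
```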
pub fn into_state_store(self) -> S
Consumes the RateLimiter and returns the state store.
This is mostly useful for debugging and testing.
impl<K, S, C, MW> RateLimiter<K, S, C, MW>
pub fn with_middleware<Outer: RateLimitingMiddleware<C::Instant>>(self) -> RateLimiter<K, S, C, Outer>
Convert the given rate limiter into one that uses a different middleware.
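A sketch of swapping in a different middleware, assuming the crate’s StateInformationMiddleware, whose positive outcome is a snapshot of the limiter’s state:

```rust
use std::num::NonZeroU32;
use governor::middleware::StateInformationMiddleware;
use governor::{Quota, RateLimiter};

fn main() {
    // Replace the default NoOpMiddleware with one that reports limiter state
    // on every positive decision.
    let limiter = RateLimiter::direct(Quota::per_second(NonZeroU32::new(10).unwrap()))
        .with_middleware::<StateInformationMiddleware>();

    if let Ok(snapshot) = limiter.check() {
        println!(
            "remaining burst capacity: {}",
            snapshot.remaining_burst_capacity()
        );
    }
}
```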