(make-boolean-collector deref-key)
(make-boolean-collector deref-key initial-value)
Create an atomic, blocking boolean value store.
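A minimal usage sketch (the namespace alias `m` and the exact `record!` arity are assumptions, not shown in this reference):

```clojure
;; assumption: this metrics namespace is aliased as `m`,
;; and `m/record!` stores a new value (arity assumed)
(def healthy? (m/make-boolean-collector :healthy? true))

(m/record! healthy? false)  ; assumed: replace the stored boolean
@healthy?                   ; deref reports the value under :healthy?
```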
(make-boolean-counter deref-truthy-key deref-falsy-key)
(make-boolean-counter deref-truthy-key deref-falsy-key options)
Create a metrics collector that stores the count of truthy and falsy values separately, and on deref
returns a map
as follows:
{deref-truthy-key truthy-count
deref-falsy-key falsy-count}
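For illustration, a sketch of recording both outcomes (alias `m` and the `record!` collector-plus-value arity are assumptions):

```clojure
;; assumption: alias `m`; `m/record!` taking collector and value is assumed
(def outcomes (m/make-boolean-counter :success-count :failure-count))

(m/record! outcomes true)   ; counted as truthy
(m/record! outcomes false)  ; counted as falsy
@outcomes  ; per the docstring, e.g. {:success-count 1 :failure-count 1}
```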
(make-dummy-collector)
(make-dummy-collector {:keys [count-val deref-val]
:or {count-val 0 deref-val {}}})
Create dummy metrics collector with specified mock return values.
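A sketch of using the dummy collector as a stand-in during tests (alias `m` assumed; the `:requests` key is a hypothetical mock value):

```clojure
;; assumption: alias `m`; mock values are returned as-is
(def stub (m/make-dummy-collector {:count-val 0
                                   :deref-val {:requests 42}}))
@stub  ; returns the mock :deref-val, i.e. {:requests 42}
```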
(make-integer-counter deref-key)
(make-integer-counter deref-key
{:keys [initial-value counter-shard-count shard-count]
:or {initial-value 0}})
Create an atomic, blocking long integer counter.
Required arguments:
deref-key - the reporting keyword to associate the value with upon deref
Optional arguments:
:initial-value (long) - Value to initialize the counter with; default 0.
:shard-count (int) - Number of shards to create to reduce contention; auto-detect by default.
:counter-shard-count - overrides :shard-count
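A usage sketch (alias `m` assumed; the no-value `record!` increment arity is an assumption):

```clojure
;; assumption: alias `m`; `m/record!` increments the counter
(def requests (m/make-integer-counter :request-count))
(def sharded  (m/make-integer-counter :request-count {:shard-count 8}))

(m/record! requests)  ; assumed: increment by one
@requests             ; a map under the deref-key, e.g. {:request-count 1}
```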
(make-rolling-boolean-counter deref-truthy-key deref-falsy-key bucket-count)
(make-rolling-boolean-counter deref-truthy-key
deref-falsy-key
bucket-count
{:keys [bucket-interval buckets-truthy-key
buckets-falsy-key deref-head? event-id-fn
shard-count]
:or {bucket-interval 1000
deref-head? false
event-id-fn u/now-millis
shard-count 0}})
Create bucketed rolling count collector for truthy and falsy values respectively. Optional args default to making a per-second counter.
Arguments:
deref-truthy-key (keyword) key to associate the truthy count with (upon deref)
deref-falsy-key (keyword) key to associate the falsy count with (upon deref)
bucket-count (integer) number of buckets in the buffer
Options:
:bucket-interval (integer) difference between min and max possible event IDs in any bucket (default 1000 = 1 second)
:buckets-truthy-key (keyword) key to associate the truthy buckets data with in the deref result (nil omits bucket data)
:buckets-falsy-key (keyword) key to associate the falsy buckets data with in the deref result (nil omits bucket data)
:deref-head? (boolean) whether to query even the current bucket during deref (false by default)
:event-id-fn (function) no-arg fn returning the latest event ID (current time in milliseconds by default)
:shard-count (integer) number of shards to split the write load across
(make-rolling-integer-counter deref-key bucket-count)
(make-rolling-integer-counter deref-key
bucket-count
{:keys [bucket-interval buckets-key deref-head?
event-id-fn shard-count]
:or {bucket-interval 1000
deref-head? false
event-id-fn u/now-millis
shard-count 0}})
Create bucketed rolling count collector. Optional args default to making a per-second counter.
Arguments:
deref-key (keyword) key to associate the count with (upon deref)
bucket-count (integer) number of buckets in the buffer
Options:
:bucket-interval (integer) difference between min and max possible event IDs in any bucket (default 1000 = 1 second)
:buckets-key (keyword) key to associate the buckets data with in the deref result (nil omits bucket data)
:deref-head? (boolean) whether to query even the current/head bucket during deref (false by default)
:event-id-fn (function) no-arg fn returning the latest event ID (current time in milliseconds by default)
:shard-count (integer) number of shards to split the write load across
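A sketch of a per-second rolling counter over a one-minute window (alias `m` and the `record!` increment arity are assumptions):

```clojure
;; 60 one-second buckets => counts over roughly the last minute
;; assumption: alias `m`; `m/record!` increments the current bucket
(def reqs (m/make-rolling-integer-counter :requests 60))

(m/record! reqs)  ; assumed: increment the head bucket
@reqs             ; e.g. {:requests <count across the retained buckets>}
```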
(make-rolling-max-collector deref-key
bucket-count
{:keys [bucket-interval buckets-key deref-head?
event-id-fn shard-count]
:or {bucket-interval 1000
deref-head? false
event-id-fn u/now-millis
shard-count 0}})
Create bucketed rolling max-value collector. Optional args default to making a per-second collector.
Arguments:
deref-key (keyword) key to associate the max value with (upon deref)
bucket-count (integer) number of buckets in the buffer
Options:
:bucket-interval (integer) difference between min and max possible event IDs in any bucket (default 1000 = 1 second)
:buckets-key (keyword) key to associate the buckets data with in the deref result (nil omits bucket data)
:deref-head? (boolean) whether to query even the current bucket during deref (false by default)
:event-id-fn (function) no-arg fn returning the latest event ID (current time in milliseconds by default)
:shard-count (integer) number of shards to split the write load across
(make-rolling-percentile-collector deref-key percentiles bucket-count)
(make-rolling-percentile-collector deref-key
percentiles
bucket-count
{:keys [bucket-interval bucket-capacity
buckets-key deref-head? event-id-fn
shard-count]
:or {bucket-interval 1000
bucket-capacity 128
deref-head? false
event-id-fn u/now-millis
shard-count 0}})
Create bucketed rolling percentile collector. Optional args default to making a per-second collector.
Arguments:
deref-key (keyword) key to associate the result with (upon deref)
percentiles (seqable) list of percentiles to calculate
bucket-count (integer) number of buckets in the buffer
Options:
:bucket-interval (integer) difference between min and max possible event IDs in any bucket (default 1000 = 1 second)
:bucket-capacity (integer) max number of values in every bucket (default 128)
:buckets-key (keyword) key to associate the buckets data with in the deref result (nil omits bucket data)
:deref-head? (boolean) whether to query the current bucket during deref (false by default)
:event-id-fn (function) no-arg fn returning the latest event ID (current time in milliseconds by default)
:shard-count (integer) number of shards to split the write load across
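A sketch of tracking latency percentiles over a one-minute window (alias `m` and the `record!` value arity are assumptions):

```clojure
;; p50/p95/p99 over 60 one-second buckets
;; assumption: alias `m`; `m/record!` taking an observed value is assumed
(def latency (m/make-rolling-percentile-collector :latency [50 95 99] 60))

(m/record! latency 42)  ; record an observed latency sample
@latency                ; percentile data reported under :latency
```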
(make-sharding-collector f collectors)
Create a unified collector that randomly shards `record!` calls across the given metrics collectors. When the number of collectors is at least the number of CPU cores, concurrent `record!` operations may be mostly contention-free. NOTE: Works only when the `record!` operations on all collectors are associative and commutative, e.g. shared counters.
(make-union-collector collectors)
Create a super collector from one or more collectors, such that invoking it propagates the request to all of them. Calling `deref` on the union collector simply merges the `deref` results from all constituent collectors. Beware that other protocols implemented by the original collectors are unavailable in the union collector.
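A sketch combining two collectors into one (alias `m` assumed; passing the collectors as a vector is an assumption about the `collectors` argument):

```clojure
;; assumption: alias `m`; `collectors` accepts a seq of collectors
(def combined (m/make-union-collector [(m/make-integer-counter :total)
                                       (m/make-boolean-counter :ok :failed)]))
@combined  ; merged deref maps, e.g. {:total 0 :ok 0 :failed 0}
```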
(resolve-shard-count shard-count)
Resolve the shard count. The arguments `:detect` and `:detect-java7` return a value up to 128, based on the number of available CPU cores.
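For illustration (alias `m` assumed; the behavior for an explicit integer is presumed pass-through, which this reference does not state):

```clojure
;; assumption: alias `m`
(m/resolve-shard-count 16)       ; presumably returns an explicit count as-is
(m/resolve-shard-count :detect)  ; up to 128, based on available CPU cores
```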