* first pass at handling different return data limits
* put ws_provider in an ArcSwap (sketch below)
* add min max_latency
* subscribe with reconnect (sketch below)
* better logging around reconnect
* select on both watches
* subscribe to the correct watch
* wip
* AsRef finally works like I wanted
* actually return the block
* start adding async trait
* remove stale import
* include id in the error response when possible (sketch below)
* remove stale comments
* quick cache and allocate less
* improve /status cache
* prepare to cache raw transaction hashes so we don't DoS our backends
* simple benchmark for /health and /status
* mut not needed with atomics (sketch below)
* DRY all the status pages
* use u64 instead of bytes for subscriptions
* fix setting earliest_retry_at and improve logs
* Revert "use kanal instead of flume or tokio channels (#68)"
This reverts commit 510612d343fc51338a8a4282dcc229b50097835b.
* fix automatic retries
* put Relaxed ordering back
* convert error message time to seconds
* assert instead of debug_assert while we debug
* ns instead of seconds
* disable peak_latency for now
* null is the default
* cargo fmt
* comments
* remove request caching for now
* log on exit
* unit weigher for now
* make cache smaller. we need a weigher for prod. just debugging
* oops. we need async
* add todo
* no need for to_string on a RawValue (sketch below)
* use peak-ewma instead of head for latency calculation (sketch below)
* Implement some suggested changes from PR
* move latency to new package in workspace root
* fix unit tests which now require peak_latency on Web3Rpc
* Switch to atomics for peak-ewma (sketch below)
This change is to avoid locking from tokio::sync::watch.
* add decay calculation to latency reads in peak-ewma
* Add some tests for peak-ewma
* Sensible latency defaults and no blocking when full
* Cleanup and a couple additional comments
* move protected transactions into their own function and DRY stats sending
* cargo upgrade
* comments
* time to live instead of time to idle (cache sketch below)
* minor workaround for eth_chainId
* cargo upgrade
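A minimal sketch of the "put ws_provider in an ArcSwap" item, using the arc_swap crate; the `WsProvider` and `Rpc` names are placeholders, not the project's real types.

```rust
use std::sync::Arc;

use arc_swap::ArcSwapOption;

/// Placeholder for the real websocket provider type.
struct WsProvider;

struct Rpc {
    /// ArcSwapOption lets readers grab the current connection lock-free
    /// while a reconnect task atomically swaps in a replacement.
    ws_provider: ArcSwapOption<WsProvider>,
}

impl Rpc {
    fn current_provider(&self) -> Option<Arc<WsProvider>> {
        // load_full clones the inner Arc, so the caller keeps the provider
        // alive even if a reconnect replaces it mid-request.
        self.ws_provider.load_full()
    }

    fn replace_provider(&self, new_provider: WsProvider) {
        // store publishes the new connection to all readers at once.
        self.ws_provider.store(Some(Arc::new(new_provider)));
    }
}
```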
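A sketch of the "subscribe with reconnect" / "select on both watches" loop, assuming two hypothetical tokio watch channels: one carrying new head block numbers and one signalling that a reconnect was requested. Logging via `tracing` is only for illustration.

```rust
use tokio::sync::watch;

async fn subscribe_with_reconnect(
    mut new_head_rx: watch::Receiver<u64>,
    mut reconnect_rx: watch::Receiver<bool>,
) {
    loop {
        tokio::select! {
            // a new head arrived on the subscription
            changed = new_head_rx.changed() => {
                if changed.is_err() {
                    // the sender side was dropped; stop the task
                    break;
                }
                let head = *new_head_rx.borrow_and_update();
                tracing::debug!("new head: {}", head);
            }
            // someone requested that the websocket be re-established
            changed = reconnect_rx.changed() => {
                if changed.is_err() {
                    break;
                }
                if *reconnect_rx.borrow_and_update() {
                    tracing::info!("reconnect requested");
                    // ... tear down the old provider and resubscribe here ...
                }
            }
        }
    }
}
```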
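For "include id in the error response when possible" and "null is the default", a sketch of echoing the request id back in a JSON-RPC 2.0 error, falling back to null when no id could be parsed; `error_response` is a hypothetical helper, not the project's actual function.

```rust
use serde_json::{json, Value};

fn error_response(id: Option<Value>, code: i64, message: &str) -> Value {
    json!({
        "jsonrpc": "2.0",
        // echo the caller's id when we have it; JSON-RPC uses null for
        // errors where the id could not be determined
        "id": id.unwrap_or(Value::Null),
        "error": { "code": code, "message": message }
    })
}
```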
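For "mut not needed with atomics" and "put Relaxed ordering back": atomics use interior mutability, so a shared reference is enough and `Ordering::Relaxed` suffices for plain counters. A tiny illustrative example:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct RequestCounter {
    total: AtomicU64,
}

impl RequestCounter {
    /// Takes &self, not &mut self: the atomic provides interior mutability,
    /// so no lock (and no exclusive borrow) is needed to bump the counter.
    fn increment(&self) -> u64 {
        // Relaxed ordering is fine for a statistic nothing synchronizes on.
        self.total.fetch_add(1, Ordering::Relaxed) + 1
    }
}
```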
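For "no need for to_string on a RawValue": serde_json's `RawValue` already holds the raw JSON text, so `get()` borrows it as a `&str` without the allocation that `to_string()` would make. Illustrative only:

```rust
use serde_json::value::RawValue;

/// Inspect the raw JSON without allocating a new String.
fn raw_json_len(raw: &RawValue) -> usize {
    // get() returns the underlying &str; to_string() would copy it.
    raw.get().len()
}
```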
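For "use peak-ewma instead of head for latency calculation", a simplified sketch of the peak-EWMA idea (not the project's actual latency crate): the estimate decays exponentially toward fast samples, but a sample slower than the current estimate replaces it immediately, so latency spikes show up at once while recoveries are smoothed.

```rust
use std::time::Duration;

struct PeakEwma {
    /// current latency estimate in nanoseconds
    estimate_ns: f64,
    /// decay window in nanoseconds; larger values smooth more aggressively
    decay_ns: f64,
}

impl PeakEwma {
    fn observe(&mut self, rtt: Duration, since_last_sample: Duration) {
        let rtt_ns = rtt.as_nanos() as f64;

        if rtt_ns > self.estimate_ns {
            // "peak" behaviour: jump straight up to the slower sample
            self.estimate_ns = rtt_ns;
        } else {
            // ordinary EWMA decay toward the faster sample
            let w = (-(since_last_sample.as_nanos() as f64) / self.decay_ns).exp();
            self.estimate_ns = self.estimate_ns * w + rtt_ns * (1.0 - w);
        }
    }

    fn latency(&self) -> Duration {
        Duration::from_nanos(self.estimate_ns as u64)
    }
}
```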
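For "Switch to atomics for peak-ewma": one way to avoid the locking behind `tokio::sync::watch` is to keep the f64 estimate in an `AtomicU64` via `to_bits`/`from_bits`, so reads never block. A sketch of that storage with hypothetical names:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct AtomicLatency {
    /// the f64 estimate, stored as its raw bits so it fits in an atomic
    estimate_bits: AtomicU64,
}

impl AtomicLatency {
    fn new(initial_ns: f64) -> Self {
        Self {
            estimate_bits: AtomicU64::new(initial_ns.to_bits()),
        }
    }

    /// Lock-free read; safe to call from any task without awaiting.
    fn load_ns(&self) -> f64 {
        f64::from_bits(self.estimate_bits.load(Ordering::Relaxed))
    }

    /// Lock-free write from the task that records new samples.
    fn store_ns(&self, estimate_ns: f64) {
        self.estimate_bits
            .store(estimate_ns.to_bits(), Ordering::Relaxed);
    }
}
```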
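Finally, for the cache tuning items ("time to live instead of time to idle", "unit weigher for now", "make cache smaller..."), a sketch using moka as an example cache; the key/value types, sizes, and durations are illustrative, not the project's real configuration.

```rust
use std::time::Duration;

use moka::future::Cache;

fn build_response_cache() -> Cache<u64, String> {
    Cache::builder()
        // expire entries a fixed time after insertion (time_to_live), not
        // after the last access (time_to_idle), so hot entries still refresh
        .time_to_live(Duration::from_secs(60))
        // weigh entries by payload size instead of counting each entry as 1
        .weigher(|_key, value: &String| value.len().try_into().unwrap_or(u32::MAX))
        // max_capacity is total weight (bytes here), not a number of entries
        .max_capacity(64 * 1024 * 1024)
        .build()
}
```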