better logs

Bryan Stitt 2022-07-25 22:36:02 +00:00
parent 68190fb3c9
commit 1aa6b4cdb4
5 changed files with 35 additions and 16 deletions

@@ -11,7 +11,7 @@ All other requests are sent to an RPC server on the latest block (alchemy, moral
Each server has different limits to configure. The `soft_limit` is the number of parallel active requests at which a server starts to slow down. The `hard_limit` is the point at which a server starts returning rate-limit or other errors.
```
-$ cargo run --release -p web3-proxy -- --help
+$ cargo run --release -- --help
```
```
Compiling web3-proxy v0.1.0 (/home/bryan/src/web3-proxy/web3-proxy)
@@ -31,7 +31,7 @@ Options:
Start the server with the defaults (listen on `http://localhost:8544` and use `./config/example.toml`, which proxies to a bunch of public nodes):
```
-cargo run --release -p web3-proxy -- --config ./config/example.toml
+cargo run --release -- --config ./config/example.toml
```
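A hypothetical `./config/example.toml` entry illustrating the `soft_limit` / `hard_limit` knobs described above (the table name and values here are assumptions for illustration, not copied from the real file):

```toml
[balanced_rpcs.ankr]
url = "https://rpc.ankr.com/eth"
# parallel in-flight requests before this server starts to slow down
soft_limit = 1_000
# requests before the server starts returning rate-limit or other errors
hard_limit = 2_000
```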
## Common commands
@@ -75,6 +75,7 @@ Run the proxy under gdb for advanced debugging:
cargo build --release && RUST_LOG=web3_proxy=debug rust-gdb --args target/debug/web3-proxy --listen-port 7503 --rpc-config-path ./config/production-eth.toml
+TODO: also enable debug symbols in the release build by modifying the root Cargo.toml
## Load Testing

TODO.md

@@ -58,13 +58,6 @@
- we can improve this by only publishing the synced connections once a threshold of total available soft and hard limits is passed. how can we do this without hammering redis? at least it's only once per block per server
- [x] instead of tracking `pending_synced_connections`, have a mapping of where all connections are individually. then each change, re-check for consensus.
- [x] synced connections swap threshold set to 1 so that it always serves something
-- [ ] if we request an old block, more servers can handle it than we currently use.
-- [ ] instead of the one list of just heads, store our intermediate mappings (rpcs_by_hash, rpcs_by_num, blocks_by_hash) in SyncedConnections. this shouldn't be too much slower than what we have now
-- [ ] remove the if/else where we optionally route to archive and refactor to require a BlockNumber enum
-- [ ] then check syncedconnections for the blockNum. if num given, use the canonical chain to figure out the winning hash
-- [ ] this means if someone requests a recent but not ancient block, they can use all our servers, even the slower ones
-- [ ] nice output when cargo doc is run
-- [ ] basic request method stats
## V1
@@ -79,7 +72,14 @@
- [cancelled] eth_getBlockByNumber and similar calls served from the block map
- will need all Block<TxHash> **and** Block<TransactionReceipt> in caches or fetched efficiently
- so maybe we don't want this. we can just use the general request cache for these. they will only require 1 request and it means requests won't get in the way as much on writes as new blocks arrive.
+- [ ] cli tool for managing users and resetting api keys
- [ ] incoming rate limiting by api key
+- [ ] nice output when cargo doc is run
+- [ ] if we request an old block, more servers can handle it than we currently use.
+- [ ] instead of the one list of just heads, store our intermediate mappings (rpcs_by_hash, rpcs_by_num, blocks_by_hash) in SyncedConnections. this shouldn't be too much slower than what we have now
+- [ ] remove the if/else where we optionally route to archive and refactor to require a BlockNumber enum
+- [ ] then check syncedconnections for the blockNum. if num given, use the canonical chain to figure out the winning hash
+- [ ] this means if someone requests a recent but not ancient block, they can use all our servers, even the slower ones
- [ ] refactor so configs can change while running
- create the app without applying any config to it
- have a blocking future watching the config file and calling app.apply_config() on first load and on change
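The watcher described in the item above could be sketched by polling the config file's mtime. This is a minimal, hedged sketch only: `App` / `apply_config` are hypothetical stand-ins for whatever the refactor ends up with, and a real implementation would more likely use a notify-style filesystem watcher than polling.

```rust
use std::{fs, path::Path, time::SystemTime};

/// Return the file's mtime if it changed since `last_seen`.
/// Starting `last_seen` at `SystemTime::UNIX_EPOCH` makes the
/// first check fire, covering the "apply on first load" case.
fn config_changed(path: &Path, last_seen: SystemTime) -> Option<SystemTime> {
    let modified = fs::metadata(path).ok()?.modified().ok()?;
    (modified > last_seen).then_some(modified)
}

// Usage idea: a blocking task loops, sleeping between checks, and on
// Some(mtime) reads the file and calls the (hypothetical)
// app.apply_config(&contents), updating last_seen to the new mtime.
```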
@@ -110,7 +110,7 @@
- [ ] 60 second timeout is too short. Maybe keep that for the free tier and use a larger timeout for paid. Problem is that some queries can take over 1000 seconds
new endpoints for users:
-- think about where to put this. a separate app might be better. this repo could just have a cli tool for managing users
+- think about where to put this. a separate app might be better
- [ ] GET /user/login/$address
- returns a JSON string for the user to sign
- [ ] POST /user/login/$address

@@ -2,6 +2,7 @@
name = "web3-proxy"
version = "0.1.0"
edition = "2021"
+default-run = "web3-proxy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

@@ -600,6 +600,13 @@ impl Web3Connections {
let mut connection_heads = IndexMap::<String, Arc<Block<TxHash>>>::new();
while let Ok((new_block, rpc)) = block_receiver.recv_async().await {
+    if let Some(current_block) = connection_heads.get(rpc.url()) {
+        if current_block.hash == new_block.hash {
+            // duplicate block
+            continue;
+        }
+    }
    let new_block_hash = if let Some(hash) = new_block.hash {
        hash
    } else {
@@ -810,13 +817,17 @@ impl Web3Connections {
if new_head_block {
    self.chain.add_block(new_block.clone(), true);
    // TODO: include the fastest rpc here?
    info!(
-        "{}/{} rpcs at {} ({}). publishing new head!",
+        "{}/{} rpcs at {} ({}). head at {:?}",
        pending_synced_connections.conns.len(),
        self.conns.len(),
        pending_synced_connections.head_block_hash,
        pending_synced_connections.head_block_num,
+        pending_synced_connections
+            .conns
+            .iter()
+            .map(|x| format!("{}", x))
+            .collect::<Vec<_>>(),
    );
    // TODO: what if the hashes don't match?
    if pending_synced_connections.head_block_hash == new_block_hash {
@@ -833,12 +844,18 @@
        // TODO: mark any orphaned transactions as unconfirmed
    }
} else if num_best_rpcs == self.conns.len() {
+    debug!(
+        "all {} rpcs at {} ({})",
+        num_best_rpcs,
+        pending_synced_connections.head_block_hash,
+        pending_synced_connections.head_block_num,
+    );
} else {
    // TODO: i'm seeing 4/4 print twice. maybe because of http providers?
    // TODO: only do this log if there was a change
    trace!(
        ?pending_synced_connections,
        "{}/{} rpcs at {} ({})",
-        pending_synced_connections.conns.len(),
+        num_best_rpcs,
        self.conns.len(),
        pending_synced_connections.head_block_hash,
        pending_synced_connections.head_block_num,

@@ -156,7 +156,7 @@ mod tests {
use hashbrown::HashMap;
use std::env;
-    use web3_proxy::config::{RpcSharedConfig, Web3ConnectionConfig};
+    use crate::config::{RpcSharedConfig, Web3ConnectionConfig};
use super::*;