Update README.md

Bryan Stitt 2022-05-06 18:53:16 -07:00 committed by GitHub
parent b60f01d241
commit 8d5f0cee69


@@ -1,31 +1,30 @@
 # web3-proxy
 
 quick and dirty proxy for ethereum rpcs (or similar)
 
-Signed transactions are sent to the configured private RPC (eden, flashbots, etc.). All other requests are sent to the configured primary RPC (alchemy, moralis, rivet, your own node, or one of many other providers).
+Signed transactions are sent to the configured private RPC (eden, flashbots, etc.). All other requests are sent to an RPC server on the latest block (alchemy, moralis, rivet, your own node, or one of many other providers).
 
 ```
-cargo run -- --help
+cargo run --release -- --help
 ```
 ```
-    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
-     Running `target/debug/eth-proxy --help`
-Usage: eth-proxy --eth-primary-rpc <eth-primary-rpc> --eth-private-rpc <eth-private-rpc> [--listen-port <listen-port>]
+    Finished release [optimized] target(s) in 0.13s
+     Running `target/release/web3-proxy --help`
+Usage: web3-proxy [--listen-port <listen-port>] [--rpc-config-path <rpc-config-path>]
 
-Proxy Web3 Requests
+Reach new heights.
 
 Options:
-  --eth-primary-rpc  the primary Ethereum RPC server
-  --eth-private-rpc  the private Ethereum RPC server
-  --listen-port      the port to listen on
+  --listen-port      what port the proxy should listen on
+  --rpc-config-path  what port the proxy should listen on
   --help             display usage information
 ```
 ```
-cargo run -r -- --eth-primary-rpc "https://your.favorite.provider"
+cargo run --release
 ```
 ```
-curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":67}' 127.0.0.1:8845/eth
+curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":67}' 127.0.0.1:8544/eth
 ```
 
 ## Flame Graphs
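The routing described in the updated intro above (signed transactions to the private RPC, everything else to an upstream on the latest block) can be exercised by hand. A minimal sketch, not part of the commit: the port 8544 and the `/eth` path are taken from the curl example above, and the raw transaction hex is a placeholder.

```
# send a pre-signed transaction through the proxy; per the README this class of
# request goes to the private RPC (the "0x02f8..." payload is a placeholder)
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_sendRawTransaction","params":["0x02f8..."],"id":1}' \
  127.0.0.1:8544/eth
```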
@@ -41,8 +40,8 @@ curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","metho
 
 Test the proxy:
 
-    wrk -s ./data/wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8445
-    wrk -s ./data/wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8445
+    wrk -s ./data/wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8544
+    wrk -s ./data/wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8544
 
 Test geth:
@@ -58,14 +57,14 @@ Test erigon:
 ## Todo
 
 - [x] simple proxy
-- [ ] better locking. when lots of requests come in, we seem to be in the way of block updates
+- [x] better locking. when lots of requests come in, we seem to be in the way of block updates
 - [ ] proper logging
-- [ ] load balance between multiple RPC servers
-- [ ] support more than just ETH
-- [ ] option to disable private rpc and send everything to primary
-- [ ] health check nodes by block height
+- [x] load balance between multiple RPC servers
+- [x] support more than just ETH
+- [x] option to disable private rpc and send everything to primary
+- [x] health check nodes by block height
 - [ ] measure latency to nodes
-- [ ] Dockerfile
+- [x] Dockerfile
 - [ ] testing getLatestBlockByNumber is not great because the latest block changes and so one run is likely to be different than another
 - [ ] if a request gets a socket timeout, try on another server
   - maybe always try at least two servers in parallel? and then return the first? or only if the first one doesn't respond very quickly?
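The "health check nodes by block height" and load-balancing items in the list above rest on one simple idea: ask each upstream for its latest block and compare. A hedged shell sketch of that check, not from the repo; both endpoint URLs are example placeholders (the second is geth's default local port).

```
# hypothetical health check: print the latest block number each upstream reports
for rpc in "https://your.favorite.provider" "http://127.0.0.1:8545"; do
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    "$rpc"
  echo " <- $rpc"
done
```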