web3_proxy
Web3_proxy is a fast caching and load balancing proxy for web3 (Ethereum or similar) JsonRPC servers.
Under construction! This code is under active development. If you want to run this proxy yourself, send me a message on Twitter and I can explain things that aren't documented yet. Most RPC methods are supported, but filters are coming soon. And of course, more tests are always needed.
Signed transactions (eth_sendRawTransaction) are sent in parallel to the configured private RPCs (eden, ethermine, flashbots, etc.).
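As a rough illustration of that fan-out (a sketch only, assuming reqwest with its json feature, serde_json, and futures; the helper name and wiring are not this project's actual code), broadcasting one raw transaction to several private relays in parallel could look like this:
use futures::future::join_all;
use serde_json::json;

/// Hypothetical helper: POST the same signed transaction to every configured
/// private relay at once and collect all of the responses.
async fn broadcast_raw_tx(
    relays: &[String],
    raw_tx: &str,
) -> Vec<Result<reqwest::Response, reqwest::Error>> {
    let client = reqwest::Client::new();
    let body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_sendRawTransaction",
        "params": [raw_tx]
    });

    // Build one request future per relay, then await them all in parallel.
    join_all(
        relays
            .iter()
            .map(|url| client.post(url.as_str()).json(&body).send()),
    )
    .await
}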
All other requests are sent to an RPC server on the latest block (alchemy, moralis, rivet, your own node, or one of many other providers). If multiple servers are in sync, they are prioritized by active_requests/soft_limit. Note that this means that the fastest server is most likely to serve requests and slow servers are unlikely to ever get any requests.
Each server has different limits to configure. The soft_limit is the number of parallel active requests where a server starts to slow down. The hard_limit is where a server starts giving rate limits or other errors.
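As a rough sketch of that prioritization (the type and field names here are illustrative, not the proxy's internal data structures), the next synced server could be picked like this:
/// Hypothetical, simplified view of a backend RPC server; the real proxy
/// tracks much more state than this.
struct Backend {
    name: String,
    active_requests: u64,
    soft_limit: u64,
}

/// Choose the synced backend with the lowest active_requests / soft_limit
/// ratio, mirroring the prioritization described above. A server that is
/// already near its soft_limit ranks behind an idle one.
fn next_backend(synced: &[Backend]) -> Option<&Backend> {
    synced.iter().min_by(|a, b| {
        let ra = a.active_requests as f64 / a.soft_limit as f64;
        let rb = b.active_requests as f64 / b.soft_limit as f64;
        ra.partial_cmp(&rb).unwrap_or(std::cmp::Ordering::Equal)
    })
}
In this sketch the hard_limit is not a ranking input; per the description above, it is the point past which the upstream itself starts returning rate-limit or other errors, so the proxy has to stay below it.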
Quick development
- Run brew install librdkafka or sudo apt-get install librdkafka-dev.
- Run docker-compose up -d to start the database and caches. See docker-compose.yml for details.
- Copy ./config/example.toml to ./config/development.toml and change settings to match your setup.
- Run cargo commands:
$ cargo run --release -- --help
Compiling web3_proxy v0.1.0 (/home/bryan/src/web3_proxy/web3_proxy)
Finished release [optimized + debuginfo] target(s) in 17.69s
Running `target/release/web3_proxy --help`
Usage: web3_proxy [--port <port>] [--workers <workers>] [--config <config>]
web3_proxy is a fast caching and load balancing proxy for web3 (Ethereum or similar) JsonRPC servers.
Options:
--port what port the proxy should listen on
--workers number of worker threads
--config path to a toml of rpc servers
--help display usage information
Start the server with the defaults (listen on http://localhost:8544 and use ./config/development.toml, which uses the database and cache running under docker and proxies to a bunch of public nodes):
cargo run --release -- proxyd
Common commands
Create a user:
cargo run -- --db-url "$YOUR_DB_URL" create_user --address "$USER_ADDRESS_0x"
Check that the proxy is working:
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544
Check that the websocket is working:
$ websocat ws://127.0.0.1:8544
{"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
{"id": 2, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
{"id": 3, "method": "eth_subscribe", "params": ["newPendingFullTransactions"]}
{"id": 4, "method": "eth_subscribe", "params": ["newPendingRawTransactions"]}
You can copy config/example.toml to config/production-$CHAINNAME.toml and then run docker-compose up --build -d to start proxies for many chains.
Compare 3 RPCs:
web3_proxy_cli health_compass https://eth.llamarpc.com https://eth-ski.llamarpc.com https://rpc.ankr.com/eth
Run migrations
Manually running migrations is only really needed during development; the migrations run automatically on application start.
cd migration
cargo run up
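Because the migrations also run on application start, the same migrator is invoked programmatically. A minimal sketch of that pattern with sea-orm-migration (the function and its wiring are illustrative, not this repo's exact startup code):
use migration::Migrator; // the workspace's `migration` crate
use sea_orm::{Database, DatabaseConnection, DbErr};
use sea_orm_migration::MigratorTrait;

/// Connect to the database and apply any pending migrations before serving.
async fn connect_and_migrate(db_url: &str) -> Result<DatabaseConnection, DbErr> {
    let db = Database::connect(db_url).await?;
    // `None` runs every outstanding migration rather than a fixed number of steps.
    Migrator::up(&db, None).await?;
    Ok(db)
}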
Create a user:
web3_proxy_cli --config ... create_user --address 0x0000000000000000000000000000000000000000 --email infra@llamanodes.com --description "..."
Give a user unlimited requests per second:
Copy the ULID key (or UUID key) out of the above command, and put it into the following command.
web3_proxy_cli --config ... change_user_tier_by_key "$RPC_ULID_KEY_FROM_PREV_COMMAND" "Unlimited"
Health compass
Health check 3 servers and error if the first one doesn't match the others.
web3_proxy_cli health_compass https://eth.llamarpc.com/ https://rpc.ankr.com/eth https://cloudflare-eth.com
Adding new database tables
- cargo install sea-orm-cli
- (optional) drop the current dev db
- sea-orm-cli migrate
- sea-orm-cli generate entity -u mysql://root:dev_web3_proxy@127.0.0.1:13306/dev_web3_proxy -o entities/src --with-serde both
- After running the above, you will need to manually fix some columns: Vec<u8> -> sea_orm::prelude::Uuid and i8 -> bool. Related: https://github.com/SeaQL/sea-query/issues/375 and https://github.com/SeaQL/sea-orm/issues/924
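For example, a freshly generated entity might come out like the commented lines below and need the hand edits shown (the table and column names here are made up for illustration):
use sea_orm::entity::prelude::*;

/// Hypothetical generated entity; only the columns that need hand-editing are shown.
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "example_table")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: u64,
    // generated as: pub secret_key: Vec<u8>,
    pub secret_key: Uuid, // manually changed to sea_orm::prelude::Uuid
    // generated as: pub active: i8,
    pub active: bool, // manually changed from i8
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}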
Flame Graphs
Flame graphs make a developer's job of finding slow code painless:
$ cat /proc/sys/kernel/kptr_restrict
1
$ echo 0 | sudo tee /proc/sys/kernel/kptr_restrict
0
$ cat /proc/sys/kernel/perf_event_paranoid
4
$ echo -1 | sudo tee /proc/sys/kernel/perf_event_paranoid
-1
$ CARGO_PROFILE_RELEASE_DEBUG=true cargo flamegraph --bin web3_proxy_cli --no-inline -- proxyd
Be sure to use --no-inline or perf will be VERY slow.
GDB
Developers can run the proxy under gdb for advanced debugging:
cargo build --release && RUST_LOG=info,web3_proxy=debug,ethers_providers::rpc=off rust-gdb --args target/debug/web3_proxy --listen-port 7503 --rpc-config-path ./config/production-eth.toml
TODO: also enable debug symbols in the release build by modifying the root Cargo.toml
Load Testing
Test the proxy:
wrk -s ./wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8544/u/$API_KEY
wrk -s ./wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8544/u/$API_KEY
Test geth (assuming it is on 8545):
wrk -s ./wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8545
wrk -s ./wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8545
Test erigon (assuming it is on 8945):
wrk -s ./wrk/getBlockNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8945
wrk -s ./wrk/getLatestBlockByNumber.lua -t12 -c400 -d30s --latency http://127.0.0.1:8945
Note: Testing with getLatestBlockByNumber.lua is not great because the latest block changes, so one run is likely to be very different from another.
Run ethspam and versus for a more realistic load test:
ethspam --rpc http://127.0.0.1:8544 | versus --concurrency=100 --stop-after=10000 http://127.0.0.1:8544
ethspam --rpc http://127.0.0.1:8544/u/$API_KEY | versus --concurrency=100 --stop-after=10000 http://127.0.0.1:8544/u/$API_KEY