User Balance + Referral Logic (#44)
* will implement balance topup endpoint
* will quickly fix other PR reviews
* merging from master
* will finish up godmode
* will finish up login
* added logic to top up balance (first iteration)
* should implement additional columns soon (currency, amount, tx-hash), as well as a new table for spend
* updated migrations, will account for spend next
* get back to this later
* will merge PR from stats-v2
* stats v2: rebased all my commits and squashed them down to one
* cargo upgrade
* added migration for spend in accounting table. will run test-deposit next
* trying to get request from polygon
* first iteration: /user/balance/:tx_hash works, need to add accepted tokens next
* creating the referral code seems to work
* will now check if spending enough credits will lead to both parties receiving credits
* rpcstats takes care of accounting for spend data
* removed track spend from table
* Revert "removed track spend from table". This reverts commit a50802d6ae75f786864c5ec42d0ceb2cb27124ed.
* Revert "rpcstats takes care of accounting for spend data". This reverts commit 1cec728bf241e4cfd24351134637ed81c1a5a10b.
* removed rpc request table entity
* updated referral code to use ULIDs
* credits used are aggregated
* added a bunch of fields to referrer
* added database logic whenever an aggregate stat is added. will have to iterate over this a couple of times, I think. next: (1) detect accepted stables, (2) fix the influxdb bug, and (3) start writing tests
* removed track spend, as this will occur in the database
* will first work on "balance", then referral. these should really be treated as two separate PRs (although already convoluted)
* balance logic initial commit
* breaking WIP, changing the RPC call logic functions
* will start testing next
* got rid of warnings & lint
* will proceed with subtracting / adding to balance
* added decimal points, balance tracking seems to work
* will beautify code a bit
* removed deprecated dependency, and added topic + deposit contract to app.yaml
* brownie test suite does not rely on local contract files; it pulls everything from polygonscan
* will continue with referral
* should perhaps (in a future revision) record how much the referees got for free. marking referrals seems to work right now
* user is upgraded to premium if they deposit more than $10; we don't accept more than $10M in a single tx (see the sketch after this message)
* will start PR; referral seems to be fine so far, perhaps up to some numbers that still may need tweaking
* will start PR
* removed rogue comments, cleaned up payments a bit
* changes before PR
* apply stats
* added unique constraint
* some refactoring so that the user file is not too bloated
* compiling
* progress with subusers, creating a table entry seems to work
* good response type is there as well now; will work on getters from primary user and secondary user next
* subuser logic also seems fine now
* downgrade logic
* fixed bug: influxdb does not support different types in the same query (which makes sense)
* WIP temporary commit
* merging with PR
* Delete daemon.rs: there are multiple daemons now, so this was moved to `proxyd`
* will remove request clone to &mut
* multiple request handles for payment
* making requests still seems fine
* removed redundant commented-out bits
* added deposit endpoint, added deposit amount and deposit user, untested yet
* small bug with downgrade tier id
* will add authorization so balance can be received for users
* balance history should be set now too
* will check balance over time again
* subuser can see rpc key balance if admin or owner
* stats also seem to work fine now with historical balance
* things seem to be building and working
* removed clone from OpenRequestHandle
* removed influxdb from workspace members
* changed config files
* reran sea-orm generate entities, added a foreign key, should be proper now
* removed contract from commit
* made deposit contract optional
* added topic in polygon dev
* changed deposit contract to deposit factory contract
* added self-relation on user_tier
* added payment required
* changed chain id to u64
* add wss in polygon llamarpc
* removed origin and method from the table
* added onchain transactions naming (and forgot to add a migration before)
* changed foreign key to be the referrer (id), not the code itself
* forgot to add id as the target foreign key
* WIP adding cache to update role
* fixed merge conflicts

---------

Co-authored-by: Bryan Stitt <bryan@llamanodes.com>
Co-authored-by: Bryan Stitt <bryan@stitthappens.com>
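The commit message above mentions two deposit thresholds: accounts are upgraded to premium once they deposit more than $10, and single transactions above $10M are rejected. A minimal sketch of that guard logic, assuming a rust_decimal amount; the type and function names here are illustrative, not the actual identifiers used in this PR:

use rust_decimal::Decimal;

/// Outcome of validating a single on-chain deposit (illustrative only).
enum DepositCheck {
    /// Accepted and large enough to move the account to the premium tier.
    AcceptAndUpgrade,
    /// Accepted but below the premium threshold.
    Accept,
    /// Rejected: a single transaction may not exceed $10M.
    RejectTooLarge,
}

fn check_deposit(amount_usd: Decimal) -> DepositCheck {
    let premium_threshold = Decimal::new(10, 0); // $10
    let max_single_tx = Decimal::new(10_000_000, 0); // $10M

    if amount_usd > max_single_tx {
        DepositCheck::RejectTooLarge
    } else if amount_usd > premium_threshold {
        DepositCheck::AcceptAndUpgrade
    } else {
        DepositCheck::Accept
    }
}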
parent 36cc884112
commit 34ed450fab

19  Cargo.lock (generated)
@@ -2547,6 +2547,12 @@ version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"

[[package]]
name = "hex_fmt"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b07f60793ff0a4d9cef0f18e63b5357e06209987153a64648c972c1e5aff336f"

[[package]]
name = "hmac"
version = "0.12.1"
@@ -2786,9 +2792,8 @@ dependencies = [

[[package]]
name = "influxdb2"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "320c502ec0cf39e9b9fc36afc57435944fdfb6f15e8e8b0ecbc9a871d398cf63"
version = "0.4.0"
source = "git+https://github.com/llamanodes/influxdb2#9c2e50bee6f00fff99688ac2a39f702bb6a0b5bb"
dependencies = [
 "base64 0.13.1",
 "bytes",
@@ -2819,8 +2824,7 @@ dependencies = [
[[package]]
name = "influxdb2-derive"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "990f899841aa30130fc06f7938e3cc2cbc3d5b92c03fd4b5d79a965045abcf16"
source = "git+https://github.com/llamanodes/influxdb2#9c2e50bee6f00fff99688ac2a39f702bb6a0b5bb"
dependencies = [
 "itertools",
 "proc-macro2",
@@ -2832,8 +2836,7 @@ dependencies = [
[[package]]
name = "influxdb2-structmap"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1408e712051787357e99ff732e44e8833e79cea0fabc9361018abfbff72b6265"
source = "git+https://github.com/llamanodes/influxdb2#9c2e50bee6f00fff99688ac2a39f702bb6a0b5bb"
dependencies = [
 "chrono",
 "num-traits",
@@ -6334,6 +6337,7 @@ checksum = "13a3aaa69b04e5b66cc27309710a569ea23593612387d67daaf102e73aa974fd"
dependencies = [
 "rand",
 "serde",
 "uuid 1.3.2",
]

[[package]]
@@ -6631,6 +6635,7 @@ dependencies = [
 "handlebars",
 "hashbrown 0.13.2",
 "hdrhistogram",
 "hex_fmt",
 "hostname",
 "http",
 "influxdb2",
212  config/development_polygon.toml (new file)
@@ -0,0 +1,212 @@
[app]
chain_id = 137

# a database is optional. it is used for user authentication and accounting
# TODO: how do we find the optimal db_max_connections? too high actually ends up being slower
db_max_connections = 20
# development runs cargo commands on the host and so uses "mysql://root:dev_web3_proxy@127.0.0.1:13306/dev_web3_proxy" for db_url
# production runs inside docker and so uses "mysql://root:web3_proxy@db:3306/web3_proxy" for db_url
db_url = "mysql://root:dev_web3_proxy@127.0.0.1:13306/dev_web3_proxy"

deposit_factory_contract = "0x4e3bc2054788de923a04936c6addb99a05b0ea36"
deposit_topic = "0x45fdc265dc29885b9a485766b03e70978440d38c7c328ee0a14fa40c76c6af54"

# a timeseries database is optional. it is used for making pretty graphs
influxdb_host = "http://127.0.0.1:18086"
influxdb_org = "dev_org"
influxdb_token = "dev_web3_proxy_auth_token"
influxdb_bucket = "dev_web3_proxy"

# thundering herd protection
# only mark a block as the head block if the sum of their soft limits is greater than or equal to min_sum_soft_limit
min_sum_soft_limit = 1_000
# only mark a block as the head block if the number of servers with it is greater than or equal to min_synced_rpcs
min_synced_rpcs = 1

# redis is optional. it is used for rate limits set by `hard_limit`
# TODO: how do we find the optimal redis_max_connections? too high actually ends up being slower
volatile_redis_max_connections = 20
# development runs cargo commands on the host and so uses "redis://127.0.0.1:16379/" for volatile_redis_url
# production runs inside docker and so uses "redis://redis:6379/" for volatile_redis_url
volatile_redis_url = "redis://127.0.0.1:16379/"

# redirect_public_url is optional
redirect_public_url = "https://llamanodes.com/public-rpc"
# redirect_rpc_key_url is optional
# it only does something if db_url is set
redirect_rpc_key_url = "https://llamanodes.com/dashboard/keys?key={{rpc_key_id}}"

# sentry is optional. it is used for browsing error logs
# sentry_url = "https://SENTRY_KEY_A.ingest.sentry.io/SENTRY_KEY_B"

# public limits are when no key is used. these are instead grouped by ip
# 0 = block all public requests
# Not defined = allow all requests
#public_max_concurrent_requests =
# 0 = block all public requests
# Not defined = allow all requests
#public_requests_per_period =

public_recent_ips_salt = ""

login_domain = "llamanodes.com"

# 1GB of cache
response_cache_max_bytes = 1_000_000_000

# allowed_origin_requests_per_period changes the min_sum_soft_limit for requests with the specified (AND SPOOFABLE) Origin header
# origins not in the list for requests without an rpc_key will use public_requests_per_period instead
[app.allowed_origin_requests_per_period]
"https://chainlist.org" = 1_000

[balanced_rpcs]

[balanced_rpcs.llama_public]
disabled = false
display_name = "LlamaNodes"
http_url = "https://polygon.llamarpc.com"
ws_url = "wss://polygon.llamarpc.com"
soft_limit = 1_000
tier = 0

[balanced_rpcs.quicknode]
disabled = false
display_name = "Quicknode"
http_url = "https://rpc-mainnet.matic.quiknode.pro"
soft_limit = 10
tier = 2

[balanced_rpcs.maticvigil]
disabled = false
display_name = "Maticvigil"
http_url = "https://rpc-mainnet.maticvigil.com"
soft_limit = 10
tier = 2

[balanced_rpcs.matic-network]
disabled = false
display_name = "Matic Network"
http_url = "https://rpc-mainnet.matic.network"
soft_limit = 10
tier = 1

[balanced_rpcs.chainstack]
disabled = false
http_url = "https://matic-mainnet.chainstacklabs.com"
soft_limit = 10
tier = 2

[balanced_rpcs.bware]
disabled = false
display_name = "Bware Labs"
http_url = "https://matic-mainnet-full-rpc.bwarelabs.com"
soft_limit = 10
tier = 2

[balanced_rpcs.bware_archive]
disabled = false
display_name = "Bware Labs Archive"
http_url = "https://matic-mainnet-archive-rpc.bwarelabs.com"
soft_limit = 10
tier = 2

[balanced_rpcs.polygonapi]
disabled = false
display_name = "Polygon API"
http_url = "https://polygonapi.terminet.io/rpc"
soft_limit = 10
tier = 2

[balanced_rpcs.one-rpc]
disabled = false
display_name = "1RPC"
http_url = "https://1rpc.io/matic"
soft_limit = 10
tier = 2

[balanced_rpcs.fastrpc]
disabled = false
display_name = "FastRPC"
http_url = "https://polygon-mainnet.rpcfast.com?api_key=xbhWBI1Wkguk8SNMu1bvvLurPGLXmgwYeC4S6g2H7WdwFigZSmPWVZRxrskEQwIf"
soft_limit = 10
tier = 2

[balanced_rpcs.unifra]
disabled = false
display_name = "Unifra"
http_url = "https://polygon-mainnet-public.unifra.io"
soft_limit = 10
tier = 2

[balanced_rpcs.onfinality]
disabled = false
display_name = "Onfinality"
http_url = "https://polygon.api.onfinality.io/public"
soft_limit = 10
tier = 2

[balanced_rpcs.alchemy]
disabled = false
display_name = "Alchemy"
http_url = "https://polygon-mainnet.g.alchemy.com/v2/demo"
soft_limit = 10
tier = 2

[balanced_rpcs.blockpi]
disabled = false
display_name = "Blockpi"
http_url = "https://polygon.blockpi.network/v1/rpc/public"
soft_limit = 100
tier = 2

[balanced_rpcs.polygon]
backup = true
disabled = false
display_name = "Polygon"
http_url = "https://polygon-rpc.com"
soft_limit = 10
tier = 2

[balanced_rpcs.pokt]
disabled = false
display_name = "Pokt"
http_url = "https://poly-rpc.gateway.pokt.network"
soft_limit = 10
tier = 2

[balanced_rpcs.ankr]
backup = true
disabled = false
display_name = "Ankr"
http_url = "https://rpc.ankr.com/polygon"
soft_limit = 10
tier = 2

[balanced_rpcs.blastapi]
backup = true
disabled = true
display_name = "Blast"
http_url = "https://polygon-mainnet.public.blastapi.io"
hard_limit = 10
soft_limit = 10
tier = 2

[balanced_rpcs.omnia]
disabled = true
display_name = "Omnia"
http_url = "https://endpoints.omniatech.io/v1/matic/mainnet/public"
soft_limit = 10
tier = 2

[balanced_rpcs.bor]
disabled = true
http_url = "https://polygon-bor.publicnode.com"
soft_limit = 10
tier = 2

[balanced_rpcs.blxr]
disabled = false
http_url = "https://polygon.rpc.blxrbdn.com"
soft_limit = 10
tier = 2
@@ -11,6 +11,9 @@ db_url = "mysql://root:dev_web3_proxy@127.0.0.1:13306/dev_web3_proxy"
# read-only replica useful when running the proxy in multiple regions
db_replica_url = "mysql://root:dev_web3_proxy@127.0.0.1:13306/dev_web3_proxy"

deposit_factory_contract = "0x4e3bc2054788de923a04936c6addb99a05b0ea36"
deposit_topic = "0x45fdc265dc29885b9a485766b03e70978440d38c7c328ee0a14fa40c76c6af54"

kafka_urls = "127.0.0.1:19092"
kafka_protocol = "plaintext"

@@ -18,7 +21,7 @@ kafka_protocol = "plaintext"
influxdb_host = "http://127.0.0.1:18086"
influxdb_org = "dev_org"
influxdb_token = "dev_web3_proxy_auth_token"
influxdb_bucketname = "web3_proxy"
influxdb_bucketname = "dev_web3_proxy"

# thundering herd protection
# only mark a block as the head block if the sum of their soft limits is greater than or equal to min_sum_soft_limit
@@ -1,10 +0,0 @@
# log in with curl

1. curl http://127.0.0.1:8544/user/login/$ADDRESS
2. Sign the text with a site like https://www.myetherwallet.com/wallet/sign
3. POST the signed data:

curl -X POST http://127.0.0.1:8544/user/login -H 'Content-Type: application/json' -d
'{ "address": "0x9eb9e3dc2543dc9ff4058e2a2da43a855403f1fd", "msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078396562396533646332353433646339464634303538653241324441343341383535343033463166440a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a203031474d37373330375344324448333854454d3957545156454a0a4973737565642041743a20323032322d31322d31345430323a32333a31372e3735333736335a0a45787069726174696f6e2054696d653a20323032322d31322d31345430323a34333a31372e3735333736335a", "sig": "16bac055345279723193737c6c67cf995e821fd7c038d31fd6f671102088c7b85ab4b13069fd2ed02da186cf549530e315d8d042d721bf81289b3ffdbe8cf9ce1c", "version": "3", "signer": "MEW" }'

4. The response will include a bearer token. Use it with curl ... -H 'Authorization: Bearer $TOKEN'
@@ -1,8 +0,0 @@
sudo apt install bison flex
wget https://eighty-twenty.org/files/0001-tools-perf-Use-long-running-addr2line-per-dso.patch
git clone https://github.com/torvalds/linux.git
cd linux
git checkout v5.15
git apply ../0001-tools-perf-Use-long-running-addr2line-per-dso.patch
cd tools/perf
make prefix=$HOME/.local VERSION=5.15 install-bin
@@ -1,144 +0,0 @@
GET /
This entrypoint handles two things.
If connecting with a browser, it redirects to the public stat page on llamanodes.com.
If connecting with a websocket, it is rate limited by IP and routes to the Web3 RPC.

POST /
This entrypoint handles two things.
If connecting with a browser, it redirects to the public stat page on llamanodes.com.
If connecting with a websocket, it is rate limited by IP and routes to the Web3 RPC.

GET /rpc/:rpc_key
This entrypoint handles two things.
If connecting with a browser, it redirects to the key's stat page on llamanodes.com.
If connecting with a websocket, it is rate limited by key and routes to the Web3 RPC.

POST /rpc/:rpc_key
This entrypoint handles two things.
If connecting with a browser, it redirects to the key's stat page on llamanodes.com.
If connecting with a websocket, it is rate limited by key and routes to the Web3 RPC.

GET /health
If servers are synced, this gives a 200 "OK".
If no servers are synced, it gives a 502 ":("

GET /user/login/:user_address
Displays a "Sign in With Ethereum" message to be signed by the address's private key.
Once signed, continue to `POST /user/login`

GET /user/login/:user_address/:message_eip
Similar to `GET /user/login/:user_address` but gives the message in different formats depending on the eip.
Wallets have varying support. This shouldn't be needed by most users.
The message_eip should be hidden behind a small gear icon near the login button.
Once signed, continue to `POST /user/login`

Supported:
EIP191 as bytes
EIP191 as a hash
EIP4361 (the default)

Support coming soon:
EIP1271 for contract signing

POST /user/login?invite_code=SOMETHING_SECRET
Verifies the user's signed message.

The post should have JSON data containing "sig" (the signature) and "msg" (the original message).

Optionally requires an invite_code.
The invite code is only needed for new users. Once registered, it is not necessary.

If the invite code and signature are valid, this returns JSON data containing "rpc_keys", "bearer_token" and the "user".

"rpc_keys" contains the key and settings for all of the user's keys.
If the user is new, an "rpc_key" will be created for them.

The "bearer_token" is required by some endpoints. Include it in the "AUTHORIZATION" header in this format: "bearer :bearer_token".
The token is good for 4 weeks and the 4 week time will reset whenever the token is used.

The "user" just has an address at first, but you can prompt them to add an email address. See `POST /user`

GET /user
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, displays the user's data as JSON.

POST /user
POST the data in the same format that `GET /user` gives it.
If you do not want to update a field, do not include it in the POSTed JSON.
If you want to delete a field, include the data's key and set the value to an empty string.

Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, updates the user's data and returns the updated data as JSON.

GET /user/balance
Not yet implemented.

Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, displays data about the user's balance and payments as JSON.

POST /user/balance/:txid
Not yet implemented. Rate limited by IP.

Checks the ":txid" for a transaction that updates a user's balance.
The backend will be watching for these transactions, so this should not be needed in the common case.
However, log subscriptions are not perfect and so it might sometimes be needed.

GET /user/keys
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, displays data about the user's keys as JSON.

POST or PUT /user/keys
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, allows the user to create a new key or change options on their keys.

The POSTed JSON can have these fields:
key_id: Option<u64>,
description: Option<String>,
private_txs: Option<bool>,
active: Option<bool>,
allowed_ips: Option<String>,
allowed_origins: Option<String>,
allowed_referers: Option<String>,
allowed_user_agents: Option<String>,

The PUTed JSON has the same fields as the POSTed JSON, except there is no `key_id`.

If you do not want to update a field, do not include it in the POSTed JSON.
If you want to delete a string field, include the data's key and set the value to an empty string.

`allowed_ips`, `allowed_origins`, `allowed_referers`, and `allowed_user_agents` can have multiple values by separating them with commas.
`allowed_ips` must be in CIDR Notation (ex: "10.1.1.0/24" for a network, "10.1.1.10/32" for a single address).
The spec technically allows for bytes in `allowed_origins` or `allowed_referers`, but our code currently only supports strings. If a customer needs bytes, then we can code support for them.

`private_txs` are not currently recommended. If high gas is not supplied then they will likely never be included. Improvements to this are in the works.

Soon, the POST data will also have a `log_revert_trace: Option<f32>`. This will be the percent chance to log any calls that "revert" to the database. Large dapps probably want this to be a small percent, but development keys will probably want 100%. This will not be enabled until automatic pruning is coded.
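The field list above maps naturally onto a serde struct. A minimal sketch of what the POSTed JSON for `POST /user/keys` could deserialize into, using only the fields documented here; the struct name is illustrative, not necessarily the one used in the codebase:

use serde::Deserialize;

/// Illustrative shape of the `POST /user/keys` payload described above.
/// Omitted fields are left unchanged; empty strings clear a field.
#[derive(Debug, Deserialize)]
struct UserKeyManagement {
    key_id: Option<u64>,
    description: Option<String>,
    private_txs: Option<bool>,
    active: Option<bool>,
    allowed_ips: Option<String>,
    allowed_origins: Option<String>,
    allowed_referers: Option<String>,
    allowed_user_agents: Option<String>,
}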
GET `/user/revert_logs`
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, fetches paginated revert logs for the user.
More documentation will be written here once revert logging is enabled.

GET /user/stats/aggregate
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, fetches paginated aggregated stats for the user.
Pages are limited to 200 entries. The backend config can change this page size if necessary.
Can be filtered by:
`chain_id` - set to 0 for all. 0 is the default.
`query_start` - The start date in unix epoch time.
`query_window_seconds` - How many seconds to aggregate the stats over.
`page` - The page to request. Defaults to 0.

GET /user/stats/detailed
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, fetches paginated stats for the user with more detail. The request method is included. For user privacy, we intentionally do not include the request's calldata.
Can be filtered the same as `GET /user/stats/aggregate`
Soon will also be filterable by "method"
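For the two stats endpoints above, the documented filters (`chain_id`, `query_start`, `query_window_seconds`, `page`) could be captured in a query-string struct like the following sketch; the names and defaults come from the description, and the real handler may differ:

use serde::Deserialize;

/// Illustrative query parameters for GET /user/stats/aggregate and /detailed.
#[derive(Debug, Deserialize)]
struct StatsQuery {
    /// 0 selects all chains and is the default.
    #[serde(default)]
    chain_id: u64,
    /// Start date in unix epoch seconds.
    query_start: Option<u64>,
    /// How many seconds to aggregate the stats over.
    query_window_seconds: Option<u64>,
    /// Page to request; defaults to 0. Pages are limited to 200 entries.
    #[serde(default)]
    page: u64,
}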
POST /user/logout
Checks the "AUTHORIZATION" header for a valid bearer token.
If valid, deletes the bearer token from the proxy.
The user will need to `POST /user/login` to get a new bearer token.
@@ -1,15 +0,0 @@
Hello, I'm pretty new to tracing so my vocabulary might be wrong. I've got my app using tracing to log to stdout. I have a bunch of fields including user_id and ip_addr that make telling where logs are from nice and easy.

Now there is one part of my code where I want to save a log to a database. I'm not sure of the best/correct way to do this. I can get the current span with tracing::Span::current(), but AFAICT that doesn't have a way to get to the values. I think I need to write my own Subscriber or Visitor (or both) and then tell tracing to use it only in this one part of the code. Am I on the right track? Is there a place in the docs that explains something similar?

https://burgers.io/custom-logging-in-rust-using-tracing

If you are doing it to learn how to write a subscriber, then you should write a custom layer. If you are simply trying to work on your main project, there are several subscribers that already do this work for you.

Look at opentelemetry_otlp: it will let you connect an opentelemetry collector to your tracing using tracing_opentelemetry.

I'd suggest using the Registry subscriber because it can take multiple layers. Use a filtered layer to filter out the messages (look at env_filter; it can take the filtering params from an environment variable or a config string) and then have your collector be the second layer. Registry can take in a vector of layers that are also, in turn, multi-layered.
Let me see if I can pull up an example.
On the https://docs.rs/tracing-subscriber/latest/tracing_subscriber/layer/ page, about half-way down, there is an example of boxed layers.

You basically end up composing different layers that output to different trace stores and also configure each using per-layer filtering (see https://docs.rs/tracing-subscriber/latest/tracing_subscriber/layer/#per-layer-filtering)
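The advice above (a Registry with multiple layers, each carrying its own filter) looks roughly like the following sketch with tracing-subscriber 0.3 and its env-filter feature; the second layer is only a placeholder for a custom database-writing layer, and the target name in its filter is hypothetical:

use tracing_subscriber::{fmt, layer::SubscriberExt, util::SubscriberInitExt, EnvFilter, Layer};

fn init_tracing() {
    // stdout layer, filtered from RUST_LOG (or a config string)
    let stdout_layer = fmt::layer().with_filter(EnvFilter::from_default_env());

    // a second layer could forward selected events to a collector or database;
    // per-layer filtering keeps it from seeing everything
    let db_layer = fmt::layer() // placeholder for a custom Layer impl
        .with_filter(EnvFilter::new("my_app::payments=info"));

    tracing_subscriber::registry()
        .with(stdout_layer)
        .with(db_layer)
        .init();
}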
37  entities/src/balance.rs (new file)
@@ -0,0 +1,37 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.10.6

use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "balance")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(column_type = "Decimal(Some((20, 10)))")]
    pub available_balance: Decimal,
    #[sea_orm(column_type = "Decimal(Some((20, 10)))")]
    pub used_balance: Decimal,
    #[sea_orm(unique)]
    pub user_id: u64,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::user::Entity",
        from = "Column::UserId",
        to = "super::user::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    User,
}

impl Related<super::user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}
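As a usage sketch (not part of the diff), the generated entity above can be queried through SeaORM in the usual way; `db` is assumed to be a `DatabaseConnection`, and the `entities` crate name matches this repository's entities package:

use entities::balance;
use sea_orm::{ColumnTrait, DatabaseConnection, DbErr, EntityTrait, QueryFilter};

/// Fetch a user's balance row, if they have one (illustrative helper).
async fn find_balance(
    db: &DatabaseConnection,
    user_id: u64,
) -> Result<Option<balance::Model>, DbErr> {
    balance::Entity::find()
        .filter(balance::Column::UserId.eq(user_id))
        .one(db)
        .await
}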
37  entities/src/increase_on_chain_balance_receipt.rs (new file)
@@ -0,0 +1,37 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.10.6

use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "increase_on_chain_balance_receipt")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(unique)]
    pub tx_hash: String,
    pub chain_id: u64,
    #[sea_orm(column_type = "Decimal(Some((20, 10)))")]
    pub amount: Decimal,
    pub deposit_to_user_id: u64,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::user::Entity",
        from = "Column::DepositToUserId",
        to = "super::user::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    User,
}

impl Related<super::user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}
@@ -4,8 +4,12 @@ pub mod prelude;

pub mod admin;
pub mod admin_trail;
pub mod balance;
pub mod increase_on_chain_balance_receipt;
pub mod login;
pub mod pending_login;
pub mod referee;
pub mod referrer;
pub mod revert_log;
pub mod rpc_accounting;
pub mod rpc_accounting_v2;
@@ -19,6 +19,21 @@ pub struct Model {
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::user::Entity",
        from = "Column::ImitatingUser",
        to = "super::user::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    User,
}

impl Related<super::user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}
@@ -2,8 +2,12 @@

pub use super::admin::Entity as Admin;
pub use super::admin_trail::Entity as AdminTrail;
pub use super::balance::Entity as Balance;
pub use super::increase_on_chain_balance_receipt::Entity as IncreaseOnChainBalanceReceipt;
pub use super::login::Entity as Login;
pub use super::pending_login::Entity as PendingLogin;
pub use super::referee::Entity as Referee;
pub use super::referrer::Entity as Referrer;
pub use super::revert_log::Entity as RevertLog;
pub use super::rpc_accounting::Entity as RpcAccounting;
pub use super::rpc_accounting_v2::Entity as RpcAccountingV2;
51  entities/src/referee.rs (new file)
@@ -0,0 +1,51 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.10.6

use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "referee")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub credits_applied_for_referee: bool,
    #[sea_orm(column_type = "Decimal(Some((20, 10)))")]
    pub credits_applied_for_referrer: Decimal,
    pub referral_start_date: DateTime,
    pub used_referral_code: i32,
    #[sea_orm(unique)]
    pub user_id: u64,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::referrer::Entity",
        from = "Column::UsedReferralCode",
        to = "super::referrer::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    Referrer,
    #[sea_orm(
        belongs_to = "super::user::Entity",
        from = "Column::UserId",
        to = "super::user::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    User,
}

impl Related<super::referrer::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Referrer.def()
    }
}

impl Related<super::user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}
42  entities/src/referrer.rs (new file)
@@ -0,0 +1,42 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.10.6

use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "referrer")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(unique)]
    pub referral_code: String,
    #[sea_orm(unique)]
    pub user_id: u64,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(has_many = "super::referee::Entity")]
    Referee,
    #[sea_orm(
        belongs_to = "super::user::Entity",
        from = "Column::UserId",
        to = "super::user::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    User,
}

impl Related<super::referee::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Referee.def()
    }
}

impl Related<super::user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}
@@ -11,8 +11,6 @@ pub struct Model {
    pub rpc_key_id: u64,
    pub chain_id: u64,
    pub period_datetime: DateTimeUtc,
    pub method: String,
    pub origin: String,
    pub archive_needed: bool,
    pub error_response: bool,
    pub frontend_requests: u64,
@@ -24,6 +22,8 @@ pub struct Model {
    pub sum_request_bytes: u64,
    pub sum_response_millis: u64,
    pub sum_response_bytes: u64,
    #[sea_orm(column_type = "Decimal(Some((20, 10)))")]
    pub sum_credits_used: Decimal,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
@@ -38,6 +38,8 @@ pub enum Relation {
    RpcAccounting,
    #[sea_orm(has_many = "super::rpc_accounting_v2::Entity")]
    RpcAccountingV2,
    #[sea_orm(has_many = "super::secondary_user::Entity")]
    SecondaryUser,
    #[sea_orm(
        belongs_to = "super::user::Entity",
        from = "Column::UserId",
@@ -66,6 +68,12 @@ impl Related<super::rpc_accounting_v2::Entity> for Entity {
    }
}

impl Related<super::secondary_user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::SecondaryUser.def()
    }
}

impl Related<super::user::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
@@ -11,6 +11,7 @@ pub struct Model {
    pub id: u64,
    pub user_id: u64,
    pub description: Option<String>,
    pub rpc_secret_key_id: u64,
    pub role: Role,
}

@@ -24,6 +25,14 @@ pub enum Relation {
        on_delete = "NoAction"
    )]
    User,
    #[sea_orm(
        belongs_to = "super::rpc_key::Entity",
        from = "Column::RpcSecretKeyId",
        to = "super::rpc_key::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    RpcKey,
}

impl Related<super::user::Entity> for Entity {
@@ -32,4 +41,10 @@ impl Related<super::user::Entity> for Entity {
    }
}

impl Related<super::rpc_key::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::RpcKey.def()
    }
}

impl ActiveModelBehavior for ActiveModel {}
@@ -11,12 +11,21 @@ pub struct Model {
    pub title: String,
    pub max_requests_per_period: Option<u64>,
    pub max_concurrent_requests: Option<u32>,
    pub downgrade_tier_id: Option<u64>,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(has_many = "super::user::Entity")]
    User,
    #[sea_orm(
        belongs_to = "Entity",
        from = "Column::DowngradeTierId",
        to = "Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    SelfRef,
}

impl Related<super::user::Entity> for Entity {
@@ -19,6 +19,13 @@ mod m20230130_124740_read_only_login_logic;
mod m20230130_165144_prepare_admin_imitation_pre_login;
mod m20230215_152254_admin_trail;
mod m20230307_002623_migrate_rpc_accounting_to_rpc_accounting_v2;
mod m20230205_130035_create_balance;
mod m20230205_133755_create_referrals;
mod m20230214_134254_increase_balance_transactions;
mod m20230221_230953_track_spend;
mod m20230412_171916_modify_secondary_user_add_primary_user;
mod m20230422_172555_premium_downgrade_logic;
mod m20230511_161214_remove_columns_statsv2_origin_and_method;

pub struct Migrator;

@@ -45,6 +52,13 @@ impl MigratorTrait for Migrator {
            Box::new(m20230130_165144_prepare_admin_imitation_pre_login::Migration),
            Box::new(m20230215_152254_admin_trail::Migration),
            Box::new(m20230307_002623_migrate_rpc_accounting_to_rpc_accounting_v2::Migration),
            Box::new(m20230205_130035_create_balance::Migration),
            Box::new(m20230205_133755_create_referrals::Migration),
            Box::new(m20230214_134254_increase_balance_transactions::Migration),
            Box::new(m20230221_230953_track_spend::Migration),
            Box::new(m20230412_171916_modify_secondary_user_add_primary_user::Migration),
            Box::new(m20230422_172555_premium_downgrade_logic::Migration),
            Box::new(m20230511_161214_remove_columns_statsv2_origin_and_method::Migration),
        ]
    }
}
@@ -23,6 +23,12 @@ impl MigrationTrait for Migration {
                        .not_null()
                        .default(0),
                )
                .foreign_key(
                    ForeignKeyCreateStatement::new()
                        .from_col(RpcAccountingV2::RpcKeyId)
                        .to_tbl(RpcKey::Table)
                        .to_col(RpcKey::Id),
                )
                .col(
                    ColumnDef::new(RpcAccountingV2::ChainId)
                        .big_unsigned()
@@ -136,6 +142,12 @@ impl MigrationTrait for Migration {
    }
}

#[derive(Iden)]
enum RpcKey {
    Table,
    Id,
}

#[derive(Iden)]
enum RpcAccountingV2 {
    Table,
72  migration/src/m20230205_130035_create_balance.rs (new file)
@@ -0,0 +1,72 @@
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Replace the sample below with your own migration scripts
        manager
            .create_table(
                Table::create()
                    .table(Balance::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Balance::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(
                        ColumnDef::new(Balance::AvailableBalance)
                            .decimal_len(20, 10)
                            .not_null()
                            .default(0.0),
                    )
                    .col(
                        ColumnDef::new(Balance::UsedBalance)
                            .decimal_len(20, 10)
                            .not_null()
                            .default(0.0),
                    )
                    .col(
                        ColumnDef::new(Balance::UserId)
                            .big_unsigned()
                            .unique_key()
                            .not_null(),
                    )
                    .foreign_key(
                        sea_query::ForeignKey::create()
                            .from(Balance::Table, Balance::UserId)
                            .to(User::Table, User::Id),
                    )
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Replace the sample below with your own migration scripts
        manager
            .drop_table(Table::drop().table(Balance::Table).to_owned())
            .await
    }
}

/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum User {
    Table,
    Id,
}

#[derive(Iden)]
enum Balance {
    Table,
    Id,
    UserId,
    AvailableBalance,
    UsedBalance,
}
133  migration/src/m20230205_133755_create_referrals.rs (new file)
@@ -0,0 +1,133 @@
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Create one table for the referrer
        manager
            .create_table(
                Table::create()
                    .table(Referrer::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Referrer::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(
                        ColumnDef::new(Referrer::ReferralCode)
                            .string()
                            .unique_key()
                            .not_null(),
                    )
                    .col(
                        ColumnDef::new(Referrer::UserId)
                            .big_unsigned()
                            .unique_key()
                            .not_null(),
                    )
                    .foreign_key(
                        sea_query::ForeignKey::create()
                            .from(Referrer::Table, Referrer::UserId)
                            .to(User::Table, User::Id),
                    )
                    .to_owned(),
            )
            .await?;

        // Create one table for the referee
        manager
            .create_table(
                Table::create()
                    .table(Referee::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Referee::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(
                        ColumnDef::new(Referee::CreditsAppliedForReferee)
                            .boolean()
                            .not_null(),
                    )
                    .col(
                        ColumnDef::new(Referee::CreditsAppliedForReferrer)
                            .decimal_len(20, 10)
                            .not_null()
                            .default(0),
                    )
                    .col(
                        ColumnDef::new(Referee::ReferralStartDate)
                            .date_time()
                            .not_null()
                            .extra("DEFAULT CURRENT_TIMESTAMP".to_string()),
                    )
                    .col(
                        ColumnDef::new(Referee::UsedReferralCode)
                            .integer()
                            .not_null(),
                    )
                    .foreign_key(
                        sea_query::ForeignKey::create()
                            .from(Referee::Table, Referee::UsedReferralCode)
                            .to(Referrer::Table, Referrer::Id),
                    )
                    .col(
                        ColumnDef::new(Referee::UserId)
                            .big_unsigned()
                            .unique_key()
                            .not_null(),
                    )
                    .foreign_key(
                        sea_query::ForeignKey::create()
                            .from(Referee::Table, Referee::UserId)
                            .to(User::Table, User::Id),
                    )
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table(Table::drop().table(Referrer::Table).to_owned())
            .await?;
        manager
            .drop_table(Table::drop().table(Referee::Table).to_owned())
            .await
    }
}

/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum Referrer {
    Table,
    Id,
    UserId,
    ReferralCode,
}

#[derive(Iden)]
enum Referee {
    Table,
    Id,
    UserId,
    UsedReferralCode,
    CreditsAppliedForReferrer,
    CreditsAppliedForReferee,
    ReferralStartDate,
}

#[derive(Iden)]
enum User {
    Table,
    Id,
}
@@ -0,0 +1,97 @@
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Adds a table which keeps track of which transactions were already added (basically to prevent double spending)
        manager
            .create_table(
                Table::create()
                    .table(IncreaseOnChainBalanceReceipt::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(IncreaseOnChainBalanceReceipt::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(
                        ColumnDef::new(IncreaseOnChainBalanceReceipt::TxHash)
                            .string()
                            .not_null(),
                    )
                    .col(
                        ColumnDef::new(IncreaseOnChainBalanceReceipt::ChainId)
                            .big_integer()
                            .not_null(),
                    )
                    .col(
                        ColumnDef::new(IncreaseOnChainBalanceReceipt::Amount)
                            .decimal_len(20, 10)
                            .not_null(),
                    )
                    .col(
                        ColumnDef::new(IncreaseOnChainBalanceReceipt::DepositToUserId)
                            .big_unsigned()
                            .unique_key()
                            .not_null(),
                    )
                    .foreign_key(
                        ForeignKey::create()
                            .name("fk-deposit_to_user_id")
                            .from(
                                IncreaseOnChainBalanceReceipt::Table,
                                IncreaseOnChainBalanceReceipt::DepositToUserId,
                            )
                            .to(User::Table, User::Id),
                    )
                    .to_owned(),
            )
            .await?;

        // Add a unique-constraint on chain-id and tx-hash
        manager
            .create_index(
                Index::create()
                    .name("idx-increase_on_chain_balance_receipt-unique-chain_id-tx_hash")
                    .table(IncreaseOnChainBalanceReceipt::Table)
                    .col(IncreaseOnChainBalanceReceipt::ChainId)
                    .col(IncreaseOnChainBalanceReceipt::TxHash)
                    .unique()
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Replace the sample below with your own migration scripts
        manager
            .drop_table(
                Table::drop()
                    .table(IncreaseOnChainBalanceReceipt::Table)
                    .to_owned(),
            )
            .await
    }
}

/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum IncreaseOnChainBalanceReceipt {
    Table,
    Id,
    TxHash,
    ChainId,
    Amount,
    DepositToUserId,
}

#[derive(Iden)]
enum User {
    Table,
    Id,
}
42  migration/src/m20230221_230953_track_spend.rs (new file)
@@ -0,0 +1,42 @@
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Track spend inside the RPC accounting v2 table
        manager
            .alter_table(
                Table::alter()
                    .table(RpcAccountingV2::Table)
                    .add_column(
                        ColumnDef::new(RpcAccountingV2::SumCreditsUsed)
                            .decimal_len(20, 10)
                            .not_null(),
                    )
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Replace the sample below with your own migration scripts
        manager
            .alter_table(
                sea_query::Table::alter()
                    .table(RpcAccountingV2::Table)
                    .drop_column(RpcAccountingV2::SumCreditsUsed)
                    .to_owned(),
            )
            .await
    }
}

/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum RpcAccountingV2 {
    Table,
    SumCreditsUsed,
}
@@ -0,0 +1,58 @@
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .alter_table(
                Table::alter()
                    .table(SecondaryUser::Table)
                    .add_column(
                        ColumnDef::new(SecondaryUser::RpcSecretKeyId)
                            .big_unsigned()
                            .not_null(), // add foreign key to user table ...,
                    )
                    .add_foreign_key(
                        TableForeignKey::new()
                            .name("FK_secondary_user-rpc_key")
                            .from_tbl(SecondaryUser::Table)
                            .from_col(SecondaryUser::RpcSecretKeyId)
                            .to_tbl(RpcKey::Table)
                            .to_col(RpcKey::Id)
                            .on_delete(ForeignKeyAction::NoAction)
                            .on_update(ForeignKeyAction::NoAction),
                    )
                    .to_owned(),
            )
            .await

        // TODO: Add a unique index on RpcKey + Subuser
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .alter_table(
                sea_query::Table::alter()
                    .table(SecondaryUser::Table)
                    .drop_column(SecondaryUser::RpcSecretKeyId)
                    .to_owned(),
            )
            .await
    }
}

/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum SecondaryUser {
    Table,
    RpcSecretKeyId,
}

#[derive(Iden)]
enum RpcKey {
    Table,
    Id,
}
129  migration/src/m20230422_172555_premium_downgrade_logic.rs (new file)
@@ -0,0 +1,129 @@
use crate::sea_orm::ConnectionTrait;
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Replace the sample below with your own migration scripts

        // Add a column "downgrade_tier_id"
        // It is a "foreign key" that references other items in this table
        manager
            .alter_table(
                Table::alter()
                    .table(UserTier::Table)
                    .add_column(ColumnDef::new(UserTier::DowngradeTierId).big_unsigned())
                    .add_foreign_key(
                        TableForeignKey::new()
                            .to_tbl(UserTier::Table)
                            .to_tbl(UserTier::Table)
                            .from_col(UserTier::DowngradeTierId)
                            .to_col(UserTier::Id),
                    )
                    .to_owned(),
            )
            .await?;

        // Insert Premium, and PremiumOutOfFunds
        let premium_out_of_funds_tier = Query::insert()
            .into_table(UserTier::Table)
            .columns([
                UserTier::Title,
                UserTier::MaxRequestsPerPeriod,
                UserTier::MaxConcurrentRequests,
                UserTier::DowngradeTierId,
            ])
            .values_panic([
                "Premium Out Of Funds".into(),
                Some("6000").into(),
                Some("5").into(),
                None::<i64>.into(),
            ])
            .to_owned();

        manager.exec_stmt(premium_out_of_funds_tier).await?;

        // Insert Premium Out Of Funds
        // get the premium tier ...
        let db_conn = manager.get_connection();
        let db_backend = manager.get_database_backend();

        let select_premium_out_of_funds_tier_id = Query::select()
            .column(UserTier::Id)
            .from(UserTier::Table)
            .cond_where(Expr::col(UserTier::Title).eq("Premium Out Of Funds"))
            .to_owned();
        let premium_out_of_funds_tier_id: u64 = db_conn
            .query_one(db_backend.build(&select_premium_out_of_funds_tier_id))
            .await?
            .expect("we just created Premium Out Of Funds")
            .try_get("", &UserTier::Id.to_string())?;

        // Add two tiers for premium: premium, and premium-out-of-funds
        let premium_tier = Query::insert()
            .into_table(UserTier::Table)
            .columns([
                UserTier::Title,
                UserTier::MaxRequestsPerPeriod,
                UserTier::MaxConcurrentRequests,
                UserTier::DowngradeTierId,
            ])
            .values_panic([
                "Premium".into(),
                None::<&str>.into(),
                Some("100").into(),
                Some(premium_out_of_funds_tier_id).into(),
            ])
            .to_owned();

        manager.exec_stmt(premium_tier).await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        // Replace the sample below with your own migration scripts

        // Remove the two tiers that you just added
        // And remove the column you just added
        let db_conn = manager.get_connection();
        let db_backend = manager.get_database_backend();

        let delete_premium = Query::delete()
            .from_table(UserTier::Table)
            .cond_where(Expr::col(UserTier::Title).eq("Premium"))
            .to_owned();

        db_conn.execute(db_backend.build(&delete_premium)).await?;

        let delete_premium_out_of_funds = Query::delete()
            .from_table(UserTier::Table)
            .cond_where(Expr::col(UserTier::Title).eq("Premium Out Of Funds"))
            .to_owned();

        db_conn
            .execute(db_backend.build(&delete_premium_out_of_funds))
            .await?;

        // Finally drop the downgrade column
        manager
            .alter_table(
                Table::alter()
                    .table(UserTier::Table)
                    .drop_column(UserTier::DowngradeTierId)
                    .to_owned(),
            )
            .await
    }
}

#[derive(Iden)]
enum UserTier {
    Table,
    Id,
    Title,
    MaxRequestsPerPeriod,
    MaxConcurrentRequests,
    DowngradeTierId,
}
@@ -0,0 +1,50 @@
use sea_orm_migration::prelude::*;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .alter_table(
                Table::alter()
                    .table(RpcAccountingV2::Table)
                    .drop_column(RpcAccountingV2::Origin)
                    .drop_column(RpcAccountingV2::Method)
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .alter_table(
                Table::alter()
                    .table(RpcAccountingV2::Table)
                    .add_column(
                        ColumnDef::new(RpcAccountingV2::Method)
                            .string()
                            .not_null()
                            .default(""),
                    )
                    .add_column(
                        ColumnDef::new(RpcAccountingV2::Origin)
                            .string()
                            .not_null()
                            .default(""),
                    )
                    .to_owned(),
            )
            .await
    }
}

/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum RpcAccountingV2 {
    Table,
    Id,
    Origin,
    Method,
}
2  scripts/brownie-tests/.gitattributes (vendored, new file)
@@ -0,0 +1,2 @@
*.sol linguist-language=Solidity
*.vy linguist-language=Python
6  scripts/brownie-tests/.gitignore (vendored, new file)
@@ -0,0 +1,6 @@
__pycache__
.env
.history
.hypothesis/
build/
reports/
1  scripts/brownie-tests/brownie-config.yaml (new file)
@@ -0,0 +1 @@
dotenv: .env
34  scripts/brownie-tests/scripts/make_payment.py (new file)
@@ -0,0 +1,34 @@
from brownie import Contract, Sweeper, accounts
from brownie.network import priority_fee


def main():
    print("Hello")

    print("accounts are")
    token = Contract.from_explorer("0xC9fCFA7e28fF320C49967f4522EBc709aa1fDE7c")
    factory = Contract.from_explorer("0x4e3bc2054788de923a04936c6addb99a05b0ea36")
    user = accounts.load("david")
    # user = accounts.load("david-main")

    print("Llama token")
    print(token)

    print("Factory token")
    print(factory)

    print("User addr")
    print(user)

    # Sweeper and Proxy are deployed by us, as the user, by calling factory
    # Already been called before ...
    # factory.create_payment_address({'from': user})
    sweeper = Sweeper.at(factory.account_to_payment_address(user))
    print("Sweeper is at")
    print(sweeper)

    priority_fee("auto")
    token._mint_for_testing(user, (10_000)*(10**18), {'from': user})
    # token.approve(sweeper, 2**256-1, {'from': user})
    sweeper.send_token(token, (5_000)*(10**18), {'from': user})
    # sweeper.send_token(token, (47)*(10**13), {'from': user})
@@ -6,5 +6,6 @@
curl -X GET \
  "http://localhost:8544/user/stats/aggregate?query_start=1678780033&query_window_seconds=1000"

#curl -X GET \
#"http://localhost:8544/user/stats/detailed?query_start=1678780033&query_window_seconds=1000"
curl -X GET \
  -H "Authorization: Bearer 01GZK8MHHGQWK4VPGF97HS91MB" \
  "http://localhost:8544/user/stats/detailed?query_start=1678780033&query_window_seconds=1000"
110  scripts/manual-tests/12-subusers-premium-account.sh (new file)
@ -0,0 +1,110 @@
|
||||
### Tests subuser premium account endpoints
|
||||
##################
|
||||
# Run the server
|
||||
##################
|
||||
# Run the proxyd instance
|
||||
cargo run --release -- proxyd
|
||||
|
||||
# Check if the instance is running
|
||||
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544
|
||||
|
||||
|
||||
##################
|
||||
# Create the premium / primary user & log in (Wallet 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a)
|
||||
##################
|
||||
cargo run create_user --address 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a
|
||||
|
||||
# Make user premium, so he can create subusers
|
||||
cargo run change_user_tier_by_address 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a "Unlimited"
|
||||
# could also use CLI to change user role
|
||||
# ULID 01GXRAGS5F9VJFQRVMZGE1Q85T
|
||||
# UUID 018770a8-64af-4ee4-fbe3-74fc1c1ba0ba
|
||||
|
||||
# Open this website to get the nonce to log in, sign the message, and paste the payload in the endpoint that follows it
|
||||
http://127.0.0.1:8544/user/login/0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a
|
||||
https://www.myetherwallet.com/wallet/sign
|
||||
|
||||
http://127.0.0.1:8544/user/login/0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a
|
||||
https://www.myetherwallet.com/wallet/sign
|
||||
|
||||
# Use this site to sign a message
|
||||
curl -X POST http://127.0.0.1:8544/user/login \
|
||||
-H 'Content-Type: application/json' \
|
||||
-d '{
|
||||
"address": "0x762390ae7a3c4d987062a398c1ea8767029ab08e",
|
||||
"msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078373632333930616537613363344439383730363261333938433165413837363730323941423038450a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a203031475a484e4350315a57345134305a384b4e4e304454564a320a4973737565642041743a20323032332d30352d30335432303a33383a31392e3435363231345a0a45787069726174696f6e2054696d653a20323032332d30352d30335432303a35383a31392e3435363231345a",
|
||||
"sig": "82d2ee89fb6075bdc57fa66db4e0b2b84ad0b6515e1b3d71bb1dd4e6f1711b2f0f6b5f5e40116fd51e609bc8b4c0642f4cdaaf96a6c48e66093fe153d4e2873f1c",
|
||||
"version": "3",
|
||||
"signer": "MEW"
|
||||
}'
|
||||
|
||||
# Bearer token is: 01GZHMCXHXHPGAABAQQTXKMSM3
|
||||
# RPC secret key is: 01GZHMCXGXT5Z4M8SCKCMKDAZ6
|
||||
|
||||
# 01GZHND8E5BYRVPXXMKPQ75RJ1
|
||||
# 01GZHND83W8VAHCZWEPP1AA24M
|
||||
|
||||
# Top up the balance of the account
|
||||
curl \
|
||||
-H "Authorization: Bearer 01GZHMCXHXHPGAABAQQTXKMSM3" \
|
||||
-X GET "127.0.0.1:8544/user/balance/0x749788a5766577431a0a4fc8721fd7cb981f55222e073ed17976f0aba5e8818a"
|
||||
|
||||
|
||||
# Make an example RPC request to check if the tokens work
|
||||
curl \
|
||||
-X POST "127.0.0.1:8544/rpc/01GZHMCXGXT5Z4M8SCKCMKDAZ6" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
|
||||
|
||||
##################
|
||||
# Now act as the subuser (Wallet 0x762390ae7a3c4D987062a398C1eA8767029AB08E)
|
||||
# We first login the subuser
|
||||
##################
|
||||
# Login using the referral link. This should create the user, and also mark him as being referred
|
||||
# http://127.0.0.1:8544/user/login/0x762390ae7a3c4D987062a398C1eA8767029AB08E
|
||||
# https://www.myetherwallet.com/wallet/sign
|
||||
curl -X POST http://127.0.0.1:8544/user/login \
|
||||
-H 'Content-Type: application/json' \
|
||||
-d '{
|
||||
"address": "0x762390ae7a3c4d987062a398c1ea8767029ab08e",
|
||||
"msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078373632333930616537613363344439383730363261333938433165413837363730323941423038450a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a20303147585246454b5654334d584531334b5956443159323853460a4973737565642041743a20323032332d30342d31315431353a33373a34382e3636373438315a0a45787069726174696f6e2054696d653a20323032332d30342d31315431353a35373a34382e3636373438315a",
|
||||
"sig": "1784c968fdc244248a4c0b8d52158ff773e044646d6e5ce61d457679d740566b66fd16ad24777f09c971e2c3dfa74966ffb8c083a9bef2a527e49bc3770713431c",
|
||||
"version": "3",
|
||||
"signer": "MEW",
|
||||
"referral_code": "llamanodes-01GXRB6RVM00MACTKABYVF8MJR"
|
||||
}'
|
||||
|
||||
# Bearer token 01GXRFKFQXDV0MQ2RT52BCPZ23
|
||||
# RPC key 01GXRFKFPY5DDRCRVB3B3HVDYK
|
||||
|
||||
##################
|
||||
# Now the primary user adds the secondary user as a subuser
|
||||
##################
|
||||
# Get first users RPC keys
|
||||
curl \
|
||||
-H "Authorization: Bearer 01GXRB6AHZSXFDX2S1QJPJ8X51" \
|
||||
-X GET "127.0.0.1:8544/user/keys"
|
||||
|
||||
# Secret key
|
||||
curl \
|
||||
-X GET "127.0.0.1:8544/user/subuser?subuser_address=0x762390ae7a3c4D987062a398C1eA8767029AB08E&rpc_key=01GZHMCXGXT5Z4M8SCKCMKDAZ6&new_status=upsert&new_role=admin" \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "Authorization: Bearer 01GZHMCXHXHPGAABAQQTXKMSM3"
|
||||
|
||||
# The primary user can check what subusers he gave access to
|
||||
curl \
|
||||
-X GET "127.0.0.1:8544/user/subusers?rpc_key=01GZHMCXGXT5Z4M8SCKCMKDAZ6" \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "Authorization: Bearer 01GZHMCXHXHPGAABAQQTXKMSM3"
|
||||
|
||||
# The secondary user can see all the projects that he is associated with
|
||||
curl \
|
||||
-X GET "127.0.0.1:8544/subuser/rpc_keys" \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "Authorization: Bearer 01GXRFKFQXDV0MQ2RT52BCPZ23"
|
||||
|
||||
# Secret key
|
||||
curl \
|
||||
-X GET "127.0.0.1:8544/user/subuser?subuser_address=0x762390ae7a3c4D987062a398C1eA8767029AB08E&rpc_key=01GXRFKFPY5DDRCRVB3B3HVDYK&new_status=remove&new_role=collaborator" \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "Authorization: Bearer 01GXRFKFQXDV0MQ2RT52BCPZ23"
|
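The `/user/subuser` calls above pass everything through the query string. A rough sketch of how those parameters could be modeled on the server side (field names and types here are illustrative guesses, not the handler's real signature):

// Hypothetical sketch of the query parameters exercised by the curl commands above.
// All names/types are assumptions for illustration.
use axum::extract::Query;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ModifySubuserParams {
    subuser_address: String, // wallet of the user being granted / revoked access
    rpc_key: String,         // ULID of the key being shared, e.g. 01GZHMCXGXT5Z4M8SCKCMKDAZ6
    new_status: String,      // "upsert" to add/update, "remove" to revoke
    new_role: String,        // e.g. "admin" or "collaborator"
}

async fn modify_subuser(Query(params): Query<ModifySubuserParams>) -> String {
    // the real handler would authenticate the bearer token and write to the database
    format!("{params:?}")
}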
@@ -3,14 +3,14 @@
# sea-orm-cli migrate up

# Use CLI to create the admin that will call the endpoint
RUSTFLAGS="--cfg tokio_unstable" cargo run create_user --address 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a
RUSTFLAGS="--cfg tokio_unstable" cargo run change_admin_status 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a true
cargo run create_user --address 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a
cargo run change_admin_status 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a true

# Use CLI to create the user whose role will be changed via the endpoint
RUSTFLAGS="--cfg tokio_unstable" cargo run create_user --address 0x077e43dcca20da9859daa3fd78b5998b81f794f7
cargo run create_user --address 0x077e43dcca20da9859daa3fd78b5998b81f794f7

# Run the proxyd instance
RUSTFLAGS="--cfg tokio_unstable" cargo run --release -- proxyd
cargo run --release -- proxyd

# Check if the instance is running
curl --verbose -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544
scripts/manual-tests/24-simple-referral-program.sh (new file, 111 lines)
@@ -0,0 +1,111 @@
##################
# Run the server
##################

# Keep the proxyd instance running in the background (and test that it works)
cargo run --release -- proxyd

# Check if the instance is running
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544

##################
# Create the referring user & log in (Wallet 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a)
##################
cargo run create_user --address 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a

# Make the user premium so they can create referral keys
cargo run change_user_tier_by_address 0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a "Unlimited"
# could also use the CLI to change the user role
# ULID 01GXRAGS5F9VJFQRVMZGE1Q85T
# UUID 018770a8-64af-4ee4-fbe3-74fc1c1ba0ba

# Open this website to get the nonce to log in, sign the message, and paste the payload into the endpoint that follows it
http://127.0.0.1:8544/user/login/0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a
https://www.myetherwallet.com/wallet/sign

# Use this site to sign a message
curl -X POST http://127.0.0.1:8544/user/login \
  -H 'Content-Type: application/json' \
  -d '{
    "address": "0xeb3e928a2e54be013ef8241d4c9eaf4dfae94d5a",
    "msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078654233453932384132453534424530313345463832343164344339456146344466414539344435610a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a2030314758524235424a584b47535845454b5a314438424857565a0a4973737565642041743a20323032332d30342d31315431343a32323a35302e3937333930365a0a45787069726174696f6e2054696d653a20323032332d30342d31315431343a34323a35302e3937333930365a",
    "sig": "be1f9fed3f6f206c15677b7da488071b936b68daf560715b75cf9232afe4b9923c2c5d00a558847131f0f04200b4b123011f62521b7b97bab2c8b794c82b29621b",
    "version": "3",
    "signer": "MEW"
  }'

# Bearer token is: 01GXRB6AHZSXFDX2S1QJPJ8X51
# RPC secret key is: 01GXRAGS5F9VJFQRVMZGE1Q85T

# Make an example RPC request to check if the tokens work
curl \
  -X POST "127.0.0.1:8544/rpc/01GXRAGS5F9VJFQRVMZGE1Q85T" \
  -H "Content-Type: application/json" \
  --data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'

# Now retrieve the referral link
curl \
  -H "Authorization: Bearer 01GXRB6AHZSXFDX2S1QJPJ8X51" \
  -X GET "127.0.0.1:8544/user/referral"

# This is the referral code which will be used by the redeemer
# "llamanodes-01GXRB6RVM00MACTKABYVF8MJR"

##################
# Now act as the referred user (Wallet 0x762390ae7a3c4D987062a398C1eA8767029AB08E)
# We first log in the referred user
# Using the referral code creates an entry in the referral table
##################
# Log in using the referral link. This should create the user, and also mark them as being referred
# http://127.0.0.1:8544/user/login/0x762390ae7a3c4D987062a398C1eA8767029AB08E
# https://www.myetherwallet.com/wallet/sign
curl -X POST http://127.0.0.1:8544/user/login \
  -H 'Content-Type: application/json' \
  -d '{
    "address": "0x762390ae7a3c4d987062a398c1ea8767029ab08e",
    "msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078373632333930616537613363344439383730363261333938433165413837363730323941423038450a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a20303147585246454b5654334d584531334b5956443159323853460a4973737565642041743a20323032332d30342d31315431353a33373a34382e3636373438315a0a45787069726174696f6e2054696d653a20323032332d30342d31315431353a35373a34382e3636373438315a",
    "sig": "1784c968fdc244248a4c0b8d52158ff773e044646d6e5ce61d457679d740566b66fd16ad24777f09c971e2c3dfa74966ffb8c083a9bef2a527e49bc3770713431c",
    "version": "3",
    "signer": "MEW",
    "referral_code": "llamanodes-01GXRB6RVM00MACTKABYVF8MJR"
  }'

# Bearer token 01GXRFKFQXDV0MQ2RT52BCPZ23
# RPC key 01GXRFKFPY5DDRCRVB3B3HVDYK

# Make some requests. The referrer should not receive any credits for these (a balance table entry is not created for free-tier users ...). This works fine
for i in {1..1000}
do
  curl \
    -X POST "127.0.0.1:8544/rpc/01GXRFKFPY5DDRCRVB3B3HVDYK" \
    -H "Content-Type: application/json" \
    --data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
done

###########################################
# Now the referred user deposits some tokens
# They then send the tx hash to the endpoint
###########################################
curl \
  -H "Authorization: Bearer 01GXRFKFQXDV0MQ2RT52BCPZ23" \
  -X GET "127.0.0.1:8544/user/balance/0xda41f748106d2d1f1bf395e65d07bd9fc507c1eb4fd50c87d8ca1f34cfd536b0"

curl \
  -H "Authorization: Bearer 01GXRFKFQXDV0MQ2RT52BCPZ23" \
  -X GET "127.0.0.1:8544/user/balance/0xd56dee328dfa3bea26c3762834081881e5eff62e77a2b45e72d98016daaeffba"


###########################################
# Now the referred user starts spending the money. Let's make requests worth $100 and see what happens ...
# At all times, the referrer should receive 10% of the spent tokens
###########################################
for i in {1..10000000}
do
  curl \
    -X POST "127.0.0.1:8544/rpc/01GXRFKFPY5DDRCRVB3B3HVDYK" \
    -H "Content-Type: application/json" \
    --data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
done

# Check that the new user was indeed logged in, and that a referral table entry was created (in the database)
# Check that the 10% referral rate works
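The 10% rate referenced above implies Decimal arithmetic along these lines; a sketch under assumed names (the real accounting lives in the stats code, not here):

// Hypothetical sketch: credit the referrer 10% of what the referred user spent.
// Decimal is the same re-exported rust_decimal type the app code imports.
use migration::sea_orm::prelude::Decimal;

fn referrer_bonus(spent: Decimal) -> Decimal {
    // Decimal::new(1, 1) == 0.1
    spent * Decimal::new(1, 1)
}

fn main() {
    let spent = Decimal::new(10_000, 2); // $100.00 of usage
    println!("referrer gets {}", referrer_bonus(spent)); // 10.000
}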
scripts/manual-tests/42-simple-balance.sh (new file, 86 lines)
@@ -0,0 +1,86 @@
##################
# Run the server
##################
# Run the proxyd instance
cargo run --release -- proxyd

# Check if the instance is running
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544

##########################
# Create a User & Log in
##########################
cargo run create_user --address 0x762390ae7a3c4D987062a398C1eA8767029AB08E
# ULID: 01GXEDC66Z9RZE6AE22JE7FRAW
# UUID: 01875cd6-18df-4e3e-e329-c2149c77e15c

# Log in as the user so we can check the balance
# Open this website to get the nonce to log in
curl -X GET "http://127.0.0.1:8544/user/login/0xeb3e928a2e54be013ef8241d4c9eaf4dfae94d5a"

# Use this site to sign the message that the above call outputs
# https://www.myetherwallet.com/wallet/sign
curl -X POST http://127.0.0.1:8544/user/login \
  -H 'Content-Type: application/json' \
  -d '{
    "address": "0xeb3e928a2e54be013ef8241d4c9eaf4dfae94d5a",
    "msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078654233453932384132453534424530313345463832343164344339456146344466414539344435610a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a203031475a4b384b4847305259474737514e5132475037464444470a4973737565642041743a20323032332d30352d30345431313a33333a32312e3533363734355a0a45787069726174696f6e2054696d653a20323032332d30352d30345431313a35333a32312e3533363734355a",
    "sig": "cebd9effff15f4517e53522dbe91798d59dc0df0299faaec25d3f6443fa121f847e4311d5ca7386e75b87d6d45df92b8ced58c822117519c666ab1a6b2fc7bd21b",
    "version": "3",
    "signer": "MEW"
  }'

# bearer token is: 01GZK8MHHGQWK4VPGF97HS91MB
# secret key is: 01GZK65YNV0P0WN2SCXYTW3R9S

# 01GZH2PS89EJJY6V8JFCVTQ4BX
# 01GZH2PS7CTHA3TAZ4HXCTX6KQ

###########################################
# Initially check the balance, it should be 0
###########################################
# Check the balance of the user
# Balance seems to be returning properly (0, in this test case)
curl \
  -H "Authorization: Bearer 01GZK8MHHGQWK4VPGF97HS91MB" \
  -X GET "127.0.0.1:8544/user/balance"


###########################################
# The user submits a transaction on the matic network
# and then submits the tx hash to the endpoint
###########################################
curl \
  -H "Authorization: Bearer 01GZK65YRW69KZECCGPSQH2XYK" \
  -X GET "127.0.0.1:8544/user/balance/0x749788a5766577431a0a4fc8721fd7cb981f55222e073ed17976f0aba5e8818a"

###########################################
# Check the balance again, it should have increased according to how much USDC was deposited
###########################################
# Check the balance of the user
curl \
  -H "Authorization: Bearer 01GZGGDBMV0GM6MFBBHPDE78BW" \
  -X GET "127.0.0.1:8544/user/balance"

# TODO: Now start using the RPC, balance should decrease

# Get the RPC key
curl \
  -X GET "127.0.0.1:8544/user/keys" \
  -H "Authorization: Bearer 01GZGGDBMV0GM6MFBBHPDE78BW" \
  --data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'

## Check if calling an RPC endpoint logs the stats
## This one already does, it seems
for i in {1..100}
do
  curl \
    -X POST "127.0.0.1:8544/rpc/01GZK65YNV0P0WN2SCXYTW3R9S" \
    -H "Content-Type: application/json" \
    --data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
done


# TODO: Now implement and test withdrawal
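GET /user/balance/:tx_hash has to confirm the deposit on chain before crediting anything. A rough sketch of that lookup with ethers-rs (assumed provider setup; the parameter names mirror the deposit_factory_contract / deposit_topic config fields added later in this diff):

// Hypothetical sketch: check that a submitted tx hash really contains a deposit event.
// Not the proxy's actual implementation; names and the address check are illustrative.
use ethers::prelude::{Http, Middleware, Provider};
use ethers::types::{Address, TxHash, H256};

async fn deposit_log_found(
    rpc_url: &str,
    tx_hash: TxHash,
    deposit_factory_contract: Address,
    deposit_topic: H256,
) -> anyhow::Result<bool> {
    let provider = Provider::<Http>::try_from(rpc_url)?;

    // no receipt => unknown or not yet mined, so nothing to credit
    let Some(receipt) = provider.get_transaction_receipt(tx_hash).await? else {
        return Ok(false);
    };

    // look for a log carrying the configured deposit topic from our contract
    Ok(receipt.logs.iter().any(|log| {
        log.address == deposit_factory_contract && log.topics.first() == Some(&deposit_topic)
    }))
}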
scripts/manual-tests/48-balance-downgrade.sh (new file, 88 lines)
@@ -0,0 +1,88 @@
|
||||
##################
|
||||
# Run the server
|
||||
##################
|
||||
# Run the proxyd instance
|
||||
cargo run --release -- proxyd
|
||||
|
||||
# Check if the instance is running
|
||||
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"web3_clientVersion","id":1}' 127.0.0.1:8544
|
||||
|
||||
##########################
|
||||
# Create a User & Log in
|
||||
##########################
|
||||
#cargo run create_user --address 0x762390ae7a3c4D987062a398C1eA8767029AB08E
|
||||
# ULID: 01GXEDC66Z9RZE6AE22JE7FRAW
|
||||
# UUID: 01875cd6-18df-4e3e-e329-c2149c77e15c
|
||||
|
||||
# Log in as the user so we can check the balance
|
||||
# Open this website to get the nonce to log in
|
||||
curl -X GET "http://127.0.0.1:8544/user/login/0xeB3E928A2E54BE013EF8241d4C9EaF4DfAE94D5a"
|
||||
|
||||
# Use this site to sign a message
|
||||
# https://www.myetherwallet.com/wallet/sign (whatever is output with the above code)
|
||||
curl -X POST http://127.0.0.1:8544/user/login \
|
||||
-H 'Content-Type: application/json' \
|
||||
-d '{
|
||||
"address": "0xeb3e928a2e54be013ef8241d4c9eaf4dfae94d5a",
|
||||
"msg": "0x6c6c616d616e6f6465732e636f6d2077616e747320796f7520746f207369676e20696e207769746820796f757220457468657265756d206163636f756e743a0a3078654233453932384132453534424530313345463832343164344339456146344466414539344435610a0af09fa699f09fa699f09fa699f09fa699f09fa6990a0a5552493a2068747470733a2f2f6c6c616d616e6f6465732e636f6d2f0a56657273696f6e3a20310a436861696e2049443a20310a4e6f6e63653a2030314759513445564731474b34314b42364130324a344b45384b0a4973737565642041743a20323032332d30342d32335431333a32323a30392e3533373932365a0a45787069726174696f6e2054696d653a20323032332d30342d32335431333a34323a30392e3533373932365a",
|
||||
"sig": "52071cc59afb427eb554126f4f9f2a445c2a539783ba45079ccc0911197062f135d6d347cf0c38fa078dc2369c32b5131b86811fc0916786d1e48252163f58131c",
|
||||
"version": "3",
|
||||
"signer": "MEW"
|
||||
}'
|
||||
|
||||
# bearer token is: 01GYQ4FMRKKWJEA2YBST3B89MJ
|
||||
# scret key is: 01GYQ4FMNX9EMFBT43XEFGZV1K
|
||||
|
||||
###########################################
|
||||
# Initially check balance, it should be 0
|
||||
###########################################
|
||||
# Check the balance of the user
|
||||
# Balance seems to be returning properly (0, in this test case)
|
||||
curl \
|
||||
-H "Authorization: Bearer 01GYQ4FMRKKWJEA2YBST3B89MJ" \
|
||||
-X GET "127.0.0.1:8544/user/balance"
|
||||
|
||||
|
||||
###########################################
|
||||
# The user submits a transaction on the matic network
|
||||
# and submits it on the endpoint
|
||||
###########################################
|
||||
curl \
|
||||
-H "Authorization: Bearer 01GYQ4FMRKKWJEA2YBST3B89MJ" \
|
||||
-X GET "127.0.0.1:8544/user/balance/0x749788a5766577431a0a4fc8721fd7cb981f55222e073ed17976f0aba5e8818a"
|
||||
|
||||
###########################################
|
||||
# Check the balance again, it should have increased according to how much USDC was spent
|
||||
###########################################
|
||||
# Check the balance of the user
|
||||
# Balance seems to be returning properly (0, in this test case)
|
||||
curl \
|
||||
-H "Authorization: Bearer 01GYQ4FMRKKWJEA2YBST3B89MJ" \
|
||||
-X GET "127.0.0.1:8544/user/balance"
|
||||
|
||||
# Get the RPC key
|
||||
curl \
|
||||
-X GET "127.0.0.1:8544/user/keys" \
|
||||
-H "Authorization: Bearer 01GYQ4FMRKKWJEA2YBST3B89MJ"
|
||||
|
||||
## Check if calling an RPC endpoint logs the stats
|
||||
## This one does already even it seems
|
||||
for i in {1..100000}
|
||||
do
|
||||
curl \
|
||||
-X POST "127.0.0.1:8544/rpc/01GZHMCXGXT5Z4M8SCKCMKDAZ6" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
|
||||
done
|
||||
|
||||
for i in {1..100}
|
||||
do
|
||||
curl \
|
||||
-X POST "127.0.0.1:8544/" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}'
|
||||
done
|
||||
|
||||
|
||||
# TODO: Now implement and test withdrawal
|
||||
|
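The downgrade test above spends the key until the prepaid credit is gone. A tiny sketch of the bookkeeping that implies (struct and field names are assumptions, not the project's real types):

// Hypothetical sketch: deduct a request's cost and report whether the user
// should fall back to the free tier. Illustrative only.
use migration::sea_orm::prelude::Decimal;

struct UserBalance {
    available_balance: Decimal,
}

fn charge(balance: &mut UserBalance, request_cost: Decimal) -> bool {
    balance.available_balance -= request_cost;
    // once the credit is exhausted, the caller can downgrade the user tier
    balance.available_balance <= Decimal::ZERO
}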
scripts/manual-tests/52-simple-get-deposits.sh (new file, 5 lines)
@@ -0,0 +1,5 @@
# Check the deposits of the user
# Deposits seem to be returning properly (empty, in this test case)
curl \
  -H "Authorization: Bearer 01GZHMCXHXHPGAABAQQTXKMSM3" \
  -X GET "127.0.0.1:8544/user/deposits"
scripts/requirements.txt (new file, 4 lines)
@@ -0,0 +1,4 @@
python-dotenv
eth-brownie
ensurepath
brownie-token-tester
@@ -47,11 +47,12 @@ gethostname = "0.4.2"
glob = "0.3.1"
handlebars = "4.3.7"
hashbrown = { version = "0.13.2", features = ["serde"] }
hex_fmt = "0.3.0"
hdrhistogram = "7.5.2"
http = "0.2.9"
influxdb2 = { git = "https://github.com/llamanodes/influxdb2", features = ["rustls"] }
influxdb2-structmap = { git = "https://github.com/llamanodes/influxdb2/"}
hostname = "0.3.1"
influxdb2 = { version = "0.4", features = ["rustls"] }
influxdb2-structmap = "0.2.0"
ipnet = "2.7.2"
itertools = "0.10.5"
log = "0.4.17"

@@ -82,6 +83,6 @@ tokio-uring = { version = "0.4.0", optional = true }
toml = "0.7.3"
tower = "0.4.13"
tower-http = { version = "0.4.0", features = ["cors", "sensitive-headers"] }
ulid = { version = "1.0.0", features = ["serde"] }
ulid = { version = "1.0.0", features = ["uuid", "serde"] }
url = "2.3.1"
uuid = "1.3.2"
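The ulid crate gains the "uuid" feature here. Assuming the feature's standard conversion impls, that is what lets user-facing ULIDs be stored as UUID columns, roughly:

// Hypothetical sketch: ULID <-> UUID round trip enabled by ulid's "uuid" feature.
use ulid::Ulid;
use uuid::Uuid;

fn main() {
    let ulid = Ulid::new();
    let as_uuid = Uuid::from(ulid); // store this form in MySQL
    let back = Ulid::from(as_uuid); // show this form to users
    assert_eq!(ulid, back);
}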
@@ -33,6 +33,7 @@ use futures::stream::{FuturesUnordered, StreamExt};
use hashbrown::{HashMap, HashSet};
use ipnet::IpNet;
use log::{debug, error, info, trace, warn, Level};
use migration::sea_orm::prelude::Decimal;
use migration::sea_orm::{
    self, ConnectionTrait, Database, DatabaseConnection, EntityTrait, PaginatorTrait,
};

@@ -189,6 +190,7 @@ pub struct AuthorizationChecks {
    /// IMPORTANT! Once confirmed by a miner, they will be public on the blockchain!
    pub private_txs: bool,
    pub proxy_mode: ProxyMode,
    pub balance: Option<Decimal>,
}

/// Simple wrapper so that we can keep track of read only connections.

@@ -579,6 +581,15 @@ impl Web3ProxyApp {
            None => None,
        };

        // all the users are the same size, so no need for a weigher
        // if there is no database of users, there will be no keys and so this will be empty
        // TODO: max_capacity from config
        // TODO: ttl from config
        let rpc_secret_key_cache = Cache::builder()
            .max_capacity(10_000)
            .time_to_live(Duration::from_secs(600))
            .build_with_hasher(hashbrown::hash_map::DefaultHashBuilder::default());

        // create a channel for receiving stats
        // we do this in a channel so we don't slow down our response to the users
        // stats can be saved in mysql, influxdb, both, or none

@@ -589,6 +600,7 @@ impl Web3ProxyApp {
            influxdb_bucket,
            db_conn.clone(),
            influxdb_client.clone(),
            Some(rpc_secret_key_cache.clone()),
            60,
            1,
            BILLING_PERIOD_SECONDS,

@@ -699,15 +711,6 @@ impl Web3ProxyApp {
            .time_to_live(Duration::from_secs(600))
            .build_with_hasher(hashbrown::hash_map::DefaultHashBuilder::default());

        // all the users are the same size, so no need for a weigher
        // if there is no database of users, there will be no keys and so this will be empty
        // TODO: max_capacity from config
        // TODO: ttl from config
        let rpc_secret_key_cache = Cache::builder()
            .max_capacity(10_000)
            .time_to_live(Duration::from_secs(600))
            .build_with_hasher(hashbrown::hash_map::DefaultHashBuilder::default());

        // create semaphores for concurrent connection limits
        // TODO: what should tti be for semaphores?
        let bearer_token_semaphores = Cache::builder()
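The stats writer now receives Some(rpc_secret_key_cache.clone()), presumably so cached AuthorizationChecks (which now include a balance) can be dropped once new spend is recorded. A sketch of the invalidation idea, assuming a moka-style async cache like the one built above (key and value types here are placeholders):

// Hypothetical sketch: forget a cached key after its balance changes so the next
// request re-reads fresh AuthorizationChecks from the database.
use moka::future::Cache;

async fn forget_cached_key(rpc_secret_key_cache: &Cache<u64, String>, rpc_key_id: u64) {
    // placeholder types above; the real cache likely maps RpcSecretKey -> AuthorizationChecks
    rpc_secret_key_cache.invalidate(&rpc_key_id).await;
}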
@@ -76,6 +76,7 @@ impl MigrateStatsToV2 {
                .context("No influxdb bucket was provided")?,
            Some(db_conn.clone()),
            influxdb_client.clone(),
            None,
            30,
            1,
            BILLING_PERIOD_SECONDS,
@@ -2,7 +2,7 @@ use crate::app::AnyhowJoinHandle;
use crate::rpcs::blockchain::{BlocksByHashCache, Web3ProxyBlock};
use crate::rpcs::one::Web3Rpc;
use argh::FromArgs;
use ethers::prelude::TxHash;
use ethers::prelude::{Address, TxHash, H256};
use ethers::types::{U256, U64};
use hashbrown::HashMap;
use log::warn;

@@ -94,6 +94,12 @@ pub struct AppConfig {
    /// None = allow all requests
    pub default_user_max_requests_per_period: Option<u64>,

    /// Default ERC address for our deposit contract
    pub deposit_factory_contract: Option<Address>,

    /// Topic of the deposit event emitted by our deposit contract
    pub deposit_topic: Option<H256>,

    /// minimum amount to increase eth_estimateGas results
    pub gas_increase_min: Option<U256>,
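The two new config fields are what a deposit watcher would filter on. A short sketch with ethers-rs log filters (provider setup and the block range are assumptions):

// Hypothetical sketch: scan a recent block range for deposit events using the
// new deposit_factory_contract / deposit_topic config values.
use ethers::prelude::{Http, Middleware, Provider};
use ethers::types::{Address, Filter, H256};

async fn count_recent_deposits(
    rpc_url: &str,
    deposit_factory_contract: Address,
    deposit_topic: H256,
) -> anyhow::Result<usize> {
    let provider = Provider::<Http>::try_from(rpc_url)?;
    let latest = provider.get_block_number().await?;

    let filter = Filter::new()
        .address(deposit_factory_contract)
        .topic0(deposit_topic)
        .from_block(latest.saturating_sub(1000u64.into()))
        .to_block(latest);

    Ok(provider.get_logs(&filter).await?.len())
}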
@@ -10,7 +10,7 @@ use axum::headers::{Header, Origin, Referer, UserAgent};
use chrono::Utc;
use deferred_rate_limiter::DeferredRateLimitResult;
use entities::sea_orm_active_enums::TrackingLevel;
use entities::{login, rpc_key, user, user_tier};
use entities::{balance, login, rpc_key, user, user_tier};
use ethers::types::Bytes;
use ethers::utils::keccak256;
use futures::TryFutureExt;

@@ -689,6 +689,13 @@ impl Web3ProxyApp {
                    .await?
                    .expect("related user");

                let balance = balance::Entity::find()
                    .filter(balance::Column::UserId.eq(user_model.id))
                    .one(db_replica.conn())
                    .await?
                    .expect("related balance")
                    .available_balance;

                let user_tier_model =
                    user_tier::Entity::find_by_id(user_model.user_tier_id)
                        .one(db_replica.conn())

@@ -771,6 +778,7 @@ impl Web3ProxyApp {
                    max_requests_per_period: user_tier_model.max_requests_per_period,
                    private_txs: rpc_key_model.private_txs,
                    proxy_mode,
                    balance: Some(balance),
                })
            }
            None => Ok(AuthorizationChecks::default()),
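With balance now carried in AuthorizationChecks, request handlers can refuse paid-only work once the credit is gone. A minimal sketch of that guard (simplified types; the real error path is the Web3ProxyError::PaymentRequired variant shown just below):

// Hypothetical sketch: gate premium features on the cached balance.
use migration::sea_orm::prelude::Decimal;

fn require_positive_balance(balance: Option<Decimal>) -> Result<(), &'static str> {
    match balance {
        Some(b) if b > Decimal::ZERO => Ok(()),
        // in the app this would map to Web3ProxyError::PaymentRequired (HTTP 402)
        _ => Err("payment required"),
    }
}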
@@ -56,6 +56,7 @@ pub enum Web3ProxyError {
    InvalidHeaderValue(InvalidHeaderValue),
    InvalidEip,
    InvalidInviteCode,
    InvalidReferralCode,
    InvalidReferer,
    InvalidSignatureLength,
    InvalidUserAgent,

@@ -118,6 +119,7 @@ pub enum Web3ProxyError {
    #[error(ignore)]
    UserAgentNotAllowed(headers::UserAgent),
    UserIdZero,
    PaymentRequired,
    VerificationError(siwe::VerificationError),
    WatchRecvError(tokio::sync::watch::error::RecvError),
    WatchSendError,

@@ -353,6 +355,17 @@ impl Web3ProxyError {
                    ),
                )
            }
            Self::InvalidReferralCode => {
                warn!("InvalidReferralCode");
                (
                    StatusCode::UNAUTHORIZED,
                    JsonRpcForwardedResponse::from_str(
                        "invalid referral code",
                        Some(StatusCode::UNAUTHORIZED.as_u16().into()),
                        None,
                    ),
                )
            }
            Self::InvalidReferer => {
                warn!("InvalidReferer");
                (

@@ -574,6 +587,17 @@ impl Web3ProxyError {
                    ),
                )
            }
            Self::PaymentRequired => {
                trace!("PaymentRequiredError");
                (
                    StatusCode::PAYMENT_REQUIRED,
                    JsonRpcForwardedResponse::from_str(
                        "Payment is required and user is not premium.",
                        Some(StatusCode::PAYMENT_REQUIRED.as_u16().into()),
                        None,
                    ),
                )
            }
            // TODO: this should actually be the id of the key. multiple users might control one key
            Self::RateLimited(authorization, retry_at) => {
                // TODO: emit a stat
@ -168,30 +168,58 @@ pub async fn serve(
|
||||
//
|
||||
// User stuff
|
||||
//
|
||||
.route("/user/login/:user_address", get(users::user_login_get))
|
||||
.route(
|
||||
"/user/login/:user_address",
|
||||
get(users::authentication::user_login_get),
|
||||
)
|
||||
.route(
|
||||
"/user/login/:user_address/:message_eip",
|
||||
get(users::user_login_get),
|
||||
get(users::authentication::user_login_get),
|
||||
)
|
||||
.route("/user/login", post(users::authentication::user_login_post))
|
||||
.route(
|
||||
// /:rpc_key/:subuser_address/:new_status/:new_role
|
||||
"/user/subuser",
|
||||
get(users::subuser::modify_subuser),
|
||||
)
|
||||
.route("/user/subusers", get(users::subuser::get_subusers))
|
||||
.route(
|
||||
"/subuser/rpc_keys",
|
||||
get(users::subuser::get_keys_as_subuser),
|
||||
)
|
||||
.route("/user/login", post(users::user_login_post))
|
||||
.route("/user", get(users::user_get))
|
||||
.route("/user", post(users::user_post))
|
||||
.route("/user/balance", get(users::user_balance_get))
|
||||
.route("/user/balance/:txid", post(users::user_balance_post))
|
||||
.route("/user/keys", get(users::rpc_keys_get))
|
||||
.route("/user/keys", post(users::rpc_keys_management))
|
||||
.route("/user/keys", put(users::rpc_keys_management))
|
||||
.route("/user/revert_logs", get(users::user_revert_logs_get))
|
||||
.route("/user/balance", get(users::payment::user_balance_get))
|
||||
.route("/user/deposits", get(users::payment::user_deposits_get))
|
||||
.route(
|
||||
"/user/balance/:tx_hash",
|
||||
get(users::payment::user_balance_post),
|
||||
)
|
||||
.route("/user/keys", get(users::rpc_keys::rpc_keys_get))
|
||||
.route("/user/keys", post(users::rpc_keys::rpc_keys_management))
|
||||
.route("/user/keys", put(users::rpc_keys::rpc_keys_management))
|
||||
// .route("/user/referral/:referral_link", get(users::user_referral_link_get))
|
||||
.route(
|
||||
"/user/referral",
|
||||
get(users::referral::user_referral_link_get),
|
||||
)
|
||||
.route("/user/revert_logs", get(users::stats::user_revert_logs_get))
|
||||
.route(
|
||||
"/user/stats/aggregate",
|
||||
get(users::user_stats_aggregated_get),
|
||||
get(users::stats::user_stats_aggregated_get),
|
||||
)
|
||||
.route(
|
||||
"/user/stats/aggregated",
|
||||
get(users::user_stats_aggregated_get),
|
||||
get(users::stats::user_stats_aggregated_get),
|
||||
)
|
||||
.route(
|
||||
"/user/stats/detailed",
|
||||
get(users::stats::user_stats_detailed_get),
|
||||
)
|
||||
.route(
|
||||
"/user/logout",
|
||||
post(users::authentication::user_logout_post),
|
||||
)
|
||||
.route("/user/stats/detailed", get(users::user_stats_detailed_get))
|
||||
.route("/user/logout", post(users::user_logout_post))
|
||||
.route("/admin/modify_role", get(admin::admin_change_user_roles))
|
||||
.route(
|
||||
"/admin/imitate-login/:admin_address/:user_address",
|
||||
|
@ -1,838 +0,0 @@
|
||||
//! Handle registration, logins, and managing account data.
|
||||
use super::authorization::{login_is_authorized, RpcSecretKey};
|
||||
use super::errors::{Web3ProxyError, Web3ProxyErrorContext, Web3ProxyResponse};
|
||||
use crate::app::Web3ProxyApp;
|
||||
use crate::http_params::{
|
||||
get_chain_id_from_params, get_page_from_params, get_query_start_from_params,
|
||||
};
|
||||
use crate::stats::influxdb_queries::query_user_stats;
|
||||
use crate::stats::StatType;
|
||||
use crate::user_token::UserBearerToken;
|
||||
use crate::{PostLogin, PostLoginQuery};
|
||||
use axum::headers::{Header, Origin, Referer, UserAgent};
|
||||
use axum::{
|
||||
extract::{Path, Query},
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_client_ip::InsecureClientIp;
|
||||
use axum_macros::debug_handler;
|
||||
use chrono::{TimeZone, Utc};
|
||||
use entities::sea_orm_active_enums::TrackingLevel;
|
||||
use entities::{login, pending_login, revert_log, rpc_key, user};
|
||||
use ethers::{prelude::Address, types::Bytes};
|
||||
use hashbrown::HashMap;
|
||||
use http::{HeaderValue, StatusCode};
|
||||
use ipnet::IpNet;
|
||||
use itertools::Itertools;
|
||||
use log::{debug, warn};
|
||||
use migration::sea_orm::prelude::Uuid;
|
||||
use migration::sea_orm::{
|
||||
self, ActiveModelTrait, ColumnTrait, EntityTrait, IntoActiveModel, PaginatorTrait, QueryFilter,
|
||||
QueryOrder, TransactionTrait, TryIntoModel,
|
||||
};
|
||||
use serde::Deserialize;
|
||||
use serde_json::json;
|
||||
use siwe::{Message, VerificationOpts};
|
||||
use std::ops::Add;
|
||||
use std::str::FromStr;
|
||||
use std::sync::Arc;
|
||||
use time::{Duration, OffsetDateTime};
|
||||
use ulid::Ulid;
|
||||
|
||||
/// `GET /user/login/:user_address` or `GET /user/login/:user_address/:message_eip` -- Start the "Sign In with Ethereum" (siwe) login flow.
|
||||
///
|
||||
/// `message_eip`s accepted:
|
||||
/// - eip191_bytes
|
||||
/// - eip191_hash
|
||||
/// - eip4361 (default)
|
||||
///
|
||||
/// Coming soon: eip1271
|
||||
///
|
||||
/// This is the initial entrypoint for logging in. Take the response from this endpoint and give it to your user's wallet for singing. POST the response to `/user/login`.
|
||||
///
|
||||
/// Rate limited by IP address.
|
||||
///
|
||||
/// At first i thought about checking that user_address is in our db,
|
||||
/// But theres no need to separate the registration and login flows.
|
||||
/// It is a better UX to just click "login with ethereum" and have the account created if it doesn't exist.
|
||||
/// We can prompt for an email and and payment after they log in.
|
||||
#[debug_handler]
|
||||
pub async fn user_login_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
InsecureClientIp(ip): InsecureClientIp,
|
||||
// TODO: what does axum's error handling look like if the path fails to parse?
|
||||
Path(mut params): Path<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
login_is_authorized(&app, ip).await?;
|
||||
|
||||
// create a message and save it in redis
|
||||
// TODO: how many seconds? get from config?
|
||||
let expire_seconds: usize = 20 * 60;
|
||||
|
||||
let nonce = Ulid::new();
|
||||
|
||||
let issued_at = OffsetDateTime::now_utc();
|
||||
|
||||
let expiration_time = issued_at.add(Duration::new(expire_seconds as i64, 0));
|
||||
|
||||
// TODO: allow ENS names here?
|
||||
let user_address: Address = params
|
||||
.remove("user_address")
|
||||
.ok_or(Web3ProxyError::BadRouting)?
|
||||
.parse()
|
||||
.or(Err(Web3ProxyError::ParseAddressError))?;
|
||||
|
||||
let login_domain = app
|
||||
.config
|
||||
.login_domain
|
||||
.clone()
|
||||
.unwrap_or_else(|| "llamanodes.com".to_string());
|
||||
|
||||
// TODO: get most of these from the app config
|
||||
let message = Message {
|
||||
// TODO: don't unwrap
|
||||
// TODO: accept a login_domain from the request?
|
||||
domain: login_domain.parse().unwrap(),
|
||||
address: user_address.to_fixed_bytes(),
|
||||
// TODO: config for statement
|
||||
statement: Some("🦙🦙🦙🦙🦙".to_string()),
|
||||
// TODO: don't unwrap
|
||||
uri: format!("https://{}/", login_domain).parse().unwrap(),
|
||||
version: siwe::Version::V1,
|
||||
chain_id: 1,
|
||||
expiration_time: Some(expiration_time.into()),
|
||||
issued_at: issued_at.into(),
|
||||
nonce: nonce.to_string(),
|
||||
not_before: None,
|
||||
request_id: None,
|
||||
resources: vec![],
|
||||
};
|
||||
|
||||
let db_conn = app.db_conn().web3_context("login requires a database")?;
|
||||
|
||||
// massage types to fit in the database. sea-orm does not make this very elegant
|
||||
let uuid = Uuid::from_u128(nonce.into());
|
||||
// we add 1 to expire_seconds just to be sure the database has the key for the full expiration_time
|
||||
let expires_at = Utc
|
||||
.timestamp_opt(expiration_time.unix_timestamp() + 1, 0)
|
||||
.unwrap();
|
||||
|
||||
// we do not store a maximum number of attempted logins. anyone can request so we don't want to allow DOS attacks
|
||||
// add a row to the database for this user
|
||||
let user_pending_login = pending_login::ActiveModel {
|
||||
id: sea_orm::NotSet,
|
||||
nonce: sea_orm::Set(uuid),
|
||||
message: sea_orm::Set(message.to_string()),
|
||||
expires_at: sea_orm::Set(expires_at),
|
||||
imitating_user: sea_orm::Set(None),
|
||||
};
|
||||
|
||||
user_pending_login
|
||||
.save(&db_conn)
|
||||
.await
|
||||
.web3_context("saving user's pending_login")?;
|
||||
|
||||
// there are multiple ways to sign messages and not all wallets support them
|
||||
// TODO: default message eip from config?
|
||||
let message_eip = params
|
||||
.remove("message_eip")
|
||||
.unwrap_or_else(|| "eip4361".to_string());
|
||||
|
||||
let message: String = match message_eip.as_str() {
|
||||
"eip191_bytes" => Bytes::from(message.eip191_bytes().unwrap()).to_string(),
|
||||
"eip191_hash" => Bytes::from(&message.eip191_hash().unwrap()).to_string(),
|
||||
"eip4361" => message.to_string(),
|
||||
_ => {
|
||||
return Err(Web3ProxyError::InvalidEip);
|
||||
}
|
||||
};
|
||||
|
||||
Ok(message.into_response())
|
||||
}
|
||||
|
||||
/// `POST /user/login` - Register or login by posting a signed "siwe" message.
|
||||
/// It is recommended to save the returned bearer token in a cookie.
|
||||
/// The bearer token can be used to authenticate other requests, such as getting the user's stats or modifying the user's profile.
|
||||
#[debug_handler]
|
||||
pub async fn user_login_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
InsecureClientIp(ip): InsecureClientIp,
|
||||
Query(query): Query<PostLoginQuery>,
|
||||
Json(payload): Json<PostLogin>,
|
||||
) -> Web3ProxyResponse {
|
||||
login_is_authorized(&app, ip).await?;
|
||||
|
||||
// TODO: this seems too verbose. how can we simply convert a String into a [u8; 65]
|
||||
let their_sig_bytes = Bytes::from_str(&payload.sig).web3_context("parsing sig")?;
|
||||
if their_sig_bytes.len() != 65 {
|
||||
return Err(Web3ProxyError::InvalidSignatureLength);
|
||||
}
|
||||
let mut their_sig: [u8; 65] = [0; 65];
|
||||
for x in 0..65 {
|
||||
their_sig[x] = their_sig_bytes[x]
|
||||
}
|
||||
|
||||
// we can't trust that they didn't tamper with the message in some way. like some clients return it hex encoded
|
||||
// TODO: checking 0x seems fragile, but I think it will be fine. siwe message text shouldn't ever start with 0x
|
||||
let their_msg: Message = if payload.msg.starts_with("0x") {
|
||||
let their_msg_bytes =
|
||||
Bytes::from_str(&payload.msg).web3_context("parsing payload message")?;
|
||||
|
||||
// TODO: lossy or no?
|
||||
String::from_utf8_lossy(their_msg_bytes.as_ref())
|
||||
.parse::<siwe::Message>()
|
||||
.web3_context("parsing hex string message")?
|
||||
} else {
|
||||
payload
|
||||
.msg
|
||||
.parse::<siwe::Message>()
|
||||
.web3_context("parsing string message")?
|
||||
};
|
||||
|
||||
// the only part of the message we will trust is their nonce
|
||||
// TODO: this is fragile. have a helper function/struct for redis keys
|
||||
let login_nonce = UserBearerToken::from_str(&their_msg.nonce)?;
|
||||
|
||||
// fetch the message we gave them from our database
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("Getting database connection")?;
|
||||
|
||||
// massage type for the db
|
||||
let login_nonce_uuid: Uuid = login_nonce.clone().into();
|
||||
|
||||
let user_pending_login = pending_login::Entity::find()
|
||||
.filter(pending_login::Column::Nonce.eq(login_nonce_uuid))
|
||||
.one(db_replica.conn())
|
||||
.await
|
||||
.web3_context("database error while finding pending_login")?
|
||||
.web3_context("login nonce not found")?;
|
||||
|
||||
let our_msg: siwe::Message = user_pending_login
|
||||
.message
|
||||
.parse()
|
||||
.web3_context("parsing siwe message")?;
|
||||
|
||||
// default options are fine. the message includes timestamp and domain and nonce
|
||||
let verify_config = VerificationOpts::default();
|
||||
|
||||
// Check with both verify and verify_eip191
|
||||
if let Err(err_1) = our_msg
|
||||
.verify(&their_sig, &verify_config)
|
||||
.await
|
||||
.web3_context("verifying signature against our local message")
|
||||
{
|
||||
// verification method 1 failed. try eip191
|
||||
if let Err(err_191) = our_msg
|
||||
.verify_eip191(&their_sig)
|
||||
.web3_context("verifying eip191 signature against our local message")
|
||||
{
|
||||
let db_conn = app
|
||||
.db_conn()
|
||||
.web3_context("deleting expired pending logins requires a db")?;
|
||||
|
||||
// delete ALL expired rows.
|
||||
let now = Utc::now();
|
||||
let delete_result = pending_login::Entity::delete_many()
|
||||
.filter(pending_login::Column::ExpiresAt.lte(now))
|
||||
.exec(&db_conn)
|
||||
.await?;
|
||||
|
||||
// TODO: emit a stat? if this is high something weird might be happening
|
||||
debug!("cleared expired pending_logins: {:?}", delete_result);
|
||||
|
||||
return Err(Web3ProxyError::EipVerificationFailed(
|
||||
Box::new(err_1),
|
||||
Box::new(err_191),
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: limit columns or load whole user?
|
||||
let u = user::Entity::find()
|
||||
.filter(user::Column::Address.eq(our_msg.address.as_ref()))
|
||||
.one(db_replica.conn())
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let db_conn = app.db_conn().web3_context("login requires a db")?;
|
||||
|
||||
let (u, uks, status_code) = match u {
|
||||
None => {
|
||||
// user does not exist yet
|
||||
|
||||
// check the invite code
|
||||
// TODO: more advanced invite codes that set different request/minute and concurrency limits
|
||||
if let Some(invite_code) = &app.config.invite_code {
|
||||
if query.invite_code.as_ref() != Some(invite_code) {
|
||||
return Err(Web3ProxyError::InvalidInviteCode);
|
||||
}
|
||||
}
|
||||
|
||||
let txn = db_conn.begin().await?;
|
||||
|
||||
// the only thing we need from them is an address
|
||||
// everything else is optional
|
||||
// TODO: different invite codes should allow different levels
|
||||
// TODO: maybe decrement a count on the invite code?
|
||||
let u = user::ActiveModel {
|
||||
address: sea_orm::Set(our_msg.address.into()),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let u = u.insert(&txn).await?;
|
||||
|
||||
// create the user's first api key
|
||||
let rpc_secret_key = RpcSecretKey::new();
|
||||
|
||||
let uk = rpc_key::ActiveModel {
|
||||
user_id: sea_orm::Set(u.id),
|
||||
secret_key: sea_orm::Set(rpc_secret_key.into()),
|
||||
description: sea_orm::Set(None),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let uk = uk
|
||||
.insert(&txn)
|
||||
.await
|
||||
.web3_context("Failed saving new user key")?;
|
||||
|
||||
let uks = vec![uk];
|
||||
|
||||
// save the user and key to the database
|
||||
txn.commit().await?;
|
||||
|
||||
(u, uks, StatusCode::CREATED)
|
||||
}
|
||||
Some(u) => {
|
||||
// the user is already registered
|
||||
let uks = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(u.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
(u, uks, StatusCode::OK)
|
||||
}
|
||||
};
|
||||
|
||||
// create a bearer token for the user.
|
||||
let user_bearer_token = UserBearerToken::default();
|
||||
|
||||
// json response with everything in it
|
||||
// we could return just the bearer token, but I think they will always request api keys and the user profile
|
||||
let response_json = json!({
|
||||
"rpc_keys": uks
|
||||
.into_iter()
|
||||
.map(|uk| (uk.id, uk))
|
||||
.collect::<HashMap<_, _>>(),
|
||||
"bearer_token": user_bearer_token,
|
||||
"user": u,
|
||||
});
|
||||
|
||||
let response = (status_code, Json(response_json)).into_response();
|
||||
|
||||
// add bearer to the database
|
||||
|
||||
// expire in 4 weeks
|
||||
let expires_at = Utc::now()
|
||||
.checked_add_signed(chrono::Duration::weeks(4))
|
||||
.unwrap();
|
||||
|
||||
let user_login = login::ActiveModel {
|
||||
id: sea_orm::NotSet,
|
||||
bearer_token: sea_orm::Set(user_bearer_token.uuid()),
|
||||
user_id: sea_orm::Set(u.id),
|
||||
expires_at: sea_orm::Set(expires_at),
|
||||
read_only: sea_orm::Set(false),
|
||||
};
|
||||
|
||||
user_login
|
||||
.save(&db_conn)
|
||||
.await
|
||||
.web3_context("saving user login")?;
|
||||
|
||||
if let Err(err) = user_pending_login
|
||||
.into_active_model()
|
||||
.delete(&db_conn)
|
||||
.await
|
||||
{
|
||||
warn!("Failed to delete nonce:{}: {}", login_nonce.0, err);
|
||||
}
|
||||
|
||||
Ok(response)
|
||||
}
|
||||
|
||||
/// `POST /user/logout` - Forget the bearer token in the `Authentication` header.
|
||||
#[debug_handler]
|
||||
pub async fn user_logout_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let user_bearer = UserBearerToken::try_from(bearer)?;
|
||||
|
||||
let db_conn = app
|
||||
.db_conn()
|
||||
.web3_context("database needed for user logout")?;
|
||||
|
||||
if let Err(err) = login::Entity::delete_many()
|
||||
.filter(login::Column::BearerToken.eq(user_bearer.uuid()))
|
||||
.exec(&db_conn)
|
||||
.await
|
||||
{
|
||||
debug!("Failed to delete {}: {}", user_bearer.redis_key(), err);
|
||||
}
|
||||
|
||||
let now = Utc::now();
|
||||
|
||||
// also delete any expired logins
|
||||
let delete_result = login::Entity::delete_many()
|
||||
.filter(login::Column::ExpiresAt.lte(now))
|
||||
.exec(&db_conn)
|
||||
.await;
|
||||
|
||||
debug!("Deleted expired logins: {:?}", delete_result);
|
||||
|
||||
// also delete any expired pending logins
|
||||
let delete_result = login::Entity::delete_many()
|
||||
.filter(login::Column::ExpiresAt.lte(now))
|
||||
.exec(&db_conn)
|
||||
.await;
|
||||
|
||||
debug!("Deleted expired pending logins: {:?}", delete_result);
|
||||
|
||||
// TODO: what should the response be? probably json something
|
||||
Ok("goodbye".into_response())
|
||||
}
|
||||
|
||||
/// `GET /user` -- Use a bearer token to get the user's profile.
|
||||
///
|
||||
/// - the email address of a user if they opted in to get contacted via email
|
||||
///
|
||||
/// TODO: this will change as we add better support for secondary users.
|
||||
#[debug_handler]
|
||||
pub async fn user_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer_token)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer_token).await?;
|
||||
|
||||
Ok(Json(user).into_response())
|
||||
}
|
||||
|
||||
/// the JSON input to the `post_user` handler.
|
||||
#[derive(Debug, Deserialize)]
|
||||
pub struct UserPost {
|
||||
email: Option<String>,
|
||||
}
|
||||
|
||||
/// `POST /user` -- modify the account connected to the bearer token in the `Authentication` header.
|
||||
#[debug_handler]
|
||||
pub async fn user_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer_token)): TypedHeader<Authorization<Bearer>>,
|
||||
Json(payload): Json<UserPost>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer_token).await?;
|
||||
|
||||
let mut user: user::ActiveModel = user.into();
|
||||
|
||||
// update the email address
|
||||
if let Some(x) = payload.email {
|
||||
// TODO: only Set if no change
|
||||
if x.is_empty() {
|
||||
user.email = sea_orm::Set(None);
|
||||
} else {
|
||||
// TODO: do some basic validation
|
||||
// TODO: don't set immediatly, send a confirmation email first
|
||||
// TODO: compare first? or is sea orm smart enough to do that for us?
|
||||
user.email = sea_orm::Set(Some(x));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: what else can we update here? password hash? subscription to newsletter?
|
||||
|
||||
let user = if user.is_changed() {
|
||||
let db_conn = app.db_conn().web3_context("Getting database connection")?;
|
||||
|
||||
user.save(&db_conn).await?
|
||||
} else {
|
||||
// no changes. no need to touch the database
|
||||
user
|
||||
};
|
||||
|
||||
let user: user::Model = user.try_into().web3_context("Returning updated user")?;
|
||||
|
||||
Ok(Json(user).into_response())
|
||||
}
|
||||
|
||||
/// `GET /user/balance` -- Use a bearer token to get the user's balance and spend.
|
||||
///
|
||||
/// - show balance in USD
|
||||
/// - show deposits history (currency, amounts, transaction id)
|
||||
///
|
||||
/// TODO: one key per request? maybe /user/balance/:rpc_key?
|
||||
/// TODO: this will change as we add better support for secondary users.
|
||||
#[debug_handler]
|
||||
pub async fn user_balance_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (_user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
todo!("user_balance_get");
|
||||
}
|
||||
|
||||
/// `POST /user/balance/:txhash` -- Manually process a confirmed txid to update a user's balance.
|
||||
///
|
||||
/// We will subscribe to events to watch for any user deposits, but sometimes events can be missed.
|
||||
///
|
||||
/// TODO: change this. just have a /tx/:txhash that is open to anyone. rate limit like we rate limit /login
|
||||
#[debug_handler]
|
||||
pub async fn user_balance_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (_user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
todo!("user_balance_post");
|
||||
}
|
||||
|
||||
/// `GET /user/keys` -- Use a bearer token to get the user's api keys and their settings.
|
||||
#[debug_handler]
|
||||
pub async fn rpc_keys_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("db_replica is required to fetch a user's keys")?;
|
||||
|
||||
let uks = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
let response_json = json!({
|
||||
"user_id": user.id,
|
||||
"user_rpc_keys": uks
|
||||
.into_iter()
|
||||
.map(|uk| (uk.id, uk))
|
||||
.collect::<HashMap::<_, _>>(),
|
||||
});
|
||||
|
||||
Ok(Json(response_json).into_response())
|
||||
}
|
||||
|
||||
/// `DELETE /user/keys` -- Use a bearer token to delete an existing key.
|
||||
#[debug_handler]
|
||||
pub async fn rpc_keys_delete(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (_user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
// TODO: think about how cascading deletes and billing should work
|
||||
Err(Web3ProxyError::NotImplemented)
|
||||
}
|
||||
|
||||
/// the JSON input to the `rpc_keys_management` handler.
|
||||
/// If `key_id` is set, it updates an existing key.
|
||||
/// If `key_id` is not set, it creates a new key.
|
||||
/// `log_request_method` cannot be change once the key is created
|
||||
/// `user_tier` cannot be changed here
|
||||
#[derive(Debug, Deserialize)]
|
||||
pub struct UserKeyManagement {
|
||||
key_id: Option<u64>,
|
||||
active: Option<bool>,
|
||||
allowed_ips: Option<String>,
|
||||
allowed_origins: Option<String>,
|
||||
allowed_referers: Option<String>,
|
||||
allowed_user_agents: Option<String>,
|
||||
description: Option<String>,
|
||||
log_level: Option<TrackingLevel>,
|
||||
// TODO: enable log_revert_trace: Option<f64>,
|
||||
private_txs: Option<bool>,
|
||||
}
|
||||
|
||||
/// `POST /user/keys` or `PUT /user/keys` -- Use a bearer token to create or update an existing key.
|
||||
#[debug_handler]
|
||||
pub async fn rpc_keys_management(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Json(payload): Json<UserKeyManagement>,
|
||||
) -> Web3ProxyResponse {
|
||||
// TODO: is there a way we can know if this is a PUT or POST? right now we can modify or create keys with either. though that probably doesn't matter
|
||||
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("getting db for user's keys")?;
|
||||
|
||||
let mut uk = if let Some(existing_key_id) = payload.key_id {
|
||||
// get the key and make sure it belongs to the user
|
||||
rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user.id))
|
||||
.filter(rpc_key::Column::Id.eq(existing_key_id))
|
||||
.one(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?
|
||||
.web3_context("key does not exist or is not controlled by this bearer token")?
|
||||
.into_active_model()
|
||||
} else {
|
||||
// make a new key
|
||||
// TODO: limit to 10 keys?
|
||||
let secret_key = RpcSecretKey::new();
|
||||
|
||||
let log_level = payload
|
||||
.log_level
|
||||
.web3_context("log level must be 'none', 'detailed', or 'aggregated'")?;
|
||||
|
||||
rpc_key::ActiveModel {
|
||||
user_id: sea_orm::Set(user.id),
|
||||
secret_key: sea_orm::Set(secret_key.into()),
|
||||
log_level: sea_orm::Set(log_level),
|
||||
..Default::default()
|
||||
}
|
||||
};
|
||||
|
||||
// TODO: do we need null descriptions? default to empty string should be fine, right?
|
||||
if let Some(description) = payload.description {
|
||||
if description.is_empty() {
|
||||
uk.description = sea_orm::Set(None);
|
||||
} else {
|
||||
uk.description = sea_orm::Set(Some(description));
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(private_txs) = payload.private_txs {
|
||||
uk.private_txs = sea_orm::Set(private_txs);
|
||||
}
|
||||
|
||||
if let Some(active) = payload.active {
|
||||
uk.active = sea_orm::Set(active);
|
||||
}
|
||||
|
||||
if let Some(allowed_ips) = payload.allowed_ips {
|
||||
if allowed_ips.is_empty() {
|
||||
uk.allowed_ips = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed ips on ',' and try to parse them all. error on invalid input
|
||||
let allowed_ips = allowed_ips
|
||||
.split(',')
|
||||
.map(|x| x.trim().parse::<IpNet>())
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
// parse worked. convert back to Strings
|
||||
.into_iter()
|
||||
.map(|x| x.to_string());
|
||||
|
||||
// and join them back together
|
||||
let allowed_ips: String =
|
||||
Itertools::intersperse(allowed_ips, ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_ips = sea_orm::Set(Some(allowed_ips));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: this should actually be bytes
|
||||
if let Some(allowed_origins) = payload.allowed_origins {
|
||||
if allowed_origins.is_empty() {
|
||||
uk.allowed_origins = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed_origins on ',' and try to parse them all. error on invalid input
|
||||
let allowed_origins = allowed_origins
|
||||
.split(',')
|
||||
.map(|x| HeaderValue::from_str(x.trim()))
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
.into_iter()
|
||||
.map(|x| Origin::decode(&mut [x].iter()))
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
// parse worked. convert back to String and join them back together
|
||||
.into_iter()
|
||||
.map(|x| x.to_string());
|
||||
|
||||
let allowed_origins: String =
|
||||
Itertools::intersperse(allowed_origins, ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_origins = sea_orm::Set(Some(allowed_origins));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: this should actually be bytes
|
||||
if let Some(allowed_referers) = payload.allowed_referers {
|
||||
if allowed_referers.is_empty() {
|
||||
uk.allowed_referers = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed_referers on ',' and try to parse them all. error on invalid input
|
||||
let allowed_referers = allowed_referers
|
||||
.split(',')
|
||||
.map(|x| HeaderValue::from_str(x.trim()))
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
.into_iter()
|
||||
.map(|x| Referer::decode(&mut [x].iter()))
|
||||
.collect::<Result<Vec<_>, _>>()?;
|
||||
|
||||
// parse worked. now we can put it back together.
|
||||
// but we can't go directly to String.
|
||||
// so we convert to HeaderValues first
|
||||
let mut header_map = vec![];
|
||||
for x in allowed_referers {
|
||||
x.encode(&mut header_map);
|
||||
}
|
||||
|
||||
// convert HeaderValues to Strings
|
||||
// since we got these from strings, this should always work (unless we figure out using bytes)
|
||||
let allowed_referers = header_map
|
||||
.into_iter()
|
||||
.map(|x| x.to_str().map(|x| x.to_string()))
|
||||
.collect::<Result<Vec<_>, _>>()?;
|
||||
|
||||
// join strings together with commas
|
||||
let allowed_referers: String =
|
||||
Itertools::intersperse(allowed_referers.into_iter(), ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_referers = sea_orm::Set(Some(allowed_referers));
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(allowed_user_agents) = payload.allowed_user_agents {
|
||||
if allowed_user_agents.is_empty() {
|
||||
uk.allowed_user_agents = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed_user_agents on ',' and try to parse them all. error on invalid input
|
||||
let allowed_user_agents = allowed_user_agents
|
||||
.split(',')
|
||||
.filter_map(|x| x.trim().parse::<UserAgent>().ok())
|
||||
// parse worked. convert back to String
|
||||
.map(|x| x.to_string());
|
||||
|
||||
// join the strings together
|
||||
let allowed_user_agents: String =
|
||||
Itertools::intersperse(allowed_user_agents, ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_user_agents = sea_orm::Set(Some(allowed_user_agents));
|
||||
}
|
||||
}
|
||||
|
||||
let uk = if uk.is_changed() {
|
||||
let db_conn = app.db_conn().web3_context("login requires a db")?;
|
||||
|
||||
uk.save(&db_conn)
|
||||
.await
|
||||
.web3_context("Failed saving user key")?
|
||||
} else {
|
||||
uk
|
||||
};
|
||||
|
||||
let uk = uk.try_into_model()?;
|
||||
|
||||
Ok(Json(uk).into_response())
|
||||
}
|
||||
|
||||
/// `GET /user/revert_logs` -- Use a bearer token to get the user's revert logs.
|
||||
#[debug_handler]
|
||||
pub async fn user_revert_logs_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Query(params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let chain_id = get_chain_id_from_params(app.as_ref(), &params)?;
|
||||
let query_start = get_query_start_from_params(&params)?;
|
||||
let page = get_page_from_params(&params)?;
|
||||
|
||||
// TODO: page size from config
|
||||
let page_size = 1_000;
|
||||
|
||||
let mut response = HashMap::new();
|
||||
|
||||
response.insert("page", json!(page));
|
||||
response.insert("page_size", json!(page_size));
|
||||
response.insert("chain_id", json!(chain_id));
|
||||
response.insert("query_start", json!(query_start.timestamp() as u64));
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("getting replica db for user's revert logs")?;
|
||||
|
||||
let uks = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
// TODO: only select the ids
|
||||
let uks: Vec<_> = uks.into_iter().map(|x| x.id).collect();
|
||||
|
||||
// get revert logs
|
||||
let mut q = revert_log::Entity::find()
|
||||
.filter(revert_log::Column::Timestamp.gte(query_start))
|
||||
.filter(revert_log::Column::RpcKeyId.is_in(uks))
|
||||
.order_by_asc(revert_log::Column::Timestamp);
|
||||
|
||||
if chain_id == 0 {
|
||||
// don't do anything
|
||||
} else {
|
||||
// filter on chain id
|
||||
q = q.filter(revert_log::Column::ChainId.eq(chain_id))
|
||||
}
|
||||
|
||||
// query the database for number of items and pages
|
||||
let pages_result = q
|
||||
.clone()
|
||||
.paginate(db_replica.conn(), page_size)
|
||||
.num_items_and_pages()
|
||||
.await?;
|
||||
|
||||
response.insert("num_items", pages_result.number_of_items.into());
|
||||
response.insert("num_pages", pages_result.number_of_pages.into());
|
||||
|
||||
// query the database for the revert logs
|
||||
let revert_logs = q
|
||||
.paginate(db_replica.conn(), page_size)
|
||||
.fetch_page(page)
|
||||
.await?;
|
||||
|
||||
response.insert("revert_logs", json!(revert_logs));
|
||||
|
||||
Ok(Json(response).into_response())
|
||||
}
|
||||
|
||||
/// `GET /user/stats/aggregate` -- Public endpoint for aggregate stats such as bandwidth used and methods requested.
|
||||
#[debug_handler]
|
||||
pub async fn user_stats_aggregated_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
bearer: Option<TypedHeader<Authorization<Bearer>>>,
|
||||
Query(params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let response = query_user_stats(&app, bearer, ¶ms, StatType::Aggregated).await?;
|
||||
|
||||
Ok(response)
|
||||
}
|
||||
|
||||
/// `GET /user/stats/detailed` -- Use a bearer token to get the user's key stats such as bandwidth used and methods requested.
|
||||
///
|
||||
/// If no bearer is provided, detailed stats for all users will be shown.
|
||||
/// View a single user with `?user_id=$x`.
|
||||
/// View a single chain with `?chain_id=$x`.
|
||||
///
|
||||
/// Set `$x` to zero to see all.
|
||||
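/// e.g. `GET /user/stats/detailed?user_id=0&chain_id=1` (illustrative values) shows detailed stats for all users on chain 1.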
///
|
||||
/// TODO: this will change as we add better support for secondary users.
|
||||
#[debug_handler]
|
||||
pub async fn user_stats_detailed_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
bearer: Option<TypedHeader<Authorization<Bearer>>>,
|
||||
Query(params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let response = query_user_stats(&app, bearer, ¶ms, StatType::Detailed).await?;
|
||||
|
||||
Ok(response)
|
||||
}
|
web3_proxy/src/frontend/users/authentication.rs (new file)
@@ -0,0 +1,473 @@
|
||||
//! Handle registration, logins, and managing account data.
|
||||
use crate::app::Web3ProxyApp;
|
||||
use crate::frontend::authorization::{login_is_authorized, RpcSecretKey};
|
||||
use crate::frontend::errors::{Web3ProxyError, Web3ProxyErrorContext, Web3ProxyResponse};
|
||||
use crate::user_token::UserBearerToken;
|
||||
use crate::{PostLogin, PostLoginQuery};
|
||||
use axum::{
|
||||
extract::{Path, Query},
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_client_ip::InsecureClientIp;
|
||||
use axum_macros::debug_handler;
|
||||
use chrono::{TimeZone, Utc};
|
||||
use entities;
|
||||
use entities::{balance, login, pending_login, referee, referrer, rpc_key, user};
|
||||
use ethers::{prelude::Address, types::Bytes};
|
||||
use hashbrown::HashMap;
|
||||
use http::StatusCode;
|
||||
use log::{debug, warn};
|
||||
use migration::sea_orm::prelude::{Decimal, Uuid};
|
||||
use migration::sea_orm::{
|
||||
self, ActiveModelTrait, ColumnTrait, EntityTrait, IntoActiveModel, QueryFilter,
|
||||
TransactionTrait,
|
||||
};
|
||||
use serde_json::json;
|
||||
use siwe::{Message, VerificationOpts};
|
||||
use std::ops::Add;
|
||||
use std::str::FromStr;
|
||||
use std::sync::Arc;
|
||||
use time::{Duration, OffsetDateTime};
|
||||
use ulid::Ulid;
|
||||
|
||||
/// `GET /user/login/:user_address` or `GET /user/login/:user_address/:message_eip` -- Start the "Sign In with Ethereum" (siwe) login flow.
|
||||
///
|
||||
/// `message_eip`s accepted:
|
||||
/// - eip191_bytes
|
||||
/// - eip191_hash
|
||||
/// - eip4361 (default)
|
||||
///
|
||||
/// Coming soon: eip1271
|
||||
///
|
||||
/// This is the initial entrypoint for logging in. Take the response from this endpoint and give it to your user's wallet for signing. POST the response to `/user/login`.
|
||||
///
|
||||
/// Rate limited by IP address.
|
||||
///
|
||||
/// At first I thought about checking that user_address is in our db,
|
||||
/// But there's no need to separate the registration and login flows.
|
||||
/// It is a better UX to just click "login with ethereum" and have the account created if it doesn't exist.
|
||||
/// We can prompt for an email and payment after they log in.
|
||||
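/// Illustrative flow (host and address are hypothetical):
/// 1. `GET https://llamanodes.example/user/login/0xYourAddress` returns a SIWE message
/// 2. the user's wallet signs that message
/// 3. `POST /user/login` with the signed message and signature (see `user_login_post`)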
#[debug_handler]
|
||||
pub async fn user_login_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
InsecureClientIp(ip): InsecureClientIp,
|
||||
// TODO: what does axum's error handling look like if the path fails to parse?
|
||||
Path(mut params): Path<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
login_is_authorized(&app, ip).await?;
|
||||
|
||||
// create a message and save it in redis
|
||||
// TODO: how many seconds? get from config?
|
||||
let expire_seconds: usize = 20 * 60;
|
||||
|
||||
let nonce = Ulid::new();
|
||||
|
||||
let issued_at = OffsetDateTime::now_utc();
|
||||
|
||||
let expiration_time = issued_at.add(Duration::new(expire_seconds as i64, 0));
|
||||
|
||||
// TODO: allow ENS names here?
|
||||
let user_address: Address = params
|
||||
.remove("user_address")
|
||||
.ok_or(Web3ProxyError::BadRouting)?
|
||||
.parse()
|
||||
.or(Err(Web3ProxyError::ParseAddressError))?;
|
||||
|
||||
let login_domain = app
|
||||
.config
|
||||
.login_domain
|
||||
.clone()
|
||||
.unwrap_or_else(|| "llamanodes.com".to_string());
|
||||
|
||||
// TODO: get most of these from the app config
|
||||
let message = Message {
|
||||
// TODO: don't unwrap
|
||||
// TODO: accept a login_domain from the request?
|
||||
domain: login_domain.parse().unwrap(),
|
||||
address: user_address.to_fixed_bytes(),
|
||||
// TODO: config for statement
|
||||
statement: Some("🦙🦙🦙🦙🦙".to_string()),
|
||||
// TODO: don't unwrap
|
||||
uri: format!("https://{}/", login_domain).parse().unwrap(),
|
||||
version: siwe::Version::V1,
|
||||
chain_id: 1,
|
||||
expiration_time: Some(expiration_time.into()),
|
||||
issued_at: issued_at.into(),
|
||||
nonce: nonce.to_string(),
|
||||
not_before: None,
|
||||
request_id: None,
|
||||
resources: vec![],
|
||||
};
|
||||
|
||||
let db_conn = app.db_conn().web3_context("login requires a database")?;
|
||||
|
||||
// massage types to fit in the database. sea-orm does not make this very elegant
|
||||
let uuid = Uuid::from_u128(nonce.into());
|
||||
// we add 1 to expire_seconds just to be sure the database has the key for the full expiration_time
|
||||
let expires_at = Utc
|
||||
.timestamp_opt(expiration_time.unix_timestamp() + 1, 0)
|
||||
.unwrap();
|
||||
|
||||
// we do not limit the number of login attempts. since anyone can request one, be careful that this does not become a DoS vector
|
||||
// add a row to the database for this user
|
||||
let user_pending_login = pending_login::ActiveModel {
|
||||
id: sea_orm::NotSet,
|
||||
nonce: sea_orm::Set(uuid),
|
||||
message: sea_orm::Set(message.to_string()),
|
||||
expires_at: sea_orm::Set(expires_at),
|
||||
imitating_user: sea_orm::Set(None),
|
||||
};
|
||||
|
||||
user_pending_login
|
||||
.save(&db_conn)
|
||||
.await
|
||||
.web3_context("saving user's pending_login")?;
|
||||
|
||||
// there are multiple ways to sign messages and not all wallets support them
|
||||
// TODO: default message eip from config?
|
||||
let message_eip = params
|
||||
.remove("message_eip")
|
||||
.unwrap_or_else(|| "eip4361".to_string());
|
||||
|
||||
let message: String = match message_eip.as_str() {
|
||||
"eip191_bytes" => Bytes::from(message.eip191_bytes().unwrap()).to_string(),
|
||||
"eip191_hash" => Bytes::from(&message.eip191_hash().unwrap()).to_string(),
|
||||
"eip4361" => message.to_string(),
|
||||
_ => {
|
||||
return Err(Web3ProxyError::InvalidEip);
|
||||
}
|
||||
};
|
||||
|
||||
Ok(message.into_response())
|
||||
}
|
||||
|
||||
/// `POST /user/login` - Register or login by posting a signed "siwe" message.
|
||||
/// It is recommended to save the returned bearer token in a cookie.
|
||||
/// The bearer token can be used to authenticate other requests, such as getting the user's stats or modifying the user's profile.
|
||||
#[debug_handler]
|
||||
pub async fn user_login_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
InsecureClientIp(ip): InsecureClientIp,
|
||||
Query(query): Query<PostLoginQuery>,
|
||||
Json(payload): Json<PostLogin>,
|
||||
) -> Web3ProxyResponse {
|
||||
login_is_authorized(&app, ip).await?;
|
||||
|
||||
// TODO: this seems too verbose. how can we simply convert a String into a [u8; 65]
|
||||
let their_sig_bytes = Bytes::from_str(&payload.sig).web3_context("parsing sig")?;
|
||||
if their_sig_bytes.len() != 65 {
|
||||
return Err(Web3ProxyError::InvalidSignatureLength);
|
||||
}
|
||||
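// NOTE: since the length was checked above, this loop could likely be replaced with `their_sig.copy_from_slice(&their_sig_bytes)`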
let mut their_sig: [u8; 65] = [0; 65];
|
||||
for x in 0..65 {
|
||||
their_sig[x] = their_sig_bytes[x]
|
||||
}
|
||||
|
||||
// we can't trust that they didn't tamper with the message in some way. like some clients return it hex encoded
|
||||
// TODO: checking 0x seems fragile, but I think it will be fine. siwe message text shouldn't ever start with 0x
|
||||
let their_msg: Message = if payload.msg.starts_with("0x") {
|
||||
let their_msg_bytes =
|
||||
Bytes::from_str(&payload.msg).web3_context("parsing payload message")?;
|
||||
|
||||
// TODO: lossy or no?
|
||||
String::from_utf8_lossy(their_msg_bytes.as_ref())
|
||||
.parse::<siwe::Message>()
|
||||
.web3_context("parsing hex string message")?
|
||||
} else {
|
||||
payload
|
||||
.msg
|
||||
.parse::<siwe::Message>()
|
||||
.web3_context("parsing string message")?
|
||||
};
|
||||
|
||||
// the only part of the message we will trust is their nonce
|
||||
// TODO: this is fragile. have a helper function/struct for redis keys
|
||||
let login_nonce = UserBearerToken::from_str(&their_msg.nonce)?;
|
||||
|
||||
// fetch the message we gave them from our database
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("Getting database connection")?;
|
||||
|
||||
// massage type for the db
|
||||
let login_nonce_uuid: Uuid = login_nonce.clone().into();
|
||||
|
||||
let user_pending_login = pending_login::Entity::find()
|
||||
.filter(pending_login::Column::Nonce.eq(login_nonce_uuid))
|
||||
.one(db_replica.conn())
|
||||
.await
|
||||
.web3_context("database error while finding pending_login")?
|
||||
.web3_context("login nonce not found")?;
|
||||
|
||||
let our_msg: siwe::Message = user_pending_login
|
||||
.message
|
||||
.parse()
|
||||
.web3_context("parsing siwe message")?;
|
||||
|
||||
// default options are fine. the message includes timestamp and domain and nonce
|
||||
let verify_config = VerificationOpts::default();
|
||||
|
||||
// Check with both verify and verify_eip191
|
||||
if let Err(err_1) = our_msg
|
||||
.verify(&their_sig, &verify_config)
|
||||
.await
|
||||
.web3_context("verifying signature against our local message")
|
||||
{
|
||||
// verification method 1 failed. try eip191
|
||||
if let Err(err_191) = our_msg
|
||||
.verify_eip191(&their_sig)
|
||||
.web3_context("verifying eip191 signature against our local message")
|
||||
{
|
||||
let db_conn = app
|
||||
.db_conn()
|
||||
.web3_context("deleting expired pending logins requires a db")?;
|
||||
|
||||
// delete ALL expired rows.
|
||||
let now = Utc::now();
|
||||
let delete_result = pending_login::Entity::delete_many()
|
||||
.filter(pending_login::Column::ExpiresAt.lte(now))
|
||||
.exec(&db_conn)
|
||||
.await?;
|
||||
|
||||
// TODO: emit a stat? if this is high something weird might be happening
|
||||
debug!("cleared expired pending_logins: {:?}", delete_result);
|
||||
|
||||
return Err(Web3ProxyError::EipVerificationFailed(
|
||||
Box::new(err_1),
|
||||
Box::new(err_191),
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: limit columns or load whole user?
|
||||
let caller = user::Entity::find()
|
||||
.filter(user::Column::Address.eq(our_msg.address.as_ref()))
|
||||
.one(db_replica.conn())
|
||||
.await?;
|
||||
|
||||
let db_conn = app.db_conn().web3_context("login requires a db")?;
|
||||
|
||||
let (caller, user_rpc_keys, status_code) = match caller {
|
||||
None => {
|
||||
// user does not exist yet
|
||||
|
||||
// check the invite code
|
||||
// TODO: more advanced invite codes that set different request/minute and concurrency limits
|
||||
// Do nothing if the app config has no invite code (then there is effectively no invitation gate, and the user can proceed with a free tier)
|
||||
|
||||
// Prematurely return if there is a wrong invite code
|
||||
if let Some(invite_code) = &app.config.invite_code {
|
||||
if query.invite_code.as_ref() != Some(invite_code) {
|
||||
return Err(Web3ProxyError::InvalidInviteCode);
|
||||
}
|
||||
}
|
||||
|
||||
let txn = db_conn.begin().await?;
|
||||
|
||||
// First add a user
|
||||
|
||||
// the only thing we need from them is an address
|
||||
// everything else is optional
|
||||
// TODO: different invite codes should allow different levels
|
||||
// TODO: maybe decrement a count on the invite code?
|
||||
// TODO: There will be two different transactions. The first one inserts the user, the second one marks the user as being referred
|
||||
let caller = user::ActiveModel {
|
||||
address: sea_orm::Set(our_msg.address.into()),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let caller = caller.insert(&txn).await?;
|
||||
|
||||
// create the user's first api key
|
||||
let rpc_secret_key = RpcSecretKey::new();
|
||||
|
||||
let user_rpc_key = rpc_key::ActiveModel {
|
||||
user_id: sea_orm::Set(caller.id.clone()),
|
||||
secret_key: sea_orm::Set(rpc_secret_key.into()),
|
||||
description: sea_orm::Set(None),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let user_rpc_key = user_rpc_key
|
||||
.insert(&txn)
|
||||
.await
|
||||
.web3_context("Failed saving new user key")?;
|
||||
|
||||
// We should also create the balance entry ...
|
||||
let user_balance = balance::ActiveModel {
|
||||
user_id: sea_orm::Set(caller.id.clone()),
|
||||
available_balance: sea_orm::Set(Decimal::new(0, 0)),
|
||||
used_balance: sea_orm::Set(Decimal::new(0, 0)),
|
||||
..Default::default()
|
||||
};
|
||||
user_balance.insert(&txn).await?;
|
||||
|
||||
let user_rpc_keys = vec![user_rpc_key];
|
||||
|
||||
// Also add a part for the invite code, i.e. who invited this guy
|
||||
|
||||
// save the user and key to the database
|
||||
txn.commit().await?;
|
||||
|
||||
let txn = db_conn.begin().await?;
|
||||
// First, optionally catch a referral code from the parameters if there is any
|
||||
debug!("Refferal code is: {:?}", payload.referral_code);
|
||||
if let Some(referral_code) = payload.referral_code.as_ref() {
|
||||
// If it is not inside, also check in the database
|
||||
warn!("Using register referral code: {:?}", referral_code);
|
||||
let user_referrer = referrer::Entity::find()
|
||||
.filter(referrer::Column::ReferralCode.eq(referral_code))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::InvalidReferralCode)?;
|
||||
|
||||
// Create a new referee entry in the database,
// marking this user as referred by the matched referrer (a duplicate insert, if any, is ignored)
// The first person to make the referral gets all the credits
|
||||
let used_referral = referee::ActiveModel {
|
||||
used_referral_code: sea_orm::Set(user_referrer.id),
|
||||
user_id: sea_orm::Set(caller.id),
|
||||
credits_applied_for_referee: sea_orm::Set(false),
|
||||
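// Decimal::new(0, 10) is zero, stored with a scale of 10 decimal places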
credits_applied_for_referrer: sea_orm::Set(Decimal::new(0, 10)),
|
||||
..Default::default()
|
||||
};
|
||||
used_referral.insert(&txn).await?;
|
||||
}
|
||||
txn.commit().await?;
|
||||
|
||||
(caller, user_rpc_keys, StatusCode::CREATED)
|
||||
}
|
||||
Some(caller) => {
|
||||
// Let's say that a user that exists can actually also redeem a key in retrospect...
|
||||
let txn = db_conn.begin().await?;
|
||||
// TODO: Move this into a common variable outside ...
|
||||
// First, optionally catch a referral code from the parameters if there is any
|
||||
if let Some(referral_code) = payload.referral_code.as_ref() {
|
||||
// If it is not inside, also check in the database
|
||||
warn!("Using referral code: {:?}", referral_code);
|
||||
let user_referrer = referrer::Entity::find()
|
||||
.filter(referrer::Column::ReferralCode.eq(referral_code))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::BadRequest(format!(
|
||||
"The referral_link you provided does not exist {}",
|
||||
referral_code
|
||||
)))?;
|
||||
|
||||
// Create a new referee entry in the database,
// marking this user as referred by the matched referrer (a duplicate insert, if any, is ignored)
// The first person to make the referral gets all the credits
|
||||
let used_referral = referee::ActiveModel {
|
||||
used_referral_code: sea_orm::Set(user_referrer.id),
|
||||
user_id: sea_orm::Set(caller.id),
|
||||
credits_applied_for_referee: sea_orm::Set(false),
|
||||
credits_applied_for_referrer: sea_orm::Set(Decimal::new(0, 10)),
|
||||
..Default::default()
|
||||
};
|
||||
used_referral.insert(&txn).await?;
|
||||
}
|
||||
txn.commit().await?;
|
||||
|
||||
// the user is already registered
|
||||
let user_rpc_keys = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(caller.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
(caller, user_rpc_keys, StatusCode::OK)
|
||||
}
|
||||
};
|
||||
|
||||
// create a bearer token for the user.
|
||||
let user_bearer_token = UserBearerToken::default();
|
||||
|
||||
// json response with everything in it
|
||||
// we could return just the bearer token, but I think they will always request api keys and the user profile
|
||||
let response_json = json!({
|
||||
"rpc_keys": user_rpc_keys
|
||||
.into_iter()
|
||||
.map(|user_rpc_key| (user_rpc_key.id, user_rpc_key))
|
||||
.collect::<HashMap<_, _>>(),
|
||||
"bearer_token": user_bearer_token,
|
||||
"user": caller,
|
||||
});
|
||||
|
||||
let response = (status_code, Json(response_json)).into_response();
|
||||
|
||||
// add bearer to the database
|
||||
|
||||
// expire in 4 weeks
|
||||
let expires_at = Utc::now()
|
||||
.checked_add_signed(chrono::Duration::weeks(4))
|
||||
.unwrap();
|
||||
|
||||
let user_login = login::ActiveModel {
|
||||
id: sea_orm::NotSet,
|
||||
bearer_token: sea_orm::Set(user_bearer_token.uuid()),
|
||||
user_id: sea_orm::Set(caller.id),
|
||||
expires_at: sea_orm::Set(expires_at),
|
||||
read_only: sea_orm::Set(false),
|
||||
};
|
||||
|
||||
user_login
|
||||
.save(&db_conn)
|
||||
.await
|
||||
.web3_context("saving user login")?;
|
||||
|
||||
if let Err(err) = user_pending_login
|
||||
.into_active_model()
|
||||
.delete(&db_conn)
|
||||
.await
|
||||
{
|
||||
warn!("Failed to delete nonce:{}: {}", login_nonce.0, err);
|
||||
}
|
||||
|
||||
Ok(response)
|
||||
}
|
||||
|
||||
/// `POST /user/logout` - Forget the bearer token in the `Authentication` header.
|
||||
#[debug_handler]
|
||||
pub async fn user_logout_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let user_bearer = UserBearerToken::try_from(bearer)?;
|
||||
|
||||
let db_conn = app
|
||||
.db_conn()
|
||||
.web3_context("database needed for user logout")?;
|
||||
|
||||
if let Err(err) = login::Entity::delete_many()
|
||||
.filter(login::Column::BearerToken.eq(user_bearer.uuid()))
|
||||
.exec(&db_conn)
|
||||
.await
|
||||
{
|
||||
debug!("Failed to delete {}: {}", user_bearer.redis_key(), err);
|
||||
}
|
||||
|
||||
let now = Utc::now();
|
||||
|
||||
// also delete any expired logins
|
||||
let delete_result = login::Entity::delete_many()
|
||||
.filter(login::Column::ExpiresAt.lte(now))
|
||||
.exec(&db_conn)
|
||||
.await;
|
||||
|
||||
debug!("Deleted expired logins: {:?}", delete_result);
|
||||
|
||||
// also delete any expired pending logins
|
||||
let delete_result = pending_login::Entity::delete_many()
|
||||
.filter(pending_login::Column::ExpiresAt.lte(now))
|
||||
.exec(&db_conn)
|
||||
.await;
|
||||
|
||||
debug!("Deleted expired pending logins: {:?}", delete_result);
|
||||
|
||||
// TODO: what should the response be? probably json something
|
||||
Ok("goodbye".into_response())
|
||||
}
|
web3_proxy/src/frontend/users/mod.rs (new file)
@@ -0,0 +1,83 @@
|
||||
//! Handle registration, logins, and managing account data.
|
||||
pub mod authentication;
|
||||
pub mod payment;
|
||||
pub mod referral;
|
||||
pub mod rpc_keys;
|
||||
pub mod stats;
|
||||
pub mod subuser;
|
||||
|
||||
use super::errors::{Web3ProxyErrorContext, Web3ProxyResponse};
|
||||
use crate::app::Web3ProxyApp;
|
||||
|
||||
use axum::{
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_macros::debug_handler;
|
||||
use entities;
|
||||
use entities::user;
|
||||
use migration::sea_orm::{self, ActiveModelTrait};
|
||||
use serde::Deserialize;
|
||||
use std::sync::Arc;
|
||||
|
||||
/// `GET /user` -- Use a bearer token to get the user's profile.
|
||||
///
|
||||
/// - the email address of a user if they opted in to get contacted via email
|
||||
///
|
||||
/// TODO: this will change as we add better support for secondary users.
|
||||
#[debug_handler]
|
||||
pub async fn user_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer_token)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer_token).await?;
|
||||
|
||||
Ok(Json(user).into_response())
|
||||
}
|
||||
|
||||
/// the JSON input to the `post_user` handler.
|
||||
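/// e.g. `{ "email": "user@example.com" }` (hypothetical address); an empty string clears the stored email.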
#[derive(Debug, Deserialize)]
|
||||
pub struct UserPost {
|
||||
email: Option<String>,
|
||||
}
|
||||
|
||||
/// `POST /user` -- modify the account connected to the bearer token in the `Authentication` header.
|
||||
#[debug_handler]
|
||||
pub async fn user_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer_token)): TypedHeader<Authorization<Bearer>>,
|
||||
Json(payload): Json<UserPost>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer_token).await?;
|
||||
|
||||
let mut user: user::ActiveModel = user.into();
|
||||
|
||||
// update the email address
|
||||
if let Some(x) = payload.email {
|
||||
// TODO: only Set if no change
|
||||
if x.is_empty() {
|
||||
user.email = sea_orm::Set(None);
|
||||
} else {
|
||||
// TODO: do some basic validation
|
||||
// TODO: don't set immediately, send a confirmation email first
|
||||
// TODO: compare first? or is sea orm smart enough to do that for us?
|
||||
user.email = sea_orm::Set(Some(x));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: what else can we update here? password hash? subscription to newsletter?
|
||||
|
||||
let user = if user.is_changed() {
|
||||
let db_conn = app.db_conn().web3_context("Getting database connection")?;
|
||||
|
||||
user.save(&db_conn).await?
|
||||
} else {
|
||||
// no changes. no need to touch the database
|
||||
user
|
||||
};
|
||||
|
||||
let user: user::Model = user.try_into().web3_context("Returning updated user")?;
|
||||
|
||||
Ok(Json(user).into_response())
|
||||
}
|
web3_proxy/src/frontend/users/payment.rs (new file)
@@ -0,0 +1,499 @@
|
||||
use crate::app::Web3ProxyApp;
|
||||
use crate::frontend::authorization::Authorization as InternalAuthorization;
|
||||
use crate::frontend::errors::{Web3ProxyError, Web3ProxyResponse};
|
||||
use crate::rpcs::request::OpenRequestResult;
|
||||
use anyhow::{anyhow, Context};
|
||||
use axum::{
|
||||
extract::Path,
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_macros::debug_handler;
|
||||
use entities::{balance, increase_on_chain_balance_receipt, user, user_tier};
|
||||
use ethers::abi::{AbiEncode, ParamType};
|
||||
use ethers::types::{Address, TransactionReceipt, H256, U256};
|
||||
use ethers::utils::{hex, keccak256};
|
||||
use hashbrown::HashMap;
|
||||
use hex_fmt::HexFmt;
|
||||
use http::StatusCode;
|
||||
use log::{debug, info, warn, Level};
|
||||
use migration::sea_orm;
|
||||
use migration::sea_orm::prelude::Decimal;
|
||||
use migration::sea_orm::ActiveModelTrait;
|
||||
use migration::sea_orm::ColumnTrait;
|
||||
use migration::sea_orm::EntityTrait;
|
||||
use migration::sea_orm::IntoActiveModel;
|
||||
use migration::sea_orm::QueryFilter;
|
||||
use migration::sea_orm::TransactionTrait;
|
||||
use serde_json::json;
|
||||
use std::sync::Arc;
|
||||
|
||||
/// Implements any logic related to payments
|
||||
/// This was split out of the `user` module because that file was getting too large
|
||||
///
|
||||
/// `GET /user/balance` -- Use a bearer token to get the user's balance and spend.
|
||||
///
|
||||
/// - show balance in USD
|
||||
/// - show deposits history (currency, amounts, transaction id)
|
||||
#[debug_handler]
|
||||
pub async fn user_balance_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (_user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app.db_replica().context("Getting database connection")?;
|
||||
|
||||
// Just return the balance for the user
|
||||
let user_balance = match balance::Entity::find()
|
||||
.filter(balance::Column::UserId.eq(_user.id))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
{
|
||||
Some(x) => x.available_balance,
|
||||
None => Decimal::from(0), // That means the user has no balance as of yet
|
||||
// (user exists, but balance entry does not exist)
|
||||
// In that case add this guy here
|
||||
// Err(FrontendErrorResponse::BadRequest("User not found!"))
|
||||
};
|
||||
|
||||
let mut response = HashMap::new();
|
||||
response.insert("balance", json!(user_balance));
|
||||
|
||||
// TODO: Gotta create a new table for the spend part
|
||||
Ok(Json(response).into_response())
|
||||
}
|
||||
|
||||
/// `GET /user/deposits` -- Use a bearer token to get the user's balance and spend.
|
||||
///
|
||||
/// - shows a list of all deposits, including their chain-id, amount and tx-hash
|
||||
#[debug_handler]
|
||||
pub async fn user_deposits_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app.db_replica().context("Getting database connection")?;
|
||||
|
||||
// Filter by user ...
|
||||
let receipts = increase_on_chain_balance_receipt::Entity::find()
|
||||
.filter(increase_on_chain_balance_receipt::Column::DepositToUserId.eq(user.id))
|
||||
.all(db_replica.conn())
|
||||
.await?;
|
||||
|
||||
// Return the response, all except the user ...
|
||||
let mut response = HashMap::new();
|
||||
let receipts = receipts
|
||||
.into_iter()
|
||||
.map(|x| {
|
||||
let mut out = HashMap::new();
|
||||
out.insert("amount", serde_json::Value::String(x.amount.to_string()));
|
||||
out.insert("chain_id", serde_json::Value::Number(x.chain_id.into()));
|
||||
out.insert("tx_hash", serde_json::Value::String(x.tx_hash));
|
||||
out
|
||||
})
|
||||
.collect::<Vec<_>>();
|
||||
response.insert(
|
||||
"user",
|
||||
json!(format!("{:?}", Address::from_slice(&user.address))),
|
||||
);
|
||||
response.insert("deposits", json!(receipts));
|
||||
|
||||
Ok(Json(response).into_response())
|
||||
}
|
||||
|
||||
/// `POST /user/balance/:tx_hash` -- Manually process a confirmed txid to update a user's balance.
|
||||
///
|
||||
/// We will subscribe to events to watch for any user deposits, but sometimes events can be missed.
|
||||
/// TODO: change this. just have a /tx/:txhash that is open to anyone. rate limit like we rate limit /login
|
||||
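/// Illustrative request (hash is hypothetical): `POST /user/balance/0x59ef...c0de` with the bearer token in the `Authorization` header.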
#[debug_handler]
|
||||
pub async fn user_balance_post(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Path(mut params): Path<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
// Requiring authorization here is intentional so people don't spam this endpoint, as it is not "cheap"
|
||||
// Check that the user is logged-in and authorized. We don't need a semaphore here btw
|
||||
let (_, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
// Get the transaction hash, and the amount that the user wants to top up by.
|
||||
// Let's say that for now, 1 credit is equivalent to 1 dollar (assuming any stablecoin has a 1:1 peg)
|
||||
let tx_hash: H256 = params
|
||||
.remove("tx_hash")
|
||||
// TODO: map_err so this becomes a 500. routing must be bad
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"You have not provided the tx_hash in which you paid in".to_string(),
|
||||
))?
|
||||
.parse()
|
||||
.context("unable to parse tx_hash")?;
|
||||
|
||||
let db_conn = app.db_conn().context("query_user_stats needs a db")?;
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.context("query_user_stats needs a db replica")?;
|
||||
|
||||
// Return straight false if the tx was already added ...
|
||||
let receipt = increase_on_chain_balance_receipt::Entity::find()
|
||||
.filter(increase_on_chain_balance_receipt::Column::TxHash.eq(hex::encode(tx_hash)))
|
||||
.one(&db_conn)
|
||||
.await?;
|
||||
if receipt.is_some() {
|
||||
return Err(Web3ProxyError::BadRequest(
|
||||
"The transaction you provided has already been accounted for!".to_string(),
|
||||
));
|
||||
}
|
||||
debug!("Receipt: {:?}", receipt);
|
||||
|
||||
// Iterate through all logs, and add them to the transaction list if there is any
|
||||
// Address will be hardcoded in the config
|
||||
let authorization = Arc::new(InternalAuthorization::internal(None).unwrap());
|
||||
|
||||
// Just make an rpc request; not sure we need to go through this very extensive code path
|
||||
let transaction_receipt: TransactionReceipt = match app
|
||||
.balanced_rpcs
|
||||
.best_available_rpc(&authorization, None, &[], None, None)
|
||||
.await
|
||||
{
|
||||
Ok(OpenRequestResult::Handle(handle)) => {
|
||||
debug!(
|
||||
"Params are: {:?}",
|
||||
&vec![format!("0x{}", hex::encode(tx_hash))]
|
||||
);
|
||||
handle
|
||||
.request(
|
||||
"eth_getTransactionReceipt",
|
||||
&vec![format!("0x{}", hex::encode(tx_hash))],
|
||||
Level::Trace.into(),
|
||||
None,
|
||||
)
|
||||
.await
|
||||
// TODO: What kind of error would be here
|
||||
.map_err(|err| Web3ProxyError::Anyhow(err.into()))
|
||||
}
|
||||
Ok(_) => {
|
||||
// TODO: @Brllan Is this the right error message?
|
||||
Err(Web3ProxyError::NoHandleReady)
|
||||
}
|
||||
Err(err) => {
|
||||
log::trace!(
|
||||
"cancelled funneling transaction {} from: {:?}",
|
||||
tx_hash,
|
||||
err,
|
||||
);
|
||||
Err(err)
|
||||
}
|
||||
}?;
|
||||
debug!("Transaction receipt is: {:?}", transaction_receipt);
|
||||
let accepted_token: Address = match app
|
||||
.balanced_rpcs
|
||||
.best_available_rpc(&authorization, None, &[], None, None)
|
||||
.await
|
||||
{
|
||||
Ok(OpenRequestResult::Handle(handle)) => {
|
||||
let mut accepted_tokens_request_object: serde_json::Map<String, serde_json::Value> =
|
||||
serde_json::Map::new();
|
||||
// We want to send a request to the contract
|
||||
accepted_tokens_request_object.insert(
|
||||
"to".to_owned(),
|
||||
serde_json::Value::String(format!(
|
||||
"{:?}",
|
||||
app.config.deposit_factory_contract.clone()
|
||||
)),
|
||||
);
|
||||
// We then want to include the function that we want to call
|
||||
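// NOTE: the ABI function selector is only the first 4 bytes of this keccak256 hash;
// sending the full 32-byte hash should still work for a no-argument call since the extra calldata bytes are ignored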
accepted_tokens_request_object.insert(
|
||||
"data".to_owned(),
|
||||
serde_json::Value::String(format!(
|
||||
"0x{}",
|
||||
HexFmt(keccak256("get_approved_tokens()".to_owned().into_bytes()))
|
||||
)),
|
||||
// hex::encode(
|
||||
);
|
||||
let params = serde_json::Value::Array(vec![
|
||||
serde_json::Value::Object(accepted_tokens_request_object),
|
||||
serde_json::Value::String("latest".to_owned()),
|
||||
]);
|
||||
debug!("Params are: {:?}", ¶ms);
|
||||
let accepted_token: String = handle
|
||||
.request("eth_call", ¶ms, Level::Trace.into(), None)
|
||||
.await
|
||||
// TODO: What kind of error would be here
|
||||
.map_err(|err| Web3ProxyError::Anyhow(err.into()))?;
|
||||
// Read the last
|
||||
debug!("Accepted token response is: {:?}", accepted_token);
|
||||
accepted_token[accepted_token.len() - 40..]
|
||||
.parse::<Address>()
|
||||
.map_err(|err| Web3ProxyError::Anyhow(err.into()))
|
||||
}
|
||||
Ok(_) => {
|
||||
// TODO: @Brllan Is this the right error message?
|
||||
Err(Web3ProxyError::NoHandleReady)
|
||||
}
|
||||
Err(err) => {
|
||||
log::trace!(
|
||||
"cancelled funneling transaction {} from: {:?}",
|
||||
tx_hash,
|
||||
err,
|
||||
);
|
||||
Err(err)
|
||||
}
|
||||
}?;
|
||||
debug!("Accepted token is: {:?}", accepted_token);
|
||||
let decimals: u32 = match app
|
||||
.balanced_rpcs
|
||||
.best_available_rpc(&authorization, None, &[], None, None)
|
||||
.await
|
||||
{
|
||||
Ok(OpenRequestResult::Handle(handle)) => {
|
||||
// Now get decimals points of the stablecoin
|
||||
let mut token_decimals_request_object: serde_json::Map<String, serde_json::Value> =
|
||||
serde_json::Map::new();
|
||||
token_decimals_request_object.insert(
|
||||
"to".to_owned(),
|
||||
serde_json::Value::String(format!("0x{}", HexFmt(accepted_token))),
|
||||
);
|
||||
token_decimals_request_object.insert(
|
||||
"data".to_owned(),
|
||||
serde_json::Value::String(format!(
|
||||
"0x{}",
|
||||
HexFmt(keccak256("decimals()".to_owned().into_bytes()))
|
||||
)),
|
||||
);
|
||||
let params = serde_json::Value::Array(vec![
|
||||
serde_json::Value::Object(token_decimals_request_object),
|
||||
serde_json::Value::String("latest".to_owned()),
|
||||
]);
|
||||
debug!("ERC20 Decimal request params are: {:?}", ¶ms);
|
||||
let decimals: String = handle
|
||||
.request("eth_call", ¶ms, Level::Trace.into(), None)
|
||||
.await
|
||||
.map_err(|err| Web3ProxyError::Anyhow(err.into()))?;
|
||||
debug!("Decimals response is: {:?}", decimals);
|
||||
u32::from_str_radix(&decimals[2..], 16)
|
||||
.map_err(|err| Web3ProxyError::Anyhow(err.into()))
|
||||
}
|
||||
Ok(_) => {
|
||||
// TODO: @Brllan Is this the right error message?
|
||||
Err(Web3ProxyError::NoHandleReady)
|
||||
}
|
||||
Err(err) => {
|
||||
log::trace!(
|
||||
"cancelled funneling transaction {} from: {:?}",
|
||||
tx_hash,
|
||||
err,
|
||||
);
|
||||
Err(err)
|
||||
}
|
||||
}?;
|
||||
debug!("Decimals are: {:?}", decimals);
|
||||
debug!("Tx receipt: {:?}", transaction_receipt);
|
||||
|
||||
// Go through all logs; this should probably capture it,
|
||||
// At least according to this Stack Exchange answer, log data is just a concatenation of the underlying types (like a struct)
|
||||
// https://ethereum.stackexchange.com/questions/87653/how-to-decode-log-event-of-my-transaction-log
|
||||
|
||||
let deposit_contract = match app.config.deposit_factory_contract {
|
||||
Some(x) => Ok(x),
|
||||
None => Err(Web3ProxyError::Anyhow(anyhow!(
|
||||
"A deposit_contract must be provided in the config to parse payments"
|
||||
))),
|
||||
}?;
|
||||
let deposit_topic = match app.config.deposit_topic {
|
||||
Some(x) => Ok(x),
|
||||
None => Err(Web3ProxyError::Anyhow(anyhow!(
|
||||
"A deposit_topic must be provided in the config to parse payments"
|
||||
))),
|
||||
}?;
|
||||
|
||||
// Make sure there is only a single log within that transaction ...
|
||||
// I don't know how to best cover the case that there might be multiple logs inside
|
||||
|
||||
for log in transaction_receipt.logs {
|
||||
if log.address != deposit_contract {
|
||||
debug!(
|
||||
"Out: Log is not relevant, as it is not directed to the deposit contract {:?} {:?}",
|
||||
format!("{:?}", log.address),
|
||||
deposit_contract
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
// Get the topics out
|
||||
let topic: H256 = H256::from(log.topics.get(0).unwrap().to_owned());
|
||||
if topic != deposit_topic {
|
||||
debug!(
|
||||
"Out: Topic is not relevant: {:?} {:?}",
|
||||
topic, deposit_topic
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
// TODO: Will this work? Depends how logs are encoded
|
||||
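// assumed (non-indexed) event data layout: (address recipient, address token, uint256 amount), matching the decode below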
let (recipient_account, token, amount): (Address, Address, U256) = match ethers::abi::decode(
|
||||
&[
|
||||
ParamType::Address,
|
||||
ParamType::Address,
|
||||
ParamType::Uint(256usize),
|
||||
],
|
||||
&log.data,
|
||||
) {
|
||||
Ok(tpl) => (
|
||||
tpl.get(0)
|
||||
.unwrap()
|
||||
.clone()
|
||||
.into_address()
|
||||
.context("Could not decode recipient")?,
|
||||
tpl.get(1)
|
||||
.unwrap()
|
||||
.clone()
|
||||
.into_address()
|
||||
.context("Could not decode token")?,
|
||||
tpl.get(2)
|
||||
.unwrap()
|
||||
.clone()
|
||||
.into_uint()
|
||||
.context("Could not decode amount")?,
|
||||
),
|
||||
Err(err) => {
|
||||
warn!("Out: Could not decode! {:?}", err);
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
// return early if amount is 0
|
||||
if amount == U256::from(0) {
|
||||
warn!(
|
||||
"Out: Found log has amount = 0 {:?}. This should never be the case according to the smart contract",
|
||||
amount
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
// Skip if no accepted token. Right now we only accept a single stablecoin as input
|
||||
if token != accepted_token {
|
||||
warn!(
|
||||
"Out: Token is not accepted: {:?} != {:?}",
|
||||
token, accepted_token
|
||||
);
|
||||
continue;
|
||||
}
|
||||
|
||||
info!(
|
||||
"Found deposit transaction for: {:?} {:?} {:?}",
|
||||
recipient_account, token, amount
|
||||
);
|
||||
|
||||
// Encoding is inefficient, revisit later
|
||||
let recipient = match user::Entity::find()
|
||||
.filter(user::Column::Address.eq(&recipient_account.encode()[12..]))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
{
|
||||
Some(x) => Ok(x),
|
||||
None => Err(Web3ProxyError::BadRequest(
|
||||
"The user must have signed up first. They are currently not signed up!".to_string(),
|
||||
)),
|
||||
}?;
|
||||
|
||||
// For now we only accept stablecoins
|
||||
// And we hardcode the peg (later we would have to depeg this, for example
|
||||
// 1$ = Decimal(1) for any stablecoin
|
||||
// TODO: Let's assume that people don't buy too much at _once_, we do support >$1M which should be fine for now
|
||||
debug!("Arithmetic is: {:?} {:?}", amount, decimals);
|
||||
debug!(
|
||||
"Decimals arithmetic is: {:?} {:?}",
|
||||
Decimal::from(amount.as_u128()),
|
||||
Decimal::from(10_u64.pow(decimals))
|
||||
);
|
||||
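// e.g. for a 6-decimal stablecoin, a raw on-chain amount of 10_000_000 becomes Decimal 10.000000 after set_scale(6)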
let mut amount = Decimal::from(amount.as_u128());
|
||||
let _ = amount.set_scale(decimals);
|
||||
debug!("Amount is: {:?}", amount);
|
||||
|
||||
// Check if the item is in the database. If it is not, then add it into the database
|
||||
let user_balance = balance::Entity::find()
|
||||
.filter(balance::Column::UserId.eq(recipient.id))
|
||||
.one(&db_conn)
|
||||
.await?;
|
||||
|
||||
// Get the premium user-tier
|
||||
let premium_user_tier = user_tier::Entity::find()
|
||||
.filter(user_tier::Column::Title.eq("Premium"))
|
||||
.one(&db_conn)
|
||||
.await?
|
||||
.context("Could not find 'Premium' Tier in user-database")?;
|
||||
|
||||
let txn = db_conn.begin().await?;
|
||||
match user_balance {
|
||||
Some(user_balance) => {
|
||||
let balance_plus_amount = user_balance.available_balance + amount;
|
||||
info!("New user balance is: {:?}", balance_plus_amount);
|
||||
// Update the entry, adding the balance
|
||||
let mut active_user_balance = user_balance.into_active_model();
|
||||
active_user_balance.available_balance = sea_orm::Set(balance_plus_amount);
|
||||
|
||||
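// Decimal::new(10, 0) is 10 credits, i.e. the $10 premium threshold under the 1:1 stablecoin peg assumed above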
if balance_plus_amount >= Decimal::new(10, 0) {
|
||||
// Also make the user premium at this point ...
|
||||
let mut active_recipient = recipient.clone().into_active_model();
|
||||
// Make the recipient premium "Effectively Unlimited"
|
||||
active_recipient.user_tier_id = sea_orm::Set(premium_user_tier.id);
|
||||
active_recipient.save(&txn).await?;
|
||||
}
|
||||
|
||||
debug!("New user balance model is: {:?}", active_user_balance);
|
||||
active_user_balance.save(&txn).await?;
|
||||
// txn.commit().await?;
|
||||
// user_balance
|
||||
}
|
||||
None => {
|
||||
// Create the entry with the respective balance
|
||||
let active_user_balance = balance::ActiveModel {
|
||||
available_balance: sea_orm::ActiveValue::Set(amount),
|
||||
user_id: sea_orm::ActiveValue::Set(recipient.id),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
if amount >= Decimal::new(10, 0) {
|
||||
// Also make the user premium at this point ...
|
||||
let mut active_recipient = recipient.clone().into_active_model();
|
||||
// Make the recipient premium "Effectively Unlimited"
|
||||
active_recipient.user_tier_id = sea_orm::Set(premium_user_tier.id);
|
||||
active_recipient.save(&txn).await?;
|
||||
}
|
||||
|
||||
info!("New user balance model is: {:?}", active_user_balance);
|
||||
active_user_balance.save(&txn).await?;
|
||||
// txn.commit().await?;
|
||||
// user_balance // .try_into_model().unwrap()
|
||||
}
|
||||
};
|
||||
debug!("Setting tx_hash: {:?}", tx_hash);
|
||||
let receipt = increase_on_chain_balance_receipt::ActiveModel {
|
||||
tx_hash: sea_orm::ActiveValue::Set(hex::encode(tx_hash)),
|
||||
chain_id: sea_orm::ActiveValue::Set(app.config.chain_id),
|
||||
amount: sea_orm::ActiveValue::Set(amount),
|
||||
deposit_to_user_id: sea_orm::ActiveValue::Set(recipient.id),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
receipt.save(&txn).await?;
|
||||
txn.commit().await?;
|
||||
debug!("Saved to db");
|
||||
|
||||
let response = (
|
||||
StatusCode::CREATED,
|
||||
Json(json!({
|
||||
"tx_hash": tx_hash,
|
||||
"amount": amount
|
||||
})),
|
||||
)
|
||||
.into_response();
|
||||
// Return early if the log was added, assume there is at most one valid log per transaction
|
||||
return Ok(response.into());
|
||||
}
|
||||
|
||||
Err(Web3ProxyError::BadRequest(
|
||||
"No such transaction was found, or token is not supported!".to_string(),
|
||||
))
|
||||
}
|
web3_proxy/src/frontend/users/referral.rs (new file)
@@ -0,0 +1,87 @@
|
||||
//! Handle registration, logins, and managing account data.
|
||||
use crate::app::Web3ProxyApp;
|
||||
use crate::frontend::errors::{Web3ProxyError, Web3ProxyResponse};
|
||||
use crate::referral_code::ReferralCode;
|
||||
use anyhow::Context;
|
||||
use axum::{
|
||||
extract::Query,
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_macros::debug_handler;
|
||||
use entities::{referrer, user_tier};
|
||||
use hashbrown::HashMap;
|
||||
use http::StatusCode;
|
||||
use log::warn;
|
||||
use migration::sea_orm;
|
||||
use migration::sea_orm::ActiveModelTrait;
|
||||
use migration::sea_orm::ColumnTrait;
|
||||
use migration::sea_orm::EntityTrait;
|
||||
use migration::sea_orm::QueryFilter;
|
||||
use migration::sea_orm::TransactionTrait;
|
||||
use serde_json::json;
|
||||
use std::sync::Arc;
|
||||
|
||||
/// Create or get the existing referral link.
|
||||
/// This is the link that the user can share to third parties, and get credits.
|
||||
/// Applies to premium users only
|
||||
#[debug_handler]
|
||||
pub async fn user_referral_link_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Query(_params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
// First get the bearer token and check if the user is logged in
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.context("getting replica db for user's revert logs")?;
|
||||
|
||||
// Second, check if the user is a premium user
|
||||
let user_tier = user_tier::Entity::find()
|
||||
.filter(user_tier::Column::Id.eq(user.user_tier_id))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::UnknownKey)?;
|
||||
|
||||
warn!("User tier is: {:?}", user_tier);
|
||||
// TODO: This shouldn't be hardcoded. Also, it should be an enum, not something like this ...
|
||||
if user_tier.id != 6 {
|
||||
return Err(Web3ProxyError::PaymentRequired.into());
|
||||
}
|
||||
|
||||
// Then get the referral token
|
||||
let user_referrer = referrer::Entity::find()
|
||||
.filter(referrer::Column::UserId.eq(user.id))
|
||||
.one(db_replica.conn())
|
||||
.await?;
|
||||
|
||||
let (referral_code, status_code) = match user_referrer {
|
||||
Some(x) => (x.referral_code, StatusCode::OK),
|
||||
None => {
|
||||
// Connect to the database for mutable write
|
||||
let db_conn = app.db_conn().context("getting db_conn")?;
|
||||
|
||||
let referral_code = ReferralCode::default().0;
|
||||
// No referral code exists yet for this user, so create one on demand
// (referral codes are not created automatically at signup)
|
||||
let referrer_entry = referrer::ActiveModel {
|
||||
user_id: sea_orm::ActiveValue::Set(user.id),
|
||||
referral_code: sea_orm::ActiveValue::Set(referral_code.clone()),
|
||||
..Default::default()
|
||||
};
|
||||
referrer_entry.save(&db_conn).await?;
|
||||
(referral_code, StatusCode::CREATED)
|
||||
}
|
||||
};
|
||||
|
||||
let response_json = json!({
|
||||
"referral_code": referral_code,
|
||||
"user": user,
|
||||
});
|
||||
|
||||
let response = (status_code, Json(response_json)).into_response();
|
||||
Ok(response)
|
||||
}
|
web3_proxy/src/frontend/users/rpc_keys.rs (new file)
@@ -0,0 +1,259 @@
|
||||
//! Handle registration, logins, and managing account data.
|
||||
use super::super::authorization::RpcSecretKey;
|
||||
use super::super::errors::{Web3ProxyError, Web3ProxyErrorContext, Web3ProxyResponse};
|
||||
use crate::app::Web3ProxyApp;
|
||||
use axum::headers::{Header, Origin, Referer, UserAgent};
|
||||
use axum::{
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_macros::debug_handler;
|
||||
use entities;
|
||||
use entities::rpc_key;
|
||||
use entities::sea_orm_active_enums::TrackingLevel;
|
||||
use hashbrown::HashMap;
|
||||
use http::HeaderValue;
|
||||
use ipnet::IpNet;
|
||||
use itertools::Itertools;
|
||||
use migration::sea_orm::{
|
||||
self, ActiveModelTrait, ColumnTrait, EntityTrait, IntoActiveModel, QueryFilter, TryIntoModel,
|
||||
};
|
||||
use serde::Deserialize;
|
||||
use serde_json::json;
|
||||
use std::sync::Arc;
|
||||
|
||||
/// `GET /user/keys` -- Use a bearer token to get the user's api keys and their settings.
|
||||
#[debug_handler]
|
||||
pub async fn rpc_keys_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("db_replica is required to fetch a user's keys")?;
|
||||
|
||||
let uks = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
let response_json = json!({
|
||||
"user_id": user.id,
|
||||
"user_rpc_keys": uks
|
||||
.into_iter()
|
||||
.map(|uk| (uk.id, uk))
|
||||
.collect::<HashMap::<_, _>>(),
|
||||
});
|
||||
|
||||
Ok(Json(response_json).into_response())
|
||||
}
|
||||
|
||||
/// `DELETE /user/keys` -- Use a bearer token to delete an existing key.
|
||||
#[debug_handler]
|
||||
pub async fn rpc_keys_delete(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (_user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
// TODO: think about how cascading deletes and billing should work
|
||||
Err(Web3ProxyError::NotImplemented)
|
||||
}
|
||||
|
||||
/// the JSON input to the `rpc_keys_management` handler.
|
||||
/// If `key_id` is set, it updates an existing key.
|
||||
/// If `key_id` is not set, it creates a new key.
|
||||
/// `log_request_method` cannot be changed once the key is created
|
||||
/// `user_tier` cannot be changed here
|
||||
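/// Example (hypothetical values): `{ "key_id": 12, "active": true, "allowed_ips": "10.0.0.0/8", "log_level": "detailed" }`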
#[derive(Debug, Deserialize)]
|
||||
pub struct UserKeyManagement {
|
||||
key_id: Option<u64>,
|
||||
active: Option<bool>,
|
||||
allowed_ips: Option<String>,
|
||||
allowed_origins: Option<String>,
|
||||
allowed_referers: Option<String>,
|
||||
allowed_user_agents: Option<String>,
|
||||
description: Option<String>,
|
||||
log_level: Option<TrackingLevel>,
|
||||
// TODO: enable log_revert_trace: Option<f64>,
|
||||
private_txs: Option<bool>,
|
||||
}
|
||||
|
||||
/// `POST /user/keys` or `PUT /user/keys` -- Use a bearer token to create or update an existing key.
|
||||
#[debug_handler]
|
||||
pub async fn rpc_keys_management(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Json(payload): Json<UserKeyManagement>,
|
||||
) -> Web3ProxyResponse {
|
||||
// TODO: is there a way we can know if this is a PUT or POST? right now we can modify or create keys with either. though that probably doesn't matter
|
||||
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("getting db for user's keys")?;
|
||||
|
||||
let mut uk = if let Some(existing_key_id) = payload.key_id {
|
||||
// get the key and make sure it belongs to the user
|
||||
rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user.id))
|
||||
.filter(rpc_key::Column::Id.eq(existing_key_id))
|
||||
.one(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?
|
||||
.web3_context("key does not exist or is not controlled by this bearer token")?
|
||||
.into_active_model()
|
||||
} else {
|
||||
// make a new key
|
||||
// TODO: limit to 10 keys?
|
||||
let secret_key = RpcSecretKey::new();
|
||||
|
||||
let log_level = payload
|
||||
.log_level
|
||||
.web3_context("log level must be 'none', 'detailed', or 'aggregated'")?;
|
||||
|
||||
rpc_key::ActiveModel {
|
||||
user_id: sea_orm::Set(user.id),
|
||||
secret_key: sea_orm::Set(secret_key.into()),
|
||||
log_level: sea_orm::Set(log_level),
|
||||
..Default::default()
|
||||
}
|
||||
};
|
||||
|
||||
// TODO: do we need null descriptions? default to empty string should be fine, right?
|
||||
if let Some(description) = payload.description {
|
||||
if description.is_empty() {
|
||||
uk.description = sea_orm::Set(None);
|
||||
} else {
|
||||
uk.description = sea_orm::Set(Some(description));
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(private_txs) = payload.private_txs {
|
||||
uk.private_txs = sea_orm::Set(private_txs);
|
||||
}
|
||||
|
||||
if let Some(active) = payload.active {
|
||||
uk.active = sea_orm::Set(active);
|
||||
}
|
||||
|
||||
if let Some(allowed_ips) = payload.allowed_ips {
|
||||
if allowed_ips.is_empty() {
|
||||
uk.allowed_ips = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed ips on ',' and try to parse them all. error on invalid input
|
||||
let allowed_ips = allowed_ips
|
||||
.split(',')
|
||||
.map(|x| x.trim().parse::<IpNet>())
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
// parse worked. convert back to Strings
|
||||
.into_iter()
|
||||
.map(|x| x.to_string());
|
||||
|
||||
// and join them back together
|
||||
let allowed_ips: String =
|
||||
Itertools::intersperse(allowed_ips, ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_ips = sea_orm::Set(Some(allowed_ips));
|
||||
}
|
||||
}
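// A minimal sketch of the normalize-and-rejoin step above, factored into a
// hypothetical helper for clarity (not part of this PR; it assumes the same
// `IpNet` and `Itertools` imports this file already uses):
//
// fn normalize_ip_list(raw: &str) -> Result<String, ipnet::AddrParseError> {
//     let parsed = raw
//         .split(',')
//         .map(|x| x.trim().parse::<IpNet>())
//         .collect::<Result<Vec<_>, _>>()?
//         .into_iter()
//         .map(|x| x.to_string());
//     Ok(Itertools::intersperse(parsed, ", ".to_string()).collect())
// }
//
// e.g. " 10.0.0.0/8 ,1.2.3.4/32" round-trips to "10.0.0.0/8, 1.2.3.4/32";
// any invalid entry fails the whole request.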
|
||||
|
||||
// TODO: this should actually be bytes
|
||||
if let Some(allowed_origins) = payload.allowed_origins {
|
||||
if allowed_origins.is_empty() {
|
||||
uk.allowed_origins = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed_origins on ',' and try to parse them all. error on invalid input
|
||||
let allowed_origins = allowed_origins
|
||||
.split(',')
|
||||
.map(|x| HeaderValue::from_str(x.trim()))
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
.into_iter()
|
||||
.map(|x| Origin::decode(&mut [x].iter()))
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
// parse worked. convert back to String and join them back together
|
||||
.into_iter()
|
||||
.map(|x| x.to_string());
|
||||
|
||||
let allowed_origins: String =
|
||||
Itertools::intersperse(allowed_origins, ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_origins = sea_orm::Set(Some(allowed_origins));
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: this should actually be bytes
|
||||
if let Some(allowed_referers) = payload.allowed_referers {
|
||||
if allowed_referers.is_empty() {
|
||||
uk.allowed_referers = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed ips on ',' and try to parse them all. error on invalid input
|
||||
let allowed_referers = allowed_referers
|
||||
.split(',')
|
||||
.map(|x| HeaderValue::from_str(x.trim()))
|
||||
.collect::<Result<Vec<_>, _>>()?
|
||||
.into_iter()
|
||||
.map(|x| Referer::decode(&mut [x].iter()))
|
||||
.collect::<Result<Vec<_>, _>>()?;
|
||||
|
||||
// parse worked. now we can put it back together.
|
||||
// but we can't go directly to String.
|
||||
// so we convert to HeaderValues first
|
||||
let mut header_map = vec![];
|
||||
for x in allowed_referers {
|
||||
x.encode(&mut header_map);
|
||||
}
|
||||
|
||||
// convert HeaderValues to Strings
|
||||
// since we got these from strings, this should always work (unless we figure out using bytes)
|
||||
let allowed_referers = header_map
|
||||
.into_iter()
|
||||
.map(|x| x.to_str().map(|x| x.to_string()))
|
||||
.collect::<Result<Vec<_>, _>>()?;
|
||||
|
||||
// join strings together with commas
|
||||
let allowed_referers: String =
|
||||
Itertools::intersperse(allowed_referers.into_iter(), ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_referers = sea_orm::Set(Some(allowed_referers));
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(allowed_user_agents) = payload.allowed_user_agents {
|
||||
if allowed_user_agents.is_empty() {
|
||||
uk.allowed_user_agents = sea_orm::Set(None);
|
||||
} else {
|
||||
// split allowed_user_agents on ',' and try to parse them all. error on invalid input
|
||||
let allowed_user_agents = allowed_user_agents
|
||||
.split(',')
|
||||
.filter_map(|x| x.trim().parse::<UserAgent>().ok())
|
||||
// parse worked. convert back to String
|
||||
.map(|x| x.to_string());
|
||||
|
||||
// join the strings together
|
||||
let allowed_user_agents: String =
|
||||
Itertools::intersperse(allowed_user_agents, ", ".to_string()).collect();
|
||||
|
||||
uk.allowed_user_agents = sea_orm::Set(Some(allowed_user_agents));
|
||||
}
|
||||
}
|
||||
|
||||
let uk = if uk.is_changed() {
|
||||
let db_conn = app.db_conn().web3_context("login requires a db")?;
|
||||
|
||||
uk.save(&db_conn)
|
||||
.await
|
||||
.web3_context("Failed saving user key")?
|
||||
} else {
|
||||
uk
|
||||
};
|
||||
|
||||
let uk = uk.try_into_model()?;
|
||||
|
||||
Ok(Json(uk).into_response())
|
||||
}
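// Recap of the handler above: with `key_id` set it loads the caller's existing
// key (verifying ownership) and applies only the provided fields; without it, a
// fresh RpcSecretKey is minted and `log_level` is required. The save is skipped
// entirely when nothing actually changed.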
|
123
web3_proxy/src/frontend/users/stats.rs
Normal file
@ -0,0 +1,123 @@
|
||||
//! Handle the user's stats and revert log endpoints.
|
||||
use crate::app::Web3ProxyApp;
|
||||
use crate::frontend::errors::{Web3ProxyErrorContext, Web3ProxyResponse};
|
||||
use crate::http_params::{
|
||||
get_chain_id_from_params, get_page_from_params, get_query_start_from_params,
|
||||
};
|
||||
use crate::stats::influxdb_queries::query_user_stats;
|
||||
use crate::stats::StatType;
|
||||
use axum::{
|
||||
extract::Query,
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_macros::debug_handler;
|
||||
use entities;
|
||||
use entities::{revert_log, rpc_key};
|
||||
use hashbrown::HashMap;
|
||||
use migration::sea_orm::{ColumnTrait, EntityTrait, PaginatorTrait, QueryFilter, QueryOrder};
|
||||
use serde_json::json;
|
||||
use std::sync::Arc;
|
||||
|
||||
/// `GET /user/revert_logs` -- Use a bearer token to get the user's revert logs.
|
||||
#[debug_handler]
|
||||
pub async fn user_revert_logs_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Query(params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let chain_id = get_chain_id_from_params(app.as_ref(), &params)?;
|
||||
let query_start = get_query_start_from_params(&params)?;
|
||||
let page = get_page_from_params(&params)?;
|
||||
|
||||
// TODO: page size from config
|
||||
let page_size = 1_000;
|
||||
|
||||
let mut response = HashMap::new();
|
||||
|
||||
response.insert("page", json!(page));
|
||||
response.insert("page_size", json!(page_size));
|
||||
response.insert("chain_id", json!(chain_id));
|
||||
response.insert("query_start", json!(query_start.timestamp() as u64));
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.web3_context("getting replica db for user's revert logs")?;
|
||||
|
||||
let uks = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
// TODO: only select the ids
|
||||
let uks: Vec<_> = uks.into_iter().map(|x| x.id).collect();
|
||||
|
||||
// get revert logs
|
||||
let mut q = revert_log::Entity::find()
|
||||
.filter(revert_log::Column::Timestamp.gte(query_start))
|
||||
.filter(revert_log::Column::RpcKeyId.is_in(uks))
|
||||
.order_by_asc(revert_log::Column::Timestamp);
|
||||
|
||||
if chain_id == 0 {
|
||||
// don't do anything
|
||||
} else {
|
||||
// filter on chain id
|
||||
q = q.filter(revert_log::Column::ChainId.eq(chain_id))
|
||||
}
|
||||
|
||||
// query the database for number of items and pages
|
||||
let pages_result = q
|
||||
.clone()
|
||||
.paginate(db_replica.conn(), page_size)
|
||||
.num_items_and_pages()
|
||||
.await?;
|
||||
|
||||
response.insert("num_items", pages_result.number_of_items.into());
|
||||
response.insert("num_pages", pages_result.number_of_pages.into());
|
||||
|
||||
// query the database for the revert logs
|
||||
let revert_logs = q
|
||||
.paginate(db_replica.conn(), page_size)
|
||||
.fetch_page(page)
|
||||
.await?;
|
||||
|
||||
response.insert("revert_logs", json!(revert_logs));
|
||||
|
||||
Ok(Json(response).into_response())
|
||||
}
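// For reference, the JSON object returned above carries: "page", "page_size",
// "chain_id", "query_start", "num_items", "num_pages" and the "revert_logs"
// array for the requested page.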
|
||||
|
||||
/// `GET /user/stats/aggregate` -- Public endpoint for aggregate stats such as bandwidth used and methods requested.
|
||||
#[debug_handler]
|
||||
pub async fn user_stats_aggregated_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
bearer: Option<TypedHeader<Authorization<Bearer>>>,
|
||||
Query(params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let response = query_user_stats(&app, bearer, &params, StatType::Aggregated).await?;
|
||||
|
||||
Ok(response)
|
||||
}
|
||||
|
||||
/// `GET /user/stats/detailed` -- Use a bearer token to get the user's key stats such as bandwidth used and methods requested.
|
||||
///
|
||||
/// If no bearer is provided, detailed stats for all users will be shown.
|
||||
/// View a single user with `?user_id=$x`.
|
||||
/// View a single chain with `?chain_id=$x`.
|
||||
///
|
||||
/// Set `$x` to zero to see all.
|
||||
///
|
||||
/// TODO: this will change as we add better support for secondary users.
|
||||
#[debug_handler]
|
||||
pub async fn user_stats_detailed_get(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
bearer: Option<TypedHeader<Authorization<Bearer>>>,
|
||||
Query(params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
let response = query_user_stats(&app, bearer, &params, StatType::Detailed).await?;
|
||||
|
||||
Ok(response)
|
||||
}
|
426
web3_proxy/src/frontend/users/subuser.rs
Normal file
@ -0,0 +1,426 @@
|
||||
//! Handle subusers: adding and removing them, and viewing the rpc keys they can access
|
||||
use crate::app::Web3ProxyApp;
|
||||
use crate::frontend::authorization::RpcSecretKey;
|
||||
use crate::frontend::errors::{Web3ProxyError, Web3ProxyErrorContext, Web3ProxyResponse};
|
||||
use anyhow::Context;
|
||||
use axum::{
|
||||
extract::Query,
|
||||
headers::{authorization::Bearer, Authorization},
|
||||
response::IntoResponse,
|
||||
Extension, Json, TypedHeader,
|
||||
};
|
||||
use axum_macros::debug_handler;
|
||||
use entities::sea_orm_active_enums::Role;
|
||||
use entities::{balance, rpc_key, secondary_user, user, user_tier};
|
||||
use ethers::types::Address;
|
||||
use hashbrown::HashMap;
|
||||
use http::StatusCode;
|
||||
use log::{debug, warn};
|
||||
use migration::sea_orm;
|
||||
use migration::sea_orm::prelude::Decimal;
|
||||
use migration::sea_orm::ActiveModelTrait;
|
||||
use migration::sea_orm::ColumnTrait;
|
||||
use migration::sea_orm::EntityTrait;
|
||||
use migration::sea_orm::IntoActiveModel;
|
||||
use migration::sea_orm::QueryFilter;
|
||||
use migration::sea_orm::TransactionTrait;
|
||||
use serde_json::json;
|
||||
use std::sync::Arc;
|
||||
use ulid::{self, Ulid};
|
||||
use uuid::Uuid;
|
||||
|
||||
pub async fn get_keys_as_subuser(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Query(_params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
// First, authenticate
|
||||
let (subuser, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.context("getting replica db for user's revert logs")?;
|
||||
|
||||
// TODO: JOIN over RPC_KEY, SUBUSER, PRIMARY_USER and return these items
|
||||
|
||||
// Get all the rpc keys this subuser has been granted access to
|
||||
let secondary_user_entities = secondary_user::Entity::find()
|
||||
.filter(secondary_user::Column::UserId.eq(subuser.id))
|
||||
.all(db_replica.conn())
|
||||
.await?
|
||||
.into_iter()
|
||||
.map(|x| (x.rpc_secret_key_id.clone(), x))
|
||||
.collect::<HashMap<u64, secondary_user::Model>>();
|
||||
|
||||
// Now fetch those rpc keys along with their owners
|
||||
let rpc_key_entities: Vec<(rpc_key::Model, Option<user::Model>)> = rpc_key::Entity::find()
|
||||
.filter(
|
||||
rpc_key::Column::Id.is_in(
|
||||
secondary_user_entities
|
||||
.iter()
|
||||
.map(|(x, _)| *x)
|
||||
.collect::<Vec<_>>(),
|
||||
),
|
||||
)
|
||||
.find_also_related(user::Entity)
|
||||
.all(db_replica.conn())
|
||||
.await?;
|
||||
|
||||
// TODO: Merge rpc-key with respective user (join is probably easiest ...)
|
||||
|
||||
// Now return the list
|
||||
let response_json = json!({
|
||||
"subuser": format!("{:?}", Address::from_slice(&subuser.address)),
|
||||
"rpc_keys": rpc_key_entities
|
||||
.into_iter()
|
||||
.flat_map(|(rpc_key, rpc_owner)| {
|
||||
match rpc_owner {
|
||||
Some(inner_rpc_owner) => {
|
||||
let mut tmp = HashMap::new();
|
||||
tmp.insert("rpc-key", serde_json::Value::String(Ulid::from(rpc_key.secret_key).to_string()));
|
||||
tmp.insert("rpc-owner", serde_json::Value::String(format!("{:?}", Address::from_slice(&inner_rpc_owner.address))));
|
||||
tmp.insert("role", serde_json::Value::String(format!("{:?}", secondary_user_entities.get(&rpc_key.id).unwrap().role))); // .to_string() returns ugly "'...'"
|
||||
Some(tmp)
|
||||
},
|
||||
None => {
|
||||
// error!("Found RPC secret key with no user!".to_owned());
|
||||
None
|
||||
}
|
||||
}
|
||||
})
|
||||
.collect::<Vec::<_>>(),
|
||||
});
|
||||
|
||||
Ok(Json(response_json).into_response())
|
||||
}
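// For reference, the JSON returned above contains the caller's "subuser"
// address plus a "rpc_keys" list, where each entry holds "rpc-key" (the ULID),
// "rpc-owner" (the primary user's address) and the caller's "role" for that key.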
|
||||
|
||||
pub async fn get_subusers(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Query(mut params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
// First, authenticate
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.context("getting replica db for user's revert logs")?;
|
||||
|
||||
// Second, check if the user is a premium user
|
||||
let user_tier = user_tier::Entity::find()
|
||||
.filter(user_tier::Column::Id.eq(user.user_tier_id))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"Could not find user in db although bearer token is there!".to_string(),
|
||||
))?;
|
||||
|
||||
debug!("User tier is: {:?}", user_tier);
|
||||
// TODO: This shouldn't be hardcoded. Also, it should be an enum, not something like this ...
|
||||
if user_tier.id != 6 {
|
||||
return Err(
|
||||
anyhow::anyhow!("User is not premium. Must be premium to create referrals.").into(),
|
||||
);
|
||||
}
|
||||
|
||||
let rpc_key: Ulid = params
|
||||
.remove("rpc_key")
|
||||
// TODO: map_err so this becomes a 500. routing must be bad
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"You have not provided the 'rpc_key' whose access to modify".to_string(),
|
||||
))?
|
||||
.parse()
|
||||
.context(format!("unable to parse rpc_key {:?}", params))?;
|
||||
|
||||
// Get the rpc key id
|
||||
let rpc_key = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::SecretKey.eq(Uuid::from(rpc_key)))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"The provided RPC key cannot be found".to_string(),
|
||||
))?;
|
||||
|
||||
// Get all secondary users that have access to this rpc key
|
||||
let secondary_user_entities = secondary_user::Entity::find()
|
||||
.filter(secondary_user::Column::RpcSecretKeyId.eq(rpc_key.id))
|
||||
.all(db_replica.conn())
|
||||
.await?
|
||||
.into_iter()
|
||||
.map(|x| (x.user_id.clone(), x))
|
||||
.collect::<HashMap<u64, secondary_user::Model>>();
|
||||
|
||||
// Now return a list of all subusers (their wallets)
|
||||
let subusers = user::Entity::find()
|
||||
.filter(
|
||||
user::Column::Id.is_in(
|
||||
secondary_user_entities
|
||||
.iter()
|
||||
.map(|(x, _)| *x)
|
||||
.collect::<Vec<_>>(),
|
||||
),
|
||||
)
|
||||
.all(db_replica.conn())
|
||||
.await?;
|
||||
|
||||
warn!("Subusers are: {:?}", subusers);
|
||||
|
||||
// Now return the list
|
||||
let response_json = json!({
|
||||
"caller": format!("{:?}", Address::from_slice(&user.address)),
|
||||
"rpc_key": rpc_key,
|
||||
"subusers": subusers
|
||||
.into_iter()
|
||||
.map(|subuser| {
|
||||
let mut tmp = HashMap::new();
|
||||
// .encode_hex()
|
||||
tmp.insert("address", serde_json::Value::String(format!("{:?}", Address::from_slice(&subuser.address))));
|
||||
tmp.insert("role", serde_json::Value::String(format!("{:?}", secondary_user_entities.get(&subuser.id).unwrap().role)));
|
||||
json!(tmp)
|
||||
})
|
||||
.collect::<Vec::<_>>(),
|
||||
});
|
||||
|
||||
Ok(Json(response_json).into_response())
|
||||
}
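// For reference, the JSON returned above contains the "caller" address, the
// resolved "rpc_key" row, and a "subusers" list of { "address", "role" } pairs.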
|
||||
|
||||
#[debug_handler]
|
||||
pub async fn modify_subuser(
|
||||
Extension(app): Extension<Arc<Web3ProxyApp>>,
|
||||
TypedHeader(Authorization(bearer)): TypedHeader<Authorization<Bearer>>,
|
||||
Query(mut params): Query<HashMap<String, String>>,
|
||||
) -> Web3ProxyResponse {
|
||||
// First, authenticate
|
||||
let (user, _semaphore) = app.bearer_is_authorized(bearer).await?;
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.context("getting replica db for user's revert logs")?;
|
||||
|
||||
// Second, check if the user is a premium user
|
||||
let user_tier = user_tier::Entity::find()
|
||||
.filter(user_tier::Column::Id.eq(user.user_tier_id))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"Could not find user in db although bearer token is there!".to_string(),
|
||||
))?;
|
||||
|
||||
debug!("User tier is: {:?}", user_tier);
|
||||
// TODO: This shouldn't be hardcoded. Also, it should be an enum, not something like this ...
|
||||
if user_tier.id != 6 {
|
||||
return Err(
|
||||
anyhow::anyhow!("User is not premium. Must be premium to create referrals.").into(),
|
||||
);
|
||||
}
|
||||
|
||||
warn!("Parameters are: {:?}", params);
|
||||
|
||||
// Then, distinguish the endpoint to modify
|
||||
let rpc_key_to_modify: Ulid = params
|
||||
.remove("rpc_key")
|
||||
// TODO: map_err so this becomes a 500. routing must be bad
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"You have not provided the 'rpc_key' whose access to modify".to_string(),
|
||||
))?
|
||||
.parse::<Ulid>()
|
||||
.context(format!("unable to parse rpc_key {:?}", params))?;
|
||||
// let rpc_key_to_modify: Uuid = ulid::serde::ulid_as_uuid::deserialize(rpc_key_to_modify)?;
|
||||
|
||||
let subuser_address: Address = params
|
||||
.remove("subuser_address")
|
||||
// TODO: map_err so this becomes a 500. routing must be bad
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"You have not provided the 'user_address' whose access to modify".to_string(),
|
||||
))?
|
||||
.parse()
|
||||
.context(format!("unable to parse subuser_address {:?}", params))?;
|
||||
|
||||
// TODO: Check subuser address for eip55 checksum
|
||||
|
||||
let keep_subuser: bool = match params
|
||||
.remove("new_status")
|
||||
// TODO: map_err so this becomes a 500. routing must be bad
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"You have not provided the new_stats key in the request".to_string(),
|
||||
))?
|
||||
.as_str()
|
||||
{
|
||||
"upsert" => Ok(true),
|
||||
"remove" => Ok(false),
|
||||
_ => Err(Web3ProxyError::BadRequest(
|
||||
"'new_status' must be one of 'upsert' or 'remove'".to_string(),
|
||||
)),
|
||||
}?;
|
||||
|
||||
let new_role: Role = match params
|
||||
.remove("new_role")
|
||||
// TODO: map_err so this becomes a 500. routing must be bad
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"You have not provided the new_stats key in the request".to_string(),
|
||||
))?
|
||||
.as_str()
|
||||
{
|
||||
// TODO: Technically, if this is the new owner, we should transpose the full table.
|
||||
// For now, don't let the primary owner delete their own account
|
||||
// (if such functionality even exists)
|
||||
"owner" => Ok(Role::Owner),
|
||||
"admin" => Ok(Role::Admin),
|
||||
"collaborator" => Ok(Role::Collaborator),
|
||||
_ => Err(Web3ProxyError::BadRequest(
|
||||
"'new_role' must be one of 'owner', 'admin', 'collaborator'".to_string(),
|
||||
)),
|
||||
}?;
|
||||
|
||||
// ---------------------------
|
||||
// First, check if the user exists as a user. If not, add them
|
||||
// (and also create a balance, and rpc_key, same procedure as logging in for first time)
|
||||
// ---------------------------
|
||||
let subuser = user::Entity::find()
|
||||
.filter(user::Column::Address.eq(subuser_address.as_ref()))
|
||||
.one(db_replica.conn())
|
||||
.await?;
|
||||
|
||||
let rpc_key_entity = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::SecretKey.eq(Uuid::from(rpc_key_to_modify)))
|
||||
.one(db_replica.conn())
|
||||
.await?
|
||||
.ok_or(Web3ProxyError::BadRequest(
|
||||
"Provided RPC key does not exist!".to_owned(),
|
||||
))?;
|
||||
|
||||
// Make sure that the user owns the rpc_key_entity
|
||||
if rpc_key_entity.user_id != user.id {
|
||||
return Err(Web3ProxyError::BadRequest(
|
||||
"you must own the RPC for which you are giving permissions out".to_string(),
|
||||
));
|
||||
}
|
||||
|
||||
// TODO: There is a good chunk of duplicate logic as login-post. Consider refactoring ...
|
||||
let db_conn = app.db_conn().web3_context("login requires a db")?;
|
||||
let (subuser, _subuser_rpc_keys, _status_code) = match subuser {
|
||||
None => {
|
||||
let txn = db_conn.begin().await?;
|
||||
// First add a user; the only thing we need from them is an address
|
||||
// everything else is optional
|
||||
let subuser = user::ActiveModel {
|
||||
address: sea_orm::Set(subuser_address.to_fixed_bytes().into()), // Address::from_slice(
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let subuser = subuser.insert(&txn).await?;
|
||||
|
||||
// create the user's first api key
|
||||
let rpc_secret_key = RpcSecretKey::new();
|
||||
|
||||
let subuser_rpc_key = rpc_key::ActiveModel {
|
||||
user_id: sea_orm::Set(subuser.id.clone()),
|
||||
secret_key: sea_orm::Set(rpc_secret_key.into()),
|
||||
description: sea_orm::Set(None),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let subuser_rpc_keys = vec![subuser_rpc_key
|
||||
.insert(&txn)
|
||||
.await
|
||||
.web3_context("Failed saving new user key")?];
|
||||
|
||||
// We should also create the balance entry ...
|
||||
let subuser_balance = balance::ActiveModel {
|
||||
user_id: sea_orm::Set(subuser.id.clone()),
|
||||
available_balance: sea_orm::Set(Decimal::new(0, 0)),
|
||||
used_balance: sea_orm::Set(Decimal::new(0, 0)),
|
||||
..Default::default()
|
||||
};
|
||||
subuser_balance.insert(&txn).await?;
|
||||
// save the user and key to the database
|
||||
txn.commit().await?;
|
||||
|
||||
(subuser, subuser_rpc_keys, StatusCode::CREATED)
|
||||
}
|
||||
Some(subuser) => {
|
||||
if subuser.id == user.id {
|
||||
return Err(Web3ProxyError::BadRequest(
|
||||
"you cannot make a subuser out of yourself".to_string(),
|
||||
));
|
||||
}
|
||||
|
||||
// Let's say that a user that exists can actually also redeem a key in retrospect...
|
||||
// the user is already registered
|
||||
let subuser_rpc_keys = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(subuser.id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?;
|
||||
|
||||
(subuser, subuser_rpc_keys, StatusCode::OK)
|
||||
}
|
||||
};
|
||||
|
||||
// --------------------------------
|
||||
// Now apply the operation
|
||||
// Either add the subuser
|
||||
// Or revoke their subuser status
|
||||
// --------------------------------
|
||||
|
||||
// Search for subuser first of all
|
||||
// There should be a unique-constraint on user-id + rpc_key
|
||||
let subuser_entry_secondary_user = secondary_user::Entity::find()
|
||||
.filter(secondary_user::Column::UserId.eq(subuser.id))
|
||||
.filter(secondary_user::Column::RpcSecretKeyId.eq(rpc_key_entity.id))
|
||||
.one(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed using the db to check for a subuser")?;
|
||||
|
||||
let txn = db_conn.begin().await?;
|
||||
let mut action = "no action";
|
||||
let _ = match subuser_entry_secondary_user {
|
||||
Some(secondary_user) => {
|
||||
// In this case, remove the subuser
|
||||
let mut active_subuser_entry_secondary_user = secondary_user.into_active_model();
|
||||
if !keep_subuser {
|
||||
// Remove the user
|
||||
active_subuser_entry_secondary_user.delete(&db_conn).await?;
|
||||
action = "removed";
|
||||
} else {
|
||||
// Just change the role
|
||||
active_subuser_entry_secondary_user.role = sea_orm::Set(new_role.clone());
|
||||
active_subuser_entry_secondary_user.save(&db_conn).await?;
|
||||
action = "role modified";
|
||||
}
|
||||
}
|
||||
None if keep_subuser => {
|
||||
let active_subuser_entry_secondary_user = secondary_user::ActiveModel {
|
||||
user_id: sea_orm::Set(subuser.id),
|
||||
rpc_secret_key_id: sea_orm::Set(rpc_key_entity.id),
|
||||
role: sea_orm::Set(new_role.clone()),
|
||||
..Default::default()
|
||||
};
|
||||
active_subuser_entry_secondary_user.insert(&txn).await?;
|
||||
action = "added";
|
||||
}
|
||||
_ => {
|
||||
// Return if the user should be removed and if there is no entry;
|
||||
// in this case, the user is not entered
|
||||
|
||||
// Return if the user should be added and there is already an entry;
|
||||
// in this case, they were already added, so we can skip this
|
||||
// Do nothing in this case
|
||||
}
|
||||
};
|
||||
txn.commit().await?;
|
||||
|
||||
let response = (
|
||||
StatusCode::OK,
|
||||
Json(json!({
|
||||
"rpc_key": rpc_key_to_modify,
|
||||
"subuser_address": subuser_address,
|
||||
"keep_user": keep_subuser,
|
||||
"new_role": new_role,
|
||||
"action": action
|
||||
})),
|
||||
)
|
||||
.into_response();
|
||||
// Return the outcome of the subuser modification
|
||||
Ok(response.into())
|
||||
}
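// A hedged sketch of the query string this handler expects (parameter names are
// taken from the `params.remove(...)` calls above; the exact route is not shown
// in this hunk):
//
//   ?rpc_key=<ULID>&subuser_address=0x...&new_status=upsert&new_role=collaborator
//
// `new_status` must be "upsert" or "remove" and `new_role` one of "owner",
// "admin" or "collaborator"; the response echoes them plus the "action" taken.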
|
@ -232,20 +232,23 @@ pub fn get_query_window_seconds_from_params(
|
||||
|
||||
pub fn get_stats_column_from_params(params: &HashMap<String, String>) -> Web3ProxyResult<&str> {
|
||||
params.get("query_stats_column").map_or_else(
|
||||
|| Ok("frontend_requests"),
|
||||
|| Ok(""),
|
||||
|query_stats_column: &String| {
|
||||
// Must be one of the columns below; otherwise respond with an error ...
|
||||
match query_stats_column.as_str() {
|
||||
"frontend_requests"
|
||||
""
|
||||
| "frontend_requests"
|
||||
| "backend_requests"
|
||||
| "cache_hits"
|
||||
| "cache_misses"
|
||||
| "no_servers"
|
||||
| "sum_request_bytes"
|
||||
| "sum_response_bytes"
|
||||
| "sum_response_millis" => Ok(query_stats_column),
|
||||
| "sum_response_millis"
|
||||
| "sum_credits_used"
|
||||
| "balance" => Ok(query_stats_column),
|
||||
_ => Err(Web3ProxyError::BadRequest(
|
||||
"Unable to parse query_stats_column. It must be one of: \
|
||||
"Unable to parse query_stats_column. It must be empty, or one of: \
|
||||
frontend_requests, \
|
||||
backend_requests, \
|
||||
cache_hits, \
|
||||
@ -253,7 +256,9 @@ pub fn get_stats_column_from_params(params: &HashMap<String, String>) -> Web3Pro
|
||||
no_servers, \
|
||||
sum_request_bytes, \
|
||||
sum_response_bytes, \
|
||||
sum_response_millis"
|
||||
sum_response_millis, \
|
||||
sum_credits_used, \
|
||||
balance"
|
||||
.to_string(),
|
||||
)),
|
||||
}
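// After this change, `query_stats_column` may be left empty (the new default)
// or set to one of: frontend_requests, backend_requests, cache_hits,
// cache_misses, no_servers, sum_request_bytes, sum_response_bytes,
// sum_response_millis, sum_credits_used, balance.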
|
||||
|
@ -7,6 +7,7 @@ pub mod http_params;
|
||||
pub mod jsonrpc;
|
||||
pub mod pagerduty;
|
||||
pub mod prometheus;
|
||||
pub mod referral_code;
|
||||
pub mod rpcs;
|
||||
pub mod stats;
|
||||
pub mod user_token;
|
||||
@ -30,4 +31,5 @@ pub struct PostLoginQuery {
|
||||
pub struct PostLogin {
|
||||
sig: String,
|
||||
msg: String,
|
||||
pub referral_code: Option<String>,
|
||||
}
|
||||
|
24
web3_proxy/src/referral_code.rs
Normal file
@ -0,0 +1,24 @@
|
||||
use anyhow::{self, Result};
|
||||
use ulid::Ulid;
|
||||
|
||||
pub struct ReferralCode(pub String);
|
||||
|
||||
impl Default for ReferralCode {
|
||||
fn default() -> Self {
|
||||
let out = Ulid::new();
|
||||
Self(format!("llamanodes-{}", out))
|
||||
}
|
||||
}
|
||||
|
||||
impl TryFrom<String> for ReferralCode {
|
||||
type Error = anyhow::Error;
|
||||
|
||||
fn try_from(x: String) -> Result<Self> {
|
||||
if !x.starts_with("llamanodes-") {
|
||||
return Err(anyhow::anyhow!(
|
||||
"Referral Code does not have the right format"
|
||||
));
|
||||
}
|
||||
Ok(Self(x))
|
||||
}
|
||||
}
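// A hedged usage sketch (illustrative only, not part of this PR); it assumes an
// edition where `TryFrom` is in the prelude:
#[cfg(test)]
mod referral_code_sketch {
    use super::ReferralCode;

    #[test]
    fn default_code_round_trips() {
        let code = ReferralCode::default();
        assert!(code.0.starts_with("llamanodes-"));
        assert!(ReferralCode::try_from(code.0).is_ok());
        assert!(ReferralCode::try_from("bad-prefix".to_string()).is_err());
    }
}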
|
@ -4,7 +4,6 @@ use super::consensus::ConsensusWeb3Rpcs;
|
||||
use super::one::Web3Rpc;
|
||||
use super::request::{OpenRequestHandle, OpenRequestResult, RequestErrorHandler};
|
||||
use crate::app::{flatten_handle, AnyhowJoinHandle, Web3ProxyApp};
|
||||
///! Load balanced communication with a group of web3 providers
|
||||
use crate::config::{BlockAndRpc, TxHashAndRpc, Web3RpcConfig};
|
||||
use crate::frontend::authorization::{Authorization, RequestMetadata};
|
||||
use crate::frontend::errors::{Web3ProxyError, Web3ProxyResult};
|
||||
|
@ -14,7 +14,6 @@ use axum::{
|
||||
};
|
||||
use entities::{rpc_accounting, rpc_key};
|
||||
use hashbrown::HashMap;
|
||||
use http::StatusCode;
|
||||
use log::warn;
|
||||
use migration::sea_orm::{
|
||||
ColumnTrait, EntityTrait, PaginatorTrait, QueryFilter, QueryOrder, QuerySelect, Select,
|
||||
@ -209,11 +208,7 @@ pub async fn query_user_stats<'a>(
|
||||
// TODO: move getting the param and checking the bearer token into a helper function
|
||||
if let Some(rpc_key_id) = params.get("rpc_key_id") {
|
||||
let rpc_key_id = rpc_key_id.parse::<u64>().map_err(|e| {
|
||||
Web3ProxyError::StatusCode(
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Unable to parse rpc_key_id".to_string(),
|
||||
Some(e.into()),
|
||||
)
|
||||
Web3ProxyError::BadRequest(format!("Unable to parse rpc_key_id. {:?}", e))
|
||||
})?;
|
||||
|
||||
response_body.insert("rpc_key_id", serde_json::Value::Number(rpc_key_id.into()));
|
||||
|
@ -1,11 +1,11 @@
|
||||
use super::StatType;
|
||||
use crate::http_params::get_stats_column_from_params;
|
||||
use crate::frontend::errors::Web3ProxyErrorContext;
|
||||
use crate::{
|
||||
app::Web3ProxyApp,
|
||||
frontend::errors::{Web3ProxyError, Web3ProxyResponse},
|
||||
http_params::{
|
||||
get_chain_id_from_params, get_query_start_from_params, get_query_stop_from_params,
|
||||
get_query_window_seconds_from_params, get_user_id_from_params,
|
||||
get_query_window_seconds_from_params,
|
||||
},
|
||||
};
|
||||
use anyhow::Context;
|
||||
@ -14,38 +14,18 @@ use axum::{
|
||||
response::IntoResponse,
|
||||
Json, TypedHeader,
|
||||
};
|
||||
use chrono::{DateTime, FixedOffset};
|
||||
use entities::sea_orm_active_enums::Role;
|
||||
use entities::{rpc_key, secondary_user};
|
||||
use fstrings::{f, format_args_f};
|
||||
use hashbrown::HashMap;
|
||||
use influxdb2::api::query::FluxRecord;
|
||||
use influxdb2::models::Query;
|
||||
use influxdb2::FromDataPoint;
|
||||
use itertools::Itertools;
|
||||
use log::trace;
|
||||
use serde::Serialize;
|
||||
use serde_json::{json, Number, Value};
|
||||
|
||||
// This type-API is extremely brittle! Make sure that the types conform 1-to-1 as defined here
|
||||
// https://docs.rs/influxdb2-structmap/0.2.0/src/influxdb2_structmap/value.rs.html#1-98
|
||||
#[derive(Debug, Default, FromDataPoint, Serialize)]
|
||||
pub struct AggregatedRpcAccounting {
|
||||
chain_id: String,
|
||||
_field: String,
|
||||
_value: i64,
|
||||
_time: DateTime<FixedOffset>,
|
||||
error_response: String,
|
||||
archive_needed: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Default, FromDataPoint, Serialize)]
|
||||
pub struct DetailedRpcAccounting {
|
||||
chain_id: String,
|
||||
_field: String,
|
||||
_value: i64,
|
||||
_time: DateTime<FixedOffset>,
|
||||
error_response: String,
|
||||
archive_needed: String,
|
||||
method: String,
|
||||
}
|
||||
use log::{error, info, warn};
|
||||
use migration::sea_orm::ColumnTrait;
|
||||
use migration::sea_orm::EntityTrait;
|
||||
use migration::sea_orm::QueryFilter;
|
||||
use serde_json::json;
|
||||
use ulid::Ulid;
|
||||
|
||||
pub async fn query_user_stats<'a>(
|
||||
app: &'a Web3ProxyApp,
|
||||
@ -53,15 +33,17 @@ pub async fn query_user_stats<'a>(
|
||||
params: &'a HashMap<String, String>,
|
||||
stat_response_type: StatType,
|
||||
) -> Web3ProxyResponse {
|
||||
let db_conn = app.db_conn().context("query_user_stats needs a db")?;
|
||||
let user_id = match bearer {
|
||||
Some(inner_bearer) => {
|
||||
let (user, _semaphore) = app.bearer_is_authorized(inner_bearer.0 .0).await?;
|
||||
user.id
|
||||
}
|
||||
None => 0,
|
||||
};
|
||||
|
||||
let db_replica = app
|
||||
.db_replica()
|
||||
.context("query_user_stats needs a db replica")?;
|
||||
let mut redis_conn = app
|
||||
.redis_conn()
|
||||
.await
|
||||
.context("query_user_stats had a redis connection error")?
|
||||
.context("query_user_stats needs a redis")?;
|
||||
|
||||
// TODO: have a getter for this. do we need a connection pool on it?
|
||||
let influxdb_client = app
|
||||
@ -69,22 +51,15 @@ pub async fn query_user_stats<'a>(
|
||||
.as_ref()
|
||||
.context("query_user_stats needs an influxdb client")?;
|
||||
|
||||
// get the user id first. if it is 0, we should use a cache on the app
|
||||
let user_id =
|
||||
get_user_id_from_params(&mut redis_conn, &db_conn, &db_replica, bearer, params).await?;
|
||||
|
||||
let query_window_seconds = get_query_window_seconds_from_params(params)?;
|
||||
let query_start = get_query_start_from_params(params)?.timestamp();
|
||||
let query_stop = get_query_stop_from_params(params)?.timestamp();
|
||||
let chain_id = get_chain_id_from_params(app, params)?;
|
||||
let stats_column = get_stats_column_from_params(params)?;
|
||||
|
||||
// query_window_seconds must be provided, and should not be 1s (?) by default ..
|
||||
|
||||
// Return a bad request if query_start == query_stop, because then the query is empty basically
|
||||
if query_start == query_stop {
|
||||
return Err(Web3ProxyError::BadRequest(
|
||||
"query_start and query_stop date cannot be equal. Please specify a different range"
|
||||
"Start and Stop date cannot be equal. Please specify a (different) start date."
|
||||
.to_owned(),
|
||||
));
|
||||
}
|
||||
@ -95,273 +70,400 @@ pub async fn query_user_stats<'a>(
|
||||
"opt_in_proxy"
|
||||
};
|
||||
|
||||
let mut join_candidates: Vec<String> = vec![
|
||||
"_time".to_string(),
|
||||
"_measurement".to_string(),
|
||||
"chain_id".to_string(),
|
||||
];
|
||||
|
||||
// Include a hashmap to go from rpc_secret_key_id to the rpc_secret_key
|
||||
let mut rpc_key_id_to_key = HashMap::new();
|
||||
|
||||
let rpc_key_filter = if user_id == 0 {
|
||||
"".to_string()
|
||||
} else {
|
||||
// Fetch all rpc_secret_key_ids, and filter for these
|
||||
let mut user_rpc_keys = rpc_key::Entity::find()
|
||||
.filter(rpc_key::Column::UserId.eq(user_id))
|
||||
.all(db_replica.conn())
|
||||
.await
|
||||
.web3_context("failed loading user's key")?
|
||||
.into_iter()
|
||||
.map(|x| {
|
||||
let key = x.id.to_string();
|
||||
let val = Ulid::from(x.secret_key);
|
||||
rpc_key_id_to_key.insert(key.clone(), val);
|
||||
key
|
||||
})
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
// Fetch all rpc_keys where we are the subuser
|
||||
let mut subuser_rpc_keys = secondary_user::Entity::find()
|
||||
.filter(secondary_user::Column::UserId.eq(user_id))
|
||||
.find_also_related(rpc_key::Entity)
|
||||
.all(db_replica.conn())
|
||||
// TODO: Do a join with rpc-keys
|
||||
.await
|
||||
.web3_context("failed loading subuser keys")?
|
||||
.into_iter()
|
||||
.flat_map(
|
||||
|(subuser, wrapped_shared_rpc_key)| match wrapped_shared_rpc_key {
|
||||
Some(shared_rpc_key) => {
|
||||
if subuser.role == Role::Admin || subuser.role == Role::Owner {
|
||||
let key = shared_rpc_key.id.to_string();
|
||||
let val = Ulid::from(shared_rpc_key.secret_key);
|
||||
rpc_key_id_to_key.insert(key.clone(), val);
|
||||
Some(key)
|
||||
} else {
|
||||
None
|
||||
}
|
||||
}
|
||||
None => None,
|
||||
},
|
||||
)
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
user_rpc_keys.append(&mut subuser_rpc_keys);
|
||||
|
||||
if user_rpc_keys.is_empty() {
|
||||
return Err(Web3ProxyError::BadRequest(
|
||||
"User has no secret RPC keys yet".to_string(),
|
||||
));
|
||||
}
|
||||
|
||||
// Make the tables join on the rpc_key_id as well:
|
||||
join_candidates.push("rpc_secret_key_id".to_string());
|
||||
|
||||
// Iterate, pop and add to string
|
||||
f!(
|
||||
r#"|> filter(fn: (r) => contains(value: r["rpc_secret_key_id"], set: {:?}))"#,
|
||||
user_rpc_keys
|
||||
)
|
||||
};
|
||||
|
||||
// TODO: Turn into a 500 error if bucket is not found ..
|
||||
// Or just unwrap or so
|
||||
let bucket = &app
|
||||
.config
|
||||
.influxdb_bucket
|
||||
.clone()
|
||||
.context("No influxdb bucket was provided")?;
|
||||
trace!("Bucket is {:?}", bucket);
|
||||
.context("No influxdb bucket was provided")?; // "web3_proxy";
|
||||
|
||||
let mut group_columns = vec![
|
||||
"chain_id",
|
||||
"_measurement",
|
||||
"_field",
|
||||
"_measurement",
|
||||
"error_response",
|
||||
"archive_needed",
|
||||
];
|
||||
info!("Bucket is {:?}", bucket);
|
||||
let mut filter_chain_id = "".to_string();
|
||||
|
||||
// Add to group columns the method, if we want the detailed view as well
|
||||
if let StatType::Detailed = stat_response_type {
|
||||
group_columns.push("method");
|
||||
}
|
||||
|
||||
if chain_id == 0 {
|
||||
group_columns.push("chain_id");
|
||||
} else {
|
||||
if chain_id != 0 {
|
||||
filter_chain_id = f!(r#"|> filter(fn: (r) => r["chain_id"] == "{chain_id}")"#);
|
||||
}
|
||||
|
||||
let group_columns = serde_json::to_string(&json!(group_columns)).unwrap();
|
||||
// Fetch and request for balance
|
||||
|
||||
let group = match stat_response_type {
|
||||
StatType::Aggregated => f!(r#"|> group(columns: {group_columns})"#),
|
||||
StatType::Detailed => "".to_string(),
|
||||
};
|
||||
info!(
|
||||
"Query start and stop are: {:?} {:?}",
|
||||
query_start, query_stop
|
||||
);
|
||||
// info!("Query column parameters are: {:?}", stats_column);
|
||||
info!("Query measurement is: {:?}", measurement);
|
||||
info!("Filters are: {:?}", filter_chain_id); // filter_field
|
||||
info!("window seconds are: {:?}", query_window_seconds);
|
||||
|
||||
let filter_field = match stat_response_type {
|
||||
StatType::Aggregated => {
|
||||
f!(r#"|> filter(fn: (r) => r["_field"] == "{stats_column}")"#)
|
||||
let drop_method = match stat_response_type {
|
||||
StatType::Aggregated => f!(r#"|> drop(columns: ["method"])"#),
|
||||
StatType::Detailed => {
|
||||
// Make the tables join on the method column as well
|
||||
join_candidates.push("method".to_string());
|
||||
"".to_string()
|
||||
}
|
||||
// TODO: Detailed should still filter it, but just "group-by" method (call it once per each method ...
|
||||
// Or maybe it shouldn't filter it ...
|
||||
StatType::Detailed => "".to_string(),
|
||||
};
|
||||
|
||||
trace!("query time range: {:?} - {:?}", query_start, query_stop);
|
||||
trace!("stats_column: {:?}", stats_column);
|
||||
trace!("measurement: {:?}", measurement);
|
||||
trace!("filters: {:?} {:?}", filter_field, filter_chain_id);
|
||||
trace!("group: {:?}", group);
|
||||
trace!("query_window_seconds: {:?}", query_window_seconds);
|
||||
let join_candidates = f!(r#"{:?}"#, join_candidates);
|
||||
|
||||
let query = f!(r#"
|
||||
from(bucket: "{bucket}")
|
||||
|> range(start: {query_start}, stop: {query_stop})
|
||||
|> filter(fn: (r) => r["_measurement"] == "{measurement}")
|
||||
{filter_field}
|
||||
{filter_chain_id}
|
||||
{group}
|
||||
|> aggregateWindow(every: {query_window_seconds}s, fn: sum, createEmpty: false)
|
||||
|> group()
|
||||
base = from(bucket: "{bucket}")
|
||||
|> range(start: {query_start}, stop: {query_stop})
|
||||
{rpc_key_filter}
|
||||
|> filter(fn: (r) => r["_measurement"] == "{measurement}")
|
||||
{filter_chain_id}
|
||||
{drop_method}
|
||||
|
||||
cumsum = base
|
||||
|> aggregateWindow(every: {query_window_seconds}s, fn: sum, createEmpty: false)
|
||||
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|
||||
|> drop(columns: ["balance"])
|
||||
|> map(fn: (r) => ({{ r with "archive_needed": if r.archive_needed == "true" then r.frontend_requests else 0}}))
|
||||
|> map(fn: (r) => ({{ r with "error_response": if r.error_response == "true" then r.frontend_requests else 0}}))
|
||||
|> group(columns: ["_time", "_measurement", "chain_id", "method", "rpc_secret_key_id"])
|
||||
|> sort(columns: ["frontend_requests"])
|
||||
|> map(fn:(r) => ({{ r with "sum_credits_used": float(v: r["sum_credits_used"]) }}))
|
||||
|> cumulativeSum(columns: ["archive_needed", "error_response", "backend_requests", "cache_hits", "cache_misses", "frontend_requests", "sum_credits_used", "sum_request_bytes", "sum_response_bytes", "sum_response_millis"])
|
||||
|> sort(columns: ["frontend_requests"], desc: true)
|
||||
|> limit(n: 1)
|
||||
|> group()
|
||||
|> sort(columns: ["_time", "_measurement", "chain_id", "method", "rpc_secret_key_id"], desc: true)
|
||||
|
||||
balance = base
|
||||
|> toFloat()
|
||||
|> aggregateWindow(every: {query_window_seconds}s, fn: mean, createEmpty: false)
|
||||
|> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
|
||||
|> group(columns: ["_time", "_measurement", "chain_id", "method", "rpc_secret_key_id"])
|
||||
|> mean(column: "balance")
|
||||
|> group()
|
||||
|> sort(columns: ["_time", "_measurement", "chain_id", "method", "rpc_secret_key_id"], desc: true)
|
||||
|
||||
join(
|
||||
tables: {{cumsum, balance}},
|
||||
on: {join_candidates}
|
||||
)
|
||||
"#);
|
||||
|
||||
trace!("Raw query to db is: {:?}", query);
|
||||
info!("Raw query to db is: {:?}", query);
|
||||
let query = Query::new(query.to_string());
|
||||
trace!("Query to db is: {:?}", query);
|
||||
info!("Query to db is: {:?}", query);
|
||||
|
||||
// Return a different result based on the query
|
||||
let datapoints = match stat_response_type {
|
||||
StatType::Aggregated => {
|
||||
let influx_responses: Vec<AggregatedRpcAccounting> = influxdb_client
|
||||
.query::<AggregatedRpcAccounting>(Some(query))
|
||||
.await?;
|
||||
trace!("Influx responses are {:?}", &influx_responses);
|
||||
for res in &influx_responses {
|
||||
trace!("Resp is: {:?}", res);
|
||||
}
|
||||
// Make the query and collect all data
|
||||
let raw_influx_responses: Vec<FluxRecord> =
|
||||
influxdb_client.query_raw(Some(query.clone())).await?;
|
||||
|
||||
influx_responses
|
||||
.into_iter()
|
||||
.map(|x| (x._time, x))
|
||||
.into_group_map()
|
||||
.into_iter()
|
||||
.map(|(group, grouped_items)| {
|
||||
trace!("Group is: {:?}", group);
|
||||
|
||||
// Now put all the fields next to each other
|
||||
// (there will be exactly one field per timestamp, but we want to arrive at a new object)
|
||||
let mut out = HashMap::new();
|
||||
// Could also add a timestamp
|
||||
|
||||
let mut archive_requests = 0;
|
||||
let mut error_responses = 0;
|
||||
|
||||
out.insert("method".to_owned(), json!("null"));
|
||||
|
||||
for x in grouped_items {
|
||||
trace!("Iterating over grouped item {:?}", x);
|
||||
|
||||
let key = format!("total_{}", x._field).to_string();
|
||||
trace!("Looking at {:?}: {:?}", key, x._value);
|
||||
|
||||
// Insert it once, and then fix it
|
||||
match out.get_mut(&key) {
|
||||
Some(existing) => {
|
||||
match existing {
|
||||
Value::Number(old_value) => {
|
||||
trace!("Old value is {:?}", old_value);
|
||||
// unwrap will error when someone has too many credits ..
|
||||
let old_value = old_value.as_i64().unwrap();
|
||||
*existing = serde_json::Value::Number(Number::from(
|
||||
old_value + x._value,
|
||||
));
|
||||
trace!("New value is {:?}", existing);
|
||||
}
|
||||
_ => {
|
||||
panic!("Should be nothing but a number")
|
||||
}
|
||||
};
|
||||
// Basically rename all items to be "total",
|
||||
// calculate number of "archive_needed" and "error_responses" through their boolean representations ...
|
||||
// HashMap<String, serde_json::Value>
|
||||
// let mut datapoints = HashMap::new();
|
||||
// TODO: I must be able to probably zip the balance query...
|
||||
let datapoints = raw_influx_responses
|
||||
.into_iter()
|
||||
// .into_values()
|
||||
.map(|x| x.values)
|
||||
.map(|value_map| {
|
||||
// Unwrap all relevant numbers
|
||||
// BTreeMap<String, value::Value>
|
||||
let mut out: HashMap<String, serde_json::Value> = HashMap::new();
|
||||
value_map.into_iter().for_each(|(key, value)| {
|
||||
if key == "_measurement" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::String(inner) => {
|
||||
if inner == "opt_in_proxy" {
|
||||
out.insert(
|
||||
"collection".to_owned(),
|
||||
serde_json::Value::String("opt-in".to_owned()),
|
||||
);
|
||||
} else if inner == "global_proxy" {
|
||||
out.insert(
|
||||
"collection".to_owned(),
|
||||
serde_json::Value::String("global".to_owned()),
|
||||
);
|
||||
} else {
|
||||
warn!("Some datapoints are not part of any _measurement!");
|
||||
out.insert(
|
||||
"collection".to_owned(),
|
||||
serde_json::Value::String("unknown".to_owned()),
|
||||
);
|
||||
}
|
||||
None => {
|
||||
trace!("Does not exist yet! Insert new!");
|
||||
out.insert(key, serde_json::Value::Number(Number::from(x._value)));
|
||||
}
|
||||
};
|
||||
|
||||
if !out.contains_key("query_window_timestamp") {
|
||||
out.insert(
|
||||
"query_window_timestamp".to_owned(),
|
||||
// serde_json::Value::Number(x.time.timestamp().into())
|
||||
json!(x._time.timestamp()),
|
||||
);
|
||||
}
|
||||
|
||||
// Interpret archive needed as a boolean
|
||||
let archive_needed = match x.archive_needed.as_str() {
|
||||
"true" => true,
|
||||
"false" => false,
|
||||
_ => {
|
||||
panic!("This should never be!")
|
||||
}
|
||||
};
|
||||
let error_response = match x.error_response.as_str() {
|
||||
"true" => true,
|
||||
"false" => false,
|
||||
_ => {
|
||||
panic!("This should never be!")
|
||||
}
|
||||
};
|
||||
|
||||
// Add up to archive requests and error responses
|
||||
// TODO: Gotta double check if errors & archive is based on frontend requests, or other metrics
|
||||
if x._field == "frontend_requests" && archive_needed {
|
||||
archive_requests += x._value as u64 // This is the number of requests
|
||||
}
|
||||
if x._field == "frontend_requests" && error_response {
|
||||
error_responses += x._value as u64
|
||||
_ => {
|
||||
error!("_measurement should always be a String!");
|
||||
}
|
||||
}
|
||||
|
||||
out.insert("archive_request".to_owned(), json!(archive_requests));
|
||||
out.insert("error_response".to_owned(), json!(error_responses));
|
||||
|
||||
json!(out)
|
||||
})
|
||||
.collect::<Vec<_>>()
|
||||
}
|
||||
StatType::Detailed => {
|
||||
let influx_responses: Vec<DetailedRpcAccounting> = influxdb_client
|
||||
.query::<DetailedRpcAccounting>(Some(query))
|
||||
.await?;
|
||||
trace!("Influx responses are {:?}", &influx_responses);
|
||||
for res in &influx_responses {
|
||||
trace!("Resp is: {:?}", res);
|
||||
}
|
||||
|
||||
// Group by all fields together ..
|
||||
influx_responses
|
||||
.into_iter()
|
||||
.map(|x| ((x._time, x.method.clone()), x))
|
||||
.into_group_map()
|
||||
.into_iter()
|
||||
.map(|(group, grouped_items)| {
|
||||
// Now put all the fields next to each other
|
||||
// (there will be exactly one field per timestamp, but we want to arrive at a new object)
|
||||
let mut out = HashMap::new();
|
||||
// Could also add a timestamp
|
||||
|
||||
let mut archive_requests = 0;
|
||||
let mut error_responses = 0;
|
||||
|
||||
// Should probably move this outside ... (?)
|
||||
let method = group.1;
|
||||
out.insert("method".to_owned(), json!(method));
|
||||
|
||||
for x in grouped_items {
|
||||
trace!("Iterating over grouped item {:?}", x);
|
||||
|
||||
let key = format!("total_{}", x._field).to_string();
|
||||
trace!("Looking at {:?}: {:?}", key, x._value);
|
||||
|
||||
// Insert it once, and then fix it
|
||||
match out.get_mut(&key) {
|
||||
Some(existing) => {
|
||||
match existing {
|
||||
Value::Number(old_value) => {
|
||||
trace!("Old value is {:?}", old_value);
|
||||
|
||||
// unwrap will error when someone has too many credits ..
|
||||
let old_value = old_value.as_i64().unwrap();
|
||||
*existing = serde_json::Value::Number(Number::from(
|
||||
old_value + x._value,
|
||||
));
|
||||
|
||||
trace!("New value is {:?}", existing.as_i64());
|
||||
}
|
||||
_ => {
|
||||
panic!("Should be nothing but a number")
|
||||
}
|
||||
};
|
||||
}
|
||||
None => {
|
||||
trace!("Does not exist yet! Insert new!");
|
||||
out.insert(key, serde_json::Value::Number(Number::from(x._value)));
|
||||
}
|
||||
};
|
||||
|
||||
if !out.contains_key("query_window_timestamp") {
|
||||
} else if key == "_stop" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::TimeRFC(inner) => {
|
||||
out.insert(
|
||||
"query_window_timestamp".to_owned(),
|
||||
json!(x._time.timestamp()),
|
||||
"stop_time".to_owned(),
|
||||
serde_json::Value::String(inner.to_string()),
|
||||
);
|
||||
}
|
||||
|
||||
// Interpret archive needed as a boolean
|
||||
let archive_needed = match x.archive_needed.as_str() {
|
||||
"true" => true,
|
||||
"false" => false,
|
||||
_ => {
|
||||
panic!("This should never be!")
|
||||
}
|
||||
};
|
||||
let error_response = match x.error_response.as_str() {
|
||||
"true" => true,
|
||||
"false" => false,
|
||||
_ => {
|
||||
panic!("This should never be!")
|
||||
}
|
||||
};
|
||||
|
||||
// Add up to archive requests and error responses
|
||||
// TODO: Gotta double check if errors & archive is based on frontend requests, or other metrics
|
||||
if x._field == "frontend_requests" && archive_needed {
|
||||
archive_requests += x._value as i32 // This is the number of requests
|
||||
_ => {
|
||||
error!("_stop should always be a TimeRFC!");
|
||||
}
|
||||
if x._field == "frontend_requests" && error_response {
|
||||
error_responses += x._value as i32
|
||||
};
|
||||
} else if key == "_time" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::TimeRFC(inner) => {
|
||||
out.insert(
|
||||
"time".to_owned(),
|
||||
serde_json::Value::String(inner.to_string()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("_stop should always be a TimeRFC!");
|
||||
}
|
||||
}
|
||||
} else if key == "backend_requests" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_backend_requests".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("backend_requests should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "balance" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Double(inner) => {
|
||||
out.insert("balance".to_owned(), json!(f64::from(inner)));
|
||||
}
|
||||
_ => {
|
||||
error!("balance should always be a Double!");
|
||||
}
|
||||
}
|
||||
} else if key == "cache_hits" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_cache_hits".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("cache_hits should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "cache_misses" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_cache_misses".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("cache_misses should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "frontend_requests" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_frontend_requests".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("frontend_requests should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "no_servers" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"no_servers".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("no_servers should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "sum_credits_used" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Double(inner) => {
|
||||
out.insert("total_credits_used".to_owned(), json!(f64::from(inner)));
|
||||
}
|
||||
_ => {
|
||||
error!("sum_credits_used should always be a Double!");
|
||||
}
|
||||
}
|
||||
} else if key == "sum_request_bytes" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_request_bytes".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("sum_request_bytes should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "sum_response_bytes" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_response_bytes".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("sum_response_bytes should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "rpc_secret_key_id" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::String(inner) => {
|
||||
out.insert(
|
||||
"rpc_key".to_owned(),
|
||||
serde_json::Value::String(
|
||||
rpc_key_id_to_key.get(&inner).unwrap().to_string(),
|
||||
),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("rpc_secret_key_id should always be a String!");
|
||||
}
|
||||
}
|
||||
} else if key == "sum_response_millis" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"total_response_millis".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("sum_response_millis should always be a Long!");
|
||||
}
|
||||
}
|
||||
}
|
||||
// Make this if detailed ...
|
||||
else if stat_response_type == StatType::Detailed && key == "method" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::String(inner) => {
|
||||
out.insert("method".to_owned(), serde_json::Value::String(inner));
|
||||
}
|
||||
_ => {
|
||||
error!("method should always be a String!");
|
||||
}
|
||||
}
|
||||
} else if key == "chain_id" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::String(inner) => {
|
||||
out.insert("chain_id".to_owned(), serde_json::Value::String(inner));
|
||||
}
|
||||
_ => {
|
||||
error!("chain_id should always be a String!");
|
||||
}
|
||||
}
|
||||
} else if key == "archive_needed" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"archive_needed".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("archive_needed should always be a Long!");
|
||||
}
|
||||
}
|
||||
} else if key == "error_response" {
|
||||
match value {
|
||||
influxdb2_structmap::value::Value::Long(inner) => {
|
||||
out.insert(
|
||||
"error_response".to_owned(),
|
||||
serde_json::Value::Number(inner.into()),
|
||||
);
|
||||
}
|
||||
_ => {
|
||||
error!("error_response should always be a Long!");
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
out.insert("archive_request".to_owned(), json!(archive_requests));
|
||||
out.insert("error_response".to_owned(), json!(error_responses));
|
||||
|
||||
json!(out)
|
||||
})
|
||||
.collect::<Vec<_>>()
|
||||
}
|
||||
};
|
||||
// datapoints.insert(out.get("time"), out);
|
||||
json!(out)
|
||||
})
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
// I suppose archive requests could be either gathered by default (then summed up), or retrieved on a second go.
|
||||
// Same with error responses ..
|
||||
|
@@ -1,22 +1,29 @@
//! Store "stats" in a database for billing and a different database for graphing
//!
//! TODO: move some of these structs/functions into their own file?
pub mod db_queries;
pub mod influxdb_queries;

use crate::app::AuthorizationChecks;
use crate::frontend::authorization::{Authorization, RequestMetadata};
use anyhow::Context;
use axum::headers::Origin;
use chrono::{TimeZone, Utc};
use chrono::{DateTime, Months, TimeZone, Utc};
use derive_more::From;
use entities::rpc_accounting_v2;
use entities::sea_orm_active_enums::TrackingLevel;
use entities::{balance, referee, referrer, rpc_accounting_v2, rpc_key, user, user_tier};
use futures::stream;
use hashbrown::HashMap;
use influxdb2::api::write::TimestampPrecision;
use influxdb2::models::DataPoint;
use log::{error, info, trace};
use migration::sea_orm::{self, DatabaseConnection, EntityTrait};
use log::{error, info, trace, warn};
use migration::sea_orm::prelude::Decimal;
use migration::sea_orm::ActiveModelTrait;
use migration::sea_orm::ColumnTrait;
use migration::sea_orm::IntoActiveModel;
use migration::sea_orm::{self, DatabaseConnection, EntityTrait, QueryFilter};
use migration::{Expr, OnConflict};
use moka::future::Cache;
use num_traits::ToPrimitive;
use std::cmp::max;
use std::num::NonZeroU64;
use std::sync::atomic::Ordering;
use std::sync::Arc;
@@ -24,7 +31,9 @@ use std::time::Duration;
use tokio::sync::broadcast;
use tokio::task::JoinHandle;
use tokio::time::interval;
use ulid::Ulid;

#[derive(Debug, PartialEq, Eq)]
pub enum StatType {
    Aggregated,
    Detailed,
@@ -45,6 +54,8 @@ pub struct RpcQueryStats {
    pub response_bytes: u64,
    pub response_millis: u64,
    pub response_timestamp: i64,
    /// Credits used signifies how much money was used up
    pub credits_used: Decimal,
}

#[derive(Clone, From, Hash, PartialEq, Eq)]
@@ -104,6 +115,8 @@ impl RpcQueryStats {
            }
        };

        // Depending on method, add some arithmetic around calculating credits_used
        // I think balance should not go here, this looks more like a key thingy
        RpcQueryKey {
            response_timestamp,
            archive_needed: self.archive_request,
@@ -179,6 +192,9 @@ pub struct BufferedRpcQueryStats {
    pub sum_request_bytes: u64,
    pub sum_response_bytes: u64,
    pub sum_response_millis: u64,
    pub sum_credits_used: Decimal,
    /// Balance tells us the user's balance at this point in time
    pub latest_balance: Decimal,
}

/// A stat that we aggregate and then store in a database.
@@ -200,6 +216,8 @@ pub struct StatBuffer {
    db_conn: Option<DatabaseConnection>,
    influxdb_client: Option<influxdb2::Client>,
    tsdb_save_interval_seconds: u32,
    rpc_secret_key_cache:
        Option<Cache<Ulid, AuthorizationChecks, hashbrown::hash_map::DefaultHashBuilder>>,
    db_save_interval_seconds: u32,
    billing_period_seconds: i64,
    global_timeseries_buffer: HashMap<RpcQueryKey, BufferedRpcQueryStats>,
@@ -227,6 +245,14 @@ impl BufferedRpcQueryStats {
        self.sum_request_bytes += stat.request_bytes;
        self.sum_response_bytes += stat.response_bytes;
        self.sum_response_millis += stat.response_millis;
        self.sum_credits_used += stat.credits_used;

        // Also record the latest balance for this user ..
        self.latest_balance = stat
            .authorization
            .checks
            .balance
            .unwrap_or(Decimal::from(0));
    }

    // TODO: take a db transaction instead so that we can batch?
@@ -242,10 +268,8 @@ impl BufferedRpcQueryStats {
        let accounting_entry = rpc_accounting_v2::ActiveModel {
            id: sea_orm::NotSet,
            rpc_key_id: sea_orm::Set(key.rpc_secret_key_id.map(Into::into).unwrap_or_default()),
            origin: sea_orm::Set(key.origin.map(|x| x.to_string()).unwrap_or_default()),
            chain_id: sea_orm::Set(chain_id),
            period_datetime: sea_orm::Set(period_datetime),
            method: sea_orm::Set(key.method.unwrap_or_default()),
            archive_needed: sea_orm::Set(key.archive_needed),
            error_response: sea_orm::Set(key.error_response),
            frontend_requests: sea_orm::Set(self.frontend_requests),
@@ -257,6 +281,7 @@ impl BufferedRpcQueryStats {
            sum_request_bytes: sea_orm::Set(self.sum_request_bytes),
            sum_response_millis: sea_orm::Set(self.sum_response_millis),
            sum_response_bytes: sea_orm::Set(self.sum_response_bytes),
            sum_credits_used: sea_orm::Set(self.sum_credits_used),
        };

        rpc_accounting_v2::Entity::insert(accounting_entry)
@@ -306,12 +331,215 @@ impl BufferedRpcQueryStats {
                        Expr::col(rpc_accounting_v2::Column::SumResponseBytes)
                            .add(self.sum_response_bytes),
                    ),
                    (
                        rpc_accounting_v2::Column::SumCreditsUsed,
                        Expr::col(rpc_accounting_v2::Column::SumCreditsUsed)
                            .add(self.sum_credits_used),
                    ),
                ])
                .to_owned(),
            )
            .exec(db_conn)
            .await?;

        // TODO: Refactor this function a bit more just so it looks and feels nicer
        // TODO: Figure out how to get around the unmatched arms; they shouldn't return an error, but this is disgusting

        // All the referral & balance arithmetic takes place here
        let rpc_secret_key_id: u64 = match key.rpc_secret_key_id {
            Some(x) => x.into(),
            // Return early if the RPC key is not found, because then it is an anonymous user
            None => return Ok(()),
        };

        // (1) Get the user with that RPC key. This is the referee
        let sender_rpc_key = rpc_key::Entity::find()
            .filter(rpc_key::Column::Id.eq(rpc_secret_key_id))
            .one(db_conn)
            .await?;

        // Technically there should always be a user ... still, let's return "Ok(())" for now
        let sender_user_id: u64 = match sender_rpc_key {
            Some(x) => x.user_id.into(),
            // Return early if the User is not found, because then it is an anonymous user
            // Let's also issue a warning because obviously the RPC key should correspond to a user
            None => {
                warn!(
                    "No user was found for the following rpc key: {:?}",
                    rpc_secret_key_id
                );
                return Ok(());
            }
        };

        // (1) Do some general bookkeeping on the user
        let sender_balance = match balance::Entity::find()
            .filter(balance::Column::UserId.eq(sender_user_id))
            .one(db_conn)
            .await?
        {
            Some(x) => x,
            None => {
                warn!("This user id has no balance entry! {:?}", sender_user_id);
                return Ok(());
            }
        };

        let mut active_sender_balance = sender_balance.clone().into_active_model();

        // Still subtract from the user in any case,
        // Modify the balance of the sender completely (in mysql, next to the stats)
        // In any case, add this to "spent"
        active_sender_balance.used_balance =
            sea_orm::Set(sender_balance.used_balance + Decimal::from(self.sum_credits_used));

        // Also update the available balance
        let new_available_balance = max(
            sender_balance.available_balance - Decimal::from(self.sum_credits_used),
            Decimal::from(0),
        );
        active_sender_balance.available_balance = sea_orm::Set(new_available_balance);

        active_sender_balance.save(db_conn).await?;
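
        // Worked example (illustrative numbers, not taken from the PR): with available_balance = 12.0,
        // used_balance = 3.5 and sum_credits_used = 5.0 for this window, the row is saved with
        // used_balance = 8.5 and available_balance = 7.0; with only 4.0 available, the max(..., 0)
        // clamp stores 0 instead of a negative balance.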

        let downgrade_user = match user::Entity::find()
            .filter(user::Column::Id.eq(sender_user_id))
            .one(db_conn)
            .await?
        {
            Some(x) => x,
            None => {
                warn!("No user was found with this sender id!");
                return Ok(());
            }
        };

        let downgrade_user_role = user_tier::Entity::find()
            .filter(user_tier::Column::Id.eq(downgrade_user.user_tier_id))
            .one(db_conn)
            .await?
            .context(format!(
                "The foreign key for the user's user_tier_id was not found! {:?}",
                downgrade_user.user_tier_id
            ))?;

        // Downgrade a user to "premium - out of funds" if there's less than $10 in the account and the user was premium before
        if new_available_balance < Decimal::from(10u64) && downgrade_user_role.title == "Premium" {
            // Only downgrade the user in local process memory, not elsewhere
            // app.rpc_secret_key_cache-

            // let mut active_downgrade_user = downgrade_user.into_active_model();
            // active_downgrade_user.user_tier_id = sea_orm::Set(downgrade_user_role.id);
            // active_downgrade_user.save(db_conn).await?;
        }

        // Get the referee, and the referrer
        // (2) Look up the code that this user used. This is the referee table
        let referee_object = match referee::Entity::find()
            .filter(referee::Column::UserId.eq(sender_user_id))
            .one(db_conn)
            .await?
        {
            Some(x) => x,
            None => {
                warn!(
                    "No referral code was found for this user: {:?}",
                    sender_user_id
                );
                return Ok(());
            }
        };

        // (3) Look up the matching referrer in the referrer table
        // Referral table -> Get the referrer id
        let user_with_that_referral_code = match referrer::Entity::find()
            .filter(referrer::Column::ReferralCode.eq(referee_object.used_referral_code))
            .one(db_conn)
            .await?
        {
            Some(x) => x,
            None => {
                warn!(
                    "No referrer with that referral code was found {:?}",
                    referee_object
                );
                return Ok(());
            }
        };

        // Ok, now we add the credits to both users if applicable...
        // (4 onwards) Add balance to the referrer,

        // (5) Check if the referee has used up $100.00 USD in total (have a config item that says how many credits amount to $1)
        // Get the balance for the referee (optionally make it into an active model ...)
        let sender_balance = match balance::Entity::find()
            .filter(balance::Column::UserId.eq(referee_object.user_id))
            .one(db_conn)
            .await?
        {
            Some(x) => x,
            None => {
                warn!(
                    "This user id has no balance entry! {:?}",
                    referee_object.user_id
                );
                return Ok(());
            }
        };

        let mut active_sender_balance = sender_balance.clone().into_active_model();
        let referrer_balance = match balance::Entity::find()
            .filter(balance::Column::UserId.eq(user_with_that_referral_code.user_id))
            .one(db_conn)
            .await?
        {
            Some(x) => x,
            None => {
                warn!(
                    "This user id has no balance entry! {:?}",
                    user_with_that_referral_code.user_id
                );
                return Ok(());
            }
        };

        // I could try to circumvent the clone here, but let's skip that for now
        let mut active_referee = referee_object.clone().into_active_model();

        // (5.1) If not, go to (7). If yes, go to (6)
        // Hardcode this parameter also in config, so it's easier to tune
        if !referee_object.credits_applied_for_referee
            && (sender_balance.used_balance + self.sum_credits_used) >= Decimal::from(100)
        {
            // (6) If the credits have not yet been applied to the referee, apply 10M credits / $100.00 USD worth of credits.
            // Make it into an active model, and add credits
            active_sender_balance.available_balance =
                sea_orm::Set(sender_balance.available_balance + Decimal::from(100));
            // Also mark referral as "credits_applied_for_referee"
            active_referee.credits_applied_for_referee = sea_orm::Set(true);
        }
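
        // Worked example (illustrative numbers): if the referee's used_balance was 97.0 before this
        // window and sum_credits_used is 5.0, then 97.0 + 5.0 >= 100 triggers the one-time bonus:
        // available_balance grows by 100 and credits_applied_for_referee flips to true, so the bonus
        // cannot be granted twice.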

        // (7) If the referral window (start date + 12 months) has not passed, apply 10% of the credits to the referrer.
        let now = Utc::now();
        let valid_until = DateTime::<Utc>::from_utc(referee_object.referral_start_date, Utc)
            .checked_add_months(Months::new(12))
            .unwrap();
        if now <= valid_until {
            let mut active_referrer_balance = referrer_balance.clone().into_active_model();
            // Add 10% referral fees ...
            active_referrer_balance.available_balance = sea_orm::Set(
                referrer_balance.available_balance
                    + Decimal::from(self.sum_credits_used / Decimal::from(10)),
            );
            // Also record how much the current referrer has "provided" / "gifted" away
            active_referee.credits_applied_for_referrer =
                sea_orm::Set(referee_object.credits_applied_for_referrer + self.sum_credits_used);
            active_referrer_balance.save(db_conn).await?;
        }
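
        // Worked example (illustrative numbers): with sum_credits_used = 5.0 and the referral still
        // inside its 12-month window, the referrer's available_balance grows by 5.0 / 10 = 0.5 while
        // the referee's credits_applied_for_referrer is bumped by the full 5.0.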

        active_sender_balance.save(db_conn).await?;
        active_referee.save(db_conn).await?;

        Ok(())
    }

@@ -343,7 +571,24 @@ impl BufferedRpcQueryStats {
            .field("cache_hits", self.cache_hits as i64)
            .field("sum_request_bytes", self.sum_request_bytes as i64)
            .field("sum_response_millis", self.sum_response_millis as i64)
            .field("sum_response_bytes", self.sum_response_bytes as i64);
            .field("sum_response_bytes", self.sum_response_bytes as i64)
            // TODO: will this be enough of a range
            // I guess Decimal can be a f64
            // TODO: This should prob be a float, i should change the query if we want float-precision for this (which would be important...)
            .field(
                "sum_credits_used",
                self.sum_credits_used
                    .to_f64()
                    .expect("number is really (too) large"),
            )
            .field(
                "balance",
                self.latest_balance
                    .to_f64()
                    .expect("number is really (too) large"),
            );

        // .round() as i64

        builder = builder.timestamp(key.response_timestamp);

@@ -370,6 +615,18 @@ impl RpcQueryStats {
        let response_millis = metadata.start_instant.elapsed().as_millis() as u64;
        let response_bytes = response_bytes as u64;

        // TODO: Gotta make the arithmetic here

        // TODO: Depending on the method, metadata and response bytes, pick a different number of credits used
        // This can be a slightly more complex function as well
        // TODO: Here, let's implement the formula
        let credits_used = Self::compute_cost(
            request_bytes,
            response_bytes,
            backend_requests == 0,
            &method,
        );

        let response_timestamp = Utc::now().timestamp();

        Self {
@@ -382,6 +639,36 @@ impl RpcQueryStats {
            response_bytes,
            response_millis,
            response_timestamp,
            credits_used,
        }
    }

    /// Compute cost per request
    /// All methods cost the same
    /// The number of bytes are based on input, and output bytes
    pub fn compute_cost(
        request_bytes: u64,
        response_bytes: u64,
        cache_hit: bool,
        _method: &Option<String>,
    ) -> Decimal {
        // TODO: Should make these lazy_static const?
        // pays at least $0.000018 / credits per request
        let cost_minimum = Decimal::new(18, 6);
        // 1kb is included on each call
        let cost_free_bytes = 1024;
        // after that, we add cost per byte, $0.000000006 / credits per byte
        let cost_per_byte = Decimal::new(6, 9);

        let total_bytes = request_bytes + response_bytes;
        let total_chargable_bytes =
            Decimal::from(max(0, total_bytes as i64 - cost_free_bytes as i64));

        let out = cost_minimum + cost_per_byte * total_chargable_bytes;
        if cache_hit {
            out * Decimal::new(5, 1)
        } else {
            out
        }
    }
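    // A quick worked example of compute_cost (byte counts made up for illustration):
    //   compute_cost(300, 2_000, false, &None)
    //     -> 2_300 total bytes, 1_276 chargeable after the free 1 KiB
    //     -> 0.000018 + 1_276 * 0.000000006 = 0.000025656 credits
    //   the same call with cache_hit = true is billed at half price: 0.000012828 credits.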

@@ -405,6 +692,9 @@ impl StatBuffer {
        bucket: String,
        db_conn: Option<DatabaseConnection>,
        influxdb_client: Option<influxdb2::Client>,
        rpc_secret_key_cache: Option<
            Cache<Ulid, AuthorizationChecks, hashbrown::hash_map::DefaultHashBuilder>,
        >,
        db_save_interval_seconds: u32,
        tsdb_save_interval_seconds: u32,
        billing_period_seconds: i64,
@@ -423,6 +713,7 @@ impl StatBuffer {
            influxdb_client,
            db_save_interval_seconds,
            tsdb_save_interval_seconds,
            rpc_secret_key_cache,
            billing_period_seconds,
            global_timeseries_buffer: Default::default(),
            opt_in_timeseries_buffer: Default::default(),
@@ -452,7 +743,6 @@ impl StatBuffer {

        // TODO: Somewhere here we should probably be updating the balance of the user
        // And also update the credits used etc. for the referred user

        loop {
            tokio::select! {
                stat = stat_receiver.recv_async() => {