Compare commits


182 Commits

Author SHA1 Message Date
zzzckck
ec318b9c97
github: upgrade upload artifact from v4.1.7 to v4.3.3 (#2713) 2024-09-19 15:02:02 +08:00
zzzckck
6624522423
github: upgrade artifact from v3 to v4.1.7 (#2712)
upload-artifact and download-artifact must use the same version, otherwise the
release workflow will fail.
https://github.com/actions/toolkit/blob/main/packages/artifact/docs/faq.md
2024-09-19 14:54:35 +08:00
zzzckck
206c3b0ab0
Merge pull request #2701 from bnb-chain/develop
Draft release v1.4.15
2024-09-19 11:50:11 +08:00
zzzckck
089064c1ff
Merge pull request #2683 from buddh0/remove_duplicate_list_check
eth/handler: remove duplicate check for lists in body
2024-09-19 10:13:27 +08:00
buddh0
5289ecdfe2 eth/protocols: add Withdrawals check before broadcastBlock 2024-09-18 17:12:43 +08:00
buddh0
d141ff06c3 Revert "eth/handler: check lists in body before broadcast blocks (#2461)"
This reverts commit 0c0958ff8709eab1d5d4d0adaa81c09a89ec75d9.
2024-09-18 17:11:37 +08:00
zzzckck
9cbac84363
doc: update readme to remove Beacon chain part (#2697)
other changes:
  - remove bsc-docker, as it is broken
  - remove light client, as it is no longer supported
2024-09-18 16:56:10 +08:00
zzzckck
21faa2de3f
release: prepare for release v1.4.15 (#2700) 2024-09-18 14:34:01 +08:00
WMQ
34059cb144
faucet: update DIN token faucet support (#2706) 2024-09-18 11:51:59 +08:00
tiaoxizhan
3e44dcaa55
chore: add missing symbols in comment (#2704) 2024-09-14 09:12:56 +08:00
zzzckck
282aee5856
faucet: add example for customized token (#2698) 2024-09-12 15:48:11 +08:00
zzzckck
44e91bba23
faucet: support customized token (#2687) 2024-09-12 15:10:37 +08:00
zzzckck
774d1b7ddb
Merge pull request #2684 from ngotchac/ngotchac/requeue-fixes 2024-09-11 10:58:06 +08:00
buddho
8bbd8fbf48
consensus/parlia: wait more time when processing huge blocks (#2689) 2024-09-11 10:14:11 +08:00
buddho
a28262b3ec
miner: limit block size to eth protocol msg size (#2696) 2024-09-10 16:24:29 +08:00
buddho
7de27ca9e9
CI: nancy ignore CVE-2024-8421 (#2695) 2024-09-10 11:27:04 +08:00
zzzckck
e7e5d508b5
txpool: set default GasCeil from 30M to 0 (#2688)
This PR tries to improve https://github.com/bnb-chain/bsc/pull/2680
If the node does not set `Eth.Miner` in config.toml, the default GasCeil
is 30000000, which is not an accurate number for either BSC mainnet or testnet.
Setting it to 0 disables the transaction GasLimit check, i.e. the TxPool will
accept transactions with any GasLimit.
2024-09-05 22:01:24 +08:00
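
A minimal sketch of the semantics described above (names and shape are assumptions, not the actual txpool code): a GasCeil of 0 simply turns the per-transaction gas-limit check off.

```
// Sketch only: hypothetical check mirroring the commit message.
package main

import "fmt"

func validateTxGas(txGas, gasCeil uint64) error {
	if gasCeil == 0 {
		return nil // 0 disables the check: any GasLimit is accepted
	}
	if txGas > gasCeil {
		return fmt.Errorf("tx gas %d exceeds gas ceil %d", txGas, gasCeil)
	}
	return nil
}

func main() {
	fmt.Println(validateTxGas(60_000_000, 0))          // <nil>: check disabled
	fmt.Println(validateTxGas(60_000_000, 50_000_000)) // error: over the ceiling
}
```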
Nicolas Gotchac
03069a7703 fetcher: Sleep after marking block as done when requeuing
Otherwise the node waits 500ms before the block fetcher continues processing.
2024-09-04 11:31:12 +02:00
galaio
1bcdad851f
metrics: add some extra feature flags as node stats; (#2662) 2024-09-04 16:37:43 +08:00
Nicolas Gotchac
24a46de5b2
eth: Add sidecars when available to broadcasted current block (#2675) 2024-09-04 16:22:07 +08:00
zzzckck
5c4096fffa
faucet: with mainnet balance check, 0.002BNB at least (#2672) 2024-09-04 16:19:01 +08:00
dependabot[bot]
f85d19aa8f
build(deps): bump actions/download-artifact in /.github/workflows (#2679)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.1.7.
2024-09-04 16:12:07 +08:00
zzzckck
0dab664d98
txpool: apply miner's gasceil to txpool (#2680)
Apply `Eth.Miner.GasCeil` to the TxPool even if the node did not enable the `--mine` option.
Otherwise, pending transactions could accumulate in the txpool.
For example:
- Currently, BSC testnet's block gaslimit is 70M; as 20M gas is reserved for
  SystemTxs, the maximum acceptable GasLimit for a transaction is 50M.
- If the TxPool received a transaction with GasLimit 60M, it would be accepted into the
  TxPool but never accepted by validators. Such transactions are kept in the txpool
  and gradually overflow it (see the sketch after this entry).
2024-09-04 16:02:27 +08:00
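
A worked sketch of the arithmetic in the example above (the 70M/20M figures come from the commit message; the helper name is hypothetical):

```
// Sketch only: the largest acceptable transaction GasLimit is the block
// gas limit minus the gas reserved for SystemTxs.
package main

import "fmt"

func maxAcceptableTxGas(blockGasLimit, reservedSystemTxGas uint64) uint64 {
	return blockGasLimit - reservedSystemTxGas
}

func main() {
	// BSC testnet per the commit message: 70M block gas limit, 20M reserved.
	fmt.Println(maxAcceptableTxGas(70_000_000, 20_000_000)) // 50000000
	// A 60M-gas transaction exceeds this bound, so validators never
	// include it, yet without the fix it would sit in the txpool.
}
```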
Dike.w
094519d058
beaconserver: simulated beacon api server for op-stack (#2678)
Only the necessary APIs are implemented.
2024-09-04 09:39:01 +08:00
zzzckck
d3450f13c9
log: add some p2p log (#2677) 2024-09-03 14:27:53 +08:00
zzzckck
75af65dbf2
Merge pull request #2669 from bnb-chain/develop
Draft release v1.4.14
2024-08-27 15:00:23 +08:00
zzzckck
959850218c
release: prepare for release v1.4.14 (#2668) 2024-08-27 11:23:04 +08:00
zzzckck
af0204bd68
faucet: bump and resend faucet transaction if it has been pending for a while (#2665) 2024-08-26 16:09:34 +08:00
zzzckck
ec2d7e0228
config: setup Mainnet 2 hardfork date: HaberFix & Bohr (#2661)
expected hard fork date:
    Mainnet-HaberFix: 2024-09-26 02:02:00 AM UTC
    Mainnet-Bohr:     2024-09-26 02:20:00 AM UTC
2024-08-23 13:59:56 +08:00
Dike.w
c46d7e8bd8
ethclient: fix BlobSidecars api (#2656) 2024-08-22 11:07:05 +08:00
Chris Li
6cb4be4ebf
fix: when dataset is pruned and pruneancient is set, the offset is wrong (#2657) 2024-08-21 15:37:42 +08:00
buddho
3bd9a2395c
core: improve readability of the fork choice logic (#2658) 2024-08-20 23:06:49 +08:00
buddho
5d19f2182b
internal/ethapi: make GetFinalizedHeader monotonically increasing (#2655) 2024-08-20 11:51:32 +08:00
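
A hedged sketch of what "monotonically increasing" means here (assumed shape, not the actual #2655 implementation): once a finalized number has been served, never serve a smaller one.

```
// Sketch only: clamp the finalized block number to a high-water mark.
package main

import (
	"fmt"
	"sync/atomic"
)

type finalizedTracker struct {
	highest atomic.Uint64
}

// clamp returns max(latest, highest-seen), updating the high-water mark,
// so callers never observe the finalized number going backwards.
func (t *finalizedTracker) clamp(latest uint64) uint64 {
	for {
		prev := t.highest.Load()
		if latest <= prev {
			return prev
		}
		if t.highest.CompareAndSwap(prev, latest) {
			return latest
		}
	}
}

func main() {
	var t finalizedTracker
	fmt.Println(t.clamp(100)) // 100
	fmt.Println(t.clamp(98))  // 100: never goes backwards
	fmt.Println(t.clamp(150)) // 150
}
```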
buddho
99a2dd5ed9
internal/debug: remove memsize (#2649) 2024-08-15 11:38:14 +08:00
easyfold
3adcfabb41
core/systemcontracts: use vm.StateDB in UpgradeBuildInSystemContract (#2578) 2024-08-13 17:06:07 +08:00
buddho
b1f0a3c79b
core: fix cache for receipts (#2643) 2024-08-13 14:02:38 +08:00
zzzckck
26a4d4fda6
Merge pull request #2633 from bnb-chain/develop
Draft release v1.4.13
2024-08-08 16:38:42 +08:00
buddho
e988d1574e
consensus/parlia: support recovery when snapshot of parlia gone in disk (#2587) 2024-08-08 14:53:51 +08:00
zzzckck
2cce9dd3de
release: prepare for release v1.4.13 (#2632) 2024-08-07 17:53:24 +08:00
zzzckck
b7e678e93d
config: setup Testnet Bohr hardfork date (#2634)
expected Bohr hard fork date:
- Testnet: 2024-08-20 01:23:16 AM UTC
- Mainnet: it is not determined yet, target Later Sep 2024
2024-08-07 16:24:28 +08:00
zzzckck
b61128bd7b
utils: add GetTopAddr to analyse large traffic (#2629) 2024-08-05 16:48:33 +08:00
buddho
df16ab95ab
Makefile: use docker compose v2 instead of v1 (#2628) 2024-08-05 15:15:46 +08:00
Nathan
9e343669b5
core: not record zero hash beacon block root with Parlia engine (#2621) 2024-07-31 16:39:24 +08:00
Nathan
987b8c1504
consensus/parlia: exclude inturn validator when calculate backoffTime (#2618) 2024-07-31 16:37:50 +08:00
Nathan
7d907016ff
tests: fix evm-test CI (#2622) 2024-07-31 15:50:43 +08:00
Satyajit Das
00cac12542
faucet: rate limit initial implementation (#2603) 2024-07-29 13:36:42 +08:00
Ethan
27f618f434
fix: update stakehub bytecode after zero address agent issue fixed (#2614) 2024-07-26 15:47:59 +08:00
irrun
46b88d11f9
fix: only take non-mempool tx to calculate bid price (#2579) 2024-07-26 09:49:16 +08:00
yuhangcangqian
f532da6ca0
chore: fix some comments (#2597)
Signed-off-by: yuhangcangqian <cuibuwei@qq.com>
2024-07-25 10:46:52 +08:00
zzzckck
c94fc290e7
Merge pull request #2612 from buddh0/merge_master_into_develop
Merge master into develop
2024-07-25 10:44:24 +08:00
buddho
7f3c5ce4cd
cmd/utils: add new flag OverridePassedForkTime (#2611) 2024-07-25 10:43:32 +08:00
Nathan
313449404f
consensus/parlia: modify mining time for last block in one turn (#2608) 2024-07-25 10:42:42 +08:00
buddh0
99e4e950f8 Merge branch 'master' into develop 2024-07-25 10:20:23 +08:00
zzzckck
83a9b13771
Merge pull request #2607 from NathanBSC/for_release_v1.4.12
Revert "miner/worker: broadcast block immediately once sealed (bnb-chain#2576)"
2024-07-24 12:07:13 +08:00
Ethan
cabd0f8a21
feat: add bohr upgrade contracts bytecode (#2605)
* feat: add bohr upgrade contracts bytecode

* feat: add bohr upgrade contracts bytecode on rialto network

* feat: add protector for stakehub on rialto network
2024-07-24 10:29:01 +08:00
Nathan
17e0e45a09
BEP-341: Validators can produce consecutive blocks (#2482) 2024-07-23 13:55:45 +08:00
Chris Li
6260a26971
fix: prune-state when specify --triesInMemory 32 (#2602) 2024-07-23 10:06:19 +08:00
Nathan
a44b6d8067
core: cache block after written into db (#2600) 2024-07-22 20:52:07 +08:00
NathanBSC
c6cb43b7ca Revert "miner/worker: broadcast block immediately once sealed (#2576)"
This reverts commit 6d5b4ad64d31907e0d3def45ac283b540cbe9ec9.
2024-07-22 18:23:10 +08:00
NathanBSC
222e10810e core: cache block after written into db 2024-07-22 18:22:56 +08:00
buddho
b844958a96
core: improve the network stability when double sign happens (#2596) 2024-07-22 14:53:26 +08:00
buddho
3cade73e40
BEP-404: Clear Miner History when Switching Validators Set (#2558) 2024-07-19 20:39:15 +08:00
buddho
4f38c78c6e
BEP-402: Complete Missing Fields in Block Header to Generate Signature (#2502) 2024-07-19 20:32:19 +08:00
buddho
7b8d28b425
core/vote: vote before committing state and writing block (#2589) 2024-07-19 20:23:45 +08:00
buddho
74078e1dc4
consensus/parlia: add GetJustifiedNumber and GetFinalizedNumber (#2591) 2024-07-19 10:20:53 +08:00
zzzckck
26b236fb5f
Merge pull request #2586 from bnb-chain/master_2_develop
Draft release v1.4.12
2024-07-17 17:45:39 +08:00
zzzckck
900cf26c65
Merge branch 'master' into master_2_develop 2024-07-17 17:29:11 +08:00
zzzckck
21e6dcfc79
release: prepare for release v1.4.12 (#2585) 2024-07-17 17:07:57 +08:00
Chris Li
a262acfb00
fix: remove delete and dangling side chains in prunefreezer (#2582) 2024-07-17 11:01:40 +08:00
Chris Li
87e622e51f
fix: the bug of blobsidecars and downloader with multi-database (#2564) 2024-07-16 22:37:03 +08:00
Chris Li
13d454796f
fix: pruneancient freeze from the previous position when the first time (#2542) 2024-07-16 22:36:18 +08:00
galaio
c6af48100d
freezer: Opt freezer env checking (#2580) 2024-07-16 21:44:39 +08:00
buddho
6d5b4ad64d
miner/worker: broadcast block immediately once sealed (#2576) 2024-07-16 21:24:37 +08:00
stellrust
d35b57ae36
chore: fix some comments (#2581)
Signed-off-by: stellrust <gohunter@foxmail.com>
2024-07-16 17:10:45 +08:00
Eric
21fc2d3ac4
cmd/jsutill: add log about validator name (#2583) 2024-07-16 17:10:16 +08:00
Nathan
c96fab04a3
core/vote: not vote if too late for next in turn validator (#2568) 2024-07-11 22:39:07 +08:00
buddho
27d86948fa
core: adapt highestVerifiedHeader to FastFinality (#2574) 2024-07-11 15:01:01 +08:00
Mars
a04e287cb6
feat: enhance bid comparison and reply bidding results && detail logs (#2538) 2024-07-11 11:31:24 +08:00
Nathan
c3d6155fff
cmd/utils: support use NetworkId to distinguish chapel when do syncing (#2573) 2024-07-10 16:10:07 +08:00
will-2012
a00ffa762c
fix: fix statedb copy (#2567) 2024-07-10 15:39:20 +08:00
buddho
863fdea026
core: avoid caching block before written into db (#2566) 2024-07-10 14:46:11 +08:00
Nathan
90e67970ae
core: clearup testflag for Cancun and Haber (#2572) 2024-07-09 14:30:19 +08:00
Eric
e8456c2d08
cmd/jsutils: add a tool to get slash count (#2569) 2024-07-08 22:37:05 +08:00
Chris Li
bc970e5893
fix: delete unexpected block (#2562) 2024-07-08 19:21:56 +08:00
wayen
a5810fefc9
fix: fix state inspect error after pruned state (#2557) 2024-07-04 22:24:22 +08:00
Matus Kysel
971c0fa380
tests: fix unstable test (#2561) 2024-07-04 07:56:35 +02:00
KeefeL
88225c1a4b
chore: update greenfield cometbft version (#2556) 2024-07-03 15:14:19 +08:00
zzzckck
51e27f9e3b
nancy: ignore go-retryablehttp@v0.7.4 in .nancy-ignore (#2559) 2024-07-02 15:00:14 +08:00
zzzckck
f1a85ec306
go.mod: update missing dependency (#2546) 2024-06-28 14:14:22 +08:00
dependabot[bot]
75d162983a
build(deps): bump github.com/hashicorp/go-retryablehttp (#2537)
Bumps [github.com/hashicorp/go-retryablehttp](https://github.com/hashicorp/go-retryablehttp) from 0.7.4 to 0.7.7.
- [Changelog](https://github.com/hashicorp/go-retryablehttp/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/go-retryablehttp/compare/v0.7.4...v0.7.7)

---
updated-dependencies:
- dependency-name: github.com/hashicorp/go-retryablehttp
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-28 13:51:53 +08:00
Nathan
727c07116d
cmd/jsutils: add a tool to get performance between a range of blocks (#2513) 2024-06-28 13:48:17 +08:00
zoro
719412551a
Merge pull request #2549 from bnb-chain/develop
release: prepare for v1.4.11
2024-06-27 17:37:26 +08:00
zoro
c2226a0c9f
release: bump version to 1.4.11 (#2550) 2024-06-27 17:35:46 +08:00
zoro
d52628aa82
doc: add changelog for v1.4.11 (#2548) 2024-06-27 17:24:49 +08:00
Roshan
f7de51f74e
upgrade: add HaberFix hardfork (#2535) 2024-06-26 14:51:52 +08:00
irrun
55cbf31f18
fix: nil pointer when clear simulating bid (#2534) 2024-06-24 16:42:17 +08:00
zzzckck
f0c7795542
Merge pull request #2527 from bnb-chain/develop
Draft release v1.4.10
2024-06-21 16:13:48 +08:00
zzzckck
1548452def
release: prepare for release v1.4.10 (#2526) 2024-06-21 15:31:43 +08:00
Nathan
f2e7f1dd24
fix: ensure empty withdrawals after cancun before broadcast (#2525) 2024-06-21 15:00:56 +08:00
Eric
27a3ec5d72
fix getBlobSidecars by ethclient (#2515) 2024-06-20 11:29:33 +08:00
zzzckck
6094d7157e
UT: random failure of TestSnapSyncWithBlobs (#2519)
this case failed randomly in github CI, at a ratio of around 20%,
but it can not be reproduced locally by:
```
  go test -p 1 -v -run TestSnapSyncWithBlobs -count 100
```
there could be a potential race between `runEthPeer` and `runSnapExtension`, as
they share the same handler.peers.
Add extra wait time for `runEthPeer` to resolve it.

Same for another UT: testBroadcastBlock
2024-06-20 11:14:52 +08:00
zzzckck
be0fbfb79e
fix: remove zero gasprice check for BSC (#2518)
note: blobGasFeePrice cannot be zero right now.
2024-06-19 19:51:30 +08:00
Nathan
00f094c37e
core/forkchoice: improve stability when inturn block not generated (#2451) 2024-06-18 15:53:05 +08:00
Chris Li
7b08a70a23
perf: optimize chain commit performance for multi-database (#2509) 2024-06-18 15:20:23 +08:00
will-2012
99d31aeb28
perf: speedup pbss trienode read (#2508) 2024-06-18 11:47:22 +08:00
irrun
f467c6018b
feat: add mev helper params and func (#2512) 2024-06-13 11:13:39 +08:00
zzzckck
4566ac7659
Merge pull request #2511 from bnb-chain/develop
Draft release v1.4.9
2024-06-11 11:51:22 +08:00
zzzckck
aab4b8812a
release: prepare for release v1.4.9 (#2510) 2024-06-11 10:47:44 +08:00
irrun
af7e9b95bd
fix: waiting for the last simulation before pick best bid (#2507) 2024-06-07 16:41:20 +08:00
zoupingshi
1047f0e59a
chore: fix function name (#2501)
Signed-off-by: zoupingshi <hellocatty@tom.com>
2024-06-05 15:12:48 +08:00
VM
9bb4fed1bf
fix: add an empty freeze db (#2495) 2024-06-04 15:47:16 +08:00
zzzckck
35e71a769b
doc: update url linker (#2499) 2024-05-30 14:08:48 +08:00
zzzckck
6b02ac7ac5
doc: update url linker (#2498) 2024-05-30 12:19:07 +08:00
Nathan
8b9558bb4d
params/config: add Bohr hardfork (#2497) 2024-05-30 11:51:34 +08:00
Chris Li
63e7eac394
fix: keep 9W blocks in ancient db when prune block (#2481) 2024-05-29 15:11:52 +08:00
Chris Li
05543e558d
fix: fix inspect database error (#2484) 2024-05-27 15:10:59 +08:00
Satyajit Das
b0146261c7
jsutils: faucet successful requests within blocks (#2470) 2024-05-23 15:08:36 +08:00
lx
d7b9866d3b
Merge pull request #2487 from bnb-chain/master
merge PRs from master to develop branch
2024-05-22 13:56:19 +08:00
zzzckck
f5ba30ed47
release: prepare for release v1.4.8 (#2486) 2024-05-21 18:30:28 +08:00
Nathan
f190c49252
core/vm: add secp256r1 into PrecompiledContractsHaber (#2483) 2024-05-21 17:46:42 +08:00
Mars
08769ead2b
dev: ensure consistency in BPS bundle result (#2479)
* dev: ensure consistency in BPS bundle result

* fix: remove env operation once the sim is discarded & rename
2024-05-21 12:24:41 +08:00
Matus Kysel
4d0f1e7117
RIP-7212: Precompile for secp256r1 Curve Support (#2400) 2024-05-21 11:44:21 +08:00
irrun
c77bb1110d
fix: limit the gas price of the mev bid (#2473) 2024-05-20 14:33:47 +08:00
Mars
c856d21719
fix: move mev op to MinerAPI & add command to console (#2475) 2024-05-20 14:00:28 +08:00
Nathan
f45305b1ad
cmd/utils: add a flag to change breathe block interval for testing (#2472) 2024-05-17 16:18:29 +08:00
Eric
d16532d678
internal/ethapi: add optional parameter for blobSidecars (#2468) 2024-05-16 19:07:14 +08:00
Eric
5edd032cdb
internal/ethapi: add optional parameter for blobSidecars (#2467) 2024-05-16 19:06:49 +08:00
galaio
6b8cbbe172
sync: fix some sync issues caused by prune-block. (#2466) 2024-05-16 12:07:13 +08:00
setunapo
5ea2ada0ee
utils: add check_blobtx.js (#2463) 2024-05-15 18:17:57 +08:00
Fynn
b230a02006
cmd: fix memory leak when big dataset (#2455) 2024-05-15 15:28:57 +08:00
Nathan
86e3a02490
cmd/utils: add a flag to change breathe block interval for testing (#2462) 2024-05-15 15:27:05 +08:00
Nathan
0c0958ff87
eth/handler: check lists in body before broadcast blocks (#2461) 2024-05-15 14:54:25 +08:00
buddho
c577ce3720
Merge pull request #2460 from bnb-chain/develop
merge some PRs for v1.4.7(2nd)
2024-05-14 19:59:57 +08:00
Eric
d436f9e2e8
eth/fetcher/block_fetcher.go: add metrics for import failed block (#2459) 2024-05-14 19:29:43 +08:00
Eric
97c3b9b267
internal/api.go: add choice about not show full blob (#2458) 2024-05-14 17:17:58 +08:00
Nathan
0560685460
core: clearup feynman testflag and rialto code (#2457) 2024-05-14 16:37:17 +08:00
rjl493456442
bf16a39876 ethdb/pebble: print warning log if pebble performance degrades (#29478) 2024-05-14 15:26:21 +08:00
Devon Bear
2c8720016d go.mod: bump pebble db to official release (#29038)
bump pebble
2024-05-14 15:26:21 +08:00
Nathan
f2ec3cc6a5 eth/handler: check blobs before broadcast blocks (#2450) 2024-05-13 17:21:45 +08:00
Martin HS
0a2e1282d2 core/rawdb: add sanity-limit to header accessor (#29534) 2024-05-13 17:21:45 +08:00
Nathan
adb5e8fe86 eth/filters: enforce topic-limit early on filter criterias (#29535) (#2448)
This PR adds a limit of 1000 to the "inner" topics in a filter-criteria

Co-authored-by: Martin HS <martin@swende.se>
2024-05-13 17:21:45 +08:00
Nathan
23f6194fad
eth/handler: check blobs before broadcast blocks (#2450) 2024-05-13 16:20:50 +08:00
Martin HS
691d195526 core/rawdb: add sanity-limit to header accessor (#29534) 2024-05-11 10:24:39 +08:00
Nathan
b57c779759
eth/filters: enforce topic-limit early on filter criterias (#29535) (#2448)
This PR adds a limit of 1000 to the "inner" topics in a filter-criteria

Co-authored-by: Martin HS <martin@swende.se>
2024-05-11 10:24:19 +08:00
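
A minimal sketch of the limit described in this entry (assumed constant name and shape; the real change lives in geth's eth/filters):

```
// Sketch only: reject filter criteria whose inner topic lists exceed
// 1000 entries, as the commit message describes.
package main

import (
	"errors"
	"fmt"
)

const maxTopicsPerPosition = 1000

func validateTopics(topics [][]string) error {
	for _, position := range topics {
		if len(position) > maxTopicsPerPosition {
			return errors.New("too many topics in filter criteria")
		}
	}
	return nil
}

func main() {
	ok := [][]string{{"t1", "t2"}}
	fmt.Println(validateTopics(ok)) // <nil>
}
```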
zzzckck
4ab1c865b2
Merge pull request #2441 from bnb-chain/develop
Draft release v1.4.7
2024-05-10 13:10:56 +08:00
zzzckck
a7d5b02919
release: prepare for release v1.4.7 (#2442)
* release: prepare for release v1.4.7

* code: avoid golang-lint error for TxLookupLimit
2024-05-09 19:47:52 +08:00
zzzckck
1ce9bb044d
config: setup Mainnet Tycho(Cancun) hardfork date (#2439)
expected hard fork date:
Mainnet: 2024-06-20 06:05:00 AM UTC
2024-05-09 16:24:04 +08:00
dependabot[bot]
7948950f7a
build(deps): bump golang.org/x/net from 0.19.0 to 0.23.0 (#2411) 2024-05-09 16:06:28 +08:00
knowmost
0c101e618a
chore: fix some typos (#2433) 2024-05-09 16:05:46 +08:00
zzzckck
571ea2c4b9
nancy: add files .nancy-ignore (#2440) 2024-05-09 15:58:06 +08:00
galaio
7bc5a3353d
txpool: limit max gas when mining is enabled; (#2435) 2024-05-09 15:54:31 +08:00
Chris Li
901ea2e0d2
fix: performance issue when load journal (#2438) 2024-05-08 15:36:01 +08:00
buddho
1d81f3316f
metrics: add blockInsertMgaspsGauge to trace mgasps (#2396) 2024-04-30 17:58:04 +08:00
zzzckck
43b2ffa63b
Merge pull request #2427 from bnb-chain/develop
Draft release v1.4.6
2024-04-29 14:07:29 +08:00
zzzckck
0567715760 release: prepare for release v1.4.6 2024-04-29 10:47:43 +08:00
zzzckck
e32fcf5b93 Revert "github: add branch protect rule (#2343)"
This reverts commit e7c5ce2e94578b480fe6eaa6b35a8dad66341637.
2024-04-29 10:47:43 +08:00
zzzckck
e55028d788 metrics: refine the double sign detect code 2024-04-29 10:47:43 +08:00
Satyajit Das
9d8df917b8
metrics: add doublesign counter (#2419) 2024-04-29 09:42:43 +08:00
irrun
9e170972f4
fix: oom caused by non-discarded mev simulation env (#2430) 2024-04-28 19:45:42 +08:00
irrun
ba6726325a
feat: recommit bid when newBidCh is empty to maximize mev reward (#2424) 2024-04-28 11:05:09 +08:00
Chris Li
6573254a62
fix: adapt journal for cmd (#2425) 2024-04-28 11:02:14 +08:00
galaio
31d92c50ad
chore: add metric & log for blobTx; (#2428) 2024-04-27 06:37:42 +08:00
Eric
7cab9c622c
eth/gasprice: add query limit to defend DDOS attack (#2423) 2024-04-25 14:05:12 +08:00
irrun
2a0e399c38
Revert "fix: wrong way to get blob tx sidecar in `BidRuntime.commitTransactio…" (#2418)
This reverts commit 14023fae6d5e28685dfebf5345011a73607202cf.
2024-04-23 11:41:56 +08:00
forcedebug
182c841374
fix: fix function names (#2416) 2024-04-23 10:06:50 +08:00
Roshan
14023fae6d
fix: wrong way to get blob tx sidecar in BidRuntime.commitTransaction (#2417) 2024-04-22 18:46:44 +08:00
Chris Li
d653cda82e
feat: adaptive for loading journal file or journal kv during loadJournal (#2406)
* core: check journalType before load journal
* fix: when delete trieJournal delete from kv & file
2024-04-19 16:23:28 +08:00
careworry
4b54601d5c
chore: fix some typos in comments (#2408) 2024-04-19 13:17:58 +08:00
Ng Wei Han
3b7f0e4279
cmd, p2p: filter peers by regex on name (#2404) 2024-04-18 16:11:32 +08:00
TechVest
fe1fff8c77
chore: fix some typos in comments (#2399) 2024-04-18 15:43:57 +08:00
Chris Li
c0afdc9a98
core: separated databases for block data (#2227)
* core: use finalized block as the chain freeze indicator (#28683)
* core/rawdb: use max(finality, head-90k) as chain freezing threshold (see the sketch after this commit entry)
* core: impl multi database for block data
* core: fix db inspect total size bug
* core: add tips for user who use multi-database
* core: adapt some cmd for multi-database
* core: adapter blockBlobSidecars
* core: fix freezer readHeader bug

---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2024-04-18 15:12:05 +08:00
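
A hedged sketch of the freezing threshold named in the first bullet above (the 90k constant mirrors the commit's head-90k; the function name is hypothetical):

```
// Sketch only: blocks below max(finalized, head-90000) are eligible
// for the freezer.
package main

import "fmt"

const fullImmutabilityThreshold = 90_000

func freezeThreshold(head, finalized uint64) uint64 {
	limit := uint64(0)
	if head > fullImmutabilityThreshold {
		limit = head - fullImmutabilityThreshold
	}
	if finalized > limit {
		limit = finalized
	}
	return limit
}

func main() {
	fmt.Println(freezeThreshold(200_000, 150_000)) // 150000: finality wins
	fmt.Println(freezeThreshold(200_000, 50_000))  // 110000: head-90k wins
}
```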
buddho
fb435eb5f1
fix: allow fast node to rewind after abnormal shutdown (#2401) 2024-04-18 13:45:02 +08:00
irrun
5cc253a2cd
fix: NPE (#2403) 2024-04-18 13:25:38 +08:00
buddho
cbcd26c9a9
fix: no import blocks before or equal to the finalized height (#2398) 2024-04-17 14:19:50 +08:00
Chris Li
90eb5b33e8
fix: trieJournal format compatible old db format (#2395) 2024-04-16 11:49:04 +08:00
Ng Wei Han
837de88057
cmd/geth: fix importBlock (#2244) 2024-04-15 16:40:03 +08:00
buddho
b4fb2f6ffc
fix: print value instead of pointer in ConfigCompatError (#2391) 2024-04-15 14:46:14 +08:00
dylanhuang
11503edeb2
chore: render system bytecode by go:embed (#2201) 2024-04-15 11:23:23 +08:00
Chris Li
3a6e3c67f2
core/trie: persist TrieJournal to journal file instead of kv database (#2341) 2024-04-15 10:47:54 +08:00
hugehope
335be39905
chore: fix function names in comment (#2390) 2024-04-11 19:01:43 +08:00
irrun
b7972bcd77
feat: greedy merge tx in bid (#2363) 2024-04-11 15:15:46 +08:00
buddho
4bb1bd1a77
deps: update prsym to solve warning about quic-go version (#2389) 2024-04-11 14:11:15 +08:00
295 changed files with 12396 additions and 2672 deletions

.github/CODEOWNERS (20 changed lines)

@@ -1,3 +1,21 @@
 # Lines starting with '#' are comments.
 # Each line is a file pattern followed by one or more owners.
-* @zzzckck @zjubfd
+accounts/usbwallet @karalabe
+accounts/scwallet @gballet
+accounts/abi @gballet @MariusVanDerWijden
+cmd/clef @holiman
+consensus @karalabe
+core/ @karalabe @holiman @rjl493456442
+eth/ @karalabe @holiman @rjl493456442
+eth/catalyst/ @gballet
+eth/tracers/ @s1na
+graphql/ @s1na
+les/ @zsfelfoldi @rjl493456442
+light/ @zsfelfoldi @rjl493456442
+node/ @fjl
+p2p/ @fjl @zsfelfoldi
+rpc/ @fjl @holiman
+p2p/simulations @fjl
+p2p/protocols @fjl
+p2p/testing @fjl
+signer/ @holiman

(GitHub Actions workflow)

@@ -82,28 +82,28 @@ jobs:
   # ==============================
   - name: Upload Linux Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'ubuntu-latest'
     with:
       name: linux
       path: ./build/bin/geth
   - name: Upload MacOS Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'macos-latest'
     with:
       name: macos
       path: ./build/bin/geth
   - name: Upload Windows Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'windows-latest'
     with:
       name: windows
       path: ./build/bin/geth.exe
   - name: Upload ARM-64 Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'ubuntu-latest'
     with:
       name: arm64
@@ -125,25 +125,25 @@ jobs:
   # ==============================
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: linux
       path: ./linux
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: macos
       path: ./macos
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: windows
       path: ./windows
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: arm64
       path: ./arm64

(GitHub Actions workflow)

@@ -81,28 +81,28 @@ jobs:
   # ==============================
   - name: Upload Linux Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'ubuntu-latest'
     with:
       name: linux
       path: ./build/bin/geth
   - name: Upload MacOS Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'macos-latest'
     with:
       name: macos
       path: ./build/bin/geth
   - name: Upload Windows Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'windows-latest'
     with:
       name: windows
       path: ./build/bin/geth.exe
   - name: Upload ARM-64 Build
-    uses: actions/upload-artifact@v3
+    uses: actions/upload-artifact@v4.3.3
     if: matrix.os == 'ubuntu-latest'
     with:
       name: arm64
@@ -124,25 +124,25 @@ jobs:
   # ==============================
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: linux
       path: ./linux
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: macos
       path: ./macos
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: windows
       path: ./windows
   - name: Download Artifacts
-    uses: actions/download-artifact@v3
+    uses: actions/download-artifact@v4.1.7
     with:
       name: arm64
       path: ./arm64

.nancy-ignore (new file, 3 lines)

@@ -0,0 +1,3 @@
CVE-2024-34478 # "CWE-754: Improper Check for Unusual or Exceptional Conditions." This vulnerability is BTC only, BSC does not have the issue.
CVE-2024-6104 # "CWE-532: Information Exposure Through Log Files" This is caused by the vulnerabilities go-retryablehttp@v0.7.4, it is only used in cmd devp2p, impact is limited. will upgrade to v0.7.7 later
CVE-2024-8421 # "CWE-400: Uncontrolled Resource Consumption (Resource Exhaustion)" This vulnerability is caused by issues in the golang.org/x/net package. Even the latest version(v0.29.0) has not yet addressed it, but we will continue to monitor updates closely.

CHANGELOG.md

@@ -1,4 +1,199 @@
# Changelog
## v1.4.15
### BUGFIX
* [\#2680](https://github.com/bnb-chain/bsc/pull/2680) txpool: apply miner's gasceil to txpool
* [\#2688](https://github.com/bnb-chain/bsc/pull/2688) txpool: set default GasCeil from 30M to 0
* [\#2696](https://github.com/bnb-chain/bsc/pull/2696) miner: limit block size to eth protocol msg size
* [\#2684](https://github.com/bnb-chain/bsc/pull/2684) eth: Add sidecars when available to broadcasted current block
### FEATURE
* [\#2672](https://github.com/bnb-chain/bsc/pull/2672) faucet: with mainnet balance check, 0.002BNB at least
* [\#2678](https://github.com/bnb-chain/bsc/pull/2678) beaconserver: simulated beacon api server for op-stack
* [\#2687](https://github.com/bnb-chain/bsc/pull/2687) faucet: support customized token
* [\#2698](https://github.com/bnb-chain/bsc/pull/2698) faucet: add example for customized token
* [\#2706](https://github.com/bnb-chain/bsc/pull/2706) faucet: update DIN token faucet support
### IMPROVEMENT
* [\#2677](https://github.com/bnb-chain/bsc/pull/2677) log: add some p2p log
* [\#2679](https://github.com/bnb-chain/bsc/pull/2679) build(deps): bump actions/download-artifact in /.github/workflows
* [\#2662](https://github.com/bnb-chain/bsc/pull/2662) metrics: add some extra feature flags as node stats
* [\#2675](https://github.com/bnb-chain/bsc/pull/2675) fetcher: Sleep after marking block as done when requeuing
* [\#2695](https://github.com/bnb-chain/bsc/pull/2695) CI: nancy ignore CVE-2024-8421
* [\#2689](https://github.com/bnb-chain/bsc/pull/2689) consensus/parlia: wait more time when processing huge blocks
## v1.4.14
### BUGFIX
* [\#2643](https://github.com/bnb-chain/bsc/pull/2643) core: fix cache for receipts
* [\#2656](https://github.com/bnb-chain/bsc/pull/2656) ethclient: fix BlobSidecars api
* [\#2657](https://github.com/bnb-chain/bsc/pull/2657) fix: update prunefreezer's offset when pruneancient is set and the dataset has pruned blocks
### FEATURE
* [\#2661](https://github.com/bnb-chain/bsc/pull/2661) config: setup Mainnet 2 hardfork date: HaberFix & Bohr
### IMPROVEMENT
* [\#2578](https://github.com/bnb-chain/bsc/pull/2578) core/systemcontracts: use vm.StateDB in UpgradeBuildInSystemContract
* [\#2649](https://github.com/bnb-chain/bsc/pull/2649) internal/debug: remove memsize
* [\#2655](https://github.com/bnb-chain/bsc/pull/2655) internal/ethapi: make GetFinalizedHeader monotonically increasing
* [\#2658](https://github.com/bnb-chain/bsc/pull/2658) core: improve readability of the fork choice logic
* [\#2665](https://github.com/bnb-chain/bsc/pull/2665) faucet: bump and resend faucet transaction if it has been pending for a while
## v1.4.13
### BUGFIX
* [\#2602](https://github.com/bnb-chain/bsc/pull/2602) fix: prune-state when specify --triesInMemory 32
* [\#2579](https://github.com/bnb-chain/bsc/pull/2579) fix: only take non-mempool tx to calculate bid price
### FEATURE
* [\#2634](https://github.com/bnb-chain/bsc/pull/2634) config: setup Testnet Bohr hardfork date
* [\#2482](https://github.com/bnb-chain/bsc/pull/2482) BEP-341: Validators can produce consecutive blocks
* [\#2502](https://github.com/bnb-chain/bsc/pull/2502) BEP-402: Complete Missing Fields in Block Header to Generate Signature
* [\#2558](https://github.com/bnb-chain/bsc/pull/2558) BEP-404: Clear Miner History when Switching Validators Set
* [\#2605](https://github.com/bnb-chain/bsc/pull/2605) feat: add bohr upgrade contracts bytecode
* [\#2614](https://github.com/bnb-chain/bsc/pull/2614) fix: update stakehub bytecode after zero address agent issue fixed
* [\#2608](https://github.com/bnb-chain/bsc/pull/2608) consensus/parlia: modify mining time for last block in one turn
* [\#2618](https://github.com/bnb-chain/bsc/pull/2618) consensus/parlia: exclude inturn validator when calculate backoffTime
* [\#2621](https://github.com/bnb-chain/bsc/pull/2621) core: not record zero hash beacon block root with Parlia engine
### IMPROVEMENT
* [\#2589](https://github.com/bnb-chain/bsc/pull/2589) core/vote: vote before committing state and writing block
* [\#2596](https://github.com/bnb-chain/bsc/pull/2596) core: improve the network stability when double sign happens
* [\#2600](https://github.com/bnb-chain/bsc/pull/2600) core: cache block after written into db
* [\#2629](https://github.com/bnb-chain/bsc/pull/2629) utils: add GetTopAddr to analyse large traffic
* [\#2591](https://github.com/bnb-chain/bsc/pull/2591) consensus/parlia: add GetJustifiedNumber and GetFinalizedNumber
* [\#2611](https://github.com/bnb-chain/bsc/pull/2611) cmd/utils: add new flag OverridePassedForkTime
* [\#2603](https://github.com/bnb-chain/bsc/pull/2603) faucet: rate limit initial implementation
* [\#2622](https://github.com/bnb-chain/bsc/pull/2622) tests: fix evm-test CI
* [\#2628](https://github.com/bnb-chain/bsc/pull/2628) Makefile: use docker compose v2 instead of v1
## v1.4.12
### BUGFIX
* [\#2557](https://github.com/bnb-chain/bsc/pull/2557) fix: fix state inspect error after pruned state
* [\#2562](https://github.com/bnb-chain/bsc/pull/2562) fix: delete unexpected block
* [\#2566](https://github.com/bnb-chain/bsc/pull/2566) core: avoid caching block before written into db
* [\#2567](https://github.com/bnb-chain/bsc/pull/2567) fix: fix statedb copy
* [\#2574](https://github.com/bnb-chain/bsc/pull/2574) core: adapt highestVerifiedHeader to FastFinality
* [\#2542](https://github.com/bnb-chain/bsc/pull/2542) fix: pruneancient freeze from the previous position when the first time
* [\#2564](https://github.com/bnb-chain/bsc/pull/2564) fix: the bug of blobsidecars and downloader with multi-database
* [\#2582](https://github.com/bnb-chain/bsc/pull/2582) fix: remove delete and dangling side chains in prunefreezer
### FEATURE
* [\#2513](https://github.com/bnb-chain/bsc/pull/2513) cmd/jsutils: add a tool to get performance between a range of blocks
* [\#2569](https://github.com/bnb-chain/bsc/pull/2569) cmd/jsutils: add a tool to get slash count
* [\#2583](https://github.com/bnb-chain/bsc/pull/2583) cmd/jsutill: add log about validator name
### IMPROVEMENT
* [\#2546](https://github.com/bnb-chain/bsc/pull/2546) go.mod: update missing dependency
* [\#2559](https://github.com/bnb-chain/bsc/pull/2559) nancy: ignore go-retryablehttp@v0.7.4 in .nancy-ignore
* [\#2556](https://github.com/bnb-chain/bsc/pull/2556) chore: update greenfield cometbft version
* [\#2561](https://github.com/bnb-chain/bsc/pull/2561) tests: fix unstable test
* [\#2572](https://github.com/bnb-chain/bsc/pull/2572) core: clearup testflag for Cancun and Haber
* [\#2573](https://github.com/bnb-chain/bsc/pull/2573) cmd/utils: support use NetworkId to distinguish chapel when do syncing
* [\#2538](https://github.com/bnb-chain/bsc/pull/2538) feat: enhance bid comparison and reply bidding results && detail logs
* [\#2568](https://github.com/bnb-chain/bsc/pull/2568) core/vote: not vote if too late for next in turn validator
* [\#2576](https://github.com/bnb-chain/bsc/pull/2576) miner/worker: broadcast block immediately once sealed
* [\#2580](https://github.com/bnb-chain/bsc/pull/2580) freezer: Opt freezer env checking
## v1.4.11
### BUGFIX
* [\#2534](https://github.com/bnb-chain/bsc/pull/2534) fix: nil pointer when clear simulating bid
* [\#2535](https://github.com/bnb-chain/bsc/pull/2535) upgrade: add HaberFix hardfork
## v1.4.10
### FEATURE
NA
### IMPROVEMENT
* [\#2512](https://github.com/bnb-chain/bsc/pull/2512) feat: add mev helper params and func
* [\#2508](https://github.com/bnb-chain/bsc/pull/2508) perf: speedup pbss trienode read
* [\#2509](https://github.com/bnb-chain/bsc/pull/2509) perf: optimize chain commit performance for multi-database
* [\#2451](https://github.com/bnb-chain/bsc/pull/2451) core/forkchoice: improve stability when inturn block not generated
### BUGFIX
* [\#2518](https://github.com/bnb-chain/bsc/pull/2518) fix: remove zero gasprice check for BSC
* [\#2519](https://github.com/bnb-chain/bsc/pull/2519) UT: random failure of TestSnapSyncWithBlobs
* [\#2515](https://github.com/bnb-chain/bsc/pull/2515) fix getBlobSidecars by ethclient
* [\#2525](https://github.com/bnb-chain/bsc/pull/2525) fix: ensure empty withdrawals after cancun before broadcast
## v1.4.9
### FEATURE
* [\#2463](https://github.com/bnb-chain/bsc/pull/2463) utils: add check_blobtx.js
* [\#2470](https://github.com/bnb-chain/bsc/pull/2470) jsutils: faucet successful requests within blocks
* [\#2467](https://github.com/bnb-chain/bsc/pull/2467) internal/ethapi: add optional parameter for blobSidecars
### IMPROVEMENT
* [\#2462](https://github.com/bnb-chain/bsc/pull/2462) cmd/utils: add a flag to change breathe block interval for testing
* [\#2497](https://github.com/bnb-chain/bsc/pull/2497) params/config: add Bohr hardfork
* [\#2479](https://github.com/bnb-chain/bsc/pull/2479) dev: ensure consistency in BPS bundle result
### BUGFIX
* [\#2461](https://github.com/bnb-chain/bsc/pull/2461) eth/handler: check lists in body before broadcast blocks
* [\#2455](https://github.com/bnb-chain/bsc/pull/2455) cmd: fix memory leak when big dataset
* [\#2466](https://github.com/bnb-chain/bsc/pull/2466) sync: fix some sync issues caused by prune-block.
* [\#2475](https://github.com/bnb-chain/bsc/pull/2475) fix: move mev op to MinerAPI & add command to console
* [\#2473](https://github.com/bnb-chain/bsc/pull/2473) fix: limit the gas price of the mev bid
* [\#2484](https://github.com/bnb-chain/bsc/pull/2484) fix: fix inspect database error
* [\#2481](https://github.com/bnb-chain/bsc/pull/2481) fix: keep 9W blocks in ancient db when prune block
* [\#2495](https://github.com/bnb-chain/bsc/pull/2495) fix: add an empty freeze db
* [\#2507](https://github.com/bnb-chain/bsc/pull/2507) fix: waiting for the last simulation before pick best bid
## v1.4.8
### FEATURE
* [\#2483](https://github.com/bnb-chain/bsc/pull/2483) core/vm: add secp256r1 into PrecompiledContractsHaber
* [\#2400](https://github.com/bnb-chain/bsc/pull/2400) RIP-7212: Precompile for secp256r1 Curve Support
### IMPROVEMENT
NA
### BUGFIX
NA
## v1.4.7
### FEATURE
* [\#2439](https://github.com/bnb-chain/bsc/pull/2439) config: setup Mainnet Tycho(Cancun) hardfork date
### IMPROVEMENT
* [\#2396](https://github.com/bnb-chain/bsc/pull/2396) metrics: add blockInsertMgaspsGauge to trace mgasps
* [\#2411](https://github.com/bnb-chain/bsc/pull/2411) build(deps): bump golang.org/x/net from 0.19.0 to 0.23.0
* [\#2435](https://github.com/bnb-chain/bsc/pull/2435) txpool: limit max gas when mining is enabled
* [\#2438](https://github.com/bnb-chain/bsc/pull/2438) fix: performance issue when load journal
* [\#2440](https://github.com/bnb-chain/bsc/pull/2440) nancy: add files .nancy-ignore
### BUGFIX
NA
## v1.4.6
### FEATURE
* [\#2227](https://github.com/bnb-chain/bsc/pull/2227) core: separated databases for block data
* [\#2404](https://github.com/bnb-chain/bsc/pull/2404) cmd, p2p: filter peers by regex on name
### IMPROVEMENT
* [\#2201](https://github.com/bnb-chain/bsc/pull/2201) chore: render system bytecode by go:embed
* [\#2363](https://github.com/bnb-chain/bsc/pull/2363) feat: greedy merge tx in bid
* [\#2389](https://github.com/bnb-chain/bsc/pull/2389) deps: update prsym to solve warning about quic-go version
* [\#2341](https://github.com/bnb-chain/bsc/pull/2341) core/trie: persist TrieJournal to journal file instead of kv database
* [\#2395](https://github.com/bnb-chain/bsc/pull/2395) fix: trieJournal format compatible old db format
* [\#2406](https://github.com/bnb-chain/bsc/pull/2406) feat: adaptive for loading journal file or journal kv during loadJournal
* [\#2390](https://github.com/bnb-chain/bsc/pull/2390) chore: fix function names in comment
* [\#2399](https://github.com/bnb-chain/bsc/pull/2399) chore: fix some typos in comments
* [\#2408](https://github.com/bnb-chain/bsc/pull/2408) chore: fix some typos in comments
* [\#2416](https://github.com/bnb-chain/bsc/pull/2416) fix: fix function names
* [\#2424](https://github.com/bnb-chain/bsc/pull/2424) feat: recommit bid when newBidCh is empty to maximize mev reward
* [\#2430](https://github.com/bnb-chain/bsc/pull/2430) fix: oom caused by non-discarded mev simulation env
* [\#2428](https://github.com/bnb-chain/bsc/pull/2428) chore: add metric & log for blobTx
* [\#2419](https://github.com/bnb-chain/bsc/pull/2419) metrics: add doublesign counter
### BUGFIX
* [\#2244](https://github.com/bnb-chain/bsc/pull/2244) cmd/geth: fix importBlock
* [\#2391](https://github.com/bnb-chain/bsc/pull/2391) fix: print value instead of pointer in ConfigCompatError
* [\#2398](https://github.com/bnb-chain/bsc/pull/2398) fix: no import blocks before or equal to the finalized height
* [\#2401](https://github.com/bnb-chain/bsc/pull/2401) fix: allow fast node to rewind after abnormal shutdown
* [\#2403](https://github.com/bnb-chain/bsc/pull/2403) fix: NPE
* [\#2423](https://github.com/bnb-chain/bsc/pull/2423) eth/gasprice: add query limit to defend DDOS attack
* [\#2425](https://github.com/bnb-chain/bsc/pull/2425) fix: adapt journal for cmd
## v1.4.5
### FEATURE
* [\#2378](https://github.com/bnb-chain/bsc/pull/2378) config: setup Testnet Tycho(Cancun) hardfork date

Dockerfile

@@ -26,7 +26,7 @@ COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
 EXPOSE 8545 8546 30303 30303/udp
-# Add some metadata labels to help programatic image consumption
+# Add some metadata labels to help programmatic image consumption
 ARG COMMIT=""
 ARG VERSION=""
 ARG BUILDNUM=""

Makefile

@@ -17,6 +17,11 @@ geth:
 	@echo "Done building."
 	@echo "Run \"$(GOBIN)/geth\" to launch geth."
+
+#? faucet: Build faucet
+faucet:
+	$(GORUN) build/ci.go install ./cmd/faucet
+	@echo "Done building faucet"
 #? all: Build all packages and executables
 all:
 	$(GORUN) build/ci.go install
@@ -29,11 +34,11 @@ truffle-test:
 	docker build . -f ./docker/Dockerfile --target bsc-genesis -t bsc-genesis
 	docker build . -f ./docker/Dockerfile --target bsc -t bsc
 	docker build . -f ./docker/Dockerfile.truffle -t truffle-test
-	docker-compose -f ./tests/truffle/docker-compose.yml up genesis
-	docker-compose -f ./tests/truffle/docker-compose.yml up -d bsc-rpc bsc-validator1
+	docker compose -f ./tests/truffle/docker-compose.yml up genesis
+	docker compose -f ./tests/truffle/docker-compose.yml up -d bsc-rpc bsc-validator1
 	sleep 30
-	docker-compose -f ./tests/truffle/docker-compose.yml up --exit-code-from truffle-test truffle-test
-	docker-compose -f ./tests/truffle/docker-compose.yml down
+	docker compose -f ./tests/truffle/docker-compose.yml up --exit-code-from truffle-test truffle-test
+	docker compose -f ./tests/truffle/docker-compose.yml down
 #? lint: Run certain pre-selected linters
 lint: ## Run linters.

README.md

@@ -9,16 +9,15 @@ https://pkg.go.dev/badge/github.com/ethereum/go-ethereum
 )](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc)
 [![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/z2VpC455eU)
 But from that baseline of EVM compatible, BNB Smart Chain introduces a system of 21 validators with Proof of Staked Authority (PoSA) consensus that can support short block time and lower fees. The most bonded validator candidates of staking will become validators and produce blocks. The double-sign detection and other slashing logic guarantee security, stability, and chain finality.
-Cross-chain transfer and other communication are possible due to native support of interoperability. Relayers and on-chain contracts are developed to support that. BNB Beacon Chain DEX remains a liquid venue of the exchange of assets on both chains. This dual-chain architecture will be ideal for users to take advantage of the fast trading on one side and build their decentralized apps on the other side.
+**The BNB Smart Chain** will be:
 - **A self-sovereign blockchain**: Provides security and safety with elected validators.
 - **EVM-compatible**: Supports all the existing Ethereum tooling along with faster finality and cheaper transaction fees.
-- **Interoperable**: Comes with efficient native dual chain communication; Optimized for scaling high-performance dApps that require fast and smooth user experience.
 - **Distributed with on-chain governance**: Proof of Staked Authority brings in decentralization and community participants. As the native token, BNB will serve as both the gas of smart contract execution and tokens for staking.
-More details in [White Paper](https://www.bnbchain.org/en#smartChain).
+More details in [White Paper](https://github.com/bnb-chain/whitepaper/blob/master/WHITEPAPER.md).
 ## Key features
@@ -34,18 +33,8 @@ To combine DPoS and PoA for consensus, BNB Smart Chain implement a novel consens
 1. Blocks are produced by a limited set of validators.
 2. Validators take turns to produce blocks in a PoA manner, similar to Ethereum's Clique consensus engine.
-3. Validator set are elected in and out based on a staking based governance on BNB Beacon Chain.
-4. The validator set change is relayed via a cross-chain communication mechanism.
-5. Parlia consensus engine will interact with a set of [system contracts](https://docs.bnbchain.org/docs/learn/system-contract) to achieve liveness slash, revenue distributing and validator set renewing func.
+3. Validator set are elected in and out based on a staking based governance on BNB Smart Chain.
+4. Parlia consensus engine will interact with a set of [system contracts](https://docs.bnbchain.org/bnb-smart-chain/staking/overview/#system-contracts) to achieve liveness slash, revenue distributing and validator set renewing func.
-### Light Client of BNB Beacon Chain
-To achieve the cross-chain communication from BNB Beacon Chain to BNB Smart Chain, need introduce a on-chain light client verification algorithm.
-It contains two parts:
-1. [Stateless Precompiled contracts](https://github.com/bnb-chain/bsc/blob/master/core/vm/contracts_lightclient.go) to do tendermint header verification and Merkle Proof verification.
-2. [Stateful solidity contracts](https://github.com/bnb-chain/bsc-genesis-contract/blob/master/contracts/TendermintLightClient.sol) to store validator set and trusted appHash.
 ## Native Token
@@ -53,7 +42,6 @@ BNB will run on BNB Smart Chain in the same way as ETH runs on Ethereum so that
 BNB will be used to:
 1. pay `gas` to deploy or invoke Smart Contract on BSC
-2. perform cross-chain operations, such as transfer token assets across BNB Smart Chain and BNB Beacon Chain.
 ## Building the source
@@ -149,13 +137,11 @@ unzip testnet.zip
 #### 3. Download snapshot
 Download latest chaindata snapshot from [here](https://github.com/bnb-chain/bsc-snapshots). Follow the guide to structure your files.
-Note: If you encounter difficulties downloading the chaindata snapshot and prefer to synchronize from the genesis block on the Chapel testnet, remember to include the additional flag `--chapel` when initially launching Geth.
 #### 4. Start a full node
 ```shell
 ./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0
-## It is recommand to run fullnode with `--tries-verify-mode none` if you want high performance and care little about state consistency
+## It is recommend to run fullnode with `--tries-verify-mode none` if you want high performance and care little about state consistency
 ## It will run with Hash-Base Storage Scheme by default
 ./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 --tries-verify-mode none
@@ -183,7 +169,7 @@ This tool is optional and if you leave it out you can always attach to an alread
 #### 7. More
-More details about [running a node](https://docs.bnbchain.org/docs/validator/fullnode) and [becoming a validator](https://docs.bnbchain.org/docs/validator/create-val)
+More details about [running a node](https://docs.bnbchain.org/bnb-smart-chain/developers/node_operators/full_node/) and [becoming a validator](https://docs.bnbchain.org/bnb-smart-chain/validator/create-val/)
 *Note: Although some internal protective measures prevent transactions from
 crossing over between the main network and test network, you should always
@@ -249,9 +235,7 @@ running web servers, so malicious web pages could try to subvert locally availab
 APIs!**
 ### Operating a private network
-- [BSC-Deploy](https://github.com/bnb-chain/node-deploy/): deploy tool for setting up both BNB Beacon Chain, BNB Smart Chain and the cross chain infrastructure between them.
-- [BSC-Docker](https://github.com/bnb-chain/bsc-docker): deploy tool for setting up local BSC cluster in container.
+- [BSC-Deploy](https://github.com/bnb-chain/node-deploy/): deploy tool for setting up BNB Smart Chain.
 ## Running a bootnode

(accounts/keystore test file)

@@ -51,7 +51,7 @@ var (
 	}
 )
-// waitWatcherStarts waits up to 1s for the keystore watcher to start.
+// waitWatcherStart waits up to 1s for the keystore watcher to start.
 func waitWatcherStart(ks *KeyStore) bool {
 	// On systems where file watch is not supported, just return "ok".
 	if !ks.cache.watcher.enabled() {

(accounts/keystore test file)

@@ -343,7 +343,7 @@ func TestWalletNotifications(t *testing.T) {
 	checkEvents(t, wantEvents, events)
 }
-// TestImportExport tests the import functionality of a keystore.
+// TestImportECDSA tests the import functionality of a keystore.
 func TestImportECDSA(t *testing.T) {
 	t.Parallel()
 	_, ks := tmpKeyStore(t, true)
@@ -362,7 +362,7 @@ func TestImportECDSA(t *testing.T) {
 	}
 }
-// TestImportECDSA tests the import and export functionality of a keystore.
+// TestImportExport tests the import and export functionality of a keystore.
 func TestImportExport(t *testing.T) {
 	t.Parallel()
 	_, ks := tmpKeyStore(t, true)

(new file, package fakebeacon)

@@ -0,0 +1,87 @@
package fakebeacon

import (
	"context"
	"sort"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto/kzg4844"
	"github.com/ethereum/go-ethereum/internal/ethapi"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/rpc"
)

type BlobSidecar struct {
	Blob          kzg4844.Blob       `json:"blob"`
	Index         int                `json:"index"`
	KZGCommitment kzg4844.Commitment `json:"kzg_commitment"`
	KZGProof      kzg4844.Proof      `json:"kzg_proof"`
}

type APIGetBlobSidecarsResponse struct {
	Data []*BlobSidecar `json:"data"`
}

type ReducedGenesisData struct {
	GenesisTime string `json:"genesis_time"`
}

type APIGenesisResponse struct {
	Data ReducedGenesisData `json:"data"`
}

type ReducedConfigData struct {
	SecondsPerSlot string `json:"SECONDS_PER_SLOT"`
}

type IndexedBlobHash struct {
	Index int         // absolute index in the block, a.k.a. position in sidecar blobs array
	Hash  common.Hash // hash of the blob, used for consistency checks
}

// configSpec and beaconGenesis advertise 1-second slots starting at genesis
// time 0, so a slot number is interchangeable with a unix timestamp.
func configSpec() ReducedConfigData {
	return ReducedConfigData{SecondsPerSlot: "1"}
}

func beaconGenesis() APIGenesisResponse {
	return APIGenesisResponse{Data: ReducedGenesisData{GenesisTime: "0"}}
}

// beaconBlobSidecars resolves the slot (timestamp) to a block and returns its
// blob sidecars, either all of them or only the requested indices.
func beaconBlobSidecars(ctx context.Context, backend ethapi.Backend, slot uint64, indices []int) (APIGetBlobSidecarsResponse, error) {
	var blockNrOrHash rpc.BlockNumberOrHash
	header, err := fetchBlockNumberByTime(ctx, int64(slot), backend)
	if err != nil {
		log.Error("Error fetching block number", "slot", slot, "indices", indices)
		return APIGetBlobSidecarsResponse{}, err
	}
	sideCars, err := backend.GetBlobSidecars(ctx, header.Hash())
	if err != nil {
		log.Error("Error fetching Sidecars", "blockNrOrHash", blockNrOrHash, "err", err)
		return APIGetBlobSidecarsResponse{}, err
	}
	sort.Ints(indices)
	fullBlob := len(indices) == 0

	res := APIGetBlobSidecarsResponse{}
	idx := 0
	curIdx := 0
	for _, sideCar := range sideCars {
		for i := 0; i < len(sideCar.Blobs); i++ {
			//hash := kZGToVersionedHash(sideCar.Commitments[i])
			if !fullBlob && curIdx >= len(indices) {
				break
			}
			if fullBlob || idx == indices[curIdx] {
				res.Data = append(res.Data, &BlobSidecar{
					Index:         idx,
					Blob:          sideCar.Blobs[i],
					KZGCommitment: sideCar.Commitments[i],
					KZGProof:      sideCar.Proofs[i],
				})
				curIdx++
			}
			idx++
		}
	}

	return res, nil
}

(new file, package fakebeacon)

@ -0,0 +1,88 @@
package fakebeacon
import (
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
)
var (
versionMethod = "/eth/v1/node/version"
specMethod = "/eth/v1/config/spec"
genesisMethod = "/eth/v1/beacon/genesis"
sidecarsMethodPrefix = "/eth/v1/beacon/blob_sidecars/{slot}"
)
func VersionMethod(w http.ResponseWriter, r *http.Request) {
resp := &structs.GetVersionResponse{
Data: &structs.Version{
Version: "",
},
}
httputil.WriteJson(w, resp)
}
func SpecMethod(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, &structs.GetSpecResponse{Data: configSpec()})
}
func GenesisMethod(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, beaconGenesis())
}
func (s *Service) SidecarsMethod(w http.ResponseWriter, r *http.Request) {
indices, err := parseIndices(r.URL)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
segments := strings.Split(r.URL.Path, "/")
slot, err := strconv.ParseUint(segments[len(segments)-1], 10, 64)
if err != nil {
httputil.HandleError(w, "not a valid slot(timestamp)", http.StatusBadRequest)
return
}
resp, err := beaconBlobSidecars(r.Context(), s.backend, slot, indices)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
httputil.WriteJson(w, resp)
}
// parseIndices filters out invalid and duplicate blob indices
func parseIndices(url *url.URL) ([]int, error) {
rawIndices := url.Query()["indices"]
indices := make([]int, 0, field_params.MaxBlobsPerBlock)
invalidIndices := make([]string, 0)
loop:
for _, raw := range rawIndices {
ix, err := strconv.Atoi(raw)
if err != nil {
invalidIndices = append(invalidIndices, raw)
continue
}
if ix >= field_params.MaxBlobsPerBlock {
invalidIndices = append(invalidIndices, raw)
continue
}
for i := range indices {
if ix == indices[i] {
continue loop
}
}
indices = append(indices, ix)
}
if len(invalidIndices) > 0 {
return nil, fmt.Errorf("requested blob indices %v are invalid", invalidIndices)
}
return indices, nil
}
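A quick sketch of the filtering behavior (hypothetical values, called from within this package):
u, _ := url.Parse("http://localhost:8686/eth/v1/beacon/blob_sidecars/100?indices=2&indices=0&indices=2")
indices, err := parseIndices(u)
// err == nil, indices == [2 0]: the duplicate "2" is dropped and first-seen
// order is kept; a non-numeric value or one >= MaxBlobsPerBlock would instead
// fail the whole call with an error. Callers such as beaconBlobSidecars sort
// the result afterwards.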

View File

@ -0,0 +1,97 @@
package fakebeacon
import (
"net/http"
"strconv"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v5/api/server"
)
const (
DefaultAddr = "localhost"
DefaultPort = 8686
)
type Config struct {
Enable bool
Addr string
Port int
}
func defaultConfig() *Config {
return &Config{
Enable: false,
Addr: DefaultAddr,
Port: DefaultPort,
}
}
type Service struct {
cfg *Config
router *mux.Router
backend ethapi.Backend
}
func NewService(cfg *Config, backend ethapi.Backend) *Service {
cfgs := defaultConfig()
if cfg.Addr != "" {
cfgs.Addr = cfg.Addr
}
if cfg.Port > 0 {
cfgs.Port = cfg.Port
}
s := &Service{
cfg: cfgs,
backend: backend,
}
router := s.newRouter()
s.router = router
return s
}
func (s *Service) Run() {
_ = http.ListenAndServe(s.cfg.Addr+":"+strconv.Itoa(s.cfg.Port), s.router)
}
func (s *Service) newRouter() *mux.Router {
r := mux.NewRouter()
r.Use(server.NormalizeQueryValuesHandler)
for _, e := range s.endpoints() {
r.HandleFunc(e.path, e.handler).Methods(e.methods...)
}
return r
}
type endpoint struct {
path string
handler http.HandlerFunc
methods []string
}
func (s *Service) endpoints() []endpoint {
return []endpoint{
{
path: versionMethod,
handler: VersionMethod,
methods: []string{http.MethodGet},
},
{
path: specMethod,
handler: SpecMethod,
methods: []string{http.MethodGet},
},
{
path: genesisMethod,
handler: GenesisMethod,
methods: []string{http.MethodGet},
},
{
path: sidecarsMethodPrefix,
handler: s.SidecarsMethod,
methods: []string{http.MethodGet},
},
}
}
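Wiring it all together amounts to the following (a sketch mirroring the geth integration later in this diff; backend is any ethapi.Backend):
cfg := &fakebeacon.Config{Enable: true, Addr: "127.0.0.1", Port: 8686}
go fakebeacon.NewService(cfg, backend).Run() // serves the version, spec, genesis and blob_sidecars endpoints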

View File

@ -0,0 +1,90 @@
package fakebeacon
import (
"context"
"fmt"
"strconv"
"testing"
"github.com/stretchr/testify/assert"
)
//
//func TestFetchBlockNumberByTime(t *testing.T) {
// blockNum, err := fetchBlockNumberByTime(context.Background(), 1724052941, client)
// assert.Nil(t, err)
// assert.Equal(t, uint64(41493946), blockNum)
//
// blockNum, err = fetchBlockNumberByTime(context.Background(), 1734052941, client)
// assert.Equal(t, err, errors.New("time too large"))
//
// blockNum, err = fetchBlockNumberByTime(context.Background(), 1600153618, client)
// assert.Nil(t, err)
// assert.Equal(t, uint64(493946), blockNum)
//}
//
//func TestBeaconBlobSidecars(t *testing.T) {
// indexBlobHash := []IndexedBlobHash{
// {Hash: common.HexToHash("0x01231952ecbaede62f8d0398b656072c072db36982c9ef106fbbc39ce14f983c"), Index: 0},
// {Hash: common.HexToHash("0x012c21a8284d2d707bb5318e874d2e1b97a53d028e96abb702b284a2cbb0f79c"), Index: 1},
// {Hash: common.HexToHash("0x011196c8d02536ede0382aa6e9fdba6c460169c0711b5f97fcd701bd8997aee3"), Index: 2},
// {Hash: common.HexToHash("0x019c86b46b27401fb978fd175d1eb7dadf4976d6919501b0c5280d13a5bab57b"), Index: 3},
// {Hash: common.HexToHash("0x01e00db7ee99176b3fd50aab45b4fae953292334bbf013707aac58c455d98596"), Index: 4},
// {Hash: common.HexToHash("0x0117d23b68123d578a98b3e1aa029661e0abda821a98444c21992eb1e5b7208f"), Index: 5},
// //{Hash: common.HexToHash("0x01e00db7ee99176b3fd50aab45b4fae953292334bbf013707aac58c455d98596"), Index: 1},
// }
//
// resp, err := beaconBlobSidecars(context.Background(), 1724055046, []int{0, 1, 2, 3, 4, 5}) // block: 41494647
// assert.Nil(t, err)
// assert.NotNil(t, resp)
// assert.NotEmpty(t, resp.Data)
// for i, sideCar := range resp.Data {
// assert.Equal(t, indexBlobHash[i].Index, sideCar.Index)
// assert.Equal(t, indexBlobHash[i].Hash, kZGToVersionedHash(sideCar.KZGCommitment))
// }
//
// apiscs := make([]*BlobSidecar, 0, len(indexBlobHash))
// // filter and order by hashes
// for _, h := range indexBlobHash {
// for _, apisc := range resp.Data {
// if h.Index == int(apisc.Index) {
// apiscs = append(apiscs, apisc)
// break
// }
// }
// }
//
// assert.Equal(t, len(apiscs), len(resp.Data))
// assert.Equal(t, len(apiscs), len(indexBlobHash))
//}
type TimeToSlotFn func(timestamp uint64) (uint64, error)
// GetTimeToSlotFn returns a function that converts a timestamp to a slot number.
func GetTimeToSlotFn(ctx context.Context) (TimeToSlotFn, error) {
genesis := beaconGenesis()
config := configSpec()
genesisTime, _ := strconv.ParseUint(genesis.Data.GenesisTime, 10, 64)
secondsPerSlot, _ := strconv.ParseUint(config.SecondsPerSlot, 10, 64)
if secondsPerSlot == 0 {
return nil, fmt.Errorf("got bad value for seconds per slot: %v", config.SecondsPerSlot)
}
timeToSlotFn := func(timestamp uint64) (uint64, error) {
if timestamp < genesisTime {
return 0, fmt.Errorf("provided timestamp (%v) precedes genesis time (%v)", timestamp, genesisTime)
}
return (timestamp - genesisTime) / secondsPerSlot, nil
}
return timeToSlotFn, nil
}
func TestAPI(t *testing.T) {
slotFn, err := GetTimeToSlotFn(context.Background())
assert.Nil(t, err)
expTx := uint64(123151345)
gotTx, err := slotFn(expTx)
assert.Nil(t, err)
assert.Equal(t, expTx, gotTx)
}

View File

@ -0,0 +1,65 @@
package fakebeacon
import (
"context"
"errors"
"fmt"
"math/rand"
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/rpc"
)
func fetchBlockNumberByTime(ctx context.Context, ts int64, backend ethapi.Backend) (*types.Header, error) {
// estimate the block number for the given timestamp.
currentHeader := backend.CurrentHeader()
blockTime := int64(currentHeader.Time)
if ts > blockTime {
return nil, errors.New("time too large")
}
blockNum := currentHeader.Number.Uint64()
estimateEndNumber := int64(blockNum) - (blockTime-ts)/3 // assume roughly one block every 3 seconds
// narrow the estimate down to the target block
for {
header, err := backend.HeaderByNumber(ctx, rpc.BlockNumber(estimateEndNumber))
if err != nil {
time.Sleep(time.Duration(rand.Int()%180) * time.Millisecond)
continue
}
if header == nil {
estimateEndNumber -= 1
time.Sleep(time.Duration(rand.Int()%180) * time.Millisecond)
continue
}
headerTime := int64(header.Time)
if headerTime == ts {
return header, nil
}
// keep estimateEndNumber a little larger than the real value
if headerTime > ts+8 {
estimateEndNumber -= (headerTime - ts) / 3
} else if headerTime < ts {
estimateEndNumber += (ts-headerTime)/3 + 1
} else {
// search one by one
for headerTime >= ts {
header, err = backend.HeaderByNumber(ctx, rpc.BlockNumber(estimateEndNumber-1))
if err != nil {
time.Sleep(time.Duration(rand.Int()%180) * time.Millisecond)
continue
}
headerTime = int64(header.Time)
if headerTime == ts {
return header, nil
}
estimateEndNumber -= 1
if headerTime < ts { // walked past the target time: no block carries this exact timestamp
return nil, fmt.Errorf("block not found by time %d", ts)
}
}
}
}
}
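To make the estimation concrete, a worked example with hypothetical numbers, under the ~3s block-time assumption baked into the divisions above:
head, headTime := int64(41_500_000), int64(1_724_055_100) // hypothetical chain head
target := int64(1_724_055_046)      // 54s behind the head
guess := head - (headTime-target)/3 // first estimate: 41_499_982
// If the guessed header turns out to be, say, 24s past the target, the loop
// steps back another 24/3 = 8 blocks; once within the 8-second window it
// scans block by block for an exact timestamp match.
_ = guess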

View File

@ -43,7 +43,42 @@ func TestExtraParse(t *testing.T) {
} }
} }
// case 3, |---Extra Vanity---|---Empty---|---Vote Attestation---|---Extra Seal---| // case 3, |---Extra Vanity---|---Validators Number and Validators Bytes---|---Turn Length---|---Empty---|---Extra Seal---|
{
extraData := "0xd983010209846765746889676f312e31392e3131856c696e75780000a6bf97c1152465176c461afb316ebc773c61faee85a6515daa8a923564c6ffd37fb2fe9f118ef88092e8762c7addb526ab7eb1e772baef85181f892c731be0c1891a50e6b06262c816295e26495cef6f69dfa69911d9d8e4f3bbadb89b977cf58294f7239d515e15b24cfeb82494056cf691eaf729b165f32c9757c429dba5051155903067e56ebe3698678e912d4c407bbe49438ed859fe965b140dcf1aab71a993c1f7f6929d1fe2a17b4e14614ef9fc5bdc713d6631d675403fbeefac55611bf612700b1b65f4744861b80b0f7d6ab03f349bbafec1551819b8be1efea2fc46ca749aa184248a459464eec1a21e7fc7b71a053d9644e9bb8da4853b8f872cd7c1d6b324bf1922829830646ceadfb658d3de009a61dd481a114a2e761c554b641742c973867899d300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000069c77a677c40c7fbea129d4b171a39b7a8ddabfab2317f59d86abfaf690850223d90e9e7593d91a29331dfc2f84d5adecc75fc39ecab4632c1b4400a3dd1e1298835bcca70f657164e5b75689b64b7fd1fa275f334f28e1896a26afa1295da81418593bd12814463d9f6e45c36a0e47eb4cd3e5b6af29c41e2a3a5636430155a466e216585af3ba772b61c6014342d914470ec7ac2975be345796c2b81db0422a5fd08e40db1fc2368d2245e4b18b1d0b85c921aaaafd2e341760e29fc613edd39f71254614e2055c3287a517ae2f5b9e386cd1b50a4550696d957cb4900f03ab84f83ff2df44193496793b847f64e9d6db1b3953682bb95edd096eb1e69bbd357c200992ca78050d0cbe180cfaa018e8b6c8fd93d6f4cea42bbb345dbc6f0dfdb5bec73a8a257074e82b881cfa06ef3eb4efeca060c2531359abd0eab8af1e3edfa2025fca464ac9c3fd123f6c24a0d78869485a6f79b60359f141df90a0c745125b131caaffd12000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b218c5d6af1f979ac42bc68d98a5a0d796c6ab01000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b4dd66d7c2c7e57f628210187192fb89d4b99dd4000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000be807dddb074639cd9fa61b47676c064fc50d62cb1f2c71577def3144fabeb75a8a1c8cb5b51d1d1b4a05eec67988b8685008baa17459ec425dbaebc852f496dc92196cdcc8e6d00c17eb431350c6c50d8b8f05176b90b11b3a3d4feb825ae9702711566df5dbf38e82add4dd1b573b95d2466fa6501ccb81e9d26a352b96150ccbf7b697fd0a419d1d6bf74282782b0b3eb1413c901d6ecf02e8e28000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e2d3a739effcd3a99387d015e260eefac72ebea1956c470ddff48cb49300200b5f83497f3a3ccb3aeb83c5edd9818569038e61d197184f4aa6939ea5e9911e3e98ac6d21e9ae3261a475a27bb1028f140bc2a7c843318afd000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000ea0a6e3c511bbd10f4519ece37dc24887e11b55db2d4c6283c44a1c7bd503aaba7666e9f0c830e0ff016c1c750a5e48757a713d0836b1cabfd5c281b1de3b77d1c192183ee226379db83cffc681495730c11fdde79ba4c0c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000ef0274e31810c9df02f98fafde0f841f4e66a1cd00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004e99f701bb14cb7dfb68b90bd3e6d1ca656964630de71beffc7f33f7f08ec99d336ec51ad9fad0ac84ae77ca2e8ad9512acc56e0d7c93f3c2ce7de1b69149a5a400"
extra, err := parseExtra(extraData)
assert.NoError(t, err)
{
var have = extra.ValidatorSize
var want = uint8(21)
if have != want {
t.Fatalf("extra.ValidatorSize mismatch, have %d, want %d", have, want)
}
}
{
var have = common.Bytes2Hex(extra.Validators[14].Address[:])
var want = "cc8e6d00c17eb431350c6c50d8b8f05176b90b11"
if have != want {
t.Fatalf("extra.Validators[14].Address mismatch, have %s, want %s", have, want)
}
}
{
var have = common.Bytes2Hex(extra.Validators[18].BLSPublicKey[:])
var want = "b2d4c6283c44a1c7bd503aaba7666e9f0c830e0ff016c1c750a5e48757a713d0836b1cabfd5c281b1de3b77d1c192183"
if have != want {
t.Fatalf("extra.Validators[18].BLSPublicKey mismatch, have %s, want %s", have, want)
}
}
{
var have = extra.TurnLength
var want = uint8(4)
if *have != want {
t.Fatalf("extra.TurnLength mismatch, have %d, want %d", *have, want)
}
}
}
// case 4, |---Extra Vanity---|---Empty---|---Vote Attestation---|---Extra Seal---|
{ {
extraData := "0xd883010205846765746888676f312e32302e35856c696e75780000002995c52af8b5830563efb86089cf168dcf4c5d3cb057926628ad1bf0f03ea67eef1458485578a4f8489afa8a853ecc7af45e2d145c21b70641c4b29f0febd2dd2c61fa1ba174be3fd47f1f5fa2ab9b5c318563d8b70ca58d0d51e79ee32b2fb721649e2cb9d36538361fba11f84c8401d14bb7a0fa67ddb3ba654d6006bf788710032247aa4d1be0707273e696b422b3ff72e9798401d14bbaa01225f505f5a0e1aefadcd2913b7aac9009fe4fb3d1bf57399e0b9dce5947f94280fe6d3647276c4127f437af59eb7c7985b2ae1ebe432619860695cb6106b80cc66c735bc1709afd11f233a2c97409d38ebaf7178aa53e895aea2fe0a229f71ec601" extraData := "0xd883010205846765746888676f312e32302e35856c696e75780000002995c52af8b5830563efb86089cf168dcf4c5d3cb057926628ad1bf0f03ea67eef1458485578a4f8489afa8a853ecc7af45e2d145c21b70641c4b29f0febd2dd2c61fa1ba174be3fd47f1f5fa2ab9b5c318563d8b70ca58d0d51e79ee32b2fb721649e2cb9d36538361fba11f84c8401d14bb7a0fa67ddb3ba654d6006bf788710032247aa4d1be0707273e696b422b3ff72e9798401d14bbaa01225f505f5a0e1aefadcd2913b7aac9009fe4fb3d1bf57399e0b9dce5947f94280fe6d3647276c4127f437af59eb7c7985b2ae1ebe432619860695cb6106b80cc66c735bc1709afd11f233a2c97409d38ebaf7178aa53e895aea2fe0a229f71ec601"
extra, err := parseExtra(extraData) extra, err := parseExtra(extraData)
@ -64,9 +99,9 @@ func TestExtraParse(t *testing.T) {
} }
} }
// case 4, |---Extra Vanity---|---Validators Number and Validators Bytes---|---Vote Attestation---|---Extra Seal---| // case 5, |---Extra Vanity---|---Validators Number and Validators Bytes---|---Vote Attestation---|---Extra Seal---|
{ {
extraData := "0xd883010209846765746888676f312e31392e38856c696e7578000000dc55905c071284214b9b9c85549ab3d2b972df0deef66ac2c98e82934ca974fdcd97f3309de967d3c9c43fa711a8d673af5d75465844bf8969c8d1948d903748ac7b8b1720fa64e50c35552c16704d214347f29fa77f77da6d75d7c752b742ad4855bae330426b823e742da31f816cc83bc16d69a9134be0cfb4a1d17ec34f1b5b32d5c20440b8536b1e88f0f247788386d0ed6c748e03a53160b4b30ed3748cc5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000980a75ecd1309ea12fa2ed87a8744fbfc9b863d589037a9ace3b590165ea1c0c5ac72bf600b7c88c1e435f41932c1132aae1bfa0bb68e46b96ccb12c3415e4d82af717d8a2959d3f95eae5dc7d70144ce1b73b403b7eb6e0b973c2d38487e58fd6e145491b110080fb14ac915a0411fc78f19e09a399ddee0d20c63a75d8f930f1694544ad2dc01bb71b214cb885500844365e95cd9942c7276e7fd8a2750ec6dded3dcdc2f351782310b0eadc077db59abca0f0cd26776e2e7acb9f3bce40b1fa5221fd1561226c6263cc5ff474cf03cceff28abc65c9cbae594f725c80e12d96c9b86c3400e529bfe184056e257c07940bb664636f689e8d2027c834681f8f878b73445261034e946bb2d901b4b878f8b27bb8608c11016739b3f8a19e54ab8c7abacd936cfeba200f3645a98b65adb0dd3692b69ce0b3ae10e7176b9a4b0d83f04065b1042b4bcb646a34b75c550f92fc34b8b2b1db0fa0d3172db23ba92727c80bcd306320d0ff411bf858525fde13bc8e0370f84c8401e9c2e6a0820dc11d63176a0eb1b828bc5376867b275579112b7013358da40317e7bab6e98401e9c2e7a00edc71ce80105a3220a87bea2792fa340d66c59002f02b0a09349ed1ed284070808b972fac2b9077a4dcb6fc37093799a652858016c99142b227500c844fa97ec22e3f9d3b1e982f14bcd999a7453e89ce5ef5c55f1c7f8f74ba904186cd67828200" extraData := "0xd883010209846765746888676f312e31392e38856c696e7578000000dc55905c071284214b9b9c85549ab3d2b972df0deef66ac2c98e82934ca974fdcd97f3309de967d3c9c43fa711a8d673af5d75465844bf8969c8d1948d903748ac7b8b1720fa64e50c35552c16704d214347f29fa77f77da6d75d7c752b742ad4855bae330426b823e742da31f816cc83bc16d69a9134be0cfb4a1d17ec34f1b5b32d5c20440b8536b1e88f0f247788386d0ed6c748e03a53160b4b30ed3748cc5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000980a75ecd1309ea12fa2ed87a8744fbfc9b863d589037a9ace3b590165ea1c0c5ac72bf600b7c88c1e435f41932c1132aae1bfa0bb68e46b96ccb12c3415e4d82af717d8a2959d3f95eae5dc7d70144ce1b73b403b7eb6e0b973c2d38487e58fd6e145491b110080fb14ac915a0411fc78f19e09a399ddee0d20c63a75d8f930f1694544ad2dc01bb71b214cb885500844365e95cd9942c7276e7fd8a2750ec6dded3dcdc2f351782310b0eadc077db59abca0f0cd26776e2e7acb9f3bce40b1fa5221fd1561226c6263cc5ff474cf03cceff28abc65c9cbae594f725c80e12d96c9b86c3400e529bfe184056e257c07940bb664636f689e8d2027c834681f8f878b73445261034e946bb2d901b4b878f8b27bb8608c11016739b3f8a19e54ab8c7abacd936cfeba200f3645a98b65adb0dd3692b69ce0b3ae10e7176b9a4b0d83f04065b1042b4bcb646a34b75c550f92fc34b8b2b1db0fa0d3172db23ba92727c80bcd306320d0ff411bf858525fde13bc8e0370f84c8401e9c2e6a0820dc11d63176a0eb1b828bc5376867b275579112b7013358da40317e7bab6e98401e9c2e7a00edc71ce80105a3220a87bea2792fa340d66c59002f02b0a09349ed1ed28407080048b972fac2b9077a4dcb6fc37093799a652858016c99142b227500c844fa97ec22e3f9d3b1e982f14bcd999a7453e89ce5ef5c55f1c7f8f74ba904186cd67828200"
extra, err := parseExtra(extraData) extra, err := parseExtra(extraData)
assert.NoError(t, err) assert.NoError(t, err)
{ {
@ -105,4 +140,53 @@ func TestExtraParse(t *testing.T) {
} }
} }
} }
// case 6, |---Extra Vanity---|---Validators Number and Validators Bytes---|---Turn Length---|---Vote Attestation---|---Extra Seal---|
{
extraData := "0xd883010209846765746888676f312e31392e38856c696e7578000000dc55905c071284214b9b9c85549ab3d2b972df0deef66ac2c98e82934ca974fdcd97f3309de967d3c9c43fa711a8d673af5d75465844bf8969c8d1948d903748ac7b8b1720fa64e50c35552c16704d214347f29fa77f77da6d75d7c752b742ad4855bae330426b823e742da31f816cc83bc16d69a9134be0cfb4a1d17ec34f1b5b32d5c20440b8536b1e88f0f247788386d0ed6c748e03a53160b4b30ed3748cc5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000980a75ecd1309ea12fa2ed87a8744fbfc9b863d589037a9ace3b590165ea1c0c5ac72bf600b7c88c1e435f41932c1132aae1bfa0bb68e46b96ccb12c3415e4d82af717d8a2959d3f95eae5dc7d70144ce1b73b403b7eb6e0b973c2d38487e58fd6e145491b110080fb14ac915a0411fc78f19e09a399ddee0d20c63a75d8f930f1694544ad2dc01bb71b214cb885500844365e95cd9942c7276e7fd8a2750ec6dded3dcdc2f351782310b0eadc077db59abca0f0cd26776e2e7acb9f3bce40b1fa5221fd1561226c6263cc5ff474cf03cceff28abc65c9cbae594f725c80e12d96c9b86c3400e529bfe184056e257c07940bb664636f689e8d2027c834681f8f878b73445261034e946bb2d901b4b87804f8b27bb8608c11016739b3f8a19e54ab8c7abacd936cfeba200f3645a98b65adb0dd3692b69ce0b3ae10e7176b9a4b0d83f04065b1042b4bcb646a34b75c550f92fc34b8b2b1db0fa0d3172db23ba92727c80bcd306320d0ff411bf858525fde13bc8e0370f84c8401e9c2e6a0820dc11d63176a0eb1b828bc5376867b275579112b7013358da40317e7bab6e98401e9c2e7a00edc71ce80105a3220a87bea2792fa340d66c59002f02b0a09349ed1ed28407080048b972fac2b9077a4dcb6fc37093799a652858016c99142b227500c844fa97ec22e3f9d3b1e982f14bcd999a7453e89ce5ef5c55f1c7f8f74ba904186cd67828200"
extra, err := parseExtra(extraData)
assert.NoError(t, err)
{
var have = common.Bytes2Hex(extra.Validators[0].Address[:])
var want = "1284214b9b9c85549ab3d2b972df0deef66ac2c9"
if have != want {
t.Fatalf("extra.Validators[0].Address mismatch, have %s, want %s", have, want)
}
}
{
var have = common.Bytes2Hex(extra.Validators[0].BLSPublicKey[:])
var want = "8e82934ca974fdcd97f3309de967d3c9c43fa711a8d673af5d75465844bf8969c8d1948d903748ac7b8b1720fa64e50c"
if have != want {
t.Fatalf("extra.Validators[0].BLSPublicKey mismatch, have %s, want %s", have, want)
}
}
{
var have = extra.Validators[0].VoteIncluded
var want = true
if have != want {
t.Fatalf("extra.Validators[0].VoteIncluded mismatch, have %t, want %t", have, want)
}
}
{
var have = common.Bytes2Hex(extra.Data.TargetHash[:])
var want = "0edc71ce80105a3220a87bea2792fa340d66c59002f02b0a09349ed1ed284070"
if have != want {
t.Fatalf("extra.Data.TargetHash mismatch, have %s, want %s", have, want)
}
}
{
var have = extra.Data.TargetNumber
var want = uint64(32096999)
if have != want {
t.Fatalf("extra.Data.TargetNumber mismatch, have %d, want %d", have, want)
}
}
{
var have = extra.TurnLength
var want = uint8(4)
if *have != want {
t.Fatalf("extra.TurnLength mismatch, have %d, want %d", *have, want)
}
}
}
} }

View File

@ -24,7 +24,7 @@ const (
BLSPublicKeyLength = 48 BLSPublicKeyLength = 48
// follow order in extra field // follow order in extra field
// |---Extra Vanity---|---Validators Number and Validators Bytes (or Empty)---|---Vote Attestation (or Empty)---|---Extra Seal---| // |---Extra Vanity---|---Validators Number and Validators Bytes (or Empty)---|---Turn Length (or Empty)---|---Vote Attestation (or Empty)---|---Extra Seal---|
extraVanityLength = 32 // Fixed number of extra-data prefix bytes reserved for signer vanity extraVanityLength = 32 // Fixed number of extra-data prefix bytes reserved for signer vanity
validatorNumberSize = 1 // Fixed number of extra prefix bytes reserved for validator number after Luban validatorNumberSize = 1 // Fixed number of extra prefix bytes reserved for validator number after Luban
validatorBytesLength = common.AddressLength + types.BLSPublicKeyLength validatorBytesLength = common.AddressLength + types.BLSPublicKeyLength
@ -35,6 +35,7 @@ type Extra struct {
ExtraVanity string ExtraVanity string
ValidatorSize uint8 ValidatorSize uint8
Validators validatorsAscending Validators validatorsAscending
TurnLength *uint8
*types.VoteAttestation *types.VoteAttestation
ExtraSeal []byte ExtraSeal []byte
} }
@ -113,6 +114,15 @@ func parseExtra(hexData string) (*Extra, error) {
sort.Sort(extra.Validators) sort.Sort(extra.Validators)
data = data[validatorBytesTotalLength-validatorNumberSize:] data = data[validatorBytesTotalLength-validatorNumberSize:]
dataLength = len(data) dataLength = len(data)
// parse TurnLength
if dataLength > 0 {
if data[0] != '\xf8' {
extra.TurnLength = &data[0]
data = data[1:]
dataLength = len(data)
}
}
} }
// parse Vote Attestation // parse Vote Attestation
@ -148,6 +158,10 @@ func prettyExtra(extra Extra) {
} }
} }
if extra.TurnLength != nil {
fmt.Printf("TurnLength : %d\n", *extra.TurnLength)
}
if extra.VoteAttestation != nil { if extra.VoteAttestation != nil {
fmt.Printf("Attestation :\n") fmt.Printf("Attestation :\n")
fmt.Printf("\tVoteAddressSet : %b, %d\n", extra.VoteAddressSet, bitset.From([]uint64{uint64(extra.VoteAddressSet)}).Count()) fmt.Printf("\tVoteAddressSet : %b, %d\n", extra.VoteAddressSet, bitset.From([]uint64{uint64(extra.VoteAddressSet)}).Count())

Binary file not shown (new image, 10 KiB)

View File

@ -0,0 +1,23 @@
# 1.Background
This document explains how projects with customized tokens can integrate them into the BSC faucet tool.
## 1.1. How to Integrate Your Token
- Step 1: Fund the faucet address by sending a specific amount of your BEP-20 token to the faucet address (0xaa25aa7a19f9c426e07dee59b12f944f4d9f1dd3) on the BSC testnet (see the sketch after this list).
- Step 2: Update this README.md file and create a Pull Request on [bsc github](https://github.com/bnb-chain/bsc) with relevant information.
We will review the request, and once it is approved, the faucet tool will start to support the customized token and list it on https://www.bnbchain.org/en/testnet-faucet.
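For step 1, the funding transaction is an ordinary BEP-20 `transfer` call. A minimal sketch of building the calldata (illustration only; the amount assumes 18 decimals, and signing/broadcast is elided):
package main

import (
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	faucet := common.HexToAddress("0xaa25aa7a19f9c426e07dee59b12f944f4d9f1dd3")
	erc20, _ := abi.JSON(strings.NewReader(`[{"name":"transfer","type":"function","inputs":[{"name":"to","type":"address"},{"name":"amount","type":"uint256"}],"outputs":[{"type":"bool"}]}]`))
	input, _ := erc20.Pack("transfer", faucet, big.NewInt(1e18)) // 1 token at 18 decimals
	fmt.Printf("calldata: 0x%x\n", input)
	// Sign and send a transaction carrying this calldata to your token's
	// contract address (e.g. via ethclient + keystore), then list the tx
	// hash as fundTx below.
}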
# 2.Token List
## 2.1.DemoToken
- symbol: DEMO
- amount: 10000000000000000000
- icon: ./demotoken.png
- addr: https://testnet.bscscan.com/address/0xe15c158d768c306dae87b96430a94f884333e55d
- fundTx: [0xa499dc9aaf918aff0507538a8aa80a88d0af6ca15054e6acc57b69c651945280](https://testnet.bscscan.com/tx/0x2a3f334b6ca756b64331bdec9e6cf3207ac50a4839fda6379e909de4d9a194ca)
## 2.2.DIN token
- symbol: DIN
- amount: 10000000000000000000
- icon: ./DIN.png
- addr: https://testnet.bscscan.com/address/0xb8b40FcC5B4519Dba0E07Ac8821884CE90BdE677
- fundTx: [0x17fc4c1db133830c7c146a0d41ca1df31cb446989ec11b382d58bb6176d6fde3](https://testnet.bscscan.com/tx/0x17fc4c1db133830c7c146a0d41ca1df31cb446989ec11b382d58bb6176d6fde3)

Binary file not shown (new image, 28 KiB)

View File

@ -49,12 +49,14 @@ import (
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
"github.com/gorilla/websocket" "github.com/gorilla/websocket"
"golang.org/x/time/rate"
) )
var ( var (
genesisFlag = flag.String("genesis", "", "Genesis json file to seed the chain with") genesisFlag = flag.String("genesis", "", "Genesis json file to seed the chain with")
apiPortFlag = flag.Int("apiport", 8080, "Listener port for the HTTP API connection") apiPortFlag = flag.Int("apiport", 8080, "Listener port for the HTTP API connection")
wsEndpoint = flag.String("ws", "http://127.0.0.1:7777/", "Url to ws endpoint") wsEndpoint = flag.String("ws", "http://127.0.0.1:7777/", "Url to ws endpoint")
wsEndpointMainnet = flag.String("ws.mainnet", "", "Url to ws endpoint of BSC mainnet")
netnameFlag = flag.String("faucet.name", "", "Network name to assign to the faucet") netnameFlag = flag.String("faucet.name", "", "Network name to assign to the faucet")
payoutFlag = flag.Int("faucet.amount", 1, "Number of Ethers to pay out per user request") payoutFlag = flag.Int("faucet.amount", 1, "Number of Ethers to pay out per user request")
@ -76,6 +78,12 @@ var (
fixGasPrice = flag.Int64("faucet.fixedprice", 0, "Will use fixed gas price if specified") fixGasPrice = flag.Int64("faucet.fixedprice", 0, "Will use fixed gas price if specified")
twitterTokenFlag = flag.String("twitter.token", "", "Bearer token to authenticate with the v2 Twitter API") twitterTokenFlag = flag.String("twitter.token", "", "Bearer token to authenticate with the v2 Twitter API")
twitterTokenV1Flag = flag.String("twitter.token.v1", "", "Bearer token to authenticate with the v1.1 Twitter API") twitterTokenV1Flag = flag.String("twitter.token.v1", "", "Bearer token to authenticate with the v1.1 Twitter API")
resendInterval = 15 * time.Second
resendBatchSize = 3
resendMaxGasPrice = big.NewInt(50 * params.GWei)
wsReadTimeout = 5 * time.Minute
minMainnetBalance = big.NewInt(2 * 1e6 * params.GWei) // 0.002 bnb
) )
var ( var (
@ -86,11 +94,17 @@ var (
//go:embed faucet.html //go:embed faucet.html
var websiteTmpl string var websiteTmpl string
func weiToEtherStringFx(wei *big.Int, prec int) string {
etherValue := new(big.Float).Quo(new(big.Float).SetInt(wei), big.NewFloat(params.Ether))
// Format the big.Float directly to a string with the specified precision
return etherValue.Text('f', prec)
}
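// e.g. weiToEtherStringFx(minMainnetBalance, 3) == "0.002"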
func main() { func main() {
// Parse the flags and set up the logger to print everything requested // Parse the flags and set up the logger to print everything requested
flag.Parse() flag.Parse()
log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.FromLegacyLevel(*logFlag), true))) log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.FromLegacyLevel(*logFlag), false)))
log.Info("faucet started")
// Construct the payout tiers // Construct the payout tiers
amounts := make([]string, *tiersFlag) amounts := make([]string, *tiersFlag)
for i := 0; i < *tiersFlag; i++ { for i := 0; i < *tiersFlag; i++ {
@ -169,7 +183,7 @@ func main() {
log.Crit("Failed to unlock faucet signer account", "err", err) log.Crit("Failed to unlock faucet signer account", "err", err)
} }
// Assemble and start the faucet light service // Assemble and start the faucet light service
faucet, err := newFaucet(genesis, *wsEndpoint, ks, website.Bytes(), bep2eInfos) faucet, err := newFaucet(genesis, *wsEndpoint, *wsEndpointMainnet, ks, website.Bytes(), bep2eInfos)
if err != nil { if err != nil {
log.Crit("Failed to start faucet", "err", err) log.Crit("Failed to start faucet", "err", err)
} }
@ -196,9 +210,10 @@ type bep2eInfo struct {
// faucet represents a crypto faucet backed by an Ethereum light client. // faucet represents a crypto faucet backed by an Ethereum light client.
type faucet struct { type faucet struct {
config *params.ChainConfig // Chain configurations for signing config *params.ChainConfig // Chain configurations for signing
client *ethclient.Client // Client connection to the Ethereum chain client *ethclient.Client // Client connection to the Ethereum chain
index []byte // Index page to serve up on the web clientMainnet *ethclient.Client // Client connection to BSC mainnet for balance check
index []byte // Index page to serve up on the web
keystore *keystore.KeyStore // Keystore containing the single signer keystore *keystore.KeyStore // Keystore containing the single signer
account accounts.Account // Account funding user faucet requests account accounts.Account // Account funding user faucet requests
@ -216,6 +231,8 @@ type faucet struct {
bep2eInfos map[string]bep2eInfo bep2eInfos map[string]bep2eInfo
bep2eAbi abi.ABI bep2eAbi abi.ABI
limiter *IPRateLimiter
} }
// wsConn wraps a websocket connection with a write mutex as the underlying // wsConn wraps a websocket connection with a write mutex as the underlying
@ -225,7 +242,7 @@ type wsConn struct {
wlock sync.Mutex wlock sync.Mutex
} }
func newFaucet(genesis *core.Genesis, url string, ks *keystore.KeyStore, index []byte, bep2eInfos map[string]bep2eInfo) (*faucet, error) { func newFaucet(genesis *core.Genesis, url string, mainnetUrl string, ks *keystore.KeyStore, index []byte, bep2eInfos map[string]bep2eInfo) (*faucet, error) {
bep2eAbi, err := abi.JSON(strings.NewReader(bep2eAbiJson)) bep2eAbi, err := abi.JSON(strings.NewReader(bep2eAbiJson))
if err != nil { if err != nil {
return nil, err return nil, err
@ -234,17 +251,30 @@ func newFaucet(genesis *core.Genesis, url string, ks *keystore.KeyStore, index [
if err != nil { if err != nil {
return nil, err return nil, err
} }
clientMainnet, err := ethclient.Dial(mainnetUrl)
if err != nil {
// skip the mainnet balance check if there is no available mainnet endpoint
log.Warn("dial mainnet endpoint failed", "mainnetUrl", mainnetUrl, "err", err)
}
// Allow 1 request per second with a burst of 5, and cache up to 1000 IPs
limiter, err := NewIPRateLimiter(rate.Limit(1.0), 5, 1000)
if err != nil {
return nil, err
}
return &faucet{ return &faucet{
config: genesis.Config, config: genesis.Config,
client: client, client: client,
index: index, clientMainnet: clientMainnet,
keystore: ks, index: index,
account: ks.Accounts()[0], keystore: ks,
timeouts: make(map[string]time.Time), account: ks.Accounts()[0],
update: make(chan struct{}, 1), timeouts: make(map[string]time.Time),
bep2eInfos: bep2eInfos, update: make(chan struct{}, 1),
bep2eAbi: bep2eAbi, bep2eInfos: bep2eInfos,
bep2eAbi: bep2eAbi,
limiter: limiter,
}, nil }, nil
} }
@ -272,6 +302,20 @@ func (f *faucet) webHandler(w http.ResponseWriter, r *http.Request) {
// apiHandler handles requests for Ether grants and transaction statuses. // apiHandler handles requests for Ether grants and transaction statuses.
func (f *faucet) apiHandler(w http.ResponseWriter, r *http.Request) { func (f *faucet) apiHandler(w http.ResponseWriter, r *http.Request) {
ip := r.RemoteAddr
if len(r.Header.Get("X-Forwarded-For")) > 0 {
ips := strings.Split(r.Header.Get("X-Forwarded-For"), ",")
if len(ips) > 0 {
ip = strings.TrimSpace(ips[len(ips)-1])
}
}
if !f.limiter.GetLimiter(ip).Allow() {
log.Warn("Too many requests from client: ", "client", ip)
http.Error(w, "Too many requests", http.StatusTooManyRequests)
return
}
upgrader := websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }} upgrader := websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }}
conn, err := upgrader.Upgrade(w, r, nil) conn, err := upgrader.Upgrade(w, r, nil)
if err != nil { if err != nil {
@ -354,7 +398,11 @@ func (f *faucet) apiHandler(w http.ResponseWriter, r *http.Request) {
Captcha string `json:"captcha"` Captcha string `json:"captcha"`
Symbol string `json:"symbol"` Symbol string `json:"symbol"`
} }
// not sure if it helps or not, but setting a read deadline could help prevent resource leakage:
// if the user does not respond for too long, the routine would otherwise be stuck.
conn.SetReadDeadline(time.Now().Add(wsReadTimeout))
if err = conn.ReadJSON(&msg); err != nil { if err = conn.ReadJSON(&msg); err != nil {
log.Debug("read json message failed", "err", err, "ip", ip)
return return
} }
if !*noauthFlag && !strings.HasPrefix(msg.URL, "https://twitter.com/") && !strings.HasPrefix(msg.URL, "https://www.facebook.com/") { if !*noauthFlag && !strings.HasPrefix(msg.URL, "https://twitter.com/") && !strings.HasPrefix(msg.URL, "https://www.facebook.com/") {
@ -372,9 +420,9 @@ func (f *faucet) apiHandler(w http.ResponseWriter, r *http.Request) {
} }
continue continue
} }
log.Info("Faucet funds requested", "url", msg.URL, "tier", msg.Tier) log.Info("Faucet funds requested", "url", msg.URL, "tier", msg.Tier, "ip", ip)
// If captcha verifications are enabled, make sure we're not dealing with a robot // check #1: captcha verifications to exclude robot
if *captchaToken != "" { if *captchaToken != "" {
form := url.Values{} form := url.Values{}
form.Add("secret", *captchaSecret) form.Add("secret", *captchaSecret)
@ -451,88 +499,108 @@ func (f *faucet) apiHandler(w http.ResponseWriter, r *http.Request) {
} }
continue continue
} }
log.Info("Faucet request valid", "url", msg.URL, "tier", msg.Tier, "user", username, "address", address)
// Ensure the user didn't request funds too recently // check #2: check IP and ID(address) to ensure the user didn't request funds too frequently
f.lock.Lock() f.lock.Lock()
var (
fund bool
timeout time.Time
)
if ipTimeout := f.timeouts[ips[len(ips)-2]]; time.Now().Before(ipTimeout) { if ipTimeout := f.timeouts[ips[len(ips)-2]]; time.Now().Before(ipTimeout) {
f.lock.Unlock()
if err = sendError(wsconn, fmt.Errorf("%s left until next allowance", common.PrettyDuration(time.Until(ipTimeout)))); err != nil { // nolint: gosimple if err = sendError(wsconn, fmt.Errorf("%s left until next allowance", common.PrettyDuration(time.Until(ipTimeout)))); err != nil { // nolint: gosimple
log.Warn("Failed to send funding error to client", "err", err) log.Warn("Failed to send funding error to client", "err", err)
return
} }
f.lock.Unlock() log.Info("too frequent funding(ip)", "TimeLeft", common.PrettyDuration(time.Until(ipTimeout)), "ip", ips[len(ips)-2], "ipsStr", ipsStr)
continue continue
} }
if idTimeout := f.timeouts[id]; time.Now().Before(idTimeout) {
if timeout = f.timeouts[id]; time.Now().After(timeout) { f.lock.Unlock()
var tx *types.Transaction // Send an error if too frequent funding, otherwise a success
if msg.Symbol == "BNB" { if err = sendError(wsconn, fmt.Errorf("%s left until next allowance", common.PrettyDuration(time.Until(idTimeout)))); err != nil { // nolint: gosimple
// User wasn't funded recently, create the funding transaction log.Warn("Failed to send funding error to client", "err", err)
amount := new(big.Int).Div(new(big.Int).Mul(big.NewInt(int64(*payoutFlag)), ether), big.NewInt(10)) return
amount = new(big.Int).Mul(amount, new(big.Int).Exp(big.NewInt(5), big.NewInt(int64(msg.Tier)), nil))
amount = new(big.Int).Div(amount, new(big.Int).Exp(big.NewInt(2), big.NewInt(int64(msg.Tier)), nil))
tx = types.NewTransaction(f.nonce+uint64(len(f.reqs)), address, amount, 21000, f.price, nil)
} else {
tokenInfo, ok := f.bep2eInfos[msg.Symbol]
if !ok {
f.lock.Unlock()
log.Warn("Failed to find symbol", "symbol", msg.Symbol)
continue
}
input, err := f.bep2eAbi.Pack("transfer", address, &tokenInfo.Amount)
if err != nil {
f.lock.Unlock()
log.Warn("Failed to pack transfer transaction", "err", err)
continue
}
tx = types.NewTransaction(f.nonce+uint64(len(f.reqs)), tokenInfo.Contract, nil, 420000, f.price, input)
} }
signed, err := f.keystore.SignTx(f.account, tx, f.config.ChainID) log.Info("too frequent funding(id)", "TimeLeft", common.PrettyDuration(time.Until(idTimeout)), "id", id)
continue
}
// check #3: minimum mainnet balance check, internal error will bypass the check to avoid blocking the faucet service
if f.clientMainnet != nil {
mainnetAddr := address
balanceMainnet, err := f.clientMainnet.BalanceAt(context.Background(), mainnetAddr, nil)
if err != nil {
log.Warn("check balance failed, call BalanceAt", "err", err)
} else if balanceMainnet == nil {
log.Warn("check balance failed, balanceMainnet is nil")
} else {
if balanceMainnet.Cmp(minMainnetBalance) < 0 {
f.lock.Unlock()
log.Warn("insufficient BNB on BSC mainnet", "address", mainnetAddr,
"balanceMainnet", balanceMainnet, "minMainnetBalance", minMainnetBalance)
// Send an error if failed to meet the minimum balance requirement
if err = sendError(wsconn, fmt.Errorf("insufficient BNB on BSC mainnet (require >=%sBNB)",
weiToEtherStringFx(minMainnetBalance, 3))); err != nil {
log.Warn("Failed to send mainnet minimum balance error to client", "err", err)
return
}
continue
}
}
}
log.Info("Faucet request valid", "url", msg.URL, "tier", msg.Tier, "user", username, "address", address, "ip", ip)
// now, it is ok to send tBNB or other tokens
var tx *types.Transaction
if msg.Symbol == "BNB" {
// User wasn't funded recently, create the funding transaction
amount := new(big.Int).Div(new(big.Int).Mul(big.NewInt(int64(*payoutFlag)), ether), big.NewInt(10))
amount = new(big.Int).Mul(amount, new(big.Int).Exp(big.NewInt(5), big.NewInt(int64(msg.Tier)), nil))
amount = new(big.Int).Div(amount, new(big.Int).Exp(big.NewInt(2), big.NewInt(int64(msg.Tier)), nil))
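// i.e. amount = (payout/10) * 2.5^tier BNB; with payout=1: 0.1, 0.25, 0.625 BNB for tiers 0, 1, 2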
tx = types.NewTransaction(f.nonce+uint64(len(f.reqs)), address, amount, 21000, f.price, nil)
} else {
tokenInfo, ok := f.bep2eInfos[msg.Symbol]
if !ok {
f.lock.Unlock()
log.Warn("Failed to find symbol", "symbol", msg.Symbol)
continue
}
input, err := f.bep2eAbi.Pack("transfer", address, &tokenInfo.Amount)
if err != nil { if err != nil {
f.lock.Unlock() f.lock.Unlock()
if err = sendError(wsconn, err); err != nil { log.Warn("Failed to pack transfer transaction", "err", err)
log.Warn("Failed to send transaction creation error to client", "err", err)
return
}
continue continue
} }
// Submit the transaction and mark as funded if successful tx = types.NewTransaction(f.nonce+uint64(len(f.reqs)), tokenInfo.Contract, nil, 420000, f.price, input)
if err := f.client.SendTransaction(context.Background(), signed); err != nil {
f.lock.Unlock()
if err = sendError(wsconn, err); err != nil {
log.Warn("Failed to send transaction transmission error to client", "err", err)
return
}
continue
}
f.reqs = append(f.reqs, &request{
Avatar: avatar,
Account: address,
Time: time.Now(),
Tx: signed,
})
timeout := time.Duration(*minutesFlag*int(math.Pow(3, float64(msg.Tier)))) * time.Minute
grace := timeout / 288 // 24h timeout => 5m grace
f.timeouts[id] = time.Now().Add(timeout - grace)
f.timeouts[ips[len(ips)-2]] = time.Now().Add(timeout - grace)
fund = true
} }
f.lock.Unlock() signed, err := f.keystore.SignTx(f.account, tx, f.config.ChainID)
if err != nil {
// Send an error if too frequent funding, othewise a success f.lock.Unlock()
if !fund { if err = sendError(wsconn, err); err != nil {
if err = sendError(wsconn, fmt.Errorf("%s left until next allowance", common.PrettyDuration(time.Until(timeout)))); err != nil { // nolint: gosimple log.Warn("Failed to send transaction creation error to client", "err", err)
log.Warn("Failed to send funding error to client", "err", err)
return return
} }
continue continue
} }
// Submit the transaction and mark as funded if successful
if err := f.client.SendTransaction(context.Background(), signed); err != nil {
f.lock.Unlock()
if err = sendError(wsconn, err); err != nil {
log.Warn("Failed to send transaction transmission error to client", "err", err)
return
}
continue
}
f.reqs = append(f.reqs, &request{
Avatar: avatar,
Account: address,
Time: time.Now(),
Tx: signed,
})
timeoutInt64 := time.Duration(*minutesFlag*int(math.Pow(3, float64(msg.Tier)))) * time.Minute
grace := timeoutInt64 / 288 // 24h timeout => 5m grace
f.timeouts[id] = time.Now().Add(timeoutInt64 - grace)
f.timeouts[ips[len(ips)-2]] = time.Now().Add(timeoutInt64 - grace)
f.lock.Unlock()
if err = sendSuccess(wsconn, fmt.Sprintf("Funding request accepted for %s into %s", username, address.Hex())); err != nil { if err = sendSuccess(wsconn, fmt.Sprintf("Funding request accepted for %s into %s", username, address.Hex())); err != nil {
log.Warn("Failed to send funding success to client", "err", err) log.Warn("Failed to send funding success to client", "err", err)
return return
@ -581,9 +649,52 @@ func (f *faucet) refresh(head *types.Header) error {
f.lock.Lock() f.lock.Lock()
f.head, f.balance = head, balance f.head, f.balance = head, balance
f.price, f.nonce = price, nonce f.price, f.nonce = price, nonce
if len(f.reqs) > 0 && f.reqs[0].Tx.Nonce() > f.nonce { if len(f.reqs) == 0 {
log.Debug("refresh len(f.reqs) == 0", "f.nonce", f.nonce)
f.lock.Unlock()
return nil
}
if f.reqs[0].Tx.Nonce() == f.nonce {
// if the next Tx failed to be included for a certain time(resendInterval), try to
// resend it with higher gasPrice, as it could be discarded in the network.
// Also resend extra following txs, as they could be discarded as well.
if time.Now().After(f.reqs[0].Time.Add(resendInterval)) {
for i, req := range f.reqs {
if i >= resendBatchSize {
break
}
prePrice := req.Tx.GasPrice()
// bump gas price 20% to replace the previous tx
newPrice := new(big.Int).Add(prePrice, new(big.Int).Div(prePrice, big.NewInt(5)))
if newPrice.Cmp(resendMaxGasPrice) >= 0 {
log.Info("resendMaxGasPrice reached", "newPrice", newPrice, "resendMaxGasPrice", resendMaxGasPrice, "nonce", req.Tx.Nonce())
break
}
newTx := types.NewTransaction(req.Tx.Nonce(), *req.Tx.To(), req.Tx.Value(), req.Tx.Gas(), newPrice, req.Tx.Data())
newSigned, err := f.keystore.SignTx(f.account, newTx, f.config.ChainID)
if err != nil {
log.Error("resend sign tx failed", "err", err)
}
log.Info("reqs[0] Tx has been stuck for a while, trigger resend",
"resendInterval", resendInterval, "resendTxSize", resendBatchSize,
"preHash", req.Tx.Hash().Hex(), "newHash", newSigned.Hash().Hex(),
"newPrice", newPrice, "nonce", req.Tx.Nonce(), "req.Tx.Gas()", req.Tx.Gas())
if err := f.client.SendTransaction(context.Background(), newSigned); err != nil {
log.Warn("resend tx failed", "err", err)
continue
}
req.Tx = newSigned
}
}
}
// it is abnormal that reqs[0] has larger nonce than next expected nonce.
// could be caused by reorg? reset it
if f.reqs[0].Tx.Nonce() > f.nonce {
log.Warn("reset due to nonce gap", "f.nonce", f.nonce, "f.reqs[0].Tx.Nonce()", f.reqs[0].Tx.Nonce())
f.reqs = f.reqs[:0] f.reqs = f.reqs[:0]
} }
// remove the reqs if they have smaller nonce, which means it is no longer valid,
// either has been accepted or replaced.
for len(f.reqs) > 0 && f.reqs[0].Tx.Nonce() < f.nonce { for len(f.reqs) > 0 && f.reqs[0].Tx.Nonce() < f.nonce {
f.reqs = f.reqs[1:] f.reqs = f.reqs[1:]
} }
@ -625,24 +736,27 @@ func (f *faucet) loop() {
balance := new(big.Int).Div(f.balance, ether) balance := new(big.Int).Div(f.balance, ether)
for _, conn := range f.conns { for _, conn := range f.conns {
if err := send(conn, map[string]interface{}{ go func(conn *wsConn) {
"funds": balance, if err := send(conn, map[string]interface{}{
"funded": f.nonce, "funds": balance,
"requests": f.reqs, "funded": f.nonce,
}, time.Second); err != nil { "requests": f.reqs,
log.Warn("Failed to send stats to client", "err", err) }, time.Second); err != nil {
conn.conn.Close() log.Warn("Failed to send stats to client", "err", err)
continue conn.conn.Close()
} return // Exit the goroutine if the first send fails
if err := send(conn, head, time.Second); err != nil { }
log.Warn("Failed to send header to client", "err", err)
conn.conn.Close() if err := send(conn, head, time.Second); err != nil {
} log.Warn("Failed to send header to client", "err", err)
conn.conn.Close()
}
}(conn)
} }
f.lock.RUnlock() f.lock.RUnlock()
} }
}() }()
// Wait for various events and assing to the appropriate background threads // Wait for various events and assign to the appropriate background threads
for { for {
select { select {
case head := <-heads: case head := <-heads:
@ -656,10 +770,12 @@ func (f *faucet) loop() {
// Pending requests updated, stream to clients // Pending requests updated, stream to clients
f.lock.RLock() f.lock.RLock()
for _, conn := range f.conns { for _, conn := range f.conns {
if err := send(conn, map[string]interface{}{"requests": f.reqs}, time.Second); err != nil { go func(conn *wsConn) {
log.Warn("Failed to send requests to client", "err", err) if err := send(conn, map[string]interface{}{"requests": f.reqs}, time.Second); err != nil {
conn.conn.Close() log.Warn("Failed to send requests to client", "err", err)
} conn.conn.Close()
}
}(conn)
} }
f.lock.RUnlock() f.lock.RUnlock()
} }

View File

@ -0,0 +1,44 @@
package main
import (
lru "github.com/hashicorp/golang-lru"
"golang.org/x/time/rate"
)
type IPRateLimiter struct {
ips *lru.Cache // LRU cache to store IP addresses and their associated rate limiters
r rate.Limit // the rate limit, e.g., 5 requests per second
b int // the burst size, e.g. allowing a burst of 10 requests at once; the rate
// limit only kicks in after the burst is exhausted
}
func NewIPRateLimiter(r rate.Limit, b int, size int) (*IPRateLimiter, error) {
cache, err := lru.New(size)
if err != nil {
return nil, err
}
i := &IPRateLimiter{
ips: cache,
r: r,
b: b,
}
return i, nil
}
func (i *IPRateLimiter) addIP(ip string) *rate.Limiter {
limiter := rate.NewLimiter(i.r, i.b)
i.ips.Add(ip, limiter)
return limiter
}
func (i *IPRateLimiter) GetLimiter(ip string) *rate.Limiter {
if limiter, exists := i.ips.Get(ip); exists {
return limiter.(*rate.Limiter)
}
return i.addIP(ip)
}
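Usage mirrors the faucet's apiHandler: one limiter per client IP, consulted on every request (a sketch; clientIP and w are placeholder handler variables):
limiter, err := NewIPRateLimiter(rate.Limit(1.0), 5, 1000) // 1 req/s per IP, burst of 5, 1000 cached IPs
if err != nil {
	log.Crit("failed to create IP rate limiter", "err", err)
}
// inside a request handler:
if !limiter.GetLimiter(clientIP).Allow() {
	http.Error(w, "Too many requests", http.StatusTooManyRequests)
	return
}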

View File

@ -48,7 +48,7 @@ func TestAttachWithHeaders(t *testing.T) {
// This is fixed in a follow-up PR. // This is fixed in a follow-up PR.
} }
// TestAttachWithHeaders tests that 'geth db --remotedb' with custom headers works, i.e // TestRemoteDbWithHeaders tests that 'geth db --remotedb' with custom headers works, i.e
// that custom headers are forwarded to the target. // that custom headers are forwarded to the target.
func TestRemoteDbWithHeaders(t *testing.T) { func TestRemoteDbWithHeaders(t *testing.T) {
t.Parallel() t.Parallel()

View File

@ -12,16 +12,16 @@ import (
"github.com/google/uuid" "github.com/google/uuid"
"github.com/logrusorgru/aurora" "github.com/logrusorgru/aurora"
"github.com/prysmaticlabs/prysm/v4/crypto/bls" "github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil" "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/io/prompt" "github.com/prysmaticlabs/prysm/v5/io/prompt"
validatorpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/validator-client" validatorpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/validator-client"
"github.com/prysmaticlabs/prysm/v4/validator/accounts" "github.com/prysmaticlabs/prysm/v5/validator/accounts"
"github.com/prysmaticlabs/prysm/v4/validator/accounts/iface" "github.com/prysmaticlabs/prysm/v5/validator/accounts/iface"
"github.com/prysmaticlabs/prysm/v4/validator/accounts/petnames" "github.com/prysmaticlabs/prysm/v5/validator/accounts/petnames"
"github.com/prysmaticlabs/prysm/v4/validator/accounts/wallet" "github.com/prysmaticlabs/prysm/v5/validator/accounts/wallet"
"github.com/prysmaticlabs/prysm/v4/validator/keymanager" "github.com/prysmaticlabs/prysm/v5/validator/keymanager"
"github.com/prysmaticlabs/prysm/v4/validator/keymanager/local" "github.com/prysmaticlabs/prysm/v5/validator/keymanager/local"
"github.com/urfave/cli/v2" "github.com/urfave/cli/v2"
keystorev4 "github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4" keystorev4 "github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4"

View File

@ -41,6 +41,7 @@ import (
"github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/internal/era" "github.com/ethereum/go-ethereum/internal/era"
"github.com/ethereum/go-ethereum/internal/flags" "github.com/ethereum/go-ethereum/internal/flags"
@ -61,8 +62,10 @@ var (
ArgsUsage: "<genesisPath>", ArgsUsage: "<genesisPath>",
Flags: flags.Merge([]cli.Flag{ Flags: flags.Merge([]cli.Flag{
utils.CachePreimagesFlag, utils.CachePreimagesFlag,
utils.OverrideCancun, utils.OverridePassedForkTime,
utils.OverrideBohr,
utils.OverrideVerkle, utils.OverrideVerkle,
utils.MultiDataBaseFlag,
}, utils.DatabaseFlags), }, utils.DatabaseFlags),
Description: ` Description: `
The init command initializes a new genesis block and definition for the network. The init command initializes a new genesis block and definition for the network.
@ -251,9 +254,13 @@ func initGenesis(ctx *cli.Context) error {
defer stack.Close() defer stack.Close()
var overrides core.ChainOverrides var overrides core.ChainOverrides
if ctx.IsSet(utils.OverrideCancun.Name) { if ctx.IsSet(utils.OverridePassedForkTime.Name) {
v := ctx.Uint64(utils.OverrideCancun.Name) v := ctx.Uint64(utils.OverridePassedForkTime.Name)
overrides.OverrideCancun = &v overrides.OverridePassedForkTime = &v
}
if ctx.IsSet(utils.OverrideBohr.Name) {
v := ctx.Uint64(utils.OverrideBohr.Name)
overrides.OverrideBohr = &v
} }
if ctx.IsSet(utils.OverrideVerkle.Name) { if ctx.IsSet(utils.OverrideVerkle.Name) {
v := ctx.Uint64(utils.OverrideVerkle.Name) v := ctx.Uint64(utils.OverrideVerkle.Name)
@ -267,15 +274,21 @@ func initGenesis(ctx *cli.Context) error {
defer chaindb.Close() defer chaindb.Close()
// if the trie data dir has been set, new trie db with a new state database // if the trie data dir has been set, new trie db with a new state database
if ctx.IsSet(utils.SeparateDBFlag.Name) { if ctx.IsSet(utils.MultiDataBaseFlag.Name) {
statediskdb, dbErr := stack.OpenDatabaseWithFreezer(name+"/state", 0, 0, "", "", false, false, false, false) statediskdb, dbErr := stack.OpenDatabaseWithFreezer(name+"/state", 0, 0, "", "", false, false, false, false)
if dbErr != nil { if dbErr != nil {
utils.Fatalf("Failed to open separate trie database: %v", dbErr) utils.Fatalf("Failed to open separate trie database: %v", dbErr)
} }
chaindb.SetStateStore(statediskdb) chaindb.SetStateStore(statediskdb)
blockdb, err := stack.OpenDatabaseWithFreezer(name+"/block", 0, 0, "", "", false, false, false, false)
if err != nil {
utils.Fatalf("Failed to open separate block database: %v", err)
}
chaindb.SetBlockStore(blockdb)
log.Warn("Multi-database is an experimental feature")
} }
triedb := utils.MakeTrieDatabase(ctx, chaindb, ctx.Bool(utils.CachePreimagesFlag.Name), false, genesis.IsVerkle()) triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, ctx.Bool(utils.CachePreimagesFlag.Name), false, genesis.IsVerkle())
defer triedb.Close() defer triedb.Close()
_, hash, err := core.SetupGenesisBlockWithOverride(chaindb, triedb, genesis, &overrides) _, hash, err := core.SetupGenesisBlockWithOverride(chaindb, triedb, genesis, &overrides)
@ -473,6 +486,13 @@ func dumpGenesis(ctx *cli.Context) error {
} }
continue continue
} }
// set the separate state & block database
if stack.CheckIfMultiDataBase() && err == nil {
stateDiskDb := utils.MakeStateDataBase(ctx, stack, true, false)
db.SetStateStore(stateDiskDb)
blockDb := utils.MakeBlockDatabase(ctx, stack, true, false)
db.SetBlockStore(blockDb)
}
genesis, err := core.ReadGenesis(db) genesis, err := core.ReadGenesis(db)
if err != nil { if err != nil {
utils.Fatalf("failed to read genesis: %s", err) utils.Fatalf("failed to read genesis: %s", err)
@ -500,10 +520,16 @@ func importChain(ctx *cli.Context) error {
// Start system runtime metrics collection // Start system runtime metrics collection
go metrics.CollectProcessMetrics(3 * time.Second) go metrics.CollectProcessMetrics(3 * time.Second)
stack, _ := makeConfigNode(ctx) stack, cfg := makeConfigNode(ctx)
defer stack.Close() defer stack.Close()
chain, db := utils.MakeChain(ctx, stack, false) backend, err := eth.New(stack, &cfg.Eth)
if err != nil {
return err
}
chain := backend.BlockChain()
db := backend.ChainDb()
defer db.Close() defer db.Close()
// Start periodically gathering memory profiles // Start periodically gathering memory profiles
@ -758,7 +784,7 @@ func parseDumpConfig(ctx *cli.Context, stack *node.Node) (*state.DumpConfig, eth
} else { } else {
// Use latest // Use latest
if scheme == rawdb.PathScheme { if scheme == rawdb.PathScheme {
triedb := triedb.NewDatabase(db, &triedb.Config{PathDB: pathdb.ReadOnly}) triedb := triedb.NewDatabase(db, &triedb.Config{PathDB: utils.PathDBConfigAddJournalFilePath(stack, pathdb.ReadOnly)})
defer triedb.Close() defer triedb.Close()
if stateRoot := triedb.Head(); stateRoot != (common.Hash{}) { if stateRoot := triedb.Head(); stateRoot != (common.Hash{}) {
header.Root = stateRoot header.Root = stateRoot
@ -808,7 +834,8 @@ func dump(ctx *cli.Context) error {
if err != nil { if err != nil {
return err return err
} }
triedb := utils.MakeTrieDatabase(ctx, db, true, true, false) // always enable preimage lookup defer db.Close()
triedb := utils.MakeTrieDatabase(ctx, stack, db, true, true, false) // always enable preimage lookup
defer triedb.Close() defer triedb.Close()
state, err := state.New(root, state.NewDatabaseWithNodeDB(db, triedb), nil) state, err := state.New(root, state.NewDatabaseWithNodeDB(db, triedb), nil)
@ -828,7 +855,7 @@ func dumpAllRootHashInPath(ctx *cli.Context) error {
defer stack.Close() defer stack.Close()
db := utils.MakeChainDatabase(ctx, stack, true, false) db := utils.MakeChainDatabase(ctx, stack, true, false)
defer db.Close() defer db.Close()
triedb := triedb.NewDatabase(db, &triedb.Config{PathDB: pathdb.ReadOnly}) triedb := triedb.NewDatabase(db, &triedb.Config{PathDB: utils.PathDBConfigAddJournalFilePath(stack, pathdb.ReadOnly)})
defer triedb.Close() defer triedb.Close()
scheme, err := rawdb.ParseStateScheme(ctx.String(utils.StateSchemeFlag.Name), db) scheme, err := rawdb.ParseStateScheme(ctx.String(utils.StateSchemeFlag.Name), db)

View File

@ -33,6 +33,7 @@ import (
"github.com/ethereum/go-ethereum/accounts/keystore" "github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/accounts/scwallet" "github.com/ethereum/go-ethereum/accounts/scwallet"
"github.com/ethereum/go-ethereum/accounts/usbwallet" "github.com/ethereum/go-ethereum/accounts/usbwallet"
"github.com/ethereum/go-ethereum/beacon/fakebeacon"
"github.com/ethereum/go-ethereum/cmd/utils" "github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
@ -92,10 +93,11 @@ type ethstatsConfig struct {
} }
type gethConfig struct { type gethConfig struct {
Eth ethconfig.Config Eth ethconfig.Config
Node node.Config Node node.Config
Ethstats ethstatsConfig Ethstats ethstatsConfig
Metrics metrics.Config Metrics metrics.Config
FakeBeacon fakebeacon.Config
} }
func loadConfig(file string, cfg *gethConfig) error { func loadConfig(file string, cfg *gethConfig) error {
@ -185,22 +187,18 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) {
params.RialtoGenesisHash = common.HexToHash(v) params.RialtoGenesisHash = common.HexToHash(v)
} }
if ctx.IsSet(utils.OverrideCancun.Name) { if ctx.IsSet(utils.OverridePassedForkTime.Name) {
v := ctx.Uint64(utils.OverrideCancun.Name) v := ctx.Uint64(utils.OverridePassedForkTime.Name)
cfg.Eth.OverrideCancun = &v cfg.Eth.OverridePassedForkTime = &v
}
if ctx.IsSet(utils.OverrideBohr.Name) {
v := ctx.Uint64(utils.OverrideBohr.Name)
cfg.Eth.OverrideBohr = &v
} }
if ctx.IsSet(utils.OverrideVerkle.Name) { if ctx.IsSet(utils.OverrideVerkle.Name) {
v := ctx.Uint64(utils.OverrideVerkle.Name) v := ctx.Uint64(utils.OverrideVerkle.Name)
cfg.Eth.OverrideVerkle = &v cfg.Eth.OverrideVerkle = &v
} }
if ctx.IsSet(utils.OverrideFeynman.Name) {
v := ctx.Uint64(utils.OverrideFeynman.Name)
cfg.Eth.OverrideFeynman = &v
}
if ctx.IsSet(utils.OverrideFeynmanFix.Name) {
v := ctx.Uint64(utils.OverrideFeynmanFix.Name)
cfg.Eth.OverrideFeynmanFix = &v
}
if ctx.IsSet(utils.OverrideFullImmutabilityThreshold.Name) { if ctx.IsSet(utils.OverrideFullImmutabilityThreshold.Name) {
params.FullImmutabilityThreshold = ctx.Uint64(utils.OverrideFullImmutabilityThreshold.Name) params.FullImmutabilityThreshold = ctx.Uint64(utils.OverrideFullImmutabilityThreshold.Name)
downloader.FullMaxForkAncestry = ctx.Uint64(utils.OverrideFullImmutabilityThreshold.Name) downloader.FullMaxForkAncestry = ctx.Uint64(utils.OverrideFullImmutabilityThreshold.Name)
@ -211,9 +209,13 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) {
if ctx.IsSet(utils.OverrideDefaultExtraReserveForBlobRequests.Name) { if ctx.IsSet(utils.OverrideDefaultExtraReserveForBlobRequests.Name) {
params.DefaultExtraReserveForBlobRequests = ctx.Uint64(utils.OverrideDefaultExtraReserveForBlobRequests.Name) params.DefaultExtraReserveForBlobRequests = ctx.Uint64(utils.OverrideDefaultExtraReserveForBlobRequests.Name)
} }
if ctx.IsSet(utils.SeparateDBFlag.Name) && !stack.IsSeparatedDB() { if ctx.IsSet(utils.OverrideBreatheBlockInterval.Name) {
utils.Fatalf("Failed to locate separate database subdirectory when separatedb parameter has been set") params.BreatheBlockInterval = ctx.Uint64(utils.OverrideBreatheBlockInterval.Name)
} }
if ctx.IsSet(utils.OverrideFixedTurnLength.Name) {
params.FixedTurnLength = ctx.Uint64(utils.OverrideFixedTurnLength.Name)
}
backend, eth := utils.RegisterEthService(stack, &cfg.Eth) backend, eth := utils.RegisterEthService(stack, &cfg.Eth)
// Create gauge with geth system and build information // Create gauge with geth system and build information
@ -242,11 +244,22 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) {
utils.RegisterEthStatsService(stack, backend, cfg.Ethstats.URL) utils.RegisterEthStatsService(stack, backend, cfg.Ethstats.URL)
} }
if ctx.IsSet(utils.FakeBeaconAddrFlag.Name) {
cfg.FakeBeacon.Addr = ctx.String(utils.FakeBeaconAddrFlag.Name)
}
if ctx.IsSet(utils.FakeBeaconPortFlag.Name) {
cfg.FakeBeacon.Port = ctx.Int(utils.FakeBeaconPortFlag.Name)
}
if cfg.FakeBeacon.Enable || ctx.IsSet(utils.FakeBeaconEnabledFlag.Name) {
go fakebeacon.NewService(&cfg.FakeBeacon, backend).Run()
}
git, _ := version.VCS() git, _ := version.VCS()
utils.SetupMetrics(ctx, utils.SetupMetrics(ctx,
utils.EnableBuildInfo(git.Commit, git.Date), utils.EnableBuildInfo(git.Commit, git.Date),
utils.EnableMinerInfo(ctx, &cfg.Eth.Miner), utils.EnableMinerInfo(ctx, &cfg.Eth.Miner),
utils.EnableNodeInfo(&cfg.Eth.TxPool, stack.Server().NodeInfo()), utils.EnableNodeInfo(&cfg.Eth.TxPool, stack.Server().NodeInfo()),
utils.EnableNodeTrack(ctx, &cfg.Eth, stack),
) )
return stack, backend return stack, backend
} }
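Two details in the wiring above are easy to miss. The fake-beacon service is opt-in: it only starts when the config or the `--fake-beacon` flag enables it, e.g. `geth --fake-beacon --fake-beacon.addr 127.0.0.1 --fake-beacon.port 8686` (address and port values here are illustrative; the defaults come from `fakebeacon.DefaultAddr`/`DefaultPort` defined with the flags further down). And every override follows the same guard pattern, so the bundled chain configuration is only touched when the operator sets a flag explicitly. A minimal, self-contained sketch of that pattern, using `urfave/cli/v2` and a hypothetical `override.example` flag:

```go
package main

import (
	"fmt"
	"os"

	cli "github.com/urfave/cli/v2"
)

func main() {
	var override *uint64 // nil means "keep the bundled setting"
	app := &cli.App{
		Flags: []cli.Flag{
			// hypothetical flag, mirroring override.passedforktime above
			&cli.Uint64Flag{Name: "override.example"},
		},
		Action: func(ctx *cli.Context) error {
			if ctx.IsSet("override.example") { // only act on explicit operator input
				v := ctx.Uint64("override.example")
				override = &v
			}
			fmt.Println("override active:", override != nil)
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```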


@@ -106,12 +106,12 @@ Remove blockchain and state databases`,
 	dbInspectTrieCmd = &cli.Command{
 		Action:    inspectTrie,
 		Name:      "inspect-trie",
-		ArgsUsage: "<blocknum> <jobnum>",
+		ArgsUsage: "<blocknum> <jobnum> <topn>",
 		Flags: []cli.Flag{
 			utils.DataDirFlag,
 			utils.SyncModeFlag,
 		},
-		Usage:       "Inspect the MPT tree of the account and contract.",
+		Usage:       "Inspect the MPT tree of the account and contract. 'blocknum' can be latest/snapshot/number. 'topn' means output the top N storage tries info ranked by the total number of TrieNodes",
 		Description: `This command iterates the entire WorldState.`,
 	}
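With the extended argument list, a typical invocation would look something like `geth db inspect-trie latest 1000 10`: inspect the world state at the latest block with 1000 concurrent jobs and print the 10 largest storage tries (the subcommand path assumes inspect-trie is registered under the `db` command, as its placement here suggests). When the optional arguments are omitted, the parsing below falls back to jobnum=1000 and topN=10.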
 	dbCheckStateContentCmd = &cli.Command{
@@ -386,6 +386,7 @@ func inspectTrie(ctx *cli.Context) error {
 		blockNumber  uint64
 		trieRootHash common.Hash
 		jobnum       uint64
+		topN         uint64
 	)

 	stack, _ := makeConfigNode(ctx)
@@ -405,24 +406,37 @@ func inspectTrie(ctx *cli.Context) error {
 			var err error
 			blockNumber, err = strconv.ParseUint(ctx.Args().Get(0), 10, 64)
 			if err != nil {
-				return fmt.Errorf("failed to parse blocknum, Args[0]: %v, err: %v", ctx.Args().Get(0), err)
+				return fmt.Errorf("failed to parse blocknum, Args[0]: %v, err: %v", ctx.Args().Get(0), err)
 			}
 		}

 		if ctx.NArg() == 1 {
 			jobnum = 1000
+			topN = 10
+		} else if ctx.NArg() == 2 {
+			var err error
+			jobnum, err = strconv.ParseUint(ctx.Args().Get(1), 10, 64)
+			if err != nil {
+				return fmt.Errorf("failed to parse jobnum, Args[1]: %v, err: %v", ctx.Args().Get(1), err)
+			}
+			topN = 10
 		} else {
 			var err error
 			jobnum, err = strconv.ParseUint(ctx.Args().Get(1), 10, 64)
 			if err != nil {
-				return fmt.Errorf("failed to parse jobnum, Args[1]: %v, err: %v", ctx.Args().Get(1), err)
+				return fmt.Errorf("failed to parse jobnum, Args[1]: %v, err: %v", ctx.Args().Get(1), err)
+			}
+
+			topN, err = strconv.ParseUint(ctx.Args().Get(2), 10, 64)
+			if err != nil {
+				return fmt.Errorf("failed to parse topn, Args[2]: %v, err: %v", ctx.Args().Get(2), err)
 			}
 		}

 		if blockNumber != math.MaxUint64 {
 			headerBlockHash = rawdb.ReadCanonicalHash(db, blockNumber)
 			if headerBlockHash == (common.Hash{}) {
-				return errors.New("ReadHeadBlockHash empry hash")
+				return errors.New("ReadHeadBlockHash empty hash")
 			}
 			blockHeader := rawdb.ReadHeader(db, headerBlockHash, blockNumber)
 			trieRootHash = blockHeader.Root
@@ -436,7 +450,8 @@ func inspectTrie(ctx *cli.Context) error {
 		var config *triedb.Config
 		if dbScheme == rawdb.PathScheme {
 			config = &triedb.Config{
-				PathDB: pathdb.ReadOnly,
+				PathDB: utils.PathDBConfigAddJournalFilePath(stack, pathdb.ReadOnly),
+				Cache:  0,
 			}
 		} else if dbScheme == rawdb.HashScheme {
 			config = triedb.HashDefaults
@@ -448,7 +463,7 @@ func inspectTrie(ctx *cli.Context) error {
 			fmt.Printf("fail to new trie tree, err: %v, rootHash: %v\n", err, trieRootHash.String())
 			return err
 		}
-		theInspect, err := trie.NewInspector(theTrie, triedb, trieRootHash, blockNumber, jobnum)
+		theInspect, err := trie.NewInspector(theTrie, triedb, trieRootHash, blockNumber, jobnum, int(topN))
 		if err != nil {
 			return err
 		}
@@ -493,7 +508,7 @@ func ancientInspect(ctx *cli.Context) error {
 	stack, _ := makeConfigNode(ctx)
 	defer stack.Close()

-	db := utils.MakeChainDatabase(ctx, stack, true, true)
+	db := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer db.Close()
 	return rawdb.AncientInspect(db)
 }
@@ -519,7 +534,7 @@ func checkStateContent(ctx *cli.Context) error {
 	db := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer db.Close()
 	var (
-		it        = rawdb.NewKeyLengthIterator(db.NewIterator(prefix, start), 32)
+		it        ethdb.Iterator
 		hasher    = crypto.NewKeccakState()
 		got       = make([]byte, 32)
 		errs      int
@@ -527,6 +542,11 @@ func checkStateContent(ctx *cli.Context) error {
 		startTime = time.Now()
 		lastLog   = time.Now()
 	)
+	if stack.CheckIfMultiDataBase() {
+		it = rawdb.NewKeyLengthIterator(db.StateStore().NewIterator(prefix, start), 32)
+	} else {
+		it = rawdb.NewKeyLengthIterator(db.NewIterator(prefix, start), 32)
+	}
 	for it.Next() {
 		count++
 		k := it.Key()
@@ -573,9 +593,11 @@ func dbStats(ctx *cli.Context) error {
 	defer db.Close()

 	showLeveldbStats(db)
-	if db.StateStore() != nil {
+	if stack.CheckIfMultiDataBase() {
 		fmt.Println("show stats of state store")
 		showLeveldbStats(db.StateStore())
+		fmt.Println("show stats of block store")
+		showLeveldbStats(db.BlockStore())
 	}

 	return nil
@@ -591,10 +613,11 @@ func dbCompact(ctx *cli.Context) error {
 	log.Info("Stats before compaction")
 	showLeveldbStats(db)

-	statediskdb := db.StateStore()
-	if statediskdb != nil {
+	if stack.CheckIfMultiDataBase() {
 		fmt.Println("show stats of state store")
-		showLeveldbStats(statediskdb)
+		showLeveldbStats(db.StateStore())
+		fmt.Println("show stats of block store")
+		showLeveldbStats(db.BlockStore())
 	}

 	log.Info("Triggering compaction")
@@ -603,8 +626,12 @@ func dbCompact(ctx *cli.Context) error {
 		return err
 	}

-	if statediskdb != nil {
-		if err := statediskdb.Compact(nil, nil); err != nil {
+	if stack.CheckIfMultiDataBase() {
+		if err := db.StateStore().Compact(nil, nil); err != nil {
+			log.Error("Compact err", "error", err)
+			return err
+		}
+		if err := db.BlockStore().Compact(nil, nil); err != nil {
 			log.Error("Compact err", "error", err)
 			return err
 		}
@@ -612,9 +639,11 @@ func dbCompact(ctx *cli.Context) error {
 	log.Info("Stats after compaction")
 	showLeveldbStats(db)

-	if statediskdb != nil {
+	if stack.CheckIfMultiDataBase() {
 		fmt.Println("show stats of state store after compaction")
-		showLeveldbStats(statediskdb)
+		showLeveldbStats(db.StateStore())
+		fmt.Println("show stats of block store after compaction")
+		showLeveldbStats(db.BlockStore())
 	}
 	return nil
 }
@@ -635,18 +664,18 @@ func dbGet(ctx *cli.Context) error {
 		log.Info("Could not decode the key", "error", err)
 		return err
 	}
-	statediskdb := db.StateStore()
-	data, err := db.Get(key)
-	if err != nil {
-		// if separate trie db exist, try to get it from separate db
-		if statediskdb != nil {
-			statedata, dberr := statediskdb.Get(key)
-			if dberr == nil {
-				fmt.Printf("key %#x: %#x\n", key, statedata)
-				return nil
-			}
-		}
+	opDb := db
+	if stack.CheckIfMultiDataBase() {
+		keyType := rawdb.DataTypeByKey(key)
+		if keyType == rawdb.StateDataType {
+			opDb = db.StateStore()
+		} else if keyType == rawdb.BlockDataType {
+			opDb = db.BlockStore()
+		}
+	}
+
+	data, err := opDb.Get(key)
+	if err != nil {
 		log.Info("Get operation failed", "key", fmt.Sprintf("%#x", key), "error", err)
 		return err
 	}
@@ -809,11 +838,21 @@ func dbDelete(ctx *cli.Context) error {
 		log.Info("Could not decode the key", "error", err)
 		return err
 	}
-	data, err := db.Get(key)
+	opDb := db
+	if stack.CheckIfMultiDataBase() {
+		keyType := rawdb.DataTypeByKey(key)
+		if keyType == rawdb.StateDataType {
+			opDb = db.StateStore()
+		} else if keyType == rawdb.BlockDataType {
+			opDb = db.BlockStore()
+		}
+	}
+
+	data, err := opDb.Get(key)
 	if err == nil {
 		fmt.Printf("Previous value: %#x\n", data)
 	}
-	if err = db.Delete(key); err != nil {
+	if err = opDb.Delete(key); err != nil {
 		log.Info("Delete operation returned an error", "key", fmt.Sprintf("%#x", key), "error", err)
 		return err
 	}
@@ -923,11 +962,22 @@ func dbPut(ctx *cli.Context) error {
 		log.Info("Could not decode the value", "error", err)
 		return err
 	}
-	data, err = db.Get(key)
+
+	opDb := db
+	if stack.CheckIfMultiDataBase() {
+		keyType := rawdb.DataTypeByKey(key)
+		if keyType == rawdb.StateDataType {
+			opDb = db.StateStore()
+		} else if keyType == rawdb.BlockDataType {
+			opDb = db.BlockStore()
+		}
+	}
+
+	data, err = opDb.Get(key)
 	if err == nil {
 		fmt.Printf("Previous value: %#x\n", data)
 	}
-	return db.Put(key, value)
+	return opDb.Put(key, value)
 }
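The same three-way routing appears verbatim in dbGet, dbDelete, and dbPut. A helper capturing it might look like the sketch below; it is not part of this diff, and it assumes StateStore()/BlockStore() return an ethdb.Database, as the call sites above suggest:

```go
import (
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/node"
)

// selectStore is a hypothetical helper illustrating the routing logic repeated
// inline above: in multi-database mode, state keys go to the state store and
// block keys to the block store; everything else stays in the main chaindata db.
func selectStore(stack *node.Node, db ethdb.Database, key []byte) ethdb.Database {
	if !stack.CheckIfMultiDataBase() {
		return db
	}
	switch rawdb.DataTypeByKey(key) {
	case rawdb.StateDataType:
		return db.StateStore()
	case rawdb.BlockDataType:
		return db.BlockStore()
	default:
		return db
	}
}
```

Each command body would then reduce to `opDb := selectStore(stack, db, key)`.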
 // dbDumpTrie shows the key-value slots of a given storage trie
@@ -940,8 +990,7 @@ func dbDumpTrie(ctx *cli.Context) error {
 	db := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer db.Close()

-	triedb := utils.MakeTrieDatabase(ctx, db, false, true, false)
+	triedb := utils.MakeTrieDatabase(ctx, stack, db, false, true, false)
 	defer triedb.Close()
 	var (
@@ -1019,7 +1068,7 @@ func freezerInspect(ctx *cli.Context) error {
 	stack, _ := makeConfigNode(ctx)
 	ancient := stack.ResolveAncient("chaindata", ctx.String(utils.AncientFlag.Name))
 	stack.Close()
-	return rawdb.InspectFreezerTable(ancient, freezer, table, start, end)
+	return rawdb.InspectFreezerTable(ancient, freezer, table, start, end, stack.CheckIfMultiDataBase())
 }

 func importLDBdata(ctx *cli.Context) error {
@@ -1159,7 +1208,7 @@ func showMetaData(ctx *cli.Context) error {
 	defer stack.Close()
 	db := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer db.Close()
-	ancients, err := db.Ancients()
+	ancients, err := db.BlockStore().Ancients()
 	if err != nil {
 		fmt.Fprintf(os.Stderr, "Error accessing ancients: %v", err)
 	}
@@ -1206,7 +1255,7 @@ func hbss2pbss(ctx *cli.Context) error {
 	defer stack.Close()

 	db := utils.MakeChainDatabase(ctx, stack, false, false)
-	db.Sync()
+	db.BlockStore().Sync()
 	stateDiskDb := db.StateStore()
 	defer db.Close()


@@ -25,6 +25,8 @@ import (
 	"strings"
 	"time"

+	"github.com/ethereum/go-ethereum/params"
+
 	"github.com/ethereum/go-ethereum/accounts"
 	"github.com/ethereum/go-ethereum/accounts/keystore"
 	"github.com/ethereum/go-ethereum/cmd/utils"
@@ -70,13 +72,14 @@ var (
 		utils.USBFlag,
 		utils.SmartCardDaemonPathFlag,
 		utils.RialtoHash,
-		utils.OverrideCancun,
+		utils.OverridePassedForkTime,
+		utils.OverrideBohr,
 		utils.OverrideVerkle,
-		utils.OverrideFeynman,
-		utils.OverrideFeynmanFix,
 		utils.OverrideFullImmutabilityThreshold,
 		utils.OverrideMinBlocksForBlobRequests,
 		utils.OverrideDefaultExtraReserveForBlobRequests,
+		utils.OverrideBreatheBlockInterval,
+		utils.OverrideFixedTurnLength,
 		utils.EnablePersonal,
 		utils.TxPoolLocalsFlag,
 		utils.TxPoolNoLocalsFlag,
@@ -103,6 +106,7 @@ var (
 		utils.TransactionHistoryFlag,
 		utils.StateHistoryFlag,
 		utils.PathDBSyncFlag,
+		utils.JournalFileFlag,
 		utils.LightServeFlag,   // deprecated
 		utils.LightIngressFlag, // deprecated
 		utils.LightEgressFlag,  // deprecated
@@ -123,6 +127,7 @@ var (
 		utils.CacheSnapshotFlag,
 		// utils.CacheNoPrefetchFlag,
 		utils.CachePreimagesFlag,
+		utils.MultiDataBaseFlag,
 		utils.PersistDiffFlag,
 		utils.DiffBlockFlag,
 		utils.PruneAncientDataFlag,
@@ -144,6 +149,7 @@ var (
 		// utils.MinerNewPayloadTimeout,
 		utils.NATFlag,
 		utils.NoDiscoverFlag,
+		utils.PeerFilterPatternsFlag,
 		utils.DiscoveryV4Flag,
 		utils.DiscoveryV5Flag,
 		utils.InstanceFlag,
@@ -226,6 +232,12 @@ var (
 		utils.MetricsInfluxDBBucketFlag,
 		utils.MetricsInfluxDBOrganizationFlag,
 	}

+	fakeBeaconFlags = []cli.Flag{
+		utils.FakeBeaconEnabledFlag,
+		utils.FakeBeaconAddrFlag,
+		utils.FakeBeaconPortFlag,
+	}
 )

 var app = flags.NewApp("the go-ethereum command line interface")
@@ -280,6 +292,7 @@ func init() {
 		consoleFlags,
 		debug.Flags,
 		metricsFlags,
+		fakeBeaconFlags,
 	)
 	flags.AutoEnvVars(app.Flags, "GETH")
@@ -331,9 +344,6 @@ func prepare(ctx *cli.Context) {
 5. Networking is disabled; there is no listen-address, the maximum number of peers is set
 to 0, and discovery is disabled.
 `)
-	case !ctx.IsSet(utils.NetworkIdFlag.Name):
-		log.Info("Starting Geth on BSC mainnet...")
 	}
 	// If we're a full node on mainnet without --cache specified, bump default cache allowance
 	if !ctx.IsSet(utils.CacheFlag.Name) && !ctx.IsSet(utils.NetworkIdFlag.Name) {
@@ -368,8 +378,6 @@ func geth(ctx *cli.Context) error {
 // it unlocks any requested accounts, and starts the RPC/IPC interfaces and the
 // miner.
 func startNode(ctx *cli.Context, stack *node.Node, backend ethapi.Backend, isConsole bool) {
-	debug.Memsize.Add("node", stack)

 	// Start up the node itself
 	utils.StartNode(ctx, stack, isConsole)
@@ -442,12 +450,17 @@ func startNode(ctx *cli.Context, stack *node.Node, backend ethapi.Backend, isCon
 	}

 	// Start auxiliary services if enabled
+	ethBackend, ok := backend.(*eth.EthAPIBackend)
+	gasCeil := ethBackend.Miner().GasCeil()
+	if gasCeil > params.SystemTxsGas {
+		ethBackend.TxPool().SetMaxGas(gasCeil - params.SystemTxsGas)
+	}
 	if ctx.Bool(utils.MiningEnabledFlag.Name) {
 		// Mining only makes sense if a full Ethereum node is running
 		if ctx.String(utils.SyncModeFlag.Name) == "light" {
 			utils.Fatalf("Light clients do not support mining")
 		}
-		ethBackend, ok := backend.(*eth.EthAPIBackend)
 		if !ok {
 			utils.Fatalf("Ethereum service not running")
 		}
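Note that the transaction-pool cap is now derived from the miner's gas ceiling before the mining check runs: a pool-bound transaction may declare at most GasCeil minus `params.SystemTxsGas` gas, which keeps headroom in every block for the consensus system transactions (with a 70M ceiling, for instance, the cap is 70M less that reserve).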


@@ -75,7 +75,7 @@ func NewLevelDBDatabaseWithFreezer(file string, cache int, handles int, ancient
 	if err != nil {
 		return nil, err
 	}
-	frdb, err := rawdb.NewDatabaseWithFreezer(kvdb, ancient, namespace, readonly, disableFreeze, isLastOffset, pruneAncientData)
+	frdb, err := rawdb.NewDatabaseWithFreezer(kvdb, ancient, namespace, readonly, disableFreeze, isLastOffset, pruneAncientData, false)
 	if err != nil {
 		kvdb.Close()
 		return nil, err
@@ -155,6 +155,12 @@ func BlockchainCreator(t *testing.T, chaindbPath, AncientPath string, blockRemai
 	triedb := triedb.NewDatabase(db, nil)
 	defer triedb.Close()

+	if err = db.SetupFreezerEnv(&ethdb.FreezerEnv{
+		ChainCfg:         gspec.Config,
+		BlobExtraReserve: params.DefaultExtraReserveForBlobRequests,
+	}); err != nil {
+		t.Fatalf("Failed to create chain: %v", err)
+	}
 	genesis := gspec.MustCommit(db, triedb)

 	// Initialize a fresh chain with only a genesis block
 	blockchain, err := core.NewBlockChain(db, config, gspec, nil, engine, vm.Config{}, nil, nil)


@@ -43,9 +43,11 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/metrics"
 	"github.com/ethereum/go-ethereum/node"
+	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/trie"
 	"github.com/ethereum/go-ethereum/triedb"
+	"github.com/ethereum/go-ethereum/triedb/pathdb"
 	cli "github.com/urfave/cli/v2"
 )
@@ -245,7 +247,16 @@ func accessDb(ctx *cli.Context, stack *node.Node) (ethdb.Database, error) {
 		NoBuild:    true,
 		AsyncBuild: false,
 	}
-	snaptree, err := snapshot.New(snapconfig, chaindb, triedb.NewDatabase(chaindb, nil), headBlock.Root(), TriesInMemory, false)
+	dbScheme := rawdb.ReadStateScheme(chaindb)
+	var config *triedb.Config
+	if dbScheme == rawdb.PathScheme {
+		config = &triedb.Config{
+			PathDB: utils.PathDBConfigAddJournalFilePath(stack, pathdb.ReadOnly),
+		}
+	} else if dbScheme == rawdb.HashScheme {
+		config = triedb.HashDefaults
+	}
+	snaptree, err := snapshot.New(snapconfig, chaindb, triedb.NewDatabase(chaindb, config), headBlock.Root(), TriesInMemory, false)
 	if err != nil {
 		log.Error("snaptree error", "err", err)
 		return nil, err // The relevant snapshot(s) might not exist
@@ -333,6 +344,9 @@ func pruneBlock(ctx *cli.Context) error {
 	stack, config = makeConfigNode(ctx)
 	defer stack.Close()
 	blockAmountReserved = ctx.Uint64(utils.BlockAmountReserved.Name)
+	if blockAmountReserved < params.FullImmutabilityThreshold {
+		return fmt.Errorf("block-amount-reserved must be greater than or equal to %d", params.FullImmutabilityThreshold)
+	}
 	chaindb, err = accessDb(ctx, stack)
 	if err != nil {
 		return err
@@ -355,7 +369,15 @@ func pruneBlock(ctx *cli.Context) error {
 	if !ctx.IsSet(utils.AncientFlag.Name) {
 		return errors.New("datadir.ancient must be set")
 	} else {
-		oldAncientPath = ctx.String(utils.AncientFlag.Name)
+		if stack.CheckIfMultiDataBase() {
+			ancientPath := ctx.String(utils.AncientFlag.Name)
+			index := strings.LastIndex(ancientPath, "/ancient/chain")
+			if index != -1 {
+				oldAncientPath = ancientPath[:index] + "/block/ancient/chain"
+			}
+		} else {
+			oldAncientPath = ctx.String(utils.AncientFlag.Name)
+		}
 		if !filepath.IsAbs(oldAncientPath) {
 			// force absolute paths, which often fail due to the splicing of relative paths
 			return errors.New("datadir.ancient not abs path")
@@ -401,7 +423,7 @@ func pruneBlock(ctx *cli.Context) error {
 	}

 	if _, err := os.Stat(newAncientPath); err == nil {
-		// No file lock found for old ancientDB but new ancientDB exsisted, indicating the geth was interrupted
+		// No file lock found for old ancientDB but new ancientDB existed, indicating the geth was interrupted
 		// after old ancientDB removal, this happened after backup successfully, so just rename the new ancientDB
 		if err := blockpruner.AncientDbReplacer(); err != nil {
 			log.Error("Failed to rename new ancient directory")
@@ -524,7 +546,7 @@ func verifyState(ctx *cli.Context) error {
 		log.Error("Failed to load head block")
 		return errors.New("no head block")
 	}
-	triedb := utils.MakeTrieDatabase(ctx, chaindb, false, true, false)
+	triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, false, true, false)
 	defer triedb.Close()

 	snapConfig := snapshot.Config{
@@ -579,7 +601,7 @@ func traverseState(ctx *cli.Context) error {
 	chaindb := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer chaindb.Close()

-	triedb := utils.MakeTrieDatabase(ctx, chaindb, false, true, false)
+	triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, false, true, false)
 	defer triedb.Close()

 	headBlock := rawdb.ReadHeadBlock(chaindb)
@@ -688,7 +710,7 @@ func traverseRawState(ctx *cli.Context) error {
 	chaindb := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer chaindb.Close()

-	triedb := utils.MakeTrieDatabase(ctx, chaindb, false, true, false)
+	triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, false, true, false)
 	defer triedb.Close()

 	headBlock := rawdb.ReadHeadBlock(chaindb)
@@ -852,7 +874,8 @@ func dumpState(ctx *cli.Context) error {
 	if err != nil {
 		return err
 	}
-	triedb := utils.MakeTrieDatabase(ctx, db, false, true, false)
+	defer db.Close()
+	triedb := utils.MakeTrieDatabase(ctx, stack, db, false, true, false)
 	defer triedb.Close()

 	snapConfig := snapshot.Config{
@@ -935,7 +958,7 @@ func snapshotExportPreimages(ctx *cli.Context) error {
 	chaindb := utils.MakeChainDatabase(ctx, stack, true, false)
 	defer chaindb.Close()

-	triedb := utils.MakeTrieDatabase(ctx, chaindb, false, true, false)
+	triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, false, true, false)
 	defer triedb.Close()

 	var root common.Hash
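The ancient-path rewrite in pruneBlock above simply re-roots the trailing `/ancient/chain` segment under `block/` when the node runs in multi-database mode. A tiny runnable illustration (the datadir below is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// mirrors the rewrite in pruneBlock: ".../ancient/chain" -> ".../block/ancient/chain"
	ancientPath := "/data/bsc/geth/chaindata/ancient/chain"
	if index := strings.LastIndex(ancientPath, "/ancient/chain"); index != -1 {
		fmt.Println(ancientPath[:index] + "/block/ancient/chain")
		// Output: /data/bsc/geth/chaindata/block/ancient/chain
	}
}
```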


@@ -24,4 +24,24 @@ testnet validators version
 ### 2. Get Transaction Count
 ```bash
 node gettxcount.js --rpc ${url} --startNum ${start} --endNum ${end} --miner ${miner} (optional)
 ```
+
+### 3. Get Performance
+```bash
+node get_perf.js --rpc ${url} --startNum ${start} --endNum ${end}
+```
+The output looks as follows:
+```bash
+Get the performance between [ 19470 , 19670 )
+txCountPerBlock = 3142.81 txCountTotal = 628562 BlockCount = 200 avgBlockTime = 3.005 inturnBlocksRatio = 0.975 justifiedBlocksRatio = 0.98
+txCountPerSecond = 1045.8602329450914 avgGasUsedPerBlock = 250.02062627 avgGasUsedPerSecond = 83.20153952412646
+```
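The derived numbers are internally consistent: txCountPerSecond is txCountTotal divided by the elapsed wall time (avgBlockTime × BlockCount = 3.005 × 200 = 601 s), i.e. 628562 / 601 ≈ 1045.86, and avgGasUsedPerSecond is likewise the total gas used (in Mgas) spread over those 601 seconds: 250.02 Mgas/block × 200 / 601 ≈ 83.20.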
+
+### 4. Get validators slash count
+```bash
+# use the latest block
+node getslashcount.js --Rpc ${ArchiveRpc}
+# use a block number
+node getslashcount.js --Rpc ${ArchiveRpc} --Num ${blockNum}
+```


@@ -0,0 +1,51 @@
import { ethers } from "ethers";
import program from "commander";
// depends on ethers.js v6.11.0+ for 4844, https://github.com/ethers-io/ethers.js/releases/tag/v6.11.0
// BSC testnet enabled 4844 on block: 39539137
// Usage:
// nvm use 20
// node check_blobtx.js --rpc https://data-seed-prebsc-1-s1.binance.org:8545 --startNum 39539137
// node check_blobtx.js --rpc https://data-seed-prebsc-1-s1.binance.org:8545 --startNum 39539137 --endNum 40345994
program.option("--rpc <Rpc>", "Rpc Server URL");
program.option("--startNum <Num>", "start block", 0);
program.option("--endNum <Num>", "end block", 0);
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.rpc);
const main = async () => {
var startBlock = parseInt(program.startNum)
var endBlock = parseInt(program.endNum)
if (isNaN(endBlock) || isNaN(startBlock) || startBlock == 0) {
console.error("invalid input, --startNum", program.startNum, "--end", program.endNum)
return
}
// if --endNum is not specified, set it to the latest block number.
if (endBlock == 0) {
endBlock = await provider.getBlockNumber();
}
if (startBlock > endBlock) {
console.error("invalid input, startBlock:",startBlock, " endBlock:", endBlock);
return
}
for (let i = startBlock; i <= endBlock; i++) {
let blockData = await provider.getBlock(i);
console.log("startBlock:",startBlock, "endBlock:", endBlock, "curBlock", i, "blobGasUsed", blockData.blobGasUsed);
if (blockData.blobGasUsed == 0) {
continue
}
for (let txIndex = 0; txIndex<= blockData.transactions.length - 1; txIndex++) {
let txHash = blockData.transactions[txIndex]
let txData = await provider.getTransaction(txHash);
if (txData.type == 3) {
console.log("BlobTx in block:",i, " txIndex:", txIndex, " txHash:", txHash);
}
}
}
};
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});


@@ -0,0 +1,49 @@
import { ethers } from "ethers";
import program from "commander";
// Usage:
// node faucet_request.js --rpc localhost:8545 --startNum 39539137
// node faucet_request.js --rpc localhost:8545 --startNum 39539137 --endNum 40345994
// node faucet_request.js --rpc https://data-seed-prebsc-1-s1.bnbchain.org:8545 --startNum 39539137 --endNum 40345994
program.option("--rpc <Rpc>", "Rpc Server URL");
program.option("--startNum <Num>", "start block", 0);
program.option("--endNum <Num>", "end block", 0);
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.rpc);
const main = async () => {
var startBlock = parseInt(program.startNum)
var endBlock = parseInt(program.endNum)
if (isNaN(endBlock) || isNaN(startBlock) || startBlock == 0) {
console.error("invalid input, --startNum", program.startNum, "--end", program.endNum)
return
}
// if --endNum is not specified, set it to the latest block number.
if (endBlock == 0) {
endBlock = await provider.getBlockNumber();
}
if (startBlock > endBlock) {
console.error("invalid input, startBlock:",startBlock, " endBlock:", endBlock);
return
}
let startBalance = await provider.getBalance("0xaa25Aa7a19f9c426E07dee59b12f944f4d9f1DD3", startBlock)
let endBalance = await provider.getBalance("0xaa25Aa7a19f9c426E07dee59b12f944f4d9f1DD3", endBlock)
const faucetAmount = BigInt(0.3 * 10**18); // Convert 0.3 ether to wei as a BigInt
const numFaucetRequest = (startBalance - endBalance) / faucetAmount;
// Convert BigInt to ether
const startBalanceEth = Number(startBalance) / 10**18;
const endBalanceEth = Number(endBalance) / 10**18;
console.log(`Start Balance: ${startBalanceEth} ETH`);
console.log(`End Balance: ${endBalanceEth} ETH`);
console.log("successful faucet request: ",numFaucetRequest);
};
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});

cmd/jsutils/get_perf.js (new file, 70 lines)

@@ -0,0 +1,70 @@
import { ethers } from "ethers";
import program from "commander";
program.option("--rpc <rpc>", "Rpc");
program.option("--startNum <startNum>", "start num")
program.option("--endNum <endNum>", "end num")
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.rpc)
const main = async () => {
let txCountTotal = 0;
let gasUsedTotal = 0;
let inturnBlocks = 0;
let justifiedBlocks = 0;
let turnLength = await provider.send("parlia_getTurnLength", [
ethers.toQuantity(program.startNum)]);
for (let i = program.startNum; i < program.endNum; i++) {
let txCount = await provider.send("eth_getBlockTransactionCountByNumber", [
ethers.toQuantity(i)]);
txCountTotal += ethers.toNumber(txCount)
let header = await provider.send("eth_getHeaderByNumber", [
ethers.toQuantity(i)]);
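// header fields arrive as hex quantity strings ("0x..."); the nested eval
// parses them into plain numbers before the arithmetic below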
let gasUsed = eval(eval(header.gasUsed).toString(10))
gasUsedTotal += gasUsed
let difficulty = eval(eval(header.difficulty).toString(10))
if (difficulty == 2) {
inturnBlocks += 1
}
let timestamp = eval(eval(header.timestamp).toString(10))
let justifiedNumber = await provider.send("parlia_getJustifiedNumber", [
ethers.toQuantity(i)]);
if (justifiedNumber + 1 == i) {
justifiedBlocks += 1
} else {
console.log("justified unexpected", "BlockNumber =", i,"justifiedNumber",justifiedNumber)
}
console.log("BlockNumber =", i, "mod =", i%turnLength, "miner =", header.miner , "difficulty =", difficulty, "txCount =", ethers.toNumber(txCount), "gasUsed", gasUsed, "timestamp", timestamp)
}
let blockCount = program.endNum - program.startNum
let txCountPerBlock = txCountTotal/blockCount
let startHeader = await provider.send("eth_getHeaderByNumber", [
ethers.toQuantity(program.startNum)]);
let startTime = eval(eval(startHeader.timestamp).toString(10))
let endHeader = await provider.send("eth_getHeaderByNumber", [
ethers.toQuantity(program.endNum)]);
let endTime = eval(eval(endHeader.timestamp).toString(10))
let timeCost = endTime - startTime
let avgBlockTime = timeCost/blockCount
let inturnBlocksRatio = inturnBlocks/blockCount
let justifiedBlocksRatio = justifiedBlocks/blockCount
let tps = txCountTotal/timeCost
let M = 1000000
let avgGasUsedPerBlock = gasUsedTotal/blockCount/M
let avgGasUsedPerSecond = gasUsedTotal/timeCost/M
console.log("Get the performance between [", program.startNum, ",", program.endNum, ")");
console.log("txCountPerBlock =", txCountPerBlock, "txCountTotal =", txCountTotal, "BlockCount =", blockCount, "avgBlockTime =", avgBlockTime, "inturnBlocksRatio =", inturnBlocksRatio, "justifiedBlocksRatio =", justifiedBlocksRatio);
console.log("txCountPerSecond =", tps, "avgGasUsedPerBlock =", avgGasUsedPerBlock, "avgGasUsedPerSecond =", avgGasUsedPerSecond);
};
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});


@@ -0,0 +1,164 @@
import { ethers } from "ethers";
import program from "commander";
// Global Options:
program.option("--rpc <rpc>", "Rpc");
// GetTxCount Options:
program.option("--startNum <startNum>", "start num")
program.option("--endNum <endNum>", "end num")
program.option("--miner <miner>", "miner", "")
// GetVersion Options:
program.option("--num <Num>", "validator num", 21)
// GetTopAddr Options:
program.option("--topNum <Num>", "top num of address to be displayed", 20)
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.rpc)
function printUsage() {
console.log("Usage:");
console.log(" node getchainstatus.js --help");
console.log(" node getchainstatus.js [subcommand] [options]");
console.log("\nSubcommands:");
console.log(" GetTxCount: find the block with max tx size of a range");
console.log(" GetVersion: dump validators' binary version, based on Header.Extra");
console.log(" GetTopAddr: get hottest $topNum target address within a block range");
console.log("\nOptions:");
console.log(" --rpc specify the url of RPC endpoint");
console.log(" --startNum the start block number, for command GetTxCount");
console.log(" --endNum the end block number, for command GetTxCount");
console.log(" --miner the miner address, for command GetTxCount");
console.log(" --num the number of blocks to be checked, for command GetVersion");
console.log(" --topNum the topNum of blocks to be checked, for command GetVersion");
console.log("\nExample:");
// mainnet https://bsc-mainnet.nodereal.io/v1/454e504917db4f82b756bd0cf6317dce
console.log(" node getchainstatus.js GetTxCount --rpc https://bsc-testnet-dataseed.bnbchain.org --startNum 40000001 --endNum 40000005")
console.log(" node getchainstatus.js GetVersion --rpc https://bsc-testnet-dataseed.bnbchain.org --num 21")
console.log(" node getchainstatus.js GetTopAddr --rpc https://bsc-testnet-dataseed.bnbchain.org --startNum 40000001 --endNum 40000010 --topNum 10")
}
// 1.cmd: "GetTxCount", usage:
// node getchainstatus.js GetTxCount --rpc https://bsc-testnet-dataseed.bnbchain.org \
// --startNum 40000001 --endNum 40000005 \
// --miner(optional): specified: find the max txCounter from the specified validator,
// not specified: find the max txCounter from all validators
async function getTxCount() {
let txCount = 0;
let num = 0;
console.log("Find the max txs count between", program.startNum, "and", program.endNum);
for (let i = program.startNum; i < program.endNum; i++) {
if (program.miner !== "") {
let blockData = await provider.getBlock(Number(i))
if (program.miner !== blockData.miner) {
continue
}
}
let x = await provider.send("eth_getBlockTransactionCountByNumber", [
ethers.toQuantity(i)]);
let a = ethers.toNumber(x)
if (a > txCount) {
num = i;
txCount = a;
}
}
console.log("BlockNum = ", num, "TxCount =", txCount);
}
// 2.cmd: "GetVersion", usage:
// node getchainstatus.js GetVersion \
// --rpc https://bsc-testnet-dataseed.bnbchain.org \
// --num(optional): default 21, the number of blocks that will be checked
async function getBinaryVersion() {
const blockNum = await provider.getBlockNumber();
console.log(blockNum);
for (let i = 0; i < program.num; i++) {
let blockData = await provider.getBlock(blockNum - i);
// 1.get Geth client version
let major = ethers.toNumber(ethers.dataSlice(blockData.extraData, 2, 3))
let minor = ethers.toNumber(ethers.dataSlice(blockData.extraData, 3, 4))
let patch = ethers.toNumber(ethers.dataSlice(blockData.extraData, 4, 5))
// 2.get minimum txGasPrice based on the last non-zero-gasprice transaction
let lastGasPrice = 0
for (let txIndex = blockData.transactions.length - 1; txIndex >= 0; txIndex--) {
let txHash = blockData.transactions[txIndex]
let txData = await provider.getTransaction(txHash);
if (txData.gasPrice == 0) {
continue
}
lastGasPrice = txData.gasPrice
break
}
console.log(blockData.miner, "version =", major + "." + minor + "." + patch, " MinGasPrice = " + lastGasPrice)
}
};
// 3.cmd: "GetTopAddr", usage:
// node getchainstatus.js GetTopAddr \
// --rpc https://bsc-testnet-dataseed.bnbchain.org \
// --startNum 40000001 --endNum 40000005 \
// --topNum(optional): the top num of address to be displayed, default 20
function getTopKElements(map, k) {
let entries = Array.from(map.entries());
entries.sort((a, b) => b[1] - a[1]);
return entries.slice(0, k);
}
async function getTopAddr() {
let countMap = new Map();
let totalTxs = 0
console.log("Find the top target address, between", program.startNum, "and", program.endNum);
for (let i = program.startNum; i <= program.endNum; i++) {
let blockData = await provider.getBlock(Number(i), true)
totalTxs += blockData.transactions.length
for (let txIndex = blockData.transactions.length - 1; txIndex >= 0; txIndex--) {
let txData = await blockData.getTransaction(txIndex)
if (txData.to == null) {
console.log("Contract creation,txHash:", txData.hash)
continue
}
let toAddr = txData.to;
if (countMap.has(toAddr)) {
countMap.set(toAddr, countMap.get(toAddr) + 1);
} else {
countMap.set(toAddr, 1);
}
}
console.log("progress:", (program.endNum-i), "blocks left", "totalTxs", totalTxs)
}
let tops = getTopKElements(countMap, program.topNum)
tops.forEach((value, key) => {
// value: [ '0x40661F989826CC641Ce1601526Bb16a4221412c8', 71 ]
console.log(key+":", value[0], " ", value[1], " ", ((value[1]*100)/totalTxs).toFixed(2)+"%");
});
};
const main = async () => {
if (process.argv.length <= 2) {
console.error('invalid process.argv.length', process.argv.length);
printUsage()
return
}
const cmd = process.argv[2]
if (cmd === "--help") {
printUsage()
return
}
if (cmd === "GetTxCount") {
await getTxCount()
} else if (cmd === "GetVersion") {
await getBinaryVersion()
} else if (cmd === "GetTopAddr") {
await getTopAddr()
} else {
console.log("unsupported cmd", cmd);
printUsage()
}
}
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});


@@ -0,0 +1,119 @@
import { ethers } from "ethers";
import program from "commander";
program.option("--Rpc <Rpc>", "Rpc");
program.option("--Num <Num>", "num", 0)
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.Rpc);
const slashAbi = [
"function getSlashIndicator(address validatorAddr) external view returns (uint256, uint256)"
]
const validatorSetAbi = [
"function getLivingValidators() external view returns (address[], bytes[])"
]
const stakeHubAbi = [
"function getValidatorDescription(address validatorAddr) external view returns (tuple(string, string, string, string))",
"function consensusToOperator(address consensusAddr) public view returns (address)"
]
const addrValidatorSet = '0x0000000000000000000000000000000000001000';
const validatorSet = new ethers.Contract(addrValidatorSet, validatorSetAbi, provider);
const addrSlash = '0x0000000000000000000000000000000000001001';
const slashIndicator = new ethers.Contract(addrSlash, slashAbi, provider)
const addrStakeHub = '0x0000000000000000000000000000000000002002';
const stakeHub = new ethers.Contract(addrStakeHub, stakeHubAbi, provider)
const validatorMap = new Map([
//BSC
["0x37e9627A91DD13e453246856D58797Ad6583D762", "LegendII"],
["0xB4647b856CB9C3856d559C885Bed8B43e0846a47", "CertiK"],
["0x75B851a27D7101438F45fce31816501193239A83", "Figment"],
["0x502aECFE253E6AA0e8D2A06E12438FFeD0Fe16a0", "BscScan"],
["0xCa503a7eD99eca485da2E875aedf7758472c378C", "InfStones"],
["0x5009317FD4F6F8FeEa9dAe41E5F0a4737BB7A7D5", "NodeReal"],
["0x1cFDBd2dFf70C6e2e30df5012726F87731F38164", "Tranchess"],
["0xF8de5e61322302b2c6e0a525cC842F10332811bf", "Namelix"],
["0xCcB42A9b8d6C46468900527Bc741938E78AB4577", "Turing"],
["0x9f1b7FAE54BE07F4FEE34Eb1aaCb39A1F7B6FC92", "TWStaking"],
["0x7E1FdF03Eb3aC35BF0256694D7fBe6B6d7b3E0c8","LegendIII"],
["0x7b501c7944185130DD4aD73293e8Aa84eFfDcee7","MathW"],
["0x58567F7A51a58708C8B40ec592A38bA64C0697De","Legend"],
["0x460A252B4fEEFA821d3351731220627D7B7d1F3d","Defibit"],
["0x8A239732871AdC8829EA2f47e94087C5FBad47b6","The48Club"],
["0xD3b0d838cCCEAe7ebF1781D11D1bB741DB7Fe1A7","BNBEve"],
["0xF8B99643fAfC79d9404DE68E48C4D49a3936f787","Avengers"],
["0x4e5acf9684652BEa56F2f01b7101a225Ee33d23f","HashKey"],
["0x9bb56C2B4DBE5a06d79911C9899B6f817696ACFc","Feynman"],
["0xbdcc079BBb23C1D9a6F36AA31309676C258aBAC7","Fuji"],
["0x38944092685a336CB6B9ea58836436709a2adC89","Shannon"],
["0xfC1004C0f296Ec3Df4F6762E9EabfcF20EB304a2","Aoraki"],
["0xa0884bb00E5F23fE2427f0E5eC9E51F812848563","Coda"],
["0xe7776De78740f28a96412eE5cbbB8f90896b11A5","Ankr"],
["0xA2D969E82524001Cb6a2357dBF5922B04aD2FCD8","Pexmons"],
["0x5cf810AB8C718ac065b45f892A5BAdAB2B2946B9","Zen"],
["0x4d15D9BCd0c2f33E7510c0de8b42697CA558234a","LegendVII"],
["0x1579ca96EBd49A0B173f86C372436ab1AD393380","LegendV"],
["0xd1F72d433f362922f6565FC77c25e095B29141c8","LegendVI"],
["0xf9814D93b4d904AaA855cBD4266D6Eb0Ec1Aa478","Legend8"],
["0x025a4e09Ea947b8d695f53ddFDD48ddB8F9B06b7","Ciscox"],
["0xE9436F6F30b4B01b57F2780B2898f3820EbD7B98","LegendIV"],
["0xC2d534F079444E6E7Ff9DabB3FD8a26c607932c8","Axion"],
["0x9F7110Ba7EdFda83Fc71BeA6BA3c0591117b440D","LegendIX"],
["0xB997Bf1E3b96919fBA592c1F61CE507E165Ec030","Seoraksan"],
["0x286C1b674d48cFF67b4096b6c1dc22e769581E91","Sigm8"],
["0x73A26778ef9509a6E94b55310eE7233795a9EB25","Coinlix"],
["0x18c44f4FBEde9826C7f257d500A65a3D5A8edebc","Nozti"],
["0xA100FCd08cE722Dc68Ddc3b54237070Cb186f118","Tiollo"],
["0x0F28847cfdbf7508B13Ebb9cEb94B2f1B32E9503","Raptas"],
["0xfD85346c8C991baC16b9c9157e6bdfDACE1cD7d7","Glorin"],
["0x978F05CED39A4EaFa6E8FD045Fe2dd6Da836c7DF","NovaX"],
["0xd849d1dF66bFF1c2739B4399425755C2E0fAbbAb","Nexa"],
["0xA015d9e9206859c13201BB3D6B324d6634276534","Star"],
["0x5ADde0151BfAB27f329e5112c1AeDeed7f0D3692","Veri"],
//Chapel
["0x08265dA01E1A65d62b903c7B34c08cB389bF3D99","Ararat"],
["0x7f5f2cF1aec83bF0c74DF566a41aa7ed65EA84Ea","Kita"],
["0x53387F3321FD69d1E030BB921230dFb188826AFF","Fuji"],
["0x76D76ee8823dE52A1A431884c2ca930C5e72bff3","Seoraksan"],
["0xd447b49CD040D20BC21e49ffEa6487F5638e4346","Everest"],
["0x1a3d9D7A717D64e6088aC937d5aAcDD3E20ca963","Elbrus"],
["0x40D3256EB0BaBE89f0ea54EDAa398513136612f5","Bloxroute"],
["0xF9a1Db0d6f22Bd78ffAECCbc8F47c83Df9FBdbCf","Test"]
]);
const main = async () => {
let blockNum = ethers.getNumber(program.Num)
if (blockNum === 0) {
blockNum = await provider.getBlockNumber()
}
let block = await provider.getBlock(blockNum)
console.log("At block", blockNum, "time", block.date)
const data = await validatorSet.getLivingValidators({blockTag:blockNum})
let totalSlash = 0
for (let i = 0; i < data[0].length; i++) {
let addr = data[0][i];
var val
if (!validatorMap.has(addr)) {
let opAddr = await stakeHub.consensusToOperator(addr, {blockTag:blockNum})
let value = await stakeHub.getValidatorDescription(opAddr, {blockTag:blockNum})
val = value[0]
console.log(addr, val)
} else {
val = validatorMap.get(addr)
}
let info = await slashIndicator.getSlashIndicator(addr, {blockTag:blockNum})
let count = ethers.toNumber(info[1])
totalSlash += count
console.log("Slash:", count, addr, val)
}
console.log("Total slash count", totalSlash)
};
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});


@@ -1,41 +0,0 @@
import { ethers } from "ethers";
import program from "commander";
program.option("--rpc <rpc>", "Rpc");
program.option("--startNum <startNum>", "start num")
program.option("--endNum <endNum>", "end num")
// --miner:
// specified: find the max txCounter from the specified validator
// not specified: find the max txCounter from all validators
program.option("--miner <miner>", "miner", "")
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.rpc)
const main = async () => {
let txCount = 0;
let num = 0;
console.log("Find the max txs count between", program.startNum, "and", program.endNum);
for (let i = program.startNum; i < program.endNum; i++) {
if (program.miner !== "") {
let blockData = await provider.getBlock(Number(i))
if (program.miner !== blockData.miner) {
continue
}
}
let x = await provider.send("eth_getBlockTransactionCountByNumber", [
ethers.toQuantity(i)]);
let a = ethers.toNumber(x)
if (a > txCount) {
num = i;
txCount = a;
}
}
console.log("BlockNum = ", num, "TxCount =", txCount);
};
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});


@@ -1,38 +0,0 @@
import { ethers } from "ethers";
import program from "commander";
program.option("--Rpc <Rpc>", "Rpc");
program.option("--Num <Num>", "validator num", 21)
program.parse(process.argv);
const provider = new ethers.JsonRpcProvider(program.Rpc);
const main = async () => {
const blockNum = await provider.getBlockNumber();
console.log(blockNum);
for (let i = 0; i < program.Num; i++) {
let blockData = await provider.getBlock(blockNum - i);
// 1.get Geth client version
let major = ethers.toNumber(ethers.dataSlice(blockData.extraData, 2, 3))
let minor = ethers.toNumber(ethers.dataSlice(blockData.extraData, 3, 4))
let patch = ethers.toNumber(ethers.dataSlice(blockData.extraData, 4, 5))
// 2.get minimum txGasPrice based on the last non-zero-gasprice transaction
let lastGasPrice = 0
for (let txIndex = blockData.transactions.length - 1; txIndex >= 0; txIndex--) {
let txHash = blockData.transactions[txIndex]
let txData = await provider.getTransaction(txHash);
if (txData.gasPrice == 0) {
continue
}
lastGasPrice = txData.gasPrice
break
}
console.log(blockData.miner, "version =", major + "." + minor + "." + patch, " MinGasPrice = " + lastGasPrice)
}
};
main().then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});


@ -35,8 +35,11 @@ import (
"strings" "strings"
"time" "time"
"github.com/ethereum/go-ethereum/internal/version"
"github.com/ethereum/go-ethereum/accounts" "github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore" "github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/beacon/fakebeacon"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/fdlimit" "github.com/ethereum/go-ethereum/common/fdlimit"
"github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core"
@ -93,10 +96,10 @@ var (
Value: flags.DirectoryString(node.DefaultDataDir()), Value: flags.DirectoryString(node.DefaultDataDir()),
Category: flags.EthCategory, Category: flags.EthCategory,
} }
SeparateDBFlag = &cli.BoolFlag{ MultiDataBaseFlag = &cli.BoolFlag{
Name: "separatedb", Name: "multidatabase",
Usage: "Enable a separated trie database, it will be created within a subdirectory called state, " + Usage: "Enable a separated state and block database, it will be created within two subdirectory called state and block, " +
"Users can copy this state directory to another directory or disk, and then create a symbolic link to the state directory under the chaindata", "Users can copy this state or block directory to another directory or disk, and then create a symbolic link to the state directory under the chaindata",
Category: flags.EthCategory, Category: flags.EthCategory,
} }
DirectBroadcastFlag = &cli.BoolFlag{ DirectBroadcastFlag = &cli.BoolFlag{
@ -224,7 +227,7 @@ var (
// hbss2pbss command options // hbss2pbss command options
ForceFlag = &cli.BoolFlag{ ForceFlag = &cli.BoolFlag{
Name: "force", Name: "force",
Usage: "Force convert hbss trie node to pbss trie node. Ingore any metadata", Usage: "Force convert hbss trie node to pbss trie node. Ignore any metadata",
Value: false, Value: false,
} }
// Dump command options. // Dump command options.
@ -305,9 +308,14 @@ var (
Usage: "Manually specify the Rialto Genesis Hash, to trigger builtin network logic", Usage: "Manually specify the Rialto Genesis Hash, to trigger builtin network logic",
Category: flags.EthCategory, Category: flags.EthCategory,
} }
OverrideCancun = &cli.Uint64Flag{ OverridePassedForkTime = &cli.Uint64Flag{
Name: "override.cancun", Name: "override.passedforktime",
Usage: "Manually specify the Cancun fork timestamp, overriding the bundled setting", Usage: "Manually specify the hard fork timestamp except the last one, overriding the bundled setting",
Category: flags.EthCategory,
}
OverrideBohr = &cli.Uint64Flag{
Name: "override.bohr",
Usage: "Manually specify the Bohr fork timestamp, overriding the bundled setting",
Category: flags.EthCategory, Category: flags.EthCategory,
} }
OverrideVerkle = &cli.Uint64Flag{ OverrideVerkle = &cli.Uint64Flag{
@ -315,15 +323,6 @@ var (
Usage: "Manually specify the Verkle fork timestamp, overriding the bundled setting", Usage: "Manually specify the Verkle fork timestamp, overriding the bundled setting",
Category: flags.EthCategory, Category: flags.EthCategory,
} }
OverrideFeynman = &cli.Uint64Flag{
Name: "override.feynman",
Usage: "Manually specify the Feynman fork timestamp, overriding the bundled setting",
Category: flags.EthCategory,
}
OverrideFeynmanFix = &cli.Uint64Flag{
Name: "override.feynmanfix",
Usage: "Manually specify the FeynmanFix fork timestamp, overriding the bundled setting",
}
OverrideFullImmutabilityThreshold = &cli.Uint64Flag{ OverrideFullImmutabilityThreshold = &cli.Uint64Flag{
Name: "override.immutabilitythreshold", Name: "override.immutabilitythreshold",
Usage: "It is the number of blocks after which a chain segment is considered immutable, only for testing purpose", Usage: "It is the number of blocks after which a chain segment is considered immutable, only for testing purpose",
@ -342,6 +341,18 @@ var (
Value: params.DefaultExtraReserveForBlobRequests, Value: params.DefaultExtraReserveForBlobRequests,
Category: flags.EthCategory, Category: flags.EthCategory,
} }
OverrideBreatheBlockInterval = &cli.Uint64Flag{
Name: "override.breatheblockinterval",
Usage: "It changes the interval between breathe blocks, only for testing purpose",
Value: params.BreatheBlockInterval,
Category: flags.EthCategory,
}
OverrideFixedTurnLength = &cli.Uint64Flag{
Name: "override.fixedturnlength",
Usage: "It use fixed or random values for turn length instead of reading from the contract, only for testing purpose",
Value: params.FixedTurnLength,
Category: flags.EthCategory,
}
SyncModeFlag = &flags.TextMarshalerFlag{ SyncModeFlag = &flags.TextMarshalerFlag{
Name: "syncmode", Name: "syncmode",
Usage: `Blockchain sync mode ("snap" or "full")`, Usage: `Blockchain sync mode ("snap" or "full")`,
@ -365,6 +376,12 @@ var (
Value: false, Value: false,
Category: flags.StateCategory, Category: flags.StateCategory,
} }
JournalFileFlag = &cli.BoolFlag{
Name: "journalfile",
Usage: "Enable using journal file to store the TrieJournal instead of KVDB in pbss (default = false)",
Value: false,
Category: flags.StateCategory,
}
StateHistoryFlag = &cli.Uint64Flag{ StateHistoryFlag = &cli.Uint64Flag{
Name: "history.state", Name: "history.state",
Usage: "Number of recent blocks to retain state history for (default = 90,000 blocks, 0 = entire chain)", Usage: "Number of recent blocks to retain state history for (default = 90,000 blocks, 0 = entire chain)",
@ -876,6 +893,11 @@ var (
Usage: "Disables the peer discovery mechanism (manual peer addition)", Usage: "Disables the peer discovery mechanism (manual peer addition)",
Category: flags.NetworkingCategory, Category: flags.NetworkingCategory,
} }
PeerFilterPatternsFlag = &cli.StringSliceFlag{
Name: "peerfilter",
Usage: "Disallow peers connection if peer name matches the given regular expressions",
Category: flags.NetworkingCategory,
}
DiscoveryV4Flag = &cli.BoolFlag{ DiscoveryV4Flag = &cli.BoolFlag{
Name: "discovery.v4", Name: "discovery.v4",
Aliases: []string{"discv4"}, Aliases: []string{"discv4"},
@ -1069,6 +1091,7 @@ Please note that --` + MetricsHTTPFlag.Name + ` must be set to start the server.
Name: "block-amount-reserved", Name: "block-amount-reserved",
Usage: "Sets the expected remained amount of blocks for offline block prune", Usage: "Sets the expected remained amount of blocks for offline block prune",
Category: flags.BlockHistoryCategory, Category: flags.BlockHistoryCategory,
Value: params.FullImmutabilityThreshold,
} }
CheckSnapshotWithMPT = &cli.BoolFlag{ CheckSnapshotWithMPT = &cli.BoolFlag{
@ -1126,6 +1149,25 @@ Please note that --` + MetricsHTTPFlag.Name + ` must be set to start the server.
Value: params.DefaultExtraReserveForBlobRequests, Value: params.DefaultExtraReserveForBlobRequests,
Category: flags.MiscCategory, Category: flags.MiscCategory,
} }
// Fake beacon
FakeBeaconEnabledFlag = &cli.BoolFlag{
Name: "fake-beacon",
Usage: "Enable the HTTP-RPC server of fake-beacon",
Category: flags.APICategory,
}
FakeBeaconAddrFlag = &cli.StringFlag{
Name: "fake-beacon.addr",
Usage: "HTTP-RPC server listening addr of fake-beacon",
Value: fakebeacon.DefaultAddr,
Category: flags.APICategory,
}
FakeBeaconPortFlag = &cli.IntFlag{
Name: "fake-beacon.port",
Usage: "HTTP-RPC server listening port of fake-beacon",
Value: fakebeacon.DefaultPort,
Category: flags.APICategory,
}
) )
var ( var (
@ -1144,7 +1186,6 @@ var (
DBEngineFlag, DBEngineFlag,
StateSchemeFlag, StateSchemeFlag,
HttpHeaderFlag, HttpHeaderFlag,
SeparateDBFlag,
} }
) )
@ -1542,6 +1583,9 @@ func SetP2PConfig(ctx *cli.Context, cfg *p2p.Config) {
if ctx.IsSet(NoDiscoverFlag.Name) { if ctx.IsSet(NoDiscoverFlag.Name) {
cfg.NoDiscovery = true cfg.NoDiscovery = true
} }
if ctx.IsSet(PeerFilterPatternsFlag.Name) {
cfg.PeerFilterPatterns = ctx.StringSlice(PeerFilterPatternsFlag.Name)
}
CheckExclusive(ctx, DiscoveryV4Flag, NoDiscoverFlag) CheckExclusive(ctx, DiscoveryV4Flag, NoDiscoverFlag)
CheckExclusive(ctx, DiscoveryV5Flag, NoDiscoverFlag) CheckExclusive(ctx, DiscoveryV5Flag, NoDiscoverFlag)
@ -1962,6 +2006,10 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
if ctx.IsSet(PathDBSyncFlag.Name) { if ctx.IsSet(PathDBSyncFlag.Name) {
cfg.PathSyncFlush = true cfg.PathSyncFlush = true
} }
if ctx.IsSet(JournalFileFlag.Name) {
cfg.JournalFileEnabled = true
}
if ctx.String(GCModeFlag.Name) == "archive" && cfg.TransactionHistory != 0 { if ctx.String(GCModeFlag.Name) == "archive" && cfg.TransactionHistory != 0 {
cfg.TransactionHistory = 0 cfg.TransactionHistory = 0
log.Warn("Disabled transaction unindexing for archive node") log.Warn("Disabled transaction unindexing for archive node")
@ -2057,7 +2105,7 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
} }
cfg.Genesis = core.DefaultBSCGenesisBlock() cfg.Genesis = core.DefaultBSCGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.BSCGenesisHash) SetDNSDiscoveryDefaults(cfg, params.BSCGenesisHash)
case ctx.Bool(ChapelFlag.Name): case ctx.Bool(ChapelFlag.Name) || cfg.NetworkId == 97:
if !ctx.IsSet(NetworkIdFlag.Name) { if !ctx.IsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 97 cfg.NetworkId = 97
} }
@ -2271,6 +2319,67 @@ func EnableNodeInfo(poolConfig *legacypool.Config, nodeInfo *p2p.NodeInfo) Setup
} }
} }
func EnableNodeTrack(ctx *cli.Context, cfg *ethconfig.Config, stack *node.Node) SetupMetricsOption {
nodeInfo := stack.Server().NodeInfo()
return func() {
// register node info into metrics
metrics.NewRegisteredLabel("node-stats", nil).Mark(map[string]interface{}{
"NodeType": parseNodeType(),
"ENR": nodeInfo.ENR,
"Mining": ctx.Bool(MiningEnabledFlag.Name),
"Etherbase": parseEtherbase(cfg),
"MiningFeatures": parseMiningFeatures(ctx, cfg),
"DBFeatures": parseDBFeatures(cfg, stack),
})
}
}
func parseEtherbase(cfg *ethconfig.Config) string {
if cfg.Miner.Etherbase == (common.Address{}) {
return ""
}
return cfg.Miner.Etherbase.String()
}
func parseNodeType() string {
git, _ := version.VCS()
version := []string{params.VersionWithMeta}
if len(git.Commit) >= 7 {
version = append(version, git.Commit[:7])
}
if git.Date != "" {
version = append(version, git.Date)
}
arch := []string{runtime.GOOS, runtime.GOARCH}
infos := []string{"BSC", strings.Join(version, "-"), strings.Join(arch, "-"), runtime.Version()}
return strings.Join(infos, "/")
}
func parseDBFeatures(cfg *ethconfig.Config, stack *node.Node) string {
var features []string
if cfg.StateScheme == rawdb.PathScheme {
features = append(features, "PBSS")
}
if stack.CheckIfMultiDataBase() {
features = append(features, "MultiDB")
}
return strings.Join(features, "|")
}
func parseMiningFeatures(ctx *cli.Context, cfg *ethconfig.Config) string {
if !ctx.Bool(MiningEnabledFlag.Name) {
return ""
}
var features []string
if cfg.Miner.Mev.Enabled {
features = append(features, "MEV")
}
if cfg.Miner.VoteEnable {
features = append(features, "FFVoting")
}
return strings.Join(features, "|")
}
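How these pieces are expected to compose (a minimal sketch of a hypothetical call site; the actual cmd/geth wiring may differ):

// Register the node-stats label once metrics collection is up.
SetupMetrics(ctx, EnableNodeTrack(ctx, &cfg.Eth, stack))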
func SetupMetrics(ctx *cli.Context, options ...SetupMetricsOption) {
if metrics.Enabled {
log.Info("Enabling metrics collection")
@@ -2377,9 +2486,11 @@ func MakeChainDatabase(ctx *cli.Context, stack *node.Node, readonly, disableFree
default:
chainDb, err = stack.OpenDatabaseWithFreezer("chaindata", cache, handles, ctx.String(AncientFlag.Name), "", readonly, disableFreeze, false, false)
// set the separate state database
-if stack.IsSeparatedDB() && err == nil {
+if stack.CheckIfMultiDataBase() && err == nil {
stateDiskDb := MakeStateDataBase(ctx, stack, readonly, false)
chainDb.SetStateStore(stateDiskDb)
+blockDb := MakeBlockDatabase(ctx, stack, readonly, false)
+chainDb.SetBlockStore(blockDb)
}
}
if err != nil {
@@ -2391,7 +2502,7 @@ func MakeChainDatabase(ctx *cli.Context, stack *node.Node, readonly, disableFree
// MakeStateDataBase opens a separate state database using the flags passed to the client and will hard crash if it fails.
func MakeStateDataBase(ctx *cli.Context, stack *node.Node, readonly, disableFreeze bool) ethdb.Database {
cache := ctx.Int(CacheFlag.Name) * ctx.Int(CacheDatabaseFlag.Name) / 100
-handles := MakeDatabaseHandles(ctx.Int(FDLimitFlag.Name)) / 2
+handles := MakeDatabaseHandles(ctx.Int(FDLimitFlag.Name)) * 90 / 100
statediskdb, err := stack.OpenDatabaseWithFreezer("chaindata/state", cache, handles, "", "", readonly, disableFreeze, false, false)
if err != nil {
Fatalf("Failed to open separate trie database: %v", err)
@@ -2399,6 +2510,23 @@ func MakeStateDataBase(ctx *cli.Context, stack *node.Node, readonly, disableFree
return statediskdb
}
// MakeBlockDatabase opens a separate block database using the flags passed to the client and will hard crash if it fails.
func MakeBlockDatabase(ctx *cli.Context, stack *node.Node, readonly, disableFreeze bool) ethdb.Database {
cache := ctx.Int(CacheFlag.Name) * ctx.Int(CacheDatabaseFlag.Name) / 100
handles := MakeDatabaseHandles(ctx.Int(FDLimitFlag.Name)) / 10
blockDb, err := stack.OpenDatabaseWithFreezer("chaindata/block", cache, handles, "", "", readonly, disableFreeze, false, false)
if err != nil {
Fatalf("Failed to open separate block database: %v", err)
}
return blockDb
}
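Taken together, the file-descriptor budget is no longer halved but split between the two stores: roughly 90% of the handles go to chaindata/state and 10% to chaindata/block. A worked example, assuming MakeDatabaseHandles returns 1024:

handles := 1024
stateHandles := handles * 90 / 100 // 921 for chaindata/state
blockHandles := handles / 10       // 102 for chaindata/block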
func PathDBConfigAddJournalFilePath(stack *node.Node, config *pathdb.Config) *pathdb.Config {
path := fmt.Sprintf("%s/%s", stack.ResolvePath("chaindata"), eth.JournalFileName)
config.JournalFilePath = path
return config
}
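A minimal usage sketch (hypothetical call site) for pinning the pathdb journal next to the chain data:

config := &pathdb.Config{}
config = PathDBConfigAddJournalFilePath(stack, config)
// config.JournalFilePath is now "<resolved chaindata dir>/<eth.JournalFileName>"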
// tryMakeReadOnlyDatabase tries to open the chain database in read-only mode,
// or falls back to write mode if the database is not initialized.
//
@@ -2541,7 +2669,7 @@ func MakeConsolePreloads(ctx *cli.Context) []string {
}

// MakeTrieDatabase constructs a trie database based on the configured scheme.
-func MakeTrieDatabase(ctx *cli.Context, disk ethdb.Database, preimage bool, readOnly bool, isVerkle bool) *triedb.Database {
+func MakeTrieDatabase(ctx *cli.Context, stack *node.Node, disk ethdb.Database, preimage bool, readOnly bool, isVerkle bool) *triedb.Database {
config := &triedb.Config{
Preimages: preimage,
IsVerkle: isVerkle,
@@ -2562,6 +2690,7 @@ func MakeTrieDatabase(ctx *cli.Context, disk ethdb.Database, preimage bool, read
} else {
config.PathDB = pathdb.Defaults
}
+config.PathDB.JournalFilePath = fmt.Sprintf("%s/%s", stack.ResolvePath("chaindata"), eth.JournalFileName)
return triedb.NewDatabase(disk, config)
}

View File

@@ -83,7 +83,7 @@ func TestHistoryImportAndExport(t *testing.T) {
t.Fatalf("unable to initialize chain: %v", err)
}
if _, err := chain.InsertChain(blocks); err != nil {
-t.Fatalf("error insterting chain: %v", err)
+t.Fatalf("error inserting chain: %v", err)
}

// Make temp directory for era files.
@@ -163,7 +163,7 @@ func TestHistoryImportAndExport(t *testing.T) {
// Now import Era.
freezer := t.TempDir()
-db2, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), freezer, "", false, false, false, false)
+db2, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), freezer, "", false, false, false, false, false)
if err != nil {
panic(err)
}

View File

@@ -59,6 +59,9 @@ type ChainHeaderReader interface {
// GetHighestVerifiedHeader retrieves the highest header verified.
GetHighestVerifiedHeader() *types.Header

+// GetVerifiedBlockByHash retrieves the header of a verified block by its hash.
+GetVerifiedBlockByHash(hash common.Hash) *types.Header

// ChasingHead returns the best chain head of peers.
ChasingHead() *types.Header
}

View File

@@ -2306,6 +2306,19 @@ const validatorSetABI = `
],
"stateMutability": "view"
},
+{
+"inputs": [],
+"name": "getTurnLength",
+"outputs": [
+{
+"internalType": "uint256",
+"name": "",
+"type": "uint256"
+}
+],
+"stateMutability": "view",
+"type": "function"
+},
{
"type": "function",
"name": "getValidators",

View File

@@ -31,13 +31,7 @@ type API struct {

// GetSnapshot retrieves the state snapshot at a given block.
func (api *API) GetSnapshot(number *rpc.BlockNumber) (*Snapshot, error) {
-// Retrieve the requested block number (or current if none requested)
-var header *types.Header
-if number == nil || *number == rpc.LatestBlockNumber {
-header = api.chain.CurrentHeader()
-} else {
-header = api.chain.GetHeaderByNumber(uint64(number.Int64()))
-}
+header := api.getHeader(number)
// Ensure we have an actually valid block and return its snapshot
if header == nil {
return nil, errUnknownBlock
@@ -56,13 +50,7 @@ func (api *API) GetSnapshotAtHash(hash common.Hash) (*Snapshot, error) {

// GetValidators retrieves the list of validators at the specified block.
func (api *API) GetValidators(number *rpc.BlockNumber) ([]common.Address, error) {
-// Retrieve the requested block number (or current if none requested)
-var header *types.Header
-if number == nil || *number == rpc.LatestBlockNumber {
-header = api.chain.CurrentHeader()
-} else {
-header = api.chain.GetHeaderByNumber(uint64(number.Int64()))
-}
+header := api.getHeader(number)
// Ensure we have an actually valid block and return the validators from its snapshot
if header == nil {
return nil, errUnknownBlock
@@ -86,3 +74,65 @@ func (api *API) GetValidatorsAtHash(hash common.Hash) ([]common.Address, error)
}
return snap.validators(), nil
}
func (api *API) GetJustifiedNumber(number *rpc.BlockNumber) (uint64, error) {
header := api.getHeader(number)
// Ensure we have an actually valid block and return the justified number from its snapshot
if header == nil {
return 0, errUnknownBlock
}
snap, err := api.parlia.snapshot(api.chain, header.Number.Uint64(), header.Hash(), nil)
if err != nil || snap.Attestation == nil {
return 0, err
}
return snap.Attestation.TargetNumber, nil
}
func (api *API) GetTurnLength(number *rpc.BlockNumber) (uint8, error) {
header := api.getHeader(number)
// Ensure we have an actually valid block and return the turn length from its snapshot
if header == nil {
return 0, errUnknownBlock
}
snap, err := api.parlia.snapshot(api.chain, header.Number.Uint64(), header.Hash(), nil)
if err != nil || snap.TurnLength == 0 {
return 0, err
}
return snap.TurnLength, nil
}
func (api *API) GetFinalizedNumber(number *rpc.BlockNumber) (uint64, error) {
header := api.getHeader(number)
// Ensure we have an actually valid block and return the finalized number from its snapshot
if header == nil {
return 0, errUnknownBlock
}
snap, err := api.parlia.snapshot(api.chain, header.Number.Uint64(), header.Hash(), nil)
if err != nil || snap.Attestation == nil {
return 0, err
}
return snap.Attestation.SourceNumber, nil
}
func (api *API) getHeader(number *rpc.BlockNumber) (header *types.Header) {
currentHeader := api.chain.CurrentHeader()
if number == nil || *number == rpc.LatestBlockNumber {
header = currentHeader // current if none requested
} else if *number == rpc.SafeBlockNumber {
justifiedNumber, _, err := api.parlia.GetJustifiedNumberAndHash(api.chain, []*types.Header{currentHeader})
if err != nil {
return nil
}
header = api.chain.GetHeaderByNumber(justifiedNumber)
} else if *number == rpc.FinalizedBlockNumber {
header = api.parlia.GetFinalizedHeader(api.chain, currentHeader)
} else if *number == rpc.PendingBlockNumber {
return nil // no pending blocks on bsc
} else if *number == rpc.EarliestBlockNumber {
header = api.chain.GetHeaderByNumber(0)
} else {
header = api.chain.GetHeaderByNumber(uint64(number.Int64()))
}
return
}
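Assuming these getters are registered under the same parlia RPC namespace as the existing GetSnapshot, they can be probed with go-ethereum's rpc client like so (a sketch; the endpoint and namespace are assumptions):

client, err := rpc.Dial("http://127.0.0.1:8545") // import "github.com/ethereum/go-ethereum/rpc"
if err != nil {
log.Crit("dial failed", "err", err)
}
var turnLength uint8
// "latest" resolves to rpc.LatestBlockNumber, i.e. the current header path of getHeader
if err := client.Call(&turnLength, "parlia_getTurnLength", "latest"); err != nil {
log.Crit("call failed", "err", err)
}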

View File

@@ -0,0 +1,91 @@
package parlia
import (
"context"
"errors"
"math/big"
mrand "math/rand"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/core/systemcontracts"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
)
func (p *Parlia) getTurnLength(chain consensus.ChainHeaderReader, header *types.Header) (*uint8, error) {
parent := chain.GetHeaderByHash(header.ParentHash)
if parent == nil {
return nil, errors.New("parent not found")
}
var turnLength uint8
if p.chainConfig.IsBohr(parent.Number, parent.Time) {
turnLengthFromContract, err := p.getTurnLengthFromContract(parent)
if err != nil {
return nil, err
}
if turnLengthFromContract == nil {
return nil, errors.New("unexpected error when getTurnLengthFromContract")
}
turnLength = uint8(turnLengthFromContract.Int64())
} else {
turnLength = defaultTurnLength
}
log.Debug("getTurnLength", "turnLength", turnLength)
return &turnLength, nil
}
func (p *Parlia) getTurnLengthFromContract(header *types.Header) (turnLength *big.Int, err error) {
// mock to get turnLength from the contract
if params.FixedTurnLength >= 1 && params.FixedTurnLength <= 9 {
if params.FixedTurnLength == 2 {
return p.getRandTurnLength(header)
}
return big.NewInt(int64(params.FixedTurnLength)), nil
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
method := "getTurnLength"
toAddress := common.HexToAddress(systemcontracts.ValidatorContract)
gas := (hexutil.Uint64)(uint64(math.MaxUint64 / 2))
data, err := p.validatorSetABI.Pack(method)
if err != nil {
log.Error("Unable to pack tx for getTurnLength", "error", err)
return nil, err
}
msgData := (hexutil.Bytes)(data)
blockNr := rpc.BlockNumberOrHashWithHash(header.Hash(), false)
result, err := p.ethAPI.Call(ctx, ethapi.TransactionArgs{
Gas: &gas,
To: &toAddress,
Data: &msgData,
}, &blockNr, nil, nil)
if err != nil {
return nil, err
}
if err := p.validatorSetABI.UnpackIntoInterface(&turnLength, method, result); err != nil {
return nil, err
}
return turnLength, nil
}
// getRandTurnLength returns a random valid value, used to test switching turn length
func (p *Parlia) getRandTurnLength(header *types.Header) (turnLength *big.Int, err error) {
turnLengths := [8]uint8{1, 3, 4, 5, 6, 7, 8, 9}
r := mrand.New(mrand.NewSource(int64(header.Time)))
lengthIndex := int(r.Int31n(int32(len(turnLengths))))
return big.NewInt(int64(turnLengths[lengthIndex])), nil
}
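Note the mock path above: params.FixedTurnLength values 1-9 bypass the contract call entirely, and the special value 2 routes to this deterministic pseudo-random picker, so re-running over the same header time reproduces the same turn length. A quick illustration (hypothetical header time):

r := mrand.New(mrand.NewSource(int64(1726640000))) // header.Time as the seed
idx := int(r.Int31n(8))                            // index into {1, 3, 4, 5, 6, 7, 8, 9}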

View File

@@ -15,14 +15,13 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
+"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
)

-const SecondsPerDay uint64 = 86400

// the params should be two blocks' time(timestamp)
func sameDayInUTC(first, second uint64) bool {
-return first/SecondsPerDay == second/SecondsPerDay
+return first/params.BreatheBlockInterval == second/params.BreatheBlockInterval
}

func isBreatheBlock(lastBlockTime, blockTime uint64) bool {
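A quick check of the renamed interval arithmetic, assuming params.BreatheBlockInterval is 86400 (one UTC day, matching the removed SecondsPerDay constant):

sameDayInUTC(1726617600, 1726621200) // true: both map to UTC day 19984
sameDayInUTC(1726617600, 1726704000) // false: the second timestamp starts day 19985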

View File

@@ -6,6 +6,7 @@ import (
"encoding/hex"
"errors"
"fmt"
+"io"
"math"
"math/big"
"math/rand"
@@ -16,7 +17,7 @@ import (
lru "github.com/hashicorp/golang-lru"
"github.com/holiman/uint256"
-"github.com/prysmaticlabs/prysm/v4/crypto/bls"
+"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/willf/bitset"
"golang.org/x/crypto/sha3"
@@ -48,15 +49,18 @@
)

const (
inMemorySnapshots = 256 // Number of recent snapshots to keep in memory
inMemorySignatures = 4096 // Number of recent block signatures to keep in memory
+inMemoryHeaders = 86400 // Number of recent headers to keep in memory for double sign detection

checkpointInterval = 1024 // Number of blocks after which to save the snapshot to the database
-defaultEpochLength = uint64(100) // Default number of blocks of checkpoint to update validatorSet from contract
+defaultEpochLength = uint64(200) // Default number of blocks of checkpoint to update validatorSet from contract
+defaultTurnLength = uint8(1) // Default consecutive number of blocks a validator receives priority for block production

extraVanity = 32 // Fixed number of extra-data prefix bytes reserved for signer vanity
extraSeal = 65 // Fixed number of extra-data suffix bytes reserved for signer seal
nextForkHashSize = 4 // Fixed number of extra-data suffix bytes reserved for nextForkHash.
+turnLengthSize = 1 // Fixed number of extra-data suffix bytes reserved for turnLength

validatorBytesLengthBeforeLuban = common.AddressLength
validatorBytesLength = common.AddressLength + types.BLSPublicKeyLength
@@ -64,7 +68,6 @@ const (
wiggleTime = uint64(1) // second, Random delay (per signer) to allow concurrent signers
initialBackOffTime = uint64(1) // second
-processBackOffTime = uint64(1) // second

systemRewardPercent = 4 // it means 1/2^4 = 1/16 percentage of gas fee incoming will be distributed to system
@@ -80,6 +83,7 @@ var (
verifyVoteAttestationErrorCounter = metrics.NewRegisteredCounter("parlia/verifyVoteAttestation/error", nil)
updateAttestationErrorCounter = metrics.NewRegisteredCounter("parlia/updateAttestation/error", nil)
validVotesfromSelfCounter = metrics.NewRegisteredCounter("parlia/VerifyVote/self", nil)
+doubleSignCounter = metrics.NewRegisteredCounter("parlia/doublesign", nil)

systemContracts = map[common.Address]bool{
common.HexToAddress(systemcontracts.ValidatorContract): true,
@@ -124,6 +128,10 @@
// invalid list of validators (i.e. non divisible by 20 bytes).
errInvalidSpanValidators = errors.New("invalid validator list on sprint end block")

+// errInvalidTurnLength is returned if a block contains an
+// invalid length of turn (i.e. no data left after parsing validators).
+errInvalidTurnLength = errors.New("invalid turnLength")

// errInvalidMixDigest is returned if a block's mix digest is non-zero.
errInvalidMixDigest = errors.New("non-zero mix digest")
@@ -134,6 +142,10 @@
// list of validators different than the one the local node calculated.
errMismatchingEpochValidators = errors.New("mismatching validator list on epoch block")

+// errMismatchingEpochTurnLength is returned if a sprint block contains a
+// turn length different than the one the local node calculated.
+errMismatchingEpochTurnLength = errors.New("mismatching turn length on epoch block")

// errInvalidDifficulty is returned if the difficulty of a block is missing.
errInvalidDifficulty = errors.New("invalid difficulty")
@@ -216,8 +228,11 @@ type Parlia struct {
genesisHash common.Hash
db ethdb.Database // Database to store and retrieve snapshot checkpoints

recentSnaps *lru.ARCCache // Snapshots for recent block to speed up
signatures *lru.ARCCache // Signatures of recent blocks to speed up mining
+recentHeaders *lru.ARCCache
+// Recent headers to check for double signing: the key combines the parent hash and the miner; the value is the header hash.
+// If the same key already maps to a different header hash, a double sign is detected.

signer types.Signer
@@ -263,6 +278,10 @@ func New(
if err != nil {
panic(err)
}
+recentHeaders, err := lru.NewARC(inMemoryHeaders)
+if err != nil {
+panic(err)
+}
vABIBeforeLuban, err := abi.JSON(strings.NewReader(validatorSetABIBeforeLuban))
if err != nil {
panic(err)
@@ -286,6 +305,7 @@ func New(
db: db,
ethAPI: ethAPI,
recentSnaps: recentSnaps,
+recentHeaders: recentHeaders,
signatures: signatures,
validatorSetABIBeforeLuban: vABIBeforeLuban,
validatorSetABI: vABI,
@@ -297,6 +317,10 @@ func New(
return c
}
func (p *Parlia) Period() uint64 {
return p.config.Period
}
func (p *Parlia) IsSystemTransaction(tx *types.Transaction, header *types.Header) (bool, error) {
// deploy a contract
if tx.To() == nil {
@@ -355,6 +379,7 @@ func (p *Parlia) VerifyHeaders(chain consensus.ChainHeaderReader, headers []*typ
// On luban fork, we introduce vote attestation into the header's extra field, so extra format is different from before.
// Before luban fork: |---Extra Vanity---|---Validators Bytes (or Empty)---|---Extra Seal---|
// After luban fork: |---Extra Vanity---|---Validators Number and Validators Bytes (or Empty)---|---Vote Attestation (or Empty)---|---Extra Seal---|
+// After bohr fork: |---Extra Vanity---|---Validators Number and Validators Bytes (or Empty)---|---Turn Length (or Empty)---|---Vote Attestation (or Empty)---|---Extra Seal---|
func getValidatorBytesFromHeader(header *types.Header, chainConfig *params.ChainConfig, parliaConfig *params.ParliaConfig) []byte {
if len(header.Extra) <= extraVanity+extraSeal {
return nil
@@ -371,11 +396,15 @@ func getValidatorBytesFromHeader(header *types.Header, chainConfig *params.Chain
return nil
}
num := int(header.Extra[extraVanity])
-if num == 0 || len(header.Extra) <= extraVanity+extraSeal+num*validatorBytesLength {
-return nil
-}
start := extraVanity + validatorNumberSize
end := start + num*validatorBytesLength
+extraMinLen := end + extraSeal
+if chainConfig.IsBohr(header.Number, header.Time) {
+extraMinLen += turnLengthSize
+}
+if num == 0 || len(header.Extra) < extraMinLen {
+return nil
+}
return header.Extra[start:end]
}
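Spelled out, the post-Bohr minimum extra-data length on an epoch boundary is the check above with the constants from this file substituted (validatorBytesLength is 20 address bytes plus 48 BLS key bytes):

// extraVanity + validatorNumberSize + num*validatorBytesLength + turnLengthSize + extraSeal
// = 32 + 1 + num*68 + 1 + 65 bytes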
@@ -394,11 +423,14 @@ func getVoteAttestationFromHeader(header *types.Header, chainConfig *params.Chai
attestationBytes = header.Extra[extraVanity : len(header.Extra)-extraSeal]
} else {
num := int(header.Extra[extraVanity])
-if len(header.Extra) <= extraVanity+extraSeal+validatorNumberSize+num*validatorBytesLength {
-return nil, nil
-}
-start := extraVanity + validatorNumberSize + num*validatorBytesLength
-end := len(header.Extra) - extraSeal
+start := extraVanity + validatorNumberSize + num*validatorBytesLength
+if chainConfig.IsBohr(header.Number, header.Time) {
+start += turnLengthSize
+}
+end := len(header.Extra) - extraSeal
+if end <= start {
+return nil, nil
+}
attestationBytes = header.Extra[start:end]
}
@@ -590,15 +622,11 @@ func (p *Parlia) verifyHeader(chain consensus.ChainHeaderReader, header *types.H
return fmt.Errorf("invalid excessBlobGas: have %d, expected nil", header.ExcessBlobGas)
case header.BlobGasUsed != nil:
return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", header.BlobGasUsed)
-case header.ParentBeaconRoot != nil:
-return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", header.ParentBeaconRoot)
case header.WithdrawalsHash != nil:
return fmt.Errorf("invalid WithdrawalsHash, have %#x, expected nil", header.WithdrawalsHash)
}
} else {
switch {
-case header.ParentBeaconRoot != nil:
-return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", header.ParentBeaconRoot)
case !header.EmptyWithdrawalsHash():
return errors.New("header has wrong WithdrawalsHash")
}
@@ -607,6 +635,17 @@
}
}
bohr := chain.Config().IsBohr(header.Number, header.Time)
if !bohr {
if header.ParentBeaconRoot != nil {
return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", header.ParentBeaconRoot)
}
} else {
if header.ParentBeaconRoot == nil || *header.ParentBeaconRoot != (common.Hash{}) {
return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected zero hash", header.ParentBeaconRoot)
}
}
// All basic checks passed, verify cascading fields
return p.verifyCascadingFields(chain, header, parents)
}
@@ -699,13 +738,28 @@ func (p *Parlia) snapshot(chain consensus.ChainHeaderReader, number uint64, hash
}
}

-// If we're at the genesis, snapshot the initial state.
-if number == 0 {
-checkpoint := chain.GetHeaderByNumber(number)
-if checkpoint != nil {
-// get checkpoint data
-hash := checkpoint.Hash()
+// If we're at the genesis, snapshot the initial state. Alternatively if we have
+// piled up more headers than allowed to be reorged (chain reinit from a freezer),
+// consider the checkpoint trusted and snapshot it.
+// An offset `p.config.Epoch - 1` can ensure getting the right validators.
+if number == 0 || ((number+1)%p.config.Epoch == 0 && (len(headers) > int(params.FullImmutabilityThreshold))) {
+var (
+checkpoint *types.Header
+blockHash common.Hash
+)
+if number == 0 {
+checkpoint = chain.GetHeaderByNumber(0)
+if checkpoint != nil {
+blockHash = checkpoint.Hash()
+}
+} else {
+checkpoint = chain.GetHeaderByNumber(number + 1 - p.config.Epoch)
+blockHeader := chain.GetHeaderByNumber(number)
+if blockHeader != nil {
+blockHash = blockHeader.Hash()
+}
+}
+if checkpoint != nil && blockHash != (common.Hash{}) {
// get validators from headers
validators, voteAddrs, err := parseValidators(checkpoint, p.chainConfig, p.config)
if err != nil {
@@ -713,11 +767,27 @@
}

// new snapshot
-snap = newSnapshot(p.config, p.signatures, number, hash, validators, voteAddrs, p.ethAPI)
+snap = newSnapshot(p.config, p.signatures, number, blockHash, validators, voteAddrs, p.ethAPI)

+// get turnLength from headers and use that for new turnLength
+turnLength, err := parseTurnLength(checkpoint, p.chainConfig, p.config)
+if err != nil {
+return nil, err
+}
+if turnLength != nil {
+snap.TurnLength = *turnLength
+}

+// snap.Recents is currently empty, which affects the following:
+// a. The function SignRecently - This is acceptable since an empty snap.Recents results in a more lenient check.
+// b. The function blockTimeVerifyForRamanujanFork - This is also acceptable as it won't be invoked during `snap.apply`.
+// c. This may cause a mismatch in the slash systemtx, but the transaction list is not verified during `snap.apply`.
+// snap.Attestation is nil, but Snapshot.updateAttestation will handle it correctly.

if err := snap.store(p.db); err != nil {
return nil, err
}
-log.Info("Stored checkpoint snapshot to disk", "number", number, "hash", hash)
+log.Info("Stored checkpoint snapshot to disk", "number", number, "hash", blockHash)
break
}
}
@@ -809,6 +879,17 @@ func (p *Parlia) verifySeal(chain consensus.ChainHeaderReader, header *types.Hea
return errCoinBaseMisMatch
}

+// check for double sign & add to cache
+key := proposalKey(*header)
+preHash, ok := p.recentHeaders.Get(key)
+if ok && preHash != header.Hash() {
+doubleSignCounter.Inc(1)
+log.Warn("DoubleSign detected", "block", header.Number, "miner", header.Coinbase,
+"hash1", preHash.(common.Hash), "hash2", header.Hash())
+} else {
+p.recentHeaders.Add(key, header.Hash())
+}

if _, ok := snap.Validators[signer]; !ok {
return errUnauthorizedValidator(signer.String())
}
@@ -863,6 +944,24 @@ func (p *Parlia) prepareValidators(header *types.Header) error {
return nil
}
func (p *Parlia) prepareTurnLength(chain consensus.ChainHeaderReader, header *types.Header) error {
if header.Number.Uint64()%p.config.Epoch != 0 ||
!p.chainConfig.IsBohr(header.Number, header.Time) {
return nil
}
turnLength, err := p.getTurnLength(chain, header)
if err != nil {
return err
}
if turnLength != nil {
header.Extra = append(header.Extra, *turnLength)
}
return nil
}
func (p *Parlia) assembleVoteAttestation(chain consensus.ChainHeaderReader, header *types.Header) error {
if !p.chainConfig.IsLuban(header.Number) || header.Number.Uint64() < 2 {
return nil
@@ -994,6 +1093,9 @@ func (p *Parlia) Prepare(chain consensus.ChainHeaderReader, header *types.Header
return err
}
if err := p.prepareTurnLength(chain, header); err != nil {
return err
}
// add extra seal space
header.Extra = append(header.Extra, make([]byte, extraSeal)...)
@@ -1044,6 +1146,30 @@ func (p *Parlia) verifyValidators(header *types.Header) error {
return nil
}
func (p *Parlia) verifyTurnLength(chain consensus.ChainHeaderReader, header *types.Header) error {
if header.Number.Uint64()%p.config.Epoch != 0 ||
!p.chainConfig.IsBohr(header.Number, header.Time) {
return nil
}
turnLengthFromHeader, err := parseTurnLength(header, p.chainConfig, p.config)
if err != nil {
return err
}
if turnLengthFromHeader != nil {
turnLength, err := p.getTurnLength(chain, header)
if err != nil {
return err
}
if turnLength != nil && *turnLength == *turnLengthFromHeader {
log.Debug("verifyTurnLength", "turnLength", *turnLength)
return nil
}
}
return errMismatchingEpochTurnLength
}
func (p *Parlia) distributeFinalityReward(chain consensus.ChainHeaderReader, state *state.StateDB, header *types.Header,
cx core.ChainContext, txs *[]*types.Transaction, receipts *[]*types.Receipt, systemTxs *[]*types.Transaction,
usedGas *uint64, mining bool) error {
@@ -1138,6 +1264,10 @@ func (p *Parlia) Finalize(chain consensus.ChainHeaderReader, header *types.Heade
return err
}

+if err := p.verifyTurnLength(chain, header); err != nil {
+return err
+}

cx := chainContext{Chain: chain, parlia: p}

parent := chain.GetHeaderByHash(header.ParentHash)
@@ -1164,7 +1294,7 @@
}
}
if header.Difficulty.Cmp(diffInTurn) != 0 {
-spoiledVal := snap.supposeValidator()
+spoiledVal := snap.inturnValidator()
signedRecently := false
if p.chainConfig.IsPlato(header.Number) {
signedRecently = snap.SignRecently(spoiledVal)
@@ -1255,7 +1385,7 @@ func (p *Parlia) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *
if err != nil {
return nil, nil, err
}
-spoiledVal := snap.supposeValidator()
+spoiledVal := snap.inturnValidator()
signedRecently := false
if p.chainConfig.IsPlato(header.Number) {
signedRecently = snap.SignRecently(spoiledVal)
@@ -1337,7 +1467,7 @@ func (p *Parlia) IsActiveValidatorAt(chain consensus.ChainHeaderReader, header *
func (p *Parlia) VerifyVote(chain consensus.ChainHeaderReader, vote *types.VoteEnvelope) error {
targetNumber := vote.Data.TargetNumber
targetHash := vote.Data.TargetHash
-header := chain.GetHeaderByHash(targetHash)
+header := chain.GetVerifiedBlockByHash(targetHash)
if header == nil {
log.Warn("BlockHeader at current voteBlockNumber is nil", "targetNumber", targetNumber, "targetHash", targetHash)
return errors.New("BlockHeader at current voteBlockNumber is nil")
@@ -1408,10 +1538,13 @@ func (p *Parlia) Delay(chain consensus.ChainReader, header *types.Header, leftOv
delay = delay - *leftOver
}

-// The blocking time should be no more than half of period
-half := time.Duration(p.config.Period) * time.Second / 2
-if delay > half {
-delay = half
+// The blocking time should be no more than half of period when snap.TurnLength == 1
+timeForMining := time.Duration(p.config.Period) * time.Second / 2
+if !snap.lastBlockInOneTurn(header.Number.Uint64()) {
+timeForMining = time.Duration(p.config.Period) * time.Second * 2 / 3
+}
+if delay > timeForMining {
+delay = timeForMining
}
return &delay
}
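With BSC's 3-second period this caps the sealing delay at 1.5s on the last block of a turn and 2s elsewhere in the turn (a worked restatement of the branch above, not new logic):

period := 3 * time.Second
timeForMining := period / 2 // 1.5s on the turn's last block
if !lastBlockInTurn {       // lastBlockInTurn: hypothetical bool standing in for snap.lastBlockInOneTurn(number)
timeForMining = period * 2 / 3 // 2s on the other blocks of the turn
}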
@@ -1482,12 +1615,15 @@ func (p *Parlia) Seal(chain consensus.ChainHeaderReader, block *types.Block, res
copy(header.Extra[len(header.Extra)-extraSeal:], sig)

if p.shouldWaitForCurrentBlockProcess(chain, header, snap) {
-log.Info("Waiting for received in turn block to process")
+highestVerifiedHeader := chain.GetHighestVerifiedHeader()
+// including time for writing and committing blocks
+waitProcessEstimate := math.Ceil(float64(highestVerifiedHeader.GasUsed) / float64(100_000_000))
+log.Info("Waiting for received in turn block to process", "waitProcessEstimate(Seconds)", waitProcessEstimate)
select {
case <-stop:
log.Info("Received block process finished, abort block seal")
return
-case <-time.After(time.Duration(processBackOffTime) * time.Second):
+case <-time.After(time.Duration(waitProcessEstimate) * time.Second):
if chain.CurrentHeader().Number.Uint64() >= header.Number.Uint64() {
log.Info("Process backoff time exhausted, and current header has updated to abort this seal")
return
@@ -1569,11 +1705,35 @@ func CalcDifficulty(snap *Snapshot, signer common.Address) *big.Int {
return new(big.Int).Set(diffNoTurn)
}
func encodeSigHeaderWithoutVoteAttestation(w io.Writer, header *types.Header, chainId *big.Int) {
err := rlp.Encode(w, []interface{}{
chainId,
header.ParentHash,
header.UncleHash,
header.Coinbase,
header.Root,
header.TxHash,
header.ReceiptHash,
header.Bloom,
header.Difficulty,
header.Number,
header.GasLimit,
header.GasUsed,
header.Time,
header.Extra[:extraVanity], // this will panic if extra is too short, should check before calling encodeSigHeaderWithoutVoteAttestation
header.MixDigest,
header.Nonce,
})
if err != nil {
panic("can't encode: " + err.Error())
}
}
// SealHash returns the hash of a block without vote attestation prior to it being sealed.
// So it's not the real hash of a block, just used as unique id to distinguish task
func (p *Parlia) SealHash(header *types.Header) (hash common.Hash) {
hasher := sha3.NewLegacyKeccak256()
-types.EncodeSigHeaderWithoutVoteAttestation(hasher, header, p.chainConfig.ChainID)
+encodeSigHeaderWithoutVoteAttestation(hasher, header, p.chainConfig.ChainID)
hasher.Sum(hash[:0])
return hash
}
@@ -1879,42 +2039,40 @@ func (p *Parlia) GetFinalizedHeader(chain consensus.ChainHeaderReader, header *t
// =========================== utility function ==========================
func (p *Parlia) backOffTime(snap *Snapshot, header *types.Header, val common.Address) uint64 {
if snap.inturn(val) {
+log.Debug("backOffTime", "blockNumber", header.Number, "in turn validator", val)
return 0
} else {
delay := initialBackOffTime
validators := snap.validators()
if p.chainConfig.IsPlanck(header.Number) {
-// reverse the key/value of snap.Recents to get recentsMap
-recentsMap := make(map[common.Address]uint64, len(snap.Recents))
-bound := uint64(0)
-if n, limit := header.Number.Uint64(), uint64(len(validators)/2+1); n > limit {
-bound = n - limit
-}
-for seen, recent := range snap.Recents {
-if seen <= bound {
-continue
-}
-recentsMap[recent] = seen
-}
+counts := snap.countRecents()
+for addr, seenTimes := range counts {
+log.Debug("backOffTime", "blockNumber", header.Number, "validator", addr, "seenTimes", seenTimes)
+}

// The backOffTime does not matter when a validator has signed recently.
-if _, ok := recentsMap[val]; ok {
+if snap.signRecentlyByCounts(val, counts) {
return 0
}

-inTurnAddr := validators[(snap.Number+1)%uint64(len(validators))]
-if _, ok := recentsMap[inTurnAddr]; ok {
+inTurnAddr := snap.inturnValidator()
+if snap.signRecentlyByCounts(inTurnAddr, counts) {
log.Debug("in turn validator has recently signed, skip initialBackOffTime",
"inTurnAddr", inTurnAddr)
delay = 0
}

-// Exclude the recently signed validators
+// Exclude the recently signed validators and the in turn validator
temp := make([]common.Address, 0, len(validators))
for _, addr := range validators {
-if _, ok := recentsMap[addr]; ok {
+if snap.signRecentlyByCounts(addr, counts) {
continue
}
+if p.chainConfig.IsBohr(header.Number, header.Time) {
+if addr == inTurnAddr {
+continue
+}
+}
temp = append(temp, addr)
}
validators = temp
@@ -1932,7 +2090,11 @@ func (p *Parlia) backOffTime(snap *Snapshot, header *types.Header, val common.Ad
return 0
}

-s := rand.NewSource(int64(snap.Number))
+randSeed := snap.Number
+if p.chainConfig.IsBohr(header.Number, header.Time) {
+randSeed = header.Number.Uint64() / uint64(snap.TurnLength)
+}
+s := rand.NewSource(int64(randSeed))
r := rand.New(s)
n := len(validators)
backOffSteps := make([]uint64, 0, n)
@@ -2011,3 +2173,8 @@ func applyMessage(
}
return msg.Gas() - returnGas, err
}
// proposalKey builds a key that combines the parent hash and the proposer address.
func proposalKey(header types.Header) string {
return header.ParentHash.String() + header.Coinbase.String()
}

View File

@@ -22,22 +22,44 @@ func TestImpactOfValidatorOutOfService(t *testing.T) {
testCases := []struct {
totalValidators int
downValidators int
+turnLength int
}{
-{3, 1},
-{5, 2},
-{10, 1},
-{10, 4},
-{21, 1},
-{21, 3},
-{21, 5},
-{21, 10},
+{3, 1, 1},
+{5, 2, 1},
+{10, 1, 2},
+{10, 4, 2},
+{21, 1, 3},
+{21, 3, 3},
+{21, 5, 4},
+{21, 10, 5},
}
for _, tc := range testCases {
-simulateValidatorOutOfService(tc.totalValidators, tc.downValidators)
+simulateValidatorOutOfService(tc.totalValidators, tc.downValidators, tc.turnLength)
}
}
-func simulateValidatorOutOfService(totalValidators int, downValidators int) {

// refer Snapshot.SignRecently
func signRecently(idx int, recents map[uint64]int, turnLength int) bool {
recentSignTimes := 0
for _, signIdx := range recents {
if signIdx == idx {
recentSignTimes += 1
}
}
return recentSignTimes >= turnLength
}
// refer Snapshot.minerHistoryCheckLen
func minerHistoryCheckLen(totalValidators int, turnLength int) uint64 {
return uint64(totalValidators/2+1)*uint64(turnLength) - 1
}
// refer Snapshot.inturnValidator
func inturnValidator(totalValidators int, turnLength int, height int) int {
return height / turnLength % totalValidators
}
func simulateValidatorOutOfService(totalValidators int, downValidators int, turnLength int) {
downBlocks := 10000
recoverBlocks := 10000
recents := make(map[uint64]int)
@@ -55,12 +77,7 @@ func simulateValidatorOutOfService(totalValidators int, downValidators int) {
delete(validators, down[i])
}
isRecentSign := func(idx int) bool {
-for _, signIdx := range recents {
-if signIdx == idx {
-return true
-}
-}
-return false
+return signRecently(idx, recents, turnLength)
}
isInService := func(idx int) bool {
return validators[idx]
@@ -68,10 +85,10 @@
downDelay := uint64(0)
for h := 1; h <= downBlocks; h++ {
-if limit := uint64(totalValidators/2 + 1); uint64(h) >= limit {
+if limit := minerHistoryCheckLen(totalValidators, turnLength) + 1; uint64(h) >= limit {
delete(recents, uint64(h)-limit)
}
-proposer := h % totalValidators
+proposer := inturnValidator(totalValidators, turnLength, h)
if !isInService(proposer) || isRecentSign(proposer) {
candidates := make(map[int]bool, totalValidators/2)
for v := range validators {
@@ -99,10 +116,10 @@
recoverDelay := uint64(0)
lastseen := downBlocks
for h := downBlocks + 1; h <= downBlocks+recoverBlocks; h++ {
-if limit := uint64(totalValidators/2 + 1); uint64(h) >= limit {
+if limit := minerHistoryCheckLen(totalValidators, turnLength) + 1; uint64(h) >= limit {
delete(recents, uint64(h)-limit)
}
-proposer := h % totalValidators
+proposer := inturnValidator(totalValidators, turnLength, h)
if !isInService(proposer) || isRecentSign(proposer) {
lastseen = h
candidates := make(map[int]bool, totalValidators/2)

View File

@@ -22,6 +22,7 @@ import (
"encoding/json"
"errors"
"fmt"
+"math"
"sort"

lru "github.com/hashicorp/golang-lru"
@@ -43,6 +44,7 @@ type Snapshot struct {
Number uint64 `json:"number"` // Block number where the snapshot was created
Hash common.Hash `json:"hash"` // Block hash where the snapshot was created
+TurnLength uint8 `json:"turn_length"` // Length of `turn`, meaning the consecutive number of blocks a validator receives priority for block production
Validators map[common.Address]*ValidatorInfo `json:"validators"` // Set of authorized validators at this moment
Recents map[uint64]common.Address `json:"recents"` // Set of recent validators for spam protections
RecentForkHashes map[uint64]string `json:"recent_fork_hashes"` // Set of recent forkHash
@@ -72,6 +74,7 @@ func newSnapshot(
sigCache: sigCache,
Number: number,
Hash: hash,
+TurnLength: defaultTurnLength,
Recents: make(map[uint64]common.Address),
RecentForkHashes: make(map[uint64]string),
Validators: make(map[common.Address]*ValidatorInfo),
@@ -114,6 +117,10 @@ func loadSnapshot(config *params.ParliaConfig, sigCache *lru.ARCCache, db ethdb.
if err := json.Unmarshal(blob, snap); err != nil {
return nil, err
}
+if snap.TurnLength == 0 { // no TurnLength field in old snapshots
+snap.TurnLength = defaultTurnLength
+}
snap.config = config
snap.sigCache = sigCache
snap.ethAPI = ethAPI
@@ -138,6 +145,7 @@ func (s *Snapshot) copy() *Snapshot {
sigCache: s.sigCache,
Number: s.Number,
Hash: s.Hash,
+TurnLength: s.TurnLength,
Validators: make(map[common.Address]*ValidatorInfo),
Recents: make(map[uint64]common.Address),
RecentForkHashes: make(map[uint64]string),
@@ -210,17 +218,45 @@ func (s *Snapshot) updateAttestation(header *types.Header, chainConfig *params.C
}
}

-func (s *Snapshot) SignRecently(validator common.Address) bool {
-for seen, recent := range s.Recents {
-if recent == validator {
-if limit := uint64(len(s.Validators)/2 + 1); s.Number+1 < limit || seen > s.Number+1-limit {
-return true
-}
-}
-}
-return false
-}
+func (s *Snapshot) versionHistoryCheckLen() uint64 {
+return uint64(len(s.Validators)) * uint64(s.TurnLength)
+}
+
+func (s *Snapshot) minerHistoryCheckLen() uint64 {
+return (uint64(len(s.Validators))/2+1)*uint64(s.TurnLength) - 1
+}
+
+func (s *Snapshot) countRecents() map[common.Address]uint8 {
+leftHistoryBound := uint64(0) // the bound is excluded
+checkHistoryLength := s.minerHistoryCheckLen()
+if s.Number > checkHistoryLength {
+leftHistoryBound = s.Number - checkHistoryLength
+}
+counts := make(map[common.Address]uint8, len(s.Validators))
+for seen, recent := range s.Recents {
+if seen <= leftHistoryBound || recent == (common.Address{}) /*when seen == `epochKey`*/ {
+continue
+}
+counts[recent] += 1
+}
+return counts
+}
+
+func (s *Snapshot) signRecentlyByCounts(validator common.Address, counts map[common.Address]uint8) bool {
+if seenTimes, ok := counts[validator]; ok && seenTimes >= s.TurnLength {
+if seenTimes > s.TurnLength {
+log.Warn("produce more blocks than expected!", "validator", validator, "seenTimes", seenTimes)
+}
+return true
+}
+return false
+}
+
+func (s *Snapshot) SignRecently(validator common.Address) bool {
+return s.signRecentlyByCounts(validator, s.countRecents())
+}
func (s *Snapshot) apply(headers []*types.Header, chain consensus.ChainHeaderReader, parents []*types.Header, chainConfig *params.ChainConfig) (*Snapshot, error) {
// Allow passing in no headers for cleaner code
if len(headers) == 0 {
@@ -247,10 +283,10 @@ func (s *Snapshot) apply(headers []*types.Header, chain consensus.ChainHeaderRea
for _, header := range headers {
number := header.Number.Uint64()
// Delete the oldest validator from the recent list to allow it signing again
-if limit := uint64(len(snap.Validators)/2 + 1); number >= limit {
+if limit := snap.minerHistoryCheckLen() + 1; number >= limit {
delete(snap.Recents, number-limit)
}
-if limit := uint64(len(snap.Validators)); number >= limit {
+if limit := snap.versionHistoryCheckLen(); number >= limit {
delete(snap.RecentForkHashes, number-limit)
}
// Resolve the authorization key and check against signers
// Resolve the authorization key and check against signers // Resolve the authorization key and check against signers
@ -261,19 +297,47 @@ func (s *Snapshot) apply(headers []*types.Header, chain consensus.ChainHeaderRea
if _, ok := snap.Validators[validator]; !ok { if _, ok := snap.Validators[validator]; !ok {
return nil, errUnauthorizedValidator(validator.String()) return nil, errUnauthorizedValidator(validator.String())
} }
for _, recent := range snap.Recents { if chainConfig.IsBohr(header.Number, header.Time) {
if recent == validator { if snap.SignRecently(validator) {
return nil, errRecentlySigned return nil, errRecentlySigned
} }
} else {
for _, recent := range snap.Recents {
if recent == validator {
return nil, errRecentlySigned
}
}
} }
snap.Recents[number] = validator snap.Recents[number] = validator
snap.RecentForkHashes[number] = hex.EncodeToString(header.Extra[extraVanity-nextForkHashSize : extraVanity])
snap.updateAttestation(header, chainConfig, s.config)
// change validator set // change validator set
if number > 0 && number%s.config.Epoch == uint64(len(snap.Validators)/2) { if number > 0 && number%s.config.Epoch == snap.minerHistoryCheckLen() {
checkpointHeader := FindAncientHeader(header, uint64(len(snap.Validators)/2), chain, parents) epochKey := math.MaxUint64 - header.Number.Uint64()/s.config.Epoch // impossible used as a block number
if chainConfig.IsBohr(header.Number, header.Time) {
// after switching the validator set, snap.Validators may become larger,
// then the unexpected second switch will happen, just skip it.
if _, ok := snap.Recents[epochKey]; ok {
continue
}
}
checkpointHeader := FindAncientHeader(header, snap.minerHistoryCheckLen(), chain, parents)
if checkpointHeader == nil { if checkpointHeader == nil {
return nil, consensus.ErrUnknownAncestor return nil, consensus.ErrUnknownAncestor
} }
oldVersionsLen := snap.versionHistoryCheckLen()
// get turnLength from headers and use that for new turnLength
turnLength, err := parseTurnLength(checkpointHeader, chainConfig, s.config)
if err != nil {
return nil, err
}
if turnLength != nil {
snap.TurnLength = *turnLength
log.Debug("validator set switch", "turnLength", *turnLength)
}
// get validators from headers and use that for new validator set // get validators from headers and use that for new validator set
newValArr, voteAddrs, err := parseValidators(checkpointHeader, chainConfig, s.config) newValArr, voteAddrs, err := parseValidators(checkpointHeader, chainConfig, s.config)
if err != nil { if err != nil {
@ -289,18 +353,18 @@ func (s *Snapshot) apply(headers []*types.Header, chain consensus.ChainHeaderRea
} }
} }
} }
oldLimit := len(snap.Validators)/2 + 1 if chainConfig.IsBohr(header.Number, header.Time) {
newLimit := len(newVals)/2 + 1 // BEP-404: Clear Miner History when Switching Validators Set
if newLimit < oldLimit { snap.Recents = make(map[uint64]common.Address)
for i := 0; i < oldLimit-newLimit; i++ { snap.Recents[epochKey] = common.Address{}
delete(snap.Recents, number-uint64(newLimit)-uint64(i)) log.Debug("Recents are cleared up", "blockNumber", number)
} } else {
} oldLimit := len(snap.Validators)/2 + 1
oldLimit = len(snap.Validators) newLimit := len(newVals)/2 + 1
newLimit = len(newVals) if newLimit < oldLimit {
if newLimit < oldLimit { for i := 0; i < oldLimit-newLimit; i++ {
for i := 0; i < oldLimit-newLimit; i++ { delete(snap.Recents, number-uint64(newLimit)-uint64(i))
delete(snap.RecentForkHashes, number-uint64(newLimit)-uint64(i)) }
} }
} }
snap.Validators = newVals snap.Validators = newVals
@ -310,11 +374,10 @@ func (s *Snapshot) apply(headers []*types.Header, chain consensus.ChainHeaderRea
snap.Validators[val].Index = idx + 1 // offset by 1 snap.Validators[val].Index = idx + 1 // offset by 1
} }
} }
for i := snap.versionHistoryCheckLen(); i < oldVersionsLen; i++ {
delete(snap.RecentForkHashes, number-i)
}
} }
snap.updateAttestation(header, chainConfig, s.config)
snap.RecentForkHashes[number] = hex.EncodeToString(header.Extra[extraVanity-nextForkHashSize : extraVanity])
} }
snap.Number += uint64(len(headers)) snap.Number += uint64(len(headers))
snap.Hash = headers[len(headers)-1].Hash() snap.Hash = headers[len(headers)-1].Hash()
@@ -331,17 +394,20 @@ func (s *Snapshot) validators() []common.Address {
return validators
}

-// inturn returns if a validator at a given block height is in-turn or not.
-func (s *Snapshot) inturn(validator common.Address) bool {
-validators := s.validators()
-offset := (s.Number + 1) % uint64(len(validators))
-return validators[offset] == validator
-}
+// lastBlockInOneTurn returns if the block at height `blockNumber` is the last block in current turn.
+func (s *Snapshot) lastBlockInOneTurn(blockNumber uint64) bool {
+return (blockNumber+1)%uint64(s.TurnLength) == 0
+}
+
+// inturn returns if a validator at a given block height is in-turn or not.
+func (s *Snapshot) inturn(validator common.Address) bool {
+return s.inturnValidator() == validator
+}

-// inturnValidator returns the validator at a given block height.
+// inturnValidator returns the validator for the following block height.
func (s *Snapshot) inturnValidator() common.Address {
validators := s.validators()
-offset := (s.Number + 1) % uint64(len(validators))
+offset := (s.Number + 1) / uint64(s.TurnLength) % uint64(len(validators))
return validators[offset]
}
@@ -379,12 +445,6 @@ func (s *Snapshot) indexOfVal(validator common.Address) int {
return -1
}

-func (s *Snapshot) supposeValidator() common.Address {
-validators := s.validators()
-index := (s.Number + 1) % uint64(len(validators))
-return validators[index]
-}
func parseValidators(header *types.Header, chainConfig *params.ChainConfig, parliaConfig *params.ParliaConfig) ([]common.Address, []types.BLSPublicKey, error) { func parseValidators(header *types.Header, chainConfig *params.ChainConfig, parliaConfig *params.ParliaConfig) ([]common.Address, []types.BLSPublicKey, error) {
validatorsBytes := getValidatorBytesFromHeader(header, chainConfig, parliaConfig) validatorsBytes := getValidatorBytesFromHeader(header, chainConfig, parliaConfig)
if len(validatorsBytes) == 0 { if len(validatorsBytes) == 0 {
@@ -410,6 +470,24 @@ func parseValidators(header *types.Header, chainConfig *params.ChainConfig, parl
 	return cnsAddrs, voteAddrs, nil
 }
 
+func parseTurnLength(header *types.Header, chainConfig *params.ChainConfig, parliaConfig *params.ParliaConfig) (*uint8, error) {
+	if header.Number.Uint64()%parliaConfig.Epoch != 0 ||
+		!chainConfig.IsBohr(header.Number, header.Time) {
+		return nil, nil
+	}
+
+	if len(header.Extra) <= extraVanity+extraSeal {
+		return nil, errInvalidSpanValidators
+	}
+	num := int(header.Extra[extraVanity])
+	pos := extraVanity + validatorNumberSize + num*validatorBytesLength
+	if len(header.Extra) <= pos {
+		return nil, errInvalidTurnLength
+	}
+	turnLength := header.Extra[pos]
+	return &turnLength, nil
+}
+
 func FindAncientHeader(header *types.Header, ite uint64, chain consensus.ChainHeaderReader, candidateParents []*types.Header) *types.Header {
 	ancient := header
 	for i := uint64(1); i <= ite; i++ {
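parseTurnLength reads a single byte sitting directly behind the validator table in an epoch header's extra-data. A rough sketch of the layout it assumes (constants per parlia: 32-byte vanity, 1-byte validator count, 68 bytes per validator after Luban, 65-byte seal; the helper below is illustrative, not repo code):

```go
// turnLengthOffset mirrors parseTurnLength's offset arithmetic for a
// post-Bohr epoch header.
//
// layout: | 32B vanity | 1B num | num*68B validator table | 1B turnLength | ... | 65B seal |
func turnLengthOffset(numValidators int) int {
	const extraVanity, validatorNumberSize, validatorBytesLength = 32, 1, 68
	return extraVanity + validatorNumberSize + numValidators*validatorBytesLength
}
```

For a 21-validator epoch header that puts the turn-length byte at offset 32 + 1 + 21*68 = 1461.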

core/blockchain.go
@@ -71,6 +71,8 @@ var (
 	justifiedBlockGauge = metrics.NewRegisteredGauge("chain/head/justified", nil)
 	finalizedBlockGauge = metrics.NewRegisteredGauge("chain/head/finalized", nil)
 
+	blockInsertMgaspsGauge = metrics.NewRegisteredGauge("chain/insert/mgasps", nil)
+
 	chainInfoGauge = metrics.NewRegisteredGaugeInfo("chain/info", nil)
 
 	accountReadTimer = metrics.NewRegisteredTimer("chain/account/reads", nil)
@@ -98,6 +100,8 @@ var (
 	blockReorgAddMeter  = metrics.NewRegisteredMeter("chain/reorg/add", nil)
 	blockReorgDropMeter = metrics.NewRegisteredMeter("chain/reorg/drop", nil)
 
+	blockRecvTimeDiffGauge = metrics.NewRegisteredGauge("chain/block/recvtimediff", nil)
+
 	errStateRootVerificationFailed = errors.New("state root verification failed")
 	errInsertionInterrupted        = errors.New("insertion is interrupted")
 	errChainStopped                = errors.New("blockchain is stopped")
@@ -165,6 +169,8 @@ type CacheConfig struct {
 	StateHistory    uint64 // Number of blocks from head whose state histories are reserved.
 	StateScheme     string // Scheme used to store ethereum states and merkle tree nodes on top
 	PathSyncFlush   bool   // Whether sync flush the trienodebuffer of pathdb to disk.
+	JournalFilePath string
+	JournalFile     bool
 
 	SnapshotNoBuild bool // Whether the background generation is allowed
 	SnapshotWait    bool // Wait for snapshot construction on startup. TODO(karalabe): This is a dirty hack for testing, nuke it
@@ -184,10 +190,12 @@ func (c *CacheConfig) triedbConfig() *triedb.Config {
 	}
 	if c.StateScheme == rawdb.PathScheme {
 		config.PathDB = &pathdb.Config{
 			SyncFlush:       c.PathSyncFlush,
 			StateHistory:    c.StateHistory,
 			CleanCacheSize:  c.TrieCleanLimit * 1024 * 1024,
 			DirtyCacheSize:  c.TrieDirtyLimit * 1024 * 1024,
+			JournalFilePath: c.JournalFilePath,
+			JournalFile:     c.JournalFile,
 		}
 	}
 	return config
@@ -253,25 +261,28 @@ type BlockChain struct {
 	triesInMemory uint64
 	txIndexer     *txIndexer // Transaction indexer, might be nil if not enabled
 
 	hc                       *HeaderChain
 	rmLogsFeed               event.Feed
 	chainFeed                event.Feed
 	chainSideFeed            event.Feed
 	chainHeadFeed            event.Feed
 	chainBlockFeed           event.Feed
 	logsFeed                 event.Feed
 	blockProcFeed            event.Feed
 	finalizedHeaderFeed      event.Feed
+	highestVerifiedBlockFeed event.Feed
 	scope                    event.SubscriptionScope
 	genesisBlock             *types.Block
 
 	// This mutex synchronizes chain write operations.
 	// Readers don't need to take it, they can just read the database.
 	chainmu *syncx.ClosableMutex
 
 	highestVerifiedHeader atomic.Pointer[types.Header]
+	highestVerifiedBlock  atomic.Pointer[types.Header]
 	currentBlock          atomic.Pointer[types.Header] // Current head of the chain
 	currentSnapBlock      atomic.Pointer[types.Header] // Current head of snap-sync
+	currentFinalBlock     atomic.Pointer[types.Header] // Latest (consensus) finalized block
 	chasingHead           atomic.Pointer[types.Header]
 
 	bodyCache *lru.Cache[common.Hash, *types.Body]
@@ -294,6 +305,7 @@ type BlockChain struct {
 	diffLayerFreezerBlockLimit uint64
 
 	wg            sync.WaitGroup
+	dbWg          sync.WaitGroup
 	quit          chan struct{}  // shutdown signal, closed in Stop.
 	stopping      atomic.Bool    // false if chain is running, true when stopped
 	procInterrupt atomic.Bool    // interrupt signaler for block processing
@@ -392,6 +404,7 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, genesis *Genesis
 	}
 	bc.highestVerifiedHeader.Store(nil)
+	bc.highestVerifiedBlock.Store(nil)
 	bc.currentBlock.Store(nil)
 	bc.currentSnapBlock.Store(nil)
 	bc.chasingHead.Store(nil)
@@ -412,7 +425,7 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, genesis *Genesis
 	// Make sure the state associated with the block is available, or log out
 	// if there is no available state, waiting for state sync.
 	head := bc.CurrentBlock()
-	if !bc.NoTries() && !bc.HasState(head.Root) {
+	if !bc.HasState(head.Root) {
 		if head.Number.Uint64() == 0 {
 			// The genesis state is missing, which is only possible in the path-based
 			// scheme. This situation occurs when the initial state sync is not finished
@@ -428,7 +441,7 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, genesis *Genesis
 		if bc.cacheConfig.SnapshotLimit > 0 {
 			diskRoot = rawdb.ReadSnapshotRoot(bc.db)
 		}
-		if bc.triedb.Scheme() == rawdb.PathScheme {
+		if bc.triedb.Scheme() == rawdb.PathScheme && !bc.NoTries() {
 			recoverable, _ := bc.triedb.Recoverable(diskRoot)
 			if !bc.HasState(diskRoot) && !recoverable {
 				diskRoot = bc.triedb.Head()
@@ -454,8 +467,8 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, genesis *Genesis
 		}
 	}
 	// Ensure that a previous crash in SetHead doesn't leave extra ancients
-	if frozen, err := bc.db.ItemAmountInAncient(); err == nil && frozen > 0 {
-		frozen, err = bc.db.Ancients()
+	if frozen, err := bc.db.BlockStore().ItemAmountInAncient(); err == nil && frozen > 0 {
+		frozen, err = bc.db.BlockStore().Ancients()
 		if err != nil {
 			return nil, err
 		}
@@ -649,13 +662,6 @@ func (bc *BlockChain) cacheDiffLayer(diffLayer *types.DiffLayer, diffLayerCh cha
 	}
 }
 
-func (bc *BlockChain) cacheBlock(hash common.Hash, block *types.Block) {
-	bc.blockCache.Add(hash, block)
-	if bc.chainConfig.IsCancun(block.Number(), block.Time()) {
-		bc.sidecarsCache.Add(hash, block.Sidecars())
-	}
-}
-
 // empty returns an indicator whether the blockchain is empty.
 // Note, it's a special case that we connect a non-empty ancient
 // database with an empty node, so that we can plugin the ancient
@@ -991,12 +997,23 @@ func (bc *BlockChain) rewindPathHead(head *types.Header, root common.Hash) (*typ
 // then block number zero is returned, indicating that snapshot recovery is disabled
 // and the whole snapshot should be auto-generated in case of head mismatch.
 func (bc *BlockChain) rewindHead(head *types.Header, root common.Hash) (*types.Header, uint64) {
-	if bc.triedb.Scheme() == rawdb.PathScheme {
+	if bc.triedb.Scheme() == rawdb.PathScheme && !bc.NoTries() {
 		return bc.rewindPathHead(head, root)
 	}
 	return bc.rewindHashHead(head, root)
 }
 
+// SetFinalized sets the finalized block.
+// This function differs slightly from go-ethereum: the finalizedBlockGauge is
+// fine-tuned by the outer layer instead of being updated here.
+func (bc *BlockChain) SetFinalized(header *types.Header) {
+	bc.currentFinalBlock.Store(header)
+	if header != nil {
+		rawdb.WriteFinalizedBlockHash(bc.db.BlockStore(), header.Hash())
+	} else {
+		rawdb.WriteFinalizedBlockHash(bc.db.BlockStore(), common.Hash{})
+	}
+}
+
 // setHeadBeyondRoot rewinds the local chain to a new head with the extra condition
 // that the rewind must pass the specified state root. This method is meant to be
 // used when rewinding with snapshots enabled to ensure that we go back further than
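The finalized marker persisted by SetFinalized can be read back with the standard go-ethereum rawdb accessor; a minimal sketch of the read path (the helper name is illustrative, and the zero hash means "no finalized block recorded"):

```go
// loadFinalized is a sketch only: it recovers the header that SetFinalized
// last persisted to the block store, or nil if none was recorded.
func loadFinalized(bc *BlockChain) *types.Header {
	hash := rawdb.ReadFinalizedBlockHash(bc.db.BlockStore())
	if hash == (common.Hash{}) {
		return nil
	}
	return bc.GetHeaderByHash(hash)
}
```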
@@ -1028,6 +1045,19 @@ func (bc *BlockChain) setHeadBeyondRoot(head uint64, time uint64, root common.Ha
 	// block. Note, depth equality is permitted to allow using SetHead as a
 	// chain reparation mechanism without deleting any data!
 	if currentBlock := bc.CurrentBlock(); currentBlock != nil && header.Number.Uint64() <= currentBlock.Number.Uint64() {
+		// load bc.snaps for the judge `HasState`
+		if bc.NoTries() {
+			if bc.cacheConfig.SnapshotLimit > 0 {
+				snapconfig := snapshot.Config{
+					CacheSize:  bc.cacheConfig.SnapshotLimit,
+					NoBuild:    bc.cacheConfig.SnapshotNoBuild,
+					AsyncBuild: !bc.cacheConfig.SnapshotWait,
+				}
+				bc.snaps, _ = snapshot.New(snapconfig, bc.db, bc.triedb, header.Root, int(bc.cacheConfig.TriesInMemory), bc.NoTries())
+			}
+			defer func() { bc.snaps = nil }()
+		}
+
 		var newHeadBlock *types.Header
 		newHeadBlock, rootNumber = bc.rewindHead(header, root)
 		rawdb.WriteHeadBlockHash(db, newHeadBlock.Hash())
@@ -1075,7 +1105,7 @@ func (bc *BlockChain) setHeadBeyondRoot(head uint64, time uint64, root common.Ha
 			// intent afterwards is full block importing, delete the chain segment
 			// between the stateful-block and the sethead target.
 			var wipe bool
-			frozen, _ := bc.db.Ancients()
+			frozen, _ := bc.db.BlockStore().Ancients()
 			if headNumber+1 < frozen {
 				wipe = pivot == nil || headNumber >= *pivot
 			}
@@ -1084,11 +1114,11 @@ func (bc *BlockChain) setHeadBeyondRoot(head uint64, time uint64, root common.Ha
 	// Rewind the header chain, deleting all block bodies until then
 	delFn := func(db ethdb.KeyValueWriter, hash common.Hash, num uint64) {
 		// Ignore the error here since light client won't hit this path
-		frozen, _ := bc.db.Ancients()
+		frozen, _ := bc.db.BlockStore().Ancients()
 		if num+1 <= frozen {
 			// Truncate all relative data(header, total difficulty, body, receipt
 			// and canonical hash) from ancient store.
-			if _, err := bc.db.TruncateHead(num); err != nil {
+			if _, err := bc.db.BlockStore().TruncateHead(num); err != nil {
 				log.Crit("Failed to truncate ancient data", "number", num, "err", err)
 			}
 			// Remove the hash <-> number mapping from the active store.
@@ -1106,7 +1136,7 @@ func (bc *BlockChain) setHeadBeyondRoot(head uint64, time uint64, root common.Ha
 	// If SetHead was only called as a chain reparation method, try to skip
 	// touching the header chain altogether, unless the freezer is broken
 	if repair {
-		if target, force := updateFn(bc.db, bc.CurrentBlock()); force {
+		if target, force := updateFn(bc.db.BlockStore(), bc.CurrentBlock()); force {
 			bc.hc.SetHead(target.Number.Uint64(), updateFn, delFn)
 		}
 	} else {
@@ -1129,6 +1159,11 @@ func (bc *BlockChain) setHeadBeyondRoot(head uint64, time uint64, root common.Ha
 	bc.txLookupCache.Purge()
 	bc.futureBlocks.Purge()
 
+	if finalized := bc.CurrentFinalBlock(); finalized != nil && head < finalized.Number.Uint64() {
+		log.Error("SetHead invalidated finalized block")
+		bc.SetFinalized(nil)
+	}
+
 	return rootNumber, bc.loadLastState()
 }
@@ -1197,10 +1232,10 @@ func (bc *BlockChain) ResetWithGenesisBlock(genesis *types.Block) error {
 	defer bc.chainmu.Unlock()
 
 	// Prepare the genesis block and reinitialise the chain
-	batch := bc.db.NewBatch()
-	rawdb.WriteTd(batch, genesis.Hash(), genesis.NumberU64(), genesis.Difficulty())
-	rawdb.WriteBlock(batch, genesis)
-	if err := batch.Write(); err != nil {
+	blockBatch := bc.db.BlockStore().NewBatch()
+	rawdb.WriteTd(blockBatch, genesis.Hash(), genesis.NumberU64(), genesis.Difficulty())
+	rawdb.WriteBlock(blockBatch, genesis)
+	if err := blockBatch.Write(); err != nil {
 		log.Crit("Failed to write genesis block", "err", err)
 	}
 	bc.writeHeadBlock(genesis)
@@ -1262,18 +1297,33 @@ func (bc *BlockChain) ExportN(w io.Writer, first uint64, last uint64) error {
 //
 // Note, this function assumes that the `mu` mutex is held!
 func (bc *BlockChain) writeHeadBlock(block *types.Block) {
-	// Add the block to the canonical chain number scheme and mark as the head
-	batch := bc.db.NewBatch()
-	rawdb.WriteHeadHeaderHash(batch, block.Hash())
-	rawdb.WriteHeadFastBlockHash(batch, block.Hash())
-	rawdb.WriteCanonicalHash(batch, block.Hash(), block.NumberU64())
-	rawdb.WriteTxLookupEntriesByBlock(batch, block)
-	rawdb.WriteHeadBlockHash(batch, block.Hash())
-
-	// Flush the whole batch into the disk, exit the node if failed
-	if err := batch.Write(); err != nil {
-		log.Crit("Failed to update chain indexes and markers", "err", err)
-	}
+	bc.dbWg.Add(2)
+	defer bc.dbWg.Wait()
+	go func() {
+		defer bc.dbWg.Done()
+		// Add the block to the canonical chain number scheme and mark as the head
+		blockBatch := bc.db.BlockStore().NewBatch()
+		rawdb.WriteCanonicalHash(blockBatch, block.Hash(), block.NumberU64())
+		rawdb.WriteHeadHeaderHash(blockBatch, block.Hash())
+		rawdb.WriteHeadBlockHash(blockBatch, block.Hash())
+		rawdb.WriteHeadFastBlockHash(blockBatch, block.Hash())
+		// Flush the whole batch into the disk, exit the node if failed
+		if err := blockBatch.Write(); err != nil {
+			log.Crit("Failed to update chain indexes and markers in block db", "err", err)
+		}
+	}()
+	go func() {
+		defer bc.dbWg.Done()
+		batch := bc.db.NewBatch()
+		rawdb.WriteTxLookupEntriesByBlock(batch, block)
+		// Flush the whole batch into the disk, exit the node if failed
+		if err := batch.Write(); err != nil {
+			log.Crit("Failed to update chain indexes in chain db", "err", err)
+		}
+	}()
+
 	// Update all in-memory chain markers in the last step
 	bc.hc.SetCurrentHeader(block.Header())
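writeHeadBlock now fans the head markers (block store) and the tx-lookup index (chain db) out to two goroutines and joins on dbWg before touching the in-memory markers. A standalone sketch of the same fan-out/join pattern (names illustrative, not the bsc API):

```go
package main

import (
	"fmt"
	"sync"
)

// writeBoth fans two independent batch writes out to separate stores and
// blocks until both have been flushed, mirroring writeHeadBlock above.
func writeBoth(writeMarkers, writeIndexes func() error) error {
	var wg sync.WaitGroup
	errs := make([]error, 2)
	wg.Add(2)
	go func() { defer wg.Done(); errs[0] = writeMarkers() }()
	go func() { defer wg.Done(); errs[1] = writeIndexes() }()
	wg.Wait()
	for _, err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := writeBoth(
		func() error { fmt.Println("block store: head markers"); return nil },
		func() error { fmt.Println("chain db: tx lookup entries"); return nil },
	)
	fmt.Println("done, err =", err)
}
```

The two batches touch disjoint key spaces, so they can be flushed concurrently without ordering hazards.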
@@ -1345,7 +1395,7 @@ func (bc *BlockChain) Stop() {
 	if !bc.cacheConfig.TrieDirtyDisabled {
 		triedb := bc.triedb
 		var once sync.Once
-		for _, offset := range []uint64{0, 1, TriesInMemory - 1} {
+		for _, offset := range []uint64{0, 1, bc.TriesInMemory() - 1} {
 			if number := bc.CurrentBlock().Number.Uint64(); number > offset {
 				recent := bc.GetBlockByNumber(number - offset)
 				log.Info("Writing cached state to disk", "block", recent.Number(), "hash", recent.Hash(), "root", recent.Root())
@@ -1354,7 +1404,7 @@ func (bc *BlockChain) Stop() {
 			} else {
 				rawdb.WriteSafePointBlockNumber(bc.db, recent.NumberU64())
 				once.Do(func() {
-					rawdb.WriteHeadBlockHash(bc.db, recent.Hash())
+					rawdb.WriteHeadBlockHash(bc.db.BlockStore(), recent.Hash())
 				})
 			}
 		}
@@ -1494,7 +1544,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		} else if !reorg {
 			return false
 		}
-		rawdb.WriteHeadFastBlockHash(bc.db, head.Hash())
+		rawdb.WriteHeadFastBlockHash(bc.db.BlockStore(), head.Hash())
 		bc.currentSnapBlock.Store(head.Header())
 		headFastBlockGauge.Update(int64(head.NumberU64()))
 		return true
@@ -1511,9 +1561,9 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		// Ensure genesis is in ancients.
 		if first.NumberU64() == 1 {
-			if frozen, _ := bc.db.Ancients(); frozen == 0 {
+			if frozen, _ := bc.db.BlockStore().Ancients(); frozen == 0 {
 				td := bc.genesisBlock.Difficulty()
-				writeSize, err := rawdb.WriteAncientBlocks(bc.db, []*types.Block{bc.genesisBlock}, []types.Receipts{nil}, td)
+				writeSize, err := rawdb.WriteAncientBlocks(bc.db.BlockStore(), []*types.Block{bc.genesisBlock}, []types.Receipts{nil}, td)
 				if err != nil {
 					log.Error("Error writing genesis to ancients", "err", err)
 					return 0, err
@@ -1531,7 +1581,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		// Write all chain data to ancients.
 		td := bc.GetTd(first.Hash(), first.NumberU64())
-		writeSize, err := rawdb.WriteAncientBlocksWithBlobs(bc.db, blockChain, receiptChain, td)
+		writeSize, err := rawdb.WriteAncientBlocksWithBlobs(bc.db.BlockStore(), blockChain, receiptChain, td)
 		if err != nil {
 			log.Error("Error importing chain data to ancients", "err", err)
 			return 0, err
@@ -1539,7 +1589,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		size += writeSize
 
 		// Sync the ancient store explicitly to ensure all data has been flushed to disk.
-		if err := bc.db.Sync(); err != nil {
+		if err := bc.db.BlockStore().Sync(); err != nil {
 			return 0, err
 		}
 		// Update the current snap block because all block data is now present in DB.
@@ -1547,7 +1597,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		if !updateHead(blockChain[len(blockChain)-1]) {
 			// We end up here if the header chain has reorg'ed, and the blocks/receipts
 			// don't match the canonical chain.
-			if _, err := bc.db.TruncateHead(previousSnapBlock + 1); err != nil {
+			if _, err := bc.db.BlockStore().TruncateHead(previousSnapBlock + 1); err != nil {
 				log.Error("Can't truncate ancient store after failed insert", "err", err)
 			}
 			return 0, errSideChainReceipts
@@ -1555,24 +1605,24 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		// Delete block data from the main database.
 		var (
-			batch       = bc.db.NewBatch()
 			canonHashes = make(map[common.Hash]struct{})
+			blockBatch  = bc.db.BlockStore().NewBatch()
 		)
 		for _, block := range blockChain {
 			canonHashes[block.Hash()] = struct{}{}
 			if block.NumberU64() == 0 {
 				continue
 			}
-			rawdb.DeleteCanonicalHash(batch, block.NumberU64())
-			rawdb.DeleteBlockWithoutNumber(batch, block.Hash(), block.NumberU64())
+			rawdb.DeleteCanonicalHash(blockBatch, block.NumberU64())
+			rawdb.DeleteBlockWithoutNumber(blockBatch, block.Hash(), block.NumberU64())
 		}
 		// Delete side chain hash-to-number mappings.
-		for _, nh := range rawdb.ReadAllHashesInRange(bc.db, first.NumberU64(), last.NumberU64()) {
+		for _, nh := range rawdb.ReadAllHashesInRange(bc.db.BlockStore(), first.NumberU64(), last.NumberU64()) {
 			if _, canon := canonHashes[nh.Hash]; !canon {
-				rawdb.DeleteHeader(batch, nh.Hash, nh.Number)
+				rawdb.DeleteHeader(blockBatch, nh.Hash, nh.Number)
 			}
 		}
-		if err := batch.Write(); err != nil {
+		if err := blockBatch.Write(); err != nil {
 			return 0, err
 		}
 		stats.processed += int32(len(blockChain))
@@ -1584,6 +1634,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 		var (
 			skipPresenceCheck = false
 			batch             = bc.db.NewBatch()
+			blockBatch        = bc.db.BlockStore().NewBatch()
 		)
 		for i, block := range blockChain {
 			// Short circuit insertion if shutting down or processing failed
@@ -1607,10 +1658,10 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 				}
 			}
 			// Write all the data out into the database
-			rawdb.WriteBody(batch, block.Hash(), block.NumberU64(), block.Body())
-			rawdb.WriteReceipts(batch, block.Hash(), block.NumberU64(), receiptChain[i])
+			rawdb.WriteBody(blockBatch, block.Hash(), block.NumberU64(), block.Body())
+			rawdb.WriteReceipts(blockBatch, block.Hash(), block.NumberU64(), receiptChain[i])
 			if bc.chainConfig.IsCancun(block.Number(), block.Time()) {
-				rawdb.WriteBlobSidecars(batch, block.Hash(), block.NumberU64(), block.Sidecars())
+				rawdb.WriteBlobSidecars(blockBatch, block.Hash(), block.NumberU64(), block.Sidecars())
 			}
 
 			// Write everything belongs to the blocks into the database. So that
@@ -1623,6 +1674,13 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 				size += int64(batch.ValueSize())
 				batch.Reset()
 			}
+			if blockBatch.ValueSize() >= ethdb.IdealBatchSize {
+				if err := blockBatch.Write(); err != nil {
+					return 0, err
+				}
+				size += int64(blockBatch.ValueSize())
+				blockBatch.Reset()
+			}
 			stats.processed++
 		}
 		// Write everything belongs to the blocks into the database. So that
@@ -1634,6 +1692,12 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
 				return 0, err
 			}
 		}
+		if blockBatch.ValueSize() > 0 {
+			size += int64(blockBatch.ValueSize())
+			if err := blockBatch.Write(); err != nil {
+				return 0, err
+			}
+		}
 		updateHead(blockChain[len(blockChain)-1])
 		return 0, nil
 	}
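Both batches follow the usual ethdb idiom: flush whenever ValueSize crosses ethdb.IdealBatchSize, then flush the remainder once at the end. A minimal sketch of that idiom against the plain ethdb.Batch interface (helper name illustrative):

```go
// flushInChunks writes entries through a batch, flushing every time the
// pending batch grows past ethdb.IdealBatchSize and once more at the end.
func flushInChunks(batch ethdb.Batch, keys, vals [][]byte) error {
	for i := range keys {
		if err := batch.Put(keys[i], vals[i]); err != nil {
			return err
		}
		if batch.ValueSize() >= ethdb.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch.Reset()
		}
	}
	if batch.ValueSize() > 0 {
		return batch.Write()
	}
	return nil
}
```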
@@ -1678,14 +1742,14 @@ func (bc *BlockChain) writeBlockWithoutState(block *types.Block, td *big.Int) (e
 	if bc.insertStopped() {
 		return errInsertionInterrupted
 	}
-	batch := bc.db.NewBatch()
-	rawdb.WriteTd(batch, block.Hash(), block.NumberU64(), td)
-	rawdb.WriteBlock(batch, block)
+	blockBatch := bc.db.BlockStore().NewBatch()
+	rawdb.WriteTd(blockBatch, block.Hash(), block.NumberU64(), td)
+	rawdb.WriteBlock(blockBatch, block)
 	// if cancun is enabled, here need to write sidecars too
 	if bc.chainConfig.IsCancun(block.Number(), block.Time()) {
-		rawdb.WriteBlobSidecars(batch, block.Hash(), block.NumberU64(), block.Sidecars())
+		rawdb.WriteBlobSidecars(blockBatch, block.Hash(), block.NumberU64(), block.Sidecars())
 	}
-	if err := batch.Write(); err != nil {
+	if err := blockBatch.Write(); err != nil {
 		log.Crit("Failed to write block into disk", "err", err)
 	}
 	return nil
@@ -1723,7 +1787,7 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
 	wg := sync.WaitGroup{}
 	wg.Add(1)
 	go func() {
-		blockBatch := bc.db.NewBatch()
+		blockBatch := bc.db.BlockStore().NewBatch()
 		rawdb.WriteTd(blockBatch, block.Hash(), block.NumberU64(), externTd)
 		rawdb.WriteBlock(blockBatch, block)
 		rawdb.WriteReceipts(blockBatch, block.Hash(), block.NumberU64(), receipts)
@@ -1731,10 +1795,20 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
 		if bc.chainConfig.IsCancun(block.Number(), block.Time()) {
 			rawdb.WriteBlobSidecars(blockBatch, block.Hash(), block.NumberU64(), block.Sidecars())
 		}
-		rawdb.WritePreimages(blockBatch, state.Preimages())
+		if bc.db.StateStore() != nil {
+			rawdb.WritePreimages(bc.db.StateStore(), state.Preimages())
+		} else {
+			rawdb.WritePreimages(blockBatch, state.Preimages())
+		}
 		if err := blockBatch.Write(); err != nil {
 			log.Crit("Failed to write block into disk", "err", err)
 		}
+		bc.hc.tdCache.Add(block.Hash(), externTd)
+		bc.blockCache.Add(block.Hash(), block)
+		bc.cacheReceipts(block.Hash(), receipts, block)
+		if bc.chainConfig.IsCancun(block.Number(), block.Time()) {
+			bc.sidecarsCache.Add(block.Hash(), block.Sidecars())
+		}
 		wg.Done()
 	}()
@@ -1759,7 +1833,7 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
 	// Flush limits are not considered for the first TriesInMemory blocks.
 	current := block.NumberU64()
-	if current <= TriesInMemory {
+	if current <= bc.TriesInMemory() {
 		return nil
 	}
 	// If we exceeded our memory allowance, flush matured singleton nodes to disk
@@ -1857,14 +1931,19 @@ func (bc *BlockChain) WriteBlockAndSetHead(block *types.Block, receipts []*types
 // writeBlockAndSetHead is the internal implementation of WriteBlockAndSetHead.
 // This function expects the chain mutex to be held.
 func (bc *BlockChain) writeBlockAndSetHead(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) {
-	if err := bc.writeBlockWithState(block, receipts, state); err != nil {
-		return NonStatTy, err
-	}
 	currentBlock := bc.CurrentBlock()
 	reorg, err := bc.forker.ReorgNeededWithFastFinality(currentBlock, block.Header())
 	if err != nil {
 		return NonStatTy, err
 	}
+	if reorg {
+		bc.highestVerifiedBlock.Store(types.CopyHeader(block.Header()))
+		bc.highestVerifiedBlockFeed.Send(HighestVerifiedBlockEvent{Header: block.Header()})
+	}
+	if err := bc.writeBlockWithState(block, receipts, state); err != nil {
+		return NonStatTy, err
+	}
 	if reorg {
 		// Reorganise the chain if the parent is not the head block
 		if block.ParentHash() != currentBlock.Hash() {
@@ -1892,12 +1971,16 @@ func (bc *BlockChain) writeBlockAndSetHead(block *types.Block, receipts []*types
 		// canonical blocks. Avoid firing too many ChainHeadEvents,
 		// we will fire an accumulated ChainHeadEvent and disable fire
 		// event here.
+		var finalizedHeader *types.Header
+		if posa, ok := bc.Engine().(consensus.PoSA); ok {
+			if finalizedHeader = posa.GetFinalizedHeader(bc, block.Header()); finalizedHeader != nil {
+				bc.SetFinalized(finalizedHeader)
+			}
+		}
 		if emitHeadEvent {
 			bc.chainHeadFeed.Send(ChainHeadEvent{Block: block})
-			if posa, ok := bc.Engine().(consensus.PoSA); ok {
-				if finalizedHeader := posa.GetFinalizedHeader(bc, block.Header()); finalizedHeader != nil {
-					bc.finalizedHeaderFeed.Send(FinalizedHeaderEvent{finalizedHeader})
-				}
+			if finalizedHeader != nil {
+				bc.finalizedHeaderFeed.Send(FinalizedHeaderEvent{finalizedHeader})
 			}
 		}
 	} else {
@@ -1974,6 +2057,9 @@ func (bc *BlockChain) insertChain(chain types.Blocks, setHead bool) (int, error)
 		return 0, nil
 	}
 
+	if len(chain) > 0 {
+		blockRecvTimeDiffGauge.Update(time.Now().Unix() - int64(chain[0].Time()))
+	}
 	// Start a parallel signature recovery (signer will fluke on fork transition, minimal perf loss)
 	signer := types.MakeSigner(bc.chainConfig, chain[0].Number(), chain[0].Time())
 	go SenderCacher.RecoverFromBlocks(signer, chain)
@@ -2136,7 +2222,7 @@ func (bc *BlockChain) insertChain(chain types.Blocks, setHead bool) (int, error)
 			// state, but if it's this special case here(skip reexecution) we will lose
 			// the empty receipt entry.
 			if len(block.Transactions()) == 0 {
-				rawdb.WriteReceipts(bc.db, block.Hash(), block.NumberU64(), nil)
+				rawdb.WriteReceipts(bc.db.BlockStore(), block.Hash(), block.NumberU64(), nil)
 			} else {
 				log.Error("Please file an issue, skip known block execution without receipt",
 					"hash", block.Hash(), "number", block.NumberU64())
@@ -2208,8 +2294,6 @@ func (bc *BlockChain) insertChain(chain types.Blocks, setHead bool) (int, error)
 		vtime := time.Since(vstart)
 		proctime := time.Since(start) // processing + validation
 
-		bc.cacheBlock(block.Hash(), block)
-
 		// Update the metrics touched during block processing and validation
 		accountReadTimer.Update(statedb.AccountReads) // Account reads are complete(in processing)
 		storageReadTimer.Update(statedb.StorageReads) // Storage reads are complete(in processing)
@@ -2241,8 +2325,6 @@ func (bc *BlockChain) insertChain(chain types.Blocks, setHead bool) (int, error)
 			return it.index, err
 		}
 
-		bc.cacheReceipts(block.Hash(), receipts, block)
-
 		// Update the metrics touched during block commit
 		accountCommitTimer.Update(statedb.AccountCommits) // Account commits are complete, we can mark them
 		storageCommitTimer.Update(statedb.StorageCommits) // Storage commits are complete, we can mark them
@@ -2322,26 +2404,11 @@ func (bc *BlockChain) updateHighestVerifiedHeader(header *types.Header) {
 	if header == nil || header.Number == nil {
 		return
 	}
-	currentHeader := bc.highestVerifiedHeader.Load()
-	if currentHeader == nil {
+	currentBlock := bc.CurrentBlock()
+	reorg, err := bc.forker.ReorgNeededWithFastFinality(currentBlock, header)
+	if err == nil && reorg {
 		bc.highestVerifiedHeader.Store(types.CopyHeader(header))
-		return
-	}
-
-	newParentTD := bc.GetTd(header.ParentHash, header.Number.Uint64()-1)
-	if newParentTD == nil {
-		newParentTD = big.NewInt(0)
-	}
-	oldParentTD := bc.GetTd(currentHeader.ParentHash, currentHeader.Number.Uint64()-1)
-	if oldParentTD == nil {
-		oldParentTD = big.NewInt(0)
-	}
-	newTD := big.NewInt(0).Add(newParentTD, header.Difficulty)
-	oldTD := big.NewInt(0).Add(oldParentTD, currentHeader.Difficulty)
-
-	if newTD.Cmp(oldTD) > 0 {
-		bc.highestVerifiedHeader.Store(types.CopyHeader(header))
-		return
+		log.Trace("updateHighestVerifiedHeader", "number", header.Number.Uint64(), "hash", header.Hash())
 	}
 }
@@ -2681,6 +2748,7 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Block) error {
 	var (
 		indexesBatch = bc.db.NewBatch()
 		diffs        = types.HashDifference(deletedTxs, addedTxs)
+		blockBatch   = bc.db.BlockStore().NewBatch()
 	)
 	for _, tx := range diffs {
 		rawdb.DeleteTxLookupEntry(indexesBatch, tx)
@@ -2697,11 +2765,14 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Block) error {
 		if hash == (common.Hash{}) {
 			break
 		}
-		rawdb.DeleteCanonicalHash(indexesBatch, i)
+		rawdb.DeleteCanonicalHash(blockBatch, i)
 	}
 	if err := indexesBatch.Write(); err != nil {
 		log.Crit("Failed to delete useless indexes", "err", err)
 	}
+	if err := blockBatch.Write(); err != nil {
+		log.Crit("Failed to delete useless indexes using the block batch", "err", err)
+	}
 	// Send out events for logs from the old canon chain, and 'reborn'
 	// logs from the new canon chain. The number of logs can be very

core/blockchain_insert.go
@@ -48,18 +48,23 @@ func (st *insertStats) report(chain []*types.Block, index int, snapDiffItems, sn
 	// If we're at the last block of the batch or report period reached, log
 	if index == len(chain)-1 || elapsed >= statsReportLimit {
 		// Count the number of transactions in this segment
-		var txs int
+		var txs, blobs int
 		for _, block := range chain[st.lastIndex : index+1] {
 			txs += len(block.Transactions())
+			for _, sidecar := range block.Sidecars() {
+				blobs += len(sidecar.Blobs)
+			}
 		}
 		end := chain[index]
 
 		// Assemble the log context and send it to the logger
+		mgasps := float64(st.usedGas) * 1000 / float64(elapsed)
 		context := []interface{}{
 			"number", end.Number(), "hash", end.Hash(), "miner", end.Coinbase(),
-			"blocks", st.processed, "txs", txs, "mgas", float64(st.usedGas) / 1000000,
-			"elapsed", common.PrettyDuration(elapsed), "mgasps", float64(st.usedGas) * 1000 / float64(elapsed),
+			"blocks", st.processed, "txs", txs, "blobs", blobs, "mgas", float64(st.usedGas) / 1000000,
+			"elapsed", common.PrettyDuration(elapsed), "mgasps", mgasps,
 		}
+		blockInsertMgaspsGauge.Update(int64(mgasps))
 		if timestamp := time.Unix(int64(end.Time()), 0); time.Since(timestamp) > time.Minute {
 			context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
 		}
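Since elapsed is a time.Duration counted in nanoseconds, `usedGas * 1000 / elapsed` comes out in million gas per second. A quick sanity check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 100M gas executed in one second:
	// 1e8 * 1000 / 1e9 ns = 100 -> reported (and gauged) as 100 Mgas/s.
	elapsed := time.Second
	usedGas := uint64(100_000_000)
	mgasps := float64(usedGas) * 1000 / float64(elapsed)
	fmt.Println(mgasps) // 100
}
```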

core/blockchain_reader.go
@@ -98,6 +98,15 @@ func (bc *BlockChain) GetHeaderByHash(hash common.Hash) *types.Header {
 	return bc.hc.GetHeaderByHash(hash)
 }
 
+// GetVerifiedBlockByHash retrieves the header of a verified block; the block
+// may still live only in memory and not yet be persisted to the database.
+func (bc *BlockChain) GetVerifiedBlockByHash(hash common.Hash) *types.Header {
+	highestVerifiedBlock := bc.highestVerifiedBlock.Load()
+	if highestVerifiedBlock != nil && highestVerifiedBlock.Hash() == hash {
+		return highestVerifiedBlock
+	}
+	return bc.hc.GetHeaderByHash(hash)
+}
+
 // GetHeaderByNumber retrieves a block header from the database by number,
 // caching it (associated with its hash) if found.
 func (bc *BlockChain) GetHeaderByNumber(number uint64) *types.Header {
@@ -486,6 +495,11 @@ func (bc *BlockChain) SubscribeChainHeadEvent(ch chan<- ChainHeadEvent) event.Su
 	return bc.scope.Track(bc.chainHeadFeed.Subscribe(ch))
 }
 
+// SubscribeHighestVerifiedHeaderEvent registers a subscription of HighestVerifiedBlockEvent.
+func (bc *BlockChain) SubscribeHighestVerifiedHeaderEvent(ch chan<- HighestVerifiedBlockEvent) event.Subscription {
+	return bc.scope.Track(bc.highestVerifiedBlockFeed.Subscribe(ch))
+}
+
 // SubscribeChainBlockEvent registers a subscription of ChainBlockEvent.
 func (bc *BlockChain) SubscribeChainBlockEvent(ch chan<- ChainHeadEvent) event.Subscription {
 	return bc.scope.Track(bc.chainBlockFeed.Subscribe(ch))
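A minimal sketch of consuming the new feed (the function, channel size, and loop are illustrative; delivery follows the usual event.Feed contract):

```go
import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

// watchVerified is a sketch only: it reacts to every block that passed
// verification and won the fork-choice comparison, possibly before the
// block has been written to disk.
func watchVerified(bc *core.BlockChain) {
	ch := make(chan core.HighestVerifiedBlockEvent, 16)
	sub := bc.SubscribeHighestVerifiedHeaderEvent(ch)
	defer sub.Unsubscribe()

	for {
		select {
		case ev := <-ch:
			fmt.Println("new highest verified block", ev.Header.Number, ev.Header.Hash())
		case err := <-sub.Err():
			fmt.Println("subscription closed:", err)
			return
		}
	}
}
```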
@@ -511,3 +525,12 @@ func (bc *BlockChain) SubscribeBlockProcessingEvent(ch chan<- bool) event.Subscr
 func (bc *BlockChain) SubscribeFinalizedHeaderEvent(ch chan<- FinalizedHeaderEvent) event.Subscription {
 	return bc.scope.Track(bc.finalizedHeaderFeed.Subscribe(ch))
 }
+
+// AncientTail retrieves the tail of the ancient (freezer) blocks.
+func (bc *BlockChain) AncientTail() (uint64, error) {
+	tail, err := bc.db.BlockStore().Tail()
+	if err != nil {
+		return 0, err
+	}
+	return tail, nil
+}

core/blockchain_repair_test.go
@@ -26,6 +26,8 @@ import (
 	"testing"
 	"time"
 
+	"github.com/ethereum/go-ethereum/ethdb"
+
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/consensus/ethash"
 	"github.com/ethereum/go-ethereum/core/rawdb"
@@ -1795,6 +1797,13 @@ func testRepairWithScheme(t *testing.T, tt *rewindTest, snapshots bool, scheme s
 		config.SnapshotWait = true
 	}
 	config.TriesInMemory = 128
+	if err = db.SetupFreezerEnv(&ethdb.FreezerEnv{
+		ChainCfg:         gspec.Config,
+		BlobExtraReserve: params.DefaultExtraReserveForBlobRequests,
+	}); err != nil {
+		t.Fatalf("Failed to create chain: %v", err)
+	}
 	chain, err := NewBlockChain(db, config, gspec, nil, engine, vm.Config{}, nil, nil)
 	if err != nil {
 		t.Fatalf("Failed to create chain: %v", err)

core/blockchain_sethead_test.go
@@ -27,6 +27,8 @@ import (
 	"testing"
 	"time"
 
+	"github.com/ethereum/go-ethereum/ethdb"
+
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/consensus/ethash"
 	"github.com/ethereum/go-ethereum/core/rawdb"
@@ -1998,6 +2000,13 @@ func testSetHeadWithScheme(t *testing.T, tt *rewindTest, snapshots bool, scheme
 		config.SnapshotWait = true
 	}
 	config.TriesInMemory = 128
+	if err = db.SetupFreezerEnv(&ethdb.FreezerEnv{
+		ChainCfg:         gspec.Config,
+		BlobExtraReserve: params.DefaultExtraReserveForBlobRequests,
+	}); err != nil {
+		t.Fatalf("Failed to create chain: %v", err)
+	}
 	chain, err := NewBlockChain(db, config, gspec, nil, engine, vm.Config{}, nil, nil)
 	if err != nil {
 		t.Fatalf("Failed to create chain: %v", err)

core/blockchain_test.go
@@ -974,7 +974,7 @@ func testFastVsFullChains(t *testing.T, scheme string) {
 		t.Fatalf("failed to insert receipt %d: %v", n, err)
 	}
 	// Freezer style fast import the chain.
-	ancientDb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	ancientDb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create temp freezer db: %v", err)
 	}
@@ -1069,7 +1069,7 @@ func testLightVsFastVsFullChainHeads(t *testing.T, scheme string) {
 	// makeDb creates a db instance for testing.
 	makeDb := func() ethdb.Database {
-		db, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+		db, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 		if err != nil {
 			t.Fatalf("failed to create temp freezer db: %v", err)
 		}
@@ -1957,7 +1957,7 @@ func testLargeReorgTrieGC(t *testing.T, scheme string) {
 	competitor, _ := GenerateChain(genesis.Config, shared[len(shared)-1], engine, genDb, 2*TriesInMemory+1, func(i int, b *BlockGen) { b.SetCoinbase(common.Address{3}) })
 
 	// Import the shared chain and the original canonical one
-	db, _ := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	db, _ := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	defer db.Close()
 
 	chain, err := NewBlockChain(db, DefaultCacheConfigWithScheme(scheme), genesis, nil, engine, vm.Config{}, nil, nil)
@@ -2026,7 +2026,7 @@ func testBlockchainRecovery(t *testing.T, scheme string) {
 	_, blocks, receipts := GenerateChainWithGenesis(gspec, ethash.NewFaker(), int(height), nil)
 
 	// Import the chain as a ancient-first node and ensure all pointers are updated
-	ancientDb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	ancientDb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create temp freezer db: %v", err)
 	}
@@ -2097,7 +2097,7 @@ func testInsertReceiptChainRollback(t *testing.T, scheme string) {
 	}
 
 	// Set up a BlockChain that uses the ancient store.
-	ancientDb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	ancientDb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create temp freezer db: %v", err)
 	}
@@ -2167,7 +2167,7 @@ func testLowDiffLongChain(t *testing.T, scheme string) {
 	})
 
 	// Import the canonical chain
-	diskdb, _ := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	diskdb, _ := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	defer diskdb.Close()
 
 	chain, err := NewBlockChain(diskdb, DefaultCacheConfigWithScheme(scheme), genesis, nil, engine, vm.Config{}, nil, nil)
@@ -2384,7 +2384,7 @@ func testInsertKnownChainData(t *testing.T, typ string, scheme string) {
 		b.OffsetTime(-9) // A higher difficulty
 	})
 	// Import the shared chain and the original canonical one
-	chaindb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	chaindb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create temp freezer db: %v", err)
 	}
@@ -2555,7 +2555,7 @@ func testInsertKnownChainDataWithMerging(t *testing.T, typ string, mergeHeight i
 		}
 	})
 	// Import the shared chain and the original canonical one
-	chaindb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	chaindb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create temp freezer db: %v", err)
 	}
@@ -3858,7 +3858,7 @@ func testSetCanonical(t *testing.T, scheme string) {
 		}
 		gen.AddTx(tx)
 	})
-	diskdb, _ := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false)
+	diskdb, _ := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), t.TempDir(), "", false, false, false, false, false)
 	defer diskdb.Close()
 
 	chain, err := NewBlockChain(diskdb, DefaultCacheConfigWithScheme(scheme), gspec, nil, engine, vm.Config{}, nil, nil)
@@ -4483,7 +4483,7 @@ func (c *mockParlia) CalcDifficulty(chain consensus.ChainHeaderReader, time uint
 func TestParliaBlobFeeReward(t *testing.T) {
 	// Have N headers in the freezer
 	frdir := t.TempDir()
-	db, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), frdir, "", false, false, false, false)
+	db, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), frdir, "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create database with ancient backend")
 	}

core/chain_makers.go
@@ -486,7 +486,7 @@ func (cm *chainMaker) makeHeader(parent *types.Block, state *state.StateDB, engi
 		if cm.config.Parlia != nil {
 			header.WithdrawalsHash = &types.EmptyWithdrawalsHash
 		}
-		if cm.config.Parlia == nil {
+		if cm.config.Parlia == nil || cm.config.IsBohr(header.Number, header.Time) {
 			header.ParentBeaconRoot = new(common.Hash)
 		}
 	}
@@ -621,6 +621,10 @@ func (cm *chainMaker) GetHighestVerifiedHeader() *types.Header {
 	panic("not supported")
 }
 
+func (cm *chainMaker) GetVerifiedBlockByHash(hash common.Hash) *types.Header {
+	return cm.GetHeaderByHash(hash)
+}
+
 func (cm *chainMaker) ChasingHead() *types.Header {
 	panic("not supported")
 }

core/data_availability.go
@@ -5,6 +5,9 @@ import (
 	"errors"
 	"fmt"
 	"sync"
+	"time"
+
+	"github.com/ethereum/go-ethereum/metrics"
 
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/common/gopool"
@@ -14,6 +17,10 @@ import (
 	"github.com/ethereum/go-ethereum/params"
 )
 
+var (
+	daCheckTimer = metrics.NewRegisteredTimer("chain/dacheck", nil)
+)
+
 // validateBlobSidecar it is same as validateBlobSidecar in core/txpool/validation.go
 func validateBlobSidecar(hashes []common.Hash, sidecar *types.BlobSidecar) error {
 	if len(sidecar.Blobs) != len(hashes) {
@@ -46,6 +53,10 @@ func validateBlobSidecar(hashes []common.Hash, sidecar *types.BlobSidecar) error
 // IsDataAvailable it checks that the blobTx block has available blob data
 func IsDataAvailable(chain consensus.ChainHeaderReader, block *types.Block) (err error) {
+	defer func(start time.Time) {
+		daCheckTimer.Update(time.Since(start))
+	}(time.Now())
+
 	// refer logic in ValidateBody
 	if !chain.Config().IsCancun(block.Number(), block.Time()) {
 		if block.Sidecars() != nil {
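Timing the whole function with a deferred metrics update is a common geth idiom: the `time.Now()` argument is evaluated when the defer is registered, the closure when the function returns. A standalone sketch of the same pattern (timer name illustrative):

```go
package main

import (
	"time"

	"github.com/ethereum/go-ethereum/metrics"
)

var exampleTimer = metrics.NewRegisteredTimer("example/duration", nil)

// timedWork records its own wall-clock duration on every call.
func timedWork() {
	defer func(start time.Time) {
		exampleTimer.Update(time.Since(start))
	}(time.Now())

	time.Sleep(10 * time.Millisecond) // the work being measured
}

func main() {
	timedWork()
}
```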

core/data_availability_test.go
@@ -365,6 +365,10 @@ func (r *mockDAHeaderReader) GetHighestVerifiedHeader() *types.Header {
 	panic("not supported")
 }
 
+func (r *mockDAHeaderReader) GetVerifiedBlockByHash(hash common.Hash) *types.Header {
+	panic("not supported")
+}
+
 func createMockDATx(config *params.ChainConfig, sidecar *types.BlobTxSidecar) *types.Transaction {
 	if sidecar == nil {
 		tx := &types.DynamicFeeTx{

core/events.go
@@ -50,3 +50,5 @@ type ChainSideEvent struct {
 }
 
 type ChainHeadEvent struct{ Block *types.Block }
+
+type HighestVerifiedBlockEvent struct{ Header *types.Header }

core/forkchoice.go
@@ -86,9 +86,16 @@ func (f *ForkChoice) ReorgNeeded(current *types.Header, extern *types.Header) (b
 		localTD  = f.chain.GetTd(current.Hash(), current.Number.Uint64())
 		externTd = f.chain.GetTd(extern.Hash(), extern.Number.Uint64())
 	)
-	if localTD == nil || externTd == nil {
+	if localTD == nil {
 		return false, errors.New("missing td")
 	}
+	if externTd == nil {
+		ptd := f.chain.GetTd(extern.ParentHash, extern.Number.Uint64()-1)
+		if ptd == nil {
+			return false, consensus.ErrUnknownAncestor
+		}
+		externTd = new(big.Int).Add(ptd, extern.Difficulty)
+	}
 	// Accept the new header as the chain head if the transition
 	// is already triggered. We assume all the headers after the
 	// transition come from the trusted consensus layer.
@@ -114,7 +121,19 @@ func (f *ForkChoice) ReorgNeeded(current *types.Header, extern *types.Header) (b
 		if f.preserve != nil {
 			currentPreserve, externPreserve = f.preserve(current), f.preserve(extern)
 		}
-		reorg = !currentPreserve && (externPreserve || f.rand.Float64() < 0.5)
+		choiceRules := func() bool {
+			if extern.Time == current.Time {
+				doubleSign := (extern.Coinbase == current.Coinbase)
+				if doubleSign {
+					return extern.Hash().Cmp(current.Hash()) < 0
+				} else {
+					return f.rand.Float64() < 0.5
+				}
+			} else {
+				return extern.Time < current.Time
+			}
+		}
+		reorg = !currentPreserve && (externPreserve || choiceRules())
 	}
 	return reorg, nil
 }
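
The new choiceRules closure changes the equal-difficulty tie-breaker: the earlier block (smaller timestamp) is preferred; at equal timestamps, a pair signed by the same coinbase (a double sign) is resolved deterministically by smaller hash, while blocks from different validators still fall back to a coin flip. A standalone sketch of the same decision table, with plain arguments standing in for *types.Header fields:

    package main

    import "bytes"

    // preferExternal mirrors the tie-breaking rules above for two competing
    // heads with equal total difficulty; coin supplies the randomness.
    func preferExternal(externTime, currentTime uint64, sameCoinbase bool,
    	externHash, currentHash []byte, coin func() float64) bool {
    	if externTime != currentTime {
    		return externTime < currentTime // the earlier block wins
    	}
    	if sameCoinbase {
    		// Same validator produced both blocks: pick the smaller hash,
    		// so every node resolves the double sign identically.
    		return bytes.Compare(externHash, currentHash) < 0
    	}
    	return coin() < 0.5 // different validators: random choice, as before
    }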


@@ -216,10 +216,9 @@ func (e *GenesisMismatchError) Error() string {
 // ChainOverrides contains the changes to chain config
 // Typically, these modifications involve hardforks that are not enabled on the BSC mainnet, intended for testing purposes.
 type ChainOverrides struct {
-	OverrideCancun     *uint64
-	OverrideVerkle     *uint64
-	OverrideFeynman    *uint64
-	OverrideFeynmanFix *uint64
+	OverridePassedForkTime *uint64
+	OverrideBohr           *uint64
+	OverrideVerkle         *uint64
 }

 // SetupGenesisBlock writes or updates the genesis block in db.
@@ -245,18 +244,21 @@ func SetupGenesisBlockWithOverride(db ethdb.Database, triedb *triedb.Database, g
 	}
 	applyOverrides := func(config *params.ChainConfig) {
 		if config != nil {
-			if overrides != nil && overrides.OverrideCancun != nil {
-				config.CancunTime = overrides.OverrideCancun
+			if overrides != nil && overrides.OverridePassedForkTime != nil {
+				config.ShanghaiTime = overrides.OverridePassedForkTime
+				config.KeplerTime = overrides.OverridePassedForkTime
+				config.FeynmanTime = overrides.OverridePassedForkTime
+				config.FeynmanFixTime = overrides.OverridePassedForkTime
+				config.CancunTime = overrides.OverridePassedForkTime
+				config.HaberTime = overrides.OverridePassedForkTime
+				config.HaberFixTime = overrides.OverridePassedForkTime
+			}
+			if overrides != nil && overrides.OverrideBohr != nil {
+				config.BohrTime = overrides.OverrideBohr
 			}
 			if overrides != nil && overrides.OverrideVerkle != nil {
 				config.VerkleTime = overrides.OverrideVerkle
 			}
-			if overrides != nil && overrides.OverrideFeynman != nil {
-				config.FeynmanTime = overrides.OverrideFeynman
-			}
-			if overrides != nil && overrides.OverrideFeynmanFix != nil {
-				config.FeynmanFixTime = overrides.OverrideFeynmanFix
-			}
 		}
 	}
 	// Just commit the new block if there is no stored genesis block.
@@ -452,7 +454,7 @@ func (g *Genesis) ToBlock() *types.Block {
 	// EIP-4788: The parentBeaconBlockRoot of the genesis block is always
 	// the zero hash. This is because the genesis block does not have a parent
 	// by definition.
-	if conf.Parlia == nil {
+	if conf.Parlia == nil || conf.IsBohr(num, g.Timestamp) {
 		head.ParentBeaconRoot = new(common.Hash)
 	}
@@ -493,13 +495,13 @@ func (g *Genesis) Commit(db ethdb.Database, triedb *triedb.Database) (*types.Blo
 	if err := flushAlloc(&g.Alloc, db, triedb, block.Hash()); err != nil {
 		return nil, err
 	}
-	rawdb.WriteTd(db, block.Hash(), block.NumberU64(), block.Difficulty())
-	rawdb.WriteBlock(db, block)
-	rawdb.WriteReceipts(db, block.Hash(), block.NumberU64(), nil)
-	rawdb.WriteCanonicalHash(db, block.Hash(), block.NumberU64())
-	rawdb.WriteHeadBlockHash(db, block.Hash())
-	rawdb.WriteHeadFastBlockHash(db, block.Hash())
-	rawdb.WriteHeadHeaderHash(db, block.Hash())
+	rawdb.WriteTd(db.BlockStore(), block.Hash(), block.NumberU64(), block.Difficulty())
+	rawdb.WriteBlock(db.BlockStore(), block)
+	rawdb.WriteReceipts(db.BlockStore(), block.Hash(), block.NumberU64(), nil)
+	rawdb.WriteCanonicalHash(db.BlockStore(), block.Hash(), block.NumberU64())
+	rawdb.WriteHeadBlockHash(db.BlockStore(), block.Hash())
+	rawdb.WriteHeadFastBlockHash(db.BlockStore(), block.Hash())
+	rawdb.WriteHeadHeaderHash(db.BlockStore(), block.Hash())
 	rawdb.WriteChainConfig(db, block.Hash(), config)
 	return block, nil
 }
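
A hedged usage sketch of the reshaped overrides (the in-memory database is purely illustrative, and the three return values follow the SetupGenesisBlockWithOverride signature shown in the hunk above): one timestamp now flips every already-passed fork, with Bohr and Verkle still individually addressable.

    package main

    import (
    	"github.com/ethereum/go-ethereum/core"
    	"github.com/ethereum/go-ethereum/core/rawdb"
    	"github.com/ethereum/go-ethereum/triedb"
    )

    func main() {
    	db := rawdb.NewMemoryDatabase()
    	tdb := triedb.NewDatabase(db, nil)

    	ts := uint64(0) // enable at genesis time for a test network
    	overrides := &core.ChainOverrides{
    		OverridePassedForkTime: &ts, // Shanghai, Kepler, Feynman(+Fix), Cancun, Haber(+Fix)
    		OverrideBohr:           &ts,
    	}
    	// Genesis is nil here, so the stored/default genesis is used.
    	_, _, _ = core.SetupGenesisBlockWithOverride(db, tdb, nil, overrides)
    }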


@@ -172,9 +172,9 @@ func (hc *HeaderChain) Reorg(headers []*types.Header) error {
 	// pile them onto the existing chain. Otherwise, do the necessary
 	// reorgs.
 	var (
 		first = headers[0]
 		last  = headers[len(headers)-1]
-		batch = hc.chainDb.NewBatch()
+		blockBatch = hc.chainDb.BlockStore().NewBatch()
 	)
 	if first.ParentHash != hc.currentHeaderHash {
 		// Delete any canonical number assignments above the new head
@@ -183,7 +183,7 @@ func (hc *HeaderChain) Reorg(headers []*types.Header) error {
 			if hash == (common.Hash{}) {
 				break
 			}
-			rawdb.DeleteCanonicalHash(batch, i)
+			rawdb.DeleteCanonicalHash(blockBatch, i)
 		}
 		// Overwrite any stale canonical number assignments, going
 		// backwards from the first header in this import until the
@@ -194,7 +194,7 @@ func (hc *HeaderChain) Reorg(headers []*types.Header) error {
 			headHash   = header.Hash()
 		)
 		for rawdb.ReadCanonicalHash(hc.chainDb, headNumber) != headHash {
-			rawdb.WriteCanonicalHash(batch, headHash, headNumber)
+			rawdb.WriteCanonicalHash(blockBatch, headHash, headNumber)
 			if headNumber == 0 {
 				break // It shouldn't be reached
 			}
@@ -209,16 +209,16 @@ func (hc *HeaderChain) Reorg(headers []*types.Header) error {
 	for i := 0; i < len(headers)-1; i++ {
 		hash := headers[i+1].ParentHash // Save some extra hashing
 		num := headers[i].Number.Uint64()
-		rawdb.WriteCanonicalHash(batch, hash, num)
-		rawdb.WriteHeadHeaderHash(batch, hash)
+		rawdb.WriteCanonicalHash(blockBatch, hash, num)
+		rawdb.WriteHeadHeaderHash(blockBatch, hash)
 	}
 	// Write the last header
 	hash := headers[len(headers)-1].Hash()
 	num := headers[len(headers)-1].Number.Uint64()
-	rawdb.WriteCanonicalHash(batch, hash, num)
-	rawdb.WriteHeadHeaderHash(batch, hash)
-	if err := batch.Write(); err != nil {
+	rawdb.WriteCanonicalHash(blockBatch, hash, num)
+	rawdb.WriteHeadHeaderHash(blockBatch, hash)
+	if err := blockBatch.Write(); err != nil {
 		return err
 	}
 	// Last step update all in-memory head header markers
@@ -244,7 +244,7 @@ func (hc *HeaderChain) WriteHeaders(headers []*types.Header) (int, error) {
 		newTD       = new(big.Int).Set(ptd) // Total difficulty of inserted chain
 		inserted    []rawdb.NumberHash      // Ephemeral lookup of number/hash for the chain
 		parentKnown = true                  // Set to true to force hc.HasHeader check the first iteration
-		batch       = hc.chainDb.NewBatch()
+		blockBatch  = hc.chainDb.BlockStore().NewBatch()
 	)
 	for i, header := range headers {
 		var hash common.Hash
@@ -264,10 +264,10 @@ func (hc *HeaderChain) WriteHeaders(headers []*types.Header) (int, error) {
 		alreadyKnown := parentKnown && hc.HasHeader(hash, number)
 		if !alreadyKnown {
 			// Irrelevant of the canonical status, write the TD and header to the database.
-			rawdb.WriteTd(batch, hash, number, newTD)
+			rawdb.WriteTd(blockBatch, hash, number, newTD)
 			hc.tdCache.Add(hash, new(big.Int).Set(newTD))
-			rawdb.WriteHeader(batch, header)
+			rawdb.WriteHeader(blockBatch, header)
 			inserted = append(inserted, rawdb.NumberHash{Number: number, Hash: hash})
 			hc.headerCache.Add(hash, header)
 			hc.numberCache.Add(hash, number)
@@ -280,7 +280,7 @@ func (hc *HeaderChain) WriteHeaders(headers []*types.Header) (int, error) {
 		return 0, errors.New("aborted")
 	}
 	// Commit to disk!
-	if err := batch.Write(); err != nil {
+	if err := blockBatch.Write(); err != nil {
 		log.Crit("Failed to write headers", "error", err)
 	}
 	return len(inserted), nil
@@ -436,6 +436,10 @@ func (hc *HeaderChain) GetHighestVerifiedHeader() *types.Header {
 	return nil
 }

+func (hc *HeaderChain) GetVerifiedBlockByHash(hash common.Hash) *types.Header {
+	return hc.GetHeaderByHash(hash)
+}
+
 func (hc *HeaderChain) ChasingHead() *types.Header {
 	return nil
 }
@@ -642,7 +646,7 @@ func (hc *HeaderChain) setHead(headBlock uint64, headTime uint64, updateFn Updat
 	}
 	var (
 		parentHash common.Hash
-		batch      = hc.chainDb.NewBatch()
+		blockBatch = hc.chainDb.BlockStore().NewBatch()
 		origin     = true
 	)
 	done := func(header *types.Header) bool {
@@ -668,7 +672,7 @@ func (hc *HeaderChain) setHead(headBlock uint64, headTime uint64, updateFn Updat
 		// first then remove the relative data from the database.
 		//
 		// Update head first(head fast block, head full block) before deleting the data.
-		markerBatch := hc.chainDb.NewBatch()
+		markerBatch := hc.chainDb.BlockStore().NewBatch()
 		if updateFn != nil {
 			newHead, force := updateFn(markerBatch, parent)
 			if force && ((headTime > 0 && newHead.Time < headTime) || (headTime == 0 && newHead.Number.Uint64() < headBlock)) {
@@ -691,7 +695,7 @@ func (hc *HeaderChain) setHead(headBlock uint64, headTime uint64, updateFn Updat
 		// we don't end up with dangling daps in the database
 		var nums []uint64
 		if origin {
-			for n := num + 1; len(rawdb.ReadAllHashes(hc.chainDb, n)) > 0; n++ {
+			for n := num + 1; len(rawdb.ReadAllHashes(hc.chainDb.BlockStore(), n)) > 0; n++ {
 				nums = append([]uint64{n}, nums...) // suboptimal, but we don't really expect this path
 			}
 			origin = false
@@ -701,23 +705,23 @@ func (hc *HeaderChain) setHead(headBlock uint64, headTime uint64, updateFn Updat
 		// Remove the related data from the database on all sidechains
 		for _, num := range nums {
 			// Gather all the side fork hashes
-			hashes := rawdb.ReadAllHashes(hc.chainDb, num)
+			hashes := rawdb.ReadAllHashes(hc.chainDb.BlockStore(), num)
 			if len(hashes) == 0 {
 				// No hashes in the database whatsoever, probably frozen already
 				hashes = append(hashes, hdr.Hash())
 			}
 			for _, hash := range hashes {
 				if delFn != nil {
-					delFn(batch, hash, num)
+					delFn(blockBatch, hash, num)
 				}
-				rawdb.DeleteHeader(batch, hash, num)
-				rawdb.DeleteTd(batch, hash, num)
+				rawdb.DeleteHeader(blockBatch, hash, num)
+				rawdb.DeleteTd(blockBatch, hash, num)
 			}
-			rawdb.DeleteCanonicalHash(batch, num)
+			rawdb.DeleteCanonicalHash(blockBatch, num)
 		}
 	}
 	// Flush all accumulated deletions.
-	if err := batch.Write(); err != nil {
+	if err := blockBatch.Write(); err != nil {
 		log.Crit("Failed to rewind block", "error", err)
 	}
 	// Clear out any stale content from the caches
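
The renames in this file are mechanical but worth calling out: every batch that touches canonical-chain data is now opened on chainDb.BlockStore() rather than on chainDb itself, so writes land in the separate block store when the multi-database layout is active and fall back to the main database otherwise. A condensed, hedged sketch of the idiom (assuming BSC's ethdb.Database interface with the BlockStore accessor added in this change):

    package main

    import (
    	"github.com/ethereum/go-ethereum/common"
    	"github.com/ethereum/go-ethereum/core/rawdb"
    	"github.com/ethereum/go-ethereum/ethdb"
    )

    // setCanonical shows the layout-agnostic idiom: BlockStore() yields the
    // separate block store when configured, or the database itself otherwise.
    func setCanonical(db ethdb.Database, hash common.Hash, number uint64) error {
    	blockBatch := db.BlockStore().NewBatch()
    	rawdb.WriteCanonicalHash(blockBatch, hash, number)
    	rawdb.WriteHeadHeaderHash(blockBatch, hash)
    	return blockBatch.Write() // one atomic flush per batch
    }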


@@ -34,14 +34,23 @@ import (
 	"golang.org/x/exp/slices"
 )

+// Support Multi-Database Based on Data Pattern. The chaindata is divided into three stores: BlockStore, StateStore,
+// and ChainStore, according to data schema and read/write behavior. When using the following data interfaces, take
+// note of the following:
+//
+// 1) Block-related data: for CanonicalHash, Header, Body, Td, Receipts and BlobSidecars, the Write, Delete and
+//    Iterator operations must ensure that the database being used is the BlockStore.
+// 2) Meta-related data: for HeaderNumber, HeadHeaderHash, HeadBlockHash, HeadFastBlockHash and FinalizedBlockHash,
+//    the Write and Delete operations must ensure that the database being used is the BlockStore.
+// 3) Ancient data: when using a multi-database, ancient data is kept in the BlockStore.
+
 // ReadCanonicalHash retrieves the hash assigned to a canonical block number.
 func ReadCanonicalHash(db ethdb.Reader, number uint64) common.Hash {
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		data, _ = reader.Ancient(ChainFreezerHashTable, number)
 		if len(data) == 0 {
 			// Get it by hash from leveldb
-			data, _ = db.Get(headerHashKey(number))
+			data, _ = db.BlockStoreReader().Get(headerHashKey(number))
 		}
 		return nil
 	})
@@ -144,8 +153,8 @@ func ReadAllCanonicalHashes(db ethdb.Iteratee, from uint64, to uint64, limit int
 }

 // ReadHeaderNumber returns the header number assigned to a hash.
-func ReadHeaderNumber(db ethdb.KeyValueReader, hash common.Hash) *uint64 {
-	data, _ := db.Get(headerNumberKey(hash))
+func ReadHeaderNumber(db ethdb.MultiDatabaseReader, hash common.Hash) *uint64 {
+	data, _ := db.BlockStoreReader().Get(headerNumberKey(hash))
 	if len(data) != 8 {
 		return nil
 	}
@@ -170,8 +179,8 @@ func DeleteHeaderNumber(db ethdb.KeyValueWriter, hash common.Hash) {
 }

 // ReadHeadHeaderHash retrieves the hash of the current canonical head header.
-func ReadHeadHeaderHash(db ethdb.KeyValueReader) common.Hash {
-	data, _ := db.Get(headHeaderKey)
+func ReadHeadHeaderHash(db ethdb.MultiDatabaseReader) common.Hash {
+	data, _ := db.BlockStoreReader().Get(headHeaderKey)
 	if len(data) == 0 {
 		return common.Hash{}
 	}
@@ -186,8 +195,8 @@ func WriteHeadHeaderHash(db ethdb.KeyValueWriter, hash common.Hash) {
 }

 // ReadHeadBlockHash retrieves the hash of the current canonical head block.
-func ReadHeadBlockHash(db ethdb.KeyValueReader) common.Hash {
-	data, _ := db.Get(headBlockKey)
+func ReadHeadBlockHash(db ethdb.MultiDatabaseReader) common.Hash {
+	data, _ := db.BlockStoreReader().Get(headBlockKey)
 	if len(data) == 0 {
 		return common.Hash{}
 	}
@@ -202,8 +211,8 @@ func WriteHeadBlockHash(db ethdb.KeyValueWriter, hash common.Hash) {
 }

 // ReadHeadFastBlockHash retrieves the hash of the current fast-sync head block.
-func ReadHeadFastBlockHash(db ethdb.KeyValueReader) common.Hash {
-	data, _ := db.Get(headFastBlockKey)
+func ReadHeadFastBlockHash(db ethdb.MultiDatabaseReader) common.Hash {
+	data, _ := db.BlockStoreReader().Get(headFastBlockKey)
 	if len(data) == 0 {
 		return common.Hash{}
 	}
@@ -217,6 +226,22 @@ func WriteHeadFastBlockHash(db ethdb.KeyValueWriter, hash common.Hash) {
 	}
 }

+// ReadFinalizedBlockHash retrieves the hash of the finalized block.
+func ReadFinalizedBlockHash(db ethdb.MultiDatabaseReader) common.Hash {
+	data, _ := db.BlockStoreReader().Get(headFinalizedBlockKey)
+	if len(data) == 0 {
+		return common.Hash{}
+	}
+	return common.BytesToHash(data)
+}
+
+// WriteFinalizedBlockHash stores the hash of the finalized block.
+func WriteFinalizedBlockHash(db ethdb.KeyValueWriter, hash common.Hash) {
+	if err := db.Put(headFinalizedBlockKey, hash.Bytes()); err != nil {
+		log.Crit("Failed to store last finalized block's hash", "err", err)
+	}
+}
+
 // ReadLastPivotNumber retrieves the number of the last pivot block. If the node
 // full synced, the last pivot will always be nil.
 func ReadLastPivotNumber(db ethdb.KeyValueReader) *uint64 {
@@ -281,13 +306,13 @@ func ReadHeaderRange(db ethdb.Reader, number uint64, count uint64) []rlp.RawValu
 		// It's ok to request block 0, 1 item
 		count = number + 1
 	}
-	limit, _ := db.Ancients()
+	limit, _ := db.BlockStoreReader().Ancients()
 	// First read live blocks
 	if i >= limit {
 		// If we need to read live blocks, we need to figure out the hash first
 		hash := ReadCanonicalHash(db, number)
 		for ; i >= limit && count > 0; i-- {
-			if data, _ := db.Get(headerKey(i, hash)); len(data) > 0 {
+			if data, _ := db.BlockStoreReader().Get(headerKey(i, hash)); len(data) > 0 {
 				rlpHeaders = append(rlpHeaders, data)
 				// Get the parent hash for next query
 				hash = types.HeaderParentHashFromRLP(data)
@@ -300,8 +325,8 @@ func ReadHeaderRange(db ethdb.Reader, number uint64, count uint64) []rlp.RawValu
 	if count == 0 {
 		return rlpHeaders
 	}
-	// read remaining from ancients
-	data, err := db.AncientRange(ChainFreezerHeaderTable, i+1-count, count, 0)
+	// read remaining from ancients, cap at 2M
+	data, err := db.BlockStoreReader().AncientRange(ChainFreezerHeaderTable, i+1-count, count, 2*1024*1024)
 	if err != nil {
 		log.Error("Failed to read headers from freezer", "err", err)
 		return rlpHeaders
@@ -320,7 +345,7 @@ func ReadHeaderRange(db ethdb.Reader, number uint64, count uint64) []rlp.RawValu
 // ReadHeaderRLP retrieves a block header in its raw RLP database encoding.
 func ReadHeaderRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		// First try to look up the data in ancient database. Extra hash
 		// comparison is necessary since ancient database only maintains
 		// the canonical data.
@@ -329,7 +354,7 @@ func ReadHeaderRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValu
 			return nil
 		}
 		// If not, try reading from leveldb
-		data, _ = db.Get(headerKey(number, hash))
+		data, _ = db.BlockStoreReader().Get(headerKey(number, hash))
 		return nil
 	})
 	return data
@@ -337,10 +362,10 @@ func ReadHeaderRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValu

 // HasHeader verifies the existence of a block header corresponding to the hash.
 func HasHeader(db ethdb.Reader, hash common.Hash, number uint64) bool {
-	if isCanon(db, number, hash) {
+	if isCanon(db.BlockStoreReader(), number, hash) {
 		return true
 	}
-	if has, err := db.Has(headerKey(number, hash)); !has || err != nil {
+	if has, err := db.BlockStoreReader().Has(headerKey(number, hash)); !has || err != nil {
 		return false
 	}
 	return true
@@ -427,14 +452,14 @@ func ReadBodyRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue
 	// comparison is necessary since ancient database only maintains
 	// the canonical data.
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		// Check if the data is in ancients
 		if isCanon(reader, number, hash) {
 			data, _ = reader.Ancient(ChainFreezerBodiesTable, number)
 			return nil
 		}
 		// If not, try reading from leveldb
-		data, _ = db.Get(blockBodyKey(number, hash))
+		data, _ = db.BlockStoreReader().Get(blockBodyKey(number, hash))
 		return nil
 	})
 	return data
@@ -444,7 +469,7 @@ func ReadBodyRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue
 // block at number, in RLP encoding.
 func ReadCanonicalBodyRLP(db ethdb.Reader, number uint64) rlp.RawValue {
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		data, _ = reader.Ancient(ChainFreezerBodiesTable, number)
 		if len(data) > 0 {
 			return nil
@@ -452,8 +477,8 @@ func ReadCanonicalBodyRLP(db ethdb.Reader, number uint64) rlp.RawValue {
 		// Block is not in ancients, read from leveldb by hash and number.
 		// Note: ReadCanonicalHash cannot be used here because it also
 		// calls ReadAncients internally.
-		hash, _ := db.Get(headerHashKey(number))
-		data, _ = db.Get(blockBodyKey(number, common.BytesToHash(hash)))
+		hash, _ := db.BlockStoreReader().Get(headerHashKey(number))
+		data, _ = db.BlockStoreReader().Get(blockBodyKey(number, common.BytesToHash(hash)))
 		return nil
 	})
 	return data
@@ -468,10 +493,10 @@ func WriteBodyRLP(db ethdb.KeyValueWriter, hash common.Hash, number uint64, rlp

 // HasBody verifies the existence of a block body corresponding to the hash.
 func HasBody(db ethdb.Reader, hash common.Hash, number uint64) bool {
-	if isCanon(db, number, hash) {
+	if isCanon(db.BlockStoreReader(), number, hash) {
 		return true
 	}
-	if has, err := db.Has(blockBodyKey(number, hash)); !has || err != nil {
+	if has, err := db.BlockStoreReader().Has(blockBodyKey(number, hash)); !has || err != nil {
 		return false
 	}
 	return true
@@ -500,6 +525,13 @@ func WriteBody(db ethdb.KeyValueWriter, hash common.Hash, number uint64, body *t
 	WriteBodyRLP(db, hash, number, data)
 }

+// DeleteBody removes all block body data associated with a hash.
+func DeleteBody(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
+	if err := db.Delete(blockBodyKey(number, hash)); err != nil {
+		log.Crit("Failed to delete block body", "err", err)
+	}
+}
+
 func WriteDiffLayer(db ethdb.KeyValueWriter, hash common.Hash, layer *types.DiffLayer) {
 	data, err := rlp.EncodeToBytes(layer)
 	if err != nil {
@@ -538,24 +570,17 @@ func DeleteDiffLayer(db ethdb.KeyValueWriter, blockHash common.Hash) {
 	}
 }

-// DeleteBody removes all block body data associated with a hash.
-func DeleteBody(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
-	if err := db.Delete(blockBodyKey(number, hash)); err != nil {
-		log.Crit("Failed to delete block body", "err", err)
-	}
-}
-
 // ReadTdRLP retrieves a block's total difficulty corresponding to the hash in RLP encoding.
 func ReadTdRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		// Check if the data is in ancients
 		if isCanon(reader, number, hash) {
 			data, _ = reader.Ancient(ChainFreezerDifficultyTable, number)
 			return nil
 		}
 		// If not, try reading from leveldb
-		data, _ = db.Get(headerTDKey(number, hash))
+		data, _ = db.BlockStoreReader().Get(headerTDKey(number, hash))
 		return nil
 	})
 	return data
@@ -596,10 +621,10 @@ func DeleteTd(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
 // HasReceipts verifies the existence of all the transaction receipts belonging
 // to a block.
 func HasReceipts(db ethdb.Reader, hash common.Hash, number uint64) bool {
-	if isCanon(db, number, hash) {
+	if isCanon(db.BlockStoreReader(), number, hash) {
 		return true
 	}
-	if has, err := db.Has(blockReceiptsKey(number, hash)); !has || err != nil {
+	if has, err := db.BlockStoreReader().Has(blockReceiptsKey(number, hash)); !has || err != nil {
 		return false
 	}
 	return true
@@ -608,14 +633,14 @@ func HasReceipts(db ethdb.Reader, hash common.Hash, number uint64) bool {
 // ReadReceiptsRLP retrieves all the transaction receipts belonging to a block in RLP encoding.
 func ReadReceiptsRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		// Check if the data is in ancients
 		if isCanon(reader, number, hash) {
 			data, _ = reader.Ancient(ChainFreezerReceiptTable, number)
 			return nil
 		}
 		// If not, try reading from leveldb
-		data, _ = db.Get(blockReceiptsKey(number, hash))
+		data, _ = db.BlockStoreReader().Get(blockReceiptsKey(number, hash))
 		return nil
 	})
 	return data
@@ -868,14 +893,14 @@ func WriteAncientBlocks(db ethdb.AncientWriter, blocks []*types.Block, receipts
 // ReadBlobSidecarsRLP retrieves all the transaction blobs belonging to a block in RLP encoding.
 func ReadBlobSidecarsRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
 	var data []byte
-	db.ReadAncients(func(reader ethdb.AncientReaderOp) error {
+	db.BlockStoreReader().ReadAncients(func(reader ethdb.AncientReaderOp) error {
 		// Check if the data is in ancients
 		if isCanon(reader, number, hash) {
 			data, _ = reader.Ancient(ChainFreezerBlobSidecarTable, number)
 			return nil
 		}
 		// If not, try reading from leveldb
-		data, _ = db.Get(blockBlobSidecarsKey(number, hash))
+		data, _ = db.BlockStoreReader().Get(blockBlobSidecarsKey(number, hash))
 		return nil
 	})
 	return data
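
The comment block at the top of this file is the contract to remember: read helpers now route through BlockStoreReader(), which resolves to the separate block store only when one is configured. A hedged sketch of a caller honoring that contract (assuming ethdb.Database satisfies the new MultiDatabaseReader interface):

    package main

    import (
    	"github.com/ethereum/go-ethereum/common"
    	"github.com/ethereum/go-ethereum/core/rawdb"
    	"github.com/ethereum/go-ethereum/ethdb"
    )

    // headHeaderNumber reads meta data (the head hash) and block data (the
    // header number); both helpers route through BlockStoreReader() internally,
    // per rules 1) and 2) in the comment above.
    func headHeaderNumber(db ethdb.Database) *uint64 {
    	hash := rawdb.ReadHeadHeaderHash(db)
    	if hash == (common.Hash{}) {
    		return nil
    	}
    	return rawdb.ReadHeaderNumber(db, hash)
    }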


@@ -518,7 +518,7 @@ func checkBlobSidecarsRLP(have, want types.BlobSidecars) error {
 func TestAncientStorage(t *testing.T) {
 	// Freezer style fast import the chain.
 	frdir := t.TempDir()
-	db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false, false, false, false)
+	db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create database with ancient backend")
 	}
@@ -657,7 +657,7 @@ func TestHashesInRange(t *testing.T) {
 func BenchmarkWriteAncientBlocks(b *testing.B) {
 	// Open freezer database.
 	frdir := b.TempDir()
-	db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false, false, false, false)
+	db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false, false, false, false, false)
 	if err != nil {
 		b.Fatalf("failed to create database with ancient backend")
 	}
@@ -1001,7 +1001,7 @@ func TestHeadersRLPStorage(t *testing.T) {
 	// Have N headers in the freezer
 	frdir := t.TempDir()
-	db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false, false, false, false)
+	db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false, false, false, false, false)
 	if err != nil {
 		t.Fatalf("failed to create database with ancient backend")
 	}
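
All three call sites gain the same trailing argument: NewDatabaseWithFreezer now takes a final multiDatabase flag, matching the signature change in the database.go hunk later in this diff. A sketch of opening a freezer-backed db with the new signature, positional flags labeled for readability:

    package main

    import "github.com/ethereum/go-ethereum/core/rawdb"

    func openAncientDB(frdir string) error {
    	db, err := rawdb.NewDatabaseWithFreezer(
    		rawdb.NewMemoryDatabase(), frdir, "",
    		false, // readonly
    		false, // disableFreeze
    		false, // isLastOffset
    		false, // pruneAncientData
    		false, // multiDatabase: pass true to split block data into its own store
    	)
    	if err != nil {
    		return err
    	}
    	return db.Close()
    }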


@@ -18,6 +18,8 @@ package rawdb

 import (
 	"fmt"
+	"io"
+	"os"
 	"path/filepath"

 	"github.com/ethereum/go-ethereum/common"
@@ -98,6 +100,18 @@ func inspectFreezers(db ethdb.Database) ([]freezerInfo, error) {
 			if err != nil {
 				return nil, err
 			}
+
+			file, err := os.Open(filepath.Join(datadir, StateFreezerName))
+			if err != nil {
+				return nil, err
+			}
+			defer file.Close()
+			// if the state freezer folder has been pruned, there is no need for inspection
+			_, err = file.Readdirnames(1)
+			if err == io.EOF {
+				continue
+			}
+
 			f, err := NewStateFreezer(datadir, true, 0)
 			if err != nil {
 				return nil, err
@@ -121,16 +135,25 @@ func inspectFreezers(db ethdb.Database) ([]freezerInfo, error) {
 // ancient indicates the path of root ancient directory where the chain freezer can
 // be opened. Start and end specify the range for dumping out indexes.
 // Note this function can only be used for debugging purposes.
-func InspectFreezerTable(ancient string, freezerName string, tableName string, start, end int64) error {
+func InspectFreezerTable(ancient string, freezerName string, tableName string, start, end int64, multiDatabase bool) error {
 	var (
 		path   string
 		tables map[string]bool
 	)
 	switch freezerName {
 	case ChainFreezerName:
-		path, tables = resolveChainFreezerDir(ancient), chainFreezerNoSnappy
+		if multiDatabase {
+			path, tables = resolveChainFreezerDir(filepath.Dir(ancient)+"/block/ancient"), chainFreezerNoSnappy
+		} else {
+			path, tables = resolveChainFreezerDir(ancient), chainFreezerNoSnappy
+		}
 	case StateFreezerName:
-		path, tables = filepath.Join(ancient, freezerName), stateFreezerNoSnappy
+		if multiDatabase {
+			path, tables = filepath.Join(filepath.Dir(ancient)+"/state/ancient", freezerName), stateFreezerNoSnappy
+		} else {
+			path, tables = filepath.Join(ancient, freezerName), stateFreezerNoSnappy
+		}
 	default:
 		return fmt.Errorf("unknown freezer, supported ones: %v", freezers)
 	}
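
Under the multi-database layout the chain and state freezers no longer share one ancient root; each is derived from a sibling block/ or state/ directory next to the legacy one. A simplified, hedged sketch of just the path selection (it omits the resolveChainFreezerDir indirection used in the real code; directory names are as in the hunk above):

    package main

    import "path/filepath"

    // freezerPath mirrors the selection above: with multiDatabase set, chain
    // data lives under <datadir>/block/ancient and state freezer data under
    // <datadir>/state/ancient/<name>, next to the legacy ancient directory.
    func freezerPath(ancient, freezerName string, multiDatabase bool) string {
    	if !multiDatabase {
    		return filepath.Join(ancient, freezerName)
    	}
    	switch freezerName {
    	case "chain":
    		return filepath.Join(filepath.Dir(ancient), "block", "ancient")
    	case "state":
    		return filepath.Join(filepath.Dir(ancient), "state", "ancient", freezerName)
    	}
    	return ancient
    }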


@@ -24,12 +24,12 @@ import (
 	"sync/atomic"
 	"time"

-	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/ethdb"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/params"
+	"github.com/ethereum/go-ethereum/rlp"
 )

 const (
@@ -58,11 +58,14 @@ type chainFreezer struct {
 	wg      sync.WaitGroup
 	trigger chan chan struct{} // Manual blocking freeze trigger, test determinism

 	freezeEnv    atomic.Value
+	waitEnvTimes int
+
+	multiDatabase bool
 }

 // newChainFreezer initializes the freezer for ancient chain data.
-func newChainFreezer(datadir string, namespace string, readonly bool, offset uint64) (*chainFreezer, error) {
+func newChainFreezer(datadir string, namespace string, readonly bool, offset uint64, multiDatabase bool) (*chainFreezer, error) {
 	freezer, err := NewChainFreezer(datadir, namespace, readonly, offset)
 	if err != nil {
 		return nil, err
@@ -87,6 +90,57 @@ func (f *chainFreezer) Close() error {
 	return f.Freezer.Close()
 }

+// readHeadNumber returns the number of the chain head block. 0 is returned
+// if the block is unknown or not available yet.
+func (f *chainFreezer) readHeadNumber(db ethdb.Reader) uint64 {
+	hash := ReadHeadBlockHash(db)
+	if hash == (common.Hash{}) {
+		log.Error("Head block is not reachable")
+		return 0
+	}
+	number := ReadHeaderNumber(db, hash)
+	if number == nil {
+		log.Error("Number of head block is missing")
+		return 0
+	}
+	return *number
+}
+
+// readFinalizedNumber returns the number of the finalized block. 0 is
+// returned if the block is unknown or not available yet.
+func (f *chainFreezer) readFinalizedNumber(db ethdb.Reader) uint64 {
+	hash := ReadFinalizedBlockHash(db)
+	if hash == (common.Hash{}) {
+		return 0
+	}
+	number := ReadHeaderNumber(db, hash)
+	if number == nil {
+		log.Error("Number of finalized block is missing")
+		return 0
+	}
+	return *number
+}
+
+// freezeThreshold returns the threshold for chain freezing. It's determined
+// by the formula: max(finality, HEAD-params.FullImmutabilityThreshold).
+func (f *chainFreezer) freezeThreshold(db ethdb.Reader) (uint64, error) {
+	var (
+		head      = f.readHeadNumber(db)
+		final     = f.readFinalizedNumber(db)
+		headLimit uint64
+	)
+	if head > params.FullImmutabilityThreshold {
+		headLimit = head - params.FullImmutabilityThreshold
+	}
+	if final == 0 && headLimit == 0 {
+		return 0, errors.New("freezing threshold is not available")
+	}
+	if final > headLimit {
+		return final, nil
+	}
+	return headLimit, nil
+}
+
 // freeze is a background thread that periodically checks the blockchain for any
 // import progress and moves ancient data from the fast database into the freezer.
 //
@@ -125,74 +179,125 @@ func (f *chainFreezer) freeze(db ethdb.KeyValueStore) {
 			}
 		}

-		// check freezer env first, it must wait a while when the env is necessary
-		err := f.checkFreezerEnv()
-		if err == missFreezerEnvErr {
-			log.Warn("Freezer need related env, may wait for a while", "err", err)
-			backoff = true
-			continue
-		}
-		if err != nil {
-			log.Error("Freezer check FreezerEnv err", "err", err)
-			backoff = true
-			continue
-		}
-		// Retrieve the freezing threshold.
-		hash := ReadHeadBlockHash(nfdb)
-		if hash == (common.Hash{}) {
-			log.Debug("Current full block hash unavailable") // new chain, empty database
-			backoff = true
-			continue
-		}
-		number := ReadHeaderNumber(nfdb, hash)
-		threshold := f.threshold.Load()
-		frozen := f.frozen.Load()
-		switch {
-		case number == nil:
-			log.Error("Current full block number unavailable", "hash", hash)
-			backoff = true
-			continue
-		case *number < threshold:
-			log.Debug("Current full block not old enough to freeze", "number", *number, "hash", hash, "delay", threshold)
-			backoff = true
-			continue
-		case *number-threshold <= frozen:
-			log.Debug("Ancient blocks frozen already", "number", *number, "hash", hash, "frozen", frozen)
-			backoff = true
-			continue
-		}
-		head := ReadHeader(nfdb, hash, *number)
-		if head == nil {
-			log.Error("Current full block unavailable", "number", *number, "hash", hash)
-			backoff = true
-			continue
-		}
+		var (
+			frozen    uint64
+			threshold uint64
+			first     uint64 // the first block to freeze
+			last      uint64 // the last block to freeze
+
+			hash   common.Hash
+			number *uint64
+			head   *types.Header
+			err    error
+		)
+		// The finalized block is used as the chain-freeze indicator for the
+		// multiDatabase feature; when multiDatabase is false, keep 90,000 blocks in the db.
+		if f.multiDatabase {
+			threshold, err = f.freezeThreshold(nfdb)
+			if err != nil {
+				backoff = true
+				log.Debug("Current full block not old enough to freeze", "err", err)
+				continue
+			}
+			frozen = f.frozen.Load()
+
+			// Short circuit if the blocks below threshold are already frozen.
+			if frozen != 0 && frozen-1 >= threshold {
+				backoff = true
+				log.Debug("Ancient blocks frozen already", "threshold", threshold, "frozen", frozen)
+				continue
+			}
+
+			hash = ReadHeadBlockHash(nfdb)
+			if hash == (common.Hash{}) {
+				log.Debug("Current full block hash unavailable") // new chain, empty database
+				backoff = true
+				continue
+			}
+			number = ReadHeaderNumber(nfdb, hash)
+			if number == nil {
+				log.Error("Current full block number unavailable", "hash", hash)
+				backoff = true
+				continue
+			}
+			head = ReadHeader(nfdb, hash, *number)
+			if head == nil {
+				log.Error("Current full block unavailable", "number", *number, "hash", hash)
+				backoff = true
+				continue
+			}
+
+			first = frozen
+			last = threshold
+			if last-first+1 > freezerBatchLimit {
+				last = freezerBatchLimit + first - 1
+			}
+		} else {
+			// Retrieve the freezing threshold.
+			hash = ReadHeadBlockHash(nfdb)
+			if hash == (common.Hash{}) {
+				log.Debug("Current full block hash unavailable") // new chain, empty database
+				backoff = true
+				continue
+			}
+			number = ReadHeaderNumber(nfdb, hash)
+			threshold = f.threshold.Load()
+			frozen = f.frozen.Load()
+			switch {
+			case number == nil:
+				log.Error("Current full block number unavailable", "hash", hash)
+				backoff = true
+				continue
+			case *number < threshold:
+				log.Debug("Current full block not old enough to freeze", "number", *number, "hash", hash, "delay", threshold)
+				backoff = true
+				continue
+			case *number-threshold <= frozen:
+				log.Debug("Ancient blocks frozen already", "number", *number, "hash", hash, "frozen", frozen)
+				backoff = true
+				continue
+			}
+			head = ReadHeader(nfdb, hash, *number)
+			if head == nil {
+				log.Error("Current full block unavailable", "number", *number, "hash", hash)
+				backoff = true
+				continue
+			}
+			first, _ = f.Ancients()
+			last = *number - threshold
+			if last-first > freezerBatchLimit {
+				last = first + freezerBatchLimit
+			}
+		}

+		// check the env before the chain freeze; it must wait when the env is necessary
+		if err := f.checkFreezerEnv(); err != nil {
+			f.waitEnvTimes++
+			if f.waitEnvTimes%30 == 0 {
+				log.Warn("Freezer needs the related env and may wait for a while; not an issue when no blocks are being imported", "err", err)
+				return
+			}
+			backoff = true
+			continue
+		}

 		// Seems we have data ready to be frozen, process in usable batches
 		var (
-			start    = time.Now()
-			first, _ = f.Ancients()
-			limit    = *number - threshold
+			start = time.Now()
 		)
-		if limit-first > freezerBatchLimit {
-			limit = first + freezerBatchLimit
-		}
-		ancients, err := f.freezeRangeWithBlobs(nfdb, first, limit)
+		ancients, err := f.freezeRangeWithBlobs(nfdb, first, last)
 		if err != nil {
 			log.Error("Error in block freeze operation", "err", err)
 			backoff = true
 			continue
 		}
 		// Batch of blocks have been frozen, flush them before wiping from leveldb
 		if err := f.Sync(); err != nil {
 			log.Crit("Failed to flush frozen tables", "err", err)
 		}
 		// Wipe out all data from the active database
 		batch := db.NewBatch()
 		for i := 0; i < len(ancients); i++ {
@@ -358,13 +463,16 @@ func (f *chainFreezer) freezeRangeWithBlobs(nfdb *nofreezedb, number, limit uint
 	return hashes, err
 }

+// freezeRange moves a batch of chain segments from the fast database to the freezer.
+// The parameters (number, limit) specify the relevant block range, both of which
+// are included.
 func (f *chainFreezer) freezeRange(nfdb *nofreezedb, number, limit uint64) (hashes []common.Hash, err error) {
 	if number > limit {
 		return nil, nil
 	}
+	env, _ := f.freezeEnv.Load().(*ethdb.FreezerEnv)

-	hashes = make([]common.Hash, 0, limit-number)
+	hashes = make([]common.Hash, 0, limit-number+1)
 	_, err = f.ModifyAncients(func(op ethdb.AncientWriteOp) error {
 		for ; number <= limit; number++ {
 			// Retrieve all the components of the canonical block.
@@ -437,14 +545,7 @@ func (f *chainFreezer) checkFreezerEnv() error {
 	if exist {
 		return nil
 	}
-	blobFrozen, err := f.TableAncients(ChainFreezerBlobSidecarTable)
-	if err != nil {
-		return err
-	}
-	if blobFrozen > 0 {
-		return missFreezerEnvErr
-	}
-	return nil
+	return missFreezerEnvErr
 }

 func isCancun(env *ethdb.FreezerEnv, num *big.Int, time uint64) bool {
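
To make the freezeThreshold formula concrete: with params.FullImmutabilityThreshold at 90,000, a head of 1,000,000 and a finalized block of 950,000 freeze up to block 950,000, because finality outranks the age cutoff of 910,000; without a finalized block, the age cutoff alone applies. A self-checking sketch of just the arithmetic:

    package main

    import (
    	"fmt"

    	"github.com/ethereum/go-ethereum/params"
    )

    // threshold reproduces the formula: max(final, head-FullImmutabilityThreshold).
    func threshold(head, final uint64) uint64 {
    	var headLimit uint64
    	if head > params.FullImmutabilityThreshold { // 90,000 blocks
    		headLimit = head - params.FullImmutabilityThreshold
    	}
    	if final > headLimit {
    		return final
    	}
    	return headLimit
    }

    func main() {
    	fmt.Println(threshold(1_000_000, 950_000)) // 950000: finality wins
    	fmt.Println(threshold(1_000_000, 0))       // 910000: age cutoff alone
    }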


@@ -35,16 +35,16 @@ import (
 // injects into the database the block hash->number mappings.
 func InitDatabaseFromFreezer(db ethdb.Database) {
 	// If we can't access the freezer or it's empty, abort
-	frozen, err := db.ItemAmountInAncient()
+	frozen, err := db.BlockStore().ItemAmountInAncient()
 	if err != nil || frozen == 0 {
 		return
 	}
 	var (
-		batch  = db.NewBatch()
+		batch  = db.BlockStore().NewBatch()
 		start  = time.Now()
 		logged = start.Add(-7 * time.Second) // Unindex during import is fast, don't double log
 		hash   common.Hash
-		offset = db.AncientOffSet()
+		offset = db.BlockStore().AncientOffSet()
 	)
 	for i := uint64(0) + offset; i < frozen+offset; i++ {
 		// We read 100K hashes at a time, for a total of 3.2M
@@ -52,7 +52,7 @@ func InitDatabaseFromFreezer(db ethdb.Database) {
 		if i+count > frozen+offset {
 			count = frozen + offset - i
 		}
-		data, err := db.AncientRange(ChainFreezerHashTable, i, count, 32*count)
+		data, err := db.BlockStore().AncientRange(ChainFreezerHashTable, i, count, 32*count)
 		if err != nil {
 			log.Crit("Failed to init database from freezer", "err", err)
 		}
@@ -80,8 +80,8 @@ func InitDatabaseFromFreezer(db ethdb.Database) {
 	}
 	batch.Reset()

-	WriteHeadHeaderHash(db, hash)
-	WriteHeadFastBlockHash(db, hash)
+	WriteHeadHeaderHash(db.BlockStore(), hash)
+	WriteHeadFastBlockHash(db.BlockStore(), hash)
 	log.Info("Initialized database from freezer", "blocks", frozen, "elapsed", common.PrettyDuration(time.Since(start)))
 }
@@ -100,7 +100,7 @@ func iterateTransactions(db ethdb.Database, from uint64, to uint64, reverse bool
 		number uint64
 		rlp    rlp.RawValue
 	}
-	if offset := db.AncientOffSet(); offset > from {
+	if offset := db.BlockStore().AncientOffSet(); offset > from {
 		from = offset
 	}
 	if to <= from {
@@ -187,7 +187,7 @@ func iterateTransactions(db ethdb.Database, from uint64, to uint64, reverse bool
 // signal received.
 func indexTransactions(db ethdb.Database, from uint64, to uint64, interrupt chan struct{}, hook func(uint64) bool, report bool) {
 	// short circuit for invalid range
-	if offset := db.AncientOffSet(); offset > from {
+	if offset := db.BlockStore().AncientOffSet(); offset > from {
 		from = offset
 	}
 	if from >= to {
@@ -286,7 +286,7 @@ func indexTransactionsForTesting(db ethdb.Database, from uint64, to uint64, inte
 // signal received.
 func unindexTransactions(db ethdb.Database, from uint64, to uint64, interrupt chan struct{}, hook func(uint64) bool, report bool) {
 	// short circuit for invalid range
-	if offset := db.AncientOffSet(); offset > from {
+	if offset := db.BlockStore().AncientOffSet(); offset > from {
 		from = offset
 	}
 	if from >= to {


@ -26,14 +26,13 @@ import (
"strings" "strings"
"time" "time"
"github.com/olekukonko/tablewriter"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/ethdb/leveldb" "github.com/ethereum/go-ethereum/ethdb/leveldb"
"github.com/ethereum/go-ethereum/ethdb/memorydb" "github.com/ethereum/go-ethereum/ethdb/memorydb"
"github.com/ethereum/go-ethereum/ethdb/pebble" "github.com/ethereum/go-ethereum/ethdb/pebble"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/olekukonko/tablewriter"
) )
// freezerdb is a database wrapper that enables freezer data retrievals. // freezerdb is a database wrapper that enables freezer data retrievals.
@ -44,6 +43,7 @@ type freezerdb struct {
ethdb.AncientFreezer ethdb.AncientFreezer
diffStore ethdb.KeyValueStore diffStore ethdb.KeyValueStore
stateStore ethdb.Database stateStore ethdb.Database
blockStore ethdb.Database
} }
func (frdb *freezerdb) StateStoreReader() ethdb.Reader { func (frdb *freezerdb) StateStoreReader() ethdb.Reader {
@ -53,6 +53,20 @@ func (frdb *freezerdb) StateStoreReader() ethdb.Reader {
return frdb.stateStore return frdb.stateStore
} }
func (frdb *freezerdb) BlockStoreReader() ethdb.Reader {
if frdb.blockStore == nil {
return frdb
}
return frdb.blockStore
}
func (frdb *freezerdb) BlockStoreWriter() ethdb.Writer {
if frdb.blockStore == nil {
return frdb
}
return frdb.blockStore
}
// AncientDatadir returns the path of root ancient directory. // AncientDatadir returns the path of root ancient directory.
func (frdb *freezerdb) AncientDatadir() (string, error) { func (frdb *freezerdb) AncientDatadir() (string, error) {
return frdb.ancientRoot, nil return frdb.ancientRoot, nil
@ -78,6 +92,11 @@ func (frdb *freezerdb) Close() error {
errs = append(errs, err) errs = append(errs, err)
} }
} }
if frdb.blockStore != nil {
if err := frdb.blockStore.Close(); err != nil {
errs = append(errs, err)
}
}
if len(errs) != 0 { if len(errs) != 0 {
return fmt.Errorf("%v", errs) return fmt.Errorf("%v", errs)
} }
@ -99,6 +118,13 @@ func (frdb *freezerdb) StateStore() ethdb.Database {
return frdb.stateStore return frdb.stateStore
} }
func (frdb *freezerdb) GetStateStore() ethdb.Database {
if frdb.stateStore != nil {
return frdb.stateStore
}
return frdb
}
func (frdb *freezerdb) SetStateStore(state ethdb.Database) { func (frdb *freezerdb) SetStateStore(state ethdb.Database) {
if frdb.stateStore != nil { if frdb.stateStore != nil {
frdb.stateStore.Close() frdb.stateStore.Close()
@ -106,6 +132,25 @@ func (frdb *freezerdb) SetStateStore(state ethdb.Database) {
frdb.stateStore = state frdb.stateStore = state
} }
func (frdb *freezerdb) BlockStore() ethdb.Database {
if frdb.blockStore != nil {
return frdb.blockStore
} else {
return frdb
}
}
func (frdb *freezerdb) SetBlockStore(block ethdb.Database) {
if frdb.blockStore != nil {
frdb.blockStore.Close()
}
frdb.blockStore = block
}
func (frdb *freezerdb) HasSeparateBlockStore() bool {
return frdb.blockStore != nil
}
// Freeze is a helper method used for external testing to trigger and block until // Freeze is a helper method used for external testing to trigger and block until
// a freeze cycle completes, without having to sleep for a minute to trigger the // a freeze cycle completes, without having to sleep for a minute to trigger the
// automatic background run. // automatic background run.
@ -118,7 +163,6 @@ func (frdb *freezerdb) Freeze(threshold uint64) error {
frdb.AncientStore.(*chainFreezer).threshold.Store(old) frdb.AncientStore.(*chainFreezer).threshold.Store(old)
}(frdb.AncientStore.(*chainFreezer).threshold.Load()) }(frdb.AncientStore.(*chainFreezer).threshold.Load())
frdb.AncientStore.(*chainFreezer).threshold.Store(threshold) frdb.AncientStore.(*chainFreezer).threshold.Store(threshold)
// Trigger a freeze cycle and block until it's done // Trigger a freeze cycle and block until it's done
trigger := make(chan struct{}, 1) trigger := make(chan struct{}, 1)
frdb.AncientStore.(*chainFreezer).trigger <- trigger frdb.AncientStore.(*chainFreezer).trigger <- trigger
@ -135,6 +179,7 @@ type nofreezedb struct {
ethdb.KeyValueStore ethdb.KeyValueStore
diffStore ethdb.KeyValueStore diffStore ethdb.KeyValueStore
stateStore ethdb.Database stateStore ethdb.Database
blockStore ethdb.Database
} }
// HasAncient returns an error as we don't have a backing chain freezer. // HasAncient returns an error as we don't have a backing chain freezer.
@ -157,7 +202,7 @@ func (db *nofreezedb) Ancients() (uint64, error) {
return 0, errNotSupported return 0, errNotSupported
} }
// Ancients returns an error as we don't have a backing chain freezer. // ItemAmountInAncient returns an error as we don't have a backing chain freezer.
func (db *nofreezedb) ItemAmountInAncient() (uint64, error) { func (db *nofreezedb) ItemAmountInAncient() (uint64, error) {
return 0, errNotSupported return 0, errNotSupported
} }
@ -218,6 +263,13 @@ func (db *nofreezedb) SetStateStore(state ethdb.Database) {
db.stateStore = state db.stateStore = state
} }
func (db *nofreezedb) GetStateStore() ethdb.Database {
if db.stateStore != nil {
return db.stateStore
}
return db
}
func (db *nofreezedb) StateStoreReader() ethdb.Reader { func (db *nofreezedb) StateStoreReader() ethdb.Reader {
if db.stateStore != nil { if db.stateStore != nil {
return db.stateStore return db.stateStore
@ -225,6 +277,35 @@ func (db *nofreezedb) StateStoreReader() ethdb.Reader {
return db return db
} }
func (db *nofreezedb) BlockStore() ethdb.Database {
if db.blockStore != nil {
return db.blockStore
}
return db
}
func (db *nofreezedb) SetBlockStore(block ethdb.Database) {
db.blockStore = block
}
func (db *nofreezedb) HasSeparateBlockStore() bool {
return db.blockStore != nil
}
func (db *nofreezedb) BlockStoreReader() ethdb.Reader {
if db.blockStore != nil {
return db.blockStore
}
return db
}
func (db *nofreezedb) BlockStoreWriter() ethdb.Writer {
if db.blockStore != nil {
return db.blockStore
}
return db
}
func (db *nofreezedb) ReadAncients(fn func(reader ethdb.AncientReaderOp) error) (err error) { func (db *nofreezedb) ReadAncients(fn func(reader ethdb.AncientReaderOp) error) (err error) {
// Unlike other ancient-related methods, this method does not return // Unlike other ancient-related methods, this method does not return
// errNotSupported when invoked. // errNotSupported when invoked.
@ -266,6 +347,111 @@ func NewDatabase(db ethdb.KeyValueStore) ethdb.Database {
return &nofreezedb{KeyValueStore: db} return &nofreezedb{KeyValueStore: db}
} }
type emptyfreezedb struct {
ethdb.KeyValueStore
}
// HasAncient returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) HasAncient(kind string, number uint64) (bool, error) {
return false, nil
}
// Ancient returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) Ancient(kind string, number uint64) ([]byte, error) {
return nil, nil
}
// AncientRange returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) AncientRange(kind string, start, max, maxByteSize uint64) ([][]byte, error) {
return nil, nil
}
// Ancients returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) Ancients() (uint64, error) {
return 0, nil
}
// ItemAmountInAncient returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) ItemAmountInAncient() (uint64, error) {
return 0, nil
}
// Tail returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) Tail() (uint64, error) {
return 0, nil
}
// AncientSize returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) AncientSize(kind string) (uint64, error) {
return 0, nil
}
// ModifyAncients returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) ModifyAncients(func(ethdb.AncientWriteOp) error) (int64, error) {
return 0, nil
}
// TruncateHead returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) TruncateHead(items uint64) (uint64, error) {
return 0, nil
}
// TruncateTail returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) TruncateTail(items uint64) (uint64, error) {
return 0, nil
}
// TruncateTableTail returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) TruncateTableTail(kind string, tail uint64) (uint64, error) {
return 0, nil
}
// ResetTable returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) ResetTable(kind string, startAt uint64, onlyEmpty bool) error {
return nil
}
// Sync returns nil for pruned db that we don't have a backing chain freezer.
func (db *emptyfreezedb) Sync() error {
return nil
}
func (db *emptyfreezedb) DiffStore() ethdb.KeyValueStore { return db }
func (db *emptyfreezedb) SetDiffStore(diff ethdb.KeyValueStore) {}
func (db *emptyfreezedb) StateStore() ethdb.Database { return db }
func (db *emptyfreezedb) GetStateStore() ethdb.Database { return db }
func (db *emptyfreezedb) SetStateStore(state ethdb.Database) {}
func (db *emptyfreezedb) StateStoreReader() ethdb.Reader { return db }
func (db *emptyfreezedb) BlockStore() ethdb.Database { return db }
func (db *emptyfreezedb) SetBlockStore(block ethdb.Database) {}
func (db *emptyfreezedb) HasSeparateBlockStore() bool { return false }
func (db *emptyfreezedb) BlockStoreReader() ethdb.Reader { return db }
func (db *emptyfreezedb) BlockStoreWriter() ethdb.Writer { return db }
func (db *emptyfreezedb) ReadAncients(fn func(reader ethdb.AncientReaderOp) error) (err error) {
return nil
}
func (db *emptyfreezedb) AncientOffSet() uint64 { return 0 }
// MigrateTable returns nil for a pruned db that has no backing chain freezer.
func (db *emptyfreezedb) MigrateTable(kind string, convert convertLegacyFn) error {
return nil
}
// AncientDatadir returns an empty string for a pruned db that has no backing chain freezer.
func (db *emptyfreezedb) AncientDatadir() (string, error) {
return "", nil
}
func (db *emptyfreezedb) SetupFreezerEnv(env *ethdb.FreezerEnv) error {
return nil
}
// NewEmptyFreezeDB is used by CLI commands such as `geth db inspect` on a pruned db
// that has no backing chain freezer.
// WARNING: it must only be used in that case.
func NewEmptyFreezeDB(db ethdb.KeyValueStore) ethdb.Database {
return &emptyfreezedb{KeyValueStore: db}
}
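For orientation, here is a minimal sketch of what this shim buys a read-only tool landing on a pruned datadir: every ancient accessor degrades to a zero value instead of an error, so inspection can proceed. The memorydb import and the exported ChainFreezerHashTable name are taken from upstream go-ethereum; the snippet is illustrative, not the actual CLI path.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb/memorydb"
)

func main() {
	// Wrap a throwaway key-value store in the empty freezer shim, mirroring
	// what the pruned-db branch of NewDatabaseWithFreezer does for read-only CLIs.
	db := rawdb.NewEmptyFreezeDB(memorydb.New())

	frozen, _ := db.Ancients()                            // 0: nothing behind the shim
	blob, _ := db.Ancient(rawdb.ChainFreezerHashTable, 0) // nil, and no error either
	fmt.Println(frozen, len(blob))
}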
// NewFreezerDb only creates a freezer without a statedb.
func NewFreezerDb(db ethdb.KeyValueStore, frz, namespace string, readonly bool, newOffSet uint64) (*Freezer, error) {
// Create the idle freezer instance; this operation should be atomic to avoid a mismatch between the offset and the ancientDB.
@ -306,7 +492,7 @@ func resolveChainFreezerDir(ancient string) string {
// value data store with a freezer moving immutable chain segments into cold
// storage. The passed ancient indicates the path of root ancient directory
// where the chain freezer can be opened.
-func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace string, readonly, disableFreeze, isLastOffset, pruneAncientData bool) (ethdb.Database, error) {
+func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace string, readonly, disableFreeze, isLastOffset, pruneAncientData, multiDatabase bool) (ethdb.Database, error) {
var offset uint64
// The offset of ancientDB should be handled differently in different scenarios.
if isLastOffset {
@ -315,6 +501,12 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
offset = ReadOffSetOfCurrentAncientFreezer(db)
}
// This case covers running a read-only CLI such as `geth db inspect` against a pruned db
if !disableFreeze && readonly && ReadAncientType(db) == PruneFreezerType {
log.Warn("Disk db is pruned, using an empty freezer db for CLI")
return NewEmptyFreezeDB(db), nil
}
if pruneAncientData && !disableFreeze && !readonly {
frdb, err := newPrunedFreezer(resolveChainFreezerDir(ancient), db, offset)
if err != nil {
@ -342,9 +534,18 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
}
// Create the idle freezer instance
-frdb, err := newChainFreezer(resolveChainFreezerDir(ancient), namespace, readonly, offset)
+frdb, err := newChainFreezer(resolveChainFreezerDir(ancient), namespace, readonly, offset, multiDatabase)
// The freezerdb is pre-created here because the validation logic below needs
// interfaces that only the assembled database type provides; the same instance
// is returned on the success path at the end of this function.
freezerDb := &freezerdb{
ancientRoot: ancient,
KeyValueStore: db,
AncientStore: frdb,
AncientFreezer: frdb,
}
if err != nil {
-printChainMetadata(db)
+printChainMetadata(freezerDb)
return nil, err
}
@ -380,10 +581,10 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
// the freezer and the key-value store.
frgenesis, err := frdb.Ancient(ChainFreezerHashTable, 0)
if err != nil {
-printChainMetadata(db)
+printChainMetadata(freezerDb)
return nil, fmt.Errorf("failed to retrieve genesis from ancient %v", err)
} else if !bytes.Equal(kvgenesis, frgenesis) {
-printChainMetadata(db)
+printChainMetadata(freezerDb)
return nil, fmt.Errorf("genesis mismatch: %#x (leveldb) != %#x (ancients)", kvgenesis, frgenesis)
}
// Key-value store and freezer belong to the same network. Ensure that they
@ -391,7 +592,7 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
if kvhash, _ := db.Get(headerHashKey(frozen)); len(kvhash) == 0 {
// Subsequent header after the freezer limit is missing from the database.
// Reject startup if the database has a more recent head.
-if head := *ReadHeaderNumber(db, ReadHeadHeaderHash(db)); head > frozen-1 {
+if head := *ReadHeaderNumber(freezerDb, ReadHeadHeaderHash(freezerDb)); head > frozen-1 {
// Find the smallest block stored in the key-value store
// in range of [frozen, head]
var number uint64
@ -401,7 +602,7 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
}
}
// We are about to exit on error. Print database metadata before exiting
-printChainMetadata(db)
+printChainMetadata(freezerDb)
return nil, fmt.Errorf("gap in the chain between ancients [0 - #%d] and leveldb [#%d - #%d] ",
frozen-1, number, head)
}
@ -416,11 +617,11 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
// store, otherwise we'll end up missing data. We check block #1 to decide
// if we froze anything previously or not, but do take care of databases with
// only the genesis block.
-if ReadHeadHeaderHash(db) != common.BytesToHash(kvgenesis) {
+if ReadHeadHeaderHash(freezerDb) != common.BytesToHash(kvgenesis) {
// Key-value store contains more data than the genesis block, make sure we
// didn't freeze anything yet.
if kvblob, _ := db.Get(headerHashKey(1)); len(kvblob) == 0 {
-printChainMetadata(db)
+printChainMetadata(freezerDb)
return nil, errors.New("ancient chain segments already extracted, please set --datadir.ancient to the correct path")
}
// Block #1 is still in the database, we're allowed to init a new freezer
@ -429,6 +630,7 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
// freezer.
}
}
// non-pruned ancient mode started successfully
if !readonly {
WriteAncientType(db, EntireFreezerType)
@ -441,12 +643,7 @@ func NewDatabaseWithFreezer(db ethdb.KeyValueStore, ancient string, namespace st
frdb.wg.Done()
}()
}
-return &freezerdb{
-ancientRoot: ancient,
-KeyValueStore: db,
-AncientStore: frdb,
-AncientFreezer: frdb,
-}, nil
+return freezerDb, nil
}
// NewMemoryDatabase creates an ephemeral in-memory key-value database without a
@ -518,9 +715,12 @@ type OpenOptions struct {
DisableFreeze bool
IsLastOffset bool
PruneAncientData bool
// Ephemeral means that filesystem sync operations should be avoided: data integrity in the face of
// a crash is not important. This option should typically be used in tests.
Ephemeral bool
MultiDataBase bool
}
// openKeyValueDatabase opens a disk-based key-value database, e.g. leveldb or pebble.
@ -565,13 +765,13 @@ func Open(o OpenOptions) (ethdb.Database, error) {
}
if ReadAncientType(kvdb) == PruneFreezerType {
if !o.PruneAncientData {
-log.Warn("Disk db is pruned")
+log.Warn("NOTICE: You're opening a pruned disk db!")
}
}
if len(o.AncientsDirectory) == 0 {
return kvdb, nil
}
-frdb, err := NewDatabaseWithFreezer(kvdb, o.AncientsDirectory, o.Namespace, o.ReadOnly, o.DisableFreeze, o.IsLastOffset, o.PruneAncientData)
+frdb, err := NewDatabaseWithFreezer(kvdb, o.AncientsDirectory, o.Namespace, o.ReadOnly, o.DisableFreeze, o.IsLastOffset, o.PruneAncientData, o.MultiDataBase)
if err != nil {
kvdb.Close()
return nil, err
@ -613,7 +813,7 @@ func AncientInspect(db ethdb.Database) error {
offset := counter(ReadOffSetOfCurrentAncientFreezer(db))
// Get number of ancient rows inside the freezer.
ancients := counter(0)
-if count, err := db.ItemAmountInAncient(); err != nil {
+if count, err := db.BlockStore().ItemAmountInAncient(); err != nil {
log.Error("failed to get the items amount in ancientDB", "err", err)
return err
} else {
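To make the reported range concrete, a worked example with hypothetical numbers: a freezer pruned below block 38,000,000 that still holds 90,000 rows covers blocks 38,000,000 through 38,089,999, i.e. the [offset, offset+items-1] convention the inspect output is expected to follow.

package main

import "fmt"

func main() {
	offset := uint64(38_000_000) // hypothetical prune offset read from the kv store
	items := uint64(90_000)      // hypothetical row count reported by the freezer
	fmt.Println(offset, offset+items-1) // 38000000 38089999: the surviving ancient range
}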
@ -661,6 +861,48 @@ func PruneHashTrieNodeInDataBase(db ethdb.Database) error {
return nil
}
type DataType int
const (
StateDataType DataType = iota
BlockDataType
ChainDataType
Unknown
)
func DataTypeByKey(key []byte) DataType {
switch {
// state
case IsLegacyTrieNode(key, key),
bytes.HasPrefix(key, stateIDPrefix) && len(key) == len(stateIDPrefix)+common.HashLength,
IsAccountTrieNode(key),
IsStorageTrieNode(key):
return StateDataType
// block
case bytes.HasPrefix(key, headerPrefix) && len(key) == (len(headerPrefix)+8+common.HashLength),
bytes.HasPrefix(key, blockBodyPrefix) && len(key) == (len(blockBodyPrefix)+8+common.HashLength),
bytes.HasPrefix(key, blockReceiptsPrefix) && len(key) == (len(blockReceiptsPrefix)+8+common.HashLength),
bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerTDSuffix),
bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerHashSuffix),
bytes.HasPrefix(key, headerNumberPrefix) && len(key) == (len(headerNumberPrefix)+common.HashLength):
return BlockDataType
default:
for _, meta := range [][]byte{
fastTrieProgressKey, persistentStateIDKey, trieJournalKey, snapSyncStatusFlagKey} {
if bytes.Equal(key, meta) {
return StateDataType
}
}
for _, meta := range [][]byte{headHeaderKey, headFinalizedBlockKey, headBlockKey, headFastBlockKey} {
if bytes.Equal(key, meta) {
return BlockDataType
}
}
return ChainDataType
}
}
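A minimal sketch of exercising the classifier; the "h" header prefix and the prefix+8-byte-number+hash key layout are assumptions carried over from the canonical go-ethereum schema.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
)

func main() {
	// A full header key: 1-byte prefix + 8-byte block number + 32-byte hash.
	key := append([]byte("h"), make([]byte, 8+common.HashLength)...)
	// In multi-database mode such keys are routed to the block store.
	fmt.Println(rawdb.DataTypeByKey(key) == rawdb.BlockDataType) // true
}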
// InspectDatabase traverses the entire database and checks the size
// of all different categories of data.
func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
@ -668,10 +910,15 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
defer it.Release()
var trieIter ethdb.Iterator
var blockIter ethdb.Iterator
if db.StateStore() != nil {
trieIter = db.StateStore().NewIterator(keyPrefix, nil)
defer trieIter.Release()
}
if db.HasSeparateBlockStore() {
blockIter = db.BlockStore().NewIterator(keyPrefix, nil)
defer blockIter.Release()
}
var (
count int64
start = time.Now()
@ -801,6 +1048,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
value = trieIter.Value()
size = common.StorageSize(len(key) + len(value))
)
total += size
switch {
case IsLegacyTrieNode(key, value):
@ -814,9 +1062,10 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
default:
var accounted bool
for _, meta := range [][]byte{
-fastTrieProgressKey, persistentStateIDKey, trieJournalKey} {
+fastTrieProgressKey, persistentStateIDKey, trieJournalKey, snapSyncStatusFlagKey} {
if bytes.Equal(key, meta) {
metadata.Add(size)
accounted = true
break
}
}
@ -830,6 +1079,54 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
logged = time.Now()
}
}
log.Info("Inspecting separate state database", "count", count, "elapsed", common.PrettyDuration(time.Since(start)))
}
// inspect separate block db
if blockIter != nil {
count = 0
logged = time.Now()
for blockIter.Next() {
var (
key = blockIter.Key()
value = blockIter.Value()
size = common.StorageSize(len(key) + len(value))
)
total += size
switch {
case bytes.HasPrefix(key, headerPrefix) && len(key) == (len(headerPrefix)+8+common.HashLength):
headers.Add(size)
case bytes.HasPrefix(key, blockBodyPrefix) && len(key) == (len(blockBodyPrefix)+8+common.HashLength):
bodies.Add(size)
case bytes.HasPrefix(key, blockReceiptsPrefix) && len(key) == (len(blockReceiptsPrefix)+8+common.HashLength):
receipts.Add(size)
case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerTDSuffix):
tds.Add(size)
case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerHashSuffix):
numHashPairings.Add(size)
case bytes.HasPrefix(key, headerNumberPrefix) && len(key) == (len(headerNumberPrefix)+common.HashLength):
hashNumPairings.Add(size)
default:
var accounted bool
for _, meta := range [][]byte{headHeaderKey, headFinalizedBlockKey, headBlockKey, headFastBlockKey} {
if bytes.Equal(key, meta) {
metadata.Add(size)
accounted = true
break
}
}
if !accounted {
unaccounted.Add(size)
}
}
count++
if count%1000 == 0 && time.Since(logged) > 8*time.Second {
log.Info("Inspecting separate block database", "count", count, "elapsed", common.PrettyDuration(time.Since(start)))
logged = time.Now()
}
}
log.Info("Inspecting separate block database", "count", count, "elapsed", common.PrettyDuration(time.Since(start)))
}
// Display the database statistics of the key-value store.
stats := [][]string{
@ -856,7 +1153,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
{"Light client", "Bloom trie nodes", bloomTrieNodes.Size(), bloomTrieNodes.Count()}, {"Light client", "Bloom trie nodes", bloomTrieNodes.Size(), bloomTrieNodes.Count()},
} }
// Inspect all registered append-only file store then. // Inspect all registered append-only file store then.
ancients, err := inspectFreezers(db) ancients, err := inspectFreezers(db.BlockStore())
if err != nil { if err != nil {
return err return err
} }
@ -962,7 +1259,7 @@ func DeleteTrieState(db ethdb.Database) error {
}
// printChainMetadata prints out chain metadata to stderr.
-func printChainMetadata(db ethdb.KeyValueStore) {
+func printChainMetadata(db ethdb.Reader) {
fmt.Fprintf(os.Stderr, "Chain metadata\n")
for _, v := range ReadChainMetadata(db) {
fmt.Fprintf(os.Stderr, "  %s\n", strings.Join(v, ": "))
@ -973,7 +1270,7 @@ func printChainMetadata(db ethdb.KeyValueStore) {
// ReadChainMetadata returns a set of key/value pairs that contains information
// about the database chain status. This can be used for diagnostic purposes
// when investigating the state of the node.
-func ReadChainMetadata(db ethdb.KeyValueStore) [][]string {
+func ReadChainMetadata(db ethdb.Reader) [][]string {
pp := func(val *uint64) string {
if val == nil {
return "<nil>"

View File

@ -239,7 +239,7 @@ func (f *Freezer) Ancient(kind string, number uint64) ([]byte, error) {
// - if maxBytes is not specified, 'count' items will be returned if they are present.
func (f *Freezer) AncientRange(kind string, start, count, maxBytes uint64) ([][]byte, error) {
if table := f.tables[kind]; table != nil {
-return table.RetrieveItems(start, count, maxBytes)
+return table.RetrieveItems(start-f.offset, count, maxBytes)
}
return nil, errUnknownTable
}
@ -252,7 +252,7 @@ func (f *Freezer) Ancients() (uint64, error) {
func (f *Freezer) TableAncients(kind string) (uint64, error) {
f.writeLock.RLock()
defer f.writeLock.RUnlock()
-return f.tables[kind].items.Load(), nil
+return f.tables[kind].items.Load() + f.offset, nil
}
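The two adjustments above are inverses of one another: table files index rows locally from zero, while callers address items by absolute chain position. A small sketch of the convention, with hypothetical values:

package main

import "fmt"

func main() {
	const offset = uint64(1_000_000) // hypothetical: items below this were pruned away
	global := uint64(1_000_234)      // absolute item number a caller asks for
	local := global - offset         // row AncientRange actually reads: 234
	tableItems := uint64(500)        // hypothetical local item counter of a table
	fmt.Println(local, tableItems+offset) // 234 1000500: TableAncients' absolute head
}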
// ItemAmountInAncient returns the actual length of current ancientDB.
@ -541,41 +541,6 @@ func gcKvStore(db ethdb.KeyValueStore, ancients []common.Hash, first uint64, fro
}
batch.Reset()
// Step into the future and delete any dangling side chains
if frozen > 0 {
tip := frozen
nfdb := &nofreezedb{KeyValueStore: db}
for len(dangling) > 0 {
drop := make(map[common.Hash]struct{})
for _, hash := range dangling {
log.Debug("Dangling parent from freezer", "number", tip-1, "hash", hash)
drop[hash] = struct{}{}
}
children := ReadAllHashes(db, tip)
for i := 0; i < len(children); i++ {
// Dig up the child and ensure it's dangling
child := ReadHeader(nfdb, children[i], tip)
if child == nil {
log.Error("Missing dangling header", "number", tip, "hash", children[i])
continue
}
if _, ok := drop[child.ParentHash]; !ok {
children = append(children[:i], children[i+1:]...)
i--
continue
}
// Delete all block data associated with the child
log.Debug("Deleting dangling block", "number", tip, "hash", children[i], "parent", child.ParentHash)
DeleteBlock(batch, children[i], tip)
}
dangling = children
tip++
}
if err := batch.Write(); err != nil {
log.Crit("Failed to delete dangling side blocks", "err", err)
}
}
// Log something friendly for the user
context := []interface{}{
"blocks", frozen - first, "elapsed", common.PrettyDuration(time.Since(start)), "number", frozen - 1,

View File

@ -127,6 +127,11 @@ func newFreezerTable(path, name string, disableSnappy, readonly bool) (*freezerT
return newTable(path, name, metrics.NilMeter{}, metrics.NilMeter{}, metrics.NilGauge{}, freezerTableSize, disableSnappy, readonly)
}
// newAdditionTable opens the given path as an addition table.
func newAdditionTable(path, name string, disableSnappy, readonly bool) (*freezerTable, error) {
return openAdditionTable(path, name, metrics.NilMeter{}, metrics.NilMeter{}, metrics.NilGauge{}, freezerTableSize, disableSnappy, readonly)
}
// newTable opens a freezer table, creating the data and index files if they are
// non-existent. Both files are truncated to the shortest common length to ensure
// they don't go out of sync.

View File

@ -8,6 +8,8 @@ import (
"sync/atomic" "sync/atomic"
"time" "time"
"golang.org/x/exp/slices"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
@ -68,26 +70,43 @@ func newPrunedFreezer(datadir string, db ethdb.KeyValueStore, offset uint64) (*p
func (f *prunedfreezer) repair(datadir string) error {
offset := atomic.LoadUint64(&f.frozen)
// compatible freezer
-min := uint64(math.MaxUint64)
+minItems := uint64(math.MaxUint64)
for name, disableSnappy := range chainFreezerNoSnappy {
-table, err := newFreezerTable(datadir, name, disableSnappy, false)
+var (
table *freezerTable
err error
)
if slices.Contains(additionTables, name) {
table, err = newAdditionTable(datadir, name, disableSnappy, false)
} else {
table, err = newFreezerTable(datadir, name, disableSnappy, false)
}
if err != nil {
return err
}
// addition tables only need to be aligned at the head
if slices.Contains(additionTables, name) {
if EmptyTable(table) {
continue
}
}
items := table.items.Load()
-if min > items {
-min = items
+if minItems > items {
+minItems = items
}
table.Close()
}
log.Info("Read ancientdb item counts", "items", min)
offset += min
if frozen := ReadFrozenOfAncientFreezer(f.db); frozen > offset { // If the dataset has undergone a prune block, the offset is a non-zero value, otherwise the offset is a zero value.
offset = frozen // The minItems is the value relative to offset
} offset += minItems
atomic.StoreUint64(&f.frozen, offset) // FrozenOfAncientFreezer is the progress of the last prune-freezer freeze.
frozenInDB := ReadFrozenOfAncientFreezer(f.db)
maxOffset := max(offset, frozenInDB)
log.Info("Read ancient db item counts", "items", minItems, "frozen", maxOffset)
atomic.StoreUint64(&f.frozen, maxOffset)
if err := f.Sync(); err != nil {
return nil
}
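A worked example of the repair arithmetic with hypothetical numbers: the db was pruned up to item 5,000, the shortest table holds 120 rows, but the recorded freeze progress says 5,150 because a previous freeze ran ahead of the table sync; the larger value wins.

package main

import "fmt"

func main() {
	offset := uint64(5_000)     // hypothetical prune offset before repair
	minItems := uint64(120)     // hypothetical shortest table length
	frozenInDB := uint64(5_150) // hypothetical progress recorded in the kv store
	frozen := max(offset+minItems, frozenInDB) // head the freezer resumes from
	fmt.Println(frozen)                        // 5150: the recorded progress wins here
}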
@ -138,12 +157,12 @@ func (f *prunedfreezer) AncientOffSet() uint64 {
// MigrateTable processes the entries in a given table in sequence
// converting them to a new format if they're of an old format.
-func (db *prunedfreezer) MigrateTable(kind string, convert convertLegacyFn) error {
+func (f *prunedfreezer) MigrateTable(kind string, convert convertLegacyFn) error {
return errNotSupported
}
// AncientDatadir returns an error as we don't have a backing chain freezer.
-func (db *prunedfreezer) AncientDatadir() (string, error) {
+func (f *prunedfreezer) AncientDatadir() (string, error) {
return "", errNotSupported
}
@ -299,9 +318,8 @@ func (f *prunedfreezer) freeze() {
log.Error("Append ancient err", "number", f.frozen, "hash", hash, "err", err) log.Error("Append ancient err", "number", f.frozen, "hash", hash, "err", err)
break break
} }
if hash != (common.Hash{}) { // may include common.Hash{}, will be delete in gcKvStore
ancients = append(ancients, hash) ancients = append(ancients, hash)
}
} }
// Batch of blocks have been frozen, flush them before wiping from leveldb
if err := f.Sync(); err != nil {

View File

@ -40,6 +40,9 @@ var (
// headFastBlockKey tracks the latest known incomplete block's hash during fast sync.
headFastBlockKey = []byte("LastFast")
// headFinalizedBlockKey tracks the latest known finalized block hash.
headFinalizedBlockKey = []byte("LastFinalized")
// persistentStateIDKey tracks the id of latest stored state(for path-based only).
persistentStateIDKey = []byte("LastStateID")

View File

@ -27,6 +27,26 @@ type table struct {
prefix string
}
func (t *table) BlockStoreReader() ethdb.Reader {
return t
}
func (t *table) BlockStoreWriter() ethdb.Writer {
return t
}
func (t *table) BlockStore() ethdb.Database {
return t
}
func (t *table) SetBlockStore(block ethdb.Database) {
panic("not implemented")
}
func (t *table) HasSeparateBlockStore() bool {
panic("not implemented")
}
// NewTable returns a database object that prefixes all keys with a given string.
func NewTable(db ethdb.Database, prefix string) ethdb.Database {
return &table{
@ -231,6 +251,10 @@ func (t *table) SetStateStore(state ethdb.Database) {
panic("not implement") panic("not implement")
} }
func (t *table) GetStateStore() ethdb.Database {
return nil
}
func (t *table) StateStoreReader() ethdb.Reader {
return nil
}

View File

@ -382,7 +382,7 @@ func (p *BlockPruner) backUpOldDb(name string, cache, handles int, namespace str
log.Info("chainDB opened successfully") log.Info("chainDB opened successfully")
// Get the number of items in old ancient db. // Get the number of items in old ancient db.
itemsOfAncient, err := chainDb.ItemAmountInAncient() itemsOfAncient, err := chainDb.BlockStore().ItemAmountInAncient()
log.Info("the number of items in ancientDB is ", "itemsOfAncient", itemsOfAncient) log.Info("the number of items in ancientDB is ", "itemsOfAncient", itemsOfAncient)
// If we can't access the freezer or it's empty, abort. // If we can't access the freezer or it's empty, abort.
@ -402,7 +402,7 @@ func (p *BlockPruner) backUpOldDb(name string, cache, handles int, namespace str
var oldOffSet uint64
if interrupt {
-// The interrupt scecario within this function is specific for old and new ancientDB exsisted concurrently,
+// The interrupt scenario within this function is specific to the old and new ancientDB existing concurrently;
// we should use the last version of the offset for the old ancientDB, because the current offset
// actually belongs to the new ancientDB_Backup, while what we want is the offset of the ancientDB being backed up.
oldOffSet = rawdb.ReadOffSetOfLastAncientFreezer(chainDb)

View File

@ -933,8 +933,8 @@ func (s *StateDB) copyInternal(doPrefetch bool) *StateDB {
// along with their original values.
state.accounts = copySet(s.accounts)
state.storages = copy2DSet(s.storages)
-state.accountsOrigin = copySet(state.accountsOrigin)
+state.accountsOrigin = copySet(s.accountsOrigin)
-state.storagesOrigin = copy2DSet(state.storagesOrigin)
+state.storagesOrigin = copy2DSet(s.storagesOrigin)
// Deep copy the logs that occurred in the scope of the block
for hash, logs := range s.logs {
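The one-character fix above matters: the old code read from the freshly allocated copy, which is still empty at that point, so accountsOrigin and storagesOrigin were silently dropped from every StateDB copy. A stripped-down illustration of the aliasing mistake; copySet here is a hypothetical stand-in for the real helper.

package main

import "fmt"

// copySet is a simplified stand-in for the StateDB helper of the same name.
func copySet(in map[string][]byte) map[string][]byte {
	out := make(map[string][]byte, len(in))
	for k, v := range in {
		out[k] = append([]byte(nil), v...)
	}
	return out
}

type stateDB struct{ accountsOrigin map[string][]byte }

func main() {
	src := &stateDB{accountsOrigin: map[string][]byte{"addr": {0x01}}}
	dst := &stateDB{}
	dst.accountsOrigin = copySet(dst.accountsOrigin) // buggy: copies the empty destination
	fmt.Println(len(dst.accountsOrigin))             // 0: origin values lost
	dst.accountsOrigin = copySet(src.accountsOrigin) // fixed: copies from the source
	fmt.Println(len(dst.accountsOrigin))             // 1
}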

View File

@ -25,7 +25,7 @@ import (
)
// NewStateSync creates a new state trie download scheduler.
-func NewStateSync(root common.Hash, database ethdb.KeyValueReader, onLeaf func(keys [][]byte, leaf []byte) error, scheme string) *trie.Sync {
+func NewStateSync(root common.Hash, database ethdb.Database, onLeaf func(keys [][]byte, leaf []byte) error, scheme string) *trie.Sync {
// Register the storage slot callback if the external callback is specified.
var onSlot func(keys [][]byte, path []byte, leaf []byte, parent common.Hash, parentPath []byte) error
if onLeaf != nil {

View File

@ -268,7 +268,7 @@ func testIterativeStateSync(t *testing.T, count int, commit bool, bypath bool, s
}
}
batch := dstDb.NewBatch()
-if err := sched.Commit(batch); err != nil {
+if err := sched.Commit(batch, nil); err != nil {
t.Fatalf("failed to commit data: %v", err)
}
batch.Write()
@ -369,7 +369,7 @@ func testIterativeDelayedStateSync(t *testing.T, scheme string) {
nodeProcessed = len(nodeResults)
}
batch := dstDb.NewBatch()
-if err := sched.Commit(batch); err != nil {
+if err := sched.Commit(batch, nil); err != nil {
t.Fatalf("failed to commit data: %v", err)
}
batch.Write()
@ -469,7 +469,7 @@ func testIterativeRandomStateSync(t *testing.T, count int, scheme string) {
}
}
batch := dstDb.NewBatch()
-if err := sched.Commit(batch); err != nil {
+if err := sched.Commit(batch, nil); err != nil {
t.Fatalf("failed to commit data: %v", err)
}
batch.Write()
@ -575,7 +575,7 @@ func testIterativeRandomDelayedStateSync(t *testing.T, scheme string) {
}
}
batch := dstDb.NewBatch()
-if err := sched.Commit(batch); err != nil {
+if err := sched.Commit(batch, nil); err != nil {
t.Fatalf("failed to commit data: %v", err)
}
batch.Write()
@ -688,7 +688,7 @@ func testIncompleteStateSync(t *testing.T, scheme string) {
}
}
batch := dstDb.NewBatch()
-if err := sched.Commit(batch); err != nil {
+if err := sched.Commit(batch, nil); err != nil {
t.Fatalf("failed to commit data: %v", err)
}
batch.Write()

View File

@ -231,6 +231,13 @@ func ApplyTransaction(config *params.ChainConfig, bc ChainContext, author *commo
// ProcessBeaconBlockRoot applies the EIP-4788 system call to the beacon block root
// contract. This method is exported to be used in tests.
func ProcessBeaconBlockRoot(beaconRoot common.Hash, vmenv *vm.EVM, statedb *state.StateDB) {
// Return immediately if beaconRoot equals the zero hash when using the Parlia engine.
if beaconRoot == (common.Hash{}) {
if chainConfig := vmenv.ChainConfig(); chainConfig != nil && chainConfig.Parlia != nil {
return
}
}
// If EIP-4788 is enabled, we need to invoke the beaconroot storage contract with
// the new root
msg := &Message{

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,27 @@
package bohr
import _ "embed"
// contract codes for Mainnet upgrade
var (
//go:embed mainnet/ValidatorContract
MainnetValidatorContract string
//go:embed mainnet/StakeHubContract
MainnetStakeHubContract string
)
// contract codes for Chapel upgrade
var (
//go:embed chapel/ValidatorContract
ChapelValidatorContract string
//go:embed chapel/StakeHubContract
ChapelStakeHubContract string
)
// contract codes for Rialto upgrade
var (
//go:embed rialto/ValidatorContract
RialtoValidatorContract string
//go:embed rialto/StakeHubContract
RialtoStakeHubContract string
)
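These embedded strings are applied at the hard-fork boundary; a minimal sketch of decoding one, where the hex encoding, the TrimSpace, and the import path (BSC keeps the github.com/ethereum/go-ethereum module path) are assumptions mirroring the sibling upgrade packages.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/core/systemcontracts/bohr"
)

func main() {
	// Assumption: the embedded file holds hex-encoded runtime bytecode.
	code, err := hex.DecodeString(strings.TrimSpace(bohr.MainnetValidatorContract))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(code)) // size of the code to be written at the Bohr upgrade
}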

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,15 @@
package bruno
import _ "embed"
// contract codes for Mainnet upgrade
var (
//go:embed mainnet/ValidatorContract
MainnetValidatorContract string
)
// contract codes for Chapel upgrade
var (
//go:embed chapel/ValidatorContract
ChapelValidatorContract string
)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,19 @@
package euler
import _ "embed"
// contract codes for Mainnet upgrade
var (
//go:embed mainnet/ValidatorContract
MainnetValidatorContract string
//go:embed mainnet/SlashContract
MainnetSlashContract string
)
// contract codes for Chapel upgrade
var (
//go:embed chapel/ValidatorContract
ChapelValidatorContract string
//go:embed chapel/SlashContract
ChapelSlashContract string
)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff