* core: use finalized block as the chain freeze indicator (#28683)
* core/rawdb: use max(finality, head-90k) as chain freezing threshold (sketched below)
* core: impl multi database for block data
* core: fix db inspect total size bug
* core: add tips for users who use multi-database
* core: adapt some cmd for multi-database
* core: adapt blockBlobSidecars
* core: fix freezer readHeader bug
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
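A minimal sketch of the freezing threshold above (illustrative names, not the actual code):

    // freezeThreshold picks the chain freezing limit: the finalized block,
    // unless head minus the 90k-block safety margin is higher.
    func freezeThreshold(head, finalized uint64) uint64 {
        const margin = 90_000
        var fallback uint64
        if head > margin {
            fallback = head - margin
        }
        if finalized > fallback {
            return finalized
        }
        return fallback
    }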
* ci: temp enable blobtx branch ci run;
* Switch ON blobpool & ensure Cancun hardfork can occur (#2223)
* feat: support blob storage & miscs; (#2229)
* chainconfig: use cancun fork for BSC;
* feat: fill WithdrawalsHash when BSC enables the Cancun fork;
* rawdb: support to CRUD blobs;
* freezer: support to freeze block blobs;
* blockchain: add blob cache & blob query helper;
* freezer: refactor addition table logic, add uts;
* blobexpiry: add extra expiry time, and logs;
* parlia: implement IsDataAvailable function;
* blob: refactor blob transfer logic;
* blob: support config blob extra reserve;
* blockchain: support importing blocks with blob & blobGasFee; (#2260)
* blob: implement min&max gas price logic;
* blockchain: support importing side chains;
* blobpool: reject the banned address;
* blockchain: add chasing head for DA check;
* params: update blob related config;
* blockchain: optimize data availability checking performance;
* params: modify blob related params;
* gasprice: support BEP-336 blob gas price calculation (sketched at the end of this log);
* blobTx: mining + broadcasting (#2253)
* blobtx mining pass (#2282)
* Sidecar fetching changes for 4844 (#2283)
* fix failed check for WithdrawalsHash (#2276)
* eth: include sidecars in filtering of body
* core: refactor sidecars name
* eth: sidecars type refactor
* core: remove extra from bad merge
* eth: fix handlenewblock test after merge
* Implement eth_getBlobSidecars && eth_getBlobSidecarByTxHash (#2286)
* execution: add blob gas fee reward to system;
* syncing: support blob syncing & DA checking;
* naming: rename blobs to sidecars;
* fix the semantics of WithXXX (#2293)
* config: reduce sidecar cache to 1024 and rename (#2297)
* fix: Withdrawals turn from nil into empty when BlockBody has Sidecars (#2301)
* internal/api_test: add test case for eth_getBlobSidecars && eth_getBlobSidecarByTxHash (#2300)
* consensus/misc: rollback CalcBlobFee (#2306)
* flags: add new flags to override blobs' params;
* freezer: fix blob ancient save error;
* blobsidecar: add new sidecar struct with metadata; (#2315)
* core/rawdb: optimize write block with sidecars (#2318)
* core: more checks for validity of sidecars
* mev: add TxIndex for mev bid (#2325)
* remove useless Config() (#2326)
* fix WithSidecars (#2327)
* fix: fix mined block sidecar issue; (#2328)
* fix WithSidecars (#2329)
---------
Co-authored-by: GalaIO <GalaIO@users.noreply.github.com>
Co-authored-by: buddho <galaxystroller@gmail.com>
Co-authored-by: Satyajit Das <emailtovamos@gmail.com>
Co-authored-by: Eric <45141191+zlacfzy@users.noreply.github.com>
Co-authored-by: zzzckck <152148891+zzzckck@users.noreply.github.com>
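For reference on the BEP-336 item above: the blob base fee follows the EIP-4844 pricing
curve, i.e. an integer approximation of minBlobBaseFee * e^(excessBlobGas/updateFraction).
A sketch of that calculation (function and parameter names are illustrative):

    import "math/big"

    // calcBlobFee approximates minBlobBaseFee * e^(excessBlobGas/updateFraction)
    // using the integer Taylor-expansion loop from the EIP-4844 spec.
    func calcBlobFee(excessBlobGas uint64, minBlobBaseFee, updateFraction *big.Int) *big.Int {
        return fakeExponential(minBlobBaseFee, new(big.Int).SetUint64(excessBlobGas), updateFraction)
    }

    // fakeExponential computes factor * e^(numerator/denominator) in integer math.
    func fakeExponential(factor, numerator, denominator *big.Int) *big.Int {
        output := new(big.Int)
        accum := new(big.Int).Mul(factor, denominator)
        for i := 1; accum.Sign() > 0; i++ {
            output.Add(output, accum)
            // accum = accum * numerator / (denominator * i)
            accum.Mul(accum, numerator)
            accum.Div(accum, denominator)
            accum.Div(accum, big.NewInt(int64(i)))
        }
        return output.Div(output, denominator)
    }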
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interface{}.
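A minimal sketch of such a typed cache, assuming Go generics and hashicorp-style method
names (the real implementation differs in detail, e.g. it avoids container/list):

    import "container/list"

    // Cache is a typed LRU cache. Capacity must be positive.
    type Cache[K comparable, V any] struct {
        cap   int
        order *list.List // front = most recently used
        items map[K]*list.Element
    }

    type entry[K comparable, V any] struct {
        key K
        val V
    }

    func NewCache[K comparable, V any](capacity int) *Cache[K, V] {
        return &Cache[K, V]{cap: capacity, order: list.New(), items: make(map[K]*list.Element, capacity)}
    }

    // Add inserts or updates a value, evicting the least recently used entry
    // when at capacity. It reports whether an eviction occurred. The evicted
    // list element is reused, so no new element is allocated at capacity.
    func (c *Cache[K, V]) Add(key K, value V) (evicted bool) {
        if el, ok := c.items[key]; ok {
            c.order.MoveToFront(el)
            el.Value.(*entry[K, V]).val = value
            return false
        }
        if c.order.Len() >= c.cap {
            el := c.order.Back()
            old := el.Value.(*entry[K, V])
            delete(c.items, old.key)
            old.key, old.val = key, value
            c.order.MoveToFront(el)
            c.items[key] = el
            return true
        }
        c.items[key] = c.order.PushFront(&entry[K, V]{key: key, val: value})
        return false
    }

    // Get returns the value for key and marks it as recently used.
    func (c *Cache[K, V]) Get(key K) (V, bool) {
        if el, ok := c.items[key]; ok {
            c.order.MoveToFront(el)
            return el.Value.(*entry[K, V]).val, true
        }
        var zero V
        return zero, false
    }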
* core: recover the state in SetChainHead if the head state is missing
* core: disable test logging
* core: address comment from martin
* core: improve log level in case state is recovered
* core, eth, les, light: rename SetChainHead to SetCanonical
This PR reduces the amount of work we do when answering header queries, e.g. when a peer
is syncing from us.
For some items, e.g. block bodies, when we read the RLP data from the database, we plug it
directly into the response package. We didn't do that for headers; instead we read the
header RLP, decoded it into types.Header, and re-encoded it back to RLP. This PR changes
that to keep the data in RLP form as much as possible. When a node is syncing from us, it
typically requests 192
contiguous headers. On master it has the following effect:
- For headers not in ancient: 2 db lookups. One for translating hash->number (even though
the request is by number), and another for reading by hash (this latter one is sometimes
cached).
- For headers in ancient: 1 file lookup/syscall for translating hash->number (even though
the request is by number), and another for reading the header itself. After this, it
also performs a hashing of the header to ensure that the hash is what was expected.

In this PR, I instead move the logic for "give me a sequence of blocks" into the lower
layers, where the database can determine how and what to read from leveldb and/or
ancients.
There are basically four types of requests; three of them are improved this way. The
fourth, by hash going backwards, is trickier to optimize. However, since we know that
the gap is 0, we can look up by the parentHash and still shave off all the number->hash
lookups.
The gapped collection can be optimized similarly, as a follow-up, at least in three out of
four cases.
Co-authored-by: Felix Lange <fjl@twurst.com>
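A simplified sketch of the lower-level read path (the real accessor in core/rawdb also
handles ancients and backward iteration; treat the shape here as illustrative):

    // readHeaderRange returns RLP-encoded headers for [number, number+count),
    // stopping at the first gap; the raw bytes can be plugged straight into
    // the response without a decode/re-encode round trip.
    func readHeaderRange(db ethdb.Reader, number, count uint64) []rlp.RawValue {
        headers := make([]rlp.RawValue, 0, count)
        for i := uint64(0); i < count; i++ {
            hash := rawdb.ReadCanonicalHash(db, number+i)
            if hash == (common.Hash{}) {
                break // past the head of the canonical chain
            }
            data := rawdb.ReadHeaderRLP(db, hash, number+i)
            if len(data) == 0 {
                break
            }
            headers = append(headers, data)
        }
        return headers
    }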
* all: work for eth1/2 transition
* consensus/beacon, eth: change beacon difficulty to 0
* eth: updates
* all: add terminalBlockDifficulty config, fix rebasing issues
* eth: implemented merge interop spec
* internal/ethapi: update to v1.0.0.alpha.2
This commit updates the code to the new spec, moving payloadId into
its own object. It also fixes an issue with finalizing an empty blockhash,
and properly sets the basefee.
* all: sync polishes, other fixes + refactors
* core, eth: correct semantics for LeavePoW, EnterPoS
* core: fixed rebasing artifacts
* core: light: performance improvements
* core: use keyed field (f)
* core: eth: fix compilation issues + tests
* eth/catalyst: better error codes
* all: move Merger to consensus/, remove reliance on it in bc
* all: renamed EnterPoS and LeavePoW to ReachTTD and FinalizePoS
* core: make mergelogs a function
* core: use InsertChain instead of InsertBlock
* les: drop merger from lightchain object
* consensus: add merger
* core: recoverAncestors in catalyst mode
* core: fix nitpick
* all: removed merger from beacon, use TTD, nitpicks
* consensus: eth: add docstring, removed unnecessary code duplication
* consensus/beacon: better comment
* all: easy to fix nitpicks by karalabe
* consensus/beacon: verify known headers to be sure
* core: comments
* core: eth: don't drop peers who advertise blocks, nitpicks
* core: never add beacon blocks to the future queue
* core: fixed nitpicks
* consensus/beacon: simplify IsTTDReached check
* consensus/beacon: correct IsTTDReached check
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This removes some code:
- The clique engine calculated the snapshot twice when verifying headers/blocks.
- The method GetBlockHashesFromHash in Header/Block/Lightchain was only used by tests. It
is now removed from the API.
- The method GetTdByHash internally looked up the number before calling GetTd(hash, num).
In many cases, callers already had the number, and used this method just because it has a
shorter name. I have removed the method to make the API surface smaller.
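Caller-side, the GetTdByHash removal amounts to this (hypothetical call site):

    // Before: td := bc.GetTdByHash(hash) // looked up the number internally
    // After: pass the number the caller usually already has.
    td := bc.GetTd(hash, number)
    if td == nil {
        // block unknown: no total difficulty stored
    }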
This change increases the cache size from 64 MB to 256 MB for block bodies.
Benchmarks have shown this to be one bottleneck when trying to achieve
higher download speeds.
The commit also includes a minor optimization for header inserts in package
core: previously, the presence of headers in the database was checked for
every header before writing it. With the change, if one header fails the
presence check, all subsequent headers are also assumed to be missing.
This is an improvement because in practice, the headers are almost always
missing during sync.
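A sketch of that shortcut (writeHeader and the surrounding loop are illustrative;
hc.HasHeader is the existing presence check):

    // Once one header fails the presence check, assume all subsequent ones
    // are missing too: during sync that is almost always the case, and it
    // saves one database read per remaining header.
    missing := false
    for _, header := range headers {
        if !missing && hc.HasHeader(header.Hash(), header.Number.Uint64()) {
            continue // already stored, skip the write
        }
        missing = true
        writeHeader(batch, header) // hypothetical write helper
    }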
* all: add thousands separators for big numbers on log messages
* p2p/sentry: drop accidental file
* common, log: add fast number formatter
* common, eth/protocols/snap: simplify fancy num types
* log: handle nil big ints
This PR implements the following modifications
- Don't shortcut-check if the block is present, thus avoiding a disk lookup
- Don't check hash ancestry in early-check (it's still done in parallel checker)
- Don't check time.Now for every single header
Charts and background info can be found here: https://github.com/holiman/headerimport/blob/main/README.md
With these changes, writing 1M headers goes down from 80s to 62s.
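As an example of the third item, the clock read can be hoisted out of the per-header
loop (sketch; the constant and function are illustrative, the error is consensus's
existing ErrFutureBlock):

    // checkFuture validates batch timestamps with a single clock read: the
    // future-block window barely moves while validating one batch.
    func checkFuture(chain []*types.Header) error {
        const allowedFutureBlockTime = 15 // seconds; illustrative value
        now := uint64(time.Now().Unix())
        for _, header := range chain {
            if header.Time > now+allowedFutureBlockTime {
                return consensus.ErrFutureBlock
            }
        }
        return nil
    }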
* core: add test for headerchain inserts
* core, light: write headerchains in batches
* core: change to one callback per batch of inserted headers + review concerns
* core: error-check on batch write
* core: unexport writeHeaders
* core: remove callback parameter in InsertHeaderChain
The semantics of InsertHeaderChain are now much simpler: it is now an
all-or-nothing operation. The new WriteStatus return value allows
callers to check for the canonicality of the insertion. This change
simplifies use of HeaderChain in package les, where the callback was
previously used to post chain events (see the sketch after this log).
* core: skip some hashing when writing headers
* core: less hashing in header validation
* core: fix headerchain flaw regarding blacklisted hashes
Co-authored-by: Felix Lange <fjl@twurst.com>
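A sketch of the simplified call pattern (handling is illustrative; CanonStatTy is core's
existing canonical write status):

    // All-or-nothing: on error nothing was written, so there is no partial
    // insertion to clean up; on success the status says whether the headers
    // became canonical, which is where les can post its chain events.
    status, err := hc.InsertHeaderChain(headers, startTime)
    if err != nil {
        return err
    }
    if status == core.CanonStatTy {
        // headers extend the canonical chain: post chain head events here
    }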
* core: fix the condition of reorg
* core: fix nitpick to only retrieve head once
* core: don't reorg if received chain is longer at same diff (sketched below)
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
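The resulting reorg condition looks roughly like this (simplified sketch; mrand is
math/rand, and the real tie-break also accounts for locally preserved blocks):

    // Reorg only on strictly higher total difficulty; at equal difficulty,
    // prefer the lower block number, and split exact ties randomly rather
    // than preferring the longer chain.
    reorg := externTd.Cmp(localTd) > 0
    if !reorg && externTd.Cmp(localTd) == 0 {
        switch {
        case block.NumberU64() < currentBlock.NumberU64():
            reorg = true
        case block.NumberU64() == currentBlock.NumberU64():
            reorg = mrand.Float64() < 0.5
        }
    }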
* core, eth: some fixes for freezer
* vendor, core/rawdb, cmd/geth: add db inspector
* core, cmd/utils: check ancient store path forcibly
* cmd/geth, common, core/rawdb: a few fixes
* cmd/geth: support windows file rename and fix rename error
* core: support ancient plugin
* core, cmd: streaming file copy
* cmd, consensus, core, tests: keep genesis in leveldb
* core: write txlookup during ancient init
* core: bump database version
* all: freezer style syncing
core, eth, les, light: clean up freezer relative APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync
This PR is a more advanced form of the dirty-to-clean cacher (#18995),
where we reuse previous database write batches as datasets to uncache,
saving a dirty-trie-iteration and a dirty-trie-rlp-reencoding per block.
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes (see the batching sketch at the end of this log).
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment
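The batching pattern referenced in "write fast sync block data in batches" looks roughly
like this (sketch; itemKey/itemValue are hypothetical encoders, IdealBatchSize and the
Batch interface are ethdb's existing ones):

    batch := db.NewBatch()
    for _, item := range items {
        // queue the write instead of hitting the database directly
        if err := batch.Put(itemKey(item), itemValue(item)); err != nil {
            return err
        }
        // flush once the batch reaches the ideal size, then reuse it
        if batch.ValueSize() >= ethdb.IdealBatchSize {
            if err := batch.Write(); err != nil {
                return err
            }
            batch.Reset()
        }
    }
    return batch.Write() // write the remainder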