This pull request removes the `fsync` of index files in freezer.ModifyAncients function for
performance gain.
Originally, fsync was added after each freezer write operation to ensure
the written data was truly transferred to disk. Unfortunately, it turns
out `fsync` can be relatively slow, especially on
macOS (see https://github.com/ethereum/go-ethereum/issues/28754 for more
information).
In this pull request, the fsync for the index file is removed, as it turns out
the index file can be recovered even after an unclean shutdown. The fsync for the data file is still kept, as
we have no meaningful way to validate data correctness after an unclean shutdown.
---
**But why do we need the `fsync` in the first place?**
Because the freezer must be able to survive/recover after a machine crash (e.g. power failure).
On Linux, when a file write is performed, the file metadata update and the data update are not necessarily performed at the same time. Typically, the metadata is flushed/journalled ahead of the file data. Therefore, we make the pessimistic assumption that the file is first extended with invalid "garbage" data (normally zero bytes) and that afterwards the correct data replaces the garbage.
We have observed that the index file of the freezer often contains garbage entries with zero values (filenumber = 0, offset = 0) after a machine power failure. This proves that the index file was extended without the data being flushed. Eventually, this corruption can destroy the whole freezer.
Performing an fsync after each write operation reduces the time window for the data to be transferred to disk and ensures the correctness of the on-disk data to the greatest extent.
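As a rough illustration of the pattern (not the actual freezer code; `appendAndSync` is a made-up helper), each write is followed by an explicit flush:

```go
package example

import "os"

// appendAndSync writes one entry and immediately forces it to stable
// storage, so the window in which the file is extended but the data is
// still only in the page cache stays as small as possible.
func appendAndSync(f *os.File, entry []byte) error {
	if _, err := f.Write(entry); err != nil {
		return err
	}
	return f.Sync() // flush both data and metadata
}
```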
---
**How can we maintain this guarantee without relying on fsync?**
Because the items in the index file are strictly ordered, we can leverage this characteristic to detect corruption and truncate the corrupted entries when the freezer is opened. Specifically, the following validation rules are applied to each index file:
For two consecutive index items:
- If their file numbers are the same, then the offset of the latter one
MUST not be less than that of the former.
- If the file number of the latter one is equal to that of the former
plus one, then the offset of the latter one MUST not be 0.
- If their file numbers are not equal, and the latter's file number is not equal to the former plus 1, the latter one is considered invalid.
In addition, the first non-head item must refer to the earliest data file, or to the next file if the earliest file is not large enough to hold the first item (a very special case, only theoretically possible in tests).
With these validation rules, we can detect invalid items in the index file with high probability.
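A minimal sketch of the consecutive-entry check (assuming the 6-byte index entry layout of a 2-byte file number followed by a 4-byte offset; `checkConsecutive` is an illustrative helper, not the actual repair code):

```go
package example

import "encoding/binary"

// indexEntry mirrors the freezer index layout: a 2-byte data-file number
// followed by a 4-byte offset into that file, both big-endian.
type indexEntry struct {
	filenum uint16
	offset  uint32
}

func parseIndexEntry(b []byte) indexEntry {
	return indexEntry{
		filenum: binary.BigEndian.Uint16(b[0:2]),
		offset:  binary.BigEndian.Uint32(b[2:6]),
	}
}

// checkConsecutive reports whether entry 'next' is consistent with the
// entry 'prev' that immediately precedes it, per the rules above.
func checkConsecutive(prev, next indexEntry) bool {
	switch {
	case next.filenum == prev.filenum:
		// Same data file: offsets must not go backwards.
		return next.offset >= prev.offset
	case next.filenum == prev.filenum+1:
		// Rotated to the next data file: a zero offset here is the
		// signature of an unflushed, zero-filled garbage entry.
		return next.offset != 0
	default:
		// Any other jump in file numbers indicates corruption.
		return false
	}
}
```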
---
Unfortunately, the following scenarios are not covered and can still lead to freezer corruption:
**All items in the index file are zero-valued**
It's impossible to distinguish whether they are truly zero (e.g. all the data entries maintained in the freezer are zero-sized) or just garbage left by the OS. In this case, these index items will be kept and the entire data file truncated accordingly, i.e. the freezer is corrupted.
However, the probability of this situation occurring is quite low, and even if it occurs, the freezer can be considered close to an empty state, so re-running the state sync should be acceptable.
**Index file is intact while the corresponding data file is corrupted**
It is possible for the data file to be corrupted such that its size is extended correctly but filled with garbage (e.g. zero bytes). In this case, it's impossible to detect the corruption by index validation. We can either `fsync` the data file, or blindly trust that if the index file is intact then the data file is very likely intact as well. In this pull request, the first option is taken.
This is the fix to issue #27483. A new hiddenBytes() helper is introduced to calculate the byte size of hidden items in the freezer table. When reporting the size of the freezer table, the size of the hidden items is subtracted from the total size.
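Roughly, the idea looks like this (a sketch only; the struct fields and `readIndexEntry` helper are illustrative placeholders, not the actual freezerTable internals):

```go
package example

// indexEntry is the parsed form of a freezer index entry.
type indexEntry struct {
	filenum uint16
	offset  uint32
}

// table stands in for the real freezerTable; only the pieces needed for
// the size calculation are sketched here.
type table struct {
	itemOffset     uint64 // items removed from the table entirely
	itemHidden     uint64 // items marked deleted but still present on disk
	readIndexEntry func(item uint64) (indexEntry, error)
}

// hiddenBytes returns the byte size occupied by hidden items: the bytes of
// the oldest data file that precede the first visible item.
func (t *table) hiddenBytes() (uint64, error) {
	if t.itemHidden <= t.itemOffset {
		return 0, nil // nothing is hidden
	}
	// The end offset of the last hidden item is the start offset of the
	// first visible one, i.e. the amount of hidden data in that file.
	last, err := t.readIndexEntry(t.itemHidden - 1)
	if err != nil {
		return 0, err
	}
	return uint64(last.offset), nil
}
```

The reported table size would then be the raw file sizes minus this value.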
---------
Co-authored-by: Yifan <Yifan Wang>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change adds the ability to perform reads from the freezer without a size limitation. This can be useful in cases where the caller is certain that out-of-memory will not happen (e.g. reading only a few elements).
The previous API was designed to behave both optimally and securely while servicing a request from a peer, whereas this change should _not_ be used when an untrusted peer can influence the query size.
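For example (assuming the unlimited read is exposed by passing a zero maxBytes to AncientRange; the table name string and helper below are illustrative):

```go
package example

import "github.com/ethereum/go-ethereum/ethdb"

// readFewHashes reads a handful of entries from the chain freezer's hash
// table. Passing maxBytes = 0 opts out of the size cap, which is only
// reasonable because the caller controls 'count' and knows it is small;
// it must never be driven by an untrusted peer.
func readFewHashes(db ethdb.AncientReader, start, count uint64) ([][]byte, error) {
	return db.AncientRange("hashes", start, count, 0)
}
```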
The meter for measuring the effective amount of data read within the freezer table was never updated. This change remedies that.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
This PR does a few things.
It fixes a shutdown-order flaw in the chain freezer. Previously, the chain freezer would shut down the freezer backend first, and then signal for the loop to exit. This can lead to a scenario where the freezer tries to fsync closed files, which is an error condition that could lead to an exit via log.Crit.
It also makes the printout more detailed when truncating 'dangling' items, by showing the exact number of items instead of an approximate size in MB.
This PR also adds calls to fsync files before closing them, and also makes the `db inspect` command slightly more robust.
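A minimal sketch of the corrected ordering (the type and field names are illustrative, not the actual chain freezer): stop the freeze loop and wait for it before closing the backend, so no fsync can race against already-closed files.

```go
package example

import "sync"

// chainFreezerSketch stands in for the real chain freezer; only the
// shutdown path is shown.
type chainFreezerSketch struct {
	quit    chan struct{}
	wg      sync.WaitGroup
	backend interface{ Close() error }
}

// Close signals the freeze loop to exit and waits for it before closing
// the underlying freezer, so the loop can never sync a closed file.
func (f *chainFreezerSketch) Close() error {
	close(f.quit) // ask the loop to stop
	f.wg.Wait()   // wait until it actually has
	return f.backend.Close()
}
```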
This PR fixes an issue which might result in data loss in the freezer.
Whenever a mutation happens in the freezer, all data is written into the head data file,
which is rotated to a new one once its size reaches the threshold.
Theoretically, the rotated old data file should be fsync'd to prevent data loss.
In the freezer.Sync function, we only fsync: (1) the index file, (2) the meta file and (3) the head
data file. So this PR forcibly fsyncs the head data file if a mutation happens on the
boundary of a data file.
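Conceptually the fix looks like this (a sketch under assumed names, not the actual rotation code): flush the retiring head file before switching to a fresh one, since after rotation it is no longer covered by the regular Sync path.

```go
package example

import (
	"fmt"
	"os"
)

// rotateHead retires the current head data file and opens the next one.
// Sketch only: the real freezer also updates the index and its metadata.
func rotateHead(head *os.File, nextName string) (*os.File, error) {
	// Last chance to make the retiring file's contents durable.
	if err := head.Sync(); err != nil {
		return nil, fmt.Errorf("sync old head: %w", err)
	}
	if err := head.Close(); err != nil {
		return nil, err
	}
	return os.OpenFile(nextName, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
}
```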
While investigating #22374, I noticed that the Sync operation of the
freezer does not take the table lock. It also doesn't call sync for all files
if there is an error with one of them. I doubt this will fix anything, but
didn't want to drop the fix on the floor either.
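A sketch of the change (illustrative names): Sync holds the table lock for its duration and attempts every file, collecting errors instead of returning on the first one.

```go
package example

import (
	"errors"
	"os"
	"sync"
)

// syncTable is a minimal stand-in for freezerTable with just the pieces
// this fix touches.
type syncTable struct {
	lock  sync.RWMutex
	index *os.File
	head  *os.File
}

// Sync flushes all files while holding the lock and reports every failure
// rather than stopping at the first one.
func (t *syncTable) Sync() error {
	t.lock.Lock()
	defer t.lock.Unlock()

	var errs []error
	for _, f := range []*os.File{t.index, t.head} {
		if err := f.Sync(); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}
```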
* core/rawdb, cmd, ethdb, eth: implement freezer tail deletion
* core/rawdb: address comments from martin and sina
* core/rawdb: fixes cornercase in tail deletion
* core/rawdb: separate metadata into a standalone file
* core/rawdb: remove unused code
* core/rawdb: add random test
* core/rawdb: polish code
* core/rawdb: fsync meta file before manipulating the index
* core/rawdb: fix typo
* core/rawdb: address comments
* freezer: add readonly flag to table
* freezer: enforce readonly in table repair
* freezer: enforce readonly in newFreezer
* minor fix
* minor
* core/rawdb: test that writing during readonly fails
* rm unused log
* check readonly on batch append
* minor
* Revert "check readonly on batch append"
This reverts commit 2ddb5ec4ba7534bf6edbdfec158ea99a2eed5036.
* review fixes
* minor test refactor
* attempt at fixing windows issue
* add comment re windows sync issue
* k->kind
* open readonly db for genesis check
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change is a rewrite of the freezer code.
When writing ancient chain data to the freezer, the previous version first encoded each
individual item to a temporary buffer, then wrote the buffer. For small item sizes (for
example, in the block hash freezer table), this strategy causes a lot of system calls for
writing tiny chunks of data. It also allocated a lot of temporary []byte buffers.
In the new version, we instead encode multiple items into a reusable batch buffer, which
is then written to the file all at once. This avoids performing a system call for every
inserted item.
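The idea, roughly (names and the flush threshold are illustrative, not the actual freezerBatch API): items are RLP-encoded into one shared buffer that is flushed with a single write once it grows large enough.

```go
package example

import (
	"os"

	"github.com/ethereum/go-ethereum/rlp"
)

// batchBuffer accumulates encoded items so they can be written with one
// system call instead of one call per item. Sketch only.
type batchBuffer struct {
	buf  []byte
	file *os.File
}

// append RLP-encodes one item into the shared buffer, flushing to the data
// file whenever the buffer grows past the threshold.
func (b *batchBuffer) append(item interface{}) error {
	enc, err := rlp.EncodeToBytes(item)
	if err != nil {
		return err
	}
	b.buf = append(b.buf, enc...)
	if len(b.buf) >= 64*1024 { // illustrative flush threshold
		return b.flush()
	}
	return nil
}

// flush writes the accumulated bytes in a single call and resets the
// buffer for reuse, avoiding per-item allocations and syscalls.
func (b *batchBuffer) flush() error {
	if len(b.buf) == 0 {
		return nil
	}
	if _, err := b.file.Write(b.buf); err != nil {
		return err
	}
	b.buf = b.buf[:0]
	return nil
}
```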
To make the internal batching work, the ancient database API had to be changed. While
integrating this new API in BlockChain.InsertReceiptChain, additional optimizations were
also added there.
Co-authored-by: Felix Lange <fjl@twurst.com>
* core/rawdb: implement sequential reads in freezer_table
* core/rawdb, ethdb: add sequential reader to db interface
* core/rawdb: lint nitpicks
* core/rawdb: fix some nitpicks
* core/rawdb: fix flaw with deferred reads not being performed
* core/rawdb: better documentation
The Append / truncate operations were racy. When a data file reaches 2GB, a new file is needed. For this operation, we require a write lock, which is not needed in the 99.99% of cases where the data does fit in the current head file.
This transition from read lock to write lock was incorrect: as the read lock was released, a truncate operation could slip in between and truncate the data. This on its own would have been fine, but the Append operation then continued writing as if no truncation had occurred, e.g. writing item 5 where item 0 should reside.
This PR changes the behaviour so that when we run into the situation that a new file is needed, the operation aborts and retries, this time holding the write lock.
The outcome of the situation described above, running on this PR, would instead be that the Append operation exits with a failure.
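A sketch of the retry pattern (type and helper names are illustrative): the fast path runs under the read lock and never rotates files; if it finds that a rotation is needed it bails out, and the whole append is retried under the write lock instead of upgrading the lock in place.

```go
package example

import (
	"errors"
	"sync"
)

// errNeedRotation signals that the head file is full and the append must
// be retried with exclusive access.
var errNeedRotation = errors.New("head file full, retry with write lock")

type appendTable struct {
	lock sync.RWMutex
}

// tryAppend is the fast path: it runs under the read lock and must not
// rotate files, only report that a rotation is required.
func (t *appendTable) tryAppend(item []byte) error {
	t.lock.RLock()
	defer t.lock.RUnlock()
	// write into the current head file, or return errNeedRotation if full
	return nil
}

// appendSlow re-runs the append with exclusive access, so the rotation
// cannot race against a concurrent truncate.
func (t *appendTable) appendSlow(item []byte) error {
	t.lock.Lock()
	defer t.lock.Unlock()
	// rotate the head file if still needed, then write the item
	return nil
}

// Append retries with the write lock instead of upgrading a read lock,
// which was the racy pattern removed by this PR.
func (t *appendTable) Append(item []byte) error {
	err := t.tryAppend(item)
	if errors.Is(err, errNeedRotation) {
		return t.appendSlow(item)
	}
	return err
}
```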
This PR implements the following modifications
- Don't shortcut check if block is present, thus avoid disk lookup
- Don't check hash ancestry in early-check (it's still done in parallel checker)
- Don't check time.Now for every single header
Charts and background info can be found here: https://github.com/holiman/headerimport/blob/main/README.md
With these changes, writing 1M headers goes down from 80s to 62s.
* core, eth: some fixes for freezer
* vendor, core/rawdb, cmd/geth: add db inspector
* core, cmd/utils: check ancient store path forcibly
* cmd/geth, common, core/rawdb: a few fixes
* cmd/geth: support windows file rename and fix rename error
* core: support ancient plugin
* core, cmd: streaming file copy
* cmd, consensus, core, tests: keep genesis in leveldb
* core: write txlookup during ancient init
* core: bump database version
* all: freezer style syncing
core, eth, les, light: clean up freezer-related APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync