This change adds the ability to perform reads from freezer without size limitation. This can be useful in cases where callers are certain that out-of-memory will not happen (e.g. reading only a few elements).
The previous API was designed to behave both optimally and securely while servicing a request from a peer, whereas this new mode should _not_ be used when an untrusted peer can influence the query size.
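A rough usage sketch of what such an unbounded read might look like; the `AncientRange` accessor, the "hashes" table name and the convention that `maxBytes == 0` means "no limit" are assumptions for illustration, not a description of the final API:

```go
package example

import "github.com/ethereum/go-ethereum/ethdb"

// readFewHashes reads a small, known-size range of items without a byte limit.
// Only do this when the caller controls the query size; an untrusted peer must
// never be able to pick `count` here.
func readFewHashes(db ethdb.AncientReader, start, count uint64) ([][]byte, error) {
	// maxBytes == 0 is assumed to mean "no size limitation" per this change.
	return db.AncientRange("hashes", start, count, 0)
}
```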
This change makes the StateDB track the state key value diff of a block transition.
We already tracked current account and storage values for the purpose of updating
the state snapshot. With this PR, we now also track the original (pre-transition) values
of accounts and storage slots.
Implements [EIP 5656](https://eips.ethereum.org/EIPS/eip-5656), MCOPY instruction, and enables it for Cancun.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Back before #27178, we spun up a number of ethash verifiers to verify headers, so we also had tests to ensure that we were indeed able to abort verification even if we had multiple workers running.
With PR #27178, we removed the parallelism in verification, and these tests are now failing, since we now just sequentially fire away the results as fast as possible on one goroutine.
This change removes the (sometimes failing) tests.
This change adds back the 'geth --dev' mode of operation, using a cl-mocker.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>
The clean trie cache is persisted periodically, so Geth can
quickly warm up the cache on the next restart.
However, this reduces the robustness of the system. Geth assumes
that if a parent trie node is present, then the entire
sub-trie below it is also present.
Imagine the scenario where Geth rewinds itself to a past block and
restarts, but then finds the root node of a "future state" in the clean
cache and concludes that this state is present on disk, while in fact it is not.
Another example is the offline pruning tool. Whenever offline pruning
is performed, the clean cache file has to be removed to avoid hitting
the root node of "deleted states" in the clean cache.
All in all, compared with the minor performance gain, system robustness
is what we care about more.
* core/state, light, les: make signature of ContractCode hash-independent
* push current state for feedback
* les: fix unit test
* core, les, light: fix les unittests
* core/state, trie, les, light: fix state iterator
* core, les: address comments
* les: fix lint
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Verkle trees store the code inside the trie. This PR changes the interface to pass the code, as well as the dirty flag to tell the trie package if the code is dirty and needs to be updated. This is a no-op for the MPT and the odr trie.
The state availability is checked during the creation of a state reader.
- In the hash-based database, if the specified root node does not exist on disk, then
the state reader won't be created and an error will be returned.
- In the path-based database, if the specified state layer is not available, then the
state reader won't be created and an error will be returned.
This change also introduces stricter semantics for the `Commit` operation: once it has been performed, the trie is no longer usable, and certain operations will return an error.
This removes the feature where top nodes of the proof can be elided.
It was intended to be used by the LES server, to save bandwidth
when the client had already fetched parts of the state and only needed
some extra nodes to complete the proof. Alas, it never got implemented
in the client.
* go.mod: update kzg libraries to use big-endian
* go.sum: ran go mod tidy
* core/testdata/precompiles: fix blob verification test
* core/testdata/precompiles: fix blob verification test
This change ensures Reheap will be called even before the London fork activates.
Since Reheap would otherwise only be called through `SetBaseFee` after London,
the list would just keep growing if the fork was not enabled or not reached yet.
* all: move main transaction pool into a subpool
* go.mod: remove superfluous updates
* core/txpool: review fixes, handle txs rejected by all subpools
* core/txpool: typos
The logs in this function are pulled straight from disk in rawdb.ReadRawReceipts and
also modified in receipts.DeriveFields, so removing the copy should be fine.
* core/txpool: abstraction prep work for secondary pools (blob pool)
* core/txpool: leave subpool concepts to a followup pr
* les: fix tests using hard coded errors
* core/txpool: use bitmaps instead of maps for tx type filtering
This changes the journal logic to mark the state object dirty immediately when it
is reset.
We're mostly adding this change to appease the fuzzer. Marking it dirty immediately
makes no difference in practice because accounts will always be modified by EVM
right after creation.
Continuing with a series of PRs to make the Trie interface more generic, this PR moves
the RLP encoding of storage slots inside the StateTrie and light.Trie implementations,
as other types of tries don't use RLP.
This PR adds a staleness-check to AccountRLP, before checking the bloom-filter and potentially going directly into the disklayer.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Regenerate the receipt JSON marshaling code to remove `omitempty`. Previously, there was a discrepancy between the generated code and the source.
---------
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* cmd/utils, node: switch to Pebble as the default db if none exists
* node: fall back to LevelDB on platforms not supporting Pebble
* core/rawdb, node: default to Pebble at the node level
* cmd/geth: fix some tests explicitly using leveldb
* ethdb/pebble: allow double closes, makes tests simpler
In this PR, all TryXXX (e.g. TryGet) APIs of the trie are renamed to XXX (e.g. Get) and now return an error.
The original XXX (e.g. Get) APIs are renamed to MustXXX (e.g. MustGet) and do not return any error -- they only print a log output. A future PR will change the behaviour to panic on errors.
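A minimal toy sketch of the resulting API shape (illustrative only, not the real trie package):

```go
package example

import (
	"errors"
	"log"
)

// toyTrie mirrors the renamed API shape: Get surfaces the database error,
// MustGet swallows it and only logs (a later change may turn this into a panic).
type toyTrie struct{ data map[string][]byte }

var errMissingNode = errors.New("missing trie node")

func (t *toyTrie) Get(key []byte) ([]byte, error) {
	v, ok := t.data[string(key)]
	if !ok {
		return nil, errMissingNode
	}
	return v, nil
}

func (t *toyTrie) MustGet(key []byte) []byte {
	v, err := t.Get(key)
	if err != nil {
		log.Printf("unhandled trie error: %v", err)
	}
	return v
}
```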
The EIP150Hash was an idea where, after the fork, we hardcoded the forked hash as an extra defensive mechanism. It wasn't really used, since forks weren't contentious and for all the various testnets and private networks it's been a hassle to have around.
This change removes that config field.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
This PR unifies the error handling in the miner.
Whenever an error occurs while applying a transaction, the transaction is regarded as invalid and all following transactions from the same sender become non-executable because of the nonce restriction. The only exception is the `nonceTooLow` error, which is handled separately.
Prior to this change, it was possible for transactions to be erroneously deemed 'future' although they were in fact 'pending', causing them to be dropped due to 'future' not being allowed to replace 'pending'.
This change fixes that, by doing a more in-depth inspection of the queue.
This PR removes the Debug field from vmconfig, making it so that if a tracer is set, debug=true is implied.
---------
Co-authored-by: 0xTylerHolmes <tyler@ethereum.org>
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
Currently, most transaction validation happens while holding the txpool mutex, one exception being an early-on signature check.
This PR changes that, so that all non-stateful checks are done before entering the mutex area. This means they can be performed in parallel, and to enable that, certain fields have been made atomic bools and uint64s.
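A rough sketch of the pattern with toy types (not the actual txpool code): stateless checks read atomic fields and run before the lock, so concurrent callers don't serialize on the mutex just to be rejected cheaply.

```go
package example

import (
	"errors"
	"sync"
	"sync/atomic"
)

type toyTx struct {
	hash [32]byte
	gas  uint64
	tip  uint64
}

type toyPool struct {
	minTip  atomic.Uint64 // read during stateless validation, no lock needed
	mu      sync.Mutex
	pending map[[32]byte]*toyTx
}

func (p *toyPool) add(tx *toyTx) error {
	// Non-stateful checks, performed outside (and in parallel with) the mutex.
	if tx.gas < 21000 {
		return errors.New("intrinsic gas too low")
	}
	if tx.tip < p.minTip.Load() {
		return errors.New("tip below configured minimum")
	}
	// Stateful checks and insertion happen under the mutex.
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.pending == nil {
		p.pending = make(map[[32]byte]*toyTx)
	}
	p.pending[tx.hash] = tx
	return nil
}
```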
This change renames StateTrie methods to remove the Try* prefix.
We added the Trie methods with prefix 'Try' a long time ago, working
around the problem that most existing methods of Trie did not return the
database error. This weird naming convention has persisted until now.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This changes the Trie interface to add the plain account address as a
parameter to all storage-related methods.
After the introduction of the TryAccount* functions, TryGet, TryUpdate and
TryDelete are now only meant to access an account's storage. In their current
form, they assume that an account storage is stored in a separate trie, and
that the hashing of the slot is independent of its account's address.
The proposed structure for a stateless storage breaks these two
assumptions: the hashing of a slot key requires the address and all slots
and accounts are stored in a single trie.
This PR therefore adds an address parameter to the interface. It is ignored
in the MPT version, so this change has no functional impact, however it
will reduce the diff size when merging verkle trees.
The meter for measuring the effective amount of data read within the freezer table was never updated. This change remedies that.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
When interacting with geth as a library, e.g. to produce state tests, it is desirable to obtain the consensus-correct jumptable definition for a given fork. This change adds accessors so the instruction set can be obtained and characteristics of opcodes can be inspected.
Makes clear the distinction between Finalize and FinalizeAndAssemble:
- In the Finalize function, a series of state operations are applied according to consensus rules. The statedb is mutated and the root hash can be checked and compared afterwards.
This function should be used in block processing (receive a block from the network and apply it locally), but not in block generation.
- In the FinalizeAndAssemble function, after applying state mutations, the block is also assembled with the latest
state root computed, updating the header.
This function should be used in block generation only.
This adds two new rules to the transaction pool:
- A future transaction can not evict a pending transaction.
- A transaction can not overspend available funds of a sender.
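A hedged sketch of the overspend rule, using the transaction's `Cost()` (value plus maximum gas fee); this is illustrative, not the pool's exact bookkeeping:

```go
package example

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// exceedsFunds reports whether adding `incoming` would let the sender's pooled
// transactions collectively spend more than the sender's current balance.
func exceedsFunds(balance *big.Int, pooled []*types.Transaction, incoming *types.Transaction) bool {
	total := new(big.Int).Set(incoming.Cost()) // Cost = value + gasLimit * gas price cap
	for _, tx := range pooled {
		total.Add(total, tx.Cost())
	}
	return total.Cmp(balance) > 0
}
```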
---
Co-authored-by: dwn1998 <42262393+dwn1998@users.noreply.github.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Here, the core.Message interface turns into a plain struct and
types.Message gets removed.
This is a breaking change to packages core and core/types. While we do
not promise API stability for package core, we do for core/types. An
exception can be made for types.Message, since it doesn't have any
purpose apart from invoking the state transition in package core.
types.Message was also marked deprecated by the same commit it
got added in, 4dca5d4db7 (November 2016).
The core.Message interface was added in December 2014, in commit
db494170dc, for the purpose of 'testing' state transitions. It's the
same change that made transaction struct fields private. Before that,
the state transition used *types.Transaction directly.
Over time, multiple implementations of the interface accrued across
different packages, since constructing a Message is required whenever
one wants to invoke the state transition. These implementations all
looked very similar, a struct with private fields exposing the fields
as accessor methods.
By changing Message into a struct with public fields we can remove all
these useless interface implementations. It will also hopefully
simplify future changes to the type, with fewer updates to apply across
all of go-ethereum when a field is added to Message.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This changes the test to match the comment description. Using timestampedConfig in this test case is incorrect, the comment says 'local is at Gray Glacier' and isn't aware of more forks.
This change prints out more information about the problem in the case where geth detects a gap between leveldb and ancients, so we can determine more exactly where the gap is (what the first missing item is). It also prints out more metadata.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change fixes a flaw where, in certain scenarios, the block sealer did not accurately reset the remaining gas after failing to include an invalid transaction. Fixes #26791
Fixes a race in TestNewPayloadOnInvalidTerminalBlock where setting the TTD raced with
the miner. Solution: set the TTD on the blockchain config not the genesis config.
Also fixes a race in CopyHeader which resulted in race reports all over the place.
This change adds a struct field EffectiveGasPrice in types.Receipt. The field is present
in RPC responses, but not in the Go struct, and thus can't easily be accessed via ethclient.
Co-authored-by: PulsarAI <dev@pulsar-systems.fi>
This fixes an issue where the withdrawal index was not calculated correctly
for multiple withdrawals in a single block.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
EmptyRootHash and EmptyCodeHash were defined in many places across the codebase. This PR replaces all of them with a unified definition in the core/types package, and also defines constants for TxRoot, WithdrawalsRoot and UncleRoot.
This PR contains a small portion of the full pbss PR, namely:
Remove the tracer from the trie (and committer), and instead use an accessList.
Related changes to the Nodeset.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR is a (superior) alternative to https://github.com/ethereum/go-ethereum/pull/26708, it handles deprecation, primarily two specific cases.
`rand.Seed` is typically used in two ways:
- `rand.Seed(time.Now().UnixNano())` -- we seed it just to be sure to get some randomness, and not always get the same thing on every run. This is not needed with global seeding, so those calls are just removed.
- `rand.Seed(1)` -- this is typically done to ensure we have a stable test. If we rely on this, we need to fix up the tests to use a deterministic prng source. A few occurrences like this have been replaced with a proper custom source.
`rand.Read` has been replaced by `crypto/rand`.`Read` in this PR.
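For reference, the two replacement patterns look roughly like this:

```go
package example

import (
	crand "crypto/rand"
	"math/rand"
)

func randomness() {
	// Stable tests: a dedicated deterministic source instead of rand.Seed(1).
	prng := rand.New(rand.NewSource(1))
	_ = prng.Intn(10)

	// Actual random bytes: crypto/rand instead of math/rand's Read.
	buf := make([]byte, 32)
	_, _ = crand.Read(buf)
}
```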
This PR relaxes the block body ingress handling a bit: if block body withdrawals are missing (but expected to be empty), the body withdrawals are set to 'empty list' before being passed to upper layers.
This fixes an issue where a block passed from EthereumJS to geth was deemed invalid.
Logs stored on disk have minimal information. Contextual information such as block
number, index of log in block, index of transaction in block are filled in upon request.
We can fill in all these fields only having the block header and list of receipts.
But determining the transaction hash of a log requires the block body.
The goal of this PR is to postpone this retrieval until we are sure we need the transaction hash.
It often happens that the header bloom filter signals there might be matches in a block,
but actually checking them reveals that the logs do not match. We want to avoid fetching
the body in this case.
Note that this changes the semantics of Backend.GetLogs. Downstream callers of
GetLogs now assume log context fields have not been derived, and need to call
DeriveFields on the logs if necessary.
This is a breaking change in the tracing hooks API as well as semantics of the callTracer:
- The CaptureEnter hook provided a nil value argument in the case of DELEGATECALL. However, to stay consistent with how delegate calls behave in the EVM, this hook is changed to pass in the value of the parent call.
- callTracer will return parent call's value for DELEGATECALL frames.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
* common, core, eth, les, trie: make prque generic
* les/vflux/server: fixed issues in priorityPool
* common, core, eth, les, trie: make priority also generic in prque
* les/flowcontrol: add test case for priority accumulator overflow
* les/flowcontrol: avoid priority value overflow
* common/prque: use int priority in some tests
No need to convert to int64 when we can just change the type used by the
queue.
* common/prque: remove comment about int64 range
---------
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This change ports some changes from the main PBSS PR:
- get rid of callback function in `trie.Database.Commit` which is not required anymore
- rework the `nodeResolver` in `trie.Iterator` to make it compatible with multiple state schemes
- some other shallow changes in tests and typo-fixes
This PR moves core/beacon to beacon/engine so that beacon-chain related code has its own top-level package, which can also house the beacon light client code.
This PR moves some trie-related db accessor methods to a different file, and also removes the schema type. Instead of the schema type, a string is used to distinguish between hashbased/pathbased db accessors.
This also moves some code from trie package to rawdb package.
This PR is intended to be a no-functionality-change prep PR for #25963 .
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change improves reusability of the EVM struct. Two methods are added:
- SetBlockContext(...)
- SetTracer(...)
Other attributes like the TransactionContext and the StateDB can already be updated.
The BlockContext and Tracer are currently only partially updatable. This change fixes that and
opens up the potential to reuse an EVM struct in more ways.
Co-authored-by: Felix Lange <fjl@twurst.com>
This change implements withdrawals as specified in EIP-4895.
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: marioevz <marioevz@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR changes the API so that uint64 is used for fork timestamps.
It's a good choice because types.Header also uses uint64 for time.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR does a few things.
It fixes a shutdown-order flaw in the chain freezer. Previously, the chain freezer would shut down the freezer backend first, and then signal for the loop to exit. This can lead to a scenario where the freezer tries to fsync closed files, which is an error condition that could lead to exit via log.Crit.
It also makes the printout more detailed when truncating 'dangling' items, by showing the exact number instead of approximate MB.
This PR also adds calls to fsync files before closing them, and also makes the `db inspect` command slightly more robust.
This PR fixes an issue which might result in data loss in the freezer.
Whenever a mutation happens in the freezer, all data is written into the head data file,
which is rotated with a new one in case the size of the file reaches the threshold.
Theoretically, the rotated old data file should be fsync'd to prevent data loss.
In the freezer.Sync function, we only fsync: (1) the index file, (2) the meta file and (3) the head
data file. So this PR forcibly fsyncs the head data file if a mutation happens at the
boundary of a data file.
Implementation of https://eips.ethereum.org/EIPS/eip-3860, limit and meter initcode. This PR enables EIP-3860 as part of the Shanghai fork.
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This makes non-JS tracers execute all block txs on a single goroutine.
In the previous implementation, we used to prepare every tx pre-state
on one goroutine, and then run the transactions again with tracing enabled.
Native tracers are usually faster, so it is faster overall to use their output as
the pre-state for tracing the next transaction.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR removes the notion of fakeStorage from the state objects, and instead, for any state modifications that are needed, it simply makes the changes.
This change moves the tracking of "deleted in this block" out of the snapshot-only domain, so that it happens regardless of whether the execution is snapshot-backed or trie-backed.
This changes the StorageTrie method to return an error when the trie
is not available. It used to return an 'empty trie' in this case, but that's
not possible anymore under PBSS.
This PR builds on #26299, but also updates the tests to the most recent version, which includes tests regarding TheMerge.
This change adds checks to the beacon consensus engine, making it more strict in validating the pre- and post-headers, and not relying on the caller to have already correctly sanitized the headers/blocks.
This PR implements resettable freezer by adding a ResettableFreezer wrapper.
The resettable freezer wraps the original freezer in a way that makes it possible to ensure atomic resets. Implementation-wise, it relies on os.Rename and os.RemoveAll to atomically delete the original freezer data and re-create a new one from scratch.
A comment suggests that contract creation happens if the recipient of a call is 0x00..00 ("zero address") but in fact the sender must be nil. The zero address is a regular valid address that is commonly used as a "burn" address.
While investigating another issue, I found that all callers of collectLogs have the
complete block available. rawdb.ReadReceipts loads the block from the database,
so it is better to use ReadRawReceipts here, and derive the receipt information using
the block which is already in memory.
This PR makes it possible to modify the flush interval time via RPC. On one extreme, `0s`, it would act as an archive node. If set to `1h`, it means that after one hour of effective block processing time, the trie would be flushed. If one block takes 200ms, this means a flush would occur every `5*3600=18000` blocks -- however, if the memory size of the cached states grows too large, it will flush sooner.
Essentially, this makes it possible to configure the node to be more or less "archive-ish", and to reconfigure it without restarting the node.
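A hedged sketch of how the interval might be changed at runtime over RPC; the method name `debug_setTrieFlushInterval` is assumed from the description:

```go
package example

import "github.com/ethereum/go-ethereum/rpc"

// setTrieFlushInterval reconfigures the flush interval without restarting the node.
func setTrieFlushInterval(url, interval string) error {
	client, err := rpc.Dial(url)
	if err != nil {
		return err
	}
	defer client.Close()
	// Pass a duration string such as "0s" (archive-like) or "1h".
	return client.Call(nil, "debug_setTrieFlushInterval", interval)
}
```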
The gcproc field tracks the amount of time spent processing blocks,
and is used to trigger a state flush to disk when a certain threshold is
reached. After the merge, single block insertion by CL is the most
common source of block processing time, but this time was not added
into gcproc.
This removes the 'time' field from logs, as well as from the tracer interface. This change makes the trace output deterministic. If a tracer needs the time they can measure it themselves. No need for evm to do this.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This changes the Pop method to assign the zero value before
reducing slice size. Doing so ensures the backing array does not
reference removed item values.
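The general form of the fix, as a generic sketch (illustrative, not the prque code itself):

```go
package example

// pop removes and returns the last element, zeroing the vacated slot so the
// backing array no longer references the removed value and the GC can reclaim it.
func pop[T any](s *[]T) T {
	old := *s
	n := len(old) - 1
	v := old[n]
	var zero T
	old[n] = zero // drop the lingering reference
	*s = old[:n]
	return v
}
```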
This PR drops the legacy receipt types, the freezer-migrate command and the startup check. The previous attempt #22852 at this failed because there were users who still had legacy receipts in their db, so it had to be reverted #23247. Since then we added a command to migrate legacy dbs #24028.
As of the last hardforks all users either must have done the migration, or used the --ignore-legacy-receipts flag which will stop working now.
This PR introduces a node scheme abstraction. The interface is only implemented by `hashScheme` at the moment, but will be extended by `pathScheme` very soon.
Apart from that, a few changes are also included which are worth mentioning:
- port the changes in the stacktrie, tracking the path prefix of nodes during commit
- use ethdb.Database for constructing trie.Database. This is not necessary right now, but it is required for the path-based scheme in order to open the reverse diff freezer
While investigating #22374, I noticed that the Sync operation of the
freezer does not take the table lock. It also doesn't call sync for all files
if there is an error with one of them. I doubt this will fix anything, but
didn't want to drop the fix on the floor either.
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interfaces.
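For illustration, a minimal fully typed LRU along these lines could look as follows. This toy uses container/list, so unlike the real implementation it does not achieve zero allocations; it only shows the typed API shape:

```go
package lru

import "container/list"

type entry[K comparable, V any] struct {
	key K
	val V
}

// Cache is a tiny generic LRU: Add evicts the least recently used entry once
// the capacity is exceeded, Get marks the entry as recently used.
type Cache[K comparable, V any] struct {
	cap   int
	order *list.List
	items map[K]*list.Element
}

func NewCache[K comparable, V any](capacity int) *Cache[K, V] {
	return &Cache[K, V]{cap: capacity, order: list.New(), items: make(map[K]*list.Element)}
}

func (c *Cache[K, V]) Get(key K) (V, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry[K, V]).val, true
	}
	var zero V
	return zero, false
}

func (c *Cache[K, V]) Add(key K, val V) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry[K, V]).val = val
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry[K, V]{key: key, val: val})
	if c.order.Len() > c.cap {
		last := c.order.Back()
		c.order.Remove(last)
		delete(c.items, last.Value.(*entry[K, V]).key)
	}
}
```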
When the interpreter is configured to use extra-eips, this change makes it so that all the opcodes are deep-copied, to prevent accidental modification of the 'base' jumptable.
Closes: #26136
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR changes geth to read the eip1559 params from the chain config instead of the globals.
This way the parameters may be changed by forking the chain config code, without creating a large diff throughout the past and future usages of the parameters.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR ports a few changes from PBSS:
- Fix the snapshot generator waiter in case the generation is not even initialized
- Refactor db inspector for ancient store
This PR fixes a regression causing snapshots not to be generated in "geth --import" mode. It also fixes the geth export command to be truly readonly, and adds a new test for geth export.
This adds:
* core/vm, tests: optimized modexp + fuzzer
* common/math: modexp optimizations
* core/vm: special case base 1 in big modexp
* core/vm: disable fastexp
This changes the error message for mismatching chain ID to show
the given and expected value. Callers expecting this error must be
changed to use errors.Is.
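Callers can migrate roughly like this; the sentinel name `types.ErrInvalidChainId` is what core/types exposes, but verify against your version:

```go
package example

import (
	"errors"

	"github.com/ethereum/go-ethereum/core/types"
)

// isChainIDMismatch matches the wrapped error instead of comparing error strings.
func isChainIDMismatch(err error) bool {
	return errors.Is(err, types.ErrInvalidChainId)
}
```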
* ethclient/gethclient: improve time-sensitive flaky test
* eth/catalyst: fix (?) flaky test
* core: stop blockchains in tests after use
* core: fix dangling blockchain instances
* core: rm whitespace
* eth/gasprice, eth/tracers, consensus/clique: stop dangling blockchains in tests
* all: address review concerns
* core: goimports
* eth/catalyst: fix another time-sensitive test
* consensus/clique: add snapshot test run function
* core: rename stop() to stopWithoutSaving()
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR reworks the tx indexer a bit. Compared to the original version, one scenario is no longer handled: upgrading from a legacy geth without indexer support.
The tx indexer was introduced in 2020 and has been present through several hardforks, so it can be assumed that all Geth nodes already have the tx indexer. This lets us simplify the tx indexer logic a bit:
- If the tail flag is not present, it means the node has just been initialized, with or without an ancient store attached. In this case all blocks are regarded as unindexed.
- If the tail flag is present, it means blocks below the tail are unindexed and blocks above the tail are indexed.
This change also addresses some weird corner cases that could make the indexer not work after a crash.
`geth dumpgenesis` currently does not respect the content of the data directory. Instead, it outputs the genesis block created by command-line flags. This PR fixes it to read the genesis from the database, if the database already exists.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR cleans up the configurations for pruner and snapshotter by passing a config struct.
This PR also disables snapshot background generation if the chain is opened in "read-only" mode. The read-only mode is necessary in some cases. For example, we have a list of commands that open the ethereum node in "read-only" mode, like export-chain. In these cases, snapshot background generation is not expected and should be banned explicitly.
core/blockchain: downgrade tx indexing and unindexing logs from info to debug
If a user has a finite tx lookup limit, they will see an "unindexing" info-level log each time a block is imported. This information might help a user understand that they are removing the index with each block and that some txs may not be retrievable by hash, but overall it is generally more of a nuisance than a benefit. This change downgrades the log to a debug log.
This shortens the chain config summary in bad block reports,
and adds go-ethereum version information as well.
Co-authored-by: Felix Lange <fjl@twurst.com>
This changes the CI / release builds to use the latest Go version. It also
upgrades golangci-lint to a newer version compatible with Go 1.19.
In Go 1.19, godoc has gained official support for links and lists. The
syntax for code blocks in doc comments has changed and now requires a
leading tab character. gofmt adapts comments to the new syntax
automatically, so there are a lot of comment re-formatting changes in this
PR. We need to apply the new format in order to pass the CI lint stage with
Go 1.19.
With the linter upgrade, I have decided to disable 'gosec' - it produces
too many false-positive warnings. The 'deadcode' and 'varcheck' linters
have also been removed because golangci-lint warns about them being
unmaintained. 'unused' provides similar coverage and we already have it
enabled, so we don't lose much with this change.
This PR makes the event-sending for deleted and new logs happen in batches, to prevent OOM situation due to large reorgs.
Co-authored-by: Felix Lange <fjl@twurst.com>
* core, trie: flush preimages to db on database close
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
* rename Close to CommitPreimages for clarity
* core, trie: nitpick fixes
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This PR allows users to pass in a config object directly to the tracers. Previously only the struct logger was configurable.
It also adds an option to the call tracer which, if enabled, makes it ignore any subcall and collect only information about the top-level call. See #25419 for discussion.
The tracers will silently ignore a config they don't care about.
* core: use TryGetAccount to read where TryUpdateAccount has been used to write
* Gary's review feedback
* implement Gary's suggestion
* fix bug + rename NewSecure into NewStateTrie
* trie: add backwards-compatibility aliases for SecureTrie
* Update database.go
* make the linter happy
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
* eth: support bubbling up bad blocks from sync to the engine API
* eth/catalyst: fix typo
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* eth/catalyst: fix typo
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* Update eth/catalyst/api.go
* eth/catalyst: when forgetting bad hashes, also forget descendants
* eth/catalyst: minor bad block tweaks for resilience
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
During RPC calls such as eth_call and eth_estimateGas, st.evm.Config.NoBaseFee is set,
which allows the gas price to be below the base fee. This results in the tip being negative,
and balance being subtracted from the coinbase instead of added to it, which can, interestingly,
produce a negative coinbase balance. This can't happen during normal chain
processing, as outside of RPC calls the gas price is required to be at least the base fee,
since NoBaseFee is false.
This change prevents this behavior by disabling fee payment when the fee is not set.
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
* Remove locking in (*BlockChain).ExportN
Since ExportN is read-only, it shouldn't need the lock. (?)
* Add hash check to detect reorgs during export.
* fix check order
* Update blockchain.go
* Update blockchain.go
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
This enables the following linters
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a string type in context.WithValue is apparently wrong; one should use a custom type to prevent collisions between different places in the hierarchy of callers. That should be fixed at some point, but may require some attention.
- The warnings for using weak random generator are squashed, since we use a lot of random without need for cryptographic guarantees.
Previously on Geth startup we just logged the chain config in a semi-json-y format. Whilst that worked while we had a handful of hard-forks defined, it has become kind of unwieldy.
This PR converts that original data dump into a user friendly - alas multiline - log output.
This PR adds support for block overrides when doing debug_traceCall.
- Previously, debug_traceCall against pending erroneously used a common.Hash{} stateroot when looking up the state, meaning that a totally empty state was used -- so it always failed.
- With this change, we reject executing debug_traceCall against pending.
- And we add ability to override all evm-visible header fields.
This adds a JS tracer runtime environment based on the Goja VM. The new
runtime replaces the duktape runtime, which will be removed soon.
Goja is implemented in Go and is faster for cases where the Go <-> JS
transition overhead dominates overall performance. It is faster because
duktape is written in C, and the transition cost includes the cost of using
cgo. Another reason for using Goja is that go-duktape is not maintained
anymore.
We expect the performance of JS tracing to be at least as good or better with
this change.
Previously the freezer has only been used for storing ancient chain data, while it can obviously be used for more. This PR unties the chain data and the freezer, keeping the minimal freezer structure and moving all other logic (like incrementally freezing block data) into a separate structure called ChainFreezer.
This PR also extends the database interface by adding a new ancient store function, AncientDatadir, which returns the root directory of the ancient store. The ancient root directory can be used when we want to open some other ancient stores (e.g. the reverse diff freezer).
* core: recover the state in SetChainHead if the head state is missing
* core: disable test logging
* core: address comment from martin
* core: improve log level in case state is recovered
* core, eth, les, light: rename SetChainHead to SetCanonical
This PR fixes the flaw that @rjl493456442 found in https://github.com/ethereum/go-ethereum/pull/#issuecomment-1093817551, namely, that the snapshot iterator uses the combined (disk + difflayers) 'view', whereas the raw iterator uses only the disk 'view'.
This PR instead splits up the work: one phase is iterating the disk layer data, another phase is loading the journalled difflayers and performing the same check there.
This adds a tools.go file to import all command packages used for
go:generate. Doing so makes it possible to execute go-based code
generators using 'go run', locking in the tool version using go.mod.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR fixes a few panics in the chain marker benchmarks. The root
cause of the panics is that in the chain marker the genesis header/block is not
accessible, while the tests expect to be able to obtain it. So this PR
avoids touching the genesis header at all to avoid the panic.
This commit replaces ioutil.TempDir with t.TempDir in tests. The
directory created by t.TempDir is automatically removed when the test
and all its subtests complete.
Prior to this commit, temporary directory created using ioutil.TempDir
had to be removed manually by calling os.RemoveAll, which is omitted in
some tests. The error handling boilerplate e.g.
defer func() {
if err := os.RemoveAll(dir); err != nil {
t.Fatal(err)
}
}
is also tedious, but t.TempDir handles this for us nicely.
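For comparison, the replacement pattern is simply:

```go
package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestWithTempDir(t *testing.T) {
	dir := t.TempDir() // created per (sub)test, removed automatically on cleanup
	if err := os.WriteFile(filepath.Join(dir, "data"), []byte("x"), 0o600); err != nil {
		t.Fatal(err)
	}
}
```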
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
* core,eth: add empty tx logger hooks
* core,eth: add initial and remaining gas to tx hooks
* store tx gasLimit in js tracer
* use gasLimit to compute intrinsic cost for js tracer
* re-use rules in transitiondb
* rm logs
* rm logs
* Mv some fields from Start to TxStart
* simplify sender lookup in prestate tracer
* mv env to TxStart
* Revert "mv env to TxStart"
This reverts commit 656939634b9aff19f55a1cd167345faf8b1ec310.
* Revert "simplify sender lookup in prestate tracer"
This reverts commit ab65bce48007cab99e68232e7aac2fe008338d50.
* Revert "Mv some fields from Start to TxStart"
This reverts commit aa50d3d9b2559addc80df966111ef5fb5d0c1b6b.
* fix intrinsic gas for prestate tracer
* add comments
* refactor
* fix test case
* simplify consumedGas calc in prestate tracer
Trie tracer is an auxiliary tool to capture all deleted nodes
which can't be captured by trie.Committer. The deleted nodes
can be removed from the disk later.
* cmd,core: add simple legacy receipt converter
core/rawdb: use forEach in migrate
core/rawdb: batch reads in forEach
core/rawdb: make forEach anonymous fn
cmd/geth: check for legacy receipts on node startup
fix err msg
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
fix log
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
fix some review comments
add warning to cmd
drop isLegacy fn from migrateTable params
add test for windows rename
test replacing in windows case
* minor fix
* sanity check for tail-deletion
* add log before moving files around
* speed-up hack for mainnet
* fix mainnet check, use networkid instead
* check mainnet genesis
* review fixes
* resume previous migration attempt
* core/rawdb: lint fix
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/beacon: eth/catalyst: updated engine api to new version
* core: implement exchangeTransitionConfig
* core/beacon: prevRandao instead of Random
* eth/catalyst: Fix ExchangeTransitionConfig, add test
* eth/catalyst: stop external miners on TTD reached
* node: implement --authrpc.vhosts flag
* core: allow for config override on non-mainnet networks
* eth/catalyst: fix peters comments
* eth/catalyst: make stop remote sealer more explicit
* eth/catalyst: add log output
* cmd/utils: rename authrpc.host to authrpc.addr
* eth/catalyst: disable the disabling of the miner
* eth: core: remove notion of terminal pow block
* eth: les: more of peters nitpicks
* eth/downloader: implement beacon sync
* eth/downloader: fix a crash if the beacon chain is reduced in length
* eth/downloader: fix beacon sync start/stop thrashing data race
* eth/downloader: use a non-nil pivot even in degenerate sync requests
* eth/downloader: don't touch internal state on beacon Head retrieval
* eth/downloader: fix spelling mistakes
* eth/downloader: fix some typos
* eth: integrate legacy/beacon sync switchover and UX
* eth: handle UX wise being stuck on post-merge TTD
* core, eth: integrate the beacon client with the beacon sync
* eth/catalyst: make some warning messages nicer
* eth/downloader: remove Ethereum 1&2 notions in favor of merge
* core/beacon, eth: clean up engine API returns a bit
* eth/downloader: add skeleton extension tests
* eth/catalyst: keep non-kiln spec, handle mining on ttd
* eth/downloader: add beacon header retrieval tests
* eth: fixed spelling, commented failing tests out
* eth/downloader: review fixes
* eth/downloader: drop peers failing to deliver beacon headers
* core/rawdb: track beacon sync data in db inspect
* eth: fix review concerns
* internal/web3ext: nit
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* core/rawdb, cmd, ethdb, eth: implement freezer tail deletion
* core/rawdb: address comments from martin and sina
* core/rawdb: fixes cornercase in tail deletion
* core/rawdb: separate metadata into a standalone file
* core/rawdb: remove unused code
* core/rawdb: add random test
* core/rawdb: polish code
* core/rawdb: fsync meta file before manipulating the index
* core/rawdb: fix typo
* core/rawdb: address comments
This change makes use of the new code generator rlp/rlpgen to improve the
performance of RLP encoding for Header and StateAccount. It also speeds up
encoding of ReceiptForStorage using the new rlp.EncoderBuffer API.
The change is much less transparent than I wanted it to be, because Header and
StateAccount now have an EncodeRLP method defined with pointer receiver. It
used to be possible to encode non-pointer values of these types, but the new
method prevents that, and attempting to encode unaddressable values (even if
part of another value) will return an error. The error can be surprising and may
pop up in places that previously didn't expect any errors.
To make things work, I also needed to update all code paths (mostly in unit tests)
that lead to encoding of non-pointer values, and pass a pointer instead.
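A small illustration of the behavioural change described above (hedged sketch; the exact error text may differ):

```go
package main

import (
	"bytes"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	h := types.Header{Number: big.NewInt(1), Difficulty: big.NewInt(0)}
	var buf bytes.Buffer

	// Encoding via a pointer goes through the generated EncodeRLP method.
	fmt.Println(rlp.Encode(&buf, &h)) // expected: <nil>

	// The value inside the interface is unaddressable, so the pointer-receiver
	// EncodeRLP cannot be reached; this is expected to return an error now.
	fmt.Println(rlp.Encode(&buf, h))
}
```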
Benchmark results:
name                              old time/op    new time/op    delta
EncodeRLP/legacy-header-8          328ns ± 0%     237ns ± 1%    -27.63%  (p=0.000 n=8+8)
EncodeRLP/london-header-8          353ns ± 0%     247ns ± 1%    -30.06%  (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8    237ns ± 0%     123ns ± 0%    -47.86%  (p=0.000 n=8+7)
EncodeRLP/receipt-full-8           297ns ± 0%     301ns ± 1%     +1.39%  (p=0.000 n=8+8)

name                              old speed      new speed      delta
EncodeRLP/legacy-header-8         1.66GB/s ± 0%  2.29GB/s ± 1%  +38.19%  (p=0.000 n=8+8)
EncodeRLP/london-header-8         1.55GB/s ± 0%  2.22GB/s ± 1%  +42.99%  (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8   38.0MB/s ± 0%  64.8MB/s ± 0%  +70.48%  (p=0.000 n=8+7)
EncodeRLP/receipt-full-8           910MB/s ± 0%   897MB/s ± 1%   -1.37%  (p=0.000 n=8+8)

name                              old alloc/op   new alloc/op   delta
EncodeRLP/legacy-header-8          0.00B          0.00B             ~    (all equal)
EncodeRLP/london-header-8          0.00B          0.00B             ~    (all equal)
EncodeRLP/receipt-for-storage-8    64.0B ± 0%      0.0B        -100.00%  (p=0.000 n=8+8)
EncodeRLP/receipt-full-8            320B ± 0%      320B ± 0%        ~    (all equal)
This PR adds an additional API called `NewBatchWithSize` for the db
batcher. It turns out that leveldb batch memory allocation is
super inefficient. The main reason is that the allocation step of
a leveldb Batch is too small when the batch size is large. It can
take a few seconds to build a leveldb batch with 100MB size.
Luckily, leveldb also offers another API called MakeBatch which can
pre-allocate the memory area. So if the approximate size of the batch is
known in advance, this API can be used instead.
It's needed by the new state scheme PR which needs to commit a batch of
trie nodes in a single batch. The feature is implemented in a separate PR.
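A hedged usage sketch; the `NewBatchWithSize(size int)` signature is taken from the description, so check the ethdb interfaces for the exact form:

```go
package example

import "github.com/ethereum/go-ethereum/ethdb"

// flushNodes writes a large, roughly size-known payload using a pre-sized batch,
// avoiding leveldb's incremental batch growth.
func flushNodes(db ethdb.Database, kvs map[string][]byte, sizeHint int) error {
	batch := db.NewBatchWithSize(sizeHint)
	for k, v := range kvs {
		if err := batch.Put([]byte(k), v); err != nil {
			return err
		}
	}
	return batch.Write()
}
```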
* cmd/geth: add db cmd to show metadata
* cmd/geth: better output generator status
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
* cmd: minor
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
* freezer: add readonly flag to table
* freezer: enforce readonly in table repair
* freezer: enforce readonly in newFreezer
* minor fix
* minor
* core/rawdb: test that writing during readonly fails
* rm unused log
* check readonly on batch append
* minor
* Revert "check readonly on batch append"
This reverts commit 2ddb5ec4ba7534bf6edbdfec158ea99a2eed5036.
* review fixes
* minor test refactor
* attempt at fixing windows issue
* add comment re windows sync issue
* k->kind
* open readonly db for genesis check
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core: implement eip-4399 random opcode
* core: make vmconfig threadsafe
* core: miner: pass vmConfig by value not reference
* all: enable 4399 by Rules
* core: remove diff (f)
* tests: set proper difficulty (f)
* smaller diff (f)
* eth/catalyst: nit
* core: make RANDOM a pointer which is only set post-merge
* cmd/evm/internal/t8ntool: fix t8n tracing of 4399
* tests: set difficulty
* cmd/evm/internal/t8ntool: check that baserules are london before applying the merge chainrules
* core/vm: reverse bit order in bytes of code bitmap
This bit order is more natural for bit manipulation operations and we
can eliminate some small number of CPU instructions.
* core/vm: drop lookup table
This PR reduces the amount of work we do when answering header queries, e.g. when a peer
is syncing from us.
For some items, e.g block bodies, when we read the rlp-data from database, we plug it
directly into the response package. We didn't do that for headers, but instead read
headers-rlp, decode to types.Header, and re-encode to rlp. This PR changes that to keep it
in RLP-form as much as possible. When a node is syncing from us, it typically requests 192
contiguous headers. On master it has the following effect:
- For headers not in ancient: 2 db lookups. One for translating hash->number (even though
the request is by number), and another for reading by hash (this latter one is sometimes
cached).
- For headers in ancient: 1 file lookup/syscall for translating hash->number (even though
the request is by number), and another for reading the header itself. After this, it
also performs a hashing of the header, to ensure that the hash is what it expected.
In this PR, I instead move the logic for "give me a sequence of blocks" into the lower
layers, where the database can determine how and what to read from leveldb and/or
ancients.
There are basically four types of requests; three of them are improved this way. The
fourth, by hash going backwards, is more tricky to optimize. However, since we know that
the gap is 0, we can look up by the parentHash, and still shave off all the number->hash
lookups.
The gapped collection can be optimized similarly, as a follow-up, at least in three out of
four cases.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR fixes a special corner case in transaction indexing.
When the chain is rewound by SetHead to a historical point which is even lower than the transaction index tail, the system will report a 'Failed to decode block body' error all the time, because the relevant blocks have already been deleted.
In order to avoid this "non-critical-but-annoying" issue, we can recap the indexing target to head+1 (the 'to' block is excluded, so this means indexing transactions from 0 to head).
* core/vm: Remove interpreter loop interruption check
* core/vm: Unit test for interpreter loop interruption
* core/vm: Check for interpreter loop abort on every jump
* core/vm: Move interpreter.ReadOnly check into the opcode implementations
Also remove the same check from the interpreter inner loop.
* core/vm: Remove obsolete operation.writes flag
* core/vm: Capture fault states in logger
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/vm: Remove panic added for testing
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/vm: break loop on any error
* core/vm: move ErrExecutionReverted to opRevert()
* core/vm: use "stop token" to stop the loop
* core/vm: unconditionally pc++ in the loop
* core/vm: set return data in instruction impls
* all: work for eth1/2 transition
* consensus/beacon, eth: change beacon difficulty to 0
* eth: updates
* all: add terminalBlockDifficulty config, fix rebasing issues
* eth: implemented merge interop spec
* internal/ethapi: update to v1.0.0.alpha.2
This commit updates the code to the new spec, moving payloadId into
its own object. It also fixes an issue with finalizing an empty blockhash.
It also properly sets the basefee.
* all: sync polishes, other fixes + refactors
* core, eth: correct semantics for LeavePoW, EnterPoS
* core: fixed rebasing artifacts
* core: light: performance improvements
* core: use keyed field (f)
* core: eth: fix compilation issues + tests
* eth/catalyst: better error codes
* all: move Merger to consensus/, remove reliance on it in bc
* all: renamed EnterPoS and LeavePoW to ReachTTD and FinalizePoS
* core: make mergelogs a function
* core: use InsertChain instead of InsertBlock
* les: drop merger from lightchain object
* consensus: add merger
* core: recoverAncestors in catalyst mode
* core: fix nitpick
* all: removed merger from beacon, use TTD, nitpicks
* consensus: eth: add docstring, removed unnecessary code duplication
* consensus/beacon: better comment
* all: easy to fix nitpicks by karalabe
* consensus/beacon: verify known headers to be sure
* core: comments
* core: eth: don't drop peers who advertise blocks, nitpicks
* core: never add beacon blocks to the future queue
* core: fixed nitpicks
* consensus/beacon: simplify IsTTDReached check
* consensus/beacon: correct IsTTDReached check
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* all: mv loggers to eth/tracers
* core/vm: minor
* eth/tracers: tmp comment out testStoreCapture
* eth/tracers: uncomment and fix logger test
* eth/tracers: simplify test
* core/vm: re-add license
* core/vm: minor
* rename LogConfig to Config
* cmd, core: add flag --dev.gaslimit to allow configuring initial block gas limit in dev mode
* core: use provided gaslimit
Co-authored-by: Martin Holst Swende <martin@swende.se>
The price limit is supposed to exclude transactions with too low a fee.
Before EIP-1559, it was sufficient to check the limit against
the gas price of the transaction. After 1559, it is more complicated
because the concept of 'transaction gas price' does not really exist.
When mining, the price limit is used to exclude transactions below a
certain effective fee amount. This change makes it apply the same check
earlier, in tx validation. Transactions below the specified fee amount
cannot enter the pool.
Fixes #23837
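As a reference point for the 'effective fee amount' mentioned above, the post-1559 tip a block producer actually receives is min(gasTipCap, gasFeeCap - baseFee); a sketch of that quantity (illustrative, not the pool's exact validation code):

```go
package example

import "math/big"

// effectiveTip returns min(gasTipCap, gasFeeCap-baseFee), the amount per gas
// that actually goes to the block producer under EIP-1559.
func effectiveTip(gasFeeCap, gasTipCap, baseFee *big.Int) *big.Int {
	tip := new(big.Int).Sub(gasFeeCap, baseFee)
	if tip.Cmp(gasTipCap) > 0 {
		tip.Set(gasTipCap)
	}
	return tip
}
```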
This PR offers two more database sub commands for exporting and importing data.
Two exporters are implemented: preimage and snapshot data respectively.
The import command is generic, it can take any data export and import into leveldb.
The data format has a 'magic' for disambiguation, and a version field for future compatibility.
This is because writing a known block only checks the block and state, without the snapshot, which could lead to a gap between the newest snapshot and the newest block state. However, new blocks which would cause the snapshot to become fixed were ignored, since the state was already known.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core: write test showing that TD is not stored properly at genesis
The ToBlock method applies a default value for an empty
difficulty value. This default is not carried over through the Commit
method, because the TotalDifficulty database write writes the
original difficulty value (nil) instead of the default value
present on the genesis Block.
Date: 2021-10-22 08:25:32-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* core: write TD value from Block, not original genesis value
This fixes an issue where a default TD value was not written to
the database, resulting in a 0-value TD at genesis.
A test for this issue was provided at 90e3ffd393
Date: 2021-10-22 08:28:00-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* core: fix tests by adding GenesisDifficulty to expected result
See prior two commits.
Date: 2021-10-22 09:16:01-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* les: fix test with genesis change
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR adds a new accessor method to the freezer database. This new view offers a consistent interface, guaranteeing that all individual tables (headers, bodies etc) are at the same number, and that this number does not change (by additions or truncations) while the operation is in progress.
* core/state/snapshot: fix BAD BLOCK error when snapshot is generating
* core/state/snapshot: alternative fix for the snapshot generator
* add comments and minor update
Co-authored-by: Martin Holst Swende <martin@swende.se>
This doesn't fix all go-critic warnings, just the most serious ones.
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>