EIP-4844 adds a new transaction type for blobs. Users can submit such transactions via `eth_sendRawTransaction`. In this PR we refrain from adding support to `eth_sendTransaction` and in fact it will fail if the user passes in a blob hash.
However, since the chain can handle such transactions, it makes sense to allow simulating them. E.g. an L2 operator should be able to simulate submitting a rollup blob and updating the L2 state. Most methods that take in a transaction object should recognize blobs. The change boils down to adding `blobVersionedHashes` and `maxFeePerBlobGas` to `TransactionArgs` (a sketch of the new fields follows the summary). In summary:
- `eth_sendTransaction`: will fail for blob txes
- `eth_signTransaction`: will fail for blob txes
The methods that sign txes do not, as of this PR, support the new EIP-4844 transaction type. Resuming the summary:
- `eth_sendRawTransaction`: can send blob txes
- `eth_fillTransaction`: will fill in a blob tx. Note: here we simply fill in normal transaction fields + possibly `maxFeePerBlobGas` when blobs are present. One can imagine a more elaborate set-up where users can submit blobs themselves and we fill in proofs and commitments and such. Left for future PRs if desired.
- `eth_call`: can simulate blob messages
- `eth_estimateGas`: blobs have no effect here. They have a separate unit of gas which is not tunable in the transaction.
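For readers unfamiliar with `TransactionArgs`, here is a rough sketch of the two new fields; the types and JSON tags are my assumption about the encoding, not copied from the diff:

```golang
import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

// Sketch only: TransactionArgs with the two blob-related additions.
type TransactionArgs struct {
	// ... existing fields (From, To, Gas, MaxFeePerGas, ...) elided ...

	BlobVersionedHashes []common.Hash `json:"blobVersionedHashes,omitempty"`
	MaxFeePerBlobGas    *hexutil.Big  `json:"maxFeePerBlobGas,omitempty"`
}
```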
* cmd, core: resolve scheme from a read-write database
* cmd, core, eth: move the scheme check in the ethereum constructor
* cmd/geth: dump should work in ro mode
* cmd: reverts
This pull request improves the condition to check if path state scheme is in use.
Originally, root node presence was used as the indicator of whether path scheme is used or not. However, due to the fact that the root node will be deleted during the initial snap sync, this condition is no longer useful.
If PersistentStateID is present, it shows that we've already configured for path scheme.
Original problem was caused by #28595, where we made it so that as soon as we start to sync, the root of the disk layer is deleted. That is not wrong per se, but another part of the code uses the "presence of the root" as an init-check for the pathdb. And, since the init-check now failed, the code tried to re-initialize it which failed since a sync was already ongoing.
The total impact being: after a state-sync has begun, if the node for some reason is shut down, it will refuse to start up again, with the error message: `Fatal: Failed to register the Ethereum service: waiting for sync.`.
This change also modifies how `geth removedb` works, so that the user is prompted for two things: `state data` and `ancient chain`. The former includes both the chaindb as well as any state history stored in ancients.
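Roughly, the path-scheme check described above boils down to something like the following sketch (assuming the `rawdb.ReadPersistentStateID` accessor; an illustration of the idea, not the exact code):

```golang
import (
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// A non-zero persistent state id means the database was already initialised
// for path scheme, even if the disk-layer root node has been deleted by an
// ongoing snap sync.
func isPathScheme(db ethdb.Database) bool {
	return rawdb.ReadPersistentStateID(db) != 0
}
```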
---------
Co-authored-by: Martin HS <martin@swende.se>
Here we update the eth and snap protocol test suites with a new test chain,
created by the hivechain tool. The new test chain uses proof-of-stake. As such,
tests using PoW block propagation in the eth protocol are removed. The test suite
now connects to the node under test using the engine API in order to make it
accept transactions.
The snap protocol test suite has been rewritten to output test descriptions and
log requests more verbosely.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This is the fix to issue #27483. A new hiddenBytes() is introduced to calculate the byte size of hidden items in the freezer table. When reporting the size of the freezer table, the size of the hidden items will be subtracted from the total size.
---------
Co-authored-by: Yifan <Yifan Wang>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
* core: LoadChainConfig return predefined config for built-in net firstly
* cmd/geth: add a warn message for chain config in the configuration file
* consensus/parlia: change chain config log level when New parlia
* core: fix code style
* cmd, core, params: add support for the Holesky testnet
* cmd/devp2p: add support for holesky for the dns crawler
# Conflicts:
# cmd/devp2p/nodesetcmd.go
# cmd/geth/main.go
# cmd/utils/flags.go
# core/genesis.go
# params/bootnodes.go
# params/config.go
This change implements CommitteeChain which is a key component of the beacon light client. It is a passive data structure that can validate, hold and update a chain of beacon light sync committees and updates, starting from a checkpoint that proves the starting committee through a beacon block hash, header and corresponding state. Once synced to the current sync period, CommitteeChain can also validate signed beacon headers.
The dump after state-test didn't work; the problem was an error, "Already committed", which was silently ignored.
This change re-initialises the state, so the dumping works again.
This PR replaces Geth's logger package (a fork of [log15](https://github.com/inconshreveable/log15)) with an implementation using slog, a logging library included as part of the Go standard library as of Go1.21.
Main changes are as follows:
* removes any log handlers that were unused in the Geth codebase.
* Json, logfmt, and terminal formatters are now slog handlers.
* Verbosity level constants are changed to match slog constant values. Internal translation is done to make this opaque to the user and backwards compatible with existing `--verbosity` and `--vmodule` options.
* `--log.backtraceat` and `--log.debug` are removed.
The external-facing API is largely the same as the existing Geth logger. Logger method signatures remain unchanged.
A small semantic difference is that a `Handler` can only be set once per `Logger` and not changed dynamically. This just means that a new logger must be instantiated every time the handler of the root logger is changed.
----
For users of the `go-ethereum/log` module: if you were using this module for your own project, you will need to change the initialization. If you previously did
```golang
log.Root().SetHandler(log.LvlFilterHandler(log.LvlInfo, log.StreamHandler(os.Stderr, log.TerminalFormat(true))))
```
You now instead need to do
```golang
log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.LevelInfo, true)))
```
See more about reasoning here: https://github.com/ethereum/go-ethereum/issues/28558#issuecomment-1820606613
* eth/gasestimator: early exit for plain transfer and error allowance
* core, eth/gasestimator: hard guess at a possible required gas
* internal/ethapi: update estimation tests with the error ratio
* eth/gasestimator: I hate you linter
* graphql: fix gas estimation test
---------
Co-authored-by: Oren <orenyomtov@users.noreply.github.com>
There were several problems related to dumping state.
- If a preimage was missing, then even with `OnlyWithAddresses` set to `false` (to export such entries anyway), the way the mapping was constructed (using `common.Address` as key) made the entries get lost. This concerns both state- and blockchain tests.
- Blockchain test execution was not configured to store preimages.
This change makes it so that the block test executor takes a callback, just like the state test executor already does. This callback can be used to examine the post-execution state, e.g. to aid debugging of test failures.
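For illustration, a hedged sketch of what such a post-check callback could do; the callback shape and how it is wired into the executor are assumptions, not the PR's exact API:

```golang
import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

// Sketch: inspect the post-execution state once all blocks have been processed.
func postCheck(runErr error, chain *core.BlockChain) {
	if runErr != nil {
		return // the test already failed; nothing to dump
	}
	statedb, err := chain.State()
	if err != nil {
		return
	}
	fmt.Println(string(statedb.Dump(nil))) // JSON dump of the post-state, for debugging
}
```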
geth --dev can be used with an existing data directory and genesis block. Since
dev mode only works with PoS, we need to verify that the merge has happened.
Co-authored-by: Felix Lange <fjl@twurst.com>
* rpc: make subscription test faster
reduces time for TestClientSubscriptionChannelClose
from 25 sec to < 1 sec.
* trie: cache trie nodes for faster sanity check
This reduces the time spent on TestIncompleteSyncHash
from ~25s to ~16s.
* core/forkid: speed up validation test
This takes the validation test from > 5s to sub 1 sec
* core/state: improve snapshot test run
brings the time for TestSnapshotRandom from 13s down to 6s
* accounts/keystore: improve keyfile test
This removes some unnecessary waits and reduces the
runtime of TestUpdatedKeyfileContents from 5 to 3 seconds
* trie: remove resolver
* trie: only check ~5% of all trie nodes
This change adds a check to ensure that transactions added to the legacy pool are not treated as 'locals' if the global locals-management has been disabled.
This change makes the pool enforce the --txpool.pricelimit setting.
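For context, a hedged sketch of the two legacy-pool settings these changes are about; the field names are as I understand the pool config and should be double-checked against the actual `legacypool.Config`:

```golang
import "github.com/ethereum/go-ethereum/core/txpool/legacypool"

func poolConfig() legacypool.Config {
	cfg := legacypool.DefaultConfig
	cfg.NoLocals = true // locals-management disabled: incoming txs are never treated as 'local'
	cfg.PriceLimit = 2  // the --txpool.pricelimit value, now actually enforced on acceptance
	return cfg
}
```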
This PR moves our fuzzers from tests/fuzzers into whatever their respective 'native' package is.
The historical reason why they were placed in an external location, is that when they were based on go-fuzz, they could not be "hidden" via the _test.go prefix. So in order to shove them away from the go-ethereum "production code", they were put aside.
But now we've rewritten them to be based on golang testing, and thus they can be brought back. I've left (in tests/) the ones that are not production (bls12381), or require non-standard imports (secp requires btcec, bn256 requires gnark/google/cloudflare deps).
This PR also adds a fuzzer for precompiled contracts, because why not.
This PR utilizes a newly rewritten replacement for go-118-fuzz-build, namely gofuzz-shim, which utilises the inputs from the fuzzing engine better.
This change allows the creation of a genesis block for verkle testnets. This makes for a chunk of code that is easier to review and still touches many discussion points.
* core/vm: set basefee to 0 internally on eth_call
* core: nicer 0-basefee, make it work for blob fees too
* internal/ethapi: make tests a bit more complex
* core: fix blob fee checker
* core: make code a bit more readable
* core: fix some test error strings
* core/vm: Get rid of weird comment
* core: dict wrong typo
This change improves GenerateChain to support internal chain history access (ChainReader)
for the consensus engine and EVM.
GenerateChain takes a `parent` block and the number of blocks to create. With my changes,
the consensus engine and EVM can now access blocks from `parent` up to the block currently
being generated. This is required to make the BLOCKHASH instruction work, and also needed
to create real clique chains. Clique uses chain history to figure out if the current signer is in-turn,
for example.
I've also added some more accessors to BlockGen. These are helpful when creating transactions:
- g.Signer returns a signer instance for the current block
- g.Difficulty returns the current block difficulty
- g.Gas returns the remaining gas amount
Another fix in this commit concerns the receipts returned by GenerateChain. The receipts now
have properly derived fields (BlockHash, etc.) and should generally match what would be
returned by the RPC API.
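A hedged sketch of how the new accessors can be used inside the generator callback; `gspec`, `engine` and `key` are placeholders the caller would set up (imports: core, core/types, crypto, math/big):

```golang
// Illustrative only: create a short chain and sign a transaction using the
// signer, base fee and nonce helpers exposed by BlockGen.
_, blocks, receipts := core.GenerateChainWithGenesis(gspec, engine, 5, func(i int, g *core.BlockGen) {
	from := crypto.PubkeyToAddress(key.PublicKey)
	tx := types.MustSignNewTx(key, g.Signer(), &types.LegacyTx{
		Nonce:    g.TxNonce(from),
		To:       &from, // self-transfer, purely for illustration
		Gas:      21000,
		GasPrice: g.BaseFee(),
		Value:    big.NewInt(1),
	})
	g.AddTx(tx)
})
_, _ = blocks, receipts // receipts now carry properly derived fields (BlockHash etc.)
```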
This adds warning logs when the read does not match the expected count.
We can also remove the size limit since the function documentation explicitly states
that callers should limit the count.
accountTrieCache and storageTrieCache were introduced in this PR:
https://github.com/bnb-chain/bsc/pull/257, which was intended to improve performance.
In practice the performance gain is quite limited, as there are already dirty
and clean caches for trie nodes.
Also, after the big merge, these two caches cannot be used when PBSS is enabled.
So remove this code to simplify the logic.
This PR removes panics from stacktrie (mostly), and makes the Update return errors instead. While adding tests for this, I also found that one case of possible corruption was not caught, which is now fixed.
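A minimal sketch of the new call pattern (error handling assumed; previously these conditions would panic):

```golang
st := trie.NewStackTrie(nil) // nil: no node-write callback, hash-only use
if err := st.Update(key, value); err != nil {
	return err // e.g. a bad key, now surfaced to the caller instead of panicking
}
root := st.Hash()
_ = root
```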
* go.mod: upgrade prysm and the indirect dependency
prysm from v4.0.2 to v4.0.8, and run go mod tidy
* ci: upgrade go version from 1.19 to 1.20
* go-version: upgrade from v1.19 to v1.20
there are some dependencies on go v1.20, such as go-libp2p v0.27.8,
and also run go mod tidy
* dependency: upgrade docker version for security
it is not a big issue, since docker is only used for test purposes.
* rand: update the usage of math/rand after golang v1.20
Two APIs of the math/rand module were deprecated as of golang v1.20,
namely rand.Seed() and rand.Read(); refer to https://pkg.go.dev/math/rand
"rand.Seed(seed int64)" has been replaced by "r := rand.New(rand.NewSource(seed int64))";
a source instance needs to be created before use.
"rand.Read()" has been replaced by "crypto/rand.Read()" (see the sketch after this list).
* readme: need golang v1.20+ to build bsc
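A small sketch of the migration pattern described in the `rand` item above:

```golang
import (
	crand "crypto/rand"
	"math/rand"
)

func randExample(seed int64) {
	// Before (deprecated since Go 1.20): rand.Seed(seed); rand.Read(buf)
	r := rand.New(rand.NewSource(seed)) // create a local source before use
	_ = r.Intn(100)

	buf := make([]byte, 32)
	if _, err := crand.Read(buf); err != nil { // crypto/rand replaces math/rand.Read
		panic(err)
	}
}
```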
This change enhances the stacktrie constructor by introducing an option struct. It also simplifies the `Hash` and `Commit` operations, getting rid of the special handling around the root node.
This change implements faster post-selfdestruct iteration of storage slots for deletion, by using snapshot-storage+stacktrie to recover the trienodes to be deleted. This mechanism is only implemented for path-based schema.
For hash-based schema, the entire post-selfdestruct storage iteration is skipped, with this change, since hash-based does not actually perform deletion anyway.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
During snap-sync, we request ranges of values: either a range of accounts or a range of storage values. For any large trie, e.g. the main account trie or a large storage trie, we cannot fetch everything at once.
Short version: we split it up and request in multiple stages. To do so, we use an origin field, to say "Give me all storage key/values where key > 0x20000000000000000". When the server fulfils this, the server provides the first key after origin, let's say 0x2e030000000000000 -- never providing the exact origin. However, the client-side needs to be able to verify that the 0x2e03.. indeed is the first one after 0x2000.., and therefore the attached proof concerns the origin, not the first key.
So, short-short version: the left-hand side of the proof relates to the origin, and is free-standing from the first leaf.
On the other hand (pun intended), on the right-hand side there's no such 'gap' between "along what path does the proof walk" and the last provided leaf. The proof must prove the last element (unless there are no elements).
Therefore, we can simplify the semantics for trie.VerifyRangeProof by removing an argument. This doesn't make much difference in practice, but makes it so that we can remove some tests. The reason I am raising this is that the upcoming stacktrie-based verifier does not support such fancy features as standalone right-hand borders.
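A hedged sketch of the simplified verifier call; the exact signature is an assumption based on the description, with `origin` being the requested left edge and `keys`/`values` the returned range:

```golang
// The right-hand edge is now implied by the last returned key; only the
// left-hand origin needs to be supplied alongside the proof.
hasMore, err := trie.VerifyRangeProof(root, origin, keys, values, proofDB)
if err != nil {
	return err // the range cannot be proven against the given root
}
_ = hasMore // true if entries exist beyond the last returned key
```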
This change addresses an issue in snap sync, specifically when the entire sync process can be halted due to an encountered empty storage range.
Currently, on the snap sync client side, the response to an empty (partial) storage range is discarded as a non-delivery. However, this response can be a valid response, when the particular range requested does not contain any slots.
For instance, consider a large contract where the entire key space is divided into 16 chunks, and there are no available slots in the last chunk [0xf] -> [end]. When the node receives a request for this particular range, the response includes:
- The proof with origin [0xf]
- A nil storage slot set
If we simply discard this response, the finalization of the last range will be skipped, halting the entire sync process indefinitely. The test case TestSyncWithUnevenStorage can reproduce the scenario described above.
In addition, this change also defines the common variables MaxAddress and MaxHash.
* cmd, core: resolve scheme from a read-write database
* cmd, core, eth: move the scheme check in the ethereum constructor
* cmd/geth: dump should work in ro mode
* cmd: reverts
This change
- Removes the owner-notion from a stacktrie; the owner is only ever needed for committing to the database, but the commit function, the `writeFn`, is provided by the caller, so the caller can just bake the owner into the `writeFn` instead of having it passed through the stacktrie (see the sketch after this list).
- Removes the `encoding.BinaryMarshaler`/`encoding.BinaryUnmarshaler` interface from stacktrie. We're not using it, and it is doubtful whether anyone downstream is either.
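A hedged sketch of the resulting pattern: the owner is captured by the `writeFn` closure rather than stored in the stacktrie. The callback shape and the helper names (`accountHash`, `batch`) are assumptions for illustration:

```golang
owner := accountHash // e.g. the owning account hash for a storage trie
writeFn := func(path []byte, hash common.Hash, blob []byte) {
	// the caller knows the owner, so it simply closes over it here
	rawdb.WriteTrieNode(batch, owner, path, hash, blob, rawdb.PathScheme)
}
st := trie.NewStackTrie(writeFn)
```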
When MatcherSession encounters an error, it attempts to close the session.
Closing waits for all goroutines to finish, including the 'distributor'.
However, the distributor will not exit until all requests have returned.
This patch fixes the issue by delivering the (empty) result to the distributor
before calling Close().
Adding a space between function opOrigin() and opCaller() in instructions.go.
Adding a space between function opKeccak256() and opAddress() in instructions.go.
* parlia: reject header with `WithdrawalsHash` before shanghai fork
* log: output chainconfig when start up
* eth: fix TestOptionMaxPeersPerIP failure of goroutine order
* cmd/evm: improve flags handling
This fixes some issues with flags in cmd/evm. The supported flags did not
actually show up in help output because they weren't categorized. I'm also
adding the VM-related flags to the run command here so they can be given
after the subcommand name. So it can be run like this now:
./evm run --code 6001 --debug
* cmd/evm: enable all forks by default in run command
The default genesis was just empty with no forks at all, which is annoying because
contracts will be relying on opcodes introduced in a fork. So this changes the default to
have all forks enabled.
* core/asm: fix some issues in the assembler
This fixes minor bugs in the old assembler:
- It is now possible to have comments on the same line as an instruction.
- Errors for invalid numbers in the jump instruction are reported better
- Line numbers in errors were off by one
* rlp/rlpgen: remove build tag
This tag was supposed to prevent unstable output when types reference each other. Imagine
there are two struct types A and B, where a reference to type B is in A. If I run rlpgen
on type B first, and then on type A, the generator will see the B.EncodeRLP method and
call it. However, if I run rlpgen on type A first, it will inline the encoding of B.
The solution I chose for the initial release of rlpgen was to just ignore methods
generated by rlpgen using a build tag. But there is a problem with this: if any code in
the package calls EncodeRLP explicitly, the package can't be loaded without errors anymore
in rlpgen, because the loader ignores the generated file. It would be nice if there were a way to just make it
ignore invalid functions during type checking (they're not necessary for rlpgen), but
golang.org/x/tools/go/packages does not provide a way of ignoring them.
Luckily, the types we use rlpgen with do not reference each other right now, so we can
just remove the build tags for now.
This change includes a lot of things, listed below.
### Split up interfaces, write vs read
The interfaces have been split up into one write-interface and one read-interface, with `Snapshot` being the gateway from write to read. This simplifies the semantics _a lot_.
Example of splitting up an interface into one readonly 'snapshot' part, and one updatable writeonly part:
```golang
type MeterSnapshot interface {
	Count() int64
	Rate1() float64
	Rate5() float64
	Rate15() float64
	RateMean() float64
}

// Meters count events to produce exponentially-weighted moving average rates
// at one-, five-, and fifteen-minutes and a mean rate.
type Meter interface {
	Mark(int64)
	Snapshot() MeterSnapshot
	Stop()
}
```
### A note about concurrency
This PR makes the concurrency model clearer. We have actual meters and snapshots of meters. The `meter` is the thing which can be accessed from the registry, and updates can be made to it.
- For all `meters`, (`Gauge`, `Timer` etc), it is assumed that they are accessed by different threads, making updates. Therefore, all `meters` update-methods (`Inc`, `Add`, `Update`, `Clear` etc) need to be concurrency-safe.
- All `meters` have a `Snapshot()` method. This method is _usually_ called from one thread, a backend-exporter. But it's fully possible to have several exporters simultaneously: therefore this method should also be concurrency-safe.
TLDR: `meter`s are accessible via registry, all their methods must be concurrency-safe.
For all `Snapshot`s, it is assumed that an individual exporter-thread has obtained a `meter` from the registry, and called the `Snapshot` method to obtain a readonly snapshot. This snapshot is _not_ guaranteed to be concurrency-safe. There's no need for a snapshot to be concurrency-safe, since exporters should not share snapshots.
Note, though, that by happenstance a lot of the snapshots _are_ concurrency-safe, being immutable minimal representations of a value. Only the more complex ones are _not_ threadsafe, those that lazily calculate things like `Variance()`, `Mean()`.
Example of how a background exporter typically works, obtaining the snapshot and sequentially accessing the non-threadsafe methods in it:
```golang
ms := metric.Snapshot()
...
fields := map[string]interface{}{
	"count":    ms.Count(),
	"max":      ms.Max(),
	"mean":     ms.Mean(),
	"min":      ms.Min(),
	"stddev":   ms.StdDev(),
	"variance": ms.Variance(),
}
```
TLDR: `snapshots` are not guaranteed to be concurrency-safe (but often are).
### Sample changes
I also changed the `Sample` type: previously, it iterated the samples fully every time `Mean()`,`Sum()`, `Min()` or `Max()` was invoked. Since we now have readonly base data, we can just iterate it once, in the constructor, and set all four values at once.
The same thing has been done for runtimehistogram.
### ResettingTimer API
Back when ResettingTimer was implemented, as part of https://github.com/ethereum/go-ethereum/pull/15910, Anton implemented a `Percentiles` on the new type. However, the method did not conform to the other existing types which also had a `Percentiles`.
1. The existing ones, on input, took `0.5` to mean `50%`. Anton used `50` to mean `50%`.
2. The existing ones returned `float64` outputs, thus interpolating between values. A value-set of `0, 10`, at `50%` would return `5`, whereas Anton's would return either `0` or `10`.
This PR removes the 'new' version, and uses only the 'legacy' percentiles, also for the ResettingTimer type.
The resetting timer snapshot was also defined so that it would expose the internal values. This has been removed, and getters for `Max, Min, Mean` have been added instead.
### Unexport types
A lot of types were exported, but do not need to be. This PR unexports quite a lot of them.
* fix: crash of highestVerifiedHeader
* fix: panic of blobpool
* fix: genesis set up
* 1. modify NewDatabaseWithNodeDB to match upstream
2. fix racy use of hasher in statedb
3. fix use of wrong value in updateTrie
* fix dir legacypool
* fix dir blobpool
* fix dir vote
* remove diffsync related code
* fix core/state/snapshot
* disable pipeCommit for now
* fix applyTransaction for bloom setting
* CI: fast finality in gasprice test
* CI: diffFetcher was removed
* CI: downloader, remove beaconsync test
* CI: no beaconsync in downloader, remove a failed case
TestCheckpointChallenge was removed in:
https://github.com/ethereum/go-ethereum/pull/27147
since after the merge it is useless for ethereum, but it might still be useful for BSC.
Disable the case for now, as it is not a big issue.
* CI: bsc protocol decHandlers
* CI: receipt Bloom process
* 1. skip CheckConfigForkOrder for non-parlia engine
2. all test cases in core work well now
cd core && go test ./... -v
* fix test cases in trie dir
* CI: no beaconsync in downloader, remove a failed case(redo)
* fix dir miner
* fix dir cmd/geth
* CI: filter test, BaseFee & Finality
* fix dir graphql
* remove diffStore
* fix ethclient
* fix TestRPCGetTransactionReceipt
* fix dir internal
* ut add dir ethstats and signer
* disable pipeCommit thoroughly; fix concurrent map iteration and map write in statedb
* CI: fix snap sync
it could be changed by mistake
* fix tests/Run to generate snapshot
* prepare for merge
* remove useless
* use common hasher in getDeletedStateObject, no race here
* a critical comment for state.Prepare
* do not copy nil accessList
* add omitempty tag for unused new fields of core.Genesis
* remove totalFees
* calculate fees before FinalizeAndAssemble
* revert interface Finalize of consensus
* do not double gas limit upon london block
* use Leveldb as default
* Revert "remove diffStore"
This reverts commit df343b137412b0beb25298a6ba9c3c19e47f20b1.
* Revert "remove diffsync related code"
This reverts commit 8d84b81feae5d794cb5d7fcfdb7f5f7da751941b.
* compile pass after revert
* remove diffsync
* fix dir eth/protocols/trust
* fix TestFastNode
* decHandlers for trust protocol
* keep persist diff in test
On ACD 163, it was agreed to bump the target and max blob values from `2/4` to `3/6` for future devnets until we could decide on final mainnet number. This change contains said update, making master pass all the hive tests. The final decision for mainnet cancun is still to be made.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
* core/forkid: skip genesis forks by time
* core/forkid: add comment about skipping non-zero fork times
* core/forkid: skip all time based forks in genesis using loop
* core/forkid: simplify logic for dropping time-based forks
This change creates a GaugeInfo metrics type for registering informational (textual) metrics, e.g. the geth version number. It also improves the testing for backend-exporters, and uses a shared subpackage in 'internal' to provide sample datasets and an ordered registry.
Implements #21783
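A hedged usage sketch of the new metric type; the constructor and value-type names are as I recall them and should be treated as assumptions:

```golang
// Register an informational metric carrying textual fields, e.g. the version.
info := metrics.NewRegisteredGaugeInfo("geth/info", metrics.DefaultRegistry)
info.Update(metrics.GaugeInfoValue{
	"version":          "1.13.0", // illustrative values
	"protocol_version": "68",
})
```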
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change implements faster post-selfdestruct iteration of storage slots for deletion, by using snapshot-storage+stacktrie to recover the trienodes to be deleted. This mechanism is only implemented for path-based schema.
For hash-based schema, the entire post-selfdestruct storage iteration is skipped, with this change, since hash-based does not actually perform deletion anyway.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR makes EIP-4788 work in the engine API and miner. It also fixes some bugs related to
EIP-4844 block processing and mining. Changes in detail:
- Header.BeaconRoot has been renamed to ParentBeaconRoot.
- The engine API now implements forkchoiceUpdatedV3
- newPayloadV3 method has been updated with the parentBeaconBlockRoot parameter
- beacon root is now applied to new blocks in miner
- For EIP-4844, block creation now updates the blobGasUsed field of the header
Just some minor optimizations I figured out a while ago. By using ReadBytes instead of
Bytes on the rlp stream, we can save the allocation of a temporary buffer for the typed tx
payload.
If kind == rlp.Byte, the size reported by Stream.Kind will be zero, but we need a buffer
of size 1 for ReadBytes. Since typed txs always have to be longer than 1 byte, we can just
return an error for kind == rlp.Byte.
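A hedged sketch of the decoding pattern described above; `s` is the `*rlp.Stream`, and the error name is illustrative:

```golang
kind, size, err := s.Kind()
if err != nil {
	return err
}
if kind == rlp.Byte {
	// a typed tx payload is always longer than one byte, so a single-byte
	// item can be rejected outright instead of special-casing size == 0
	return errShortTypedTx // illustrative error
}
buf := make([]byte, size)
if err := s.ReadBytes(buf); err != nil { // read directly into the caller's buffer
	return err
}
```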
There is also a small change for Log: since the first three fields of Log are the ones that
should appear in the canon encoding, we can simply ignore the remaining fields via
struct tag. Doing this removes an indirection through the rlpLog type.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change implements "EIP 4788 : Beacon block root in the EVM". It implements version-2 of EPI-4788, main difference being that the contract is an actual contract rather than a precompile, as in #27289.
Currently, we trigger the logic to (un)index transactions when the node receives a new
block. However, in some cases the node may not receive new blocks (eg, when the Geth node
is configured without peer discovery, or when it acts as an RPC node for historical-only
data).
In these situations, the Geth node user may not have previously configured txlookuplimit
(i.e. the default of around one year), but later realizes they need to index all
historical blocks. However, adding txlookuplimit=0 and restarting geth has no effect. This
change makes it check for required indexing work once, on startup, to fix the issue.
Co-authored-by: Martin Holst Swende <martin@swende.se>
FastFinality puts more info into the header.extra field to keep vote information.
For mainnet, on epoch height, it could be 1526 bytes, which was 517 bytes before.
So the hardcoded 700 bytes for the header may no longer be enough; increasing it to
2 times that should be enough.
This bug could cause P2P sync failures for nodes that are lagging behind, since they
would request access to the ancient db, and GetBlockHeaders could fail.
This changes the forkID calculation to ignore time-based forks that occurred before the
genesis block. It's supposed to be done this way because the spec says:
> If a chain is configured to start with a non-Frontier ruleset already in its genesis, that is NOT considered a fork.
This PR removes the newly added txpool.Transaction wrapper type, and instead adds a way
of keeping the blob sidecar within types.Transaction. It's better this way because most
code in go-ethereum does not care about blob transactions, and probably never will. This
will start mattering especially on the client side of RPC, where all APIs are based on
types.Transaction. Users need to be able to use the same signing flows they already
have.
However, since blobs are only allowed in some places but not others, we will now need to
add checks to avoid creating invalid blocks. I'm still trying to figure out the best place
to do some of these. The way I have it currently is as follows:
- In block validation (import), txs are verified not to have a blob sidecar.
- In miner, we strip off the sidecar when committing the transaction into the block.
- In TxPool validation, txs must have a sidecar to be added into the blobpool.
- Note there is a special case here: when transactions are re-added because of a chain
reorg, we cannot use the transactions gathered from the old chain blocks as-is,
because they will be missing their blobs. This was previously handled by storing the
blobs into the 'blobpool limbo'. The code has now changed to store the full
transaction in the limbo instead, but it might be confusing for code readers why we're
not simply adding the types.Transaction we already have.
Code changes summary:
- txpool.Transaction removed and all uses replaced by types.Transaction again
- blobpool now stores types.Transaction instead of defining its own blobTx format for storage
- the blobpool limbo now stores types.Transaction instead of storing only the blobs
- checks to validate the presence/absence of the blob sidecar added in certain critical places
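A hedged sketch of the resulting API surface; the method names follow the description above, while exact signatures and the placeholder variables (`blobTx`, `blobs`, `commitments`, `proofs`) are assumptions:

```golang
// Attach blobs/commitments/proofs to a blob transaction on the pool side...
withSidecar := blobTx.WithBlobTxSidecar(&types.BlobTxSidecar{
	Blobs:       blobs,
	Commitments: commitments,
	Proofs:      proofs,
})

// ...and strip them again when the tx is committed into a block.
inBlock := withSidecar.WithoutBlobTxSidecar()
_ = inBlock
```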
The Go authors updated golang/x/exp to change the function signature of the slices sort method.
It's an entire shitshow now because x/exp is not tagged, so everyone's codebase just
picked a new version that some other dep depends on, causing our code to fail building.
This PR updates the dep on our code too and does all the refactorings to follow upstream...
This change removes a chainconfig parameter passed into rawdb.ReadLogs, which is not used nor needed.
It also modifies the filter loop slightly, avoiding a labeled break and instead using a method.
This change does not modify any behaviour.
Context: The UpdateContractCode method was introduced for the state storage commitment
schemes that include the whole code for their commitment computation. It must therefore be called
before the root hash is computed at the end of IntermediateRoot.
This should have no impact on the MPT since, in this context, the method is a no-op.
This adds support for the "yParity" field in transaction objects returned by RPC
APIs. We somehow forgot to add this field even though it has been in the spec for
a long time.
This change rearranges the accessor methods in block.go and fixes some minor issues with
the copy-on-write logic of block data. Fixed issues:
- Block.WithWithdrawals did not create a shallow copy of the block.
- Block.WithBody copied the header unnecessarily, and did not preserve withdrawals.
However, the bugs did not affect any code in go-ethereum because blocks are *always*
created using NewBlockWithHeader().WithBody().WithWithdrawals()
* all: implement path-based state scheme
* all: edits from review
* core/rawdb, trie/triedb/pathdb: review changes
* core, light, trie, eth, tests: reimplement pbss history
* core, trie/triedb/pathdb: track block number in state history
* trie/triedb/pathdb: add history documentation
* core, trie/triedb/pathdb: address comments from Peter's review
Important changes to list:
- Cache trie nodes by path in clean cache
- Remove root->id mappings when history is truncated
* trie/triedb/pathdb: fallback to disk if unexpected node in clean cache
* core/rawdb: fix tests
* trie/triedb/pathdb: rename metrics, change clean cache key
* trie/triedb: manage the clean cache inside of disk layer
* trie/triedb/pathdb: move journal function
* trie/triedb/path: fix tests
* trie/triedb/pathdb: fix journal
* trie/triedb/pathdb: fix history
* trie/triedb/pathdb: try to fix tests on windows
* core, trie: address comments
* trie/triedb/pathdb: fix test issues
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>