Currently, we only trigger the logic to (un)index transactions when the node receives a new
block. However, in some cases the node may never receive new blocks (e.g. when the Geth node
is configured without peer discovery, or when it acts as an RPC node serving historical
data only).
In these situations, a user who initially ran with the default txlookuplimit (roughly one
year of blocks) may later realize they need all historical blocks indexed. However, setting
txlookuplimit=0 and restarting geth has no effect. This change fixes the issue by also
checking for required indexing work once, on startup; a minimal sketch of the idea follows.
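A minimal sketch of the startup check, with hypothetical names (the real hook lives in the blockchain's tx indexer):

```go
package core

// checkTxIndexOnStartup compares the configured txlookuplimit with the range
// that is currently indexed and schedules any missing (un)indexing work right
// away, instead of waiting for the next block to trigger it.
func checkTxIndexOnStartup(head, indexedTail, limit uint64, runIndexer func(tail uint64)) {
	// limit == 0 means "index everything", so the desired tail is block 0.
	var wantTail uint64
	if limit != 0 && head+1 > limit {
		wantTail = head + 1 - limit // keep only the most recent `limit` blocks
	}
	if indexedTail != wantTail {
		go runIndexer(wantTail) // the same routine normally run on new blocks
	}
}
```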
Co-authored-by: Martin Holst Swende <martin@swende.se>
FastFinality puts more info into the header.extra field to keep vote information.
For mainnet, at epoch heights, it can be 1526 bytes, where it was 517 bytes before.
So the hardcoded 700-byte estimate per header is no longer enough; doubling it is
sufficient (see the sketch below).
This bug could cause P2P sync failures for nodes that are lagging behind, since they
would request access to the ancient db, and GetBlockHeaders could fail.
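Illustratively (the constant name here is hypothetical), the fix amounts to doubling the per-header size estimate used when assembling header responses:

```go
// estHeaderSize is a rough upper bound on one RLP-encoded header, used to
// size GetBlockHeaders responses. With FastFinality vote data in
// header.extra, epoch headers can reach ~1526 bytes, so the old 700-byte
// estimate is doubled.
const estHeaderSize = 700 * 2
```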
Optimizations:
- Previously, if a transaction reverted, EstimateGas would exhibit worst-case behavior and binary-search all the way up to the maximum gas limit (~40 state-clone + tx executions). This change allows EstimateGas to return after only a single unconstrained execution in this scenario.
- Uses the gas consumed by the unconstrained execution to bias the remaining binary search towards the likely solution, in a simple way that doesn't affect the worst case. For a typical contract-invoking transaction, this reduces the median number of state-clone + executions from 25 to 18 (a 28% reduction). A sketch of the biasing step follows this list.
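A minimal sketch of the biasing idea, with hypothetical names (the real logic lives in the gas estimator):

```go
// estimateGas narrows [lo, hi] by binary search, where lo is the highest
// known-failing gas and hi the lowest known-passing gas. The gas consumed by
// the unconstrained run biases early probes, since a tx rarely needs much
// more gas than it actually used; if the biased probe misses, the search
// degrades gracefully into a plain binary search.
func estimateGas(lo, hi, gasUsed uint64, execute func(gas uint64) bool) uint64 {
	for lo+1 < hi {
		mid := (lo + hi) / 2
		if biased := gasUsed * 64 / 63; biased > lo && biased < mid {
			mid = biased // probe near the observed consumption first
		}
		if execute(mid) {
			hi = mid // success: the true limit is <= mid
		} else {
			lo = mid // failure: the true limit is > mid
		}
	}
	return hi
}
```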
Cleanup:
- added & improved function and code comments
- corrected the EstimateGas documentation to clarify that, when blockNr is unspecified, the gas limit is determined at the latest block, not the pending one.
ReadSkeletonHeader can return nil if the header is missing, so we should
not access fields on it. Note that calling .Hash() on a nil header is fine, so there
is no need for an explicit nil check.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This changes the forkID calculation to ignore time-based forks that occurred before the
genesis block. It's supposed to be done this way because the spec says:
> If a chain is configured to start with a non-Frontier ruleset already in its genesis, that is NOT considered a fork.
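A minimal sketch of the filtering step, with hypothetical names (the real logic lives in the forkid package):

```go
// gatherTimeForks collects the time-based fork timestamps that count towards
// the forkID, skipping forks already active in the genesis block itself.
func gatherTimeForks(forkTimes []uint64, genesisTime uint64) []uint64 {
	var active []uint64
	for _, t := range forkTimes {
		if t <= genesisTime {
			continue // configured in genesis: NOT considered a fork per the spec
		}
		active = append(active, t)
	}
	return active
}
```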
This raises the JSON-RPC batch request limits significantly for the engine API endpoint.
The limits are now also hard-coded, so users can't get them wrong. I have chosen these limits:
- maximum batch items: 2000
- maximum batch response size: 250 MB
While it would also be possible to disable batch limits completely for the engine API,
I think having some limits is a good safety net against misbehaving CLs. Since these
limits aren't configurable, we really want to ensure they never become an issue in
CL/EL communication, so I set them quite high.
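In Go terms, the change boils down to a pair of constants applied only to the engine API server (the names here are illustrative):

```go
// Engine API batch limits. They are deliberately generous so they never
// interfere with normal CL/EL communication, while still bounding the damage
// a misbehaving consensus client can do.
const (
	engineAPIBatchItemLimit         = 2000
	engineAPIBatchResponseSizeLimit = 250 * 1000 * 1000 // 250 MB
)
```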
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR removes the newly added txpool.Transaction wrapper type, and instead adds a way
of keeping the blob sidecar within types.Transaction. It's better this way because most
code in go-ethereum does not care about blob transactions, and probably never will. This
will start mattering especially on the client side of RPC, where all APIs are based on
types.Transaction. Users need to be able to use the same signing flows they already
have.
However, since blobs are only allowed in some places but not others, we will now need to
add checks to avoid creating invalid blocks. I'm still trying to figure out the best place
to do some of these. The way I have it currently is as follows:
- In block validation (import), txs are verified not to have a blob sidecar.
- In miner, we strip off the sidecar when committing the transaction into the block.
- In TxPool validation, txs must have a sidecar to be added into the blobpool.
- Note there is a special case here: when transactions are re-added because of a chain
reorg, we cannot use the transactions gathered from the old chain blocks as-is,
because they will be missing their blobs. This was previously handled by storing the
blobs into the 'blobpool limbo'. The code has now changed to store the full
transaction in the limbo instead, but it might be confusing for code readers why we're
not simply adding the types.Transaction we already have.
Code changes summary:
- txpool.Transaction removed and all uses replaced by types.Transaction again
- blobpool now stores types.Transaction instead of defining its own blobTx format for storage
- the blobpool limbo now stores types.Transaction instead of storing only the blobs
- checks to validate the presence/absence of the blob sidecar added in certain critical places
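Roughly, the sidecar now travels inside types.Transaction and is stripped wherever blobs are not allowed. A sketch of the miner-side step (method names follow the PR; treat the exact API as illustrative):

```go
package example

import "github.com/ethereum/go-ethereum/core/types"

// stripSidecar illustrates the miner-side check: blocks must not carry blob
// sidecars, so the sidecar is dropped when a tx is committed into a block,
// leaving only the versioned blob hashes in the tx payload.
func stripSidecar(tx *types.Transaction) *types.Transaction {
	if tx.BlobTxSidecar() == nil {
		return tx // not a blob tx, or the sidecar is already stripped
	}
	return tx.WithoutBlobTxSidecar()
}
```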
The Go authors updated golang.org/x/exp to change the function signature of the slices sort method.
It's an entire shitshow now because x/exp is not tagged, so everyone's codebase just
picked a new version that some other dep depends on, causing our code to fail building.
This PR updates the dep in our code too and does all the refactorings to follow upstream...
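The signature change in question: slices.SortFunc's comparator went from a less-style bool to a three-way, cmp-style int, so every call site needs a mechanical rewrite along these lines:

```go
package example

import "golang.org/x/exp/slices"

type tx struct{ nonce uint64 }

func sortTxs(txs []*tx) {
	// Before: the comparator returned a bool ("less"):
	//   slices.SortFunc(txs, func(a, b *tx) bool { return a.nonce < b.nonce })
	// After: it returns an int (negative, zero, or positive):
	slices.SortFunc(txs, func(a, b *tx) int {
		switch {
		case a.nonce < b.nonce:
			return -1
		case a.nonce > b.nonce:
			return 1
		default:
			return 0
		}
	})
}
```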
This should fix #27726. With enough load, it might happen that the SetPongHandler
callback gets invoked before the call to SetReadDeadline is made in pingLoop. When
this occurs, the socket ends up with a 30s read deadline even though the pong arrived,
which leads to a timeout.
The fix is to process the pong in pingLoop, synchronizing it with the code that
sends the ping. A minimal sketch of the resulting structure is shown below.
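A minimal sketch, assuming a gorilla/websocket connection (constants and structure are illustrative): the pong handler only signals pingLoop, which alone owns the read deadline, so arming the deadline and processing the pong can no longer race.

```go
package example

import (
	"time"

	"github.com/gorilla/websocket"
)

const (
	wsPingInterval  = 30 * time.Second
	wsPongTimeout   = 30 * time.Second
	wsWriteDeadline = 15 * time.Second
)

// pingLoop serializes ping sending, deadline arming, and pong processing in
// one goroutine; the pong handler just forwards a signal.
func pingLoop(conn *websocket.Conn, quit <-chan struct{}) {
	pong := make(chan struct{}, 1)
	conn.SetPongHandler(func(string) error {
		select {
		case pong <- struct{}{}:
		default:
		}
		return nil
	})
	ticker := time.NewTicker(wsPingInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(wsWriteDeadline))
			conn.SetReadDeadline(time.Now().Add(wsPongTimeout)) // arm the timeout
		case <-pong:
			conn.SetReadDeadline(time.Time{}) // pong received: clear the timeout
		case <-quit:
			return
		}
	}
}
```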
Block takes a number and a hash. The spec is unclear on what should happen in this case, leaving it an implementation detail. With this change, we return an error if both a number and a hash are passed in (sketched below).
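A sketch of the new check, with hypothetical names:

```go
package example

import "errors"

// resolveBlockArgs rejects queries that pin a block by both number and hash,
// instead of silently preferring one of them.
func resolveBlockArgs(number *uint64, hash *[32]byte) error {
	if number != nil && hash != nil {
		return errors.New("only one of number or hash may be specified")
	}
	return nil
}
```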
This change removes the chainconfig parameter from rawdb.ReadLogs, which is neither used nor needed.
It also modifies the filter loop slightly, avoiding a labeled break by using a method instead (illustrated below).
This change does not modify any behaviour.
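The refactoring pattern, on hypothetical types: hoisting the nested loop into its own function replaces the labeled break with a plain return.

```go
package example

type (
	logEntry struct{}
	block    struct{ logs []logEntry }
)

// Before: exit a nested loop with a labeled break.
//
//	outer:
//	    for _, b := range blocks {
//	        for _, l := range b.logs {
//	            if match(l) {
//	                break outer
//	            }
//	        }
//	    }
//
// After: hoist the loop into a function and return directly.
func firstMatch(blocks []block, match func(logEntry) bool) (logEntry, bool) {
	for _, b := range blocks {
		for _, l := range b.logs {
			if match(l) {
				return l, true
			}
		}
	}
	return logEntry{}, false
}
```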
Context: the UpdateContractCode method was introduced for the state storage commitment
schemes that include the whole contract code in their commitment computation. It must therefore be called
before the root hash is computed at the end of IntermediateRoot (see the sketch below).
This should have no impact on the MPT since, in that context, the method is a no-op.
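A sketch of the required ordering, against a hypothetical trie interface:

```go
package example

// codeCommitter is a hypothetical stand-in for a trie backend. Commitment
// schemes that commit to contract code (e.g. verkle) must see the code before
// the root is derived; for the MPT, UpdateContractCode is a no-op.
type codeCommitter interface {
	UpdateContractCode(addr [20]byte, codeHash [32]byte, code []byte) error
	Hash() [32]byte
}

// finalize shows the call order enforced at the end of IntermediateRoot.
func finalize(tr codeCommitter, addr [20]byte, codeHash [32]byte, code []byte) [32]byte {
	tr.UpdateContractCode(addr, codeHash, code) // must happen first
	return tr.Hash()                            // only then derive the root
}
```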