* p2p/discover: add more packet information in logs
This adds more fields to discv5 packet logs. These can be useful when
debugging multi-packet interactions.
The FINDNODE message also gets an additional field, OpID, for debugging
purposes. This field is not encoded onto the wire.
I'm also removing the topic-system-related message types in this change.
These will come back in the future, at which point support for them will be
guarded by a config flag.
* p2p/discover/v5wire: rename 'Total' to 'RespCount'
The new name captures the meaning of this field better.
Alarm is a timer utility that simplifies code where a timer needs to be rescheduled over
and over. Doing this can be tricky with time.Timer or time.AfterFunc because the channel
requires draining in some cases.
Alarm is optimized for use cases where items are tracked in a heap according to their expiry
time, and a goroutine with a for/select loop wants to be woken up whenever the next item expires.
In this application, the timer needs to be rescheduled whenever an item is added to or removed
from the heap. Using a timer naively, every such update requires synchronization
with the global runtime timer data structure to update the timer via Reset. Alarm avoids
this by tracking the next expiry time and modifying the timer only if it would need to fire earlier
than already scheduled.
As an example use, I have converted p2p.dialScheduler to use Alarm instead of AfterFunc.
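For illustration, here is a minimal sketch (not the dialScheduler change itself) of the pattern Alarm is meant for, assuming the API is along the lines of NewAlarm/Schedule/C/Stop; a sorted slice stands in for the heap:

```go
package example

import (
	"fmt"
	"sort"

	"github.com/ethereum/go-ethereum/common/mclock"
)

// trackExpiries wakes up whenever the earliest tracked item expires.
func trackExpiries(clock mclock.Clock, added <-chan mclock.AbsTime, quit <-chan struct{}) {
	alarm := mclock.NewAlarm(clock)
	var expiries []mclock.AbsTime // stand-in for a heap, kept sorted

	for {
		if len(expiries) > 0 {
			// Cheap to call on every update: the underlying runtime timer
			// is only touched when the deadline moves earlier.
			alarm.Schedule(expiries[0])
		}
		select {
		case exp := <-added:
			// Insert the new expiry, keeping the slice sorted.
			i := sort.Search(len(expiries), func(i int) bool { return expiries[i] >= exp })
			expiries = append(expiries, 0)
			copy(expiries[i+1:], expiries[i:])
			expiries[i] = exp
		case <-alarm.C():
			// Pop all items that have expired by now.
			now := clock.Now()
			for len(expiries) > 0 && expiries[0] <= now {
				fmt.Println("expired:", expiries[0])
				expiries = expiries[1:]
			}
		case <-quit:
			alarm.Stop()
			return
		}
	}
}
```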
This change moves the tracking of "deleted in this block" out of the snap-only domain, so that it happens regardless of whether the execution is snapshot-backed or trie-backed.
This changes the StorageTrie method to return an error when the trie
is not available. It used to return an 'empty trie' in this case, but that's
not possible anymore under PBSS.
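A sketch of the caller-side effect, assuming the new signature returns `(Trie, error)`; the helper and its nil-trie handling here are illustrative, not the exact geth code:

```go
package example

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
)

// storageRoot is a hypothetical caller: the error must now be handled
// instead of silently receiving an empty trie.
func storageRoot(db *state.StateDB, addr common.Address) (common.Hash, error) {
	tr, err := db.StorageTrie(addr)
	if err != nil {
		// Under PBSS the trie may be genuinely unavailable; previously
		// this case was masked by the empty-trie fallback.
		return common.Hash{}, err
	}
	if tr == nil {
		return types.EmptyRootHash, nil // account has no storage trie
	}
	return tr.Hash(), nil
}
```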
This PR builds on #26299, but also updates the tests to the most recent version, which includes tests regarding TheMerge.
This change adds checks to the beacon consensus engine, making it more strict in validating the pre- and post-headers, and not relying on the caller to have already correctly sanitized the headers/blocks.
This PR implements a resettable freezer by adding a ResettableFreezer wrapper.
The resettable freezer wraps the original freezer in a way that makes atomic resets possible. Implementation-wise, it relies on os.Rename and os.RemoveAll to atomically delete the original freezer data and re-create a new one from scratch.
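A minimal sketch of the rename-then-delete trick (not the actual ResettableFreezer code; the ".deleted" suffix is illustrative):

```go
package example

import (
	"os"
	"path/filepath"
)

// reset atomically clears datadir: os.Rename is the atomic step, after which
// the slow os.RemoveAll can proceed (or be redone after a crash).
func reset(datadir string) error {
	tmp := tmpName(datadir)
	if err := os.Rename(datadir, tmp); err != nil && !os.IsNotExist(err) {
		return err
	}
	if err := os.RemoveAll(tmp); err != nil {
		return err
	}
	return os.MkdirAll(datadir, 0700)
}

// cleanup handles a crash between the rename and the delete: a leftover
// ".deleted" directory found on startup is simply removed again.
func cleanup(datadir string) error {
	return os.RemoveAll(tmpName(datadir))
}

func tmpName(datadir string) string {
	return filepath.Join(filepath.Dir(datadir), filepath.Base(datadir)+".deleted")
}
```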
This PR fixes an error in trie commit. If trie.root is nil, there are two possible scenarios:
- The trie was empty, and no change happens
- The trie was non-empty and all nodes are dropped
For the latter, we should collect the deletions and apply them to the database (e.g. in PBSS).
This PR adds an additional API called `NewBatchWithSize` for the db
batcher. It turns out that leveldb batch memory allocation is
super inefficient: the leveldb Batch grows its allocation in steps
that are too small when the batch is large, so building a leveldb
batch of 100MB size can take a few seconds.
Luckily, leveldb also offers another API called MakeBatch which can
pre-allocate the memory area. So if the approximate size of the batch is
known in advance, this API can be used instead.
It's needed by the new state scheme PR, which commits a large set of
trie nodes in a single batch; the feature is implemented here in a separate PR.
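A sketch of the intended use, assuming an `ethdb.Batcher` carrying the new method; the helper and its key/value struct are illustrative:

```go
package example

import "github.com/ethereum/go-ethereum/ethdb"

type kv struct{ key, blob []byte }

// writeNodes commits a large set of trie nodes in one batch. Pre-sizing the
// batch avoids leveldb's small incremental growth steps, which is what made
// building a ~100MB batch take seconds.
func writeNodes(db ethdb.Batcher, nodes []kv, approxSize int) error {
	batch := db.NewBatchWithSize(approxSize)
	for _, n := range nodes {
		if err := batch.Put(n.key, n.blob); err != nil {
			return err
		}
	}
	return batch.Write()
}
```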
* metrics: add unlock address to metrics when miner module is enabled
* metrics: add miner config into metrics server
* metrics: add device-info into metrics server
* metrics: fix the format of device info
* metrics: remove device-info
This ensures that RPC method handlers will react to a timeout or
cancelled request soon after the event occurs.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR adds a check that the safetxhash we sign corresponds to the one expected by the input. If it differs, it tries again with the configured chainid.
* eth: fix a rare datarace on CHT challenge reply / shutdown
* trie: check childrens' existence concurrently for snap heal
* eth/protocols/snap: fix problems due to idle-but-busy peers
* eth/filters: change filter block to be by-ref (#26054)
This PR changes the block field in the filter to be a pointer, to disambiguate between empty hash and no hash
* rpc: handle wrong HTTP batch response length (#26064)
* eth/protocols/snap: throttle trie heal requests when peers DoS us (#25666)
* eth/protocols/snap: throttle trie heal requests when peers DoS us
* eth/protocols/snap: lower heal throttle log to debug
Co-authored-by: Martin Holst Swende <martin@swende.se>
* eth/protocols/snap: fix comment
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Jordan Krage <jmank88@gmail.com>
A comment suggests that contract creation happens if the recipient of a call is 0x00..00 ("zero address"), but in fact the recipient must be nil. The zero address is a regular valid address that is commonly used as a "burn" address.
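For reference, the distinction in code (the helper name is illustrative):

```go
package example

import "github.com/ethereum/go-ethereum/core/types"

// isCreation reflects the actual rule: a transaction creates a contract when
// its recipient is nil. A recipient of 0x00..00 is just an ordinary call.
func isCreation(tx *types.Transaction) bool {
	return tx.To() == nil
}
```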
Currently, calling `debug_traceTransaction` with a transaction hash that doesn't exist returns a confusing error: `genesis is not traceable`. This PR changes the behaviour to instead return an error message saying `transaction not found`.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR adds a new type event.FeedOf[T], which is like event.Feed but parameterized
over the channel element type. Performance is unchanged, and it still uses reflect. But
unlike Feed, the generic version doesn't need to type-check interface{} arguments.
All panic cases are gone from the API.
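A minimal usage sketch, assuming the API mirrors Feed with a typed channel parameter:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/event"
)

func main() {
	var feed event.FeedOf[string]

	ch := make(chan string, 1)
	sub := feed.Subscribe(ch) // channel element type is checked at compile time
	defer sub.Unsubscribe()

	// Send delivers to all subscribed channels; a mismatched channel type is
	// now a compile error rather than a runtime panic.
	nsent := feed.Send("hello")
	fmt.Println(nsent, <-ch)
}
```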
While investigating another issue, I found that all callers of collectLogs have the
complete block available. rawdb.ReadReceipts loads the block from the database,
so it is better to use ReadRawReceipts here and derive the receipt information from
the block that is already in memory.
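A hypothetical helper showing the shape of the change: read only the raw receipts, and use the in-memory block for everything `rawdb.ReadReceipts` would otherwise reload from the database.

```go
package example

import (
	"errors"

	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethdb"
)

func collectReceipts(db ethdb.Reader, block *types.Block) (types.Receipts, error) {
	receipts := rawdb.ReadRawReceipts(db, block.Hash(), block.NumberU64())
	if receipts == nil {
		return nil, errors.New("receipts not found")
	}
	// Derive the remaining fields (tx hashes, log indices, ...) from
	// block.Transactions() via types.Receipts.DeriveFields; its exact
	// signature varies between geth versions, so the call is omitted here.
	return receipts, nil
}
```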
This PR makes it possible to modify the flush interval time via RPC. At one extreme, `0s`, the node acts as an archive node. If set to `1h`, the trie is flushed after one hour of effective block processing time. If one block takes 200ms, this means a flush would occur every `5*3600=18000` blocks -- however, if the memory size of the cached states grows too large, it will flush sooner.
Essentially, this makes it possible to configure the node to be more or less "archive-ish", and to reconfigure it without restarting the node.
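A hedged usage sketch, assuming the setter is exposed as `debug_setTrieFlushInterval` and the node listens on the default HTTP endpoint:

```go
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "0s" behaves like an archive node; "1h" flushes after one hour of
	// accumulated block processing time (~18000 blocks at 200ms/block).
	if err := client.Call(nil, "debug_setTrieFlushInterval", "1h"); err != nil {
		log.Fatal(err)
	}
}
```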