This changes TALKREQ message processing to run the handler on a separate goroutine,
instead of on the main discv5 dispatcher goroutine. It's better this way because
it allows the handler to perform blocking actions.
I'm also adding a new method TalkRequestToID here. The method allows implementing
a request flow where one node A sends TALKREQ to another node B, and node B later
sends a TALKREQ back. With TalkRequestToID, node B does not need the ENR of A to
send its request.
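As a usage sketch (not taken from this PR's diff; the TalkRequestToID signature, the "proto" name and the process function are assumptions), the flow looks roughly like this:

```go
package sketch

import (
	"net"

	"github.com/ethereum/go-ethereum/p2p/discover"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

// registerEcho is a usage sketch. The handler shape follows the existing
// RegisterTalkHandler API; the TalkRequestToID signature shown here is an
// assumption, and "proto"/process are placeholders.
func registerEcho(disc *discover.UDPv5, process func([]byte) []byte) {
	disc.RegisterTalkHandler("proto", func(id enode.ID, addr *net.UDPAddr, msg []byte) []byte {
		// The handler now runs on its own goroutine, so blocking here is allowed.
		go func() {
			resp := process(msg)
			// Node B replies later with its own TALKREQ, using only A's id and
			// endpoint -- no ENR of A is needed.
			disc.TalkRequestToID(id, addr, "proto", resp)
		}()
		return []byte("ack") // immediate TALKRESP
	})
}
```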
This PR is a (superior) alternative to https://github.com/ethereum/go-ethereum/pull/26708. It handles the math/rand deprecations, which primarily show up in two specific cases.
`rand.Seed` is typically used in two ways:
- `rand.Seed(time.Now().UnixNano())` -- seeding just to get different random values on every run rather than the same sequence every time. The global generator is now seeded automatically, so these calls are simply removed.
- `rand.Seed(1)` -- typically done to make a test deterministic. Tests that rely on this need a deterministic PRNG source instead; a few occurrences like this have been replaced with a proper custom source.
`rand.Read` has been replaced by `crypto/rand.Read` in this PR.
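For illustration, the two replacement patterns look roughly like this (a sketch; the function names are made up):

```go
package sketch

import (
	crand "crypto/rand"
	"math/rand"
)

// randomBytes uses crypto/rand instead of the deprecated math/rand Read.
func randomBytes(n int) ([]byte, error) {
	b := make([]byte, n)
	_, err := crand.Read(b)
	return b, err
}

// deterministicRand replaces rand.Seed(1) in tests: a private source keeps the
// test stable without touching the global generator.
func deterministicRand() *rand.Rand {
	return rand.New(rand.NewSource(1))
}
```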
* dep: upgrade secp256k1 to use btcec/v2 v2.3.2 and update insecurity pkg
* build ci: upgrade go to 1.19 and golangci-lint to 1.50.1
* docs: fix format that does not follow the goimports
* dep: redirect github.com/bnb-chain/tendermint to v0.31.13
* ci: disable GOPROXY
* p2p/discover: add more packet information in logs
This adds more fields to discv5 packet logs. These can be useful when
debugging multi-packet interactions.
The FINDNODE message also gets an additional field, OpID for debugging
purposes. This field is not encoded onto the wire.
I'm also removing the topic-system-related message types in this change.
These will come back in the future, when support for them will be
guarded by a config flag.
* p2p/discover/v5wire: rename 'Total' to 'RespCount'
The new name captures the meaning of this field better.
Alarm is a timer utility that simplifies code where a timer needs to be rescheduled over
and over. Doing this can be tricky with time.Timer or time.AfterFunc because the channel
requires draining in some cases.
Alarm is optimized for use cases where items are tracked in a heap according to their expiry
time, and a goroutine with a for/select loop wants to be woken up whenever the next item expires.
In this application, the timer needs to be rescheduled when an item is added or removed
from the heap. With a naive timer, these updates always require synchronizing
with the global runtime timer data structure via Reset. Alarm avoids this by
tracking the next expiry time and modifying the timer only when it needs to fire
earlier than already scheduled.
As an example use, I have converted p2p.dialScheduler to use Alarm instead of AfterFunc.
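A minimal sketch of the idea (not the actual implementation; it assumes a single scheduling goroutine and uses illustrative names):

```go
package sketch

import "time"

// alarm remembers the scheduled deadline and only touches the runtime timer
// when the new deadline is earlier than what is already scheduled.
type alarm struct {
	timer    *time.Timer
	deadline time.Time     // zero when no wake-up is scheduled
	C        chan struct{} // receives a value when the alarm fires
}

func newAlarm() *alarm {
	return &alarm{C: make(chan struct{}, 1)}
}

// Schedule requests a wake-up at time 'at'.
func (a *alarm) Schedule(at time.Time) {
	if !a.deadline.IsZero() && !at.Before(a.deadline) {
		return // an earlier (or equal) wake-up is already pending; nothing to do
	}
	a.deadline = at
	d := time.Until(at)
	if a.timer == nil {
		a.timer = time.AfterFunc(d, func() {
			select {
			case a.C <- struct{}{}:
			default:
			}
		})
	} else {
		a.timer.Reset(d)
	}
}

// clear is called by the consumer after reading from C, so the next Schedule
// call knows that no wake-up is pending.
func (a *alarm) clear() { a.deadline = time.Time{} }
```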
This improves readability of function 'push'.
sort.Search(N, ...) returns at most N when there is no match, so ix should be compared
with N. The previous version compared ix with N+1, to account for an additional item
being appended first. No bug resulted from this comparison, but it was not easy to
see why.
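For illustration, the boundary case looks like this in a generic sorted insert (a sketch, not the actual 'push' function):

```go
package sketch

import "sort"

// insertSorted illustrates the boundary case: sort.Search(len(list), f)
// returns len(list) when f is false for every index, so that is the value to
// compare against for the "append at the end" case.
func insertSorted(list []uint64, x uint64) []uint64 {
	ix := sort.Search(len(list), func(i int) bool { return list[i] >= x })
	if ix == len(list) {
		return append(list, x) // no element >= x: append at the end
	}
	list = append(list, 0)
	copy(list[ix+1:], list[ix:])
	list[ix] = x
	return list
}
```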
Co-authored-by: Felix Lange <fjl@twurst.com>
Here we add special handling for sending an error response when the write timeout of the
HTTP server is just about to expire. This is surprisingly difficult to get right, since it
must be ensured that all output is fully flushed in time, which needs support from
multiple levels of the RPC handler stack:
The timeout response can't use chunked transfer-encoding because there is no way to write
the final terminating chunk. net/http writes it when the topmost handler returns, but the
timeout will already be over by the time that happens. We decided to disable chunked
encoding by setting content-length explicitly.
Gzip compression must also be disabled for timeout responses because we don't know the
true content-length before compressing all output, i.e. compression would reintroduce
chunked transfer-encoding.
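The shape of the timeout response then looks roughly like this (a sketch; the function name and status code are illustrative):

```go
package sketch

import (
	"net/http"
	"strconv"
)

// writeTimeoutError sends the error body with an explicit Content-Length so
// net/http does not switch to chunked transfer-encoding, and without going
// through a gzip writer, so the stated length is the true length.
func writeTimeoutError(w http.ResponseWriter, body []byte) {
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Content-Length", strconv.Itoa(len(body)))
	w.WriteHeader(http.StatusServiceUnavailable)
	w.Write(body)
}
```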
This changes the Pop method to assign the zero value before
reducing slice size. Doing so ensures the backing array does not
reference removed item values.
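The pattern, sketched on a placeholder heap type:

```go
package sketch

// item is a placeholder element type.
type item struct{ expiry int64 }

type itemHeap []*item

// Pop zeroes the vacated slot before shrinking the slice, so the backing
// array no longer keeps the removed element reachable.
func (h *itemHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	old[n-1] = nil // drop the reference held by the backing array
	*h = old[:n-1]
	return x
}
```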
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interfaces.
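Usage looks roughly like this (a sketch; the constructor name and package path are assumptions, while the method names follow hashicorp/golang-lru as noted above):

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/lru" // assumed package path
	"github.com/ethereum/go-ethereum/core/types"
)

func cacheExample(hash common.Hash, block *types.Block) {
	// hypothetical instantiation of the typed cache
	cache := lru.NewCache[common.Hash, *types.Block](128)
	cache.Add(hash, block)
	if b, ok := cache.Get(hash); ok {
		_ = b // b is already *types.Block -- no type assertion needed
	}
}
```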
Instead of using a limit of three nodes per message, we can pack more nodes
into each message based on ENR size. In my testing, this halves the number
of sent NODES messages, because ENR size is usually < 300 bytes.
This also adds RLP helper functions that compute the encoded size of
[]byte and string.
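The packing idea, sketched with a made-up payload budget and pre-encoded records:

```go
package sketch

// payloadBudget is an illustrative byte limit per NODES message, not the real
// packet size limit.
const payloadBudget = 1000

// packRecords groups encoded ENRs so that each NODES message stays under the
// budget, instead of cutting off at a fixed count of three records.
func packRecords(encodedENRs [][]byte) [][][]byte {
	var (
		msgs [][][]byte
		cur  [][]byte
		size int
	)
	for _, enc := range encodedENRs {
		if len(cur) > 0 && size+len(enc) > payloadBudget {
			msgs, cur, size = append(msgs, cur), nil, 0
		}
		cur = append(cur, enc)
		size += len(enc)
	}
	if len(cur) > 0 {
		msgs = append(msgs, cur)
	}
	return msgs
}
```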
Co-authored-by: Martin Holst Swende <martin@swende.se>
Noticed that lookupDistances for FINDNODE requests didn't consider 256 a valid
distance. This is actually part of the example in the comment above the
function, surprised that wasn't tested before.
This changes the CI / release builds to use the latest Go version. It also
upgrades golangci-lint to a newer version compatible with Go 1.19.
In Go 1.19, godoc has gained official support for links and lists. The
syntax for code blocks in doc comments has changed and now requires a
leading tab character. gofmt adapts comments to the new syntax
automatically, so there are a lot of comment re-formatting changes in this
PR. We need to apply the new format in order to pass the CI lint stage with
Go 1.19.
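For illustration, a doc comment in the new format looks roughly like this (names made up):

```go
package sketch

// Resolve looks up the node with the given ID.
//
// A code block in a doc comment now needs a leading tab:
//
//	n, err := Resolve(id)
//
// Links use bracket syntax and are rendered by godoc, for example [net.Dial].
func Resolve(id string) (string, error) { return id, nil }
```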
With the linter upgrade, I have decided to disable 'gosec' - it produces
too many false-positive warnings. The 'deadcode' and 'varcheck' linters
have also been removed because golangci-lint warns about them being
unmaintained. 'unused' provides similar coverage and we already have it
enabled, so we don't lose much with this change.
The p2p msgrate tracker tries to estimate a mean round-trip time across peers. However, it did so in a very curious way: if a node had 200 peers, it would sort their 200 respective RTT estimates and then pick item number 2 as the mean -- effectively taking the third-fastest value and calling it the mean. This works "ok" when the number of peers is low (there are other factors too, such as ttlScaling, which take some of the edge off), but when the number of peers is high it becomes very skewed.
This PR instead bases the 'mean' on the square root of the length of the list. Still pretty harsh, but a bit more lenient.
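The selection rule, as a sketch (function name made up):

```go
package sketch

import (
	"math"
	"sort"
)

// medianish picks the "mean" RTT as described above: sort the per-peer
// estimates and index by the square root of the list length, so with 200
// peers the 14th value is used instead of the 3rd.
func medianish(rtts []float64) float64 {
	if len(rtts) == 0 {
		return 0
	}
	sort.Float64s(rtts)
	idx := int(math.Sqrt(float64(len(rtts))))
	if idx >= len(rtts) {
		idx = len(rtts) - 1
	}
	return rtts[idx]
}
```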
This change makes use of the new code generator rlp/rlpgen to improve the
performance of RLP encoding for Header and StateAccount. It also speeds up
encoding of ReceiptForStorage using the new rlp.EncoderBuffer API.
The change is much less transparent than I wanted it to be, because Header and
StateAccount now have an EncodeRLP method defined with pointer receiver. It
used to be possible to encode non-pointer values of these types, but the new
method prevents that, and attempting to encode unaddressable values (even if
part of another value) will return an error. The error can be surprising and may
pop up in places that previously didn't expect any errors.
To make things work, I also needed to update all code paths (mostly in unit tests)
that lead to encoding of non-pointer values, and pass a pointer instead.
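The calling-convention change, as a sketch:

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// encodeHeader shows the new calling convention: the value must be
// addressable, so a pointer is passed to the encoder.
func encodeHeader(h *types.Header) ([]byte, error) {
	return rlp.EncodeToBytes(h) // encoding a plain Header value would now error
}
```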
Benchmark results:
name old time/op new time/op delta
EncodeRLP/legacy-header-8 328ns ± 0% 237ns ± 1% -27.63% (p=0.000 n=8+8)
EncodeRLP/london-header-8 353ns ± 0% 247ns ± 1% -30.06% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 237ns ± 0% 123ns ± 0% -47.86% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 297ns ± 0% 301ns ± 1% +1.39% (p=0.000 n=8+8)
name old speed new speed delta
EncodeRLP/legacy-header-8 1.66GB/s ± 0% 2.29GB/s ± 1% +38.19% (p=0.000 n=8+8)
EncodeRLP/london-header-8 1.55GB/s ± 0% 2.22GB/s ± 1% +42.99% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 38.0MB/s ± 0% 64.8MB/s ± 0% +70.48% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 910MB/s ± 0% 897MB/s ± 1% -1.37% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
EncodeRLP/legacy-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/london-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/receipt-for-storage-8 64.0B ± 0% 0.0B -100.00% (p=0.000 n=8+8)
EncodeRLP/receipt-full-8 320B ± 0% 320B ± 0% ~ (all equal)
This enables the following linters:
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a plain string as the key in context.WithValue is apparently wrong: one should use a custom key type to prevent collisions between different callers in the hierarchy (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings for using weak random generator are squashed, since we use a lot of random without need for cryptographic guarantees.
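The context.WithValue fix referenced above would look roughly like this (a sketch with made-up names):

```go
package sketch

import "context"

// callerKey is a private key type; values stored by other packages can no
// longer collide with ours, which is what the linter warning is about.
type callerKey struct{}

func withCaller(ctx context.Context, name string) context.Context {
	return context.WithValue(ctx, callerKey{}, name)
}

func callerFrom(ctx context.Context) (string, bool) {
	name, ok := ctx.Value(callerKey{}).(string)
	return name, ok
}
```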
This commit replaces ioutil.TempDir with t.TempDir in tests. The
directory created by t.TempDir is automatically removed when the test
and all its subtests complete.
Prior to this commit, temporary directory created using ioutil.TempDir
had to be removed manually by calling os.RemoveAll, which is omitted in
some tests. The error handling boilerplate e.g.
defer func() {
	if err := os.RemoveAll(dir); err != nil {
		t.Fatal(err)
	}
}()
is also tedious, but t.TempDir handles this for us nicely.
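A test using the new pattern looks roughly like this (a sketch with a made-up test body):

```go
package sketch

import (
	"os"
	"path/filepath"
	"testing"
)

// TestWriteFile shows the replacement pattern: no manual cleanup and no
// os.RemoveAll boilerplate.
func TestWriteFile(t *testing.T) {
	dir := t.TempDir() // removed automatically when the test and its subtests finish
	path := filepath.Join(dir, "data.txt")
	if err := os.WriteFile(path, []byte("hello"), 0644); err != nil {
		t.Fatal(err)
	}
}
```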
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Some benchmarks in eth/filters were flawed: they weren't reproducible, since they relied on geth chaindata being present.
Another one was rejected because the receipt lacked a backing transaction.
The p2p simulation benchmark had a lot of the warnings below, due to the framework calling both
Stop() and Close(). Apparently, the simulated adapter is the only implementation which has a Close(),
and there is no need to call both Stop and Close on it.
* core: fix warning flagging the use of DeepEqual on error
* apply the same change everywhere possible
* revert change that was committed by mistake
* fix build error
* Update config.go
* revert changes to ConfigCompatError
* review feedback
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR ensures that wiping all data associated with a node (apart from its nodekey)
will not result in already-used sequence numbers for its ENRs, since all remote nodes
would reject them until they out-number the previously published largest one.
The big complication with this scheme is that every local update to the ENR can
potentially bump the sequence number by one. In order to ensure that local updates
do not outrun the clock, the sequence number is a millisecond-precision timestamp,
and updates are throttled to occur at most once per millisecond.
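The scheme, as a sketch (type and method names made up):

```go
package sketch

import "time"

// localSeq sketches the approach described above: the ENR sequence number is
// a millisecond-precision timestamp, and updates within the same millisecond
// are throttled so the number never runs ahead of the clock.
type localSeq struct {
	seq uint64
}

func (s *localSeq) bump() uint64 {
	now := uint64(time.Now().UnixMilli())
	for now <= s.seq {
		time.Sleep(time.Millisecond) // at most one update per millisecond
		now = uint64(time.Now().UnixMilli())
	}
	s.seq = now
	return s.seq
}
```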
Co-authored-by: Felix Lange <fjl@twurst.com>
In p2p/dial.go, conn.flags was accessed without using sync/atomic.
This race is fixed by removing the access.
In p2p/enode/iter_test.go, a similar race is resolved by writing the field atomically.
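The general shape of the fix, sketched on a stand-in struct:

```go
package sketch

import "sync/atomic"

// connSketch stands in for the affected structs: the shared field is accessed
// only through sync/atomic, never with plain loads and stores.
type connSketch struct {
	flags int32
}

func (c *connSketch) setFlags(f int32) { atomic.StoreInt32(&c.flags, f) }
func (c *connSketch) getFlags() int32  { return atomic.LoadInt32(&c.flags) }
```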
Co-authored-by: Felix Lange <fjl@twurst.com>
This change significantly improves the performance of RLPx message reads
and writes. In the previous implementation, reading and writing of
message frames performed multiple reads and writes on the underlying
network connection, and allocated a new []byte buffer for every read.
In the new implementation, reads and writes re-use buffers, and perform
far fewer system calls on the underlying connection. This doubles the
theoretically achievable throughput on a single connection, as shown by
the benchmark result:
name old speed new speed delta
Throughput-8 70.3MB/s ± 0% 155.4MB/s ± 0% +121.11% (p=0.000 n=9+8)
The change also removes support for the legacy, pre-EIP-8 handshake encoding.
As of May 2021, no actively maintained client sends this format.
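The buffer-reuse idea, as a sketch (not the exact rlpx types):

```go
package sketch

import "io"

// readBuffer keeps one growable buffer per connection: each frame read
// appends into it, and reset() is called between messages, so once the buffer
// is warm no per-read allocation is needed.
type readBuffer struct {
	data []byte
}

// read appends n bytes from r to the buffer and returns the newly read slice.
func (b *readBuffer) read(r io.Reader, n int) ([]byte, error) {
	offset := len(b.data)
	if cap(b.data)-offset < n {
		grown := make([]byte, offset, offset+n)
		copy(grown, b.data)
		b.data = grown
	}
	b.data = b.data[:offset+n]
	if _, err := io.ReadFull(r, b.data[offset:]); err != nil {
		return nil, err
	}
	return b.data[offset:], nil
}

func (b *readBuffer) reset() { b.data = b.data[:0] }
```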
This removes the error log message that says
Ethereum peer removal failed ... err="peer not registered"
The error happened because removePeer was called multiple
times: once to disconnect the peer, and another time when the
handler exited. With this change, removePeer now has the sole
purpose of disconnecting the peer. Unregistering happens exactly
once, when the handler exits.
This change extracts the peer QoS tracking logic from eth/downloader, moving
it into the new package p2p/msgrate. The job of msgrate.Tracker is determining
suitable timeout values and request sizes per peer.
The snap sync scheduler now uses msgrate.Tracker instead of the hard-coded 15s
timeout. This should make the sync work better on network links with high latency.
This changes the definitions of Ping and Pong, adding an optional field
for the sequence number. This field was previously encoded/decoded using
the "tail" struct tag, but using "optional" is much nicer.
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree.
Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support
snap, we have decided to retire the dedicated index for snap and just use the eth
tree instead.
The dial iterators of eth and snap now use the same DNS tree in the default configuration,
so both iterators should use the same DNS discovery client instance. This ensures that
the record cache and rate limit are shared. Records will not be requested multiple times.
While testing the change, I noticed that duplicate DNS requests do happen even
when the client instance is shared. This is because the two iterators request the tree
root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the
change also adds a singleflight.Group instance in the client. When one iterator
attempts to resolve an entry which is already being resolved, the singleflight object
waits for the existing resolve call to finish and returns the entry to both places.
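The deduplication, as a sketch (resolveTXT is a hypothetical underlying lookup):

```go
package sketch

import "golang.org/x/sync/singleflight"

// resolver wraps a hypothetical resolveTXT function with a singleflight.Group,
// so concurrent lookups of the same tree entry collapse into one DNS request.
type resolver struct {
	group      singleflight.Group
	resolveTXT func(name string) (string, error)
}

func (r *resolver) resolveEntry(name string) (string, error) {
	v, err, _ := r.group.Do(name, func() (interface{}, error) {
		return r.resolveTXT(name)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}
```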
When receiving a PING from an IPv4 address over IPv6, the implementation sent
back an IPv4-in-IPv6 address. This change makes it reflect the plain IPv4 address instead.
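The shape of the fix, as a sketch:

```go
package sketch

import "net"

// reflectedIP returns the address to echo back: the 4-byte form when the
// sender is actually IPv4, even if the packet arrived on an IPv6 socket.
func reflectedIP(from *net.UDPAddr) net.IP {
	if ip4 := from.IP.To4(); ip4 != nil {
		return ip4
	}
	return from.IP
}
```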
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking
* p2p/tracker: sanity cap the number of pending requests
* p2p/tracker: linter <3
* p2p/tracker: disable entire tracker if no metrics are enabled
This fixes the calculation of the tree branch factor. With the new
formula, we now create at most 13 children instead of 30, ensuring
the TXT record size will be below 370 bytes.
This PR implements the first of the "lespay" UDP queries, which
is already useful in itself: the capacity query. The server pool makes
use of this query, sending a cheap UDP request to determine whether it is
worth starting the more expensive TCP connection process.