// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

// Package discover implements the Node Discovery Protocol.
//
// The Node Discovery protocol provides a way to find RLPx nodes that
// can be connected to. It uses a Kademlia-like protocol to maintain a
// distributed database of the IDs and endpoints of all listening
// nodes.
package discover

import (
	"context"
	"fmt"
	"net/netip"
	"slices"
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/mclock"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/metrics"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/netutil"
)

const (
	alpha           = 3  // Kademlia concurrency factor
	bucketSize      = 16 // Kademlia bucket size
	maxReplacements = 10 // Size of per-bucket replacement list

	// We keep buckets for the upper 1/15 of distances because
	// it's very unlikely we'll ever encounter a node that's closer.
	hashBits          = len(common.Hash{}) * 8
	nBuckets          = hashBits / 15       // Number of buckets
	bucketMinDistance = hashBits - nBuckets // Log distance of closest bucket

	// IP address limits.
	bucketIPLimit, bucketSubnet = 2, 24 // at most 2 addresses from the same /24
	tableIPLimit, tableSubnet   = 10, 24

	seedMinTableTime = 5 * time.Minute
	seedCount        = 30
	seedMaxAge       = 5 * 24 * time.Hour
)
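
// To make the numbers above concrete: common.Hash is 32 bytes, so hashBits is
// 32*8 = 256, nBuckets is 256/15 = 17 (integer division), and bucketMinDistance
// is 256-17 = 239. Dedicated buckets therefore exist only for log-distances
// 240 through 256; anything closer is folded into the nearest bucket. A rough
// sketch of the distance-to-index mapping (exampleBucketIndex is illustrative
// only; the real mapping lives in bucketAtDistance, defined elsewhere in this
// package):
//
//	func exampleBucketIndex(d int) int {
//		if d <= bucketMinDistance {
//			return 0 // rare close distances collapse into the first bucket
//		}
//		return d - bucketMinDistance - 1 // distances 240..256 -> indices 0..16
//	}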

// Table is the 'node table', a Kademlia-like index of neighbor nodes. The table keeps
// itself up-to-date by verifying the liveness of neighbors and requesting their node
// records when announcements of a new record version are received.
type Table struct {
	mutex        sync.Mutex        // protects buckets, bucket content, nursery, rand
	buckets      [nBuckets]*bucket // index of known nodes by distance
	nursery      []*enode.Node     // bootstrap nodes
	rand         reseedingRandom   // source of randomness, periodically reseeded
	ips          netutil.DistinctNetSet
	revalidation tableRevalidation

	db  *enode.DB // database of known nodes
	net transport
	cfg Config
	log log.Logger

	// loop channels
	refreshReq      chan chan struct{}
	revalResponseCh chan revalidationResponse
	addNodeCh       chan addNodeOp
	addNodeHandled  chan bool
	trackRequestCh  chan trackRequestOp
	initDone        chan struct{}
	closeReq        chan struct{}
	closed          chan struct{}

	nodeAddedHook   func(*bucket, *tableNode)
	nodeRemovedHook func(*bucket, *tableNode)
}

// transport is implemented by the UDP transports.
type transport interface {
	Self() *enode.Node
	RequestENR(*enode.Node) (*enode.Node, error)
	lookupRandom() []*enode.Node
	lookupSelf() []*enode.Node
	ping(*enode.Node) (seq uint64, err error)
}
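
// The interface above is everything the table needs from the wire layer. As
// an illustrative sketch only (this type is not used by the package, and the
// name noopTransport is made up here), a stub satisfying transport could look
// like the following; a test would typically substitute something along
// these lines:
type noopTransport struct{ self *enode.Node }

func (t *noopTransport) Self() *enode.Node                             { return t.self }
func (t *noopTransport) RequestENR(n *enode.Node) (*enode.Node, error) { return n, nil }
func (t *noopTransport) lookupRandom() []*enode.Node                   { return nil }
func (t *noopTransport) lookupSelf() []*enode.Node                     { return nil }
func (t *noopTransport) ping(*enode.Node) (seq uint64, err error)      { return 0, nil }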

// bucket contains nodes, ordered by their last activity. The entry
// that was most recently active is the first element in entries.
type bucket struct {
	entries      []*tableNode // live entries, sorted by time of last contact
	replacements []*tableNode // recently seen nodes to be used if revalidation fails
	ips          netutil.DistinctNetSet
	index        int
}

type addNodeOp struct {
	node         *enode.Node
	isInbound    bool
	forceSetLive bool // for tests
}

type trackRequestOp struct {
	node       *enode.Node
	foundNodes []*enode.Node
	success    bool
}

func newTable(t transport, db *enode.DB, cfg Config) (*Table, error) {
	cfg = cfg.withDefaults()
	tab := &Table{
		net:             t,
		db:              db,
		cfg:             cfg,
		log:             cfg.Log,
		refreshReq:      make(chan chan struct{}),
		revalResponseCh: make(chan revalidationResponse),
		addNodeCh:       make(chan addNodeOp),
		addNodeHandled:  make(chan bool),
		trackRequestCh:  make(chan trackRequestOp),
		initDone:        make(chan struct{}),
		closeReq:        make(chan struct{}),
		closed:          make(chan struct{}),
		ips:             netutil.DistinctNetSet{Subnet: tableSubnet, Limit: tableIPLimit},
	}
	for i := range tab.buckets {
		tab.buckets[i] = &bucket{
			index: i,
			ips:   netutil.DistinctNetSet{Subnet: bucketSubnet, Limit: bucketIPLimit},
		}
	}
	tab.rand.seed()
	tab.revalidation.init(&cfg)

	// initial table content
	if err := tab.setFallbackNodes(cfg.Bootnodes); err != nil {
		return nil, err
	}
	tab.loadSeedNodes()

	return tab, nil
}
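
// newTable is typically invoked by the transport constructors in this
// package. A hedged caller-side sketch, using only the Config fields that
// appear above (the exact constructor and remaining Config fields live
// elsewhere):
//
//	tab, err := newTable(transportImpl, db, Config{Bootnodes: bootnodes, Log: logger})
//	if err != nil {
//		// handle invalid bootstrap nodes, etc.
//	}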

// Nodes returns all nodes contained in the table.
func (tab *Table) Nodes() [][]BucketNode {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	nodes := make([][]BucketNode, len(tab.buckets))
	for i, b := range &tab.buckets {
		nodes[i] = make([]BucketNode, len(b.entries))
		for j, n := range b.entries {
			nodes[i][j] = BucketNode{
				Node:          n.Node,
				Checks:        int(n.livenessChecks),
				Live:          n.isValidatedLive,
				AddedToTable:  n.addedToTable,
				AddedToBucket: n.addedToBucket,
			}
		}
	}
	return nodes
}
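
// countLiveNodes is an illustrative helper, not used elsewhere in this
// package (the name is made up here). It shows how the [][]BucketNode value
// returned by Nodes can be consumed: the outer slice is indexed by bucket,
// and each BucketNode carries the liveness metadata assembled above.
func countLiveNodes(tab *Table) (live int) {
	for _, b := range tab.Nodes() {
		for _, n := range b {
			if n.Live {
				live++
			}
		}
	}
	return live
}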

func (tab *Table) self() *enode.Node {
	return tab.net.Self()
}

// getNode returns the node with the given ID or nil if it isn't in the table.
func (tab *Table) getNode(id enode.ID) *enode.Node {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	b := tab.bucket(id)
	for _, e := range b.entries {
		if e.ID() == id {
			return e.Node
		}
	}
	return nil
}

// close terminates the network listener and flushes the node database.
func (tab *Table) close() {
	close(tab.closeReq)
	<-tab.closed
}

// setFallbackNodes sets the initial points of contact. These nodes
// are used to connect to the network if the table is empty and there
// are no known nodes in the database.
func (tab *Table) setFallbackNodes(nodes []*enode.Node) error {
	nursery := make([]*enode.Node, 0, len(nodes))
	for _, n := range nodes {
		if err := n.ValidateComplete(); err != nil {
			return fmt.Errorf("bad bootstrap node %q: %v", n, err)
		}
		if tab.cfg.NetRestrict != nil && !tab.cfg.NetRestrict.ContainsAddr(n.IPAddr()) {
			tab.log.Error("Bootstrap node filtered by netrestrict", "id", n.ID(), "ip", n.IPAddr())
			continue
		}
		nursery = append(nursery, n)
	}
	tab.nursery = nursery
	return nil
}
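
// Bootstrap nodes reach this method through Config.Bootnodes (see newTable
// above). As a hedged sketch of caller-side setup, assuming the
// enode.MustParse helper from the p2p/enode package and a placeholder URL:
//
//	cfg.Bootnodes = []*enode.Node{
//		enode.MustParse("enode://<hex-node-id>@10.3.58.6:30303"),
//	}
//
// Nodes that fail ValidateComplete are rejected with an error, and nodes
// outside the optional NetRestrict list are skipped with a log message.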

// isInitDone returns whether the table's initial seeding procedure has completed.
func (tab *Table) isInitDone() bool {
	select {
	case <-tab.initDone:
		return true
	default:
		return false
	}
}

func (tab *Table) refresh() <-chan struct{} {
	done := make(chan struct{})
	select {
	case tab.refreshReq <- done:
	case <-tab.closeReq:
		close(done)
	}
	return done
}
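
// A caller that needs to wait for the refresh to finish can simply receive
// from the returned channel, for example:
//
//	<-tab.refresh()
//
// The table's main loop is expected to close the channel once the refresh has
// completed; if the table is already shutting down, the channel is closed
// immediately above and the receive returns right away.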

// findnodeByID returns the n nodes in the table that are closest to the given id.
// This is used by the FINDNODE/v4 handler.
//
// The preferLive parameter says whether the caller wants liveness-checked results. If
// preferLive is true and the table contains any verified nodes, the result will not
// contain unverified nodes. However, if there are no verified nodes at all, the result
// will contain unverified nodes.
func (tab *Table) findnodeByID(target enode.ID, nresults int, preferLive bool) *nodesByDistance {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	// Scan all buckets. There might be a better way to do this, but there aren't that many
	// buckets, so this solution should be fine. The worst-case complexity of this loop
	// is O(tab.len() * nresults).
	nodes := &nodesByDistance{target: target}
	liveNodes := &nodesByDistance{target: target}
	for _, b := range &tab.buckets {
		for _, n := range b.entries {
			nodes.push(n.Node, nresults)
			if preferLive && n.isValidatedLive {
				liveNodes.push(n.Node, nresults)
			}
		}
	}

	if preferLive && len(liveNodes.entries) > 0 {
		return liveNodes
	}
	return nodes
}
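
// Roughly how a FINDNODE/v4 handler would use this (illustrative only):
//
//	closest := tab.findnodeByID(target, bucketSize, true).entries
//
// With preferLive set, the returned set contains only validated nodes unless
// the table holds none at all, in which case unverified entries are returned
// as a fallback.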

// appendBucketNodes adds nodes at the given distance to the result slice.
// This is used by the FINDNODE/v5 handler.
func (tab *Table) appendBucketNodes(dist uint, result []*enode.Node, checkLive bool) []*enode.Node {
	if dist > 256 {
		return result
	}
	if dist == 0 {
		return append(result, tab.self())
	}

	tab.mutex.Lock()
	for _, n := range tab.bucketAtDistance(int(dist)).entries {
		if !checkLive || n.isValidatedLive {
			result = append(result, n.Node)
		}
	}
	tab.mutex.Unlock()

	// Shuffle result to avoid always returning same nodes in FINDNODE/v5.
	tab.rand.Shuffle(len(result), func(i, j int) {
		result[i], result[j] = result[j], result[i]
	})
	return result
}

// len returns the number of nodes in the table.
func (tab *Table) len() (n int) {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	for _, b := range &tab.buckets {
		n += len(b.entries)
	}
	return n
}

// addFoundNode adds a node which may not be live. If the bucket has space available,
// adding the node succeeds immediately. Otherwise, the node is added to the replacements
// list.
//
// The caller must not hold tab.mutex.
func (tab *Table) addFoundNode(n *enode.Node, forceSetLive bool) bool {
	op := addNodeOp{node: n, isInbound: false, forceSetLive: forceSetLive}
	select {
	case tab.addNodeCh <- op:
		return <-tab.addNodeHandled
	case <-tab.closeReq:
		return false
	}
}

// addInboundNode adds a node from an inbound contact. If the bucket has no space, the
// node is added to the replacements list.
//
// There is an additional safety measure: if the table is still initializing the node is
// not added. This prevents an attack where the table could be filled by just sending ping
// repeatedly.
//
// The caller must not hold tab.mutex.
func (tab *Table) addInboundNode(n *enode.Node) bool {
	op := addNodeOp{node: n, isInbound: true}
	select {
	case tab.addNodeCh <- op:
		return <-tab.addNodeHandled
	case <-tab.closeReq:
		return false
	}
}

func (tab *Table) trackRequest(n *enode.Node, success bool, foundNodes []*enode.Node) {
	op := trackRequestOp{n, foundNodes, success}
	select {
	case tab.trackRequestCh <- op:
	case <-tab.closeReq:
	}
}
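
// Note: the following is an illustrative sketch, not part of the upstream file. The three
// methods above (addFoundNode, addInboundNode, trackRequest) never mutate the table directly:
// they hand an op to the loop goroutine over a channel, and the add variants wait for the
// answer on addNodeHandled, so all table mutations are serialized in loop. The same handoff
// pattern in isolation (hypothetical names):
//
//	type actor struct {
//		opCh    chan func() bool // operation to run inside the loop goroutine
//		replyCh chan bool        // result of the last operation
//		closeCh chan struct{}    // closed on shutdown
//	}
//
//	func (a *actor) do(op func() bool) bool {
//		select {
//		case a.opCh <- op:
//			return <-a.replyCh
//		case <-a.closeCh:
//			return false
//		}
//	}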

// loop is the main loop of Table.
func (tab *Table) loop() {
	var (
		refresh         = time.NewTimer(tab.nextRefreshTime())
		refreshDone     = make(chan struct{})           // where doRefresh reports completion
		waiting         = []chan struct{}{tab.initDone} // holds waiting callers while doRefresh runs
		revalTimer      = mclock.NewAlarm(tab.cfg.Clock)
		reseedRandTimer = time.NewTicker(10 * time.Minute)
	)
	defer refresh.Stop()
	defer revalTimer.Stop()
	defer reseedRandTimer.Stop()

	// Start initial refresh.
	go tab.doRefresh(refreshDone)

loop:
	for {
		nextTime := tab.revalidation.run(tab, tab.cfg.Clock.Now())
		revalTimer.Schedule(nextTime)

		select {
		case <-reseedRandTimer.C:
			tab.rand.seed()

		case <-revalTimer.C():

		case r := <-tab.revalResponseCh:
			tab.revalidation.handleResponse(tab, r)

		case op := <-tab.addNodeCh:
			tab.mutex.Lock()
			ok := tab.handleAddNode(op)
			tab.mutex.Unlock()
			tab.addNodeHandled <- ok

		case op := <-tab.trackRequestCh:
			tab.handleTrackRequest(op)

		case <-refresh.C:
			if refreshDone == nil {
				refreshDone = make(chan struct{})
				go tab.doRefresh(refreshDone)
			}

		case req := <-tab.refreshReq:
			waiting = append(waiting, req)
			if refreshDone == nil {
				refreshDone = make(chan struct{})
				go tab.doRefresh(refreshDone)
			}

		case <-refreshDone:
			for _, ch := range waiting {
				close(ch)
			}
			waiting, refreshDone = nil, nil
			refresh.Reset(tab.nextRefreshTime())

		case <-tab.closeReq:
			break loop
		}
	}

	if refreshDone != nil {
		<-refreshDone
	}
	for _, ch := range waiting {
		close(ch)
	}
	close(tab.closed)
}

// doRefresh performs a lookup for a random target to keep buckets full. seed nodes are
// inserted if the table is empty (initial bootstrap or discarded faulty peers).
func (tab *Table) doRefresh(done chan struct{}) {
	defer close(done)

	// Load nodes from the database and insert
	// them. This should yield a few previously seen nodes that are
	// (hopefully) still alive.
	tab.loadSeedNodes()

	// Run self lookup to discover new neighbor nodes.
	tab.net.lookupSelf()

	// The Kademlia paper specifies that the bucket refresh should
	// perform a lookup in the least recently used bucket. We cannot
	// adhere to this because the findnode target is a 512bit value
	// (not hash-sized) and it is not easily possible to generate a
	// sha3 preimage that falls into a chosen bucket.
	// We perform a few lookups with a random target instead.
	for i := 0; i < 3; i++ {
		tab.net.lookupRandom()
	}
}
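
// Illustrative sketch, not part of the upstream file. The refresh above relies on lookups
// toward random targets rather than toward a chosen bucket. A random 256-bit target could
// be drawn like this (assuming the import alias crand "crypto/rand"; this is not the
// package's actual lookup code):
//
//	func randomTarget() (id [32]byte) {
//		crand.Read(id[:])
//		return id
//	}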

func (tab *Table) loadSeedNodes() {
	seeds := tab.db.QuerySeeds(seedCount, seedMaxAge)
	seeds = append(seeds, tab.nursery...)
	for i := range seeds {
		seed := seeds[i]
		if tab.log.Enabled(context.Background(), log.LevelTrace) {
			age := time.Since(tab.db.LastPongReceived(seed.ID(), seed.IPAddr()))
			addr, _ := seed.UDPEndpoint()
			tab.log.Trace("Found seed node in database", "id", seed.ID(), "addr", addr, "age", age)
		}
		tab.mutex.Lock()
		tab.handleAddNode(addNodeOp{node: seed, isInbound: false})
		tab.mutex.Unlock()
	}
}

func (tab *Table) nextRefreshTime() time.Duration {
	half := tab.cfg.RefreshInterval / 2
	return half + time.Duration(tab.rand.Int63n(int64(half)))
}
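
// Illustrative sketch, not part of the upstream file. nextRefreshTime jitters the refresh
// schedule uniformly over [RefreshInterval/2, RefreshInterval); with a 30m interval, for
// example, the next refresh fires after somewhere between 15m and 30m. The same computation
// standalone (assuming the import alias mrand "math/rand"):
//
//	func jitteredInterval(r *mrand.Rand, interval time.Duration) time.Duration {
//		half := interval / 2
//		return half + time.Duration(r.Int63n(int64(half)))
//	}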

// bucket returns the bucket for the given node ID hash.
func (tab *Table) bucket(id enode.ID) *bucket {
	d := enode.LogDist(tab.self().ID(), id)
	return tab.bucketAtDistance(d)
}

func (tab *Table) bucketAtDistance(d int) *bucket {
	if d <= bucketMinDistance {
		return tab.buckets[0]
	}
	return tab.buckets[d-bucketMinDistance-1]
}
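
// Illustrative sketch, not part of the upstream file. bucketAtDistance collapses all close
// log-distances into the first bucket. Assuming the package's usual constants (hashBits = 256,
// nBuckets = 17, bucketMinDistance = 239; treat these values as an assumption here), distances
// 0..240 land in buckets[0] and distances 241..256 each get their own bucket, ending with 256
// in buckets[16]. The index computation standalone:
//
//	func bucketIndex(d, minDistance int) int {
//		if d <= minDistance {
//			return 0
//		}
//		return d - minDistance - 1
//	}
//
//	// bucketIndex(240, 239) == 0, bucketIndex(241, 239) == 1, bucketIndex(256, 239) == 16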

func (tab *Table) addIP(b *bucket, ip netip.Addr) bool {
	if !ip.IsValid() || ip.IsUnspecified() {
		return false // Nodes without IP cannot be added.
	}
	if netutil.AddrIsLAN(ip) {
		return true
	}
	if !tab.ips.AddAddr(ip) {
		tab.log.Debug("IP exceeds table limit", "ip", ip)
		return false
	}
	if !b.ips.AddAddr(ip) {
		tab.log.Debug("IP exceeds bucket limit", "ip", ip)
		tab.ips.RemoveAddr(ip)
		return false
	}
	return true
}

func (tab *Table) removeIP(b *bucket, ip netip.Addr) {
	if netutil.AddrIsLAN(ip) {
		return
	}
	tab.ips.RemoveAddr(ip)
	b.ips.RemoveAddr(ip)
}
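
// Illustrative sketch, not part of the upstream file. addIP/removeIP bound how many addresses
// from the same subnet may occupy the whole table and each bucket, with LAN addresses exempt;
// this limits eclipse attempts from a single network. Minimal usage of a netutil.DistinctNetSet
// counting set (the Subnet/Limit values below are assumptions, not this package's configuration):
//
//	set := netutil.DistinctNetSet{Subnet: 24, Limit: 10}
//	if !set.AddAddr(netip.MustParseAddr("203.0.113.5")) {
//		// Subnet quota exhausted: reject the node, as addIP does above.
//	}
//	set.RemoveAddr(netip.MustParseAddr("203.0.113.5"))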

// handleAddNode adds the node in the request to the table, if there is space.
// The caller must hold tab.mutex.
func (tab *Table) handleAddNode(req addNodeOp) bool {
	if req.node.ID() == tab.self().ID() {
		return false
	}
	// For nodes from inbound contact, there is an additional safety measure: if the table
	// is still initializing the node is not added.
	if req.isInbound && !tab.isInitDone() {
		return false
	}

	b := tab.bucket(req.node.ID())
	n, _ := tab.bumpInBucket(b, req.node, req.isInbound)
	if n != nil {
		// Already in bucket.
		return false
	}
	if len(b.entries) >= bucketSize {
		// Bucket full, maybe add as replacement.
		tab.addReplacement(b, req.node)
		return false
	}
	if !tab.addIP(b, req.node.IPAddr()) {
		// Can't add: IP limit reached.
		return false
	}

	// Add to bucket.
	wn := &tableNode{Node: req.node}
	if req.forceSetLive {
		wn.livenessChecks = 1
		wn.isValidatedLive = true
	}
	b.entries = append(b.entries, wn)
	b.replacements = deleteNode(b.replacements, wn.ID())
	tab.nodeAdded(b, wn)
	return true
}

// addReplacement adds n to the replacement cache of bucket b.
func (tab *Table) addReplacement(b *bucket, n *enode.Node) {
	if containsID(b.replacements, n.ID()) {
		// TODO: update ENR
		return
	}
	if !tab.addIP(b, n.IPAddr()) {
		return
	}

	wn := &tableNode{Node: n, addedToTable: time.Now()}
	var removed *tableNode
	b.replacements, removed = pushNode(b.replacements, wn, maxReplacements)
	if removed != nil {
		tab.removeIP(b, removed.IPAddr())
	}
}

func (tab *Table) nodeAdded(b *bucket, n *tableNode) {
	if n.addedToTable == (time.Time{}) {
		n.addedToTable = time.Now()
	}
	n.addedToBucket = time.Now()
	tab.revalidation.nodeAdded(tab, n)
	if tab.nodeAddedHook != nil {
		tab.nodeAddedHook(b, n)
	}
	if metrics.Enabled {
		bucketsCounter[b.index].Inc(1)
	}
}

func (tab *Table) nodeRemoved(b *bucket, n *tableNode) {
	tab.revalidation.nodeRemoved(n)
	if tab.nodeRemovedHook != nil {
		tab.nodeRemovedHook(b, n)
	}
	if metrics.Enabled {
		bucketsCounter[b.index].Dec(1)
	}
}

// deleteInBucket removes node n from the table.
// If there are replacement nodes in the bucket, the node is replaced.
func (tab *Table) deleteInBucket(b *bucket, id enode.ID) *tableNode {
	index := slices.IndexFunc(b.entries, func(e *tableNode) bool { return e.ID() == id })
	if index == -1 {
		// Entry has been removed already.
		return nil
	}

	// Remove the node.
	n := b.entries[index]
	b.entries = slices.Delete(b.entries, index, index+1)
	tab.removeIP(b, n.IPAddr())
	tab.nodeRemoved(b, n)

	// Add replacement.
	if len(b.replacements) == 0 {
		tab.log.Debug("Removed dead node", "b", b.index, "id", n.ID(), "ip", n.IPAddr())
		return nil
	}
	rindex := tab.rand.Intn(len(b.replacements))
	rep := b.replacements[rindex]
	b.replacements = slices.Delete(b.replacements, rindex, rindex+1)
	b.entries = append(b.entries, rep)
	tab.nodeAdded(b, rep)
	tab.log.Debug("Replaced dead node", "b", b.index, "id", n.ID(), "ip", n.IPAddr(), "r", rep.ID(), "rip", rep.IPAddr())
	return rep
}

// bumpInBucket updates a node record if it exists in the bucket.
// The second return value reports whether the node's endpoint (IP/port) was updated.
func (tab *Table) bumpInBucket(b *bucket, newRecord *enode.Node, isInbound bool) (n *tableNode, endpointChanged bool) {
	i := slices.IndexFunc(b.entries, func(elem *tableNode) bool {
		return elem.ID() == newRecord.ID()
	})
	if i == -1 {
		return nil, false // not in bucket
	}
	n = b.entries[i]

	// For inbound updates (from the node itself) we accept any change, even if it sets
	// back the sequence number. For found nodes (!isInbound), seq has to advance. Note
	// this check also ensures found discv4 nodes (which always have seq=0) can't be
	// updated.
	if newRecord.Seq() <= n.Seq() && !isInbound {
		return n, false
	}

	// Check endpoint update against IP limits.
	ipchanged := newRecord.IPAddr() != n.IPAddr()
	portchanged := newRecord.UDP() != n.UDP()
	if ipchanged {
		tab.removeIP(b, n.IPAddr())
		if !tab.addIP(b, newRecord.IPAddr()) {
			// It doesn't fit with the limit, put the previous record back.
			tab.addIP(b, n.IPAddr())
			return n, false
		}
	}

	// Apply update.
	n.Node = newRecord
	if ipchanged || portchanged {
		// Ensure node is revalidated quickly for endpoint changes.
		tab.revalidation.nodeEndpointChanged(tab, n)
		return n, true
	}
	return n, false
}
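
// Illustrative sketch, not part of the upstream file. The sequence-number rule above reduces
// to a simple predicate: records delivered by the node itself (inbound) are always accepted,
// while records found via other nodes must carry a strictly higher ENR sequence number.
// Restated standalone:
//
//	func acceptRecordUpdate(isInbound bool, newSeq, oldSeq uint64) bool {
//		return isInbound || newSeq > oldSeq
//	}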
|
|
|
|
|
p2p/discover: improved node revalidation (#29572)
Node discovery periodically revalidates the nodes in its table by sending PING, checking
if they are still alive. I recently noticed some issues with the implementation of this
process, which can cause strange results such as nodes dropping unexpectedly, certain
nodes not getting revalidated often enough, and bad results being returned to incoming
FINDNODE queries.
In this change, the revalidation process is improved with the following logic:
- We maintain two 'revalidation lists' containing the table nodes, named 'fast' and 'slow'.
- The process chooses random nodes from each list on a randomized interval, the interval being
faster for the 'fast' list, and performs revalidation for the chosen node.
- Whenever a node is newly inserted into the table, it goes into the 'fast' list.
Once validation passes, it transfers to the 'slow' list. If a request fails, or the
node changes endpoint, it transfers back into 'fast'.
- livenessChecks is incremented by one for successful checks. Unlike the old implementation,
we will not drop the node on the first failing check. We instead quickly decay the
livenessChecks give it another chance.
- Order of nodes in bucket doesn't matter anymore.
I am also adding a debug API endpoint to dump the node table content.
Co-authored-by: Martin HS <martin@swende.se>
2024-05-23 14:26:09 +02:00
|
|
|
func (tab *Table) handleTrackRequest(op trackRequestOp) {
	var fails int
	if op.success {
		// Reset failure counter because it counts _consecutive_ failures.
		tab.db.UpdateFindFails(op.node.ID(), op.node.IPAddr(), 0)
	} else {
		fails = tab.db.FindFails(op.node.ID(), op.node.IPAddr())
		fails++
		tab.db.UpdateFindFails(op.node.ID(), op.node.IPAddr(), fails)
	}
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	b := tab.bucket(op.node.ID())
	// Remove the node from the local table if it fails to return anything useful too
	// many times, but only if there are enough other nodes in the bucket. This latter
	// condition specifically exists to make bootstrapping in smaller test networks more
	// reliable.
	if fails >= maxFindnodeFailures && len(b.entries) >= bucketSize/4 {
		tab.deleteInBucket(b, op.node.ID())
	}

	// Add found nodes.
	for _, n := range op.foundNodes {
		tab.handleAddNode(addNodeOp{n, false, false})
	}
}
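// A minimal sketch of the eviction rule applied above, factored into a
// standalone predicate. The function and parameter names are illustrative
// assumptions; the code above compares against maxFindnodeFailures and
// bucketSize/4 directly.
func shouldDropAfterFindFailuresSketch(consecutiveFails, bucketOccupancy, maxFails, bucketCap int) bool {
	// Drop only when the node kept failing FINDNODE requests AND the bucket
	// still holds enough alternatives; in a small test network a sparsely
	// filled bucket keeps even unresponsive nodes around.
	return consecutiveFails >= maxFails && bucketOccupancy >= bucketCap/4
}

// With placeholder values maxFails=5 and bucketCap=16, a node with 5
// consecutive failures would be dropped from a bucket holding 8 entries,
// but kept in a bucket holding only 2.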
// pushNode adds n to the front of list, keeping at most max items.
func pushNode(list []*tableNode, n *tableNode, max int) ([]*tableNode, *tableNode) {
	if len(list) < max {
		list = append(list, nil)
	}
	removed := list[len(list)-1]
	copy(list[1:], list)
	list[0] = n
	return list, removed
}
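// A generic, standalone variant of pushNode, included as an illustrative
// sketch of the same prepend-with-cap behaviour on plain values; the name
// pushFrontSketch is an assumption and nothing else in the package uses it.
func pushFrontSketch[T any](list []T, v T, max int) ([]T, T) {
	var evicted T
	if len(list) < max {
		// Still room: grow by one element so the shift below has space.
		list = append(list, evicted)
	}
	evicted = list[len(list)-1] // element falling off the end (zero value if the list just grew)
	copy(list[1:], list)        // shift everything right by one, discarding the last element
	list[0] = v
	return list, evicted
}

// For example, pushing 4 into [3 2 1] with max 3 yields [4 3 2] and evicts 1;
// pushing 4 into [2 1] with max 3 yields [4 2 1] and evicts the zero value.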