// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package eth

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"math"
	"math/big"
	"sync"
	"sync/atomic"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/consensus/misc"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/eth/downloader"
	"github.com/ethereum/go-ethereum/eth/fetcher"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/event"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/p2p"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/params"
	"github.com/ethereum/go-ethereum/rlp"
)

const (
	softResponseLimit = 2 * 1024 * 1024 // Target maximum size of returned blocks, headers or node data.
	estHeaderRlpSize  = 500             // Approximate size of an RLP encoded block header

	// txChanSize is the size of channel listening to NewTxsEvent.
	// The number is referenced from the size of tx pool.
	txChanSize = 4096

	// minimum number of peers to broadcast new blocks to
	minBroadcastPeers = 4
)

var (
	daoChallengeTimeout = 15 * time.Second // Time allowance for a node to reply to the DAO handshake challenge
)

// errIncompatibleConfig is returned if the requested protocols and configs are
// not compatible (low protocol version restrictions and high requirements).
var errIncompatibleConfig = errors.New("incompatible configuration")
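
// errResp wraps an eth protocol error code and a formatted detail message into
// a single error value.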
func errResp(code errCode, format string, v ...interface{}) error {
	return fmt.Errorf("%v - %v", code, fmt.Sprintf(format, v...))
}

type ProtocolManager struct {
	networkID uint64

	fastSync  uint32 // Flag whether fast sync is enabled (gets disabled if we already have blocks)
	acceptTxs uint32 // Flag whether we're considered synchronised (enables transaction processing)

	txpool      txPool
	blockchain  *core.BlockChain
	chainconfig *params.ChainConfig
	maxPeers    int

	downloader *downloader.Downloader
	fetcher    *fetcher.Fetcher
	peers      *peerSet

	SubProtocols []p2p.Protocol

	eventMux      *event.TypeMux
	txsCh         chan core.NewTxsEvent
	txsSub        event.Subscription
	minedBlockSub *event.TypeMuxSubscription

	whitelist map[uint64]common.Hash

	// channels for fetcher, syncer, txsyncLoop
	newPeerCh   chan *peer
	txsyncCh    chan *txsync
	quitSync    chan struct{}
	noMorePeers chan struct{}

	// wait group is used for graceful shutdowns during downloading
	// and processing
	wg sync.WaitGroup
}

// NewProtocolManager returns a new Ethereum sub protocol manager. The Ethereum
// sub protocol manages peers capable of communicating on the Ethereum network.
func NewProtocolManager(config *params.ChainConfig, mode downloader.SyncMode, networkID uint64, mux *event.TypeMux, txpool txPool, engine consensus.Engine, blockchain *core.BlockChain, chaindb ethdb.Database, whitelist map[uint64]common.Hash) (*ProtocolManager, error) {
	// Create the protocol manager with the base fields
	manager := &ProtocolManager{
		networkID:   networkID,
		eventMux:    mux,
		txpool:      txpool,
		blockchain:  blockchain,
		chainconfig: config,
		peers:       newPeerSet(),
		whitelist:   whitelist,
		newPeerCh:   make(chan *peer),
		noMorePeers: make(chan struct{}),
		txsyncCh:    make(chan *txsync),
		quitSync:    make(chan struct{}),
	}
	// Figure out whether to allow fast sync or not
	if mode == downloader.FastSync && blockchain.CurrentBlock().NumberU64() > 0 {
		log.Warn("Blockchain not empty, fast sync disabled")
		mode = downloader.FullSync
	}
	if mode == downloader.FastSync {
		manager.fastSync = uint32(1)
	}
	// Initiate a sub-protocol for every implemented version we can handle
	manager.SubProtocols = make([]p2p.Protocol, 0, len(ProtocolVersions))
	for i, version := range ProtocolVersions {
		// Skip protocol version if incompatible with the mode of operation
		if mode == downloader.FastSync && version < eth63 {
			continue
		}
		// Compatible; initialise the sub-protocol
		version := version // Closure for the run
		manager.SubProtocols = append(manager.SubProtocols, p2p.Protocol{
			Name:    ProtocolName,
			Version: version,
			Length:  ProtocolLengths[i],
			Run: func(p *p2p.Peer, rw p2p.MsgReadWriter) error {
				peer := manager.newPeer(int(version), p, rw)
				select {
				case manager.newPeerCh <- peer:
					manager.wg.Add(1)
					defer manager.wg.Done()
					return manager.handle(peer)
				case <-manager.quitSync:
					return p2p.DiscQuitting
				}
			},
			NodeInfo: func() interface{} {
				return manager.NodeInfo()
			},
			PeerInfo: func(id enode.ID) interface{} {
				if p := manager.peers.Peer(fmt.Sprintf("%x", id[:8])); p != nil {
					return p.Info()
				}
				return nil
			},
		})
	}
	if len(manager.SubProtocols) == 0 {
		return nil, errIncompatibleConfig
	}
	// Construct the different synchronisation mechanisms
	manager.downloader = downloader.New(mode, chaindb, manager.eventMux, blockchain, nil, manager.removePeer)

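	// validator delegates header verification to the consensus engine.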
	validator := func(header *types.Header) error {
		return engine.VerifyHeader(blockchain, header, true)
	}
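	// heighter reports the current canonical chain height to the fetcher.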
	heighter := func() uint64 {
		return blockchain.CurrentBlock().NumberU64()
	}
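	// inserter injects blocks retrieved by the fetcher into the local chain.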
	inserter := func(blocks types.Blocks) (int, error) {
		// If fast sync is running, deny importing weird blocks
		if atomic.LoadUint32(&manager.fastSync) == 1 {
			log.Warn("Discarded bad propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash())
			return 0, nil
		}
		atomic.StoreUint32(&manager.acceptTxs, 1) // Mark initial sync done on any fetcher import
		return manager.blockchain.InsertChain(blocks)
	}
	manager.fetcher = fetcher.New(blockchain.GetBlockByHash, validator, manager.BroadcastBlock, heighter, inserter, manager.removePeer)

	return manager, nil
}
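
// removePeer unregisters a peer from the downloader and the Ethereum peer set,
// then hard-disconnects it at the networking layer.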
func (pm *ProtocolManager) removePeer(id string) {
	// Short circuit if the peer was already removed
	peer := pm.peers.Peer(id)
	if peer == nil {
		return
	}
	log.Debug("Removing Ethereum peer", "peer", id)

	// Unregister the peer from the downloader and Ethereum peer set
	pm.downloader.UnregisterPeer(id)
	if err := pm.peers.Unregister(id); err != nil {
		log.Error("Peer removal failed", "peer", id, "err", err)
	}
	// Hard disconnect at the networking layer
	if peer != nil {
		peer.Peer.Disconnect(p2p.DiscUselessPeer)
	}
}
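
// Start launches the protocol manager's event loops: transaction and mined
// block broadcasting, plus the block and transaction sync handlers.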
func (pm *ProtocolManager) Start(maxPeers int) {
	pm.maxPeers = maxPeers

	// broadcast transactions
	pm.txsCh = make(chan core.NewTxsEvent, txChanSize)
	pm.txsSub = pm.txpool.SubscribeNewTxsEvent(pm.txsCh)
	go pm.txBroadcastLoop()

	// broadcast mined blocks
	pm.minedBlockSub = pm.eventMux.Subscribe(core.NewMinedBlockEvent{})
	go pm.minedBroadcastLoop()

	// start sync handlers
	go pm.syncer()
	go pm.txsyncLoop()
}
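
// Stop terminates all the protocol manager's event loops and waits for every
// peer handler goroutine to exit before returning.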
func (pm *ProtocolManager) Stop() {
	log.Info("Stopping Ethereum protocol")

	pm.txsSub.Unsubscribe()        // quits txBroadcastLoop
	pm.minedBlockSub.Unsubscribe() // quits blockBroadcastLoop

	// Quit the sync loop.
	// After this send has completed, no new peers will be accepted.
	pm.noMorePeers <- struct{}{}

	// Quit fetcher, txsyncLoop.
	close(pm.quitSync)

	// Disconnect existing sessions.
	// This also closes the gate for any new registrations on the peer set.
	// Sessions which are already established but not added to pm.peers yet
	// will exit when they try to register.
	pm.peers.Close()

	// Wait for all peer handler goroutines and the loops to come down.
	pm.wg.Wait()

	log.Info("Ethereum protocol stopped")
}
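
// newPeer creates an eth peer object over the given p2p connection, wrapping
// the message stream with metering support.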
func (pm *ProtocolManager) newPeer(pv int, p *p2p.Peer, rw p2p.MsgReadWriter) *peer {
	return newPeer(pv, p, newMeteredMsgWriter(rw))
}

// handle is the callback invoked to manage the life cycle of an eth peer. When
// this function terminates, the peer is disconnected.
func (pm *ProtocolManager) handle(p *peer) error {
	// Ignore maxPeers if this is a trusted peer
	if pm.peers.Len() >= pm.maxPeers && !p.Peer.Info().Network.Trusted {
		return p2p.DiscTooManyPeers
	}
	p.Log().Debug("Ethereum peer connected", "name", p.Name())

	// Execute the Ethereum handshake
	var (
		genesis = pm.blockchain.Genesis()
		head    = pm.blockchain.CurrentHeader()
		hash    = head.Hash()
		number  = head.Number.Uint64()
		td      = pm.blockchain.GetTd(hash, number)
	)
	if err := p.Handshake(pm.networkID, td, hash, genesis.Hash()); err != nil {
		p.Log().Debug("Ethereum handshake failed", "err", err)
		return err
	}
	if rw, ok := p.rw.(*meteredMsgReadWriter); ok {
		rw.Init(p.version)
	}
	// Register the peer locally
	if err := pm.peers.Register(p); err != nil {
		p.Log().Error("Ethereum peer registration failed", "err", err)
		return err
	}
	defer pm.removePeer(p.id)

	// Register the peer in the downloader. If the downloader considers it banned, we disconnect
	if err := pm.downloader.RegisterPeer(p.id, p.version, p); err != nil {
		return err
	}
	// Propagate existing transactions. New transactions appearing
	// after this will be sent via broadcasts.
	pm.syncTransactions(p)

	// If we're DAO hard-fork aware, validate any remote peer with regard to the hard-fork
	if daoBlock := pm.chainconfig.DAOForkBlock; daoBlock != nil {
		// Request the peer's DAO fork header for extra-data validation
		if err := p.RequestHeadersByNumber(daoBlock.Uint64(), 1, 0, false); err != nil {
			return err
		}
		// Start a timer to disconnect if the peer doesn't reply in time
		p.forkDrop = time.AfterFunc(daoChallengeTimeout, func() {
			p.Log().Debug("Timed out DAO fork-check, dropping")
			pm.removePeer(p.id)
		})
		// Make sure it's cleaned up if the peer dies off
		defer func() {
			if p.forkDrop != nil {
				p.forkDrop.Stop()
				p.forkDrop = nil
			}
		}()
	}
	// If we have any explicit whitelist block hashes, request them
	for bn := range pm.whitelist {
		p.Log().Debug("Requesting whitelist block", "number", bn)
		if err := p.RequestHeadersByNumber(bn, 1, 0, false); err != nil {
			p.Log().Error("Whitelist request failed", "err", err, "number", bn, "peer", p.id)
			return err
		}
	}
	// Main loop. Handle incoming messages until the connection is torn down.
	for {
		if err := pm.handleMsg(p); err != nil {
			p.Log().Debug("Ethereum message handling failed", "err", err)
			return err
		}
	}
}

// handleMsg is invoked whenever an inbound message is received from a remote
// peer. The remote connection is torn down upon returning any error.
func (pm *ProtocolManager) handleMsg(p *peer) error {
	// Read the next message from the remote peer, and ensure it's fully consumed
	msg, err := p.rw.ReadMsg()
	if err != nil {
		return err
	}
	if msg.Size > ProtocolMaxMsgSize {
		return errResp(ErrMsgTooLarge, "%v > %v", msg.Size, ProtocolMaxMsgSize)
	}
	defer msg.Discard()

	// Handle the message depending on its contents
	switch {
	case msg.Code == StatusMsg:
		// Status messages should never arrive after the handshake
		return errResp(ErrExtraStatusMsg, "uncontrolled status message")

	// Block header query, collect the requested headers and reply
	case msg.Code == GetBlockHeadersMsg:
		// Decode the complex header query
		var query getBlockHeadersData
		if err := msg.Decode(&query); err != nil {
			return errResp(ErrDecode, "%v: %v", msg, err)
		}
		hashMode := query.Origin.Hash != (common.Hash{})
		first := true
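		// maxNonCanonical caps how many blocks GetAncestor is allowed to check
		// individually off the canonical chain before giving up.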
		maxNonCanonical := uint64(100)

		// Gather headers until the fetch or network limits are reached
		var (
			bytes   common.StorageSize
			headers []*types.Header
			unknown bool
		)
		for !unknown && len(headers) < int(query.Amount) && bytes < softResponseLimit && len(headers) < downloader.MaxHeaderFetch {
			// Retrieve the next header satisfying the query
			var origin *types.Header
			if hashMode {
				if first {
					first = false
					origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
					if origin != nil {
						query.Origin.Number = origin.Number.Uint64()
					}
				} else {
					origin = pm.blockchain.GetHeader(query.Origin.Hash, query.Origin.Number)
				}
			} else {
				origin = pm.blockchain.GetHeaderByNumber(query.Origin.Number)
			}
			if origin == nil {
				break
			}
			headers = append(headers, origin)
			bytes += estHeaderRlpSize

			// Advance to the next header of the query
			switch {
			case hashMode && query.Reverse:
				// Hash based traversal towards the genesis block
				ancestor := query.Skip + 1
				if ancestor == 0 {
					unknown = true
				} else {
					query.Origin.Hash, query.Origin.Number = pm.blockchain.GetAncestor(query.Origin.Hash, query.Origin.Number, ancestor, &maxNonCanonical)
					unknown = (query.Origin.Hash == common.Hash{})
				}
			case hashMode && !query.Reverse:
				// Hash based traversal towards the leaf block
				var (
					current = origin.Number.Uint64()
					next    = current + query.Skip + 1
				)
				if next <= current {
					infos, _ := json.MarshalIndent(p.Peer.Info(), "", "  ")
					p.Log().Warn("GetBlockHeaders skip overflow attack", "current", current, "skip", query.Skip, "next", next, "attacker", infos)
					unknown = true
				} else {
					if header := pm.blockchain.GetHeaderByNumber(next); header != nil {
						nextHash := header.Hash()
						expOldHash, _ := pm.blockchain.GetAncestor(nextHash, next, query.Skip+1, &maxNonCanonical)
						if expOldHash == query.Origin.Hash {
							query.Origin.Hash, query.Origin.Number = nextHash, next
						} else {
							unknown = true
						}
					} else {
						unknown = true
					}
				}
			case query.Reverse:
				// Number based traversal towards the genesis block
				if query.Origin.Number >= query.Skip+1 {
					query.Origin.Number -= query.Skip + 1
				} else {
					unknown = true
				}

			case !query.Reverse:
				// Number based traversal towards the leaf block
				query.Origin.Number += query.Skip + 1
			}
		}
		return p.SendBlockHeaders(headers)

	case msg.Code == BlockHeadersMsg:
		// A batch of headers arrived to one of our previous requests
		var headers []*types.Header
		if err := msg.Decode(&headers); err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		// If no headers were received, but we're expecting a DAO fork check, maybe this is the reply
		if len(headers) == 0 && p.forkDrop != nil {
			// Possibly an empty reply to the fork header checks, sanity check TDs
			verifyDAO := true

			// If we already have a DAO header, we can check the peer's TD against it. If
			// the peer's ahead of this, it too must have a reply to the DAO check
			if daoHeader := pm.blockchain.GetHeaderByNumber(pm.chainconfig.DAOForkBlock.Uint64()); daoHeader != nil {
				if _, td := p.Head(); td.Cmp(pm.blockchain.GetTd(daoHeader.Hash(), daoHeader.Number.Uint64())) >= 0 {
					verifyDAO = false
				}
			}
			// If we're seemingly on the same chain, disable the drop timer
			if verifyDAO {
				p.Log().Debug("Seems to be on the same side of the DAO fork")
				p.forkDrop.Stop()
				p.forkDrop = nil
				return nil
			}
		}
		// Filter out any explicitly requested headers, deliver the rest to the downloader
		filter := len(headers) == 1
		if filter {
			// Check for any responses not matching our whitelist
			if expected, ok := pm.whitelist[headers[0].Number.Uint64()]; ok {
				actual := headers[0].Hash()
				if !bytes.Equal(expected.Bytes(), actual.Bytes()) {
					p.Log().Info("Dropping peer with non-matching whitelist block", "number", headers[0].Number.Uint64(), "hash", actual, "expected", expected)
					return errors.New("whitelist block mismatch")
				}
				p.Log().Debug("Whitelist block verified", "number", headers[0].Number.Uint64(), "hash", expected)
			}

			// If it's a potential DAO fork check, validate against the rules
			if p.forkDrop != nil && pm.chainconfig.DAOForkBlock.Cmp(headers[0].Number) == 0 {
				// Disable the fork drop timer
				p.forkDrop.Stop()
				p.forkDrop = nil

				// Validate the header and either drop the peer or continue
				if err := misc.VerifyDAOHeaderExtraData(pm.chainconfig, headers[0]); err != nil {
					p.Log().Debug("Verified to be on the other side of the DAO fork, dropping")
					return err
				}
				p.Log().Debug("Verified to be on the same side of the DAO fork")
				return nil
			}
			// Regardless of the fork checks, send the header to the fetcher just in case
			headers = pm.fetcher.FilterHeaders(p.id, headers, time.Now())
		}
		if len(headers) > 0 || !filter {
			err := pm.downloader.DeliverHeaders(p.id, headers)
			if err != nil {
				log.Debug("Failed to deliver headers", "err", err)
			}
		}

	case msg.Code == GetBlockBodiesMsg:
		// Decode the retrieval message
		msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
		if _, err := msgStream.List(); err != nil {
			return err
		}
		// Gather blocks until the fetch or network limits are reached
		var (
			hash   common.Hash
			bytes  int
			bodies []rlp.RawValue
		)
		for bytes < softResponseLimit && len(bodies) < downloader.MaxBlockFetch {
			// Retrieve the hash of the next block
			if err := msgStream.Decode(&hash); err == rlp.EOL {
				break
			} else if err != nil {
				return errResp(ErrDecode, "msg %v: %v", msg, err)
			}
			// Retrieve the requested block body, stopping if enough was found
			if data := pm.blockchain.GetBodyRLP(hash); len(data) != 0 {
				bodies = append(bodies, data)
				bytes += len(data)
			}
		}
		return p.SendBlockBodiesRLP(bodies)

	case msg.Code == BlockBodiesMsg:
		// A batch of block bodies arrived to one of our previous requests
		var request blockBodiesData
		if err := msg.Decode(&request); err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		// Deliver them all to the downloader for queuing
		transactions := make([][]*types.Transaction, len(request))
		uncles := make([][]*types.Header, len(request))

		for i, body := range request {
			transactions[i] = body.Transactions
			uncles[i] = body.Uncles
		}
		// Filter out any explicitly requested bodies, deliver the rest to the downloader
		filter := len(transactions) > 0 || len(uncles) > 0
		if filter {
			transactions, uncles = pm.fetcher.FilterBodies(p.id, transactions, uncles, time.Now())
		}
		if len(transactions) > 0 || len(uncles) > 0 || !filter {
			err := pm.downloader.DeliverBodies(p.id, transactions, uncles)
			if err != nil {
				log.Debug("Failed to deliver bodies", "err", err)
			}
		}

	case p.version >= eth63 && msg.Code == GetNodeDataMsg:
		// Decode the retrieval message
		msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
		if _, err := msgStream.List(); err != nil {
			return err
		}
		// Gather state data until the fetch or network limits are reached
		var (
			hash  common.Hash
			bytes int
			data  [][]byte
		)
		for bytes < softResponseLimit && len(data) < downloader.MaxStateFetch {
			// Retrieve the hash of the next state entry
			if err := msgStream.Decode(&hash); err == rlp.EOL {
				break
			} else if err != nil {
				return errResp(ErrDecode, "msg %v: %v", msg, err)
			}
			// Retrieve the requested state entry, stopping if enough was found
			if entry, err := pm.blockchain.TrieNode(hash); err == nil {
				data = append(data, entry)
				bytes += len(entry)
			}
		}
		return p.SendNodeData(data)

	case p.version >= eth63 && msg.Code == NodeDataMsg:
		// A batch of node state data arrived to one of our previous requests
		var data [][]byte
		if err := msg.Decode(&data); err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		// Deliver all to the downloader
		if err := pm.downloader.DeliverNodeData(p.id, data); err != nil {
			log.Debug("Failed to deliver node state data", "err", err)
		}

	case p.version >= eth63 && msg.Code == GetReceiptsMsg:
		// Decode the retrieval message
		msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
		if _, err := msgStream.List(); err != nil {
			return err
		}
		// Gather receipts until the fetch or network limits are reached
		var (
			hash     common.Hash
			bytes    int
			receipts []rlp.RawValue
		)
		for bytes < softResponseLimit && len(receipts) < downloader.MaxReceiptFetch {
			// Retrieve the hash of the next block
			if err := msgStream.Decode(&hash); err == rlp.EOL {
				break
			} else if err != nil {
				return errResp(ErrDecode, "msg %v: %v", msg, err)
			}
			// Retrieve the requested block's receipts, skipping if unknown to us
			results := pm.blockchain.GetReceiptsByHash(hash)
			if results == nil {
				if header := pm.blockchain.GetHeaderByHash(hash); header == nil || header.ReceiptHash != types.EmptyRootHash {
					continue
				}
			}
			// If known, encode and queue for response packet
			if encoded, err := rlp.EncodeToBytes(results); err != nil {
				log.Error("Failed to encode receipt", "err", err)
			} else {
				receipts = append(receipts, encoded)
				bytes += len(encoded)
			}
		}
		return p.SendReceiptsRLP(receipts)

	case p.version >= eth63 && msg.Code == ReceiptsMsg:
		// A batch of receipts arrived to one of our previous requests
		var receipts [][]*types.Receipt
		if err := msg.Decode(&receipts); err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		// Deliver all to the downloader
		if err := pm.downloader.DeliverReceipts(p.id, receipts); err != nil {
			log.Debug("Failed to deliver receipts", "err", err)
		}

	case msg.Code == NewBlockHashesMsg:
		var announces newBlockHashesData
		if err := msg.Decode(&announces); err != nil {
			return errResp(ErrDecode, "%v: %v", msg, err)
		}
		// Mark the hashes as present at the remote node
		for _, block := range announces {
			p.MarkBlock(block.Hash)
		}
		// Schedule all the unknown hashes for retrieval
		unknown := make(newBlockHashesData, 0, len(announces))
		for _, block := range announces {
			if !pm.blockchain.HasBlock(block.Hash, block.Number) {
				unknown = append(unknown, block)
			}
		}
		for _, block := range unknown {
			pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)
		}

	case msg.Code == NewBlockMsg:
		// Retrieve and decode the propagated block
		var request newBlockData
		if err := msg.Decode(&request); err != nil {
			return errResp(ErrDecode, "%v: %v", msg, err)
		}
		request.Block.ReceivedAt = msg.ReceivedAt
		request.Block.ReceivedFrom = p

		// Mark the peer as owning the block and schedule it for import
		p.MarkBlock(request.Block.Hash())
		pm.fetcher.Enqueue(p.id, request.Block)

		// Assuming the block is importable by the peer, but possibly not yet done so,
		// calculate the head hash and TD that the peer truly must have.
		var (
			trueHead = request.Block.ParentHash()
			trueTD   = new(big.Int).Sub(request.TD, request.Block.Difficulty())
		)
		// Update the peer's total difficulty if better than the previous
		if _, td := p.Head(); trueTD.Cmp(td) > 0 {
			p.SetHead(trueHead, trueTD)

			// Schedule a sync if above ours. Note, this will not fire a sync for a gap of
			// a single block (as the true TD is below the propagated block), however this
			// scenario should easily be covered by the fetcher.
			currentBlock := pm.blockchain.CurrentBlock()
			if trueTD.Cmp(pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64())) > 0 {
				go pm.synchronise(p)
			}
		}

	case msg.Code == TxMsg:
		// Transactions arrived, make sure we have a valid and fresh chain to handle them
		if atomic.LoadUint32(&pm.acceptTxs) == 0 {
			break
		}
		// Transactions can be processed, parse all of them and deliver to the pool
		var txs []*types.Transaction
		if err := msg.Decode(&txs); err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		for i, tx := range txs {
			// Validate and mark the remote transaction
			if tx == nil {
				return errResp(ErrDecode, "transaction %d is nil", i)
			}
			p.MarkTransaction(tx.Hash())
		}
		pm.txpool.AddRemotes(txs)

	default:
		return errResp(ErrInvalidMsgCode, "%v", msg.Code)
	}
	return nil
}

// BroadcastBlock will either propagate a block to a subset of its peers, or
// will only announce its availability (depending on what's requested).
func (pm *ProtocolManager) BroadcastBlock(block *types.Block, propagate bool) {
	hash := block.Hash()
	peers := pm.peers.PeersWithoutBlock(hash)

	// If propagation is requested, send to a subset of the peers
	if propagate {
		// Calculate the TD of the block (it's not imported yet, so block.Td is not valid)
		var td *big.Int
		if parent := pm.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1); parent != nil {
			td = new(big.Int).Add(block.Difficulty(), pm.blockchain.GetTd(block.ParentHash(), block.NumberU64()-1))
		} else {
			log.Error("Propagating dangling block", "number", block.Number(), "hash", hash)
			return
		}
		// Send the block to a subset of our peers
		transferLen := int(math.Sqrt(float64(len(peers))))
		if transferLen < minBroadcastPeers {
			transferLen = minBroadcastPeers
		}
		if transferLen > len(peers) {
			transferLen = len(peers)
		}
		transfer := peers[:transferLen]
		for _, peer := range transfer {
			peer.AsyncSendNewBlock(block, td)
		}
		log.Trace("Propagated block", "hash", hash, "recipients", len(transfer), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
		return
	}
	// Otherwise, if the block is indeed in our own chain, announce it
	if pm.blockchain.HasBlock(hash, block.NumberU64()) {
		for _, peer := range peers {
			peer.AsyncSendNewBlockHash(block)
		}
		log.Trace("Announced block", "hash", hash, "recipients", len(peers), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
	}
}

// BroadcastTxs will propagate a batch of transactions to all peers which are not known to
// already have the given transaction.
func (pm *ProtocolManager) BroadcastTxs(txs types.Transactions) {
	var txset = make(map[*peer]types.Transactions)

	// Broadcast transactions to a batch of peers not knowing about it
	for _, tx := range txs {
		peers := pm.peers.PeersWithoutTx(tx.Hash())
		for _, peer := range peers {
			txset[peer] = append(txset[peer], tx)
		}
		log.Trace("Broadcast transaction", "hash", tx.Hash(), "recipients", len(peers))
	}
	// FIXME include this again: peers = peers[:int(math.Sqrt(float64(len(peers))))]
	for peer, txs := range txset {
		peer.AsyncSendTransactions(txs)
	}
}

// Mined broadcast loop
func (pm *ProtocolManager) minedBroadcastLoop() {
	// automatically stops when unsubscribed
	for obj := range pm.minedBlockSub.Chan() {
		if ev, ok := obj.Data.(core.NewMinedBlockEvent); ok {
			pm.BroadcastBlock(ev.Block, true)  // First propagate block to peers
			pm.BroadcastBlock(ev.Block, false) // Only then announce to the rest
		}
	}
}
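
// Transaction broadcast loop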
func (pm *ProtocolManager) txBroadcastLoop() {
	for {
		select {
		case event := <-pm.txsCh:
			pm.BroadcastTxs(event.Txs)

		// Err() channel will be closed when unsubscribing.
		case <-pm.txsSub.Err():
			return
		}
	}
}

// NodeInfo represents a short summary of the Ethereum sub-protocol metadata
// known about the host peer.
type NodeInfo struct {
	Network    uint64              `json:"network"`    // Ethereum network ID (1=Frontier, 2=Morden, 3=Ropsten, 4=Rinkeby)
	Difficulty *big.Int            `json:"difficulty"` // Total difficulty of the host's blockchain
	Genesis    common.Hash         `json:"genesis"`    // SHA3 hash of the host's genesis block
	Config     *params.ChainConfig `json:"config"`     // Chain configuration for the fork rules
	Head       common.Hash         `json:"head"`       // SHA3 hash of the host's best owned block
}

// NodeInfo retrieves some protocol metadata about the running host node.
func (pm *ProtocolManager) NodeInfo() *NodeInfo {
	currentBlock := pm.blockchain.CurrentBlock()
	return &NodeInfo{
		Network:    pm.networkID,
		Difficulty: pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64()),
		Genesis:    pm.blockchain.Genesis().Hash(),
		Config:     pm.blockchain.Config(),
		Head:       currentBlock.Hash(),
	}
}