* dep: upgrade secp256k1 to use btcec/v2 v2.3.2 and update insecurity pkg
* build ci: upgrade go to 1.19 and golangci-lint to 1.50.1
* docs: fix formatting that does not follow goimports
* dep: redirect github.com/bnb-chain/tendermint to v0.31.13
* ci: disable GOPROXY
This PR adds an additional API called `NewBatchWithSize` for the db
batcher. It turns out that leveldb batch memory allocation is
very inefficient: the allocation step of a leveldb Batch is too
small when the batch size is large, so building a leveldb batch
of 100MB can take a few seconds.
Luckily, leveldb also offers another API called `MakeBatch` which can
pre-allocate the memory area, so it can be used whenever the
approximate size of the batch is known in advance.
This is needed by the new state scheme PR, which commits a batch of
trie nodes in a single batch; the feature is implemented here in a separate PR.
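A minimal sketch of the new method, assuming a goleveldb-backed `Database`
wrapper and `batch` type shaped like go-ethereum's leveldb backend (only
`leveldb.MakeBatch` is taken from the goleveldb API; the rest is illustrative):

```go
// NewBatchWithSize creates a write-only batch whose internal buffer is
// pre-allocated, avoiding the repeated grow-and-copy cycles that make
// building a ~100MB batch slow.
func (db *Database) NewBatchWithSize(size int) ethdb.Batch {
	return &batch{
		db: db.db,
		b:  leveldb.MakeBatch(size), // pre-allocates the batch buffer
	}
}
```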
* metrics: add unlock address to metrics when miner module is enabled
* metrics: add miner config into metrics server
* metrics: add device-info into metrics server
* metrics: fix the format of device info
* metrics: remove device-info
* eth: fix a rare data race on CHT challenge reply / shutdown
* trie: check children's existence concurrently for snap heal
* eth/protocols/snap: fix problems due to idle-but-busy peers
* eth/filters: change filter block to be by-ref (#26054)
This PR changes the block field in the filter to be a pointer, to disambiguate between an empty hash and no hash.
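A minimal sketch of the idea (struct and field names are illustrative, not
taken from the actual diff):

```go
import "github.com/ethereum/go-ethereum/common"

// Before: the zero value common.Hash{} cannot distinguish "filter on
// the empty hash" from "no block hash given at all".
type filterByValue struct {
	block common.Hash
}

// After: a nil pointer means "no block filter", while a non-nil
// pointer to the zero hash still explicitly matches the empty hash.
type filterByRef struct {
	block *common.Hash
}
```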
* rpc: handle wrong HTTP batch response length (#26064)
* eth/protocols/snap: throttle trie heal requests when peers DoS us (#25666)
* eth/protocols/snap: throttle trie heal requests when peers DoS us
* eth/protocols/snap: lower heal throttle log to debug
Co-authored-by: Martin Holst Swende <martin@swende.se>
* eth/protocols/snap: fix comment
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Jordan Krage <jmank88@gmail.com>
1. remove the unnecessary NewTxsEvent subscriber, which was only used for the PoW resubmit check.
2. unsubscribe as soon as possible, before another fillTransactions, to avoid blocking others.
* worker: add a double-sign check for safety.
For corner cases such as reorg-after-reorg, use a slice to record the
parents of all broadcast blocks, so earlier entries are not overwritten.
When a new block is imported, there is no need to commit the current
work, even if the imported block is off-turn and our own block is in-turn.
By the time the off-turn block is received, the in-turn block is already
too late to broadcast; delivering the later block would only cause many
reorgs, which is not reasonable.
Also make sure all useless work is discarded, to avoid goroutine leaks.
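A minimal sketch of the double-sign guard, with `recentSealedParents`
invented for illustration (the worker's real bookkeeping uses a slice of
broadcast blocks' parents, per the description above):

```go
// Refuse to seal a second block on a parent we already built on:
// signing sibling blocks at the same height is a double sign.
if _, sealed := w.recentSealedParents[header.ParentHash]; sealed {
	return fmt.Errorf("refusing to double sign on parent %x", header.ParentHash)
}
w.recentSealedParents[header.ParentHash] = struct{}{}
```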
It may not be efficient to schedule fillTransactions whenever new
transactions arrive, since that could keep the CPU busy constantly.
To make it more efficient (see the sketch after this list):
1. schedule fillTransactions only once a certain number of transactions have arrived;
2. or when there is not much time left.
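A minimal sketch of that trigger, with the threshold names
(`minTxsForFill`, `minTimeLeft`) invented for illustration:

```go
// Coalesce new-transaction events: only reschedule fillTransactions
// when enough transactions have piled up, or the deadline is close.
if newTxCount >= minTxsForFill || time.Until(fillDeadline) < minTimeLeft {
	w.scheduleFill() // hypothetical: kicks off another fillTransactions
}
```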
Currently, the validator only tries once to get transactions from the TxPool
to produce the block. However, new transactions can arrive while the
validator is committing transactions.
The validator should be allowed to add these newly arrived transactions as
long as Header.Timestamp is not reached.
This commit will:
** make commitTransactions return an error code
** drop the current mining block when a new block is imported
** try fillTransactions several times for the best result, not using
append mode, so the GasPrice rule is followed.
** check whether there is enough time for another fillTransactions.
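A minimal sketch of the resulting retry loop, with the error sentinels
(`errNewHeadArrived`, `errTimestampReached`) invented for illustration:

```go
for time.Now().Unix() < int64(work.header.Time) {
	err := w.fillTransactions(work)
	if errors.Is(err, errNewHeadArrived) {
		return // a new block was imported: drop the current mining block
	}
	if errors.Is(err, errTimestampReached) {
		break // Header.Timestamp reached: seal what we have
	}
	// otherwise, newly arrived transactions get another fill pass
}
```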
This avoids copying the input []byte while decoding trie nodes. In most
cases, particularly when the input slice is provided by the underlying
database, this optimization is safe to use.
For cases where the origin of the input slice is unclear, the copying version
is retained. The new code performs better even when the input must be
copied, because it is now only copied once in decodeNode.
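A minimal sketch of the split, assuming a non-copying decode variant next
to the safe entry point (shapes follow upstream go-ethereum's trie package,
but treat the exact signatures as illustrative):

```go
// decodeNode is the safe entry point: the input is copied exactly once,
// so the non-copying variant below may keep references into the buffer.
func decodeNode(hash, buf []byte) (node, error) {
	return decodeNodeUnsafe(hash, common.CopyBytes(buf))
}
```

decodeNodeUnsafe then does the actual decoding and is called directly with
database-owned slices, which are never mutated after being returned.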
`fillTransactions` will call `commitTransactions` twice; if the delay
timer expires during the first call, it will never fire again in the
second commitTransactions call.
Pseudo code:
x := time.NewTimer(time.Second)
<-x.C // fires once, after one second
fmt.Println("read delay 1")
<-x.C // an expired timer never fires again unless it is Reset
fmt.Println("read delay 2") // will never hit
Pre-sealing an empty block is a PoW feature: it delivers an empty block as
soon as possible to gain the block mining reward.
It is useless for PoS consensus and does not work for BSC either.
Delete the code to make the worker simpler.
It could be very old PoW logic, which tries to add more transactions into
the pending block even when mining is stopped.
Mining can be stopped when:
1. a download has started;
2. it is manually stopped by RPC.
It is unnecessary to add more transactions into the pending block once a
validator is stopped, and updateSnapshot() is not needed either, since it
only serves the pending mining snapshot.
Right now, DelayLeftOver is used to reserve time for block finalization,
not block broadcast, and the code does not work as expected.
The general block generation could be described as:
|- fillTransactions -|- finalize a block -|- wait until the period(3s) reached -|- broadcast -|
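A minimal sketch of how the packing deadline could be derived under that
model, assuming DelayLeftOver reserves the finalize + broadcast slice at
the end of the period:

```go
// Stop filling transactions DelayLeftOver before the period ends,
// leaving that slice for finalizing and broadcasting the block.
periodEnd := time.Unix(int64(header.Time), 0) // end of the 3s period
fillDeadline := periodEnd.Add(-w.config.DelayLeftOver)
timeLeft := time.Until(fillDeadline)
```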
resubmit intervalAdjust is for PoW only; remove it to make the worker simpler.
With PoW, there is a periodic timer to check whether it is time to stop
packing transactions and start calculating the desired hash value, since
another miner could win the hash computation if too much time is spent packing.
It commits the current work to calculate the root at a reasonable time,
and schedules new work to build a bigger block if new transactions arrive.
When there are too many transactions in the TxPool, the interval of the
resubmit timer is increased, and vice versa.
But it is not needed with PoS-based consensus: the block interval is
determined by PoS, and there is already a timer that stops overly long packing.
This change ensures the HTTP server will always terminate within
at most 5s, even when all connections are busy and do not become
idle.
Co-authored-by: Felix Lange <fjl@twurst.com>
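A minimal sketch of the pattern with net/http, with `srv` assumed to be the
underlying http.Server:

```go
// Give in-flight requests up to 5 seconds to finish, then force-close
// any connections that are still busy and never became idle.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := srv.Shutdown(ctx); err != nil {
	srv.Close() // hard-close the remaining connections
}
```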