It may not be efficient to schedule fillTransactions whenever a new transaction arrives;
it could keep the CPU running constantly.
To make it more efficient:
1. schedule fillTransactions once a certain number of transactions have arrived;
2. or when there is not much time left.
Currently, the validator only tries once to get transactions from the TxPool to produce the block.
However, new transactions could arrive while the validator is committing transactions.
The validator should be allowed to add these newly arrived transactions as long as
Header.Timestamp has not been reached.
This commit will:
** make commitTransactions return an error code
** drop the current mining block when a new block is imported
** try fillTransactions several times to get the best block;
append mode is not used, so the GasPrice ordering rule still holds.
** check if there is enough time left for another fillTransactions.
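A rough Go sketch of the loop described above (all names here are illustrative, not the actual worker code):

// Illustrative only: retry fillTransactions until Header.Timestamp is near.
// Each round rebuilds the block from scratch (no append mode), so the
// GasPrice ordering rule still holds.
func retryFillTransactions(headerTime uint64, fill func() error, waitNewTxs func(), minRound time.Duration) {
    deadline := time.Unix(int64(headerTime), 0)
    for {
        if err := fill(); err != nil {
            return // e.g. a new block was imported: drop the current mining block
        }
        if time.Until(deadline) < minRound {
            return // not enough time left for another fillTransactions
        }
        waitNewTxs() // wait until enough new transactions have arrived
    }
}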
This avoids copying the input []byte while decoding trie nodes. In most
cases, particularly when the input slice is provided by the underlying
database, this optimization is safe to use.
For cases where the origin of the input slice is unclear, the copying version
is retained. The new code performs better even when the input must be
copied, because it is now only copied once in decodeNode.
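A sketch of the resulting split, consistent with the description above (decodeNodeUnsafe is assumed to be the new zero-copy variant):

// decodeNode is the safe version: it copies the input once up front and
// then delegates to the zero-copy decoder, so callers whose buffers have
// an unclear origin stay correct.
func decodeNode(hash, buf []byte) (node, error) {
    return decodeNodeUnsafe(hash, common.CopyBytes(buf))
}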
This removes an RPC test which takes > 90s to execute, and updates the
internal/guide tests to use lighter scrypt parameters.
Co-authored-by: Felix Lange <fjl@twurst.com>
`fillTransactions` calls `commitTransactions` twice. If the delay timer
expires during the first call, it will never fire again, so it cannot be
triggered in the second `commitTransactions` call.
Pseudo code:
x := time.NewTimer(time.Second)
<-x.C // fires once, after one second
fmt.Println("read delay 1")
<-x.C // blocks forever: the one-shot timer already fired and was drained
fmt.Println("read delay 2") // will never hit
This adds a way to specify HTTP headers per request.
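A brief usage sketch, assuming the per-request headers are attached through a context helper such as rpc.NewContextWithHeaders:

h := http.Header{}
h.Set("X-Custom-Header", "value")
ctx := rpc.NewContextWithHeaders(context.Background(), h)

var version string
if err := client.CallContext(ctx, &version, "web3_clientVersion"); err != nil {
    log.Fatal(err)
}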
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
The pre-seal empty block exists for PoW, to deliver an empty block ASAP and
gain the block mining reward.
It is useless for PoS consensus, and it does not work for BSC either.
Delete the code to make the worker simpler.
It could be very old PoW logic, which tries to add more transactions
into the pending block when mining is stopped.
Mining can be stopped when:
1. a download has started;
2. it is manually stopped via RPC.
It is unnecessary to add more transactions into the pending block once a validator is stopped.
And updateSnapshot() is not needed either; its only purpose is to get the pending mining snapshot.
Right now, DelayLeftOver is used to reserve time for block finalization, not block
broadcast. And the code does not work as expected.
The general block generation process can be described as:
|- fillTransactions -|- finalize a block -|- wait until the period(3s) reached -|- broadcast -|
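A hedged sketch of how the packing deadline could be derived once DelayLeftOver reserves the tail of the period for finalize and broadcast (names are illustrative):

// blockTime is Header.Timestamp; delayLeftOver reserves time at the end
// of the 3s period for finalizing and broadcasting the block.
blockDeadline := time.Unix(int64(blockTime), 0)
packingDeadline := blockDeadline.Add(-delayLeftOver)
if time.Now().After(packingDeadline) {
    // stop filling transactions; finalize and broadcast now
}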
rpc: fix connection tracking in Server
When upgrading to mapset/v2 with generics, the set element type used in
rpc.Server had to be changed to *ServerCodec because ServerCodec is not
'comparable'. While the distinction is technically correct, we know all
possible ServerCodec types, and all of them are comparable. So just use
a map instead.
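A sketch of the resulting tracking structure, keyed directly by the interface value (field names assumed):

type Server struct {
    mutex  sync.Mutex
    codecs map[ServerCodec]struct{} // all concrete ServerCodec types are comparable
}

func (s *Server) trackCodec(codec ServerCodec) {
    s.mutex.Lock()
    defer s.mutex.Unlock()
    s.codecs[codec] = struct{}{}
}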
resubmit intervalAdjust is for PoW only; remove it to make the worker simpler.
With PoW, a periodic resubmit timer checks whether it is time to stop packing
transactions and start calculating the desired hash value, since another miner
could win the hash computation if too much time is spent packing transactions.
It commits the current work to calculate the root at a reasonable time.
And it schedules new work to get a bigger block if new transactions have arrived.
When there are too many transactions in the TxPool, the interval of the resubmit timer is
increased, and vice versa (sketched below).
But it is not needed with PoS-based consensus, since the block interval is determined by PoS,
and there is already a timer to stop packing that runs too long.
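An illustrative sketch of the removed adjustment rule (not the exact code):

const adjustFactor = 1.5
if txPoolCongested {
    interval = time.Duration(float64(interval) * adjustFactor) // many pending txs: resubmit less often
} else {
    interval = time.Duration(float64(interval) / adjustFactor) // pool drained: resubmit sooner
}
resubmitTimer.Reset(interval)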
This change ensures the HTTP server will always terminate within
at most 5s, even when all connections are busy and do not become
idle.
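A sketch of the usual Go pattern for such a bounded shutdown (the actual node code may differ):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := httpServer.Shutdown(ctx); err != nil {
    // Graceful drain timed out after 5s: force-close remaining connections.
    httpServer.Close()
}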
Co-authored-by: Felix Lange <fjl@twurst.com>
* internal/ethapi: error if tx args includes chain id that doesn't match local
* internal/ethapi: simplify code a bit
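A hedged sketch of the check (field and helper names assumed):

want := b.ChainConfig().ChainID
if args.ChainID != nil && args.ChainID.ToInt().Cmp(want) != 0 {
    return fmt.Errorf("chainId does not match node's (have %d, want %d)", args.ChainID.ToInt(), want)
}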
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* Remove locking in (*BlockChain).ExportN
Since ExportN is read-only, it shouldn't need the lock. (?)
* Add hash check to detect reorgs during export (sketched after this list).
* fix check order
* Update blockchain.go
* Update blockchain.go
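A sketch of the reorg detection: each exported block must link to the previous one by parent hash (close to, though not necessarily identical to, the final code):

var parentHash common.Hash
for nr := first; nr <= last; nr++ {
    block := bc.GetBlockByNumber(nr)
    if block == nil {
        return fmt.Errorf("export failed on #%d: block not found", nr)
    }
    if nr > first && block.ParentHash() != parentHash {
        return errors.New("export failed: chain reorg during export")
    }
    parentHash = block.Hash()
    if err := block.EncodeRLP(w); err != nil {
        return err
    }
}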
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interfaces.
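A brief usage sketch of such a typed cache (the constructor name here is hypothetical; methods follow the hashicorp naming):

cache := lru.NewCache[common.Hash, *types.Block](64) // hypothetical constructor

cache.Add(block.Hash(), block)
if b, ok := cache.Get(block.Hash()); ok {
    fmt.Println("cached block:", b.NumberU64())
}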
This PR changes the pending tx subscription to return RPCTransaction types instead of normal Transaction objects. This fixes the inconsistencies with other tx-returning API methods (e.g. getTransactionByHash), and also fills in the sender value for the tx.
Co-authored-by: @s1na
This fixes a problem in the SizeConstrainedLRU. The SCLRU uses an underlying simple LRU which is not thread-safe.
During a Get operation, the recentness of the accessed item is updated, so it is not a pure read operation. Therefore, the mutex we need is a full mutex, not an RLock.
This PR changes the mutex to a regular Mutex instead of an RWMutex, so a reviewer can see at a glance that all affected locations are fixed.
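A sketch of the corrected accessor (type and field names assumed): even Get takes the exclusive lock, because the underlying LRU mutates its recency bookkeeping on every lookup:

func (c *SizeConstrainedLRU) Get(key common.Hash) ([]byte, bool) {
    c.lock.Lock() // full lock: Get is not a pure read
    defer c.lock.Unlock()
    return c.lru.Get(key)
}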
This changes how we read performance metrics from the Go runtime. Instead
of using runtime.ReadMemStats, we now rely on the API provided by package
runtime/metrics.
runtime/metrics provides more accurate information. For example, the new
interface has better reporting of memory use. In my testing, the reported
value of held memory more accurately reflects the usage reported by the OS.
The semantics of metrics system/memory/allocs and system/memory/frees have
changed to report amounts in bytes. ReadMemStats only reported the count of
allocations in number-of-objects. This is imprecise: 'tiny objects' are not
counted because the runtime allocates them in batches; and certain
improvements in allocation behavior, such as struct size optimizations,
will be less visible when the number of allocs doesn't change.
Changing allocation reports to be in bytes makes it appear in graphs that
lots more is being allocated. I don't think that's a problem because this
metric is primarily interesting for geth developers.
The metric system/memory/pauses has been changed to report statistical
values from the histogram provided by the runtime. Its name in influxdb has
changed from geth.system/memory/pauses.meter to
geth.system/memory/pauses.histogram.
We also have a new histogram metric, system/cpu/schedlatency, reporting the
Go scheduler latency.
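For reference, a brief sketch of reading such values through runtime/metrics (these metric names exist in the runtime/metrics catalogue; the mapping onto geth's metric names follows the description above):

samples := []metrics.Sample{
    {Name: "/gc/heap/allocs:bytes"},    // allocations, in bytes
    {Name: "/gc/heap/frees:bytes"},     // frees, in bytes
    {Name: "/gc/pauses:seconds"},       // GC pause histogram
    {Name: "/sched/latencies:seconds"}, // scheduler latency histogram
}
metrics.Read(samples)
allocatedBytes := samples[0].Value.Uint64()
pauseHist := samples[2].Value.Float64Histogram()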
This adds an option to direct log output to a file. This feature has been
requested a lot. It's sometimes useful to have this available when running
geth in an environment that doesn't easily allow redirecting the output.
Notably, there is no support for log rotation with this change. The --log.file option
opens the file once on startup and then keeps writing to the file handle.
This can become an issue when external log rotation tools are involved, so it's
best not to use them with this option for now.
When the interpreter is configured to use extra-eips, this change makes it so that all the opcodes are deep-copied, to prevent accidental modification of the 'base' jumptable.
Closes: #26136
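A sketch of the deep copy, per the description above (names may differ slightly from the final code):

func copyJumpTable(source *JumpTable) *JumpTable {
    dest := *source // copies the array of operation pointers
    for i, op := range source {
        if op != nil {
            opCopy := *op
            dest[i] = &opCopy // deep-copy each entry so the base table stays untouched
        }
    }
    return &dest
}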
Co-authored-by: Martin Holst Swende <martin@swende.se>
PR #26082 added account listing to OnSignerStartup, but did not consider the case where a user has a large number of accounts, which would be annoying to display.
This PR updates showAccounts() so that if there are more than 20 accounts available the user sees the first 20 displayed in the console followed by: First 20 accounts listed (N more available).
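A hedged sketch of the truncation logic in showAccounts() (variable names assumed):

const maxShown = 20
shown := accounts
if len(shown) > maxShown {
    shown = shown[:maxShown]
}
for _, account := range shown {
    fmt.Println(account.Address.Hex())
}
if more := len(accounts) - maxShown; more > 0 {
    fmt.Printf("First %d accounts listed (%d more available)\n", maxShown, more)
}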
Co-authored-by: Martin Holst Swende <martin@swende.se>
Many of the other types have a function to convert the type to a big.Int,
but Address was missing this function.
It is useful to be able to turn an Address into a big.Int when doing
EVM-like computations natively in Go. Sometimes a Solidity address
type is cast to a uint256, and having a Big method on the Address
type makes this easy.
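A sketch of the method and its use (the implementation presumably mirrors the existing Hash.Big):

// Big converts the 20 address bytes to a big-endian *big.Int.
func (a Address) Big() *big.Int {
    return new(big.Int).SetBytes(a[:])
}

// Usage:
addr := common.HexToAddress("0x0000000000000000000000000000000000000001")
asUint256 := addr.Big() // == big.NewInt(1)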