Commit Graph

19 Commits

Author SHA1 Message Date
zjubfd
2ce00adb55
[R4R] performance improvement in many aspects (#257)
* focus on performance improvements in many aspects:

1. Do BlockBody verification concurrently;
2. Do calculation of the intermediate root concurrently;
3. Preload accounts before processing blocks;
4. Make the snapshot layers configurable;
5. Reuse some objects to reduce GC pressure (see the sketch after this list).
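
A minimal sketch of the object-reuse idea in item 5, assuming the reused object is a scratch buffer (names are illustrative, not the actual change):

    package sketch

    import (
        "bytes"
        "sync"
    )

    // bufPool hands out reusable scratch buffers so that block processing
    // does not allocate a fresh buffer per block for the GC to reclaim.
    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func process(payload []byte) {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()        // drop leftovers from the previous user
        buf.Write(payload) // ... use buf as scratch space ...
        bufPool.Put(buf)   // hand it back instead of discarding it
    }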

* rlp: improve decoder stream implementation (#22858)

This commit makes various cleanup changes to rlp.Stream.

* rlp: shrink Stream struct

This removes a lot of unused padding space in Stream by reordering the
fields. The size of Stream changes from 120 bytes to 88 bytes. Stream
instances are internally cached and reused using sync.Pool, so this does
not improve performance.
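
A minimal sketch of the padding effect behind this change (illustrative types, not the actual Stream layout): grouping fields of the same size removes alignment padding.

    package sketch

    // On 64-bit platforms unsafe.Sizeof(padded{}) is 32: each bool is
    // followed by 7 bytes of padding so the next uint64 stays aligned.
    type padded struct {
        a bool
        b uint64
        c bool
        d uint64
    }

    // Reordering the same fields shrinks the struct to 24 bytes: the two
    // bools now share a single 8-byte slot at the end.
    type packed struct {
        b uint64
        d uint64
        a bool
        c bool
    }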

* rlp: simplify list stack

The list stack kept track of the size of the current list context as
well as the current offset into it. The size had to be stored in the
stack in order to subtract it from the remaining bytes of any enclosing
list in ListEnd. It seems that this can be implemented in a simpler
way: just subtract the size from the enclosing list context in List instead.
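
A minimal sketch of the simplified accounting (not the actual rlp.Stream code): each stack entry holds the bytes remaining in an enclosing list, and a nested list's size is charged to its parent the moment the list is entered, so leaving a list is just a pop.

    package sketch

    // listStack tracks, per enclosing list, how many payload bytes are
    // still unconsumed.
    type listStack []uint64

    // enterList is called when a nested list of the given size starts.
    // Its bytes are subtracted from the parent up front instead of being
    // reconciled later when the list ends.
    func (st *listStack) enterList(size uint64) {
        if n := len(*st); n > 0 {
            (*st)[n-1] -= size
        }
        *st = append(*st, size)
    }

    // exitList is called when the current list ends; with the up-front
    // accounting above it only needs to pop the entry.
    func (st *listStack) exitList() {
        *st = (*st)[:len(*st)-1]
    }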

* rlp: use atomic.Value for type cache (#22902)

All encoding/decoding operations read the type cache to find the
writer/decoder function responsible for a type. When analyzing CPU
profiles of geth during sync, I found that the use of sync.RWMutex in
cache lookups appears in the profiles. It seems we are running into
CPU cache contention problems when package rlp is heavily used
on all CPU cores during sync.

This change makes it use atomic.Value + a writer lock instead of
sync.RWMutex. In the common case, where the typeinfo entry is already
present in the cache, we simply fetch the map and look up the type.
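
A minimal sketch of the pattern (illustrative names, not the actual rlp type cache): readers load an immutable map through atomic.Value without taking any lock, while writers serialize on a mutex, copy the map, and publish the copy.

    package sketch

    import (
        "sync"
        "sync/atomic"
    )

    type cache struct {
        cur atomic.Value // always holds the current read-only map[string]int
        mu  sync.Mutex   // serializes writers; readers never take it
    }

    func newCache() *cache {
        c := new(cache)
        c.cur.Store(map[string]int{})
        return c
    }

    // lookup is the hot path: a single atomic load, no locking.
    func (c *cache) lookup(key string) (int, bool) {
        v, ok := c.cur.Load().(map[string]int)[key]
        return v, ok
    }

    // insert copies the map, adds the entry and atomically swaps in the copy.
    func (c *cache) insert(key string, val int) {
        c.mu.Lock()
        defer c.mu.Unlock()
        old := c.cur.Load().(map[string]int)
        next := make(map[string]int, len(old)+1)
        for k, v := range old {
            next[k] = v
        }
        next[key] = val
        c.cur.Store(next)
    }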

* rlp: optimize byte array handling (#22924)

This change improves the performance of encoding/decoding [N]byte.

    name                     old time/op    new time/op    delta
    DecodeByteArrayStruct-8     336ns ± 0%     246ns ± 0%  -26.98%  (p=0.000 n=9+10)
    EncodeByteArrayStruct-8     225ns ± 1%     148ns ± 1%  -34.12%  (p=0.000 n=10+10)

    name                     old alloc/op   new alloc/op   delta
    DecodeByteArrayStruct-8      120B ± 0%       48B ± 0%  -60.00%  (p=0.000 n=10+10)
    EncodeByteArrayStruct-8     0.00B          0.00B          ~     (all equal)
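
A minimal sketch of the decoding side of the idea, assuming a stream helper that fills a caller-provided buffer (names are illustrative, not the exact rlp internals): the RLP string is read straight into the array's own storage, so no temporary slice is allocated.

    package sketch

    import "fmt"

    // readString stands in for a stream method that copies the next RLP
    // string into the supplied buffer and fails on a length mismatch.
    type readString func(buf []byte) error

    // decodeByteArray fills a fixed-size array by passing a slice of its
    // backing storage to the reader instead of allocating a []byte first.
    func decodeByteArray(read readString, dst []byte) error {
        if err := read(dst); err != nil {
            return fmt.Errorf("decoding [%d]byte: %w", len(dst), err)
        }
        return nil
    }

    // Usage: for a field declared as `var hash [32]byte`, call
    // decodeByteArray(read, hash[:]) so the bytes land directly in the array.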

* rlp: optimize big.Int decoding for size <= 32 bytes (#22927)

This change grows the static integer buffer in Stream to 32 bytes,
making it possible to decode 256-bit integers without allocating a
temporary buffer.

In the recent commit 088da24, Stream struct size decreased from 120
bytes down to 88 bytes. This commit grows the struct to 112 bytes again,
but the size change will not degrade performance because Stream
instances are internally cached in sync.Pool.

    name             old time/op    new time/op    delta
    DecodeBigInts-8    12.2µs ± 0%     8.6µs ± 4%  -29.58%  (p=0.000 n=9+10)

    name             old speed      new speed      delta
    DecodeBigInts-8   230MB/s ± 0%   326MB/s ± 4%  +42.04%  (p=0.000 n=9+10)
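
A minimal sketch of the buffer reuse (illustrative, not the actual decoder): a 32-byte scratch array on the stream covers any 256-bit integer, so the common case allocates nothing.

    package sketch

    import "math/big"

    type stream struct {
        uintbuf [32]byte // static scratch space; fits any 256-bit integer
    }

    // decodeBigInt reads size bytes through read and sets dst from them.
    // Only integers wider than 256 bits fall back to a heap allocation.
    func (s *stream) decodeBigInt(dst *big.Int, size int, read func([]byte) error) error {
        var buf []byte
        if size <= len(s.uintbuf) {
            buf = s.uintbuf[:size] // common case: reuse the static buffer
        } else {
            buf = make([]byte, size) // rare case: very large integer
        }
        if err := read(buf); err != nil {
            return err
        }
        dst.SetBytes(buf)
        return nil
    }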

* eth/protocols/eth, les: avoid Raw() when decoding HashOrNumber (#22841)

Getting the raw value is not necessary to decode this type, and
decoding it directly from the stream is faster.
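
A sketch of decoding straight from the stream, following the description above (treat it as illustrative rather than the exact eth/protocols/eth code): the size reported by the stream decides whether the value is a 32-byte hash or a block number, with no Raw() round trip.

    package sketch

    import (
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/rlp"
    )

    // HashOrNumber is a combined field specifying an origin block.
    type HashOrNumber struct {
        Hash   common.Hash // origin block hash (mutually exclusive with Number)
        Number uint64      // origin block number (mutually exclusive with Hash)
    }

    // DecodeRLP decodes the value in place: a 32-byte string is a hash,
    // anything else is decoded as a block number.
    func (hn *HashOrNumber) DecodeRLP(s *rlp.Stream) error {
        _, size, err := s.Kind()
        switch {
        case err != nil:
            return err
        case size == 32:
            hn.Number = 0
            return s.Decode(&hn.Hash)
        default:
            hn.Hash = common.Hash{}
            return s.Decode(&hn.Number)
        }
    }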

* fix testcase

* debug no lazy

* fix can not repair

* address comments

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-07-29 17:16:53 +08:00
Martin Holst Swende
8cde2966af
eth, core: speed up some tests (#22000) 2020-12-15 18:52:51 +01:00
Martin Holst Swende
eb87121300
core/bloombits: faster generator (#21625)
* core/bloombits: add benchmark

* core/bloombits: optimize inserts
2020-10-06 16:34:29 +03:00
ucwong
9e04c5ec83
core/bloombits: use single channel for shutdown (#20878)
This replaces the two-stage shutdown scheme with the one we
use almost everywhere else: a single quit channel signalling
termination.
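
A minimal sketch of the single-quit-channel pattern referred to here (illustrative names, not the bloombits code): closing one channel tells every worker to stop, and a WaitGroup lets Close wait for them to exit.

    package sketch

    import "sync"

    type service struct {
        quit chan struct{}  // closed exactly once to signal termination
        wg   sync.WaitGroup // tracks running worker goroutines
    }

    func newService(workers int) *service {
        s := &service{quit: make(chan struct{})}
        for i := 0; i < workers; i++ {
            s.wg.Add(1)
            go s.loop()
        }
        return s
    }

    func (s *service) loop() {
        defer s.wg.Done()
        for {
            select {
            case <-s.quit:
                return // shutdown requested
            default:
                // ... process the next unit of work ...
            }
        }
    }

    // Close signals all workers with a single close and waits for them to stop.
    func (s *service) Close() {
        close(s.quit)
        s.wg.Wait()
    }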

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-29 13:49:12 +02:00
Sheldon
9e24491c65 core/bloombits, light: fix typos (#17235) 2018-07-24 11:24:27 +03:00
gary rong
dcdd57df62 core, ethdb: two tiny fixes (#17183)
* ethdb: fix memory database

* core: fix bloombits checking

* core: minor polish
2018-07-18 13:41:36 +03:00
Wuxiang
8f8774cf6d all: fix various typos (#16533)
* fix typo

* fix typo

* fix typo
2018-04-19 16:32:02 +03:00
Péter Szilágyi
463014126f
core/bloombits: handle non-8-bit boundary section matches 2017-11-15 14:10:35 +02:00
ferhat elmas
9619a61024 all: gofmt -w -s (#15419) 2017-11-08 11:45:52 +01:00
Felföldi Zsolt
8d434f6a6f les, core/bloombits: post-LES/2 fixes (#15391)
* les: fix topic ID

* core/bloombits: fix interface conversion
2017-10-27 17:18:53 +03:00
Péter Szilágyi
0095531a58 core, eth, les: fix messy code (#15367)
* core, eth, les: fix messy code

* les: fixed tx status test and rlp encoding

* core: add a workaround for light sync
2017-10-25 12:18:44 +03:00
Felföldi Zsolt
ca376ead88 les, light: LES/2 protocol version (#14970)
This PR implements the new LES protocol version extensions:

* new and more efficient Merkle proofs reply format (when replying to
  a multiple Merkle proofs request, we just send a single set of trie
  nodes containing all necessary nodes; see the sketch after this list)
* BBT (BloomBitsTrie) works similarly to the existing CHT and contains
  the bloombits search data to speed up log searches
* GetTxStatusMsg returns the inclusion position or the
  pending/queued/unknown state of a transaction referenced by hash
* an optional signature of new block data (number/hash/td) can be
  included in AnnounceMsg, giving "very light clients" (mobile/embedded
  devices) the option to skip the expensive Ethash check and instead
  accept multiple signatures from somewhat trusted servers (still a lot
  better than trusting a single server completely and retrieving
  everything through RPC). The new client mode is not implemented in
  this PR, just the protocol extension.
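
A minimal sketch of the merged-proof idea in the first bullet (illustrative, not the les package types): all requested proofs share one deduplicated node set keyed by node hash, so trie nodes common to several proofs are sent only once.

    package sketch

    // NodeSet collects the trie nodes of several Merkle proofs into a single
    // deduplicated set, keyed by node hash.
    type NodeSet map[string][]byte // node hash -> RLP-encoded trie node

    // AddProof merges one proof (the node list for a single key) into the set.
    func (ns NodeSet) AddProof(proof map[string][]byte) {
        for hash, node := range proof {
            if _, ok := ns[hash]; !ok {
                ns[hash] = node
            }
        }
    }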
2017-10-24 15:19:09 +02:00
Péter Szilágyi
0e7d019e0e
core: fire tx event on replace, expand tests 2017-10-20 14:42:19 +03:00
Péter Szilágyi
2ab2a9f131 core/bloombits, eth/filters: handle null topics (#15195)
When implementing the new bloombits-based filter, I accidentally broke null
topics by removing the special casing of common.Hash{} filter rules, which
acted as the wildcard topic until now.

This PR fixes the regression, but instead of using the magic hash
common.Hash{} as the null wildcard, it reworks the code to handle nil
topics during parsing, converting a JSON null into a nil []common.Hash topic.
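
A minimal sketch of the wildcard rule described above (not the actual eth/filters code): a nil or empty topic slot matches any value at that position, while a non-empty slot must contain the log's topic.

    package sketch

    import "github.com/ethereum/go-ethereum/common"

    // topicsMatch reports whether a log's topics satisfy the filter criteria.
    // filter[i] == nil means any value is accepted at position i, which is
    // the old common.Hash{} wildcard expressed as a nil slice.
    func topicsMatch(filter [][]common.Hash, logTopics []common.Hash) bool {
        if len(filter) > len(logTopics) {
            return false
        }
        for i, sub := range filter {
            if len(sub) == 0 {
                continue // nil topic: wildcard
            }
            matched := false
            for _, want := range sub {
                if want == logTopics[i] {
                    matched = true
                    break
                }
            }
            if !matched {
                return false
            }
        }
        return true
    }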
2017-09-27 12:14:52 +02:00
Péter Szilágyi
564c8f3ae6
core/bloombits: drop nil-matcher special case 2017-09-06 11:14:22 +03:00
Zsolt Felfoldi
451ffdb62b
core/bloombits: use general filters instead of addresses and topics 2017-09-06 11:14:21 +03:00
Zsolt Felfoldi
6ff2c02991
core/bloombits: AddBloom index parameter and fixes variable names 2017-09-06 11:14:20 +03:00
Péter Szilágyi
f585f9eee8
core, eth: clean up bloom filtering, add some tests 2017-09-06 11:14:19 +03:00
Zsolt Felfoldi
4ea4d2dc34
core, eth: add bloombit indexer, filter based on it 2017-09-06 11:13:13 +03:00