The Go authors updated golang.org/x/exp to change the function signature of the slices sort functions.
It's an entire shitshow now because x/exp is not tagged, so everyone's codebase just
picked up a new version that some other dep depends on, causing our code to fail to build.
This PR updates the dep in our code too and does all the refactoring needed to follow upstream...
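For illustration, the signature change looks roughly like this (hypothetical element type, not actual geth code): SortFunc now takes a three-way cmp function returning an int instead of a less function returning a bool.

```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

// account is a hypothetical element type used only for illustration.
type account struct {
	nonce uint64
}

func main() {
	accs := []account{{3}, {1}, {2}}

	// Old signature: slices.SortFunc(accs, func(a, b account) bool { return a.nonce < b.nonce })
	// New signature: a three-way comparison returning a negative, zero, or positive int.
	slices.SortFunc(accs, func(a, b account) int {
		switch {
		case a.nonce < b.nonce:
			return -1
		case a.nonce > b.nonce:
			return 1
		default:
			return 0
		}
	})

	fmt.Println(accs) // [{1} {2} {3}]
}
```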
Makes the float-gauges lock-free
name                      old time/op  new time/op  delta
CounterFloat64Parallel-8  1.45µs ±10%  0.85µs ± 6%  -41.65%  (p=0.008 n=5+5)
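For context, a common way to make a float metric lock-free is to store the float64 bit pattern in an atomic uint64 and update it with a CAS loop. A minimal sketch of that technique, assuming Go 1.19's atomic.Uint64 (hypothetical type, not necessarily the exact geth implementation):

```go
package metricsketch

import (
	"math"
	"sync/atomic"
)

// lockFreeFloat is a sketch, not the actual geth type: the float64 value is
// kept as its IEEE-754 bit pattern inside an atomic uint64.
type lockFreeFloat struct {
	bits atomic.Uint64
}

// Add updates the value with a compare-and-swap loop instead of a mutex.
func (f *lockFreeFloat) Add(v float64) {
	for {
		old := f.bits.Load()
		next := math.Float64bits(math.Float64frombits(old) + v)
		if f.bits.CompareAndSwap(old, next) {
			return
		}
	}
}

// Value returns the current float64 value.
func (f *lockFreeFloat) Value() float64 {
	return math.Float64frombits(f.bits.Load())
}
```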
---------
Co-authored-by: Exca-DK <dev@DESKTOP-RI45P4J.localdomain>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change switches to the smaller influxdata/influxdb1-client package instead of depending on the whole influxdb package. The new, smaller client is very similar to the influxdb-v2 client, which made it possible to refactor the two reporters to share much more code.
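For reference, a minimal sketch of writing a point with the smaller v1 client, assuming the github.com/influxdata/influxdb1-client/v2 API; the database name, measurement, and value below are illustrative:

```go
package main

import (
	"log"
	"time"

	client "github.com/influxdata/influxdb1-client/v2"
)

func main() {
	// Connect to a local InfluxDB v1 instance (illustrative address).
	c, err := client.NewHTTPClient(client.HTTPConfig{Addr: "http://localhost:8086"})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Batch up points for one database.
	bp, err := client.NewBatchPoints(client.BatchPointsConfig{Database: "geth", Precision: "s"})
	if err != nil {
		log.Fatal(err)
	}

	// One illustrative gauge reading.
	pt, err := client.NewPoint("system/cpu", nil, map[string]interface{}{"value": 42.0}, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	bp.AddPoint(pt)

	if err := c.Write(bp); err != nil {
		log.Fatal(err)
	}
}
```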
This PR adds counter metrics for the CPU system and the Geth process.
Currently the only metrics available for these items are gauges. Gauges are
fine when the consumer scrapes metrics data at the same interval as Geth
produces new values (every 3 seconds), but it is likely that most consumers
will not scrape that often. Intervals of 10, 15, or maybe even 30 seconds
are probably more common.
So the problem is: how does the consumer estimate what the CPU was doing
between scrapes? With a counter, it's easy ... you just subtract two
successive values and divide by the elapsed time to get a nice, accurate average.
But with a gauge, you can't do that. A gauge reading is an instantaneous
picture of what was happening at that moment, but it tells you nothing
about what went on between scrapes. Taking an average of gauge values is
meaningless.
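To make the counter case concrete, here is a minimal sketch (hypothetical sample type and values) of how a consumer derives an average from two successive counter scrapes:

```go
package main

import (
	"fmt"
	"time"
)

// scrape is a hypothetical sample of a cumulative CPU-time counter.
type scrape struct {
	at      time.Time
	cpuSecs float64 // total CPU seconds consumed so far
}

// avgCPU returns the average CPU usage (in cores) between two scrapes:
// the counter delta divided by the wall-clock time elapsed.
func avgCPU(a, b scrape) float64 {
	return (b.cpuSecs - a.cpuSecs) / b.at.Sub(a.at).Seconds()
}

func main() {
	t0 := time.Now()
	a := scrape{at: t0, cpuSecs: 120.0}
	b := scrape{at: t0.Add(15 * time.Second), cpuSecs: 126.0}
	fmt.Printf("average load: %.2f cores\n", avgCPU(a, b)) // 0.40 cores over the 15s window
}
```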
This PR changes metrics collection to measure the actual time interval between collections, rather
than assuming 3 seconds. I did some ad hoc profiling, and on slower hardware (e.g. my Raspberry Pi 4)
I routinely saw intervals of 3.3 to 3.5 seconds, with some as high as 4.5 seconds. This
will generally cause the CPU gauge readings to be too high, and in some cases can produce impossibly
large values for the CPU load metrics (e.g. greater than 400 for a 4-core CPU).
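A minimal sketch of the idea, with hypothetical names: remember when the previous collection happened and divide the CPU delta by the measured interval rather than a fixed 3 seconds.

```go
package main

import (
	"fmt"
	"time"
)

// collector is a hypothetical sketch: it remembers the previous collection
// time so each CPU delta is divided by the real elapsed interval.
type collector struct {
	prevTime    time.Time
	prevCPUSecs float64
}

// collect returns the CPU load (in percent of one core) since the last call.
func (c *collector) collect(nowCPUSecs float64) float64 {
	now := time.Now()
	elapsed := now.Sub(c.prevTime).Seconds()
	load := 100 * (nowCPUSecs - c.prevCPUSecs) / elapsed
	c.prevTime, c.prevCPUSecs = now, nowCPUSecs
	return load
}

func main() {
	// Pretend the previous collection ran 3.5s ago, not the assumed 3s.
	c := &collector{prevTime: time.Now().Add(-3500 * time.Millisecond), prevCPUSecs: 10.0}
	fmt.Printf("%.0f%%\n", c.collect(11.4)) // ~40%, not the ~47% a fixed 3s divisor would give
}
```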
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR is a (superior) alternative to https://github.com/ethereum/go-ethereum/pull/26708. It handles the deprecation, primarily in two specific cases.
`rand.Seed` is typically used in two ways:
- `rand.Seed(time.Now().UnixNano())` -- we seed it just to be sure we get something random and don't always get the same sequence on every run. This is no longer needed, since the global source is now seeded automatically, so those calls are simply removed.
- `rand.Seed(1)` -- this is typically done to ensure a stable test. Where we rely on this, the test needs to use a deterministic PRNG source instead. A few occurrences like this have been replaced with a proper custom source (see the sketch below).
`rand.Read` has been replaced by `crypto/rand.Read` in this PR.
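A minimal sketch of the second case (hypothetical snippet, not a specific test from this PR):

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Before: rand.Seed(1) followed by calls to the global rand functions.
	// After: a private, deterministic source, so the test stays reproducible
	// without touching the (now auto-seeded) global state.
	rng := rand.New(rand.NewSource(1))

	buf := make([]byte, 8)
	rng.Read(buf) // deterministic bytes from the local source
	fmt.Printf("%x %d\n", buf, rng.Intn(100))
}
```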
* dep: upgrade secp256k1 to use btcec/v2 v2.3.2 and update insecurity pkg
* build ci: upgrade go to 1.19 and golangci-lint to 1.50.1
* docs: fix formatting that does not follow goimports
* dep: redirect github.com/bnb-chain/tendermint to v0.31.13
* ci: disable GOPROXY
This changes how we read performance metrics from the Go runtime. Instead
of using runtime.ReadMemStats, we now rely on the API provided by package
runtime/metrics.
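For reference, a minimal sketch of the runtime/metrics API; the metric names below are illustrative of the kind of data involved, not necessarily the exact set geth reads:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	// Request a few samples by name; counters come back as uint64 values,
	// pause and latency data as float64 histograms.
	samples := []metrics.Sample{
		{Name: "/gc/heap/allocs:bytes"},    // cumulative allocated bytes
		{Name: "/gc/heap/frees:bytes"},     // cumulative freed bytes
		{Name: "/gc/pauses:seconds"},       // histogram of GC pause durations
		{Name: "/sched/latencies:seconds"}, // histogram of scheduler latency
	}
	metrics.Read(samples)

	for _, s := range samples {
		switch s.Value.Kind() {
		case metrics.KindUint64:
			fmt.Printf("%s = %d\n", s.Name, s.Value.Uint64())
		case metrics.KindFloat64Histogram:
			h := s.Value.Float64Histogram()
			fmt.Printf("%s: %d buckets\n", s.Name, len(h.Counts))
		case metrics.KindBad:
			fmt.Printf("%s: unsupported on this Go version\n", s.Name)
		}
	}
}
```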
runtime/metrics provides more accurate information. For example, the new
interface has better reporting of memory use. In my testing, the reported
value of held memory more accurately reflects the usage reported by the OS.
The semantics of metrics system/memory/allocs and system/memory/frees have
changed to report amounts in bytes. ReadMemStats only reported the count of
allocations in number-of-objects. This is imprecise: 'tiny objects' are not
counted because the runtime allocates them in batches; and certain
improvements in allocation behavior, such as struct size optimizations,
will be less visible when the number of allocs doesn't change.
Reporting allocations in bytes makes graphs appear to show much more being
allocated. I don't think that's a problem, because this metric is primarily
of interest to geth developers.
The metric system/memory/pauses has been changed to report statistical
values from the histogram provided by the runtime. Its name in influxdb has
changed from geth.system/memory/pauses.meter to
geth.system/memory/pauses.histogram.
We also have a new histogram metric, system/cpu/schedlatency, reporting the
Go scheduler latency.
This changes the CI / release builds to use the latest Go version. It also
upgrades golangci-lint to a newer version compatible with Go 1.19.
In Go 1.19, godoc has gained official support for links and lists. The
syntax for code blocks in doc comments has changed and now requires a
leading tab character. gofmt adapts comments to the new syntax
automatically, so there are a lot of comment re-formatting changes in this
PR. We need to apply the new format in order to pass the CI lint stage with
Go 1.19.
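For illustration, a doc comment in the new gofmt style (hypothetical function), with a tab-indented code block and a list:

```go
// Sum returns the sum of the given values.
//
// Code blocks in the new doc comment style are indented with a tab:
//
//	total := Sum(1, 2, 3)
//
// Lists are supported too:
//   - an empty call returns 0
//   - values may be negative
func Sum(vs ...int) (n int) {
	for _, v := range vs {
		n += v
	}
	return n
}
```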
With the linter upgrade, I have decided to disable 'gosec' - it produces
too many false-positive warnings. The 'deadcode' and 'varcheck' linters
have also been removed because golangci-lint warns about them being
unmaintained. 'unused' provides similar coverage and we already have it
enabled, so we don't lose much with this change.
This enables the following linters:
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf package in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is still used in a few places, so the warning for it is silenced for now.
- Using a plain string as the key in context.WithValue is apparently wrong; one should use a custom key type to prevent collisions between different callers in the hierarchy (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings about using a weak random generator are squashed, since we use randomness in a lot of places without needing cryptographic guarantees.
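For the context.WithValue point above, a minimal sketch of the custom-key pattern (hypothetical key and value, not code from this PR):

```go
package main

import (
	"context"
	"fmt"
)

// ctxKey is an unexported type, so keys defined in this package cannot
// collide with keys defined elsewhere in the caller hierarchy.
type ctxKey string

const userKey ctxKey = "user"

func main() {
	ctx := context.WithValue(context.Background(), userKey, "alice")

	// Retrieval uses the same typed key; a plain string "user" coming from
	// another package would not match.
	if user, ok := ctx.Value(userKey).(string); ok {
		fmt.Println("user:", user)
	}
}
```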
This PR adds a flag to enable InfluxDB v2 (--metrics.influxdbv2) and flags for v2-specific features (--metrics.influxdb.token, --metrics.influxdb.bucket). It also carries over support for specifying an organization (--metrics.influxdb.organization), while retaining backwards compatibility with InfluxDB v1.
* remove unneeded type conversions
* remove redundant type in composite literal
* omit explicit type where it can be inferred
* remove redundant parentheses
* remove redundant import alias duktape
* metrics: zero temp variable in updateMeter
Previously, the temp variable was not reset after being summed onto count.
This meant we reported astronomically high metrics; now we zero out temp whenever we
add it onto the snapshot count.
* metrics: move temp variable to be 64-bit aligned, add unit test
Moves the temp variable in MeterSnapshot to be 64-bit aligned, as required for atomic access on 32-bit platforms.
Adds a unit test that catches the previous bug.
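A sketch of the two fixes, with hypothetical names rather than the actual MeterSnapshot fields: the atomically accessed 64-bit field comes first so it is 64-bit aligned even on 32-bit platforms, and temp is zeroed in the same atomic step that drains it into count.

```go
package metricsketch

import "sync/atomic"

// meterCore is a sketch, not the actual MeterSnapshot: the atomically
// accessed 64-bit field is placed first, which guarantees 64-bit alignment
// for sync/atomic even on 32-bit platforms.
type meterCore struct {
	temp  int64 // incremented by Mark, drained by updateMeter
	count int64 // cumulative count in the snapshot
}

func (m *meterCore) Mark(n int64) {
	atomic.AddInt64(&m.temp, n)
}

// updateMeter drains temp into count. Using Swap zeroes temp in the same
// atomic step, so the same marks are never summed twice.
func (m *meterCore) updateMeter() {
	m.count += atomic.SwapInt64(&m.temp, 0)
}
```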
Exposing /debug/metrics and /debug/metrics/prometheus was dependent
on --pprof, which also exposes other HTTP APIs. This change makes it possible
to run the metrics server on an independent endpoint without enabling pprof.
* replace gosigar with gopsutil
* removed check for whether GOOS is openbsd
* removed accidental import of runtime
* potential fix for difference in units between gosigar and gopsutil
* fixed lint error
* remove multiplication factor
* uses cpu.ClocksPerSec as the multiplication factor
* changed dependency from shirou to renaynay (#20)
* updated dep
* switched back from the renaynay fork to upstream, as the PRs were merged upstream
* removed empty line
* optimized imports
* tidied go mod
The test failed due to what appears to be fluctuations in time.Sleep, which is
not the actual method under test. This change modifies it so we compare the
metered Max to the actual time instead of the desired time.
This removes the dashboard project. The dashboard was an experimental
browser UI for geth which displayed metrics and chain information in
real time. We are removing it because it has marginal utility and nobody
on the team can maintain it.
Removing the dashboard removes a lot of dependency code and shaves
6 MB off the geth binary size.