This update resolves an issue where StringSliceFlag values were not
rendered correctly in help output, and mentions that -H can be used multiple times.
Co-authored-by: Martin Holst Swende <martin@swende.se>
* cmd/geth: add a verkle subcommand
* fix copyright year
* remove unused command parameters
* check that the output file was successfully written to
Co-authored-by: Martin Holst Swende <martin@swende.se>
* cmd/geth: goimports fix
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change adds a code generator tool for creating EncodeRLP method
implementations. The generated methods will behave identically to the
reflect-based encoder, but run faster because there is no reflection overhead.
Package rlp now provides the EncoderBuffer type for incremental encoding. This
is used by generated code, but the new methods can also be useful for
hand-written encoders.
There is also experimental support for generating DecodeRLP, and some new
methods have been added to the existing Stream type to support this. Creating
decoders with rlpgen is not recommended at this time because the generated
methods report errors very poorly.
More detail about package rlp changes:
* rlp: externalize struct field processing / validation
This adds a new package, rlp/internal/rlpstruct, in preparation for the
RLP encoder generator.
I think the struct field rules are subtle enough to warrant extracting
them into their own package, even though it means that a bunch of
adapter code is needed for converting to/from rlpstruct.Type.
* rlp: add more decoder methods (for rlpgen)
This adds new methods on rlp.Stream:
- Uint64, Uint32, Uint16, Uint8, BigInt
- ReadBytes for decoding into []byte
- MoreDataInList - useful for optional list elements
* rlp: expose encoder buffer (for rlpgen)
This exposes the internal encoder buffer type for use in EncodeRLP
implementations.
The new EncoderBuffer type is a sort-of 'opaque handle' for a pointer to
encBuffer. It is implemented this way to ensure the global encBuffer pool
is handled correctly.
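As a rough illustration of the new APIs (MyStruct and its fields are hypothetical, not part of the package), hand-written EncodeRLP/DecodeRLP methods could look like this:
```go
package demo

import (
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

// MyStruct is a hypothetical type used only to sketch the new rlp APIs.
type MyStruct struct {
	Nonce uint64
	Data  []byte
}

// EncodeRLP writes the struct as an RLP list using the exposed EncoderBuffer.
func (m *MyStruct) EncodeRLP(w io.Writer) error {
	buf := rlp.NewEncoderBuffer(w)
	idx := buf.List()
	buf.WriteUint64(m.Nonce)
	buf.WriteBytes(m.Data)
	buf.ListEnd(idx)
	return buf.Flush()
}

// DecodeRLP reads the struct back using the Stream helpers.
func (m *MyStruct) DecodeRLP(s *rlp.Stream) error {
	if _, err := s.List(); err != nil {
		return err
	}
	nonce, err := s.Uint64()
	if err != nil {
		return err
	}
	data, err := s.Bytes()
	if err != nil {
		return err
	}
	m.Nonce, m.Data = nonce, data
	return s.ListEnd()
}
```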
This change updates our urfave/cli dependency to the v2 branch of the library.
There are some Go API changes in cli v2:
- Flag values can now be accessed using the methods ctx.Bool,
ctx.Int, ctx.String, ... regardless of whether the flag is 'local' or
'global'.
- v2 has built-in support for flag categories. Our home-grown category
system is removed and the categories of flags are assigned as part of
the flag definition.
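For illustration (a minimal sketch, not geth's actual flag definitions), a cli v2 flag with a category and value access via the ctx methods looks roughly like this:
```go
package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Flags: []cli.Flag{
			// Hypothetical flag; the category is now part of the definition.
			&cli.StringFlag{
				Name:     "password",
				Usage:    "Password file for non-interactive input",
				Category: "ACCOUNT",
			},
		},
		Action: func(ctx *cli.Context) error {
			// Works the same whether the flag is 'local' or 'global'.
			fmt.Println("password file:", ctx.String("password"))
			fmt.Println("arguments:", ctx.Args().Slice())
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```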
For users, there is only one observable difference with cli v2: flags must now
strictly appear before regular arguments. For example, the following command is
now invalid:
geth account import mykey.json --password file.txt
Instead, the command must be invoked as follows:
geth account import --password file.txt mykey.json
This enables the following linters:
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a plain string as the key in context.WithValue is flagged as incorrect: one should use a custom key type to prevent collisions between different places in the hierarchy of callers (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings about using a weak random generator are squashed, since we use randomness in a lot of places that don't need cryptographic guarantees.
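For reference, the usual fix looks roughly like this (the names are illustrative, not taken from the codebase):
```go
package demo

import "context"

// ctxKey is an unexported type: keys of this type cannot collide with
// context keys defined as plain strings elsewhere in the call hierarchy.
type ctxKey string

const peerIDKey ctxKey = "peerID"

func withPeerID(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, peerIDKey, id)
}

func peerID(ctx context.Context) (string, bool) {
	id, ok := ctx.Value(peerIDKey).(string)
	return id, ok
}
```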
#23773 added a JS tracer which uses Goja as its engine. In this PR I remove the previous duktape-based tracer as well as its dependencies.
This PR also comes with two fixes in the Goja tracer and one small behavioural change:
1. I had handled errors in the native Go functions by panicking. My oversight was that Goja only handles panics that carry a goja.Value: panic(goja.Value) lets JS catch the exception, whereas Interrupt(error) doesn't (see the sketch after this list).
2. There was a race in how I handled Stop.
3. Because of 1., some of the methods that used to simply return nil on error (like memory.slice) now throw an exception.
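A minimal sketch of the error-handling pattern from point 1 (the mustSlice helper is hypothetical, not part of the tracer):
```go
package main

import (
	"fmt"

	"github.com/dop251/goja"
)

func main() {
	vm := goja.New()
	// Panicking with a goja.Value surfaces as a catchable JS exception;
	// vm.Interrupt(err) would instead abort execution uncatchably.
	vm.Set("mustSlice", func(call goja.FunctionCall) goja.Value {
		panic(vm.ToValue("mustSlice: out of bounds"))
	})
	v, err := vm.RunString(`
		(function() {
			try { return mustSlice(); } catch (e) { return "caught: " + e; }
		})()
	`)
	fmt.Println(v, err)
}
```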
This adds a JS tracer runtime environment based on the Goja VM. The new
runtime replaces the duktape runtime, which will be removed soon.
Goja is implemented in Go, which makes it faster for cases where the Go <-> JS
transition overhead dominates overall performance: duktape is written in C, so
every transition also pays the cost of a cgo call. Another reason for using
Goja is that go-duktape is no longer maintained.
We expect the performance of JS tracing to be at least as good or better with
this change.
See ethereum/go-ethereum#24554 and btcsuite/btcd#1839
This is an attempt to resolve a Go module dependency issue that arises
when both 'github.com/btcsuite/btcd/btcec/v2' and the older, non-v2
btcd module are required as dependencies.
This adds a tools.go file to import all command packages used for
go:generate. Doing so makes it possible to execute go-based code
generators using 'go run', locking in the tool version using go.mod.
Co-authored-by: Felix Lange <fjl@twurst.com>
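The pattern looks roughly like this (the imported tool below is only an illustration, not the repository's actual list); a generator can then be run at its pinned version via 'go run <module path>':
```go
//go:build tools
// +build tools

// Package tools exists only so that go.mod records the versions of the code
// generators invoked through go:generate; the build tag keeps it out of
// normal builds.
package tools

import (
	_ "golang.org/x/tools/cmd/stringer" // hypothetical example tool
)
```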
This updates the no-cgo implementations in the crypto package to use
the github.com/btcsuite/btcd/btcec/v2 module instead of the older btcec
package that was part of the main github.com/btcsuite/btcd module.
name old time/op new time/op delta
EcrecoverSignature-32 198µs ± 0% 144µs ± 0% -27.11%
VerifySignature-32 177µs ± 0% 128µs ± 0% -27.44%
DecompressPubkey-32 20.9µs ± 0% 10.1µs ± 0% -51.51%
Use (*ModNScalar).IsOverHalfOrder instead of math/big.Int when checking
for malleable signatures.
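A minimal sketch of that check, assuming a 64-byte (r||s) signature (the helper name is illustrative; the real code lives in the crypto package):
```go
package demo

import "github.com/btcsuite/btcd/btcec/v2"

// isLowS reports whether the S component of sig (r||s, 64 bytes) is in the
// lower half of the curve order, i.e. the signature is not malleable.
func isLowS(sig []byte) bool {
	if len(sig) != 64 {
		return false
	}
	var s btcec.ModNScalar
	if s.SetByteSlice(sig[32:]) {
		return false // overflow: s >= curve order
	}
	return !s.IsOverHalfOrder()
}
```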
* rpc, node: refactor request validation and add jwt validation
* node, rpc: fix error message, ignore engine api in RegisterAPIs
* node: make authenticated port configurable
* eth/catalyst: enable unauthenticated version of engine api
* node: rework obtainjwtsecret (backport later)
* cmd/geth: added auth port flag
* node: happy lint, happy life
* node: refactor authenticated api
Modifies the authentication mechanism to use default values
* node: trim spaces and newline away from secret
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* go.mod: update azure-storage-blob-go
update Azure/azure-storage-blob-go from v0.7.0 to v0.14.0.
Related to #24396.
* internal/build: fix for breaking changes of azure-storage-blob-go
Fixes for the breaking changes introduced by updating Azure/azure-storage-blob-go from v0.7.0 to v0.14.0.
Related to #24396.
* internal/build: switch azure sdk from Azure/azure-storage-blob-go to Azure/azure-sdk-for-go/sdk/storage/azblob.
* internal/build: refactor appending BlobItems
* internal/build: fix azure blobstore client to include container id
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* internal: support optional filter expression for debug.stacks
* internal/debug: fix string regexp
* internal/debug: support searching for line numbers too
This PR adds a flag to enable InfluxDB v2 (--metrics.influxdbv2), flags for v2-specific features (--metrics.influxdb.token, --metrics.influxdb.bucket), and carries over support for specifying an organization (--metrics.influxdb.organization), while retaining backwards compatibility with InfluxDB v1.
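For example, a hypothetical invocation with placeholder values might look like:
geth --metrics --metrics.influxdbv2 --metrics.influxdb.token <token> --metrics.influxdb.bucket geth --metrics.influxdb.organization <org>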
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree.
Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support
snap, we have decided to retire the dedicated index for snap and just use the eth
tree instead.
The dial iterators of eth and snap now use the same DNS tree in the default configuration,
so both iterators should use the same DNS discovery client instance. This ensures that
the record cache and rate limit are shared. Records will not be requested multiple times.
While testing the change, I noticed that duplicate DNS requests do happen even
when the client instance is shared. This is because the two iterators request the tree
root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the
change also adds a singleflight.Group instance in the client. When one iterator
attempts to resolve an entry which is already being resolved, the singleflight object
waits for the existing resolve call to finish and returns the entry to both places.
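A minimal sketch of that deduplication pattern (the resolveEntry helper and the returned value are illustrative):
```go
package demo

import "golang.org/x/sync/singleflight"

var group singleflight.Group

// resolveEntry performs one lookup per entry hash, even when both dial
// iterators ask for the same entry concurrently: later callers wait for the
// in-flight call and share its result.
func resolveEntry(hash string) (string, error) {
	v, err, _ := group.Do(hash, func() (interface{}, error) {
		// Imagine the actual DNS TXT lookup here.
		return "enrtree-branch:...", nil
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}
```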
This upgrades the cloudflare client dependency to v0.14.0. The new
version changes the API because all methods now require a context
parameter. This change also reduces the log level of the 'Skipping...'
message to debug, following a similar change in the AWS deployer.
This updates the DNS deployer to use AWS SDK v2. Migration is relatively
seamless, although there were two locations that required a slightly
different approach to achieve the same results. In particular, waiting for
DNS change propagation is very different with SDK v2.
This change also optimizes DNS updates by publishing all changes before
waiting for propagation.
This replaces the github.com/pborman/uuid dependency with
github.com/google/uuid because the former is only a wrapper for
the latter (since v1.0.0).
Co-authored-by: Felix Lange <fjl@twurst.com>
* accounts/scwallet: use go-ethereum crypto instead of go-ecdh
github.com/wsddn/go-ecdh is a wrapper package for ECDH functionality
with any elliptic curve.
Since 'generic' ECDH is not required in accounts/scwallet (the curve is
always secp256k1), we can just use the standard library functionality
and our own crypto libraries to perform ECDH and save a dependency (see the sketch after this change list).
* Update accounts/scwallet/securechannel.go
Co-authored-by: Guillaume Ballet <gballet@gmail.com>
* Use the correct key
Co-authored-by: Guillaume Ballet <gballet@gmail.com>
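A minimal sketch of the idea (the helper name is illustrative; the real code is in accounts/scwallet/securechannel.go):
```go
package demo

import (
	"crypto/ecdsa"

	"github.com/ethereum/go-ethereum/crypto"
)

// sharedSecret performs ECDH on secp256k1 without a generic ECDH package:
// multiply the peer's public point by our private scalar and use the X
// coordinate as the shared secret (a KDF is applied on top in practice).
func sharedSecret(priv *ecdsa.PrivateKey, pub *ecdsa.PublicKey) []byte {
	x, _ := crypto.S256().ScalarMult(pub.X, pub.Y, priv.D.Bytes())
	return x.Bytes()
}
```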
* internal/build: implement signify's signing func
* Add signify to the ci utility
* fix output file format
* Add unit test for signify
* holiman's + travis' feedback
* internal/build: verify signify's output
* crypto: move signify to common dir
* use go-minisign to verify binaries
* more holiman feedback
* crypto, ci: support minisign output
* only accept one-line trusted comments
* configurable untrusted comments
* code cleanup in tests
* revert to use ed25519 from the stdlib
* bug: fix for empty untrusted comments
* write timestamp as comment if trusted comment isn't present
* rename line checker to commentHasManyLines
* crypto: added signify fuzzer (#6)
* crypto: added signify fuzzer
* stuff
* crypto: updated signify fuzzer to fuzz comments
* crypto: repro signify crashes
* rebased fuzzer on build-signify branch
* hide fuzzer behind gofuzz build flag
* extract key data inside a single function
* don't treat \r as a newline
* travis: fix signing command line
* do not use an external binary in tests
* crypto: move signify to crypto/signify
* travis: fix formatting issue
* ci: fix linter build after package move
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* core/vm: use fixed uint256 library instead of big
* core/vm: remove intpools
* core/vm: upgrade uint256, fixes uint256.NewFromBig
* core/vm: use uint256.Int by value in Stack
* core/vm: upgrade uint256 to v1.0.0
* core/vm: don't preallocate space for 1024 stack items (only 16)
Co-authored-by: Martin Holst Swende <martin@swende.se>
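For a rough sense of what the change means in code (a standalone sketch, not an excerpt from core/vm):
```go
package main

import (
	"fmt"
	"math/big"

	"github.com/holiman/uint256"
)

func main() {
	// uint256.Int is a fixed-width [4]uint64 value type, so EVM stack items
	// need no per-operation big.Int allocation or normalization.
	x, overflow := uint256.FromBig(big.NewInt(1 << 40))
	if overflow {
		panic("value does not fit in 256 bits")
	}
	y := uint256.NewInt(2)
	z := new(uint256.Int).Mul(x, y) // wraps modulo 2^256, matching EVM semantics
	fmt.Println(z)
}
```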
* replace gosigar with gopsutil
* removed check for whether GOOS is openbsd
* removed accidental import of runtime
* potential fix for difference in units between gosigar and gopsutil
* fixed lint error
* remove multiplication factor
* uses cpu.ClocksPerSec as the multiplication factor
* changed dependency from shirou to renaynay (#20)
* updated dep
* switching back from using renaynay fork to using upstream as PRs were merged on upstream
* removed empty line
* optimized imports
* tidied go mod
golang-lru is now a Go module, and the upgrade corrects a couple
of minor issues. In particular, the library could crash if you inserted
nil into an LRU cache.
This revision of go-duktape fixes the following warning:
```
duk_logging.c: In function ‘duk__logger_prototype_log_shared’:
duk_logging.c:184:64: warning: ‘Z’ directive writing 1 byte into a region of size between 0 and 9 [-Wformat-overflow=]
184 | sprintf((char *) date_buf, "%04d-%02d-%02dT%02d:%02d:%02d.%03dZ",
| ^
In file included from /usr/include/stdio.h:867,
from duk_logging.c:5:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:36:10: note: ‘__builtin___sprintf_chk’ output between 25 and 85 bytes into a destination of size 32
36 | return __builtin___sprintf_chk (__s, __USE_FORTIFY_LEVEL - 1,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
37 | __bos (__s), __fmt, __va_arg_pack ());
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
This replaces the JavaScript interpreter used by the console with goja,
which is actively maintained and a lot faster than otto. Clef still uses otto
and eth/tracers still uses duktape, so we are currently dependent on three
different JS interpreters. We're looking to replace the remaining uses of otto
soon though.