The various Is methods (IsPrivate, IsLoopback, etc) did not work as expected for IPv4-mapped IPv6 addresses, returning false for addresses which would return true in their traditional IPv4 forms.
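A minimal sketch of the behavior, assuming the affected methods are those on net/netip's Addr type (the advisory text here does not name the package); Unmap() was the usual workaround on unpatched versions:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // IPv4-mapped IPv6 form of the private IPv4 address 10.0.0.1.
    mapped := netip.MustParseAddr("::ffff:10.0.0.1")

    // On affected versions this printed false, even though the
    // underlying IPv4 address is private.
    fmt.Println(mapped.IsPrivate())

    // Workaround on older versions: Unmap yields the IPv4 form,
    // for which IsPrivate reports true.
    fmt.Println(mapped.Unmap().IsPrivate())
}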
Affected range
<1.23.8
Fixed version
1.23.8
EPSS Score
0.019%
EPSS Percentile
4th percentile
Description
The net/http package improperly accepts a bare LF as a line terminator in chunked data chunk-size lines. This can permit request smuggling if a net/http server is used in conjunction with a server that incorrectly accepts a bare LF as part of a chunk-ext.
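A hedged probe of this behavior; it assumes the shared chunked reader is reachable through httputil.NewChunkedReader, and that a patched toolchain rejects the bare LF:

package main

import (
    "fmt"
    "io"
    "net/http/httputil"
    "strings"
)

func main() {
    // Chunk-size line terminated by a bare LF instead of CRLF.
    body := "3\nabc\r\n0\r\n\r\n"
    r := httputil.NewChunkedReader(strings.NewReader(body))
    data, err := io.ReadAll(r)
    // Affected versions accept the bare LF; patched versions error.
    fmt.Printf("data=%q err=%v\n", data, err)
}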
Affected range
<1.24.11
Fixed version
1.24.11
EPSS Score
0.016%
EPSS Percentile
3rd percentile
Description
Within HostnameError.Error(), when constructing an error string, there is no limit to the number of hosts that will be printed out. Furthermore, the error string is constructed by repeated string concatenation, leading to quadratic runtime. Therefore, a certificate provided by a malicious actor can result in excessive resource consumption.
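The underlying pattern, shared by several entries below (ParseAddress, Reader.ReadResponse), and its standard remedy, as an illustrative sketch rather than the actual stdlib diff:

package main

import (
    "fmt"
    "strings"
)

// Quadratic: each += copies everything accumulated so far.
func joinConcat(hosts []string) string {
    s := ""
    for _, h := range hosts {
        s += h + ", "
    }
    return s
}

// Linear: strings.Builder appends into a growing buffer.
func joinBuilder(hosts []string) string {
    var b strings.Builder
    for _, h := range hosts {
        b.WriteString(h)
        b.WriteString(", ")
    }
    return b.String()
}

func main() {
    hosts := []string{"a.example", "b.example"}
    fmt.Println(joinConcat(hosts) == joinBuilder(hosts)) // true
}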
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.026%
EPSS Percentile
6th percentile
Description
The ParseAddress function constructs domain-literal address components through repeated string concatenation. When parsing large domain-literal components, this can cause excessive CPU consumption.
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.026%
EPSS Percentile
6th percentile
Description
The processing time for parsing some invalid inputs scales non-linearly with respect to the size of the input.
This affects programs which parse untrusted PEM inputs.
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.014%
EPSS Percentile
2nd percentile
Description
Validating certificate chains which contain DSA public keys can cause programs to panic, due to an interface cast that assumes they implement the Equal method.
This affects programs which validate arbitrary certificate chains.
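The panic pattern in miniature, as a sketch rather than the crypto/x509 source: dsa.PublicKey has no Equal method, so an unchecked interface assertion panics while the comma-ok form does not.

package main

import (
    "crypto"
    "crypto/dsa"
    "fmt"
)

func main() {
    var pub crypto.PublicKey = &dsa.PublicKey{} // no Equal method

    // Unchecked assertion, the shape of the bug: would panic.
    // _ = pub.(interface{ Equal(x crypto.PublicKey) bool })

    // Comma-ok form, the shape of the fix: degrades gracefully.
    if _, ok := pub.(interface{ Equal(x crypto.PublicKey) bool }); !ok {
        fmt.Println("public key does not implement Equal")
    }
}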
Affected range
<1.24.9
Fixed version
1.24.9
EPSS Score
0.015%
EPSS Percentile
3rd percentile
Description
Due to the design of the name constraint checking algorithm, the processing time of some inputs scales non-linearly with respect to the size of the certificate.
This affects programs which validate arbitrary certificate chains.
Affected range
<1.22.7
Fixed version
1.22.7
EPSS Score
0.147%
EPSS Percentile
36th percentile
Description
Calling Parse on a "// +build" build tag line with deeply nested expressions can cause a panic due to stack exhaustion.
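A sketch of an input in this class; the advisory names "// +build" lines, while this probe exercises the same constraint.Parse entry point via a //go:build line with deeply nested parentheses:

package main

import (
    "fmt"
    "go/build/constraint"
    "strings"
)

func main() {
    depth := 1_000_000
    line := "//go:build " + strings.Repeat("(", depth) + "linux" + strings.Repeat(")", depth)

    // Patched versions return an error; affected versions could panic
    // from stack exhaustion before returning.
    expr, err := constraint.Parse(line)
    fmt.Println(expr, err)
}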
Affected range
<1.22.7
Fixed version
1.22.7
EPSS Score
0.298%
EPSS Percentile
53rd percentile
Description
Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.
Affected range
<1.21.12
Fixed version
1.21.12
EPSS Score
0.618%
EPSS Percentile
69th percentile
Description
The net/http HTTP/1.1 client mishandled the case where a server responds to a request with an "Expect: 100-continue" header with a non-informational (200 or higher) status. This mishandling could leave a client connection in an invalid state, where the next request sent on the connection will fail.
An attacker sending a request to a net/http/httputil.ReverseProxy proxy can exploit this mishandling to cause a denial of service by sending "Expect: 100-continue" requests which elicit a non-informational response from the backend. Each such request leaves the proxy with an invalid connection, and causes one subsequent request using that connection to fail.
Affected range
<1.21.8
Fixed version
1.21.8
EPSS Score
1.498%
EPSS Percentile
81st percentile
Description
The ParseAddressList function incorrectly handles comments (text within parentheses) within display names. Since this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers.
Affected range
<1.21.9
Fixed version
1.21.9
EPSS Score
66.635%
EPSS Percentile
98th percentile
Description
An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames.
Maintaining HPACK state requires parsing and processing all HEADERS and CONTINUATION frames on a connection. When a request's headers exceed MaxHeaderBytes, no memory is allocated to store the excess headers, but they are still parsed.
This permits an attacker to cause an HTTP/2 endpoint to read arbitrary amounts of header data, all associated with a request which is going to be rejected. These headers can include Huffman-encoded data which is significantly more expensive for the receiver to decode than for an attacker to send.
The fix sets a limit on the amount of excess header frames we will process before closing a connection.
Affected range
>=1.21.0-0 <1.21.4
Fixed version
1.21.4
EPSS Score
0.097%
EPSS Percentile
27th percentile
Description
The filepath package does not recognize paths with a \??\ prefix as special.
On Windows, a path beginning with \??\ is a Root Local Device path equivalent to a path beginning with \\?\. Paths with a \??\ prefix may be used to access arbitrary locations on the system. For example, the path \??\c:\x is equivalent to the more common path c:\x.
Before fix, Clean could convert a rooted path such as \a\..\??\b into the root local device path \??\b. Clean will now convert this to .\??\b.
Similarly, Join(\, ??, b) could convert a seemingly innocent sequence of path elements into the root local device path \??\b. Join will now convert this to \.\??\b.
In addition, with fix, IsAbs now correctly reports paths beginning with \??\ as absolute, and VolumeName correctly reports the \??\ prefix as a volume name.
UPDATE: Go 1.20.11 and Go 1.21.4 inadvertently changed the definition of the volume name in Windows paths starting with \?, resulting in filepath.Clean(\?\c:) returning \?\c: rather than \?\c:\ (among other effects). The previous behavior has been restored.
Affected range
>=1.21.0-0 <1.21.3
Fixed version
1.21.3
EPSS Score
94.419%
EPSS Percentile
100th percentile
Description
A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.
With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.
This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.
The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.
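A sketch of the adjustment the advisory points to, using golang.org/x/net/http2.ConfigureServer (the certificate and key paths are placeholders):

package main

import (
    "log"
    "net/http"

    "golang.org/x/net/http2"
)

func main() {
    srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}

    // Lower the per-connection stream concurrency limit below the
    // default of 250 mentioned above.
    if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 100}); err != nil {
        log.Fatal(err)
    }
    log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem")) // placeholder cert/key
}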
Affected range
>=1.21.0-0 <1.21.3
Fixed version
1.21.3
EPSS Score
0.163%
EPSS Percentile
38th percentile
Description
A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.
With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.
This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.
The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.
Affected range
<1.22.7
Fixed version
1.22.7
EPSS Score
0.160%
EPSS Percentile
37th percentile
Description
Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.
Affected range
<1.23.10
Fixed version
1.23.10
EPSS Score
0.010%
EPSS Percentile
1st percentile
Description
Proxy-Authorization and Proxy-Authenticate headers persisted on cross-origin redirects, potentially leaking sensitive information.
Affected range
<1.24.11
Fixed version
1.24.11
EPSS Score
0.021%
EPSS Percentile
5th percentile
Description
An excluded subdomain constraint in a certificate chain does not restrict the usage of wildcard SANs in the leaf certificate. For example, a constraint that excludes the subdomain test.example.com does not prevent a leaf certificate from claiming the SAN *.example.com.
Affected range
<1.23.12
Fixed version
1.23.12
EPSS Score
0.020%
EPSS Percentile
5th percentile
Description
If the PATH environment variable contains paths which are executables (rather than just directories), passing certain strings to LookPath ("", ".", and "..") can result in the binaries listed in the PATH being unexpectedly returned.
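A small probe of this behavior (output depends on the platform, PATH contents, and Go version):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // On affected versions, if PATH listed an executable file directly,
    // these lookups could return that binary.
    for _, s := range []string{"", ".", ".."} {
        p, err := exec.LookPath(s)
        fmt.Printf("LookPath(%q) => %q, %v\n", s, p, err)
    }
}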
Affected range
<1.21.8
Fixed version
1.21.8
EPSS Score
0.362%
EPSS Percentile
58th percentile
Description
When parsing a multipart form (either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile), limits on the total size of the parsed form were not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing very long lines to cause allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion.
With fix, the ParseMultipartForm function now correctly limits the maximum size of form lines.
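A defense-in-depth sketch, independent of the stdlib fix: capping the whole request body with http.MaxBytesReader bounds memory even where per-line limits are missing.

package main

import (
    "fmt"
    "log"
    "net/http"
)

func upload(w http.ResponseWriter, r *http.Request) {
    // Cap the entire body before parsing the form.
    r.Body = http.MaxBytesReader(w, r.Body, 10<<20) // 10 MiB total

    if err := r.ParseMultipartForm(1 << 20); err != nil { // 1 MiB in memory
        http.Error(w, "bad form", http.StatusBadRequest)
        return
    }
    fmt.Fprintln(w, "ok")
}

func main() {
    http.HandleFunc("/upload", upload)
    log.Fatal(http.ListenAndServe(":8080", nil))
}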
Affected range
<1.22.11
Fixed version
1.22.11
EPSS Score
0.048%
EPSS Percentile
15th percentile
Description
A certificate with a URI which has an IPv6 address with a zone ID may incorrectly satisfy a URI name constraint that applies to the certificate chain.
Certificates containing URIs are not permitted in the web PKI, so this only affects users of private PKIs which make use of URIs.
Affected range
<1.22.11
Fixed version
1.22.11
EPSS Score
0.078%
EPSS Percentile
24th percentile
Description
The HTTP client drops sensitive headers after following a cross-domain redirect. For example, a request to a.com/ containing an Authorization header which is redirected to b.com/ will not send that header to b.com.
In the event that the client received a subsequent same-domain redirect, however, the sensitive headers would be restored. For example, a chain of redirects from a.com/, to b.com/1, and finally to b.com/2 would incorrectly send the Authorization header to b.com/2.
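A client-side mitigation sketch, not the stdlib fix: strip sensitive headers in CheckRedirect whenever a redirect leaves the original host.

package main

import "net/http"

func newClient() *http.Client {
    return &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            // req is the upcoming redirected request; via[0] is the original.
            if req.URL.Hostname() != via[0].URL.Hostname() {
                req.Header.Del("Authorization")
                req.Header.Del("Cookie")
            }
            return nil
        },
    }
}

func main() {
    client := newClient()
    resp, err := client.Get("https://example.com/")
    if err == nil {
        resp.Body.Close()
    }
}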
Affected range
<1.21.8
Fixed version
1.21.8
EPSS Score
0.445%
EPSS Percentile
63rd percentile
Description
Verifying a certificate chain which contains a certificate with an unknown public key algorithm will cause Certificate.Verify to panic.
This affects all crypto/tls clients, and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates.
Affected range
<1.23.10
Fixed version
1.23.10
EPSS Score
0.008%
EPSS Percentile
1st percentile
Description
os.OpenFile(path, os.O_CREATE|os.O_EXCL) behaved differently on Unix and Windows systems when the target path was a dangling symlink. On Unix systems, OpenFile with O_CREATE and O_EXCL flags never follows symlinks. On Windows, when the target path was a symlink to a nonexistent location, OpenFile would create a file in that location. OpenFile now always returns an error when the O_CREATE and O_EXCL flags are both set and the target path is a symlink.
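The call shape in question; with the fix, both platforms refuse when the target is a symlink (the path is a placeholder):

package main

import (
    "fmt"
    "os"
)

func main() {
    // O_CREATE|O_EXCL must never follow symlinks, dangling or not.
    f, err := os.OpenFile("/tmp/target", os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o600)
    if err != nil {
        fmt.Println("refused:", err)
        return
    }
    defer f.Close()
    fmt.Println("created", f.Name())
}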
Affected range
<1.21.11
Fixed version
1.21.11
EPSS Score
0.006%
EPSS Percentile
0th percentile
Description
The archive/zip package's handling of certain types of invalid zip files differs from the behavior of most zip implementations. This misalignment could be exploited to create a zip file with contents that vary depending on the implementation reading the file. The archive/zip package now rejects files containing these errors.
Affected range
<1.21.8
Fixed version
1.21.8
EPSS Score
0.273%
EPSS Percentile
50th percentile
Description
If errors returned from MarshalJSON methods contain user controlled data, they may be used to break the contextual auto-escaping behavior of the html/template package, allowing for subsequent actions to inject unexpected content into templates.
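A sketch of the attack shape, under the assumption that the attacker influences the MarshalJSON error text; on patched versions execution fails safely rather than emitting the raw error into the page:

package main

import (
    "errors"
    "fmt"
    "html/template"
    "os"
)

type evil struct{}

// The error text stands in for attacker-controlled data.
func (evil) MarshalJSON() ([]byte, error) {
    return nil, errors.New(`breakout "*/ alert(1); /*"`)
}

func main() {
    t := template.Must(template.New("t").Parse(`<script>var v = {{.}};</script>`))
    if err := t.Execute(os.Stdout, evil{}); err != nil {
        fmt.Println("\nexecute error:", err)
    }
}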
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.025%
EPSS Percentile
6th percentile
Description
The Reader.ReadResponse function constructs a response string through repeated string concatenation of lines. When the number of lines in a response is large, this can cause excessive CPU consumption.
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.019%
EPSS Percentile
4th percentile
Description
When Conn.Handshake fails during ALPN negotiation, the error contains attacker-controlled information (the ALPN protocols sent by the client) which is not escaped.
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.029%
EPSS Percentile
7th percentile
Description
Despite HTTP headers having a default limit of 1MB, the number of cookies that can be parsed does not have a limit. By sending a lot of very small cookies such as "a=;", an attacker can make an HTTP server allocate a large number of structs, causing large memory consumption.
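A rough measure of the allocation behavior: a Cookie header well under the 1MB default still parses into hundreds of thousands of structs.

package main

import (
    "fmt"
    "net/http"
    "strings"
)

func main() {
    h := http.Header{}
    h.Set("Cookie", strings.Repeat("a=;", 300_000)) // ~900 KB header value
    req := &http.Request{Header: h}
    fmt.Println("parsed cookies:", len(req.Cookies()))
}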
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.033%
EPSS Percentile
9th percentile
Description
Parsing a maliciously crafted DER payload could allocate large amounts of memory, causing memory exhaustion.
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.025%
EPSS Percentile
6th percentile
Description
The Parse function permits values other than IPv6 addresses to be included in square brackets within the host component of a URL. RFC 3986 permits IPv6 addresses to be included within the host component, enclosed within square brackets. For example: "http://[::1]/". IPv4 addresses and hostnames must not appear within square brackets. Parse did not enforce this requirement.
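A quick check of the three cases; patched versions reject the last two:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    for _, s := range []string{"http://[::1]/", "http://[127.0.0.1]/", "http://[example.com]/"} {
        u, err := url.Parse(s)
        if err != nil {
            fmt.Println(s, "=> rejected:", err)
            continue
        }
        fmt.Println(s, "=> host:", u.Host)
    }
}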
Affected range
>=1.21.0-0 <1.21.4
Fixed version
1.21.4
EPSS Score
0.040%
EPSS Percentile
12th percentile
Description
On Windows, the IsLocal function does not correctly detect reserved device names in some cases.
Reserved names followed by spaces, such as "COM1 ", and reserved names "COM" and "LPT" followed by superscript 1, 2, or 3, are incorrectly reported as local.
With fix, IsLocal now correctly reports these names as non-local.
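A probe for the cases listed (meaningful on Windows only; the superscripts are written as Unicode escapes):

package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // "COM1 ", "COM¹", "LPT²": reported non-local on Windows with the fix.
    for _, name := range []string{"COM1 ", "COM\u00B9", "LPT\u00B2"} {
        fmt.Printf("IsLocal(%q) = %v\n", name, filepath.IsLocal(name))
    }
}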
Affected range
>=1.21.0-0 <1.21.5
Fixed version
1.21.5
EPSS Score
0.048%
EPSS Percentile
15th percentile
Description
A malicious HTTP sender can use chunk extensions to cause a receiver reading from a request or response body to read many more bytes from the network than are in the body.
A malicious HTTP client can further exploit this to cause a server to automatically read a large amount of data (up to about 1GiB) when a handler fails to read the entire body of a request.
Chunk extensions are a little-used HTTP feature which permit including additional metadata in a request or response body sent using the chunked encoding. The net/http chunked encoding reader discards this metadata. A sender can exploit this by inserting a large metadata segment with each byte transferred. The chunk reader now produces an error if the ratio of real body to encoded bytes grows too small.
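A sketch of the amplification, again assuming the chunked reader is reachable via httputil.NewChunkedReader; each one-byte chunk drags a ~200-byte extension that the reader parses and discards:

package main

import (
    "fmt"
    "io"
    "net/http/httputil"
    "strings"
)

func main() {
    ext := ";pad=" + strings.Repeat("x", 200)
    var b strings.Builder
    for i := 0; i < 1000; i++ {
        b.WriteString("1" + ext + "\r\na\r\n")
    }
    b.WriteString("0\r\n\r\n")

    r := httputil.NewChunkedReader(strings.NewReader(b.String()))
    data, err := io.ReadAll(r)
    // Affected versions return all 1000 body bytes; patched versions may
    // error once the body-to-wire ratio trips the new limit.
    fmt.Println(len(data), err)
}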
Affected range
<1.24.8
Fixed version
1.24.8
EPSS Score
0.014%
EPSS Percentile
2nd percentile
Description
tar.Reader does not limit the number of sparse region data blocks in GNU tar pax 1.0 sparse files. A maliciously-crafted archive containing a large number of sparse regions can cause a Reader to read an unbounded amount of data from the archive into memory. When reading from a compressed source, a small compressed input can result in large allocations.
Affected range
<1.22.7
Fixed version
1.22.7
EPSS Score
0.073%
EPSS Percentile
22nd percentile
Description
Calling any of the Parse functions on Go source code which contains deeply nested literals can cause a panic due to stack exhaustion.
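An input in this class; on patched versions ParseFile reports a nesting-depth error rather than exhausting the stack:

package main

import (
    "fmt"
    "go/parser"
    "go/token"
    "strings"
)

func main() {
    depth := 100_000
    src := "package p\nvar x = " +
        strings.Repeat("[]any{", depth) + strings.Repeat("}", depth)

    _, err := parser.ParseFile(token.NewFileSet(), "x.go", src, 0)
    fmt.Println(err)
}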
Affected range
<1.21.8
Fixed version
1.21.8
EPSS Score
0.454%
EPSS Percentile
63rd percentile
Description
When following an HTTP redirect to a domain which is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as "Authorization" or "Cookie". For example, a redirect from foo.com to www.foo.com will forward the Authorization header, but a redirect to bar.com will not.
A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded.
Affected range
<1.22.12
Fixed version
1.22.12
EPSS Score
0.017%
EPSS Percentile
3rd percentile
Description
Due to the usage of a variable time instruction in the assembly implementation of an internal function, a small number of bits of secret scalars are leaked on the ppc64le architecture. Due to the way this function is used, we do not believe this leakage is enough to allow recovery of the private key when P-256 is used in any well known protocols.
Applications and libraries which misuse the ServerConfig.PublicKeyCallback callback may be susceptible to an authorization bypass.
The documentation for ServerConfig.PublicKeyCallback says that "A call to this function does not guarantee that the key offered is in fact used to authenticate." Specifically, the SSH protocol allows clients to inquire about whether a public key is acceptable before proving control of the corresponding private key. PublicKeyCallback may be called with multiple keys, and the order in which the keys were provided cannot be used to infer which key the client successfully authenticated with, if any. Some applications, which store the key(s) passed to PublicKeyCallback (or derived information) and make security relevant determinations based on it once the connection is established, may make incorrect assumptions.
For example, an attacker may send public keys A and B, and then authenticate with A. PublicKeyCallback would be called only twice, first with A and then with B. A vulnerable application may then make authorization decisions based on key B for which the attacker does not actually control the private key.
Since this API is widely misused, as a partial mitigation golang.org/x/crypto@v0.31.0 enforces the property that, when successfully authenticating via public key, the last key passed to ServerConfig.PublicKeyCallback will be the key used to authenticate the connection. PublicKeyCallback will now be called multiple times with the same key, if necessary. Note that the client may still not control the last key passed to PublicKeyCallback if the connection is then authenticated with a different method, such as PasswordCallback, KeyboardInteractiveCallback, or NoClientAuth.
Users should be using the Extensions field of the Permissions return value from the various authentication callbacks to record data associated with the authentication attempt instead of referencing external state. Once the connection is established the state corresponding to the successful authentication attempt can be retrieved via the ServerConn.Permissions field. Note that some third-party libraries misuse the Permissions type by sharing it across authentication attempts; users of third-party libraries should refer to the relevant projects for guidance.
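A sketch of the recommended Extensions pattern (the "pubkey-fp" key name follows the x/crypto/ssh documentation example; the authorized map is an assumption of this sketch):

package main

import (
    "fmt"

    "golang.org/x/crypto/ssh"
)

func newServerConfig(authorized map[string]bool) *ssh.ServerConfig {
    return &ssh.ServerConfig{
        PublicKeyCallback: func(conn ssh.ConnMetadata, key ssh.PublicKey) (*ssh.Permissions, error) {
            fp := ssh.FingerprintSHA256(key)
            if !authorized[fp] {
                return nil, fmt.Errorf("unknown public key for %q", conn.User())
            }
            // Record the offered key with this attempt, not in external state.
            return &ssh.Permissions{
                Extensions: map[string]string{"pubkey-fp": fp},
            }, nil
        },
    }
}

// After the handshake, read the key actually used for authentication
// back from ServerConn.Permissions.Extensions["pubkey-fp"].
func main() { _ = newServerConfig(nil) }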
Affected range
<0.43.0
Fixed version
0.43.0
EPSS Score
0.018%
EPSS Percentile
4th percentile
Description
SSH clients receiving SSH_AGENT_SUCCESS when expecting a typed response will panic and cause early termination of the client process.
Affected range
<0.35.0
Fixed version
0.35.0
EPSS Score
0.217%
EPSS Percentile
44th percentile
Description
SSH servers which implement file transfer protocols are vulnerable to a denial of service attack from clients which complete the key exchange slowly, or not at all, causing pending content to be read into memory, but never transmitted.
Terrapin is a prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary amount of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it.
To mitigate this protocol vulnerability, OpenSSH suggested a so-called "strict kex" which alters the SSH handshake to ensure a Man-in-the-Middle attacker cannot introduce unauthenticated messages as well as convey sequence number manipulation across handshakes.
Warning: To take effect, both the client and server must support this countermeasure.
As a stop-gap measure, peers may also (temporarily) disable the affected algorithms and use unaffected alternatives like AES-GCM instead until patches are available.
The SSH specifications of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com MACs) are vulnerable against an arbitrary prefix truncation attack (a.k.a. Terrapin attack). This allows for an extension negotiation downgrade by stripping the SSH_MSG_EXT_INFO sent after the first message after SSH_MSG_NEWKEYS, downgrading security, and disabling attack countermeasures in some versions of OpenSSH. When targeting Encrypt-then-MAC, this attack requires the use of a CBC cipher to be practically exploitable due to the internal workings of the cipher mode. Additionally, this novel attack technique can be used to exploit previously unexploitable implementation flaws in a Man-in-the-Middle scenario.
The attack works by an attacker injecting an arbitrary number of SSH_MSG_IGNORE messages during the initial key exchange and consequently removing the same number of messages just after the initial key exchange has concluded. This is possible due to missing authentication of the excess SSH_MSG_IGNORE messages and the fact that the implicit sequence numbers used within the SSH protocol are only checked after the initial key exchange.
In the case of ChaCha20-Poly1305, the attack is guaranteed to work on every connection as this cipher does not maintain an internal state other than the message's sequence number. In the case of Encrypt-Then-MAC, practical exploitation requires the use of a CBC cipher; while theoretical integrity is broken for all ciphers when using this mode, message processing will fail at the application layer for CTR and stream ciphers.
This attack targets the specification of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com), which are widely adopted by well-known SSH implementations and can be considered de-facto standard. These algorithms can be practically exploited; however, in the case of Encrypt-Then-MAC, we additionally require the use of a CBC cipher. As a consequence, this attack works against all well-behaving SSH implementations supporting either of those algorithms and can be used to downgrade (but not fully strip) connection security in case SSH extension negotiation (RFC8308) is supported. The attack may also enable attackers to exploit certain implementation flaws in a man-in-the-middle (MitM) scenario.
Allocation of Resources Without Limits or Throttling
Affected range
<0.45.0
Fixed version
0.45.0
CVSS Score
5.3
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
EPSS Score
0.082%
EPSS Percentile
24th percentile
Description
SSH servers parsing GSSAPI authentication requests do not validate the number of mechanisms specified in the request, allowing an attacker to cause unbounded memory consumption.
Out-of-bounds Read
Affected range
<0.45.0
Fixed version
0.45.0
CVSS Score
5.3
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
EPSS Score
0.051%
EPSS Percentile
16th percentile
Description
SSH Agent servers do not validate the size of messages when processing new identity requests, which may cause the program to panic if the message is malformed due to an out of bounds read.
github.com/coredns/coredns 1.11.1 (golang)
pkg:golang/github.com/coredns/coredns@1.11.1
Allocation of Resources Without Limits or Throttling
A Denial of Service (DoS) vulnerability was discovered in the CoreDNS DNS-over-QUIC (DoQ) server implementation. The server previously created a new goroutine for every incoming QUIC stream without imposing any limits on the number of concurrent streams or goroutines. A remote, unauthenticated attacker could open a large number of streams, leading to uncontrolled memory consumption and eventually causing an Out Of Memory (OOM) crash — especially in containerized or memory-constrained environments.
Impact: High availability loss (OOM kill or unresponsiveness)
This issue affects deployments with quic:// enabled in the Corefile. A single attacker can cause the CoreDNS instance to become unresponsive using minimal bandwidth and CPU.
The patch introduces two key mitigation mechanisms:
max_streams: Caps the number of concurrent QUIC streams per connection. Default: 256.
worker_pool_size: Introduces a server-wide, bounded worker pool to process incoming streams. Default: 1024.
This eliminates the 1:1 stream-to-goroutine model and ensures that CoreDNS remains resilient under high concurrency. The new configuration options are exposed through the quic Corefile block:
quic {
    max_streams 256
    worker_pool_size 1024
}
These defaults are generous and aligned with typical DNS-over-QUIC client behavior.
Please consult our security guide for more information regarding our security process.
Incorrect Conversion between Numeric Types
Affected range
>=1.2.0 <1.12.4
Fixed version
1.12.4
CVSS Score
7.1
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:H
EPSS Score
0.059%
EPSS Percentile
18th percentile
Description
Summary
The CoreDNS etcd plugin contains a TTL confusion vulnerability where lease IDs are incorrectly used as TTL values, enabling cache pinning for very long periods. This can effectively cause a denial of service for DNS updates/changes to affected services.
Details
In plugin/etcd/etcd.go, the TTL() function casts the 64-bit etcd lease ID to a uint32 and uses it as the TTL:
func (e *Etcd) TTL(kv *mvccpb.KeyValue, serv *msg.Service) uint32 {
    etcdTTL := uint32(kv.Lease) // BUG: Lease ID != TTL duration
    // ... rest of function uses etcdTTL as the actual TTL
}
Lease IDs are identifiers, not durations. Large lease IDs can produce very large TTLs after truncation, causing downstream resolvers and clients to cache answers for years.
This enables cache pinning attacks, such as:
Attacker has etcd write access (compromised service account, misconfigured RBAC/TLS, exposed etcd, insider).
Attacker writes/updates a key and attaches any lease (the actual lease duration is irrelevant; the ID is misused).
CoreDNS serves the record with an extreme TTL; downstream resolvers/clients cache it for a very long time.
Even after fixing/deleting the key (or restarting CoreDNS), clients continue to use the cached answer until their caches expire or enforce their own TTL caps.
Some resolvers implement TTL caps, but values and defaults vary widely and are not guaranteed.
Query the DNS record and observe the record TTL at 28 years:
$ dig +noall +answer @127.0.0.1 -p 1053 large-lease-service.skydns.local A
large-lease-service.skydns.local. 901209123 IN A 192.168.1.101
Impact
Affects any CoreDNS deployment using the etcd plugin for service discovery.
Availability: High as service changes (IP rotations, failovers, rollbacks) may be ignored for extended periods by caches.
Integrity: Low as stale/incorrect answers persist abnormally long. (Note: attacker with etcd write could already point to malicious endpoints; the bug magnifies persistence.)
Confidentiality: None.
The bug was introduced in #1702 as part of the CoreDNS v1.2.0 release.
Mitigation
The TTL function should utilise etcd's Lease API to determine the proper TTL for leased records. Add configurable limits for minimum and maximum TTL when passing lease records, to clamp potentially extreme TTL values set as lease grant.
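A hypothetical clamp in the spirit of that mitigation (the names and bounds are illustrative, not CoreDNS code):

package main

import "fmt"

// clampTTL bounds a TTL derived from etcd lease data.
func clampTTL(ttl, minTTL, maxTTL uint32) uint32 {
    if ttl < minTTL {
        return minTTL
    }
    if ttl > maxTTL {
        return maxTTL
    }
    return ttl
}

func main() {
    fmt.Println(clampTTL(901209123, 30, 3600)) // 3600, not 28 years
}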
Credit
Thanks to @thevilledev for discovering this vulnerability and contributing a fix.
For more information
Please consult our security guide for more information regarding our security process.
In affected releases of gRPC-Go, it is possible for an attacker to send HTTP/2 requests, cancel them, and send subsequent requests, which is valid by the HTTP/2 protocol, but would cause the gRPC-Go server to launch more concurrent method handlers than the configured maximum stream limit.
This vulnerability was addressed by #6703 and has been included in patch releases: 1.56.3, 1.57.1, 1.58.3. It is also included in the latest release, 1.59.0.
Along with applying the patch, users should also ensure they are using the grpc.MaxConcurrentStreams server option to apply a limit to the server's resources used for any single connection.
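The server option in question, as a minimal sketch:

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
)

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatal(err)
    }
    // Bound per-connection stream concurrency, per the advice above.
    srv := grpc.NewServer(grpc.MaxConcurrentStreams(100))
    log.Fatal(srv.Serve(lis))
}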
An attacker can send HTTP/2 requests, cancel them, and send subsequent requests. This is valid by the HTTP/2 protocol, but would cause the gRPC-Go server to launch more concurrent method handlers than the configured maximum stream limit, grpc.MaxConcurrentStreams. This results in a denial of service due to resource consumption.
A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.
With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.
This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.
The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.
The HTTP/2 protocol allows clients to indicate to the server that a previous stream should be canceled by sending a RST_STREAM frame. The protocol does not require the client and server to coordinate the cancellation in any way; the client may do it unilaterally. The client may also assume that the cancellation will take effect immediately when the server receives the RST_STREAM frame, before any other data from that TCP connection is processed.
Abuse of this feature is called a Rapid Reset attack because it relies on the ability for an endpoint to send a RST_STREAM frame immediately after sending a request frame, which makes the other endpoint start working and then rapidly resets the request. The request is canceled, but leaves the HTTP/2 connection open.
The HTTP/2 Rapid Reset attack built on this capability is simple: The client opens a large number of streams at once as in the standard HTTP/2 attack, but rather than waiting for a response to each request stream from the server or proxy, the client cancels each request immediately.
The ability to reset streams immediately allows each connection to have an indefinite number of requests in flight. By explicitly canceling the requests, the attacker never exceeds the limit on the number of concurrent open streams. The number of in-flight requests is no longer dependent on the round-trip time (RTT), but only on the available network bandwidth.
In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client on the other hand paid almost no costs for sending the requests. This creates an exploitable cost asymmetry between the server and the client.
Multiple software artifacts implementing HTTP/2 are affected. This advisory was originally ingested from the swift-nio-http2 repo advisory, and their original content follows.
swift-nio-http2 is vulnerable to a denial-of-service vulnerability in which a malicious client can create and then reset a large number of HTTP/2 streams in a short period of time. This causes swift-nio-http2 to commit to a large amount of expensive work which it then throws away, including creating entirely new Channels to serve the traffic. This can easily overwhelm an EventLoop and prevent it from making forward progress.
swift-nio-http2 1.28 contains a remediation for this issue that applies a reset counter using a sliding window. This constrains the number of stream resets that may occur in a given window of time. Clients violating this limit will have their connections torn down. This allows clients to continue to cancel streams for legitimate reasons, while constraining malicious actors.
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
The tokenizer incorrectly interprets tags with unquoted attribute values that end with a solidus character (/) as self-closing. When directly using Tokenizer, this can result in such tags incorrectly being marked as self-closing, and when using the Parse functions, this can result in content following such tags being placed in the wrong scope during DOM construction, but only when the tags are in foreign content (e.g. <math> or <svg> contexts).
Affected range
<0.33.0
Fixed version
0.33.0
EPSS Score
0.157%
EPSS Percentile
37th percentile
Description
An attacker can craft an input to the Parse functions that would be processed non-linearly with respect to its length, resulting in extremely slow parsing. This could cause a denial of service.
Uncontrolled Resource Consumption
Affected range
<0.23.0
Fixed version
0.23.0
CVSS Score
5.3
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
EPSS Score
66.635%
EPSS Percentile
98th percentile
Description
An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames. Maintaining HPACK state requires parsing and processing all HEADERS and CONTINUATION frames on a connection. When a request's headers exceed MaxHeaderBytes, no memory is allocated to store the excess headers, but they are still parsed. This permits an attacker to cause an HTTP/2 endpoint to read arbitrary amounts of header data, all associated with a request which is going to be rejected. These headers can include Huffman-encoded data which is significantly more expensive for the receiver to decode than for an attacker to send. The fix sets a limit on the amount of excess header frames we will process before closing a connection.
Misinterpretation of Input
Affected range
<0.36.0
Fixed version
0.36.0
CVSS Score
4.4
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:L
EPSS Score
0.023%
EPSS Percentile
5th percentile
Description
Matching of hosts against proxy patterns can improperly treat an IPv6 zone ID as a hostname component. For example, when the NO_PROXY environment variable is set to "*.example.com", a request to "[::1%25.example.com]:80" will incorrectly match and not be proxied.
As a result, in the face of a malicious request whose Authorization header consists of Bearer followed by many period characters, a call to that function incurs allocations to the tune of O(n) bytes (where n stands for the length of the function's argument), with a constant factor of about 16. Relevant weakness: CWE-405: Asymmetric Resource Consumption (Amplification)
Unclear documentation of the error behavior in ParseWithClaims can lead to a situation where users are potentially not checking errors in the way they should be. In particular, if a token is both expired and invalid, the errors returned by ParseWithClaims include both error codes. If users only check for jwt.ErrTokenExpired using errors.Is, they will ignore the embedded jwt.ErrTokenSignatureInvalid and thus potentially accept invalid tokens.
We have back-ported the error handling logic from the v5 branch to the v4 branch. In this logic, the ParseWithClaims function will immediately return in "dangerous" situations (e.g., an invalid signature), limiting the combined errors only to situations where the signature is valid, but further validation failed (e.g., if the signature is valid, but is expired AND has the wrong audience). This fix is part of the 4.5.1 release.
We are aware that this changes the behaviour of an established function and is not 100% backwards compatible, so updating to 4.5.1 might break your code. In case you cannot update to 4.5.1, please make sure that you are properly checking for all errors ("dangerous" ones first), so that you are not running into the case detailed above.
token, err := /* jwt.Parse or similar */
if token.Valid {
    fmt.Println("You look nice today")
} else if errors.Is(err, jwt.ErrTokenMalformed) {
    fmt.Println("That's not even a token")
} else if errors.Is(err, jwt.ErrTokenUnverifiable) {
    fmt.Println("We could not verify this token")
} else if errors.Is(err, jwt.ErrTokenSignatureInvalid) {
    fmt.Println("This token has an invalid signature")
} else if errors.Is(err, jwt.ErrTokenExpired) || errors.Is(err, jwt.ErrTokenNotValidYet) {
    // Token is either expired or not active yet
    fmt.Println("Timing is everything")
} else {
    fmt.Println("Couldn't handle this token:", err)
}
golang.org/x/oauth2 0.12.0 (golang)
pkg:golang/golang.org/x/oauth2@0.12.0
Improper Validation of Syntactic Correctness of Input
Affected range
<0.27.0
Fixed version
0.27.0
CVSS Score
7.5
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score
0.116%
EPSS Percentile
31st percentile
Description
An attacker can pass a malicious malformed token which causes unexpected memory to be consumed during parsing.
github.com/nats-io/nkeys 0.4.4 (golang)
pkg:golang/github.com/nats-io/nkeys@0.4.4
Use of Hard-coded Cryptographic Key
NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.
The cryptographic key handling library, nkeys, recently gained support for encryption, not just for signing/authentication. This is used in nats-server 2.10 (Sep 2023) and newer for authentication callouts.
The nkeys library's "xkeys" encryption handling logic mistakenly passed an array by value into an internal function, where the function mutated that buffer to populate the encryption key to use. As a result, all encryption was actually to an all-zeros key.
This affects encryption only, not signing.
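The general Go pitfall behind this bug, in isolation (not the nkeys source): arrays are values, so a function that fills an array parameter mutates only its own copy.

package main

import "fmt"

func fillKey(key [32]byte) {
    for i := range key {
        key[i] = 0xFF // mutates the copy only
    }
}

func fillKeyPtr(key *[32]byte) {
    for i := range key {
        key[i] = 0xFF // mutates the caller's array
    }
}

func main() {
    var k [32]byte
    fillKey(k)
    fmt.Println(k[0]) // 0: still the all-zeros key
    fillKeyPtr(&k)
    fmt.Println(k[0]) // 255
}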
FIXME: FILL IN IMPACT ON NATS-SERVER AUTH CALLOUT SECURITY.
Upgrade the nats-server.
For any application handling auth callouts in Go, if using the nkeys library, update the dependency, recompile and deploy that in lockstep.
This vulnerability allows an attacker with a trusted public key to cause a Denial-of-Service (DoS) condition by crafting a malicious JSON Web Encryption (JWE) token with an exceptionally high compression ratio. When this token is processed by the recipient, it results in significant memory allocation and processing time during decompression.
The attacker needs to obtain a valid public key to compress the payload. It needs to be valid so that the recipient can use it to successfully decompress the payload. Furthermore, in the context of JWT processing in the v2 versions, the recipient must explicitly allow JWE handling.
The attacker then crafts a message with a high compression ratio, e.g. a payload with a very high frequency of repeating patterns that can decompress to a much larger size. If the payload is large enough, the recipient who is decompressing the data will have to allocate a large amount of memory, which can then lead to a denial of service.
The original report includes a reference to [1], but there are some very subtle differences between this library and the aforementioned issue. The most important aspect is that the referenced issue focuses on JWT processing, whereas this library is intentionally divided into parts that comprise JOSE, i.e. JWT, JWS, JWE, JWK. In particular, v2 of this library does not attempt to handle JWT payload enveloped in a JWE message automatically (v1 attempted to do this automatically, but it was never stable).
Reflecting this subtle difference, the approach taken to mitigate this vulnerability is slightly different from the referenced issue. The referenced issue limits the size of the JWT when parsing, but the fix for this library limits the maximum size of the decompressed data when decrypting JWE messages. Therefore the fix in this library is applicable regardless of the usage context, and a limit is now imposed on the size of the message that our JWE implementation can handle.
Modified from the original report to fit the vulnerability better:
// The value below just needs to be "large enough" so that it puts enough strain on the
// recipient's environment. The value below is a safe size on my machine to run the test
// without causing problems. When you increase the payload size, at some point the processing
// will be slow enough to virtually freeze the program or cause a memory allocation error.
const payloadSize = 1 << 31

privkey, err := rsa.GenerateKey(rand.Reader, 2048)
require.NoError(t, err, `rsa.GenerateKey should succeed`)
pubkey := &privkey.PublicKey

payload := strings.Repeat("x", payloadSize)
encrypted, err := jwe.Encrypt(
    []byte(payload),
    jwe.WithKey(jwa.RSA_OAEP, pubkey),
    jwe.WithContentEncryption("A128CBC-HS256"),
    jwe.WithCompress(jwa.Deflate),
)
require.NoError(t, err, `jwe.Encrypt should succeed`)

// Will be allocating large amounts of memory
_, err = jwe.Decrypt(encrypted, jwe.WithKey(jwa.RSA_OAEP, privkey))
require.Error(t, err, `jwe.Decrypt should fail`)
The JWE key management algorithms based on PBKDF2 require a JOSE Header Parameter called p2c (PBES2 Count). This parameter dictates the number of PBKDF2 iterations needed to derive a CEK wrapping key. Its primary purpose is to intentionally slow down the key derivation function, making password brute-force and dictionary attacks more resource-intensive.
Therefore, if an attacker sets the p2c parameter in a JWE to a very large number, it can cause heavy computational consumption, resulting in a DoS attack.
This seems to also affect other functions that call Parse internally, like jws.Verify.
My understanding of these functions from the docs is that they are supposed to fail gracefully on invalid input and don't require any prior validation.
Based on the stack trace in the PoC, the issue seems to be that the processing done in jws/message.go:UnmarshalJSON() assumes that if a signature field is present, then a protected field is also present. If this is not the case, then the subsequent call to getB64Value(sig.protected) will dereference sig.protected, which is nil.
The vulnerability can be used to crash / DOS a system doing JWS verification.
github.com/rs/cors 1.10.0 (golang)
pkg:golang/github.com/rs/cors@1.10.0
Allocation of Resources Without Limits or Throttling
Affected range
>=1.9.0 <1.11.0
Fixed version
1.11.0
EPSS Score
0.128%
EPSS Percentile
33rd percentile
Description
Middleware causes a prohibitive amount of heap allocations when processing malicious preflight requests that include a Access-Control-Request-Headers (ACRH) header whose value contains many commas. This behavior can be abused by attackers to produce undue load on the middleware/server as an attempt to cause a denial of service.
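The shape of such a preflight request, as a local sketch against the middleware (the attacker origin and sizes are illustrative):

package main

import (
    "net/http"
    "net/http/httptest"
    "strings"

    "github.com/rs/cors"
)

func main() {
    handler := cors.Default().Handler(http.NotFoundHandler())

    req := httptest.NewRequest(http.MethodOptions, "http://example.com/", nil)
    req.Header.Set("Origin", "http://attacker.example")
    req.Header.Set("Access-Control-Request-Method", http.MethodGet)
    // Mostly commas: each one cost an allocation on affected versions.
    req.Header.Set("Access-Control-Request-Headers", strings.Repeat(",", 1<<20))

    handler.ServeHTTP(httptest.NewRecorder(), req)
}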
OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
Affected range
>=1.9.0 <1.11.0
Fixed version
1.11.0
Description
Middleware causes a prohibitive amount of heap allocations when processing malicious preflight requests that include a Access-Control-Request-Headers (ACRH) header whose value contains many commas. This behavior can be abused by attackers to produce undue load on the middleware/server as an attempt to cause a denial of service.
gopkg.in/square/go-jose.v2 2.6.0 (golang)
pkg:golang/gopkg.in/square/go-jose.v2@2.6.0
Improper Handling of Highly Compressed Data (Data Amplification)
An attacker could send a JWE containing compressed data that used large amounts of memory and CPU when decompressed by Decrypt or DecryptMulti. Those functions now return an error if the decompressed data would exceed 250kB or 10x the compressed size (whichever is larger). Thanks to Enze Wang@Alioth and Jianjun Chen@Zhongguancun Lab (@zer0yu and @chenjj) for reporting.
In Eclipse Paho Go MQTT v3.1 library (paho.mqtt.golang) versions <=1.5.0, UTF-8 encoded strings passed into the library may be incorrectly encoded if their length exceeds 65535 bytes. This may lead to unexpected content in packets sent to the server (for example, part of an MQTT topic may leak into the message body in a PUBLISH packet).
The issue arises because the length of the data passed in was converted from an int64/int32 (depending upon CPU) to an int16 without checks for overflows. The int16 length was then written, followed by the data (e.g. topic). This meant that when the data (e.g. topic) was over 65535 bytes then the amount of data written exceeds what the length field indicates. This could lead to a corrupt packet, or mean that the excess data leaks into another field (e.g. topic leaks into message body).
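A sketch of the bug class (not the paho.mqtt.golang source): writing a 16-bit length prefix without an overflow check makes the header disagree with the data that follows.

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "strings"
)

func writeString(buf *bytes.Buffer, s string) {
    var l [2]byte
    binary.BigEndian.PutUint16(l[:], uint16(len(s))) // silently truncates
    buf.Write(l[:])
    buf.WriteString(s)
}

func main() {
    var buf bytes.Buffer
    writeString(&buf, strings.Repeat("t", 70_000))
    fmt.Println("declared:", binary.BigEndian.Uint16(buf.Bytes()[:2])) // 4464
    fmt.Println("actual:  ", buf.Len()-2)                              // 70000
}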
google.golang.org/protobuf 1.31.0 (golang)
pkg:golang/google.golang.org/protobuf@1.31.0
Loop with Unreachable Exit Condition ('Infinite Loop')
The protojson.Unmarshal function can enter an infinite loop when unmarshaling certain forms of invalid JSON. This condition can occur when unmarshaling into a message which contains a google.protobuf.Any value, or when the UnmarshalOptions.DiscardUnknown option is set.
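The call shape involved; the input here is illustrative (truncated JSON), not a known looping input, and patched versions return a parse error for malformed documents:

package main

import (
    "fmt"

    "google.golang.org/protobuf/encoding/protojson"
    "google.golang.org/protobuf/types/known/anypb"
)

func main() {
    var msg anypb.Any
    err := protojson.Unmarshal([]byte(`{"@type":`), &msg)
    fmt.Println(err)
}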