If we pass a pointer to a NULL pointer, the memory allocated by OpenSSL
has to be freed with OPENSSL_free(). Otherwise, this can lead to random
crashes/freezes in Windows builds, as seen on AppVeyor. To not
complicate things for callers of this macro, we allocate our own memory,
as we already do for other i2d_* calls.
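For illustration, a minimal sketch of that pattern, using i2d_ECPrivateKey()
as a stand-in for whatever i2d_* call the macro actually wraps (the function
and error handling here are assumptions, not the real macro):

```c
#include <stdlib.h>
#include <openssl/ec.h>

/* allocate the buffer ourselves instead of letting OpenSSL do it, so the
 * result can be released with free() rather than OPENSSL_free() */
static unsigned char *encode_key(EC_KEY *key, int *len)
{
	unsigned char *buf, *pos;

	*len = i2d_ECPrivateKey(key, NULL);	/* query required length */
	if (*len <= 0)
	{
		return NULL;
	}
	buf = malloc(*len);
	pos = buf;	/* i2d_* advances the pointer it is given */
	if (i2d_ECPrivateKey(key, &pos) != *len)
	{
		free(buf);
		return NULL;
	}
	return buf;
}
```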
If no valid key is configured (e.g. because it's inadvertently uninitialized),
we should not just reuse the previous key.
The `key_set` flag is not necessary anymore because a non-NULL key is set
during initialization since 6b347d5232 ("openssl: Ensure underlying hash
algorithm is available during HMAC init").
In some cases, the algorithms that have been compiled into a plugin have
to be disabled at runtime. Based on the array returned by the get_features()
function, the optionally provided function can strip algorithms, or even
callbacks or registrations, from a plugin, giving us a handy and powerful
way to configure features at runtime aside from the plugin list.
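A hypothetical sketch of such a filter (the types, the feature
representation and is_disabled() are made up for illustration; the actual
hook operates on the plugin loader's feature array):

```c
#include <stdbool.h>

typedef struct {
	int kind;	/* e.g. algorithm, callback or registration */
	int alg;	/* algorithm identifier, if applicable */
} feature_t;

static bool is_disabled(feature_t *feature)
{
	return false;	/* e.g. consult runtime configuration here */
}

/* compact the feature array in place, returning the new count */
static int filter_features(feature_t *features, int count)
{
	int i, remaining = 0;

	for (i = 0; i < count; i++)
	{
		if (!is_disabled(&features[i]))
		{
			features[remaining++] = features[i];
		}
	}
	return remaining;
}
```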
Signed-off-by: Thomas Egerer <thomas.egerer@secunet.com>
This adds helper functions to determine the first or last directory separator
in a string and to check if a given character is a separator.
Paths starting with a separator are now also considered absolute on
Windows as these are rooted at the current drive.
Note that it's fine to use DIRECTORY_SEPARATOR when combining strings as
Windows API calls accept both forward and backward slashes as separators.
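A minimal sketch of what these helpers could look like (the names
path_is_separator() and path_first_separator() are assumptions for
illustration):

```c
#include <stdbool.h>
#include <stddef.h>

/* TRUE if the given character is a directory separator */
static bool path_is_separator(char c)
{
#ifdef WIN32
	/* Windows accepts both forward and backward slashes */
	return c == '/' || c == '\\';
#else
	return c == '/';
#endif
}

/* pointer to the first directory separator in path, or NULL */
static char *path_first_separator(char *path)
{
	for (; *path; path++)
	{
		if (path_is_separator(*path))
		{
			return path;
		}
	}
	return NULL;
}
```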
Co-authored-by: Michał Skalski <mskalski@enigma.com.pl>
References #3684.
Apparently, we should use OPENSSL_free() to release memory allocated by
OpenSSL. While it generally maps to free(), that's not the case on
Windows, where the ECP test vectors caused `ACCESS_VIOLATION exception`
crashes (not always with the same vector).
Fixes: 74e02ff5e6 ("openssl: Mainly use EVP interface for ECDH")
Functions like ECDH_compute_key() will be removed with OpenSSL 3 (which
will require additional changes as other functions will be deprecated or
removed too).
ECDH_compute_key() was not used because it only gives the x-coordinate of
the result. However, the default setting, as per the mentioned errata,
is to use the x-coordinate only.
Use ECDH_compute_key() for this setting, as it additionally allows HW
offload of the computation via the dynamic engine feature in OpenSSL.
EC_POINT_mul() doesn't allow HW offload.
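A rough sketch of the call in question (key setup and error handling
omitted; passing NULL as KDF yields the raw x-coordinate, matching the
default setting mentioned above):

```c
#include <openssl/ecdh.h>

/* derive the shared secret; the computation may be offloaded to a HW
 * engine registered for ECDH via OpenSSL's dynamic engine feature */
static int compute_shared(EC_KEY *priv, const EC_POINT *pub,
			  unsigned char *secret, size_t len)
{
	return ECDH_compute_key(secret, len, pub, priv, NULL);
}
```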
Signed-off-by: Mahantesh Salimath <mahantesh@nvidia.com>
To align with RFC 4519, section 2.31/32, the abbreviation for surname
is changed to "SN", which was previously used for serialNumber; the
latter does not have an abbreviation anymore.
This mapping had its origins in the X.509 patch for FreeS/WAN that was
started in 2000. It was aligned with how OpenSSL did this in earlier
versions. However, there it was changed already in March 2002 (commit
ffbe98b7630d604263cfb1118c67ca2617a8e222) to make it compatible with
RFC 2256 (predecessor of RFC 4519).
Co-authored-by: Tobias Brunner <tobias@strongswan.org>
Closes strongswan/strongswan#179.
Depending on where bison is called from, the file might not end up in
the same directory as the .y file, but in the directory of the Makefile.
This has been seen on FreeBSD.
There is a conflict between Flex's bison-bridge and Bison's api.prefix
options. Apparently, the former was added without consulting the Bison
devs and requires YYSTYPE, which is not added to the header anymore by
the latter. Instead, we just provide the proper declaration of yylex()
manually (as recommended by the Bison docs), so the option is not
required anymore.
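A sketch of such a manual declaration in the grammar file's prologue (the
semantic value type and scanner handle are stand-ins; with api.prefix set,
the actual names carry the configured prefix):

```c
typedef int YYSTYPE;	/* stand-in for the real semantic value type */
typedef void *yyscan_t;	/* opaque scanner handle from flex */

/* declared manually instead of relying on Flex's bison-bridge option */
int yylex(YYSTYPE *lvalp, yyscan_t scanner);
```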
Without threads handling the resolution, there is no point waiting
for a reply. If no subsequent resolution successfully starts a
thread (there might not even be one), we'd wait indefinitely.
Fixes #3634.
The DN is otherwise not parsed until it is compared/printed. This avoids
false detection as an ASN.1 DN if e.g. an email address starts with "0",
which is 0x30, the ASN.1 SEQUENCE tag, and the next character happens to
denote the exact length of the rest of the string (see the unit tests
for an example).
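A sketch of the ambiguity (not the actual parser code): a naive DER check
misfires exactly when the second character equals the length of the
remainder:

```c
#include <stdbool.h>
#include <stddef.h>

/* naive check: SEQUENCE tag followed by a matching length byte; e.g. a
 * 66-byte e-mail address "0@..." matches, since '@' is 0x40 = 64 */
static bool looks_like_asn1_dn(const unsigned char *buf, size_t len)
{
	return len >= 2 && buf[0] == 0x30 && buf[1] == len - 2;
}
```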
None of our build environments seem to require these declarations. And
current versions of MinGW-w64 define them as inline functions in stdio.h,
so these declarations clashed with that ("static declaration of '...'
follows non-static declaration").
In FIPS mode, libgcrypt uses a DRBG, which behaves differently when the
length passed to gcry_create_nonce() or gcry_randomize() is <= 0. It
expects a struct and explicitly checks that the passed pointer is not
NULL.
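So zero-length requests have to be avoided, e.g. with a guard like this
(a minimal sketch; the surrounding RNG method is assumed):

```c
#include <gcrypt.h>

static void get_random(void *buffer, size_t bytes)
{
	if (bytes)
	{	/* the DRBG in FIPS mode can't handle zero-length requests */
		gcry_randomize(buffer, bytes, GCRY_STRONG_RANDOM);
	}
}
```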
Commit 27756b081c (revocation: Check that nonce in OCSP response matches)
introduced strict nonce validation to prevent replay attacks with OCSP
responses having a longer lifetime. However, many commercial CAs (such as
Digicert) do not support nonces in responses, as they reuse once-issued OCSP
responses for the OCSP lifetime. This can be problematic in replay attack
scenarios, but is nothing we can fix on our end.
With the mentioned commit, such OCSP responses become completely unusable,
requiring a fallback to CRL-based revocation. CRLs don't provide any
replay protection either, so nothing is gained security-wise, but it may
require a download of several megabytes of CRL data.
To make use of replay protection where available, but fix OCSP verification
where it is not, do nonce verification only if the response actually contains
a nonce. To be safe against replay attacks, one has to fix the OCSP responder
or use a different CA, but this is not something we can enforce.
Fixes #3557.
The x509 plugin has accepted CRL signers since forever, to be precise,
since dffb176f2b ("CRLSign keyUsage or CA basicConstraint are sufficient
for CRL validation").
References #3529.
If it takes a while to start one of the threads, another thread might already
have passed the previously used usleep() call and re-enabled cancelability,
so that the loop that checked for it would never terminate.
This reduces the clustering problem (primary clustering) but is not
completely free of it (secondary clustering). Still, it reduces the
maximum and average probing lengths.
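That trade-off is characteristic of quadratic probing; the exact probe
sequence isn't stated above, so the following is an illustrative sketch
rather than the actual code:

```c
/* i-th probe for hash h in a table with 2^n slots (mask = 2^n - 1):
 * the triangular step (i*i + i)/2 visits each slot of a power-of-two
 * table exactly once and breaks up the runs linear probing produces
 * (primary clustering); keys with the same initial slot still share
 * the whole sequence (secondary clustering) */
static inline unsigned int probe(unsigned int h, unsigned int i,
				 unsigned int mask)
{
	return (h + (i * i + i) / 2) & mask;
}
```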
With the previous approach we'd require at least an additional pointer
per item to store them in a list (15-18% increase in the overhead per
item). Instead we switch from handling collisions with overflow lists to
an open addressing scheme and store the actual table as variable-sized
indices pointing into an array of all inserted items in their original
order.
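Conceptually, the layout could look like this (names and field sizes are
made up for illustration):

```c
#include <stdint.h>

/* one entry in the insertion-ordered item array */
typedef struct {
	uint32_t hash;	/* cached hash of the key */
	void *key;
	void *value;
} item_t;

/* the table itself only stores indices into the item array; the index
 * width can grow with the table size (e.g. 1, 2 or 4 bytes per slot) */
typedef struct {
	void *table;	/* variable-sized slots: empty or an item index */
	item_t *items;	/* all inserted items, in insertion order */
	uint32_t count;	/* items used in the array */
	uint32_t size;	/* number of slots in the table (power of two) */
} compact_table_t;
```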
This can reduce the memory overhead even compared to the previous
implementation (especially for smaller tables), but because the array for
items is preallocated whenever the table is resized, it can be worse for
certain numbers of items. However, avoiding all the allocations required
by the previous design is actually a big advantage.
Depending on the usage pattern, the performance can improve quite a bit (in
particular when inserting many items). The raw lookup performance is a bit
slower as probing lengths increase with open addressing, but there are some
caching benefits due to the compact storage. So for general usage the
performance should be better. For instance, one test I did was counting the
occurrences of words in a list of 1'000'000 randomly selected words from a
dictionary of ~58'000 words (i.e. using a counter stored under each word as
key). The new implementation was ~8% faster on average while requiring
10% less memory.
Since we can't remove items from the array (doing so would change the
indices of all items that follow), we just mark them as removed and drop
them once the hash table is resized/rehashed (the cells in the hash table
for these items may be reused). Due to this, the latter may also happen
even if the number of stored items does not increase, e.g. after a series
of remove/put operations (each insertion requires storage in the array,
no matter if items were removed before). So if the capacity is exhausted,
the table is resized/rehashed (after lots of removals the size may even
be reduced) and all items marked as removed are simply skipped.
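A sketch of these removal semantics (item_t redeclared from the earlier
sketch to keep this self-contained; the sentinel value is an assumption):

```c
#include <stdint.h>

typedef struct {
	uint32_t hash;
	void *key;
	void *value;
} item_t;

#define REMOVED ((void*)~(uintptr_t)0)	/* marks a removed item */

static void remove_item(item_t *items, uint32_t idx)
{
	items[idx].key = REMOVED;
	items[idx].value = NULL;
}

/* during a resize/rehash, removed items are skipped, finally
 * reclaiming the array storage they occupied */
static uint32_t compact(item_t *items, uint32_t count)
{
	uint32_t i, remaining = 0;

	for (i = 0; i < count; i++)
	{
		if (items[i].key != REMOVED)
		{
			items[remaining++] = items[i];
		}
	}
	return remaining;
}
```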
Compared to the previous implementation, the load factor/capacity is
lowered to reduce the chance of collisions and to avoid primary clustering
to some degree. However, the latter in particular, and the open addressing
scheme in general, make this implementation completely unsuited for the
get_match() functionality (which purposefully hashes keys to the same value
and, therefore, increases the probing length and clustering). And keeping
the keys optionally sorted would complicate the code significantly. So we
just keep the existing hashlist_t implementation without adding code to
maintain the overall insertion order (we could add that feature optionally
later, but with the mentioned overhead of one or two pointers per item).
The maximum size is currently not changed. With the new implementation,
this translates to a hard limit for the maximum number of items that can be
held in the table (=CAPACITY(MAX_SIZE)). Since this equals 715'827'882
items with the current settings, it shouldn't be a problem in practice:
the table alone would require 20 GiB of memory for that many items. The
hashlist_t implementation doesn't have that limitation due to the overflow
lists (it can store beyond its capacity), but it itself would require over
29 GiB of memory to hold that many items.
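The number is consistent with a load factor of 2/3 and a MAX_SIZE of 2^30,
though both values are assumptions here:

```c
#include <stdio.h>

int main(void)
{
	unsigned long long max_size = 1ULL << 30;

	/* CAPACITY(size) = size * 2 / 3 */
	printf("%llu\n", max_size * 2 / 3);	/* prints 715827882 */
	return 0;
}
```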
The main intention here is that we can change the hashtable_t
implementation without being impeded by the special requirements imposed
by get_match() and sorting the keys/items in buckets.
This can improve negative lookups, but is mostly intended to be used
with get_match() so keys/items can be matched/enumerated in a specific
order. It's like storing sorted linked lists under a shared key but
with less memory overhead.
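A hypothetical usage sketch (the callback and the get_match() call are made
up for illustration; the actual API is whatever hashlist_t exposes):

```c
#include <stdbool.h>
#include <string.h>

/* with sorted keys, all items stored "under" a shared key are adjacent,
 * so a match callback can enumerate them in order and stop early */
static bool match_prefix(const char *key, const char *candidate)
{
	return strncmp(candidate, key, strlen(key)) == 0;
}

/* usage (hypothetical):
 *   item = list->get_match(list, "host-", (void*)match_prefix);
 */
```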