Newer versions of systemd etc. seem to require quite a lot of entropy
from /dev/random while booting, which can block and therefore delay the
start of other services (in particular sshd) by more than a minute.
Using the host's /dev/urandom via VirtIO RNG, we can avoid blocking the
guests.
The required kernel options are added for kernel versions 5.4+.
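For reference, this presumably boils down to something like the following
(a sketch with mainline option/flag names, not the exact fragments used):

    # guest kernel config: VirtIO RNG driver feeding the entropy pool
    CONFIG_HW_RANDOM=y
    CONFIG_HW_RANDOM_VIRTIO=y

    # host side (QEMU): expose the host's /dev/urandom to the guest
    -object rng-random,id=rng0,filename=/dev/urandom
    -device virtio-rng-pci,rng=rng0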
While `pos` was moved to the end, `len` was not adjusted (i.e. set to 0),
so later calls could write beyond the buffer. However, the last port
written might have been incomplete, so instead we just reset the string.
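A minimal sketch of the fixed pattern (buffer, helper name and port source
are illustrative, not the actual code):

    #include <stdio.h>

    /* append ports as a comma-separated list into a fixed-size buffer */
    static void append_ports(char *buf, size_t buflen, const int *ports,
                             int count)
    {
        char *pos = buf;
        size_t len = buflen;
        int i, written;

        buf[0] = '\0';
        for (i = 0; i < count; i++)
        {
            written = snprintf(pos, len, "%s%d", pos == buf ? "" : ",",
                               ports[i]);
            if (written < 0 || (size_t)written >= len)
            {   /* truncated: the last port might be incomplete, so reset
                 * the whole string (the old code moved pos but left len
                 * unchanged, so later calls could write past the buffer) */
                buf[0] = '\0';
                break;
            }
            pos += written;
            len -= written;
        }
    }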
Don't abort the script if the version is reported as UNKNOWN, which happens
on CI hosts where the repository is only cloned with a certain depth (which
may not include the latest tag).
Also, never map VERSION to UNKNOWN.
Fixes: 2e522952c7 ("configure: Optionally use version information obtained from Git in executables")
If it takes a while to start one of the threads, another thread might already
have passed the previously used usleep() call and re-enabled cancelability,
so that the loop that checked for it would never terminate.
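A self-contained illustration of that race with plain pthreads (the actual
test uses strongSwan's thread API; everything here is illustrative):

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <unistd.h>

    #define THREADS 4

    static atomic_int disabled;  /* threads with cancelability disabled */

    static void *run(void *arg)
    {
        int old;

        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old);
        atomic_fetch_add(&disabled, 1);
        usleep(1000);                       /* simulated work */
        atomic_fetch_sub(&disabled, 1);
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &old);
        pause();                            /* wait to get cancelled */
        return NULL;
    }

    int main()
    {
        pthread_t threads[THREADS];
        int i;

        for (i = 0; i < THREADS; i++)
        {
            pthread_create(&threads[i], NULL, run, NULL);
        }
        /* flawed sync: if one thread is slow to start while another has
         * already re-enabled cancelability, the counter never reaches
         * THREADS and this loop spins forever */
        while (atomic_load(&disabled) != THREADS)
        {
            sched_yield();
        }
        for (i = 0; i < THREADS; i++)
        {
            pthread_cancel(threads[i]);
            pthread_join(threads[i], NULL);
        }
        return 0;
    }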
The variable GIT_VERSION is always defined, either obtained from Git or from
a file that is embedded in tarballs when they are built. Optionally,
that version is declared as VERSION in config.h so it will be used e.g. in
the daemons when they print the version number.
There is a check that should catch missing tags (i.e. if the version number
in AC_INIT() isn't a prefix of the version obtained via Git).
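Taken together, the intended behavior could be sketched in configure.ac
terms roughly as follows (a hypothetical fragment, not the actual script):

    if test "x$GIT_VERSION" = xUNKNOWN; then
      # e.g. a shallow CI clone without tags: warn instead of aborting,
      # and leave VERSION at the value from AC_INIT()
      AC_MSG_WARN([unable to determine version via Git])
    else
      case "$GIT_VERSION" in
        "$PACKAGE_VERSION"*) ;;
        *) AC_MSG_ERROR([$PACKAGE_VERSION is not a prefix of $GIT_VERSION]) ;;
      esac
      AC_DEFINE_UNQUOTED([VERSION], ["$GIT_VERSION"], [version obtained via Git])
    fi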
These changes store all CA certificates in vici_authority_t, which avoids
issues with unloading authority sections or clearing credentials.
Closes strongswan/strongswan#172.
This way we only have one reference for each CA certificate, whether it
is loaded in an authority section, a connection, or via the load-certs() command.
It also avoids enumerating CA certificates multiple times if they are
loaded in different ways.
With the previous approach, CA certificates that were not re-loaded via
load-cert() (e.g. from tokens or via absolute paths) would not be available
anymore after the clear-creds() command was used. The new approach avoids
that issue, but can cause duplicate CA certificates to get stored and
enumerated, which might become an issue at scale.
This changes the hashtable implementation so that it maintains insertion
order. This is then used in the vici plugin to store connections in a
hash table instead of a linked list, which makes managing them quite a
bit faster if there are lots of connections.
The old implementation is extracted into a new class (hashlist_t), which
optionally supports sorting keys and provides the previous get_match()
function.
This reduces the clustering problem (primary clustering), and while it is
not completely free of it (secondary clustering), it still reduces the
maximum and average probing lengths.
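The trade-off described (primary clustering avoided, secondary clustering
remaining) matches a quadratic probe sequence; a sketch of such a probe loop
(the actual step function may differ):

    #include <stdbool.h>

    /* triangular-number steps visit every slot of a power-of-two table
     * exactly once; keys hashing to different slots take different paths
     * (no primary clustering), but equal hashes share the same path
     * (secondary clustering) */
    unsigned int probe(unsigned int hash, unsigned int mask,
                       bool (*occupied)(unsigned int))
    {
        unsigned int i, slot = hash & mask;

        for (i = 1; occupied(slot); i++)
        {
            slot = (slot + i) & mask;  /* offsets 1, 3, 6, ... = i*(i+1)/2 */
        }
        return slot;
    }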
With the previous approach we'd require at least an additional pointer
per item to store them in a list (15-18% increase in the overhead per
item). Instead we switch from handling collisions with overflow lists to
an open addressing scheme and store the actual table as variable-sized
indices pointing into an array of all inserted items in their original
order.
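A sketch of the resulting layout (illustrative declarations; how empty
slots and index widths are encoded is an assumption):

    #include <stdint.h>

    typedef struct {
        void *key;
        void *value;
    } pair_t;

    typedef struct {
        pair_t  *items;  /* all inserted items, kept in insertion order */
        uint32_t count;  /* slots of items[] used so far */
        void    *table;  /* open-addressed table of indices into items[];
                          * entries are 8/16/32 bits wide depending on the
                          * table size ("variable-sized indices"), storing
                          * index + 1 so that 0 can mean an empty slot */
        uint32_t size;   /* number of table slots (a power of two) */
        uint32_t mask;   /* size - 1, to map hashes to slots */
    } hashtable_sketch_t;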
This can reduce the memory overhead even compared to the previous
implementation (especially for smaller tables), but because the array for
items is preallocated whenever the table is resized, it can be worse for
certain numbers of items. However, avoiding all the allocations required
by the previous design is actually a big advantage.
Depending on the usage pattern, the performance can improve quite a bit (in
particular when inserting many items). The raw lookup performance is a bit
slower as probing lengths increase with open addressing, but there are some
caching benefits due to the compact storage. So for general usage the
performance should be better. For instance, one test I did was counting the
occurrences of words in a list of 1'000'000 randomly selected words from a
dictionary of ~58'000 words (i.e. using a counter stored under each word as
key). The new implementation was ~8% faster on average while requiring
10% less memory.
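For illustration, that test presumably boils down to something like this
(sketched against hashtable_t's create/get/put methods; the actual
benchmark code may differ):

    #include <stdint.h>
    #include <collections/hashtable.h>

    /* count how often each word occurs, storing the counter as value
     * under the word itself as key */
    static hashtable_t *count_words(char *words[], int n)
    {
        hashtable_t *counts;
        uintptr_t count;
        int i;

        counts = hashtable_create(hashtable_hash_str,
                                  hashtable_equals_str, 8);
        for (i = 0; i < n; i++)
        {
            count = (uintptr_t)counts->get(counts, words[i]);
            counts->put(counts, words[i], (void*)(count + 1));
        }
        return counts;
    }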
Since we can't remove items from the array (that would change the indices of
all items that follow), we just mark them as removed and remove them once the
hash table is resized/rehashed (the cells in the hash table for these may
be reused). Due to this, the latter may also happen even if the number of
stored items does not increase, e.g. after a series of remove/put operations
(each insertion requires storage in the array, regardless of whether items
were removed).
So if the capacity is exhausted, the table is resized/rehashed (after lots
of removals the size may even be reduced) and all items marked as removed
are simply skipped.
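Using the illustrative declarations from above, the compaction during a
rehash could look like this (assuming removal is marked by clearing the
key):

    /* copy live pairs into a fresh array in insertion order, dropping
     * pairs marked as removed; the index table is then rebuilt from the
     * compacted array */
    static uint32_t compact(const pair_t *items, uint32_t count,
                            pair_t *out)
    {
        uint32_t i, live = 0;

        for (i = 0; i < count; i++)
        {
            if (items[i].key)  /* skip items marked as removed */
            {
                out[live++] = items[i];
            }
        }
        return live;
    }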
Compared to the previous implementation the load factor/capacity is
lowered to reduce chances of collisions and to avoid primary clustering to
some degree. However, primary clustering in particular, and the open
addressing scheme in general, make this implementation completely unsuited
for the get_match() functionality (it purposefully hashes different keys to
the same value and, therefore, increases the probing length and clustering).
And keeping the
keys optionally sorted would complicate the code significantly. So we just
keep the existing hashlist_t implementation without adding code to maintain
the overall insertion order (we could add that feature optionally later, but
with the mentioned overhead for one or two pointers).
The maximum size is currently not changed. With the new implementation
this translates to a hard limit for the maximum number of items that can be
held in the table (=CAPACITY(MAX_SIZE)). Since this equals 715'827'882
items with the current settings, this shouldn't be a problem in practice;
the table alone would require 20 GiB of memory for that many items. The
hashlist_t implementation doesn't have that limitation due to the overflow
lists (it can store beyond its capacity), but it itself would require over
29 GiB of memory to hold that many items.
The main intention here is that we can change the hashtable_t
implementation without being impeded by the special requirements imposed
by get_match() and sorting the keys/items in buckets.
This can improve negative lookups, but is mostly intended to be used
with get_match() so keys/items can be matched/enumerated in a specific
order. It's like storing sorted linked lists under a shared key but
with less memory overhead.
They are not marked as temporary addresses, so make sure we always return
them, whether temporary addresses are preferred as source addresses or not,
as we need to enumerate them when searching for addresses in traffic
selectors to install routes.
Fixes: 9f12b8a61c ("kernel-netlink: Enumerate temporary IPv6 addresses according to config")
We don't track CHILD_SA down events anymore and rely on NM's initial timeout
to let the user know if the connection failed initially. So we also don't
have to explicitly differentiate between initial connection failures and
later ones like we do on Android. Also, with the default retransmission
settings, there will only be one keying try as NM's timeout is lower than
the combined retransmission timeout of 165s.
There is no visual indicator while the connection is reestablished later.
Fixes #3300.