strongswan/src/libstrongswan/collections
Tobias Brunner 45376040ce hashtable: Maintain insertion order when enumerating
With the previous approach, we'd have required at least one additional
pointer per item to keep the items in a list (a 15-18% increase in the
per-item overhead).  Instead, we switch from handling collisions via
overflow lists to an open addressing scheme and store the actual table as
variable-sized indices pointing into an array of all inserted items in
their original order.
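
Roughly, the new layout can be pictured as in the following sketch (names,
types and field layout are illustrative only and not taken from the actual
code):

  #include <stdint.h>

  typedef struct {
      const void *key;      /* key as passed to put() */
      void *value;
      uint32_t hash;        /* cached hash of the key */
  } item_t;

  typedef struct {
      item_t *items;        /* all inserted items, in insertion order */
      uint32_t items_count; /* used slots in items[], incl. removed ones */
      uint32_t count;       /* items actually stored (not marked removed) */
      void *table;          /* size cells of variable-sized (e.g. 1-, 2- or
                             * 4-byte) indices into items[] */
      uint32_t size;        /* number of cells, a power of two */
      uint32_t mask;        /* size - 1, to map hashes to cells */
  } hashtable_sketch_t;

A lookup probes cells in table until one refers to an item with a matching
key (or an empty cell is hit), while enumeration just walks items[] front
to back, which yields the insertion order for free.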

This can reduce the memory overhead even compared to the previous
implementation (especially for smaller tables), but because the array for
items is preallocated whenever the table is resized, it can be worse for
certain numbers of items.  However, avoiding all the allocations required
by the previous design is actually a big advantage.

Depending on the usage pattern, the performance can improve quite a bit (in
particular when inserting many items).  The raw lookup performance is a bit
slower as probing lengths increase with open addressing, but there are some
caching benefits due to the compact storage.  So for general usage the
performance should be better.  For instance, one test I did was counting the
occurrences of words in a list of 1'000'000 randomly selected words from a
dictionary of ~58'000 words (i.e. using a counter stored under each word as
key).  The new implementation was ~8% faster on average while requiring
10% less memory.
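
The test was along the following lines (a rough sketch, not the actual
benchmark code; next_word() is a hypothetical helper, and the hashtable_t
calls should be checked against hashtable.h):

  #include <stdio.h>
  #include <stdint.h>

  #include <collections/hashtable.h>

  /* hypothetical helper returning the next of the 1'000'000 words as a
   * persistent string, or NULL once the list is exhausted */
  extern char *next_word(void);

  static void count_words(void)
  {
      hashtable_t *counts;
      enumerator_t *enumerator;
      uintptr_t count;
      char *word;
      void *key, *value;

      counts = hashtable_create(hashtable_hash_str, hashtable_equals_str, 8);
      while ((word = next_word()))
      {
          /* keys are not copied, so the word strings have to stay valid */
          count = (uintptr_t)counts->get(counts, word);
          counts->put(counts, word, (void*)(count + 1));
      }
      /* with this change, the words are enumerated in insertion order */
      enumerator = counts->create_enumerator(counts);
      while (enumerator->enumerate(enumerator, &key, &value))
      {
          printf("%s: %lu\n", (char*)key, (unsigned long)(uintptr_t)value);
      }
      enumerator->destroy(enumerator);
      counts->destroy(counts);
  }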

Since we can't remove items from the array (that would change the indices of
all items that follow them), we just mark them as removed and only drop them
once the hash table is resized/rehashed (the cells in the hash table for
such items may be reused earlier).  Due to this, the latter may also happen
even if the number of stored items does not increase, e.g. after a series of
remove/put operations (each insertion requires a slot in the array, no
matter how many items were removed before).  So if the capacity is
exhausted, the table is resized/rehashed (after lots of removals the size
may even be reduced) and all items marked as removed are simply skipped.
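
In terms of the sketch above, removal and rehashing then work roughly as
follows (again purely illustrative, not the actual code):

  /* removal only marks the slot in items[] (a NULL key here); the item is
   * really dropped the next time the table is rehashed */
  static void mark_removed(hashtable_sketch_t *ht, uint32_t idx)
  {
      ht->items[idx].key = NULL;
      ht->count--;
      /* the cell in ht->table that pointed to idx may be reused later */
  }

  /* called whenever items[] is exhausted, even if count did not grow */
  static void rehash(hashtable_sketch_t *ht, uint32_t new_size)
  {
      uint32_t i, j = 0;

      /* compact items[], skipping everything marked as removed */
      for (i = 0; i < ht->items_count; i++)
      {
          if (ht->items[i].key)
          {
              ht->items[j++] = ht->items[i];
          }
      }
      ht->items_count = j;
      /* then allocate a fresh table of new_size cells and reinsert the
       * indices 0..j-1 based on the cached hashes */
  }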

Compared to the previous implementation, the load factor/capacity is
lowered to reduce the chance of collisions and to avoid primary clustering
to some degree.  However, clustering in particular, and the open addressing
scheme in general, make this implementation completely unsuited for the
get_match() functionality (which purposefully hashes different keys to the
same value and, therefore, increases the probing length and clustering).
And keeping the keys optionally sorted would complicate the code
significantly.  So we just keep the existing hashlist_t implementation
without adding code to maintain the overall insertion order (we could
optionally add that feature later, but with the mentioned overhead of one
or two pointers per item).

The maximum table size is currently not changed.  With the new
implementation this translates to a hard limit on the number of items that
can be held in the table (= CAPACITY(MAX_SIZE)).  Since this equals
715'827'882 items with the current settings, it shouldn't be a problem in
practice: the table alone would require 20 GiB of memory for that many
items.  The hashlist_t implementation doesn't have that limitation due to
its overflow lists (it can store more items than its capacity), but it
would itself require over 29 GiB of memory to hold that many items.
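
For reference, those figures roughly break down as follows, assuming a
maximum size of 2^30 cells, a capacity of two thirds of that, 32-bit
indices, 24-byte items and 32-byte overflow-list entries on a 64-bit
platform (these constants are assumptions, not taken from the code):

  CAPACITY(MAX_SIZE)  = 2^30 / 3 * 2            = 715'827'882 items
  new index table     = 2^30        *  4 bytes  =  4 GiB
  new items array     = 715'827'882 * 24 bytes  ~ 16 GiB   (20 GiB total)
  hashlist_t buckets  = 2^30        *  8 bytes  =  8 GiB
  hashlist_t entries  = 715'827'882 * 32 bytes  ~ 21 GiB   (29+ GiB total)
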
2020-07-20 13:50:11 +02:00
array.c array: Avoid overflow in size calculation 2020-01-28 15:29:40 +01:00
array.h Spelling fixes 2020-02-11 18:23:07 +01:00
blocking_queue.c Unify format of HSR copyright statements 2018-05-23 16:32:53 +02:00
blocking_queue.h Unify format of HSR copyright statements 2018-05-23 16:32:53 +02:00
dictionary.h Unify format of HSR copyright statements 2018-05-23 16:32:53 +02:00
enumerator.c enumerator: Fall back to lstat() if stat() fails when enumerating dirs/files 2020-02-13 11:54:19 +01:00
enumerator.h Spelling fixes 2020-02-11 18:23:07 +01:00
hashlist.c hashtable: Maintain insertion order when enumerating 2020-07-20 13:50:11 +02:00
hashtable.c hashtable: Maintain insertion order when enumerating 2020-07-20 13:50:11 +02:00
hashtable.h hashtable: Maintain insertion order when enumerating 2020-07-20 13:50:11 +02:00
hashtable_profiler.h hashtable: Maintain insertion order when enumerating 2020-07-20 13:50:11 +02:00
linked_list.c linked-list: Order of insert_before/remove_at calls doesn't matter anymore 2018-06-26 15:11:02 +02:00
linked_list.h linked-list: Order of insert_before/remove_at calls doesn't matter anymore 2018-06-26 15:11:02 +02:00