reassemble: Improve perf of free_all_reassembled_fragments.

When we're walking the list of fragments to free, if we encounter
FD_VISITED_FREE we can stop traversing this fragment list immediately
(and move on to the next hash bucket), since everything after this point
in the list has already been processed by free_all_reassembled_fragments.
This trims an O(n^2) hash table iteration down to O(n).
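
To make the shape of the change easier to see, here is a minimal,
self-contained sketch of the early-exit pattern.  The names, types and
flag value below are hypothetical simplifications, not the real Wireshark
fragment_head or reassembly-table code:

    #include <stddef.h>

    #define VISITED_FREE 0x1000    /* hypothetical "already collected" marker */

    struct frag {
        unsigned int flags;
        struct frag *next;
    };

    /* Walk one fragment chain, handing unvisited fragments to 'collect'.
     * The same chain is reachable from many hash-table entries, so as soon
     * as a node marked VISITED_FREE is found we know the rest of the chain
     * was already handled by an earlier walk, and we stop immediately. */
    static void
    collect_chain(struct frag *head, void (*collect)(struct frag *))
    {
        for (struct frag *fd = head; fd != NULL; fd = fd->next) {
            if (fd->flags == VISITED_FREE)
                break;                     /* tail already handled via another bucket */
            collect(fd);                   /* queue this fragment for freeing */
            fd->flags = VISITED_FREE;      /* mark it so later walks stop here */
        }
    }

Each walk now stops as soon as it reaches fragments that an earlier walk
already marked, so the total work across the whole hash-table sweep is
proportional to the number of table entries plus fragments rather than
quadratic in it.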

Before this change, a very ugly 1.1 GByte TFTP capture (with lots of
out-of-order and retransmitted blocks) takes 4 hours to process with
tftp.defragment=TRUE -- output completes after 1.25 hours, and then about
2.75 hours are spent doing repeated list traversals within
free_all_reassembled_fragments (!).  With this change, the same test completes
in 1.25 hours, with the cleanup taking just 71 msec.

Also tested with reassemble_test under Valgrind; no issues or leaks were reported.
Darius Davis 2021-02-21 20:54:40 +10:00
parent 297246093b
commit f895014f68
1 changed file with 6 additions and 6 deletions

@@ -376,12 +376,12 @@ free_all_reassembled_fragments(gpointer key_arg _U_, gpointer value,
 		 * fragments to array and later free them in
 		 * free_fragments()
 		 */
-		if (fd_head->flags != FD_VISITED_FREE) {
-			if (fd_head->flags & FD_SUBSET_TVB)
-				fd_head->tvb_data = NULL;
-			g_ptr_array_add(allocated_fragments, fd_head);
-			fd_head->flags = FD_VISITED_FREE;
-		}
+		if (fd_head->flags == FD_VISITED_FREE)
+			break;
+		if (fd_head->flags & FD_SUBSET_TVB)
+			fd_head->tvb_data = NULL;
+		g_ptr_array_add(allocated_fragments, fd_head);
+		fd_head->flags = FD_VISITED_FREE;
 	}
 
 	return TRUE;