Experience has shown that:
1. The current logging methods are not very reliable or practical.
A logging bitmask makes little sense as the user-facing interface (who
would want debug but not critical messages, for example?); it's
computer-friendly and user-unfriendly. More importantly, the console
log level preference is initialized too late in the startup process
to be used for the logging subsystem and that fact raises a number
of annoying and hard-to-fix usability issues.
2. Coding around G_MESSAGES_DEBUG to comply with our log level mask
without clobbering the user's settings or creating unexpected log misses
is unworkable and generally follows the principle of most surprise.
The fact that G_MESSAGES_DEBUG="all" can leak to other programs using
GLib is also annoying.
3. The non-structured GLib logging API is very opinionated and lacks
configurability beyond replacing the log handler.
4. Windows GUI has some special code to attach to a console,
but it would be nice to abstract away the rest under a single
interface.
5. Using this logger seems to be noticeably faster.
Deprecate the console log level preference and extend our API to
implement a log handler in wsutil/wslog.h to provide easy-to-use,
flexible and dependable logging during all execution phases.
Log levels have a hierarchy, from most verbose to least verbose
(debug to error). When a given level is set everything above that
is also enabled.
The log level can be set with an environment variable or a command
line option (parsed as soon as possible but still later than the
environment). The default log level is "message".
Dissector logging is not included because it is not clear what log
domain dissectors should use. An explosion to thousands of domains is
not desirable and putting everything in a single domain is probably
too coarse and noisy. For now I think it makes sense to let them do
their own thing using g_log_default_handler() and continue using the
G_MESSAGES_DEBUG mechanism with specific domains for each individual
dissector.
In the future a mechanism may be added to selectively enable these
domains at runtime while trying to avoid the problems introduced
by G_MESSAGES_DEBUG.
* use SPEC names for fields
* decode missing controller ID
* fix reject payload parsing (size is 4 and not 32; no reserved field)
* simplify custom decoding using CF_FUNC.
Upgrade our vcpkg bundle to one that includes GLib 2.66.4 and libxml2
2.9.10.
Avoid running pkgconfig on Windows so that we don't find Strawberry
Perl's headers.
dissect_diameter_mip6_feature_vector is checking whether the data argument
(assigned to diam_sub_dis_inf) is null, but later dereferences it outside
the conditional, so if it was null it would crash anyway. It doesn't seem
possible for the data argument to actually be null, so this commit removes
the redundant check. I'm also adding an assert to document the non-null
assumption.
Bug found by clang static analyzer.
Fixes #17427.
In ca86d0ab38 I introduced a regression
that caused the A-bis/OML dissector to not recognize ip.access
specific messages as such. The check on the value returned by
tvb_memeql() needs to be inverted, because it returns 0 on success,
just like memcmp().
Additionally, I noticed that some implementations of the ip.access
dialect do terminate the manufacturer ID string with a null character,
while some do not. Handle both cases properly.
In rare circumstances where port numbers are reused and sequence
numbers are lower in the later conversations, disabling TCP
sequence numbers analysis while enabling out-of-order reassembly
was leading to reassembly inconsistency. Closes #15096.
If there is a DCP SET block with a block length of 0, it is dissected
as an erroneous block, since a DCP SET block cannot have a block
length of 0. Moreover, DCPBlockLength was not decoded when the DCP
option and suboption are both 0, even though every DCP block must
have Option/Suboption/DCPBlockLength fields. This is also fixed.
There's no reason to limit the tvb offset input parameter of this CRC8
function to a guint8, particularly now that the User Packet CRCs
later in the Base Band Frame are being checked.
Since the gssapi handler can cope fine with ntlm blobs, remove the
heuristic in ntlmssp and call the gssapi dissector directly. In turn
we get Kerberos support, including decryption with a keytab, etc.
When there are more packets on the stream after credssp, like tpkt-rdp
data, the credssp heuristic fails when invoked by tls, and then even
the packets for which the credssp heuristic succeeded do not get
dissected as credssp but as tpkt-continuation data.
To work around that, call the credssp heuristic dissector directly from
the rdp dissector before trying fastpath.
Leave the credssp heuristic in TLS for other protocols such as HTTP
where it may work.