Commit e921b804d0 removed the
user data parameter from logging, so remove it here.
Explain how the debugging defines work.
If DEBUG_DUMPCAP is defined and dumpcap is a capture child, don't send
logs to stderr with normal formatting, because stderr will be connected
to the sync pipe. Don't send them to stdout either, because that can be
connected to a data pipe (e.g., for retrieving interface information).
Instead, send them to stderr with the special formatting, so that the
parent recognizes them.
Use va_copy if both DEBUG_DUMPCAP and DEBUG_CHILD_DUMPCAP are defined,
avoiding undefined behavior that can lead to segfaults.
Set the log level to DEBUG when running as a capture child if the
DEBUG defines are set, because sync_pipe_start doesn't pass along
log level information. If you turned on the extra #define, you
presumably want to debug.
If logging to a file, open the file before any log messages.
Get rid of a check for the log level being below the default level.
It's either redundant with a check already done in ws_log_full, or it
prevents logs from being shown when dumpcap is run standalone with
logging options.
Fix the file name in the introductory comment.
Update a comment to note that a base64 value is handled, in some ways,
like a nested element, even though it's not nested in the way that an
object or array is.
Have json_dumper_bad() write the current stack depth and the current and
previous types, in symbolic form if possible and numeric form otherwise;
don't dump other information. Also have it set JSON_DUMPER_FLAGS_ERROR,
so no other routine needs to do so.
Add routines to check for dumper stack overflow *and* underflow and
report them with appropriate messages, and use them in routines that
push onto or pop off of that stack, respectively.
This means that the stack depth won't overflow or underflow, so we can
make it unsigned (as it will never go below 0) and don't need to
check for negative or bigger-than-the-stack values.
Pull checks out of json_dumper_check_state() into various existing or
new routines (as common code for those existing routines to call), and have
the error messages passed to json_dumper_bad() give a more detailed
explanation of the particular problem detected.
Check whether the last received packet ended a transfer on STALL only if
there was an active transfer key set. This fixes a failed transfer type
assertion for control transfers without a data stage that were STALLed
by the device (during the status stage).
Set the "profile_filename" property on the special System Default
QAction in the CopyFromProfileButton so that the action will actually
do something when triggered.
Fix #13373
This adds the following KDEs defined by the Wi-SUN FAN specification:
- Pairwise Transient Key KDE (PTKID)
- Group Transient Key Liveness KDE (GTKL)
- Node Role KDE (NR)
- LFN Group Transient Key KDE (LGTK)
- LFN Group Transient Key Liveness KDE (LGTKL)
The Wi-SUN FAN specification describes the format of the EAPOL-Key frame
in section 6.5.2.2 (Authentication and PMK Installation Flow):
Descriptor Type = 2
Key Information:
1. Key Descriptor Version = 2
2. Key Type = 0
3. Install = 0
4. Key Ack = 0
5. Key MIC = 0
6. Secure = 0
7. Error = 0
8. Request = 1
9. Encrypted Key Data = 0
10. SMK Message = 0
11. Reserved = 0
Key Length = 0
Key Replay Counter = see [IEEE802.11] section 11.6.2.
Key Nonce = 0
EAPOL-Key IV = 0
Key RSC = 0
Key MIC = 0
Key Data Length = length of Key Data field in octets.
Key Data = PMKID KDE if the PMK is live, PTKID KDE if the PTK is live, GTKL
KDE, Node Role KDE, and LGTKL KDE.
The current dissector will try to decrypt if the Key Type is 0 while the
Encrypted Key Data is unset, which appears to be for supporting
non-standard WPA implementations. The Key Data is not encrypted in
Wi-SUN, so a workaround is made to dissect the Key Data if the Key
Length is 0.
Defined in the Wi-SUN FAN specification as:
id-kp-wisun-fan-device ::= {
iso(1)
identified-organization(3)
dod(6)
internet(1)
private(4)
enterprise(1)
Wi-SUN (45605)
FieldAreaNetwork(1)
}
Pop up a dialog about bad coloring rules when reading the file
(e.g., when first starting Wireshark), rather than waiting until
you try to edit them.
Have that dialog give details of the problem with the filter
instead of a generic message. The report_warning code will
consolidate multiple warnings into one if more than one filter
has an error, rather than have lots of pop-ups.
Since the dialog (or console message, in the unlikely event that
somehow the colorfilters are read in a CLI tool) is called from
the color filters code, get rid of the separate non-specific
pop-up in ColoringRulesDialog and the special preference for
having a bogus filter.
Now, if the user has a bogus filter in the current profile's
colorfilters file, they'll get a useful pop-up warning at startup,
when that filter is disabled. For filters imported / copied from
other profiles through the coloring rules dialog, they'll get the
same useful pop-up.
For trying to enable a disabled coloring rule with an error, or
inserting a *new* coloring rule with an invalid filter expression
(despite the editor's red background warning about an invalid
expression), there's already both the hint at the bottom of the screen
and the disabled OK button. (Maybe the hint could be larger or bold or
something when there's an error.)
Fix #14906. Fix #15034
Add routines to open and close an object, and use them. The open
routine takes a member name as an argument, sets it, and begins an
object; the close routine ends the object.
Have sharkd_json_response_close() end the object, just as
sharkd_json_response_open() begins it.
Have sharkd_session_process_tap_stats_node_cb() take a key and use that
when opening the array.
Have sharkd_session_process_frame_cb_tree() take a key and use that when
opening the array.
This makes the structure of the code better mirror the structure of the
JSON objects it marshals.
If there's a key for a string value, but there's no string value or no
format for a string value, crash with a null-pointer dereference rather
than putting out the key and then, on the next operation, getting a
"json_dumper_bad(): Bad json_dumper state: illegal transition" error as
in, for example, issue #18886. This way, it will be a bit more obvious
what the true error is.
If there's no key for a base64 value, crash rather than not setting the
key, for the same reason.
Update the example typical location for the temporary directory
on Windows in the manpages to something newer than where Windows NT
or Windows 98 might put it.
Fix #18463
If dfilter_compile() succeeds, but the filter contains deprecated
tokens, don't report an error from dfilter_compile() as a warning, as
there *is* no error from dfilter_compile(). Instead, report "Filter
contains deprecated tokens". (Feel free to improve the error text.)
Fixes the crash, at least, in #18886.
The token format used by rtp-analyse and rtp-download expects the SSRC
field to be a hex string parsable by `ws_hexstrtou32()`, as seen in
sharkd_session.c:760. The output from tap:rtp-streams was displaying
it as an unsigned integer.
For consistency, this field is now displayed as a hex string in the
output.
If the call to download an RTP stream did not match any payloads, Sharkd
would not return any information at all.
This now returns an error message indicating that there is no RTP data
available.
Adds three new selftests and a sample pcap.
A negative number of bits in a bit item isn't allowed. Treat it
as a very large number (i.e., as unsigned), and throw a
ReportedBoundsError. This was already happening in most cases,
but not in the edge case of a number of bits between -1 and -7
(which was being rounded up to 0 octets and passed our length checks).
Fix #18877