The length specified in a TvbRange is the *actual packet length*, not
the *sliced-to* length, so use tvb_new_subset_length() to cut it short.
This amends the fix for #15655, and addresses at least some of the issues
in #17255.
(cherry picked from commit cda18f951e)
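The clamping idea behind tvb_new_subset_length() can be sketched in plain C; this is stand-in arithmetic with made-up names, not the actual tvbuff API:

```c
#include <assert.h>

/* Hypothetical illustration: a range handed out by a dissector must be
 * clamped to what the parent buffer actually holds (the sliced-to
 * length), never the full packet length. Not the Wireshark tvbuff API. */
static unsigned clamp_range_len(unsigned parent_len, unsigned offset,
                                unsigned wanted_len)
{
    unsigned avail = (offset < parent_len) ? parent_len - offset : 0;
    return (wanted_len < avail) ? wanted_len : avail;
}
```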
* Since c3342930 we no longer free the entries in the file hashtables.
The cleanest solution is probably to convert these hashtables into two
wmem_map_t structures and let the wmem core handle any cleanup.
* b0f5b2c174 added support for chained compression; the uncompressed
tvb must be freed
(cherry picked from commit e677a909e1)
IXFR and AXFR queries can have multiple DNS responses. As all responses
belong to one transaction, they have the same transaction ID.
We shouldn't handle them as retransmits.
Fix: wireshark/wireshark#17293
(cherry picked from commit 07fb47111e)
If a header declares a function, or anything else requiring the extern
"C" decoration, have it wrap the declaration itself; don't rely on the
header itself being included inside extern "C".
(cherry picked from commit 2820156fbd)
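The convention can be sketched as follows, with the header carrying its own guard instead of relying on the includer; the function name is made up:

```c
#include <assert.h>

/* Header portion (normally in its own .h file): the declaration wraps
 * itself in extern "C" when compiled as C++, so the header never needs
 * to be included inside an extern "C" block. */
#ifdef __cplusplus
extern "C" {
#endif

int my_exported_function(int x);

#ifdef __cplusplus
}
#endif

/* Implementation portion (normally in a .c file). */
int my_exported_function(int x)
{
    return x * 2;
}
```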
If a header declares a function, or anything else requiring the extern
"C" decoration, have it wrap the declaration itself; don't rely on the
header itself being included inside extern "C".
(cherry picked from commit 1e1f4e6b5f)
This patch fixes a bug in the current TECMP dissector that leads to
wrong timestamps, whenever the reserved flag is set to true.
Closes: #17279
(cherry picked from commit 5d709459c4)
This commit should be a proper fix for the regression reported in #17250
(7fd71536 is a simple workaround). This regression was introduced by
b287e716 while fixing the infinite loop reported in #16897.
b287e716, while fixing the infinite loop, broke the decoding of perfectly
valid tags not yet supported by Wireshark.
AFAIK, the root cause of the infinite loop is the overflow of the `offset`
variable. Therefore checking for this overflow should be sufficient to avoid
the loop.
Note that we already check for sensible values for the 'tag_len' variable;
we should update `total_tag_len` accordingly.
Some words about testing: other than correctly handling unknown but valid
tags, it is important that this commit doesn't reintroduce the infinite
loop bug.
Fortunately #16897 provided a POC trace. Unfortunately, if you revert
b287e716, this POC no longer works in the master-3.4 and master branches,
but it still triggers the infinite loop in the master-3.2 branch.
Therefore I was able to manually check that this MR plus the overflow
check is enough to avoid the infinite loop bug, at least in master-3.2.
Some traffic with unknown but valid tags is available in e2ee14ae03.
(cherry picked from commit 142cfb03ac)
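The overflow guard described above can be sketched like this; the variable names mirror the commit message, not the actual dissector code:

```c
#include <assert.h>
#include <limits.h>

/* Sketch of the overflow check: before advancing `offset` by
 * `total_tag_len`, verify that the addition cannot wrap around, since
 * that wrap-around is what caused the infinite loop. */
static int advance_offset(unsigned *offset, unsigned total_tag_len)
{
    if (total_tag_len > UINT_MAX - *offset)
        return 0;   /* would overflow: stop walking the tag list */
    *offset += total_tag_len;
    return 1;
}
```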
Regression introduced by b287e7165e.
To avoid an infinite loop with malformed packets, that commit stops
parsing the tags list after finding an unknown tag.
When this "unknown" tag is perfectly valid but not supported by
Wireshark, we don't decode any subsequent (valid) tags anymore.
GQUIC is going to die soon and it is quite unlikely to change in the
near future. Therefore the best/quickest solution is simply to decode
any valid tag.
Closes #17250
(cherry picked from commit 7fd7153696)
These will be backported, for the benefit of Lua scripts that want those
specific file types/subtypes (typically in order to write files of those
types); that allows those types to be fetched without having to know the
right string to hand to wslua_wtap_name_to_file_type_subtype().
(cherry picked from commit bc3cc17bc4)
In the proto tree, copy URLs instead of opening them.
In the export dialog, enable previews only if the advertised MIME type
*and* the contents of the file are plain text, GIF, JPEG, or PNG.
Add warnings to the wslua browser_open_url and browser_open_data_file
documentation.
Fixes #17232.
(cherry picked from commit e99c9afce8)
Recommend the use of wtap_name_to_file_type_subtype() to get filetype
values, unless you need to run on older versions of Wireshark that don't
have it.
Don't even *mention* wtap_filetypes in the documentation for the new
wtap_ routines, as, if you have those routines, you have
wtap_name_to_file_type_subtype(), because it's one of those routines.
Fix references to "nul" while we're at it - it's "nil" in Lua.
(That part of the WSDG - the Lua reference - is generated, so this
involves changing the source code implementing the Lua routines.)
(cherry picked from commit 5b3c3d0682)
Provide Lua version of wtap_file_type_subtype_string(),
wtap_file_type_subtype_short_string(), and
wtap_short_string_to_file_type_subtype().
This will be backported to the 3.2 and 3.4 branches, to allow scripts
not running on the bleeding-edge version to use them.
(cherry picked from commit f0ebc50762)
In our first pass through our options, look for ones that might require
extcap. Call extcap_register_preferences() only when that's the case.
Warn about missing extcap preferences only when we've loaded them.
(cherry picked from commit c7f66cf934)
Conflicts:
tshark.c
Without this patch, any Linux cooked packet capture on HDLC / frame
relay devices will not be dispatched to the proper dissector.
Such packets do carry a proper sll_hatype set to ARPHRD_FRAD and should
be dispatched accordingly. However, the packet-fr dissector so far
had not registered itself for that hardware type.
(cherry picked from commit b83f92a458)
When the refid contains non-ascii chars, the conversion function
returns a string longer than 4 chars. This results in an invalid
string if the output is limited to 4 bytes. Incidentally this
results in an invalid PDML output as well that caught this bug
in the first place.
Fix: #17112.
10204490d7 / MR 80 ensured that we didn't grow field.usages due to an
underflow, but it neglected to check for a sane array size. Add another
check to make sure we don't wmem_array_grow() too much. Fixes #17165 and
fixes #16809 more completely.
(cherry picked from commit 785e291c1b)
Usage Minimum and Usage Maximum are an inclusive, closed interval.
This fixes a fencepost error where the Usage Maximum value was
not being included as a possible value in the bitfield. Related
to #17014
(cherry picked from commit 5ca608f519)
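A simplified stand-in for the fencepost fix above; the closed interval means iteration must use `<=` so the maximum itself counts as a possible value:

```c
#include <assert.h>

/* Usage Minimum and Usage Maximum form a closed interval, so the loop
 * bound is inclusive. Stand-in code, not the actual dissector. */
static int count_usages(int usage_min, int usage_max)
{
    int count = 0;
    for (int u = usage_min; u <= usage_max; u++)  /* inclusive bound */
        count++;
    return count;
}
```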
It has the "feature" that, if handed a negative value, it might just
exit. gmtime() doesn't have that "feature", and is sufficiently
thread-safe for our purposes; use it instead, and check to make sure it
doesn't return a null pointer.
The previous fix for #17179 still used gmtime_s(); this doesn't, so it's
a better fix for #17179.
(cherry picked from commit bf265d7e7a)
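A minimal sketch of the pattern the fix adopts, assuming a simple formatting helper (not the actual Wireshark code): call gmtime() and check its return value for NULL instead of calling gmtime_s(), which may terminate the process on out-of-range input.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Convert a time_t to a UTC string, coping with gmtime() returning
 * NULL for values it cannot represent. */
static int format_utc(time_t t, char *buf, size_t buflen)
{
    struct tm *tm = gmtime(&t);
    if (tm == NULL)
        return 0;   /* conversion failed; let the caller cope */
    return strftime(buf, buflen, "%Y-%m-%d %H:%M:%S", tm) != 0;
}
```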
This corrects 2 issues with the detection heuristic for f5ethtrailers
causing trailers to be missed.
Fixes #17171, fixes #17172
(cherry picked from commit b297afee3e)
Do not use FT_IPV6, as an interface identifier could be wrongly identified
as an IPv4-Compatible IPv6 Address format by inet_ntop() and displayed
as such.
(cherry picked from commit f64eddfd01)
Conflicts:
epan/dissectors/packet-nas_5gs.c
Do not use FT_IPV6, as an interface identifier could be wrongly identified
as an IPv4-Compatible IPv6 Address format by inet_ntop() and displayed
as such.
(cherry picked from commit b794e4798a)
In dump_dfilter_macro_t(), if the dfilter_macro_t pointer is null, just
give up after printing the message that indicates that.
This should squelch several nullPointerRedundantCheck warnings from
cppcheck.
(cherry picked from commit 05b9e53777)
When unable to decrypt SH packets we should signal an error via
expert info. This way we handle SH and LH errors in the same way.
Closes #17077
(cherry picked from commit 9faf6d4e7b)
This patch fixes the PNI TFString, which was wrong. Correct is:
0 = "... contains no Partial Network ..."
1 = "... contains Partial Network ..."
Fixes #17154
(cherry picked from commit 238446dc91)
The HW version is correctly parsed as 2 bytes but shown as 3 bytes in
the dissection. This is fixed here.
Fixes#17133
(cherry picked from commit 1546a0af26)
Add the file format interpretations of Enhanced Packet Block options which
are being read by wiretap, but missing from the file format dissector.
(cherry picked from commit c657a6f5e7)
When a Segment Routing Header is present in the IPv6 packet, provisions
have to be made to set up the right destination address for the pseudo
header used in checksum calculations. When segments are left in the header,
the first address in the list has to replace the destination address.
Closes #17097
(cherry picked from commit 7052994a19)
Two interlocking problems cause the dissection of FC to fail in some cases,
as shown in the capture of the related issue.
The FC dissector assumes that ETHERTYPE_UNK in the data structure passed
to it is coming from the MDS header dissector only, and thus that header
sizes have to be taken into account. That is no longer the case:
it always passes down ETHERTYPE_FCFT. Therefore the MDS header size
checking does not apply to ETHERTYPE_UNK, so it is removed as a condition.
The other FC related dissectors were forced to setup a data structure to
pass to FC for it to handle that part of the frame. Because these weren't
related to ethernet, these lazily set the ethertype field in the data
structure to 0. This unfortunately matches ETHERTYPE_UNK, triggering the
MDS header size checking in FC, leading to this issue. With the first
problem resolved, now make it explicit that unknown ethertype is indicated
by ETHERTYPE_UNK, not '0'.
Addresses primary part of issue #17084
(cherry picked from commit 3f0fc1b232)
When the user enters a row in the SNMP Users table in Wireshark and the
Authentication model is set to MD5, the row is ignored during processing.
The reason is that the constant for MD5 is 0, but the code checks whether
the value is defined with a simple 'usm_p.user_assoc' condition, so 0
never succeeds.
As the item can only take the listed values, I think the check can be
removed. The function was verified on a sample capture.
I propose to cherry-pick the change to all stable branches.
(cherry picked from commit 7f376c7ced)
We must be able to correctly detect valid coalesced packets and
distinguish them from random padding.
Closes #17011, closes #16914
(cherry picked from commit 0af60377b4)
This bug affects Lua plugin dissectors for encapsulation protocols like
GRE. Typically the dissector creates a range for the payload packet, then
calls the next dissector with a tvb derived from the range, using
TvbRange_tvb(). The original version calls
tvb_new_subset_length_caplen() using the remaining capture length for the
reported_len argument. The fix passes -1 as the reported length, and
tvb_new_subset_length_caplen() calculates the new reported_len as required.
The bug only affects large packets captured with a snaplen and
truncated, then decoded with a Lua plugin for the encapsulation header.
Here's the typical bug symptom, gleaned from tshark decode of
an encapsulated IP payload:
[Expert Info (Error/Protocol): IPv4 total length exceeds packet length (114 bytes)]
[IPv4 total length exceeds packet length (114 bytes)]
Closes #15655.
(cherry picked from commit e7ec6739b6)
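The reported-length handling the fix relies on can be illustrated with stand-in arithmetic (not the actual tvb_new_subset_length_caplen() code):

```c
#include <assert.h>

/* Passing -1 as the requested reported length lets the subset inherit
 * the parent's reported length minus the offset, instead of the
 * (possibly truncated) captured length. Illustrative only. */
static int subset_reported_len(int parent_reported_len, int offset,
                               int requested_len)
{
    if (requested_len == -1)
        return parent_reported_len - offset;
    return requested_len;
}
```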
format_text uses the wrong bitmask when checking for two-byte UTF-8
characters, resulting in rejecting half the possible two-byte characters,
including all of Arabic and Greek, and substituting REPLACEMENT CHARACTER
for them. Fixes#17070, and add some comments about the current behavior
that doesn't match existing comments.
(cherry picked from commit 770746cca8)
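The correct lead-byte test can be sketched as follows; this is stand-in code, not format_text() itself:

```c
#include <assert.h>

/* A two-byte UTF-8 sequence starts with a lead byte of the form
 * 110xxxxx, so the check is (b & 0xE0) == 0xC0. A wrong mask rejects
 * part of the 0xC2-0xDF lead-byte range, which is where Arabic and
 * Greek live. */
static int is_utf8_two_byte_lead(unsigned char b)
{
    return (b & 0xE0) == 0xC0;
}
```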
Fix error handlers in Listener draw() and reset() to avoid getting
LUA_ERRERR from lua_pcall(). Added error handler for Listener draw()
callback.
Handle LUA_ERRERR from lua_pcall() to avoid assert on this.
Changed some capitalized words in various error messages.
Closes #16974.
(cherry picked from commit d104571e8a)
At least one ns-3 capture has DMG frames (as indicated by the channel
number being in the 60 GHz band - radiotap currently has no DMG metadata
field) that have the +HTC/Order flag subfield set but have no HT Control
field, causing them to be misdissected.
802.11-2016 says that DMG frames should never have +HTC/Order set; if it
*is* set in a QoS frame known to be a DMG frame, flag it with an expert
info item and don't treat it as having an HT Control field.
Update a bunch of comments to give more information, put comments in the
appropriate places, and speak of 802.11-2016 rather than older standards.
While we're at it, update the title and description of the +HTC/Order
flag to reflect its name as of 802.11-2016.
(cherry picked from commit 3c640ca04a)
Don't assume that the Internet has our best interests at heart when it
gives us the size of our decompression buffer. Assign an arbitrary limit
of 50 MB.
This fixes#16739 in that it takes care of
** (process:17681): WARNING **: 20:03:07.440: Dissector bug, protocol Kafka, in packet 31: ../epan/proto.c:7043: failed assertion "end >= fi->start"
which is different from the original error output. It looks like *that*
might have been taken care of in one of the other recent Kafka bug fixes.
The decompression routines return a success or failure status. Use
gbooleans instead of ints for that.
(cherry picked from commit f4374967bb)
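The sanity limit can be illustrated like this; the names and the stand-in gboolean typedef are made up for the sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Never trust a size field taken from the network; cap the
 * decompression buffer at an arbitrary limit (50 MB in the commit). */
#define MAX_DECOMPRESS_LEN (50u * 1024u * 1024u)

typedef int gboolean;   /* stand-in for GLib's gboolean */

static gboolean decompress_len_is_sane(uint64_t advertised_len)
{
    return advertised_len <= MAX_DECOMPRESS_LEN;   /* nonzero = proceed */
}
```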
Make sure _proto_tree_add_bits_ret_val allocates a bits array using the
packet scope, otherwise we leak memory. Fixes#17032.
(cherry picked from commit a9fc769d7b)
Back in 2017, commit d7bab0b46e introduced
printing the TEI in COL_INFO. Unfortunately it contained a typo and
stated "TEI:1%u" instead of "TEI:%u". So TEI 0 became TEI 10, etc. -
causing some confusion.
Let's remove that extraneous '1' and at the same time print the SAPI
with two digits for better alignment of multiple lines. It is a
two-digit decimal value (0..63).
(cherry picked from commit 9c5ea50b0a)
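The format-string fix can be sketched as follows; this is illustrative only, not the dissector's actual COL_INFO code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* "TEI:1%u" printed a spurious leading '1' (TEI 0 showed as "TEI:10");
 * "TEI:%u" is correct, and "%02u" pads the two-digit SAPI (0..63) for
 * column alignment. */
static void format_col_info(char *buf, size_t buflen,
                            unsigned sapi, unsigned tei)
{
    snprintf(buf, buflen, "SAPI:%02u TEI:%u", sapi, tei);
}
```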
That's QoS-frame only; for non-QoS frames, the +HTC/Order subfield
doesn't mean there's an HT Control field.
Update the reference to the part of the 802.11 standard mentioning that
subfield to 802.11-2016.
(cherry picked from commit 1fa5687fad)
It's clearer to say
if (A) {
    if (B) {
        do this;
    } else {
        do that;
    }
}
than to say
if (A && B) {
    do this;
} else if (A && !B) {
    do that;
}
(cherry picked from commit baee4a41c7)
Change
case DATA_FRAME:
    if (condition) {
        do stuff;
        break;
    }
    do other stuff;
    break;
to
case DATA_FRAME:
    if (condition) {
        do stuff;
    } else {
        do other stuff;
    }
    break;
to make it clearer that it's "do this if condition is true, else do
that".
(cherry picked from commit 258fb14821)
After a key update, we should update Packet Protection cipher but
we shouldn't touch the Header Protection one.
With the current code, PP and HP ciphers are quite entangled and we
always reset both of them. Therefore, at the second key update we
reset the used 1-RTT HP cipher too; no wonder even header decryption
fails from that point on.
To properly fix this issue, all the cipher structures have been rewritten,
clearly separating the PP code from the HP code.
Closes #16920, closes #16916
(cherry picked from commit 5e45f770fd)
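The separation described above can be sketched as two distinct state structures; the types and fields are illustrative, not the real QUIC dissector structures:

```c
#include <assert.h>

/* Keep Packet Protection (PP) and Header Protection (HP) state apart
 * so a key update rotates PP keys without ever touching HP. */
typedef struct { unsigned key_generation; } quic_pp_cipher;
typedef struct { unsigned key_generation; } quic_hp_cipher;

static void on_key_update(quic_pp_cipher *pp, quic_hp_cipher *hp)
{
    (void)hp;               /* HP cipher is deliberately left alone */
    pp->key_generation++;   /* only the PP keys rotate */
}
```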