Do the integer conversion for ranges in the parser. This is more
conventional, I think, and allows removing the unnecessary integer
syntax tree node type.
Try to minimize the number and complexity of lexical rules for
ranges. But it seems we need to keep different states for integer
and punctuation because of the need to disambiguate the ranges
[-n-n] and [-n--n].
If we have a STRING value in an expression with a numeric comparison,
we must also check whether it matches a value string before throwing
a type error.
Add appropriate tests to the test suite.
Fixes 4d2f469212.
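A minimal sketch of the idea, assuming a simplified stand-in for Wireshark's value_string table (the struct and the `resolve_string_operand` helper here are illustrative, not the real dfilter API): before rejecting a string operand in a numeric comparison, try to map it through the field's value strings.

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for Wireshark's value_string pair. */
typedef struct { unsigned value; const char *strptr; } value_string;

/* Example table for an IP-protocol-like field (illustrative values). */
static const value_string proto_vals[] = {
    { 1,  "ICMP" },
    { 6,  "TCP"  },
    { 17, "UDP"  },
    { 0,  NULL   }
};

/* Resolve a STRING operand in a numeric comparison: return non-zero and
 * store the numeric value if the string matches one of the field's value
 * strings; only if this fails should the caller throw a type error. */
static int resolve_string_operand(const value_string *vs, const char *s,
                                  unsigned *out)
{
    for (; vs->strptr != NULL; vs++) {
        if (strcmp(vs->strptr, s) == 0) {
            *out = vs->value;
            return 1;
        }
    }
    return 0; /* no match: now a type error is justified */
}
```

With this ordering, a filter like `ip.proto == "TCP"` can succeed even though the field is numeric.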
If the RPC dissector doesn't have all the bytes of a fragment
and thus needs to do TCP desegmentation, but can't or won't for some
reason, then don't try to defragment either, regardless of what the
defragmentation preference says. Fixes #11198.
A function is grammatically an identifier that is followed by '(' and ')'
according to some rules. We should avoid assuming a token is a function
just because it matches a registered function name.
Before:
Filter: foobar(http.user_agent) contains "UPDATE"
dftest: Syntax error near "(".
After:
Filter: foobar(http.user_agent) contains "UPDATE"
dftest: The function 'foobar' does not exist.
This has the problem that a function cannot have the same name
as a protocol, but that limitation already existed before.
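A sketch of the resulting split of responsibilities: the grammar accepts any identifier followed by '(' as a function call, and the name is resolved afterwards so an unknown name produces a clear semantic error rather than a syntax error. The function table below is hypothetical, not the real registered set.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical set of registered display-filter functions. */
static const char *registered_funcs[] = { "len", "lower", "upper", NULL };

/* Called during semantic analysis, after the parser has already
 * accepted IDENTIFIER '(' ... ')' as a function call.  Returning 0
 * lets the caller report: The function 'X' does not exist. */
static int function_exists(const char *name)
{
    for (const char **f = registered_funcs; *f != NULL; f++) {
        if (strcmp(*f, name) == 0)
            return 1;
    }
    return 0;
}
```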
Properly support BEP 42: the 'ip' string includes the port, so the
expected length is 6 octets, not 4. That key also appears on the top
level, and sorts before the 'r' key, so add it to heuristics.
Take the opportunity to strengthen the heuristics; certain other keys
never sort before others, and we know the types of several of the keys.
That allows us to go from seven possibilities for the first four bytes
to four possibilities for the first five bytes, which is surely precise
enough to enable the heuristic by default.
Sort the value_strings.
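The strengthened heuristic can be sketched as follows, under the assumption spelled out in the comments: a bencoded DHT packet is a dictionary, and with BEP 42 a top-level "ip" key (whose value is a 6-octet string: 4-byte IPv4 address plus 2-byte port) sorts before keys like "r", so such a packet begins with the fixed bytes "d2:ip6:". The helper name is illustrative.

```c
#include <stddef.h>
#include <string.h>

/* BEP 42: the "ip" value includes the port, so it is 6 octets (IPv4 +
 * port), not 4.  When the key appears at the top level it sorts first,
 * giving a fixed, highly selective prefix for the heuristic. */
static int looks_like_bep42_ip(const unsigned char *buf, size_t len)
{
    static const char prefix[] = "d2:ip6:";   /* dict, key "ip", 6-byte value */
    const size_t plen = sizeof(prefix) - 1;

    return len >= plen + 6 && memcmp(buf, prefix, plen) == 0;
}
```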
Q_OBJECT is only needed for signals+slots, translations, and other
meta-object services. Remove it in some classes, since having it means
we're generating and compiling code unnecessarily.
Instead of checking for an error return and then throwing the
exception, throw it where the error occurs. This takes advantage of
the nice properties of error exceptions to reduce the amount of
error-checking code.
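A minimal sketch of the pattern (Wireshark's real mechanism is the TRY/CATCH machinery in exceptions.h; this setjmp/longjmp version is only a stand-in with hypothetical names): the error is raised at the point where it is detected, so callers need a single handler instead of a return-value check after every call.

```c
#include <setjmp.h>

static jmp_buf err_env;

/* Raise the error where it occurs instead of returning a code. */
static void throw_error(int code)
{
    longjmp(err_env, code);
}

static int parse_digit(int c)
{
    if (c < '0' || c > '9')
        throw_error(1);          /* no error return to propagate */
    return c - '0';
}

static int parse_or_fail(int c)
{
    int code = setjmp(err_env);  /* single handler for the whole call */
    if (code != 0)
        return -code;
    return parse_digit(c);
}
```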
Octal escape sequences \NNN can have between 1 and 3 digits. If
the sequence had fewer than 3 digits, the parser got out of sync
because of an incorrect double increment of the pointer and errored
out parsing sequences like \0, \2 or \33.
Before:
Filter: ip.proto == '\33'
dftest: "'\33'" is too long to be a valid character constant.
After:
Filter: ip.proto == '\33'
Constants:
00000 PUT_FVALUE 27 <FT_UINT8> -> reg#1
Instructions:
00000 READ_TREE ip.proto -> reg#0
00001 IF-FALSE-GOTO 3
00002 ANY_EQ reg#0 == reg#1
00003 RETURN
Fixes #16525.
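A sketch of the corrected scanning logic, assuming a simplified parser (the function name and interface are illustrative): the pointer advances only past the octal digits actually consumed, so 1-, 2-, and 3-digit sequences all leave it in sync.

```c
/* Parse an octal escape after the backslash: between 1 and 3 octal
 * digits.  Advances *p only past the digits actually consumed,
 * avoiding the fixed-width increment that desynchronized the parser
 * for short sequences like \0, \2 or \33. */
static unsigned char parse_octal_escape(const char **p)
{
    unsigned val = 0;
    int ndigits = 0;

    while (ndigits < 3 && **p >= '0' && **p <= '7') {
        val = val * 8 + (unsigned)(**p - '0');
        (*p)++;
        ndigits++;
    }
    return (unsigned char)val;
}
```

For the input `'\33'` this yields 27 (0o33) and leaves the pointer at the closing quote, matching the PUT_FVALUE 27 in the dftest output above.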
packet-li5g.c is used to parse the LI X2/X3 PDU header, which is
defined in ETSI TS 103 221-2.
packet-lix2.c is used to parse the X2 xIRI payload; the ASN.1 is
defined in 3GPP TS 33.128.
Add the dissector generated by asn2wrs.
This file will be merged in a separate merge request, so delete it
from the 5G LI branch.
Add a comment line in lix2.asn stating the 3GPP document.
Fix the commit warning.
Test to see if the start of a packet looks like SMPP before
calling tcp_dissect_pdus, so that we don't calculate a bogus
length (and fail to process many packets) if the capture
starts in the middle of a TCP connection.
When the heuristic dissector has found SMPP, mark it as a
conversation with the SMPP dissector.
There's room for more improvement by scanning through the current
segment to look for the PDU start, but this makes it work
considerably better, at least as well as 1.10.x. Improves #11306.
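The pre-check can be sketched as below, under the assumptions noted in the comments (the exact set of valid command_id values is an assumption here; the helper name is illustrative): before handing the segment to tcp_dissect_pdus(), verify that it starts with a plausible SMPP PDU header, so a capture that begins mid-connection does not produce a bogus length.

```c
#include <stddef.h>
#include <stdint.h>

/* An SMPP PDU starts with a 4-byte command_length (at least the
 * 16-byte header) followed by a 4-byte command_id.  Requests use
 * 0x0000xxxx and responses 0x8000xxxx, so the middle bits being
 * nonzero rules the segment out. */
static int looks_like_smpp(const uint8_t *buf, size_t len)
{
    uint32_t command_length, command_id;

    if (len < 16)
        return 0;
    command_length = (uint32_t)buf[0] << 24 | (uint32_t)buf[1] << 16 |
                     (uint32_t)buf[2] << 8  | buf[3];
    command_id     = (uint32_t)buf[4] << 24 | (uint32_t)buf[5] << 16 |
                     (uint32_t)buf[6] << 8  | buf[7];
    if (command_length < 16 || command_length > 64 * 1024)
        return 0;                  /* implausible length cap: an assumption */
    return (command_id & 0x7FFF0000u) == 0;
}
```

Only when this check passes is the conversation marked for the SMPP dissector.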
Several improvements to dissect_description_of_velocity()
- Velocity Type is first 4 bits, do not increase offset after this
- Direction of Vertical Speed is bit 7
- Only increase curr_offset in this function
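The corrected bit extraction can be sketched as follows, assuming the 3GPP bit-numbering convention (bits 8..1, bit 8 = MSB) so that "first 4 bits" means the high nibble and "bit 7" corresponds to mask 0x40; the exact octet layout of TS 23.032 is not restated here and the helper names are illustrative.

```c
#include <stdint.h>

/* Velocity Type occupies the first 4 bits (bits 8-5) of its octet;
 * the offset must not be advanced after reading it, since the rest
 * of the octet is still needed. */
static uint8_t velocity_type(uint8_t oct)
{
    return (uint8_t)(oct >> 4);
}

/* Direction of Vertical Speed is bit 7 (mask 0x40 in the 8..1
 * numbering assumed above). */
static uint8_t vertical_speed_dir(uint8_t oct)
{
    return (uint8_t)((oct >> 6) & 0x01);
}
```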
Instead of using 3 operations (new + free + reassign_to_parent) to
transform the tree, use a single, simpler replace operation.
This also avoids having to manually copy token values.
The set search and replace method is now obsolete.
Calculate the hashes for a file after the wtap_open_offline, to avoid
spending time calculating them for files that aren't known capture
formats. We wouldn't print the checksums in those cases anyway,
and the time savings can be considerable on large non-capture files.
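The ordering can be sketched as below; `stub_open` and `stub_hash` are hypothetical stand-ins for wtap_open_offline() and the hash computation, used only to make the control flow concrete: hashing happens only after the file has been recognized as a capture format.

```c
#include <stddef.h>

static int open_ok;          /* simulates whether wtap recognizes the file */
static int hashes_computed;

/* Stand-in for wtap_open_offline(): NULL means unknown format. */
static void *stub_open(const char *path) { (void)path; return open_ok ? &open_ok : NULL; }
/* Stand-in for the (potentially expensive) hash pass. */
static void stub_hash(const char *path)  { (void)path; hashes_computed = 1; }

static int process_file(const char *path)
{
    hashes_computed = 0;
    if (stub_open(path) == NULL)
        return -1;           /* not a known capture format: skip hashing */
    stub_hash(path);         /* compute checksums only for real captures */
    return 0;
}
```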
Added SR Policy Name TLV.
Added SR Policy Candidate Path Identifiers TLV.
Added SR Policy Candidate Path Name TLV.
Added SR Policy Candidate Path Preference TLV.
Included the Extended Association ID TLV format for assoc_type 6.
Removed development comments and formatted code.
Association type field values are displayed according to
IANA-registered values.
Fixed filters for extended-association-id TLVs.
For some reason (copy and paste?) the SGSN number field was substituted
for the VLR number field, and then later the latter got commented out
as it was unused.
Clean up syntax error code. TEST and SET are never returned by
the tokenizer.
Remove unnecessary range_body() grammar element. Fix a comment.
Move the stnode_token_value() function to its proper place.
Allow an entity in the grammar as range body. Perform a stronger
sanity check during semantic analysis everywhere a range is used.
This is both safer (unless we want to allow FIELD bodies only, but
functions are allowed too) and also provides better error messages.
Previously a range of a range compiled only on the RHS. Now it can
appear on both sides of a relation.
This fixes a crash with STRING entities similar to #10690 for
UNPARSED.
This also adds back support for slicing functions that was removed
in f3f833ccec (by accident presumably).
Ping #10690
This makes 'stnode_tostr()' more useful for end-user error reporting.
For debugging purposes we tack on the type name in the debug-specific
code instead.