"cf.dfcode" if the new filter doesn't compile, because the filter
currently in effect will be the one that was last applied - just free up
the text of the new filter, and whatever memory was allocated for the
new filter code.
This means we allocate a new dfilter when a new filter is to be applied,
rather than recycling stuff from the old filter, as we want the old
filter code to remain around if the new filter doesn't compile.
This means that "cf.dfilter" and "cf.dfcode" will be null if there's no
filter in effect.
svn path=/trunk/; revision=803
succeeded or failed, and, if it succeeded, have it fill in the IP
address if found through a pointer passed as the second argument.
Have it first try interpreting its first argument as a dotted-quad IP
address, with "inet_aton()", and, if that fails, have it try to
interpret it as a host name with "gethostbyname()"; don't bother with
"gethostbyaddr()", as we should be allowed to filter on IP addresses
even if there's no host name associated with them (there's no guarantee
that "gethostbyaddr()" will succeed if handed an IP address with no
corresponding name - and it looks as if FreeBSD 3.2, at least, may not
succeed in that case).
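The lookup order described above - dotted quad first, host name second, no "gethostbyaddr()" - looks roughly like this; the real "get_host_ipaddr()" lives in Ethereal's source and may differ in detail:

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

/* Returns 1 and fills in *addrp (network byte order) on success,
 * 0 on failure. */
static int get_host_ipaddr(const char *host, unsigned int *addrp)
{
    struct in_addr ipaddr;
    struct hostent *hp;

    /* First, try interpreting it as a dotted-quad IP address. */
    if (inet_aton(host, &ipaddr)) {
        *addrp = ipaddr.s_addr;
        return 1;
    }

    /* That failed - try it as a host name.  We deliberately don't
     * call gethostbyaddr(); we want to be able to filter on addresses
     * that have no corresponding name. */
    hp = gethostbyname(host);
    if (hp == NULL || hp->h_addrtype != AF_INET)
        return 0;
    memcpy(addrp, hp->h_addr_list[0], sizeof *addrp);
    return 1;
}
```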
Add a "dfilter_fail()" routine that takes "printf()"-like arguments and
uses them to set an error message for the parse; doing so means that
even if the filter expression is syntactically valid, we treat it as
being invalid. (Is there a better way to force a parse to fail from
arbitrary places in routines called by the parser?)
Use that routine in the lexical analyzer.
If that error message was set, use it as is as the failure message,
rather than adding "Unable to parse filter string XXX" to it.
Have the code to handle IP addresses and host names in display filters
check whether "get_host_ipaddr()" succeeded or failed and, if it failed,
arrange that the parse fail with an error message indicating the source
of the problem.
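A minimal sketch of such a "printf()"-like failure routine, assuming a static message buffer and a seen-error flag; the real "dfilter_fail()" differs in detail:

```c
#include <stdarg.h>
#include <stdio.h>

static char dfilter_error_msg[1024];
static int  dfilter_error_seen;

/* Record a formatted error message.  The parser driver later checks
 * dfilter_error_seen and treats the parse as failed even if the
 * grammar itself accepted the input. */
static void dfilter_fail(const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(dfilter_error_msg, sizeof dfilter_error_msg, fmt, ap);
    va_end(ap);
    dfilter_error_seen = 1;
}
```

Setting a flag out-of-band like this is one answer to the question above about forcing a parse to fail from arbitrary routines called by the parser.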
svn path=/trunk/; revision=802
parser/lexical analyzer in question are needed only in the ".c" files
for the generated parser and lexical analyzer, and Flex and Byacc/Bison
put them there; don't bother putting them in a header file, just
directly declare the functions with the right names.
svn path=/trunk/; revision=801
Also added first pass of state keeping. I am using glib's hash
functions.
Modelled after packet-ncp.c.
We will need to standardize the <proto>_init_protocol functions called in
file.c at some stage ...
I will have a couple more goes at the state keeping before I am finished.
At the moment, the infrastructure is there but I do nothing with it.
svn path=/trunk/; revision=798
bit of 'control' to check to see if it's an information frame:
#define XDLC_HAS_PAYLOAD(control) \
(((control) & 0x1) == XDLC_I || (control) == (XDLC_UI|XDLC_U))
I had erroneously AND'ed with 0x3 when I first put the AND in there.
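For illustration, here is the macro with the XDLC_* constants spelled out; the values follow the usual HDLC control-field encoding (I frames have a 0 in the low-order bit, supervisory frames end in 01, unnumbered frames in 11) and are assumptions here, not copied from xdlc.h:

```c
#define XDLC_I  0x00    /* information frame: low-order bit is 0     */
#define XDLC_S  0x01    /* supervisory frame: low-order bits are 01  */
#define XDLC_U  0x03    /* unnumbered frame: low-order bits are 11   */
#define XDLC_UI 0x00    /* UI: unnumbered frame, no modifier bits    */

#define XDLC_HAS_PAYLOAD(control) \
    (((control) & 0x1) == XDLC_I || (control) == (XDLC_UI|XDLC_U))
```

The 0x1 vs. 0x3 distinction matters because bit 1 of an I frame's control byte is part of N(S): an I frame with N(S) = 1 has control 0x02, which AND'ed with 0x3 gives 0x02, not XDLC_I - so the 0x3 version wrongly rejected half of all I frames.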
svn path=/trunk/; revision=797
Get rid of the declaration of the non-existent "dfilter_yyerror()", and
put in some #defines to work around the fact that the #defines to
replace "yy" with "dfilter_" in the names of Flex-generated and
Yacc-generated routines aren't put into a header file, they're put into
".c" files.
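The workaround looks something like the following sketch (the names are illustrative): the renaming #defines mirror what Flex and Yacc emit into the generated ".c" files, so a definition written as "yylex()" actually defines "dfilter_lex()", which other files can declare and call directly.

```c
#define yylex   dfilter_lex
#define yyparse dfilter_parse

/* Because of the #define above, this defines dfilter_lex(). */
static int yylex(void)
{
    return 42;      /* a dummy token, for illustration only */
}
```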
Have it remember the error message it was handed (unless it's Yacc's
boring "parse error" message).
When generating the message to be shown to the user on a parse error,
make it be the "Unable to parse filter string" message, and, if a
non-boring error message was supplied to "dfilter_error()", tack that
error message onto the end.
Don't panic if a field type we don't yet support in the parser is seen;
generate an error, telling the user we don't support filtering on that
type yet.
Don't assume that "global_df" has been set if we see an empty statement
(if the first token was the end-marker, because, say, the first token
the lexical analyzer found was a field of a type not yet supported in
filter expressions, "global_df" won't have been set).
svn path=/trunk/; revision=783
the stuff referred to by those pointers goes past the end of the packet,
that's not a reason not to return the length of the DNS or NBNS name
itself - you can tag that name even though it's bad. Therefore,
"get_dns_name()" should return the length of the part of the name it's
looked at even if that name contains a pointer to stuff that goes past
the end of the packet.
This means you can't check its return value to see if it's negative, and
treat it as an error if it is; remove that stuff.
Add checks to make sure the type and class fields in an RR don't go past
the end of the packet.
svn path=/trunk/; revision=781
the random packets I generated. I'm not convinced that all the problems
are gone. We now:
1. Check that the bytes are indeed in the frame before accessing them
in dissect_dns_query() and dissect_dns_answer(). If not, we
return 0, which means "0-byte increment".
2. Check the return value of the two functions above in
dissect_query_records() and dissect_answer_records(), which have
loops that call those two functions above. If a 0-byte
increment is found, the loop is broken to avoid an infinite loop.
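The loop-guard pattern in points 1 and 2 can be sketched as follows; the function names and the 4-byte record size are placeholders, not the real DNS dissector:

```c
/* Stand-in for dissect_dns_query()/dissect_dns_answer(): returns the
 * number of bytes consumed, or 0 if the record runs past the end of
 * the captured frame (a "0-byte increment"). */
static int dissect_record(const unsigned char *pd, int offset, int pd_len)
{
    (void)pd;                   /* a real dissector would read the bytes */
    if (offset + 4 > pd_len)
        return 0;               /* not enough bytes in the frame */
    return 4;                   /* pretend every record is 4 bytes */
}

/* Stand-in for dissect_query_records()/dissect_answer_records():
 * break on a 0-byte increment so a truncated packet can't loop forever. */
static int dissect_records(const unsigned char *pd, int offset,
                           int pd_len, int count)
{
    int i, n;

    for (i = 0; i < count; i++) {
        n = dissect_record(pd, offset, pd_len);
        if (n == 0)
            break;              /* truncated - avoid an infinite loop */
        offset += n;
    }
    return offset;
}
```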
svn path=/trunk/; revision=778
file (which could be WTAP_ENCAP_UNKNOWN, if we couldn't determine it, or
WTAP_ENCAP_PER_PACKET, if we could determine the encapsulation of
packets in the file, but they didn't all have the same encapsulation).
This may be useful in the future, if we allow files to be saved in
different capture file formats - we'd have to specify, when creating the
capture file, the per-file encapsulation, for those formats that don't
support per-packet encapsulations (we wouldn't be able to save a
multi-encapsulation capture in those formats).
Make the code to read "iptrace" files set the per-file packet
encapsulation - set it to the type of the first packet seen, and, if any
subsequent packets have a different encapsulation, set it to
WTAP_ENCAP_PER_PACKET.
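The per-file encapsulation logic reduces to a small state update per packet; the WTAP_* values and struct below are simplified stand-ins for Wiretap's, chosen only to make the sketch self-contained:

```c
enum {
    WTAP_ENCAP_UNKNOWN    = -2,   /* illustrative sentinel values */
    WTAP_ENCAP_PER_PACKET = -1,
    WTAP_ENCAP_ETHERNET   = 1,
    WTAP_ENCAP_TOKEN_RING = 2
};

struct wtap { int file_encap; };

/* Call once for each packet read from an iptrace file. */
static void update_file_encap(struct wtap *wth, int pkt_encap)
{
    if (wth->file_encap == WTAP_ENCAP_UNKNOWN)
        wth->file_encap = pkt_encap;              /* first packet seen */
    else if (wth->file_encap != pkt_encap)
        wth->file_encap = WTAP_ENCAP_PER_PACKET;  /* mixed encapsulations */
}
```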
svn path=/trunk/; revision=772
Assign a range of Wiretap errors for zlib errors, and have
"wtap_strerror()" use "zError()" to get an error message for
them.
Have the internal "file_error()" routine return 0 for no error
and a Wiretap error code for an error.
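One way to reserve such a range is to offset the zlib codes (which are small negative integers, e.g. Z_DATA_ERROR is -3) by a base value; the WTAP_ERR_ZLIB numbers below are assumptions, not the real Wiretap codes. "wtap_strerror()" can then recover the original zlib code with err - WTAP_ERR_ZLIB and hand it to zlib's "zError()":

```c
#define WTAP_ERR_ZLIB      (-200)   /* base of the zlib error range */
#define WTAP_ERR_ZLIB_MIN  (-300)   /* exclusive lower bound        */

/* Map a zlib error code into the reserved Wiretap range. */
static int wtap_zlib_err(int zerr)
{
    return WTAP_ERR_ZLIB + zerr;
}

/* wtap_strerror() uses this to decide whether to ask zError()
 * for the message text. */
static int is_zlib_err(int err)
{
    return err <= WTAP_ERR_ZLIB && err > WTAP_ERR_ZLIB_MIN;
}
```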
svn path=/trunk/; revision=769
in the color selection wheel.
Added his patch to file.c to look for bogus frame_data pointers, but made
it a g_assert().
Modified my previous patch to colors.c to skip bad color display filters.
I skipped them, but they still appeared in the color dialogue. Now bad
filters are not put into the color filter list, so they don't appear in
the color dialogue. As a [good] side-effect, the next time you save
your color filter list, the bad filters are removed from the colorfilters
file.
svn path=/trunk/; revision=768
segfault on a bad colorfilters file. This file now works as expected;
that is, the second filter is ignored:
# DO NOT EDIT THIS FILE! It was created by Ethereal
@ipx@ipx@[65535,65535,65535][65535,19104,22902]
@bad@bad@[65535,65535,65535][65535,19104,22902]
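The field layout of those lines is "@name@filter@[fg triple][bg triple]". A simplified stand-in for the colors.c reader could pull the fields apart like this (the real fix then hands the filter string to the display-filter compiler and drops the entry if compilation fails):

```c
#include <stdio.h>

/* Returns 1 and fills in the fields if the line matches
 * "@name@filter@[fr,fg,fb][br,bg,bb]", 0 otherwise.
 * name must hold 64 bytes, filter 256. */
static int parse_color_filter(const char *line, char *name, char *filter,
                              unsigned fg[3], unsigned bg[3])
{
    return sscanf(line, "@%63[^@]@%255[^@]@[%u,%u,%u][%u,%u,%u]",
                  name, filter,
                  &fg[0], &fg[1], &fg[2],
                  &bg[0], &bg[1], &bg[2]) == 8;
}
```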
svn path=/trunk/; revision=765
Please, please check for truncated packets in new dissector routines,
especially when string operations or loops over bytes are used (to
avoid displaying erroneous data, infinite loops, or segmentation
violations)!
svn path=/trunk/; revision=763
- add display filter for AARP
proto.c:
- register a dummy protocol before the first one (aarp)
  since the first entry cannot be filtered (a bug?)
Gilbert, could you check this?
svn path=/trunk/; revision=762
as an argument. ("time_t" could be 64 bits - I think it is 64 bits on
some platforms, e.g. Alpha Linux - and it's typically signed rather
than unsigned.)
svn path=/trunk/; revision=760
1. Fix some silly errors.
2. Don't decode beyond Word Count if errcode > 0
3. Decode a bunch more SMBs
Next is to keep state so we can do a better job ...
svn path=/trunk/; revision=758
the name of the current save file - we no longer have the "-F" flag, and
"-S" automatically reads from the capture file as packets arrive, so
there's no need to manually open the capture file.
svn path=/trunk/; revision=757