commit 63abf52ec640a019f8c45c1208f0dfb585641781
Padding: add offset!=length check to reduce safety check calls
Adds another check when parsing a set. The new check, "offset !=
self.header.length", skips the padding scan whenever the offset already
equals the set length, avoiding unnecessary calls to
rest_is_padding_zeroes and the CPU time they waste.
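A minimal sketch of this guard; the helper name rest_is_padding_zeroes is taken from the commit message, but its signature and the surrounding offset/length handling are assumptions, not the library's actual API:

```python
def rest_is_padding_zeroes(data, offset):
    # True when every byte from `offset` to the end of `data` is zero.
    return not any(data[offset:])

def skip_padding(data, length, offset):
    # The added guard: if the offset already equals the set length,
    # nothing remains, so the padding scan is skipped entirely.
    if offset != length and rest_is_padding_zeroes(data[:length], offset):
        return length  # consume the trailing zero padding
    return offset
```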
commit 8d1cf9cac12c45c0af70591b646d898ba5c923fc
Finish IPFIX padding handling
Tested implementation of IPFIX set padding handling. Uses TK-Khaw's
proposed no_padding_last_offset calculation, extended into a modulo
calculation so it matches multiple data set records.
Tests were conducted by capturing live traffic on a test machine with
tcpdump, then reading the capture file into softflowd 1.1.0 with
collector.py as the export target. The exported IPFIX (v10) packets then
contained both unpadded and padded sets, so both cases could be
validated.
Closes #34
Signed-off-by: Dominik Pataky <software+pynetflow@dpataky.eu>
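The modulo calculation described above might look roughly like this; the function signature is an assumption based on the commit message, while the 4-byte set header size comes from RFC 7011:

```python
SET_HEADER_SIZE = 4  # set ID + length, two unsigned 16-bit fields (RFC 7011)

def no_padding_last_offset(set_length, record_length):
    # Offset, relative to the start of the record area, where the last
    # whole record ends. Any bytes between this offset and the end of
    # the set can only be padding: another full record no longer fits.
    record_area = set_length - SET_HEADER_SIZE
    return record_area - (record_area % record_length)
```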
commit 51ce4eaa268e4bda5be89e1d430477d12fc8a72c
Fix and optimize padding calculation for IPFIX sets.
Refs #34
commit 9d3c4135385ca9714b7631a0c5af46feb891a9fb
Author: Khaw Teng Kang <tk.khaw@attrelogix.com>
Date: Tue Jul 5 16:29:12 2022 +0800
Reverted changes to template_record; data_length is now computed from the field lengths in the template.
Signed-off-by: Khaw Teng Kang <tk.khaw@attrelogix.com>
commit 3c4f8e62892876d4a2d42288843890b97244df55
IPFIX: handle padding (zero bytes) in sets
Adds a check to each IPFIX set ID branch that tests whether the
remaining bytes in the set are padding (zeroes).
Refs #34
Signed-off-by: Dominik Pataky <software+pynetflow@dpataky.eu>
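A sketch of the per-branch check, with all names hypothetical; in IPFIX, set ID 2 denotes a template set and IDs of 256 and above denote data sets:

```python
def rest_is_padding_zeroes(data, offset, set_end):
    # True if every byte between `offset` and the set's end is zero.
    return not any(data[offset:set_end])

def parse_set(set_id, data, offset, set_end):
    # One branch per set ID; after each branch has read its records,
    # remaining zero bytes up to the set boundary count as padding.
    if set_id == 2:        # template set
        pass               # template parsing elided in this sketch
    elif set_id >= 256:    # data set
        pass               # record parsing elided in this sketch
    if rest_is_padding_zeroes(data, offset, set_end):
        offset = set_end   # skip over the padding
    return offset
```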
Signals INT and TERM were not correctly handled in the 'while True' loop
of the yielding listener function. Now, the loop breaks as expected,
terminating the listener thread and the application.
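A minimal sketch of this pattern, assuming the listener hands packets over via a queue; the names and the 0.5 s poll interval are illustrative, not the package's actual code:

```python
import queue
import signal
import threading

shutdown = threading.Event()

def _handle_signal(signum, frame):
    # INT and TERM both flip the flag that ends the listener loop.
    shutdown.set()

signal.signal(signal.SIGINT, _handle_signal)
signal.signal(signal.SIGTERM, _handle_signal)

def get_packets(packet_queue):
    # Yielding listener loop: poll with a timeout instead of blocking
    # forever, so the shutdown flag is re-checked between packets.
    while not shutdown.is_set():
        try:
            yield packet_queue.get(timeout=0.5)
        except queue.Empty:
            continue
```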
This commit replaces multiple occurrences of newer language features
that are not available in Python 3.5.3, which is the reference backwards
compatibility version for this package. The version is based on the
current Python version in Debian Stretch (oldstable). According to
pkgs.org, all other distros ship 3.6+, so 3.5.3 is the lower boundary.
Changes:
* Add maxsize argument to functools.lru_cache decorator
* Replace f"" with .format()
* Replace variable type hints "var: type = val" with "# type:" comments
* Replace pstats.SortKey enum with strings in performance tests
Additionally, various styling fixes were applied.
The version compatibility was tested with tox, pyenv and Python 3.5.3,
but there is no tox.ini yet which automates this test.
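The listed replacements can be illustrated roughly as follows (the function name is invented for the example); on Python 3.5 the lru_cache decorator must be called explicitly, and variable annotations are written as comments:

```python
from functools import lru_cache

# Python 3.8+ allows the bare @lru_cache form; 3.5 needs an explicit
# call, so maxsize is spelled out:
@lru_cache(maxsize=128)
def field_name(field_type):
    # .format() instead of the f-strings introduced in 3.6:
    return "field-{}".format(field_type)

# Variable annotations ("count: int = 0") arrived in 3.6; on 3.5 the
# type is noted in a comment instead:
count = 0  # type: int
```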
Bump patch version number to 0.10.3
Update author's email address.
Resolves #27
The function send_recv_packets in the tests stored all processed
ExportPackets in a list by default. Memory usage tests were therefore
based on this large number of stored objects, since no ExportPacket
instance was deleted until exit.
With the new parameter store_packets the caller can define how many
packets should be kept during receiving, so that multiple scenarios can
be tested.
Three such scenarios are implemented: store no packets at all, store a
maximum of 500 at a time, and store all packets. This comes much closer
to the real-world scenario of the collector, which uses a "for export in
listener.get" loop, dumping each new ExportPacket to file immediately
and then deleting the object.
Still, the case where all packets are stored must remain covered,
because the collector might not be the only implementation using
listener.get, and finding memory leaks there should stay possible.
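A sketch of the parameter's semantics as described above; the function body and the negative-means-unbounded convention are assumptions for illustration:

```python
def send_recv_packets(incoming, store_packets=-1):
    # Hypothetical test helper: 0 stores nothing, a positive N keeps
    # only the most recent N packets, a negative value stores all
    # (the previous behaviour).
    stored = []
    processed = 0
    for packet in incoming:
        processed += 1
        if store_packets != 0:
            stored.append(packet)
            if 0 < store_packets < len(stored):
                stored.pop(0)  # evict the oldest to stay under the cap
    return processed, stored
```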
The collector should catch both v9 and IPFIX template errors; a syntax
error there was corrected. The v9 ExportPacket.templates attribute is
now a read-only @property.
At different points in the tool set, NetFlow v9 was assumed as the
default case. Now that IPFIX is on its way to being supported as well,
all occurrences where the versions must be differentiated were adapted.
Adds a new module, IPFIX. The collector already recognizes version 10 in
the header, which denotes IPFIX. The parser is able to dissect the
export packet and all sets with their headers.
Still missing is the handling of templates in the data sets, a feature
needed for the whole parsing process to complete.
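Dissecting the sets boils down to walking the 4-byte set headers defined in RFC 7011 (set ID and total set length, both network-order unsigned 16-bit); a minimal sketch, with the function name invented:

```python
import struct

def walk_set_headers(payload):
    # Each set starts with a 4-byte header: set ID and the set's total
    # length (header included), both big-endian unsigned 16-bit.
    offset, headers = 0, []
    while offset < len(payload):
        set_id, length = struct.unpack("!HH", payload[offset:offset + 4])
        headers.append((set_id, length))
        offset += length  # jump to the next set
    return headers
```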
The collector is able to parse templates in an export and then use these
templates to parse data flows inside the same export packet. But the
test implementation was based on the assumption that templates always
arrive first in the packet. Now, a mixed order is also processed
successfully. Test included.
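A sketch of how mixed-order handling could work, with all structures hypothetical: data sets whose template has not been seen yet are deferred and parsed in a second pass once the whole packet has been scanned.

```python
def parse_export(sets, templates):
    # `sets` is a list of (set_id, payload) tuples; set ID 2 marks a
    # template set, IDs >= 256 are data sets keyed by their template ID.
    flows, deferred = [], []
    for set_id, payload in sets:
        if set_id == 2:
            template_id, fields = payload
            templates[template_id] = fields
        elif set_id in templates:
            flows.append((set_id, payload))
        else:
            deferred.append((set_id, payload))  # template not seen yet
    for set_id, payload in deferred:
        if set_id in templates:  # template arrived later in the packet
            flows.append((set_id, payload))
    return flows
```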
Beginning with this commit, the reference implementations of the
collector and analyzer are now included in the package. They are
callable by running `python3 -m netflow.collector` or `.analyzer`, with
the same flags as before. Use `-h` to list them.
Additional fixes are contained in this commit as well, e.g. adding more
version prefixes and moving parts of code from __init__ to utils, to fix
circular imports.