This function never accepted any arguments. However, a sloppy
declaration in the header file as logging_vty_add_cmds() (empty
parentheses rather than (void)) allowed callers to pass any number of
arguments of any type until recently.
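For illustration only (not the exact header content), the difference
between the old and the proper declaration is:

/* old-style declaration: parameter list unspecified, so the compiler
 * accepts any number of arguments of any type */
void logging_vty_add_cmds();

/* proper prototype: explicitly takes no arguments */
void logging_vty_add_cmds(void);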
Change-Id: Icdbe2f253c9e17ff82bd3b1dc3d4fbea4ad6f333
Do not turn some compiler warnings into errors by default. This part was
copied from openbsc.git 34f012 ("Turn some compiler warnings into
errors"), where it was added before --enable-werror was available.
We build with --enable-werror during development and in CI. If the code
is built with a different compiler that emits additional warnings, those
warnings should not break the build.
Related: OS#5289
Change-Id: Ib5602017545d68f0fdb0b4df7ed3087a2cb1775c
The vty ts slot number was adjusted in commit
df088b0ea93d3d5851ee680ae95afa30a9359730 in libosmo-abis.
Change-Id: I97fc56461f800afb067f815bb85fbfab102d86f0
libosmo-abis Change-Id Ifb22b5544cf06012fa529828dfdf3f0d73b07e7d
fixed the spelling existant -> existent, which breaks some of the
tests here.
This change catches up with that, but will of course fail when older
libosmo-abis versions are used. Given the niche nature of
osmo-e1-recorder, I think it's not worth investing time into handling
that.
Change-Id: Ib7430bf940dea33df79abe01baae670f188ff82e
libosmo-abis recently marked the 'out_cb' of the subchan_demux
as 'const', which caused compiler warnings/errors.
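A minimal sketch of what this means on our side; the callback name and
parameter list below are assumptions, not copied from the libosmo-abis
header, and only illustrate the const-qualification of the data pointer:

#include <stdint.h>

struct subch_demux;	/* from libosmo-abis */

/* hypothetical demux output callback; its data pointer must now be
 * const-qualified to match the updated 'out_cb' (ASSUMED parameter list) */
int recorder_subch_out_cb(struct subch_demux *dmx, int ch,
			  const uint8_t *data, int len, void *priv);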
Related: libosmo-abis.git Ia082b9fddf03d02afd007825a1588a3ef0dbedae
Change-Id: I0cf430980e50fa8094f0efc7642004fb5923c4c6
The original format included a 'struct timeval' in the packet header,
which unfortunately is non-portable between e.g. i386 and amd64.
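Not the actual new on-disk layout, just an illustration of the problem
(field names are made up): struct timeval contains architecture-dependent
types, while fixed-width integers keep the header identical everywhere.

#include <stdint.h>
#include <sys/time.h>

/* non-portable: sizeof(struct timeval) differs between e.g. i386 and
 * amd64, since time_t and suseconds_t vary with the architecture */
struct pkt_hdr_old {
	struct timeval tv;
	uint32_t len;
};

/* portable: fixed-width fields, identical layout on all architectures */
struct pkt_hdr_new {
	uint32_t ts_sec;
	uint32_t ts_usec;
	uint32_t len;
};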
Change-Id: I0d22ad8f772d173c2252c2f6c562faee2e578806
This reflects what happens in libosmo-abis during 'show e1'
and makes sure we don't attempt to write data for more TS than exist
(e.g. in the T1 case).
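A minimal sketch of the idea, assuming the line structure exposes the
number of usable timeslots as 'num_ts' (as 'show e1' uses it); handle_ts()
is a hypothetical helper:

static void handle_line_ts(struct e1inp_line *line)
{
	unsigned int tn;

	/* only touch timeslots that actually exist on this line,
	 * e.g. 24 on a T1 instead of the full E1 range */
	for (tn = 0; tn < line->num_ts; tn++)
		handle_ts(&line->ts[tn], tn);
}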
Change-Id: Iaeac2d080ae3ddc27901cbc4be5220100e9820a8
storage.c:90:7: error: implicit declaration of function ‘writev’; did you mean ‘write’? [-Werror=implicit-function-declaration]
90 | rc = writev(g_out_fd, iov, ARRAY_SIZE(iov));
| ^~~~~~
| write
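Presumably fixed by pulling in the POSIX header that declares writev():

#include <sys/uio.h>	/* for writev() and struct iovec */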
Change-Id: If98737199f5a6e8fb37a4fd6403ee973dcf70612
Finally, the bit ordering and bit format of the SuperChannel have been
figured out.
* the data as read from DAHDI must be flipped (0->1 / 1->0). why?
* the data must be read lsb-first when converting into a bit-buffer
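A minimal sketch of those two steps (function names are illustrative; the
real code may use libosmocore bit helpers instead):

#include <stdint.h>

/* invert every byte read from DAHDI in place (0->1 / 1->0) */
static void sc_flip_buf(uint8_t *buf, unsigned int len)
{
	unsigned int i;
	for (i = 0; i < len; i++)
		buf[i] = ~buf[i];
}

/* unpack one byte LSB-first into eight single-bit entries */
static void sc_byte_to_bits_lsb(uint8_t byte, uint8_t *bits)
{
	unsigned int i;
	for (i = 0; i < 8; i++)
		bits[i] = (byte >> i) & 1;
}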
We are now getting the following output e.g. for an OM2000 "TX
Configuration Request" message:
fa 03 8a 8a 80 80 00 13 00 b0 0b 00 ff 01 20 00 2e 2b 1c 00 06 01 95 81 76 00 e9 bf
^ lapd hdr ^ OML ^l ^OM2000 TX Config Req for 43 dBm
In the super channel mode, it seems the BTS transmits one byte in each
timeslot, across the entire link.
This basically means that if you have a 10-byte-long signalling message
to be sent, its first byte will be in TS1, its second in TS2, and so on
up to its tenth byte in TS10.
As we are reading in 160-byte chunks from the E1 timeslots, we build a
matrix with 160 columns (one per byte) and 24/30 rows (one per
timeslot). So we write 24 times 160 bytes into the matrix.
Once we have completed all timeslots, we start to read the matrix by
reading byte 0 of each timeslot (in incrementing TS order), then byte 1
of each timeslot, ... until we end up having read 160 times 24 bytes
from the matrix.
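A minimal sketch of this de-interleaving for the 24-timeslot (T1) case;
buffer and function names are illustrative only:

#include <stdint.h>
#include <string.h>

#define SC_NUM_TS	24	/* 30 in the E1 case */
#define SC_CHUNK_LEN	160	/* bytes read per timeslot */

static uint8_t sc_matrix[SC_NUM_TS][SC_CHUNK_LEN];

/* row-wise write: store one 160-byte chunk read from a given timeslot */
static void sc_store_ts(unsigned int ts_nr, const uint8_t *chunk)
{
	memcpy(sc_matrix[ts_nr], chunk, SC_CHUNK_LEN);
}

/* column-wise read: byte 0 of every TS, then byte 1 of every TS, ...
 * yielding the 160 * 24 byte stream as transmitted across the link */
static unsigned int sc_read_out(uint8_t *out)
{
	unsigned int col, row, n = 0;

	for (col = 0; col < SC_CHUNK_LEN; col++)
		for (row = 0; row < SC_NUM_TS; row++)
			out[n++] = sc_matrix[row][col];
	return n;
}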
The resulting bitstream then needs to be HDLC-synchronized and the
messages recovered from it passed up for further decoding.
The SIGN mode implies that LAPD instances are bound to the timeslots, which is
of course not what we want in a pure capturing/recording scenario.
Instead, use the new E1INP_TS_TYPE_HDLC mode, which allows us to capture
any HDLC-framed messages on E1/T1 timeslots, whether LAPD or e.g. MTP.
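A rough sketch of how such a timeslot could be configured; the helper
name and callback signature below are assumptions modelled after the
e1inp_ts_config_raw() style, so check e1_input.h of the libosmo-abis
version in use. Only E1INP_TS_TYPE_HDLC itself is taken from this change.

#include <osmocom/abis/e1_input.h>

/* ASSUMED callback shape: one msgb per received HDLC frame */
static void hdlc_rx_cb(struct e1inp_ts *ts, struct msgb *msg)
{
	/* hand the captured frame (LAPD, MTP, ...) to the recorder */
}

static void config_capture_ts(struct e1inp_line *line, unsigned int ts_nr)
{
	/* HDLC type instead of E1INP_TS_TYPE_SIGN: no LAPD instance
	 * gets bound to the timeslot, we just receive the raw frames */
	e1inp_ts_config_hdlc(&line->ts[ts_nr], line, hdlc_rx_cb);
}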