I am not sure how other developers do this. There are probably better
ways to make testing faster, but I kind of like it this way:
I just call the twl3025_power_off_now() function when the power key
is pressed.
Change-Id: I1e55910acd8584c74e5e190b3334a8cf6987f5f3
When a dedicated channel is activated, chan_nr2mf_task_mask()
calculates a bitmask of the corresponding multi-frame tasks to
be enabled. There are three logical kinds of multi-frame tasks:
- primary (master) - the main burst processing task,
e.g. MF_TASK_{TCH_F_ODD,SDCCH4_0,GPRS_PDTCH};
- secondary - additional burst processing task (optional),
e.g. MF_TASK_GPRS_PTCCH;
- measurement - neighbour measurement task (optional),
e.g. MF_TASK_NEIGH_{PM51,PM26E,PM26O}.
By default, the primary task is set to MF_TASK_BCCH_NORM (0x00).
Due to a mistake, the secondary task has also been set to BCCH,
so when we switch to dedicated mode, we also enable the BCCH.
This leads to a race condition between the multi-frame tasks:
both the primary and secondary tasks read bursts from the DSP
at the same time, and the firmware hangs because of that:
nb_cmd(0) and rxnb.msg != NULL
BURST ID 2!=0 BURST ID 3!=1
This regression was introduced together with experimental PDCH
support [1]. Let's use the value -1 to indicate that the secondary
task is not set, and apply it properly.
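The idea, sketched in Python for brevity (the firmware itself is C;
the names and structure below are simplified, not the actual code):

  MF_TASK_BCCH_NORM = 0x00  # a valid task ID, so 0 cannot mean "unset"

  def mf_task_mask(primary, secondary=-1, measurement=-1):
      # Optional tasks are only enabled when explicitly set (>= 0),
      # so the BCCH task is no longer switched on by accident.
      mask = 1 << primary
      if secondary >= 0:
          mask |= 1 << secondary
      if measurement >= 0:
          mask |= 1 << measurement
      return mask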
Change-Id: I4d667b2106fd8453eac9e24019bdfb14358d75e3
Fixes: [1] I44531bbe8743c188cc5d4a6ca2a63000e41d6189
Related: OS#3155
For more information, see 3GPP TS 44.014, sections:
- 5.1 "Single-slot TCH loops", and
- 8 "Message definitions and contents".
This feature has nothing to do with Mobility Management, so
let's handle GSM48_PDISC_TEST messages in the Radio Resources
layer implementation (gsm48_mm.c -> gsm48_rr.c).
Change-Id: If8efc57c7017aa8ea47b37c472d1bbb1914389ca
This reverts commit 6e1c82d298.
Unfortunately, while solving one problem, it introduced even more regressions.
Change-Id: If29b4f6718cbc8af18fe18a5e3eca3912e8af01e
Related: OS#4658
TRX Toolkit is still backwards compatible with Python 2, but Python 3
performs much better. Also, on Debian Stretch, which is used as the
base for our Docker images, Python 2.7 is still the default. Let's
require Python 3 in the shebang.
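In practice this boils down to the first line of every executable
script, e.g. (illustrative):

  #!/usr/bin/env python3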
Change-Id: I8a1d7c59d3b5d49ec2ed94a7c77905e02134f216
In order to reflect the UL/DL delay caused by the premature burst
scheduling (a.k.a. 'fn-advance') in a virtual environment, the
Transceiver implementation now queues all to-be-transmitted bursts,
so they remain in the queue until the appropriate time of transmission.
The API user is supposed to call recv_data_msg() in order to obtain
an L12TRX message on the TRXD (data) interface, so it gets queued by
this function. Then, to ensure timely transmission, the user of this
implementation needs to call clck_tick() on each TDMA frame. Both
functions are thread-safe (queue mutex).
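A minimal, self-contained sketch of the queueing idea (illustrative
only, not the actual trx_toolkit code):

  import threading

  class TxQueue:
      def __init__(self):
          self._lock = threading.Lock()  # both methods are thread-safe
          self._queue = []               # list of (fn, msg) tuples

      def recv_data_msg(self, fn, msg):
          # Called for every L12TRX message received on the TRXD
          # interface: the burst is queued, not sent immediately.
          with self._lock:
              self._queue.append((fn, msg))

      def clck_tick(self, fn, send):
          # Called on every TDMA frame: transmit all bursts that are
          # due, in order of increasing TDMA frame number.
          with self._lock:
              due = sorted((e for e in self._queue if e[0] <= fn),
                           key=lambda e: e[0])
              self._queue = [e for e in self._queue if e[0] > fn]
          for _, msg in due:
              send(msg)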
In a multi-trx configuration, the use of the queue additionally ensures
proper burst aggregation on multiple TRXD connections, so all L12TRX
messages are guaranteed to be sent in the right order, i.e. with
monotonically increasing TDMA frame numbers.
Of course, this change increases the overall CPU usage, given that
each transceiver gets its own queue, and we need to serve them all
on every TDMA frame. According to my measurements, when running
test cases from ttcn3-bts-test, the average load is ~50% higher
than what it used to be. Still not significantly high, though.
Change-Id: Ie66ef9667dc8d156ad578ce324941a816c07c105
Related: OS#4658, OS#4546
Running with cProfile shows that there are quite a lot of calls:
469896 0.254 0.000 0.254 0.000 trx_list.py:37(__getitem__)
Let's avoid using it in performance-critical parts.
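The pattern being avoided, in generic terms (the names below are made
up, not the trx_list.py API): resolve the reference once instead of
going through the custom __getitem__ on every iteration of a hot loop.

  class Container:
      def __init__(self, items):
          self._items = items
      def __getitem__(self, idx):
          return self._items[idx]  # each lookup is an extra Python call

  c = Container(["trx0"])

  total = 0
  for _ in range(469896):          # slow: __getitem__ per iteration
      total += len(c[0])

  trx0 = c[0]                      # better: resolve once, reuse below
  for _ in range(469896):
      total += len(trx0)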
Change-Id: I2bbc0a2af8218af0b9a02d8e16d4216cf602892a
In general, premature scheduling of to-be-transmitted bursts
inevitably increases the time delay between Uplink and Downlink.
The more we advance the TDMA frame number, the greater this delay
gets. 20 TDMA frames is definitely more than a regular transceiver
needs to pre-process a burst before transmission.
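For a rough sense of scale (simple arithmetic, not project code):

  # One TDMA frame = 8 timeslots * 156.25 symbol periods at ~270.833 ksym/s
  frame_ms = 8 * 156.25 / 270.833             # ~4.615 ms
  print("fn-advance of 20 ~ %.1f ms delay" % (20 * frame_ms))  # ~92.3 ms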
Change-Id: Ia9b142b59d95f2cd7b2394596cf72c0bcd36d711
Related: OS#4487
When running together with fake_trx.py (the most commonly used
back-end), it is currently possible that Downlink bursts are received
in the wrong order if more than one transceiver is configured
(multi-trx mode). This is how it looks:
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=629 rssi=-86 toa=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=629 ts=3 bid=1
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=630 rssi=-86 toa=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=630 ts=3 bid=2
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=631 rssi=-86 toa=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=631 ts=3 bid=3
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=633 (!) rssi=-86 toa=0
DSCHD NOTICE sched_trx.c:663 Substituting (!) lost TDMA frame 632 on TCH/F
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=632 ts=3 bid=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=633 ts=3 bid=1
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=632 (!) rssi=-86 toa=0
DTRXD NOTICE sched_trx.c:640 Too many (>104) contiguous TDMA frames elapsed (2715647)
since the last processed fn=633 (current fn=632)
so here a burst with TDMA fn=633 was received earlier than the burst
with TDMA fn=632. The burst loss detection logic considered the latter
lost and substituted it with a dummy burst. When the out-of-order
burst with TDMA fn=632 finally arrived, we got a huge number of
allegedly elapsed frames:
((632 + 2715648) - 633) % 2715648 == 2715647
Given that late bursts get substituted, the best thing we can do
is to reject them and log an error. Passing them to the logical
channel handler (again) might lead to undefined behaviour.
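The frame number arithmetic behind the numbers above, spelled out in
Python for clarity:

  GSM_HYPERFRAME = 2715648  # 26 * 51 * 2048 TDMA frames

  def elapsed_fn(fn, last_fn):
      # frames elapsed since last_fn, wrapping at the hyper-frame boundary
      return (fn + GSM_HYPERFRAME - last_fn) % GSM_HYPERFRAME

  # a late burst (fn=632 arriving after fn=633) looks like almost a
  # whole hyper-frame has elapsed:
  assert elapsed_fn(632, 633) == 2715647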
Change-Id: I873c8555ea2ca190b1689227bb0fdcba87188772
Related: OS#4658, OS#4546
It's not something we should be trying to fix if a whole TDMA
multi-frame is lost. For some yet unknown reason, the difference
between the last processed TDMA frame number and the current one
is sometimes so huge that trxcon eats a lot of CPU trying to
compensate for nearly the whole TDMA hyper-frame:
sched_trx.c:640 Too many (>104) contiguous TDMA frames elapsed (2715647)
since the last processed fn=633 (current fn=632)
Let's just print a warning and not compensate for more than one
TDMA multi-frame period corresponding to the current layout.
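A rough sketch of the capping (the limit below is illustrative; the
real limit is the multi-frame period of the currently active layout,
e.g. 26 or 51 frames):

  MF_PERIOD = 51

  def frames_to_compensate(elapsed):
      if elapsed > MF_PERIOD:
          print("WARNING: %d frames elapsed, compensating only %d"
                % (elapsed, MF_PERIOD))
          return MF_PERIOD
      return elapsed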
Change-Id: I56251d0d2f6fa19195ff105d3bdfbc22df6db8cd
This change fixes several warnings reported by GCC 10.1.0:
apps/rssi/main.c:238:30: warning: 'sprintf' may write a terminating
nul past the end of the destination
apps/rssi/main.c:238:4: note: 'sprintf' output between 10 and 17
bytes into a destination of size 16
apps/rssi/main.c:413:26: warning: '.' directive writing 1 byte into
a region of size between 0 and 9
apps/rssi/main.c:413:3: note: 'sprintf' output between 10 and 20
bytes into a destination of size 16
Change-Id: I7980727b78f7622d792d82170f73c90ac5770397
These symbols are defined, but never used:
- struct last_rach - seems to be copy-pasted from prim_rach.c,
- tall_msgb_ctx - already defined in libosmocore.
Change-Id: I6077c8e9b441f7848d1a4c25a8b5e1aed82f4b7d
By default, the RSSI on the Rx side is computed from the transmitter's
Tx power by subtracting the Rx path loss.
If FAKE_RSSI is used, the values configured there are used instead.
A hardcoded default Tx nominal power of 50 dBm is set to keep the old
behavior of RSSI = -60 dBm after the calculation.
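A sketch of the described computation (names are illustrative, and the
default path loss of 110 dB is inferred from the numbers above, not
taken from the code):

  TX_POWER_DEFAULT = 50    # dBm, hardcoded default Tx nominal power
  PATH_LOSS_DEFAULT = 110  # dB, so 50 dBm - 110 dB = -60 dBm as before

  def calc_rssi(tx_power=TX_POWER_DEFAULT, path_loss=PATH_LOSS_DEFAULT,
                fake_rssi=None):
      # FAKE_RSSI, if set, overrides the computed value
      if fake_rssi is not None:
          return fake_rssi
      return tx_power - path_loss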
Change-Id: I3ee1a32ca22c3272e66b3ca78e4f67d283844c80
L1CTL uses network byte order, because this protocol is spoken
between different devices and architectures. Somehow I forgot about
this while adding the SETFH command back in 2018.
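For reference, network byte order is big-endian; in Python terms
(a generic illustration, not the L1CTL code itself):

  import struct
  struct.pack('!H', 0x0102)  # -> b'\x01\x02' on any host architecture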
Change-Id: Ia2f70f0d5e35b6bf05e1fa6fb51a15c1bbe3ca4c
Related: OS#4546
Jenkins build #2516 has uncovered a problem in DATADumpFile.parse_msg():
======================================================================
FAIL: test_parse_empty (test_data_dump.DATADump_Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/build/src/target/trx_toolkit/test_data_dump.py",
line 138, in test_parse_empty
self.assertEqual(msg, False)
AssertionError: None != False
I did a quick investigation, and figured out that this failure
happens when trying to call parse_msg() with idx == 0, because
DATADumpFile._seek2msg() basically does nothing in this case
and thus always returns True. The None itself comes from
DATADumpFile._parse_msg().
Let's ensure that DATADumpFile.parse_msg() always returns None,
even if DATADumpFile._seek2msg() fails. Also, update the unit
test, so we always test a wide range of 'idx' values.
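A simplified sketch of the intended behaviour (not the literal
data_dump.py code; the helpers are stubbed out):

  class DATADumpFileSketch:
      # Stubs standing in for the real helpers:
      def _seek2msg(self, idx):
          return False   # e.g. seeking to the message failed
      def _parse_msg(self):
          return None    # e.g. nothing could be parsed

      def parse_msg(self, idx):
          # Bail out if we cannot seek to the requested message, so the
          # caller consistently gets None on any failure
          if not self._seek2msg(idx):
              return None
          return self._parse_msg()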
Change-Id: Ifcfa9c5208636a0f9309f5ba8e47d282dc6a03f4
It would make sense to send the ARFCN list in the parameters of the
SETFH command if there were a clear distinction between transceivers
in fake_trx.py, i.e. which one is an MS and which is a BTS.
Right now, every Transceiver is an abstract entity that emits
and receives bursts. So when you convert an ARFCN to a pair of
Downlink/Uplink frequencies, you don't know whether it maps
as Rx/Tx or as Tx/Rx for a given Transceiver.
Of course, we could assume that this is an MS-specific feature, and
that a pair of Downlink/Uplink frequencies always corresponds to
Rx/Tx, but what if some day we need to implement and test a
similar approach for the BTS side? Also, by sending frequency
values in kHz (rather than ARFCNs) we can avoid inconsistency
with the existing RXTUNE / TXTUNE commands.
Change-Id: Ia2bf08797f1a37b56cf47945694b901f92765b58
Related: I587e4f5da67c7b7f28e010ed46b24622c31a3fdd
Related: OS#4546
There are two ways to implement frequency hopping:
a) The Transceiver is configured with the hopping parameters, in
particular HSN, MAIO, and the list of ARFCNs (channels), so the
actual Rx/Tx frequencies are changed by the Transceiver itself
depending on the current TDMA frame number.
b) The L1 maintains several Transceivers (two or more), so each
instance is assigned one dedicated RF carrier frequency, and
hence the number of available hopping frequencies is equal to
the number of Transceivers. In this case, it's the task of
the L1 to commutate bursts between Transceivers (frequencies).
Variant a) is commonly known as "synthesizer frequency hopping"
whereas b) is known as "baseband frequency hopping".
For the MS side, a) is preferred, because a phone usually has only
one Transceiver (per RAT). On the other hand, b) is more suitable
for the BTS side, because it's relatively easy to implement and
there is no technical limitation on the number of Transceivers.
FakeTRX has obviously supported b) since the multi-TRX feature was
implemented; it also supports a) by resolving UL/DL frequencies from
a set of hopping parameters preconfigured by the L1. The latter can
be enabled using the SETFH control command:
CMD SETFH <HSN> <MAIO> <RXF1> <TXF1> [... <RXFN> <TXFN>]
where <RXFN> and <TXFN> is a pair of Rx/Tx frequencies (in kHz)
corresponding to one ARFCN of the Mobile Allocation. Note that the
channel list is expected to be sorted in ascending order.
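Illustrative parsing of the arguments described above (simplified,
not the actual fake_trx.py code):

  def parse_setfh(params):
      # params: string tokens following "CMD SETFH", e.g.
      # ['6', '0', '935200', '890200', '935400', '890400']
      hsn, maio = int(params[0]), int(params[1])
      freqs = [int(f) for f in params[2:]]      # kHz values
      ma = list(zip(freqs[0::2], freqs[1::2]))  # (RXF, TXF) pairs
      return hsn, maio, ma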
NOTE: in the current implementation, mode a) applies to the whole
Transceiver and all its timeslots, so using it for the BTS side
does not make any sense (imagine BCCH hopping together with DCCH).
Change-Id: I587e4f5da67c7b7f28e010ed46b24622c31a3fdd
L1CTL handling code should not be involved in such high-level checks, so
while at it, move the check into a separate function in gsm48_rr.c and
add a length check. gsm48_rr_tx_voice() is the only caller of
l1ctl_tx_traffic_req().
Related: SYS#4924
Change-Id: Iba84f5d60ff5b1a2db8fb6af5131e185965df7c9
Use the newly added audio / loopback config vty node to provide audio
loopback from the mobile app. Only FR is supported for now.
Change-Id: Icd0b8d00c855db1a6ff5e35e10c8ff67b7ad5c83