This patch enables setting cipher and integrity algorithms
in Amarisoft eNB and srsENB via scenario files. If no
settings are defined, the following defaults are applied:
- Cipher algorithm: EEA0, EEA2, EEA1
- Integrity algorithm: EIA2, EIA1, EIA0
Example of setting cipher algorithms:
- 4g:srsue-rftype@uhd+srsenb-rftype@uhd+mod-enb-cipher@eea1+mod-enb-cipher@eea0+mod-enb-nprb@6
Change-Id: I595206b7d49016fb6d0aec175c828d9537c53886
This patch adds the stdout counter to count events happening
on stdout (known from the UE already) to the common process
class, so they can also be used from the eNB (and other objects).
In addition, we add a PRACH counter to be used for tests.
Change-Id: I434f072b8aa6f4dce9f90889c6b40832f6798ff8
To expand the test capabilities, we introduce Android UEs
as new modems. Currently, the following tests are
supported:
- Ping
- iPerf3 DL/UL
- RRC Mobile MT Ping
A short description follows.
Prerequisites:
- Android UE
- Rooted (Ping, iPerf, RRC Idle MT Ping)
- Qualcomm baseband with working diag_mdlog (RRC Idle MT Ping)
- iPerf3
- Dropbear
- OGT Slave Unit
- Android SDK Platform-Tools
(https://developer.android.com/studio/releases/platform-tools#downloads)
- Pycrate (https://github.com/P1sec/pycrate)
- SCAT
clone https://github.com/bedrankara/scat/ & install dependencies
checkout branch ogt
symlink scat (ln -s ~/scat/scat.py /usr/local/bin/scat)
Infrastructure explanation:
The Android UEs are connected to the OGT Units via USB. We
activate tethering and set up a SSH server (with Dropbear).
We chose tethering over WiFi to have a more stable route
for the ssh connection. We forward incoming connections to
the OGT unit hosting the Android UE(s) on specific ports
to the UEs via iptables. This enables OGT to issue commands
directly to the UEs. In case of local execution we use ADB
to issue commands to the Android UE. The setup was tested
with 5 Android UEs connected in parallel, but it should
scale to the number of available IPs in the respective
subnet. Furthermore, we need to cross-compile Dropbear
and iPerf3 to use them on the UEs. These tools have to be
added to the $PATH variable of the UEs.
Example setup:
In this example we have two separate OGT units (master
and slave) and two Android UEs that are connected to the
slave unit. An illustration may be found here: https://ibb.co/6BXSP2C
On UE 1:
ip address add 192.168.42.130/24 dev rndis0
ip route add 192.168.42.0/24 dev rndis0 table local_network
dropbearmulti dropbear -F -E -p 130 -R -T /data/local/tmp/authorized_keys -U 0 -G 0 -N root -A
On UE 2:
ip address add 192.168.42.131/24 dev rndis0
ip route add 192.168.42.0/24 dev rndis0 table local_network
dropbearmulti dropbear -F -E -p 131 -R -T /data/local/tmp/authorized_keys -U 0 -G 0 -N root -A
On OGT slave unit:
sudo ip link add name ogt type bridge
sudo ip l set eth0 master ogt
sudo ip l set enp0s20f0u1 master ogt
sudo ip l set enp0s20f0u2 master ogt
sudo ip a a 192.168.42.1/24 dev ogt
sudo ip link set ogt up
Now we have to manually connect to every UE from the OGT
master to set up SSH keys and verify that the setup works.
To do so, use:
ssh -p [UE-PORT] root@[OGT SLAVE UNIT's IP]
Finally, to finish the setup procedure, create the
remote_run_dir for Android UEs on the slave unit as
follows:
mkdir /osmo-gsm-tester-androidue
chown jenkins /osmo-gsm-tester-androidue
Example for modem in resource.conf:
- label: mi5g
type: androidue
imsi: '901700000034757'
ki: '85E9E9A947B9ACBB966ED7113C7E1B8A'
opc: '3E1C73A29B9C293DC5A763E42C061F15'
apn:
apn: 'srsapn'
mcc: '901'
mnc: '70'
select: 'True'
auth_algo: 'milenage'
features: ['4g', 'dl_qam256', 'qc_diag']
run_node:
run_type: ssh
run_addr: 100.113.1.170
ssh_user: jenkins
ssh_addr: 100.113.1.170
ue_ssh_port: 130
adb_serial_id: '8d3c79a7'
scat_parser:
run_type: local
run_addr: 127.0.0.1
adb_serial_id: '8d3c79a7'
Example for default-suites.conf:
- 4g:ms-label@mi5g+srsenb-rftype@uhd+mod-enb-nprb@25+mod-enb-txmode@1
Change-Id: I79a5d803e869a868d4dac5e0d4c2feb38038dc5c
In jenkins, I still saw incidents of the entire log becoming colored
after a colored stderr snippet was printed to the log. Make absolutely
sure that no unterminated ANSI coloring is leaked.
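A minimal sketch of such a safeguard, assuming standard SGR escape sequences (illustrative only, not the tester's actual code):

```python
RESET = '\x1b[0m'

def terminate_ansi(snippet):
    # If a snippet contains ANSI coloring but does not end with a
    # reset sequence, append one so the coloring cannot leak into
    # subsequent log output.
    if '\x1b[' in snippet and not snippet.endswith(RESET):
        return snippet + RESET
    return snippet
```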
Change-Id: Ib9ac1eea4a12d6d43ac8614491f016bbe9ca17b1
Jenkins does support showing ANSI colors on the web, but apparently not
in the junit results output. Strip ansi colors from report fragment
<system-out> text, to make it less annoying to read those on jenkins.
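A hedged sketch of the stripping (the pattern covers common CSI sequences; the tester's actual regex may differ):

```python
import re

# Matches CSI escape sequences such as '\x1b[1;31m' (colors etc.).
ANSI_CSI = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

def strip_ansi(text):
    # Remove ANSI escapes, e.g. before embedding log text in the
    # junit <system-out> element.
    return ANSI_CSI.sub('', text)
```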
Change-Id: I656ecc23bbfd3f25bdf012c890e0c998168844d3
Allow enriching the junit output with arbitrary subtasks within a test.
The current aim is, for handover tests, to not just show that a test
failed, but to show exactly which steps worked and which didn't, e.g.:
handover.py/01_bts0_started PASSED
handover.py/02.1_ms0_attach PASSED
handover.py/02.2_ms1_attach PASSED
handover.py/02.3_subscribed_in_msc PASSED
handover.py/03_call_established PASSED
handover.py/04.1_bts1_started FAILED
In this case it is immediately obvious from looking at the jenkins
results analyzer that bts1 is the cause of the test failure, and it is
visible which parts of the test are flaky, over time.
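The mechanism can be sketched as a context manager that records one PASSED/FAILED entry per step. This is a self-contained illustration, not the actual testenv API; only the idea of per-step report fragments comes from this commit:

```python
class Report:
    def __init__(self):
        self.fragments = []  # list of (name, 'PASSED'|'FAILED')

    def fragment(self, name):
        report = self
        class _Frag:
            def __enter__(self):
                return self
            def __exit__(self, exc_type, exc, tb):
                # record the step's outcome, then re-raise failures
                report.fragments.append(
                    (name, 'FAILED' if exc_type else 'PASSED'))
                return False
        return _Frag()

r = Report()
with r.fragment('01_bts0_started'):
    pass  # step succeeded
try:
    with r.fragment('04.1_bts1_started'):
        raise RuntimeError('bts1 failed to start')
except RuntimeError:
    pass
```

In the real suite, each `with` block would wrap one step such as starting a BTS or attaching an MS.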
First user will be the upcoming handover_2G suite, in
I0b2671304165a1aaae2b386af46fbd8b098e3bd8.
Change-Id: I4ca9100b6f8db24d1f7e0a09b3b7ba88b8ae3b59
Retrieve a test's own logging. The aim is to provide the logging
belonging to a given report fragment in the junit XML output; this
will be used by the upcoming test.report_fragment() feature.
Change-Id: Idfa0a45f3e6a18dd4fe692e81d732c70b5cffb76
In a test, I called print() on a multi-line string and saw the log
showing each line 0.2 seconds apart. redirect_stdout seems to be
pretty inefficient.
Instead, put a print() function into the testenv, to directly call log()
on the strings passed to print().
The initial idea for redirect_stdout was that we could print() in any
deeper functions called from a test script. But we have no such nested
print() anywhere, only in test scripts themselves.
As a result of this, a multi-line print() in test scripts now no longer
puts the log prefix (timestamp, test name...) and suffix (backtrace /
source position) to each single line, but prints the multiline block
between a single log prefix and suffix -- exactly like the log()
function does everywhere else.
I actually briefly implemented adding the log prefix to each separate
line everywhere, but decided that it is not a good idea: in some places
we log config file snippets and other lists, and prepending the log
prefix to each line makes pasting such a snippet from (say) a jenkins
log super cumbersome. And the log prefix (backtrace) attached on each
separate line makes multiline blocks very noisy, unreadable.
Change-Id: I0972c66b9165bd7f2b0b387e0335172849199193
test.Test() overrides name() in order to provide source line number
information. However, overriding name() is the wrong place for that, as
name() is also often used for identifying an object - when listing the
tests of a suite, the line number should not appear in the test name.
For example, the line number sometimes ends up in the test results in
jenkins, making 'foo.py' and 'foo.py:23' two distinct report items.
Instead, add a separate function Origin.src() that defaults to name(),
but specific classes can override src() if they wish to provide more
detailed information with the object name.
Override src() in Test, not name().
Use src() in backtraces.
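A self-contained sketch of the name()/src() split (class and method names mirror the commit text; the details are assumptions):

```python
class Origin:
    def name(self):
        return 'origin'
    def src(self):
        # default: same as name(); subclasses may add detail
        return self.name()

class Test(Origin):
    def __init__(self, filename, lineno):
        self.filename, self.lineno = filename, lineno
    def name(self):
        # identifies the object: no line number
        return self.filename
    def src(self):
        # used in backtraces: includes the source position
        return '%s:%d' % (self.filename, self.lineno)

t = Test('hello_world.py', 23)
```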
The suite_test.ok shows that the backtracing in the log remains
unchanged, but the place where the test name is printed is corrected:
I am 'test_suite' / 'hello_world.py:23'
becomes
I am 'test_suite' / 'hello_world.py'
(Notice that "[LINENR]" in suite_test.ok is a masking of an actual
number, done within the selftest suite)
Change-Id: I0c4698fa2b3db3de777d8b6dcdcee84e433c62b7
Allow showing log lines matching specific regexes, from a specific start
point of a log.
My use case is to echo the handover related logging after an expected
handover failed, so that the reason is visible already in the console
output of a jenkins run. So far I would need to open the endless bsc log
and look up the matching place in it to get a conclusion about why a
handover failed.
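A hedged sketch of the matching logic (function name and signature are illustrative, not the tester's actual API):

```python
import re

def matching_lines(log_text, patterns, start_re=None):
    # Return lines matching any of `patterns`, but only after the
    # first line matching `start_re` (the start point), so that
    # only the relevant part of a long log is echoed.
    started = start_re is None
    out = []
    for line in log_text.splitlines():
        if not started:
            started = re.search(start_re, line) is not None
            continue
        if any(re.search(p, line) for p in patterns):
            out.append(line)
    return out

hits = matching_lines(
    'boot ok\nstarting handover\nHO: command sent\nunrelated\nHO: failed',
    [r'HO:'], start_re=r'starting handover')
```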
Change-Id: Ib6569f7486e9d961bd79a5f24232e58d053667a1
Remove ARFCNs as a concept from resource pool, assign a fixed ARFCN to
each BTS and TRX in the resource pools.
Using ARFCNs on specific bands as resources was an idea that is hard to
implement, because specific BTS dictate selection of bands which
influences which ARFCNs can be picked. That means reserving ARFCN
resources is only possible after reserving specific BTS resources, but
the tester is currently not capable of such two-stage resolution.
Writing handover tests, I ran into the problem that both BTS in a scenario
attempt to use the same ARFCN.
The by far easiest solution is to assign one fixed ARFCN to each BTS and
TRX. If ever needed, a scenario modifier can still configure different
ARFCNs.
(Due to uncertainty about OC2G operation stability, I prefer to leave
OC2G on ARFCN 50, as it happened to end up being configured before this
patch.)
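As a hedged illustration, the fixed per-BTS assignment could look like this in a resource pool config; the 'arfcn' key name and the sysmoBTS value are assumptions, only OC2G remaining on ARFCN 50 is stated above:

```yaml
bts:
- label: sysmoBTS 1002
  arfcn: 868   # illustrative value
- label: OC2G
  arfcn: 50    # kept as configured before this patch
```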
Change-Id: I0a6c60544226f4261f9106013478d6a27fc39f38
On non-debug log level, show something like this at the beginning of
each suite:
03:45:49.439720 tst handover:sysmo+secondbts-trx-b200: RESERVED RESOURCES for handover:
bts
sysmoBTS 1002
Ettus B200
ip_address
10.42.42.2
10.42.42.3
10.42.42.4
10.42.42.5
10.42.42.6
10.42.42.7
modem
sierra_1st
sierra_2nd
Change-Id: Ic23556eafee654c93d13c5ef405028da09bd51d7
At the end of a test suite, do not omit the passed tests. For example,
running handover against N BTS combinations, it was hard to summarize
which BTS models actually succeeded, with only the failures listed.
Besides the "FAIL" listings, now print something like this at the end:
PASS: handover:sysmo+secondbts-trx-b200 (pass: 1)
pass: handover.py (198.8 sec)
PASS: handover:sysmo+secondbts-trx-umtrx (pass: 1)
pass: handover.py (192.7 sec)
PASS: handover:trx-b200+secondbts-trx-umtrx (pass: 1)
pass: handover.py (193.1 sec)
Change-Id: Ib85a5b90e267c2ed2f844691187ecadc8939b1bb
remote_port defines a custom/additional port for
connections over ssh. It may be used in case several
ssh instances share one IP address.
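A hypothetical resource.conf fragment illustrating the option; the surrounding keys mirror the Android UE example above, and the exact placement of remote_port is an assumption:

```yaml
run_node:
  run_type: ssh
  ssh_user: jenkins
  ssh_addr: 10.42.42.116
  remote_port: 2222   # custom ssh port, e.g. when several ssh instances share one IP
```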
Change-Id: I2c93fd2ea1c10c333d00eafd3c1066c35796e398
* add new UE feature
* enable in srsue.conf.templ
* add new table for maximum rates
* add config scenario to enable SIB option for QAM64
Change-Id: I6ac2c9989a761e91b93d76c2507f55f0140b202d
Due to the integration of DL-QAM256 another table for DL max rates is needed.
Therefore, I added the parameter 'qam256' to the feature list in the resource.cfg.
The patch also enables the correct UE settings in the config file.
Change-Id: I2d34395449cdcfb31db66ea887d9adbee551e757
Before this patch, almost everything was in place to support concurrent
osmo-gsm-tester instances sharing a common state dir. However, during
resource reservation, if the reservation couldn't be done due to too
many resources being in use, osmo-gsm-tester would fail and skip the
test suite.
With this patch, OGT will wait until some reserved resources are
released and then try requesting the reservation again.
Change-Id: I938602ee890712fda82fd3f812d8edb1bcd05e08
The remotely run script is moved into a new subdir called "external",
where external utils to be used by osmo-gsm-tester (external to its own
process) are placed.
It needs to be in another directory because python files in obj/ are
loaded at startup of osmo-gsm-tester to dynamically load schemas.
Change-Id: I633a85294694f2c6efd58535729e9b8af166b3ff
Report generation failed when the duration was not set correctly
and None was returned. Use 0 as the default duration.
Change-Id: Ia654c67bf2dcce432f84e869550c516d8d5a07a0
tests can now use 'tenv.test().set_kpis(some_dict)' to set any kind of
data as KPIs, which will be presented in the junit report.
The representation of KPIs in the xml file doesn't follow the junit
format, mainly because it has no support for per-test properties.
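A sketch of how such a KPI dict might be rendered into the report XML; since junit has no per-test properties, the element names here are assumptions rather than the tester's actual output:

```python
import xml.etree.ElementTree as ET

def kpis_to_xml(kpis):
    # render a flat KPI dict as XML elements next to the testcase
    elem = ET.Element('kpis')
    for key, val in kpis.items():
        ET.SubElement(elem, 'kpi', name=key, value=str(val))
    return ET.tostring(elem, encoding='unicode')

xml = kpis_to_xml({'dl_brate_mbps': 42.1, 'rrc_connects': 3})
```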
Change-Id: I00e976f65a202e82d440bf33708f06c8ce2643e2
The timeout value can be specified per test in suite.conf:
config:
suite:
<suite_name>:
<test_name>:
timeout: 2 # 2 seconds timeout
Change-Id: I522f51f77f8be64ebfdb5d5e07ba92baf82d7706
This feature is not really implemented and maybe never was. In any case,
it makes sense to have it working per test, so we can specify different
values per test if needed.
Change-Id: I3c1b95c10e974da87ec9abd25578d8bcc0bc55a3
The process object always used timeout=300 when running with
launch_sync(). Let's allow replacing that value beforehand, so that
iperf3 can pre-configure the process object and the caller doesn't
need to care about calculating the expected run time.
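The idea can be sketched with a minimal stand-in class; the names are illustrative, only the timeout=300 default and the pre-configuration idea come from this commit:

```python
class Process:
    DEFAULT_TIMEOUT = 300  # previous hard-coded launch_sync() value

    def __init__(self):
        self.default_timeout = self.DEFAULT_TIMEOUT

    def set_default_launch_timeout(self, secs):
        # e.g. iperf3 sets this to its expected run time plus margin
        self.default_timeout = secs

    def launch_sync(self, timeout=None):
        if timeout is None:
            timeout = self.default_timeout
        return timeout  # stand-in for actually running the process

p = Process()
p.set_default_launch_timeout(90)
```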
Change-Id: I7f6c5078f648013515919aa35ebcdb3ef157b5e4
Setting the log.ctx manually is not needed anymore and is actually
harmful: in all places where it was used, a log.Origin already in the
path was being passed, causing an origin loop.
Change-Id: I0511b9f7bc59e3c7f2269ff3155d0c95db58d063
This way tests which require a very specific config file can override
specific template files used by object classes.
Change-Id: I65d1b1e826d2d430ee83810d998b98d0ccaa07cd
After this commit, in some situations ssh-related errors are printed
directly in the exception, to quickly find the cause of the issue.
Example:
FAIL: ping.py (5.0 sec) Error: rm-remote-dir(pid=25913): launch_sync(): local ssh process exited with status 255 (ssh: connect to host 10.42.42.110 port 22: No route to host) [trial↪4g:srsue-rftype@zmq+srsenb-rftype@zmq+mod-enb-nprb@6↪ping.py:9↪ping.py↪srsepc_10.42.42.118↪host-jenkins@10.42.42.110↪rm-remote-dir(pid=25913)]
Change-Id: Ia16c7dec96f70d761600ad6a50d9df8382d9c2c8
Before, it would show something like:
"""
osmo_gsm_tester.core.log.Error: Exited in error 255
"""
Now:
"""
osmo_gsm_tester.core.log.Error: rm-remote-dir(pid=24820): Exited in error 255 [trial↪4g:srsue-rftype@zmq+srsenb-rftype@zmq+mod-enb-nprb@6↪ping.py:9↪ping.py↪srsepc_10.42.42.118↪host-jenkins@10.42.42.110↪rm-remote-dir(pid=24820)]
"""
Change-Id: I8873f67a2f3df21c4dd552c92510535bf95e2c9d
That's not needed and triggers the parent loop detection in
log.find_on_stack() if logging is called under that stack frame.
Change-Id: I4ab7e8977fa9bad5c8956b7c1df1513b27bb5aa2
tgz files in trials can be categorized in subdirectories, allowing
selection of different binary files at runtime based on the target run
node which is going to run them. This way, for instance, one can have a
binary linked against libs for e.g. CentOS under run_label "centos/", or
an ARM target under "arm", and then use "run_label: arm" on the resource
using it.
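A possible trial layout under this scheme (file names are illustrative):

```
trial-2021-01/
  centos/
    srsenb.build.tgz    # linked against CentOS libraries
  arm/
    srsenb.build.tgz    # cross-compiled for the ARM run node
```

A run node resource then selects its subdirectory with e.g. "run_label: arm".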
Change-Id: Iaf2e97da3aff693395f44f0e93b184d4846cf6da
Since the process is run in the background through the wrapper bash
script, stdin was disabled there. By explicitly redirecting the bash
process stdin we make sure it is always able to read from it.
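An illustrative sketch of the fix in Python terms: give the backgrounded wrapper an explicitly redirected stdin instead of an inherited (closed) one, so it can always read from it. This is not the tester's actual launch code:

```python
import subprocess

# Explicitly provide an stdin pipe to the child instead of letting it
# inherit a possibly-closed descriptor; 'cat' stands in for the bash
# wrapper script.
p = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
out, _ = p.communicate(b'ping\n')
```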
Change-Id: I6cb7979aae0a7457919f353cbeb4c3b78cdd4919
This is useful since remote processes we run under ssh end up merging
both remote stdout and stderr into local stdout.
Change-Id: Ibbfb099a667f21641075faa1858e0b9acd706fd2
The API was doing far more than its name indicated, including important
work like making sure the process is killed with -9 at the end, after
the ssh connection is dropped.
Change-Id: If043ecab509b34b0922824d73db916196274ec64
This allows inheriting suites or scenarios from e.g. the sysmocom/ dir,
while still allowing new suites and scenarios to be applied on top.
Change-Id: Icecdae32d400a6b6da2ebf167c1c795f7a74ae96