doc/manual: Refactor, rewrite, improve and update most of the User Manual

* Some TODOs are added as comments which actually require code changes.
  These are details which showed up as inconsistencies or missing bits
  while writing the documentation for them.

* Some sections are introduced but are still to be written:
** Debugging section
** Docker Setup section
** Ansible Setup section
** Troubleshooting (add jenkins red cross button sending kill -9)
** resources.conf attribute list needs to be converted to a table

* Device related setup needs to be updated and extended
* Parametrized scenarios need to be documented
* 4G resources documentation needs to be added.

Change-Id: Ifc2a3c74d45336cc988b76c0ff68a85311e4dd40
Pau Espin 2020-03-10 11:46:39 +01:00 committed by pespin
parent 990b520b1f
commit 7e0b2ddfb8
12 changed files with 930 additions and 767 deletions

.gitignore

@ -19,6 +19,6 @@ doc/manuals/*.pdf
doc/manuals/*__*.png
doc/manuals/*.check
doc/manuals/generated/
doc/manuals/osmomsc-usermanual.xml
doc/manuals/osmo-gsm-tester-manual.xml
doc/manuals/common
doc/manuals/build


@ -0,0 +1,6 @@
[[ansible]]
== Ansible Setup
Available in Osmocom's 'osmo-ci.git' subdirectory 'ansible/'; see 'gsm-tester/README.md' there.
//TODO: Explain more where to find, how to build, how to use.


@ -1,166 +1,43 @@
== Configuration
=== Schemas
All configuration attributes in {app-name} are stored and provided as YAML
files, which are handled internally mostly as sets of dictionaries, lists and
scalars. Each of these configurations have a known format, which is called
'schema'. Each provided configuration is validated against its 'schema' at parse
time. Hence, 'schemas' can be seen as a namespace containing a structured tree
of configuration attributes. Each attribute has a schema type assigned which
constrains the type of value it can hold.
There are several well-known schemas used across {app-name}, and they are
described in following sub-sections.
[[schema_resources]]
==== Schema 'resources'
This schema defines all the attributes which can be assigned to
a _resource_, and it is used to validate the <<resources_conf,resources.conf>>
file. Hence, the <<resources_conf,resources.conf>> contains a list of elements
for each resource type.
It is important to understand that the content in this schema refers to a list of
resources for each resource class. Since a list is ordered by definition, it
clearly identifies specific resources by order. This is important when applying
filters or modifiers, since they are applied per-resource in the list. One can
for instance apply attribute A to first resource of class C, while not applying
it or applying another attribute B to second resources of the same class. As a
result, complex forms can be used to filter and modify a list of resources
required by a testsuite.
On the other hand, it is also important to note that lists of simple or scalar
types are currently treated as unordered sets, which means combinations of
filters or modifiers apply differently. In the future, it may be possible to
have both behaviors for scalar/simple types by also using the YAML 'set' type in
{app-name}.
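This per-position behavior can be illustrated with a short sketch (using only
notation that appears elsewhere in this manual):

----
resources:
  bts:
  - type: osmo-bts-sysmo  # constraint applies to the first BTS in the list only
  - {}                    # second BTS left unconstrained
----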
=== Format: YAML, and its Drawbacks
The general configuration format used is YAML. The stock python YAML parser
does have several drawbacks: too many complex possibilities and alternative
ways of formatting a configuration; but, at the time of writing, it seems to be
the only widely used configuration format that offers simple, human readable
formatting as well as nested structuring. It is recommended to use only the
exact YAML subset seen in this manual, in case the osmo-gsm-tester should move
to a less bloated parser in the future.
Careful: if a configuration item consists of digits and starts with a zero, you
need to quote it, or it may be interpreted as an octal notation integer! Please
avoid using octal notation on purpose; it is not supported intentionally.
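For instance (a minimal illustration of the quoting pitfall, using the
'msisdn' attribute as an example):

----
msisdn: '0701'   # quoted: parsed as the string "0701"
#msisdn: 0701    # unquoted: a YAML 1.1 parser may read this as octal 0701 = 449!
----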
These kinds of resources and their attributes are known:
'ip_address'::
List of IP addresses to run osmo-nitb instances on. The main unit
@ -251,6 +128,278 @@ These kinds of resource are known:
- 'voice'
- 'ussd'
[[schema_want]]
==== Schema 'want'
This schema is basically the same as the <<schema_resources,resources>> one, but
with an extra 'times' attribute for each resource item. All 'times' attributes
are expanded before matching. For example, if a 'suite.conf' requests two BTS,
one may enforce that both BTS should be of type 'osmo-bts-sysmo' in these ways:
----
resources:
bts:
- type: osmo-bts-sysmo
- type: osmo-bts-sysmo
----
or alternatively,
----
resources:
bts:
- times: 2
type: osmo-bts-sysmo
----
[[schema_conf]]
==== Schema 'conf'
This schema is used by <<suite_conf,suite.conf>> and <<scenario_conf,scenario.conf>> files. It contains 3 main sections:::
[[schema_conf_sec_resources]]
- Section 'resources': Contains a set of elements validated with the <<schema_resources,resources>>
schema. In <<suite_conf,suite.conf>> it is used to construct the list of
requested resources. In <<scenario_conf,scenario.conf>>, it is used to inject
attributes into the initial <<suite_conf,suite.conf>> _resources_ section and
hence further restrain it.
[[schema_conf_sec_modifiers]]
- Section 'modifiers': Both in <<suite_conf,suite.conf>> and
<<scenario_conf,scenario.conf>>, values presented in here are injected into
the content of the <<schema_conf_sec_resources,resources section>> after
_resource_ allocation, hereby overwriting attributes passed to the object
class instance managing the specific _resource_ (matches by resource type and
list position). Since it is combined with the content of
<<schema_conf_sec_resources,resources section>>, it is clear that the
<<schema_resources,resources schema>> is used to validate this content.
[[schema_conf_sec_config]]
- Section 'config': Contains configuration attributes for {app-name} classes which are
not _resources_, and hence cannot be configured with <<schema_conf_sec_modifiers,modifiers>>.
They can overwrite values provided in the <<defaults_conf,defaults.conf>> file.
//TODO: defaults.timeout should be change in code to be config.test_timeout or similar
//TODO: 'config' should be split into its own schema and validate defaults.conf
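Putting the three sections together, a hypothetical 'scenario.conf' might look
as follows (the 'num_trx' attribute in the 'modifiers' section is purely
illustrative; 'type' and the 'config' values appear in real examples in this
manual):

----
resources:
  bts:
  - type: osmo-bts-sysmo   # restrain allocation to this BTS type
modifiers:
  bts:
  - num_trx: 2             # illustrative: overwrites an attribute of the allocated BTS
config:
  bsc:
    net:
      codec_list:
      - fr1
----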
[[config_paths]]
=== Config Paths
The osmo-gsm-tester looks for configuration files in various standard
directories in this order:
- '$HOME/.config/osmo-gsm-tester/'
- '/usr/local/etc/osmo-gsm-tester/'
- '/etc/osmo-gsm-tester/'
The config location can also be set by an environment variable
'$OSMO_GSM_TESTER_CONF', which then overrides the above locations.
The osmo-gsm-tester expects to find the following configuration files in a
configuration directory:
- <<paths_conf,paths.conf>>
- <<resources_conf,resources.conf>>
- <<default_suites_conf,default-suites.conf>> (optional)
- <<defaults_conf,defaults.conf>> (optional)
These are described in detail in the following sections.
[[paths_conf]]
==== 'paths.conf'
The 'paths.conf' file defines where to store the global state (of reserved
resources) and where to find suite and scenario definitions.
Any relative paths found in a 'paths.conf' file are interpreted as relative to
the directory of that 'paths.conf' file.
There is not yet any well-known schema to validate this file's contents, since
it has only 3 attributes.
.Sample paths.conf file:
----
state_dir: '/var/tmp/osmo-gsm-tester/state'
suites_dir: '/usr/local/src/osmo-gsm-tester/suites'
scenarios_dir: './scenarios'
----
[[state_dir]]
===== 'state_dir'
It contains global or system-wide state for osmo-gsm-tester. In a typical state
dir you can find the following files:
'last_used_*.state'::
Contains stateful content spanning across {app-name} instances and
runs. For instance, 'last_used_msisdn.state' is automatically
(and atomically) increased every time osmo-gsm-tester needs to assign a
new subscriber in a test, ensuring tests get unique msisdn numbers.
'reserved_resources.state'::
File containing the set of resources currently reserved by any number of
osmo-gsm-tester instances (aka the pool of allocated resources). Each
osmo-gsm-tester instance is responsible for clearing its resources from the
list once it is done using them and they are no longer reserved.
'lock'::
Lock file used to implement a mutual exclusion zone around any state
files in the 'state_dir', to prevent race conditions between different
{app-name} instances running in parallel.
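The mutual exclusion just described can be sketched as follows (a simplified
illustration using POSIX file locking; the helper name and exact mechanism are
not {app-name}'s actual implementation):

```python
import fcntl
import os

def with_state_lock(state_dir, fn):
    # Hold an exclusive lock on state_dir/lock while running fn(), so that
    # concurrent instances cannot modify the state files at the same time.
    lock_path = os.path.join(state_dir, 'lock')
    with open(lock_path, 'w') as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            return fn()
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

Any read-modify-write of 'reserved_resources.state' or a 'last_used_*.state'
file would happen inside such a critical section.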
This way, several concurrent users of osmo-gsm-tester (i.e. several
osmo-gsm-tester processes running in parallel) can run without interfering with
each other (e.g. using the same ARFCN, same IP or same ofono modem path).
If you would like to set up several separate configurations (not typical), note
that the 'state_dir' is used to reserve resources, which only works when all
configurations that share resources also use the same 'state_dir'. It is also
important to note that, since resources are stored in YAML dictionary form, if
the same physical device is described differently in several
<<resources_conf,resources.conf>> files (used by different {app-name} instances),
resource allocation may not work as expected.
[[suites_dir]]
===== 'suites_dir'
Suites contain a set of tests which are designed to be run together to test a
set of features given a specific set of resources. As a result, resources are
allocated per suite and not per test.
Tests for a given suite are located in the form of '.py' python scripts in the
same directory where the <<suite_conf,suite.conf>> lies.
Tests in the same testsuite which want to use some shared code can do so by
putting it e.g. in '$suites_dir/$suitename/lib/testlib.py':
----
#!/usr/bin/env python3
from osmo_gsm_tester.testenv import *
def my_shared_code(foo):
return foo.bar()
----
and then in the test itself use it this way:
----
#!/usr/bin/env python3
from osmo_gsm_tester.testenv import *
import testlib
suite.test_import_modules_register_for_cleanup(testlib)
from testlib import my_shared_code
bar = my_shared_code(foo)
----
.Sample 'suites_dir' directory tree:
----
suites_dir/
|-- suiteA
| |-- suite.conf
| '-- testA.py
|-- suiteB
| |-- testB.py
| |-- testC.py
| |-- lib
| | '-- testlib.py
| '-- suite.conf
----
[[suite_conf]]
===== 'suite.conf'
This file's content is parsed using the <<schema_want,want>> schema.
It provides
{app-name} with the base restrictions (later to be further filtered by
<<scenario_conf,scenario>> files) to apply when allocating resources.
It can also override attributes for the allocated resources through the
<<schema_conf_sec_modifiers,modifiers>> section (to be further modified by
<<scenario_conf,scenario>> files later on). Similarly, it can do the same for
general configuration options (not per-resource) through the
<<schema_conf_sec_config,config>> section.
.Sample 'suite.conf' file:
----
resources:
ip_address:
- times: 9 # msc, bsc, hlr, stp, mgw*2, sgsn, ggsn, iperf3srv
bts:
- times: 1
modem:
- times: 2
features:
- gprs
- voice
- times: 2
features:
- gprs
config:
bsc:
net:
codec_list:
- fr1
defaults:
timeout: 50s
----
[[scenarios_dir]]
===== 'scenarios_dir'
This dir contains scenario configuration files.
.Sample 'scenarios_dir' directory tree:
----
scenarios_dir/
|-- scenarioA.conf
'-- scenarioB.conf
----
[[scenario_conf]]
===== 'scenario.conf' files
Scenarios define further constraints to serve the resource requests of a
<<suite_conf,suite.conf>>, ie. to select specific resources from the general
resource pool specified in <<resource_conf,resources.conf>>.
If only one resource is specified in the scenario, then the resource allocator
assumes the restriction is to be applied to the first resource and that remaining
resources have no restrictions to be taken into consideration.
To apply restrictions only on the second resource, the first element can be left
empty, like:
----
resources:
bts:
- {}
- type: osmo-bts-sysmo
----
On the 'osmo_gsm_tester.py' command line and the
<<default_suites_conf,default_suites.conf>>, any number of such scenario
configurations can be combined in the form:
----
<suite_name>:<scenario>[+<scenario>[+...]]
----
e.g.
----
my_suite:sysmo+tch_f+amr
----
[[resources_conf]]
==== 'resources.conf'
//TODO: update this section
The 'resources.conf' file defines which hardware is connected to the main unit,
as well as which limited configuration items (like IP addresses or ARFCNs)
should be used.
A 'resources.conf' is validated by the <<schema_resources,resources schema>>.
That means it is structured as a list of items for each resource type, where
each item has one or more attributes -- for an example, see
<<resources_conf_example>>.
Side note: at first sight it might make sense to the reader to rather structure
e.g. the 'ip_address' or 'arfcn' configuration as +
'"arfcn: GSM-1800: [512, 514, ...]"', +
@ -262,25 +411,22 @@ that is repeated numerous times. No special notation for these cases is
available (yet).
[[default_suites]]
==== 'default-suites.conf' (optional)
The 'default-suites.conf' file contains a YAML list of 'suite:scenario+scenario+...'
combination strings as defined by the 'osmo-gsm-tester.py -s' commandline
option. If invoking the 'osmo-gsm-tester.py' without any suite definitions, the
'-s' arguments are taken from this file instead. Each of these suite + scenario
combinations is run in sequence.
A suite name must match the name of a directory in the
<<suites_dir,suites_dir/>> as defined by <<paths_conf,paths.conf>>.
A scenario name must match the name of a configuration file in the
<<scenarios_dir,scenarios_dir/>> as defined by <<paths_conf,paths.conf>>
(optionally without the '.conf' suffix).
.Sample 'default-suites.conf' file:
----
- sms:sysmo
- voice:sysmo+tch_f
@ -292,24 +438,28 @@ Example of a 'default-suites.conf' file:
- voice:trx+dyn_ts
----
==== 'defaults.conf' (optional)
In {app-name}, object instances requested by a test and created by the suite
usually relate to a specific allocated resource. That is not always the case,
and even when it is, the information stored in <<resources_conf,resources.conf>>
for that resource may not contain all the attributes which the object class
needs to manage the resource.
For this exact reason, the 'defaults.conf' file exist. It contains a set of
default attributes and values (in YAML format) that object classes can use to
fill in the missing gaps, or to provide values which can easily be changed or
overwritten by <<suite_conf,suite.conf>> or <<scenario_conf,scenario.conf>>
files through modifiers.
Each binary run by osmo-gsm-tester, e.g. 'osmo-nitb' or 'osmo-bts-sysmo',
typically has a configuration file template that is populated with values for a
trial run. Hence, a <<suite_conf,suite.conf>>, <<scenario_conf,scenario.conf>>
or a <<resources_conf,resources.conf>> providing a similar setting always has
precedence over the values given in a 'defaults.conf'.
Some of these values are provided by the 'resources.conf' from the allocated
resource(s), but not all values can be populated this way: some osmo-nitb
configuration values like the network name, encryption algorithm or timeslot
channel combinations are in fact not resources (only the nitb's interface
address is). These additional settings may be provided by the scenario
configurations, but in case the provided scenarios leave some values unset,
they are taken from this 'defaults.conf'.
.Sample 'defaults.conf' file:
----
nitb:
net:
@ -359,3 +509,53 @@ bsc_bts:
- phys_chan_config: TCH/F_TCH/H_PDCH
- phys_chan_config: TCH/F_TCH/H_PDCH
----
=== Example Setup
{app-name} comes with an example setup, which is the one used to run
Osmocom's own testing infrastructure. There are actually two different
available setups: a production one and an RnD one, the latter used to develop
{app-name} itself. These two setups share most of the configuration, the main
difference being the <<resources_conf,resources.conf>> file being used.
All {app-name} related configuration for that environment is publicly available in 'osmo-gsm-tester.git' itself:::
- <<paths_conf,paths.conf>>: Available under 'example/', with its paths already configured to take
the required bits from inside the git repository.
- <<suites_dir,suites_dir>>: Available under 'suites/'
- <<scenarios_dir,scenarios_dir>>: Available under 'example/scenarios/'
- <<resources_conf,resources.conf>>: Available under 'example/' as
'resources.conf.prod' for the production setup and as 'resources.conf.rnd' for the
RnD setup. One must use a symbolic link to make it available as 'resources.conf'.
//TODO: resources.conf file path should be modifiable through paths.conf!
==== Typical Invocations
Each invocation of osmo-gsm-tester deploys a set of pre-compiled binaries for
the Osmocom core network as well as for the Osmocom based BTS models. To create
such a set of binaries, see <<trials>>.
Examples for launching test trials:
- Run the default suites (see <<default_suites>>) on a given set of binaries:
----
osmo-gsm-tester.py path/to/my-trial
----
- Run an explicit choice of 'suite:scenario' combinations:
----
osmo-gsm-tester.py path/to/my-trial -s sms:sysmo -s sms:trx -s sms:nanobts
----
- Run one 'suite:scenario1+scenario2' combination, setting log level to 'debug'
and enabling logging of full python tracebacks, and also only run just the
'mo_mt_sms.py' test from the suite, e.g. to investigate a test failure:
----
osmo-gsm-tester.py path/to/my-trial -s sms:sysmo+foobar -l dbg -T -t mo_mt
----
A test script may also be run step-by-step in a python debugger, see
<<debugging>>.


@ -0,0 +1,6 @@
[[docker]]
== Docker Setup
Available in Osmocom's 'docker-playground.git' subdirectory 'osmo-gsm-tester/'.
//TODO: Explain more where to find, how to build, how to use.


@ -1,49 +1,11 @@
== {app-name} Installation
=== Trial Builder
The Trial Builder is the jenkins build slave (host) building all the sysroot
binary packages later used by {app-name} to run the tests. Its purpose is to
build the sysroots and provide them to {app-name}, for instance as jenkins job
artifacts which the {app-name} runner job can fetch.
[[jenkins_deps]]
==== Osmocom Build Dependencies
@ -56,11 +18,109 @@ aware of specific requirements for BTS hardware: for example, the
osmo-bts-sysmo build needs the sysmoBTS SDK installed on the build slave, which
should match the installed sysmoBTS firmware.
==== Add Build Jobs
There are various jenkins-build-* scripts in osmo-gsm-tester/contrib/, which
can be called as jenkins build jobs to build and bundle binaries as artifacts,
to be run on the osmo-gsm-tester main unit and/or BTS hardware.
Be aware of the dependencies, as hinted at in <<jenkins_deps>>.
While the various binaries could technically be built on the osmo-gsm-tester
main unit, it is recommended to use a separate build slave, to take load off
of the main unit.
Please note that nowadays we set up all the Osmocom jenkins jobs (including the
{app-name} ones) using 'jenkins-job-builder'. You can find all the
configurations in Osmocom's 'osmo-ci.git', files 'jobs/osmo-gsm-tester-*.yml'.
The explanation below on how to set up jobs manually is left as a reference for
other projects.
On your jenkins master, set up build jobs to call these scripts -- typically
one build job per script. Look in contrib/ and create one build job for each of
the BTS types you would like to test, as well as one for the 'build-osmo-nitb'.
These are generic steps to configure a jenkins build
job for each of these build scripts, by example of the
jenkins-build-osmo-nitb.sh script; all that differs to the other scripts is the
"osmo-nitb" part:
* 'Project name': "osmo-gsm-tester_build-osmo-nitb" +
(Replace 'osmo-nitb' according to which build script this is for)
* 'Discard old builds' +
Configure this to taste, for example:
** 'Max # of builds to keep': "20"
* 'Restrict where this project can be run': Choose a build slave label that
matches the main unit's architecture and distribution, typically a Debian
system, e.g.: "linux_amd64_debian8"
* 'Source Code Management':
** 'Git'
*** 'Repository URL': "git://git.osmocom.org/osmo-gsm-tester"
*** 'Branch Specifier': "*/master"
*** 'Additional Behaviors'
**** 'Check out to a sub-directory': "osmo-gsm-tester"
* 'Build Triggers' +
The decision on when to build is complex. Here are some examples:
** Once per day: +
'Build periodically': "H H * * *"
** For the Osmocom project, the purpose is to verify our software changes.
Hence we would like to test every time our code has changed:
*** We could add various git repositories to watch, and enable 'Poll SCM'.
*** On jenkins.osmocom.org, we have various jobs that build the master branches
of their respective git repositories when a new change was merged. Here, we
can thus trigger e.g. an osmo-nitb build for osmo-gsm-tester every time the
master build has run: +
'Build after other projects are built': "OpenBSC"
*** Note that most of the Osmocom projects also need to be re-tested when their
dependencies like libosmo* have changed. Triggering on all those changes
typically causes more jenkins runs than necessary: for example, it rebuilds
once per each dependency that has rebuilt due to one libosmocore change.
There is so far no trivial way known to avoid this. It is indeed safest to
rebuild more often.
* 'Build'
** 'Execute Shell'
+
----
#!/bin/sh
set -e -x
./osmo-gsm-tester/contrib/jenkins-build-osmo-nitb.sh
----
+
(Replace 'osmo-nitb' according to which build script this is for)
* 'Post-build Actions'
** 'Archive the artifacts': "*.tgz, *.md5" +
(This step is important to be able to use the built binaries in the run job
below.)
TIP: When you've created one build job, it is convenient to create further
build jobs by copying the first one and, e.g., simply replacing all "osmo-nitb"
with "osmo-bts-trx".
[[install_main_unit]]
=== Main Unit
The main unit is a general purpose computer that orchestrates the tests. It
runs the core network components, controls the modems and so on. This can be
anything from a dedicated production rack unit to your laptop at home.
This manual will assume that tests are run from a jenkins build slave, by a user
named 'jenkins' that belongs to group 'osmo-gsm-tester'. The user configuration
for manual test runs and/or a different user name is identical, simply replace
the user name or group.
Please note that the installation steps and dependencies needed will depend on
many factors, like your distribution, your specific setup, which hardware you
plan to support, etc.
This section aims at being the one place documenting the rationale behind
certain configurations being done one way or another. For an up-to-date,
step-by-step, detailed way to install and maintain the Osmocom {app-name}
setup, one will want to look at the <<ansible,ansible scripts section>>.
[[configure_jenkins_slave]]
=== Jenkins Build and Run Slave
==== Create 'jenkins' User
On the main unit, create a jenkins user:
@ -176,81 +236,6 @@ Configure the node as:
The build slave should be able to start now.
==== Add Run Job
This is the jenkins job that runs the tests on the GSM hardware:
* It sources the artifacts from jenkins' build jobs.
* It runs on the osmo-gsm-tester main unit.
A sample script to run {app-name} as a jenkins job can be found in
'osmo-gsm-tester.git', file 'contrib/jenkins-run.sh'.

Please note that nowadays we set up all Osmocom jenkins jobs (including the
{app-name} ones) using 'jenkins-job-builder'. You can find all the
configurations in Osmocom's 'osmo-ci.git', files 'jobs/osmo-gsm-tester-*.yml'.
The explanation below on how to set up jobs manually is kept as a reference for
other projects.
Here is the configuration for the run job:
* 'Project name': "osmo-gsm-tester_run"
The 'trial-N-bin.tgz' archives are produced by the 'jenkins-run.sh' script,
both for successful and failing runs.
==== Install osmo-gsm-tester
This assumes you have already created the jenkins user (see <<configure_jenkins_slave>>).
The dependencies needed depend on many factors: your distribution, your
specific setup, which hardware you plan to support, etc.
On a Debian/Ubuntu based system, these commands install the packages needed to
run the osmo-gsm-tester.py code, i.e. install these on your main unit:
----
apt-get install \
dbus \
tcpdump \
sqlite3 \
python3 \
python3-setuptools \
python3-yaml \
python3-mako \
python3-gi \
python3-numpy \
python3-wheel \
ofono \
patchelf \
sudo \
libcap2-bin \
python3-pip \
udhcpc \
iperf3 \
locales
pip3 install \
"git+https://github.com/podshumok/python-smpplib.git@master#egg=smpplib" \
pydbus \
pyusb \
pysispm
----
IMPORTANT: ofono may need to be installed from source to contain the most
recent fixes needed to operate your modems. This depends on the modem hardware
used and the tests run. Please see <<hardware_modems>>.
==== User Permissions
On the main unit, create a group for all users that should be allowed to use
A user added to a group needs to re-login for the group permissions to take
effect.
This group needs the following permissions:
===== Paths
Assuming that you are using the example config, prepare a system wide state
Put a DBus configuration file in place that allows the 'osmo-gsm-tester' group
to access the org.ofono DBus path:
----
# cat > /etc/dbus-1/system.d/osmo-gsm-tester.conf <<END
<!-- Additional rules for the osmo-gsm-tester to access org.ofono from user
land -->
END
----
(No restart of dbus nor ofono necessary.)
[[install_slave_unit]]
=== Slave Unit(s)
The slave units are the hosts on which {app-name} runs processes. A slave unit
may be the <<install_main_unit,Main Unit>> itself, in which case processes are
run locally, or a remote host, where processes are usually run through SSH.

This guide assumes that slave unit(s) use the same configuration as the Main
Unit, that is, {app-name} runs under the 'jenkins' user, which is a member of
the 'osmo-gsm-tester' user group. In order to set this up, follow the
instructions in the <<install_main_unit,Main Unit>> section above. Keep in mind
that the 'jenkins' user on the Main Unit needs to be able to log in through SSH
as the 'jenkins' user on the slave unit in order to run processes there. No
direct access from the Jenkins master node is required here.
[[install_capture_packets]]
===== Capture Packets
----
sysctl -w kernel.core_pattern=core
----
TIP: Files required to be installed under '/etc/security/limits.d/' can be found
under 'osmo-gsm-tester.git/utils/limits.d/', so one can simply cp them from
there.
===== Allow Realtime Priority
Certain binaries should be run with real-time priority, like 'osmo-bts-trx'.
----
echo "@osmo-gsm-tester - rtprio 99" > /etc/security/limits.d/osmo-gsm-tester_all
----
Re-login the user to make these changes take effect.
===== Allow capabilities: 'CAP_NET_RAW', 'CAP_NET_ADMIN', 'CAP_SYS_ADMIN'

Certain binaries require 'CAP_NET_RAW' to be set, like 'osmo-bts-octphy', as it
uses an 'AF_PACKET' socket. Similarly, others (like osmo-ggsn) require
'CAP_NET_ADMIN' to be able to create tun devices, and so on.
To be able to set the required capabilities without being root, osmo-gsm-tester
uses sudo to gain the permission to set them.

This is the script that osmo-gsm-tester expects on the host running the process:
----
cat > /usr/local/bin/osmo-gsm-tester_setcap_net_raw.sh <<EOF
EOF
chmod +x /usr/local/bin/osmo-gsm-tester_setcap_net_raw.sh
----
Now, again on the same host, we need to provide sudo access to this script for
osmo-gsm-tester:
----
chmod 0440 /etc/sudoers.d/osmo-gsm-tester_setcap_net_raw
----
The script file name 'osmo-gsm-tester_setcap_net_raw.sh' is important, as
osmo-gsm-tester expects to find a script with this name in '$PATH' at run time.
TIP: Files required to be installed under '/etc/sudoers.d/' can be found
under 'osmo-gsm-tester.git/utils/sudoers.d/', so one can simply cp them from
there.
TIP: Files required to be installed under '/usr/local/bin/' can be found
under 'osmo-gsm-tester.git/utils/bin/', so one can simply cp them from
there.
[[user_config_uhd]]
===== UHD
Grant permission to use the UHD driver to run USRP devices for osmo-bts-trx, by
adding the jenkins user to the 'usrp' group:
----
gpasswd -a jenkins usrp
----
To run osmo-bts-trx with a USRP attached, you may need to install a UHD driver.
Please refer to http://osmocom.org/projects/osmotrx/wiki/OsmoTRX#UHD for
details; the following is an example for the B200 family USRP devices:
----
apt-get install libuhd-dev uhd-host
/usr/lib/uhd/utils/uhd_images_downloader.py
----
==== Log Rotation
NOTE: The configuration will be looked up in various places, see
<<config_paths>>.
== Hardware Choice and Configuration
=== SysmoBTS
To use the SysmoBTS in the osmo-gsm-tester, the following systemd services must
be disabled:
----
systemctl mask osmo-nitb osmo-bts-sysmo osmo-pcu sysmobts-mgr
----
This stops the stock setup keeping the BTS in operation and hence allows the
osmo-gsm-tester to install and launch its own versions of the SysmoBTS
software.
==== IP Address
To ensure that the SysmoBTS is always reachable at a fixed known IP address,
configure eth0 to use a static IP address:
Adjust '/etc/network/interfaces' and replace the line
----
iface eth0 inet dhcp
----
with
----
iface eth0 inet static
address 10.42.42.114
netmask 255.255.255.0
gateway 10.42.42.1
----
You may set the name server in '/etc/resolv.conf' (most likely to the IP of
the gateway), but this is not really needed by the osmo-gsm-tester.
==== Allow Core Files
In case a binary run for the test crashes, a core file of the crash should be
written. This requires a limits rule. Append a line to /etc/limits like:
----
ssh root@10.42.42.114
echo "* C16384" >> /etc/limits
----
==== Reboot
Reboot the BTS and make sure that the IP address for eth0 is now indeed
10.42.42.114, and that no osmo* programs are running.
----
ip a
ps w | grep osmo
----
==== SSH Access
Make sure that the jenkins user on the main unit is able to log in on the
sysmoBTS, possibly erasing outdated host keys after a new rootfs was loaded.

On the main unit, for example do:
----
su - jenkins
ssh root@10.42.42.114
----
Fix any problems until you get a login on the sysmoBTS.
[[hardware_modems]]
=== Modems
TODO: describe modem choices and how to run ofono
[[hardware_trx]]
=== osmo-bts-trx
TODO: describe B200 family

View File

@ -1,31 +1,56 @@
== Introduction
{app-name} is software to run automated tests on real GSM hardware,
foremost to verify that ongoing Osmocom software development continues to work
with various BTS models, while being flexibly configurable and extendable to
work for other technologies, setups and projects. It can be used for instance to
test a 3G or 4G network.
{app-name} (python3 process) runs on a host (general purpose computer) named
the 'main unit'. It may optionally be connected to any number of 'slave units',
which {app-name} may use to orchestrate processes remotely, usually through SSH.
Hardware devices such as BTSs, SDRs, modems, smart plugs, etc. are then
connected to either the main unit or the slave units via IP, raw ethernet, USB
or other means.
The modems and BTS instances' RF transceivers are typically wired directly to
each other via RF distribution chambers to bypass the air medium and avoid
disturbing real production cellular networks. Furthermore, the setup may include
adjustable RF attenuators to model various distances between modems and base
stations.
Each of these devices, each having a different physical setup and
configuration, supported features, attributes, etc., is referred to in
{app-name} terminology as a _resource_. Each _resource_ is an instance of a
_resource class_. A _resource class_ may be, for instance, a _modem_ or a
_bts_. For example, an {app-name} setup may have 2 _modem_ instances and 1
_bts_ instance. Each of these _resources_ is listed and described in
configuration files passed to {app-name}, which maintains a pool of _resources_
(available, in use, etc.).
{app-name} typically receives from a jenkins build service the software or
firmware binary packages to be used and tested. {app-name} then launches a
specific set of testsuites which, in turn, each contain a set of python test
scripts. Each test uses the _testenv_ API provided by {app-name} to configure,
launch and manage the different nodes and processes from the provided binary
packages, to form a complete ad-hoc GSM network.
Testsuites themselves contain configuration files listing how many resources
they require to run their tests. They also provide means to _filter_ which kind
of _resources_ will be needed based on their attributes. This allows, for
instance, asking {app-name} to provide a _modem_ supporting GPRS, or to provide
a specific model of _bts_, such as a nanoBTS. Testsuites also allow receiving
_modifiers_, which overwrite some of the default values that {app-name} itself
or the different _resources_ use.
Moreover, one may want to run the same testsuite several times, each time with
a different set of _resources_. For instance, one may want to run a testsuite
with a sysmoBTS and later with a nanoBTS. This is supported by leaving the
testsuite configuration generic enough and then passing _scenarios_ to it,
which apply extra _filters_ or _modifiers_. Scenarios can also be combined to
filter further or to apply further modifications.
.Sample osmo-gsm-tester node 2G setup
[graphviz]
----
digraph G {
subgraph cluster_gsm_hw {
label = "GSM Hardware";
style=dotted
modem0 [shape=box label="Modem (Quectel EC20)"]
modem1 [shape=box label="Modem (SierraWireless MC7455)"]
osmo_bts_sysmo [label="sysmocom sysmoBTS\nrunning osmo-bts-sysmo" shape=box]
B200 [label="Ettus B200" shape=box]
sysmoCell5K [label="sysmocom sysmoCell5000" shape=box]
{modem0 modem1 osmo_bts_sysmo B200 octphy nanoBTS sysmoCell5K}->rf_distribution [dir=both arrowhead="curve" arrowtail="curve"]
}
subgraph cluster_slave_unit {
label = "Slave Unit"
osmo_trx [label="osmo-trx"]
}
subgraph cluster_main_unit {
label = "Main Unit"
osmo_gsm_tester [label="Osmo-GSM-Tester\ntest suites\n& scenarios"]
subgraph {
rank=same
ofono [label="ofono daemon"]
osmo_bts_trx [label="osmo-bts-trx"]
osmo_bts_octphy [label="osmo-bts-octphy"]
OsmoNITB [label="BSC + Core Network\n(Osmo{NITB,MSC,BSC,HLR,...})"]
  }
}
jenkins->osmo_gsm_tester [label="trial\n(binaries)"]
osmo_gsm_tester->jenkins [label="results"]
ofono->{modem0 modem1} [label="QMI/USB"]
osmo_gsm_tester->{OsmoNITB osmo_bts_trx osmo_bts_octphy}
osmo_gsm_tester->{osmo_trx, osmo_bts_sysmo} [taillabel="SSH"]
osmo_gsm_tester->ofono [taillabel="DBus"]
osmo_trx->B200 [label="UHD/USB"]
osmo_bts_trx->{osmo_trx sysmoCell5K} [dir=both label="TRXC+TRXD/UDP"]
osmo_bts_octphy->octphy [label="raw eth"]
{osmo_bts_sysmo nanoBTS}->OsmoNITB [label="IP"]
{B200 octphy}->OsmoNITB [label="eth" style=invis]
{osmo_bts_trx osmo_bts_octphy}->OsmoNITB
}
----
=== Typical Test Script
A typical single test script (part of a suite) may look like this:
----
#!/usr/bin/env python3
from osmo_gsm_tester.testenv import *
hlr = suite.hlr()
bts = suite.bts()
mgcpgw = suite.mgcpgw(bts_ip=bts.remote_addr())
msc = suite.msc(hlr, mgcpgw)
bsc = suite.bsc(msc)
stp = suite.stp()
ms_mo = suite.modem()
ms_mt = suite.modem()
hlr.start()
stp.start()
msc.start()
mgcpgw.start()
bsc.bts_add(bts)
bsc.start()
bts.start()
hlr.subscriber_add(ms_mo)
hlr.subscriber_add(ms_mt)
ms_mo.connect(msc.mcc_mnc())
ms_mt.connect(msc.mcc_mnc())
ms_mo.log_info()
ms_mt.log_info()
print('waiting for modems to attach...')
wait(ms_mo.is_connected, msc.mcc_mnc())
wait(ms_mt.is_connected, msc.mcc_mnc())
wait(msc.subscriber_attached, ms_mo, ms_mt)
sms = ms_mo.sms_send(ms_mt)
wait(ms_mt.sms_was_received, sms)
----
=== Resource Resolution
- A global configuration 'resources.conf' defines which hardware is connected to the
osmo-gsm-tester main unit.
- Each suite contains a number of test scripts. The amount of resources a test
may use is defined by the test suite's 'suite.conf'.
- Which specific modems, BTS models, NITB IP addresses etc. are made available
to a test run is typically determined by 'suite.conf' and a combination of scenario
configurations -- or picked automatically if not.
[[resources_conf_example]]
=== Typical 'resources.conf'
A global configuration of hardware may look like below; for details, see
<<resources_conf>>.
----
ip_address:
- addr: 10.42.42.2
- addr: 10.42.42.3
- addr: 10.42.42.4
- addr: 10.42.42.5
- addr: 10.42.42.6
bts:
- label: sysmoBTS 1002
  type: osmo-bts-sysmo
  ipa_unit_id: 1
  addr: 10.42.42.114
  band: GSM-1800
  ciphers:
  - a5_0
  - a5_1
  - a5_3
- label: Ettus B200
  type: osmo-bts-trx
  ipa_unit_id: 6
  addr: 10.42.42.50
  band: GSM-1800
  launch_trx: true
  ciphers:
  - a5_0
  - a5_1
- label: sysmoCell 5000
  type: osmo-bts-trx
  ipa_unit_id: 7
  addr: 10.42.42.51
  band: GSM-1800
  trx_remote_ip: 10.42.42.112
  ciphers:
  - a5_0
  - a5_1
- label: OCTBTS 3500
  type: osmo-bts-octphy
  ipa_unit_id: 8
  addr: 10.42.42.52
  band: GSM-1800
  trx_list:
  - hw_addr: 00:0c:90:2e:80:1e
    net_device: eth1
  - hw_addr: 00:0c:90:2e:87:52
    net_device: eth1
arfcn:
- arfcn: 512
  band: GSM-1800
- arfcn: 514
  band: GSM-1800
- arfcn: 516
  band: GSM-1800
- arfcn: 546
  band: GSM-1900
- arfcn: 548
  band: GSM-1900
modem:
- label: sierra_1
  path: '/sierra_1'
  imsi: '901700000009031'
  ki: '80A37E6FDEA931EAC92FFA5F671EFEAD'
  auth_algo: 'xor'
  ciphers:
  - a5_0
  - a5_1
  features:
  - 'sms'
  - 'voice'
- label: gobi_0
  path: '/gobi_0'
  imsi: '901700000009030'
  ki: 'BB70807226393CDBAC8DD3439FF54252'
  auth_algo: 'xor'
  ciphers:
  - a5_0
  - a5_1
  features:
  - 'sms'
----
=== Typical 'suites/*/suite.conf'
The configuration that reserves a number of resources for a test suite may look
like this:
----
resources:
  ip_address:
  - times: 1
  bts:
  - times: 1
  modem:
  - times: 2
    features:
    - sms
----
It may also request e.g. specific BTS models, but this is typically left to
scenario configurations.
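To illustrate the semantics, a requirement like the 'modem: times: 2, features:
[sms]' entry above can be thought of as picking matching entries from the
resource pool. The following is a simplified sketch under assumed semantics,
not the actual {app-name} resolver code:

```python
# Simplified illustration (assumption: the real resolver differs) of how a
# suite requirement picks resources from the pool by matching attributes.
def pick(pool, times, features=()):
    chosen = []
    for res in pool:
        # a resource qualifies if it offers every required feature
        if all(f in res.get('features', []) for f in features):
            chosen.append(res)
        if len(chosen) == times:
            return chosen
    raise RuntimeError('not enough matching resources')

pool = [
    {'label': 'sierra_1', 'features': ['sms', 'voice']},
    {'label': 'gobi_0', 'features': ['sms']},
    {'label': 'gobi_2', 'features': []},
]
print([m['label'] for m in pick(pool, times=2, features=['sms'])])
# → ['sierra_1', 'gobi_0']
```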
=== Typical 'scenarios/*.conf'
For a suite as above run as-is, any available resources are picked. This may be
combined with any number of scenario definitions to constrain which specific
resources should be used, e.g.:
----
resources:
  bts:
  - type: osmo-bts-sysmo
----
Which 'ip_address' or 'modem' is used in particular doesn't really matter, so
it can be left up to the osmo-gsm-tester to pick these automatically.
Any number of such scenario configurations can be combined in the form
'<suite_name>:<scenario>+<scenario>+...', e.g. 'my_suite:sysmo+tch_f+amr'.
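The shape of such a combination string can be sketched as follows (illustrative
only; the actual parsing code in {app-name} may differ):

```python
# Illustrative sketch (assumption: not the actual parser) of splitting a
# '<suite_name>:<scenario>+<scenario>+...' combination string into its parts.
def parse_combination(combination):
    suite, sep, rest = combination.partition(':')
    scenarios = rest.split('+') if sep and rest else []
    return suite, scenarios

print(parse_combination('my_suite:sysmo+tch_f+amr'))
# → ('my_suite', ['sysmo', 'tch_f', 'amr'])
```

A suite name without any ':' simply yields an empty scenario list.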
=== Typical Invocations
Each invocation of osmo-gsm-tester deploys a set of pre-compiled binaries for
the Osmocom core network as well as for the Osmocom based BTS models. To create
such a set of binaries, see <<trials>>.
Examples for launching test trials:
- Run the default suites (see <<default_suites>>) on a given set of binaries:
----
osmo-gsm-tester.py path/to/my-trial
----
- Run an explicit choice of 'suite:scenario' combinations:
----
osmo-gsm-tester.py path/to/my-trial -s sms:sysmo -s sms:trx -s sms:nanobts
----
- Run one 'suite:scenario' combination, setting log level to 'debug' and
enabling logging of full python tracebacks, and also only run just the
'mo_mt_sms.py' test from the suite, e.g. to investigate a test failure:
----
osmo-gsm-tester.py path/to/my-trial -s sms:sysmo -l dbg -T -t mo_mt
----
A test script may also be run step-by-step in a python debugger, see
<<debugging>>.
== Resource Resolution
- A global configuration <<resources_conf,resources.conf>> defines which hardware is plugged into the
{app-name} setup, be it the main unit or any slave unit. This list becomes the
'resource pool'.
- Each suite contains a number of test scripts. The amount of resources a test
may use is defined by the test suite's <<suite_conf,suite.conf>>.
- Which specific modems, BTS models, NITB IP addresses etc. are made available
to a test run is typically determined by <<suite_conf,suite.conf>> and a combination of <<scenario_conf,scenario
configurations>> -- or picked automatically if not.
.Example of how to select resources and configurations: scenarios may pick specific resources (here BTS and ARFCN), remaining requirements are picked as available (here two modems and a NITB interface)
[graphviz]
----
digraph G {
rankdir=TB;
suite_scenarios [label="Suite+Scenarios selection\nsms:sysmo+band1800+mod-bts0-chanallocdescend"]
subgraph {
rank=same;
suite
scenarios
defaults_conf [label="defaults.conf:\nbsc: net: encryption: a5_0"]
}
subgraph cluster_scenarios {
label = "Scenarios";
u_sysmoBTS [label="Scenario: sysmo\nresources: bts: type: osmo-bts-sysmo"]
u_trx [label="Scenario: trx\nresources: bts: type: osmo-bts-trx"]
u_arfcn [label="Scenario: band1800\nresources: arfcn: band: GSM-1800"]
u_chanallocdesc [label="Scenario: mod-bts0-chanallocdescend\nmodifiers: bts: channel_allocator: descending"]
}
subgraph cluster_suite {
label = "Suite: sms";
requires [label="Requirements (suite.conf):\nmodem: times: 2\nbts\nip_address\narfcn"]
subgraph cluster_tests {
label = "Test mo_mt_sms.py";
obj_nitb [label="object NITB\n(process using 10.42.42.2)"]
bts0 [label="object bts[0]"]
modem0 [label="object modem[0]"]
modem1 [label="object modem[1]"]
}
}
subgraph cluster_resources {
label = "Available Resources (not already allocated by other Osmo-GSM-Tester instance)";
rankdir=TB;
nitb_addrA [label="NITB interface addr\n10.42.42.1"]
nitb_addrB [label="NITB interface addr\n10.42.42.2"]
ModemA
ModemB
ModemC
sysmoBTS [label="osmo-bts-sysmo"]
osmo_bts_trx [label="osmo-bts-trx"]
arfcnA [label="arfcn: 512\nband: GSM-1800"]
arfcnB [label="arfcn: 540\nband: GSM-1900"]
arfcnA->arfcnB [style=invis]
nitb_addrA->nitb_addrB [style=invis]
ModemA -> ModemB -> ModemC [style=invis]
sysmoBTS -> osmo_bts_trx [style=invis]
}
suite_scenarios -> {suite scenarios}
scenarios -> { u_arfcn u_sysmoBTS u_chanallocdesc }
suite -> requires
requires -> ModemA
requires -> ModemB
requires -> sysmoBTS
requires -> arfcnA
requires -> nitb_addrA
{ u_sysmoBTS u_arfcn } -> requires [label="influences\nresource\nselection"]
u_chanallocdesc -> bts0 [label="influences\nbts[0]\nbehavior"]
defaults_conf -> obj_nitb [label="provides default values"]
}
----
=== Resource Reservation for Concurrent Trials
While a test suite runs, the used resources are noted in a global state
directory in a reserved-resources file. This way, any number of trials may be
run consecutively without resource conflicts. Any test trial will only use
resources that are currently not reserved by any other test suite. The
reservation state is human readable.
The global state directory is protected by a file lock to allow access by
separate processes.
Also, the binaries from a trial are never installed system-wide, but are run
with a specific 'LD_LIBRARY_PATH' pointing at the <<trials,trial's inst>>, so that
several trials can run consecutively without conflicting binary versions. For
some specific binaries which require extra permissions (such as osmo-bts-octphy
requiring 'CAP_NET_RAW'), 'patchelf' program is used to modify the binary
'RPATH' field instead because the OS dynamic linker skips 'LD_LIBRARY_PATH' for
binaries with special permissions.
Once a test suite run is complete, all its reserved resources are torn down (if
the test scripts have not done so already), and the reservations are released
automatically.
If required resources are unavailable, the test trial fails. For consecutive
test trials, a test run needs to either wait for resources to become available,
or test suites need to be scheduled to make sense. (*<- TODO*)
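The file-lock idea described above can be sketched with an advisory lock
(hypothetical code; file names and the API differ from the actual
implementation):

```python
import fcntl
import os

# Hypothetical sketch: serialize access to a global state directory between
# concurrent osmo-gsm-tester processes using an advisory file lock.
class StateDirLock:
    def __init__(self, state_dir):
        self.path = os.path.join(state_dir, 'lock')
        self.f = None

    def __enter__(self):
        self.f = open(self.path, 'w')
        fcntl.flock(self.f, fcntl.LOCK_EX)  # blocks until the lock is free
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.f, fcntl.LOCK_UN)
        self.f.close()
```

While one process holds the lock, any other process entering the same
'StateDirLock' simply blocks until the reservation file updates are done.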
[[trials]]
== Trial: Binaries to be Tested
A trial is a set of pre-built sysroot archives to be tested. They are typically built
by jenkins using the build scripts found in osmo-gsm-tester's source in the
'contrib/' dir, see <<install_add_jenkins_slave>>.
A trial comes in the form of a directory containing a number of '<inst-name>.*tgz' tar
archives (containing different sysroots) as well as a 'checksums.md5' file to
verify the tar archives' integrity.
.Example of a "trial" containing binaries built by a jenkins job
[graphviz]
----
digraph G {
subgraph cluster_trial {
label = "Trial (binaries)"
sysmo [label="osmo-bts-sysmo.build-23.tgz\n(osmo-bts-sysmo\n+ deps\ncompiled for sysmoBTS)"]
trx [label="osmo-bts.build-5.tgz\n(osmo-bts-octphy + osmo-bts-trx\n+ deps\ncompiled for main unit)"]
nitb [label="osmo-nitb.build-42.tgz\n(osmo-nitb\n+ deps\ncompiled for main unit)"]
checksums [label="checksums.md5"]
checksums -> {sysmo trx nitb}
}
}
----
When the osmo-gsm-tester is invoked to run on such a trial directory, it will
create a sub directory named 'inst' and unpack the tar archives into it.
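That unpacking step, including the 'checksums.md5' verification mentioned
above, can be roughly sketched as follows (illustrative code only, not the
actual implementation):

```python
import hashlib
import os
import tarfile

# Hypothetical sketch: verify each archive against 'checksums.md5', then
# unpack it into 'inst/<inst-name>' (layout simplified for illustration).
def verify_and_unpack(trial_dir):
    with open(os.path.join(trial_dir, 'checksums.md5')) as f:
        for line in f:
            md5, name = line.split()
            path = os.path.join(trial_dir, name)
            with open(path, 'rb') as tgz:
                if hashlib.md5(tgz.read()).hexdigest() != md5:
                    raise RuntimeError('checksum mismatch: %s' % name)
            inst_name = name.split('.')[0]
            with tarfile.open(path) as tar:
                tar.extractall(os.path.join(trial_dir, 'inst', inst_name))
```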
The script in 'contrib/jenkins-run.sh' takes care of related tasks such as
* generating md5 sums for the various tar.gz containing software builds to be tested,
* cleaning up after the build,
* saving extra logs such as journalctl output from ofonod,
* generating a final .tar.gz file with all the logs and reports to store as jenkins archives.
{app-name} tests create objects to manage the allocated resources during test
lifetime. These objects, in turn, usually run and manage processes started from
the trial's sysroot binaries. {app-name} provides APIs for those object classes
to discover, unpack and run those binaries. An object class simply needs to
request the name of the sysroot it wants to use (for instance 'osmo-bsc'), and
{app-name} will take care of preparing everything and providing the sysroot
path to it. It is the duty of the resource class to copy the sysroot over to
the destination if the intention is to run the binaries remotely on another
host.
When seeking a sysroot of a given name '<inst-name>' in the 'inst/' directory,
{app-name} will look for 'tgz' files starting with the pattern '<inst-name>.'
(up to the first dot). That means suffixes are available for the {app-name}
user to identify the content, for instance an incrementing version counter or a
commit hash. Hence, these example files are all considered valid and will be
selected by {app-name} for 'osmo-bsc': 'osmo-bsc.tgz', 'osmo-bsc.build-23.tgz',
'osmo-bsc.5f3e0dd2.tgz', 'osmo-bsc.armv7.build-2.tgz'. If either none or more
than one valid file is found matching the pattern, an exception will be thrown.
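The matching rule above can be sketched like this (illustrative only, not the
actual implementation):

```python
import os

# Illustrative sketch of the '<inst-name>.' prefix rule described above:
# exactly one 'tgz' whose name starts with '<inst-name>.' must exist.
def find_inst_tgz(inst_dir, inst_name):
    matches = [f for f in os.listdir(inst_dir)
               if f.startswith(inst_name + '.') and f.endswith('tgz')]
    if len(matches) != 1:
        raise RuntimeError('expected exactly one archive for %r, found %r'
                           % (inst_name, matches))
    return matches[0]
```

Note how the trailing dot keeps e.g. 'osmo-bsc-nat.tgz' from being mistaken
for an 'osmo-bsc' sysroot.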
== Troubleshooting
=== Format: YAML, and its Drawbacks
The general configuration format used is YAML. The stock python YAML parser
has several drawbacks: it allows too many complex constructs and alternative
ways of formatting a configuration. However, at the time of writing, YAML seems
to be the only widely used configuration format that offers both a simple,
human readable formatting and nested structuring. It is recommended to use only
the exact YAML subset seen in this manual, in case osmo-gsm-tester should move
to a less bloated parser in the future.
Careful: if a configuration item consists of digits and starts with a zero, you
need to quote it, or it may be interpreted as an octal notation integer! Please
avoid using the octal notation on purpose, it is not provided intentionally.
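
For instance, loading the following document with PyYAML (the stock Python
parser mentioned above) shows the difference between the unquoted and the
quoted form of a leading-zero value:

[source,python]
----
import yaml  # PyYAML, the stock Python YAML parser

doc = """
unquoted: 0123
quoted: '0123'
"""

parsed = yaml.safe_load(doc)
print(parsed['unquoted'])  # 83: silently interpreted as an octal integer
print(parsed['quoted'])    # '0123': stays the literal string
----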

View File

@ -21,10 +21,21 @@
<jobtitle>Senior Developer</jobtitle>
</affiliation>
</author>
<author>
<firstname>Pau</firstname>
<surname>Espin Pedrol</surname>
<email>pespin@sysmocom.de</email>
<authorinitials>PE</authorinitials>
<affiliation>
<shortaffil>sysmocom</shortaffil>
<orgname>sysmocom - s.f.m.c. GmbH</orgname>
<jobtitle>Software Developer</jobtitle>
</affiliation>
</author>
</authorgroup>
<copyright>
<year>2017</year>
<year>2017-2020</year>
<holder>sysmocom - s.f.m.c. GmbH</holder>
</copyright>

View File

@ -1,20 +1,33 @@
Osmo-GSM-Tester Manual
======================
Neels Hofmeyr <nhofmeyr@sysmocom.de>
:app-name: Osmo-GSM-Tester
{app-name} Manual
=================
Neels Hofmeyr <nhofmeyr@sysmocom.de>, Pau Espin Pedrol <pespin@sysmocom.de>
== WARNING: Work in Progress
*NOTE: The osmo-gsm-tester is still in pre-alpha stage: some parts are still
incomplete, and details will still change and move around.*
*NOTE: {app-name} is still under heavy development: some parts are still
incomplete, and details can still change and move around as new features are
added and improvements made.*
include::{srcdir}/chapters/intro.adoc[]
include::{srcdir}/chapters/install.adoc[]
include::{srcdir}/chapters/trial.adoc[]
include::{srcdir}/chapters/config.adoc[]
include::{srcdir}/chapters/trial.adoc[]
include::{srcdir}/chapters/resource_pool.adoc[]
include::{srcdir}/chapters/test_api.adoc[]
include::{srcdir}/chapters/install.adoc[]
include::{srcdir}/chapters/install_device.adoc[]
include::{srcdir}/chapters/ansible.adoc[]
include::{srcdir}/chapters/docker.adoc[]
include::{srcdir}/chapters/debugging.adoc[]
include::{srcdir}/chapters/troubleshooting.adoc[]