refactor: fix error handling; fix log.Origin; only one trial

A bit of refactoring to fix logging and error reporting, and simplify the code.
This transmogrifies some of the things committed in
0ffb414406 "Add JUnit XML reports; refactor test
reporting", which did not fully match the code structuring ideas used in
osmo-gsm-tester. It also solves some problems that have been present since the
start of the code base.

Though this is a bit of a code bomb, it would take a lot of time to separate
this into smaller bits: these changes are closely related and resulted
incrementally from testing error handling and logging details. I hope it's ok.

Things changed / problems fixed:

Allow only a single trial to be run per cmdline invocation: unbloat trial and
suite invocation in osmo-gsm-tester.py.

There is a SuiteDefinition, intended to be immutable, and a mutable SuiteRun.
SuiteDefinition had a list of tests, which was modified by the SuiteRun to
record test results. Instead, have only the test basenames in the
SuiteDefinition and create a new set of Test() instances for each SuiteRun, to
ensure that no state leaks between separate suite runs.
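The separation can be sketched roughly like this (a minimal illustration of the idea, not the commit's exact classes):

```python
class SuiteDefinition:
    '''Immutable suite description: holds only the test script basenames.'''
    def __init__(self, test_basenames):
        self.test_basenames = tuple(test_basenames)

class Test:
    '''Mutable per-run test state.'''
    def __init__(self, suite_run, basename):
        self.suite_run = suite_run
        self.basename = basename
        self.status = 'unknown'

class SuiteRun:
    '''Creates a fresh set of Test() instances per run, so no result
    state can leak between runs sharing one SuiteDefinition.'''
    def __init__(self, definition):
        self.definition = definition
        self.tests = [Test(self, b) for b in definition.test_basenames]
```

A second SuiteRun built from the same SuiteDefinition starts with clean Test instances, so an earlier run's results can no longer be overwritten.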

State leaking across runs can be seen in
http://jenkins.osmocom.org/jenkins/view/osmo-gsm-tester/job/osmo-gsm-tester_run/453/
where an earlier sms test for sysmo succeeds, but its state gets overwritten by
the later sms test for trx, which fails. The end result is that both tests are
reported as failed, although the first one actually passed.

Fix a problem with Origin: log.Origin could be __enter__ed more than once,
silently skipping the second entry. The problem is that we would still __exit__
twice or more, popping the Origin off the stack even though it should remain.
We could count __enter__ recurrences, but instead, completely disallow entering
a second time.
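The single-entry rule can be sketched as a minimal context manager (illustrative only; the real log.Origin also manages a stack of parent origins):

```python
class Origin:
    '''Refuses a second __enter__: a nested entry would later be matched
    by a second __exit__, popping this Origin off the stack while it
    should still remain there.'''
    def __init__(self):
        self._entered = False

    def __enter__(self):
        assert not self._entered, 'Origin entered a second time'
        self._entered = True
        return self

    def __exit__(self, *exc_info):
        self._entered = False
        return False
```

Entering the same instance from a nested 'with' now raises immediately instead of corrupting the stack later.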

A code path should have one 'with' statement per object, at pivotal points like
run_suites or run_tests. Individual utility functions should not do 'with' on a
central object. The structure needed is, in pseudo code:

  try:
    with trial:
      try:
        with suite_run:
          try:
            with test:
              test_actions()

The 'with' needs to be inside the 'try', so that the exception can be handled
in __exit__ before it reaches the exception logging.
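A runnable sketch of that ordering (names are illustrative): the exception reaches __exit__ first, and only afterwards the enclosing try's handler.

```python
class Scope:
    '''The 'with' sits inside the 'try': __exit__ sees the exception
    first and can do its logging before the outer except runs.'''
    def __init__(self, name):
        self.name = name
        self.events = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            self.events.append('handled %s in __exit__' % exc_type.__name__)
        return False  # let the exception continue to the outer try

scope = Scope('test')
try:
    with scope:
        raise ValueError('bork')
except ValueError:
    scope.events.append('outer except ran last')
```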

To clarify this, move suite exception handling from Trial into
SuiteRun.run_tests(), analogous to how test exceptions are caught in
Test.run(). The 'with self' statements in Test.run() and SuiteRun.run_tests()
are well placed, because these are pivotal points in the main code path.

Log output: clearly separate the logging of distinct suites and test scripts by
adding more large_separator() calls at the start of each test, and place the
separator calls at more logical points. Add separator size and spacing args.

Log output: print tracebacks only once, for the test script where they happen.

Have less state that duplicates other state: drop SuiteRun.test_failed_ctr and
suite.test_skipped_ctr, instead add SuiteRun.count_test_results().

For test failure reporting, store the traceback text in a separate member var.

In the text report, apply the above changes and unclutter to achieve a brief
and easy to read result overview: print fewer filler characters, drop the start
times and drop the tracebacks; both can still be found in the individual test
logs.
Because the tracebacks are no longer in the text report, the suite_test.py can
just print the reports and expect that output instead of asserting individual
contents.

In the text report, print durations with a precision of 0.1 seconds.

Add origin information and traceback text to the JUnit XML result to give more
context when browsing the result XML. For an 'AssertionError', add the source
line where the assertion hit.

Drop the explicit Failure exception. We don't need one specific exception to
mark a failure, instead any arbitrary exception is treated as a failure. Use
the exception's class name as fail_type.
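Classifying an arbitrary exception then looks roughly like this (classify_failure is an illustrative name, not the commit's API):

```python
import traceback

def classify_failure(exc):
    '''Any exception counts as a failure; its class name is the fail_type.'''
    fail_type = type(exc).__name__
    fail_message = str(exc)
    fail_tb = traceback.format_exc()  # must be called inside the except block
    return fail_type, fail_message, fail_tb

try:
    raise RuntimeError('bork')
except Exception as e:
    fail_type, fail_message, fail_tb = classify_failure(e)
```

No dedicated Failure class is needed: the type name and message carry the same information, and the traceback is kept separately for the XML report.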

Though my original idea was to use raising exceptions as the only way to cause
a test failure, I'm keeping the set_fail() function as an alternative way,
because it allows test specific cleanup and may come in handy later. To have
both ways integrate seamlessly, shift some result setting into 'finally'
clauses and make sure the higher levels (suite, trial) count the contained
items' statuses.

Minor tweak: write the 'pass' and 'skip' reports in lower case so that the
'FAIL' stands out.

Minor tweak: pass the program's exit code further outward, so that the exit(1)
call does not cause a SystemExit exception to be logged.
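The shape of that tweak, sketched with illustrative names (the real main() runs the trial and derives the code from its status):

```python
def main():
    '''Stand-in for the real main(): returns 0 on PASS, nonzero on FAIL.'''
    return 1

def run():
    rc = 2  # pessimistic default in case main() raises before returning
    try:
        rc = main()
    except Exception:
        # log the exception here, then terminate with it anyway
        raise
    return rc

# only the very outermost level calls exit(run()), so no SystemExit
# passes through the exception logging
```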

The aims of this patch are:
- Logs are readable so that it is clear which logging belongs to which test and
  suite.
- The logging origins are correct (previously, parent origins could go
  missing).
- A single test error does not cause following tests or suites to be skipped.
- An exception "above" Exception, i.e. SystemExit and the like, *does*
  immediately abort all tests and suites, and the results for tests that were
  not run are reported as "unknown" (rather than skipped on purpose):
  - Raising a SystemExit aborts all.
  - Hitting ctrl-c aborts all.
- The resulting summary in the log is brief and readable.

Change-Id: Ibf0846d457cab26f54c25e6906a8bb304724e2d8
Neels Hofmeyr 2017-06-06 19:41:17 +02:00
parent f9e86935a3
commit 6ccda11a98
14 changed files with 305 additions and 226 deletions

View File

@ -38,4 +38,4 @@ nested print just prints
01:02:03 tst level2: nested l2 log() from within l3 scope [level1↪level2] [log_test.py:146]
01:02:03 tst level3: ERR: ValueError: bork [level1↪level2↪level3] [log_test.py:147: raise ValueError('bork')]
- Enter the same Origin context twice
01:02:03 tst level2: nested log [level1↪level2] [log_test.py:159]
disallowed successfully

View File

@ -152,10 +152,13 @@ except Exception:
log.log_exn()
print('- Enter the same Origin context twice')
with Thing('level1'):
l2 = Thing('level2')
with l2:
with l2:
l2.log('nested log')
try:
t = Thing('foo')
with t:
with t:
raise RuntimeError('this should not be reached')
except AssertionError:
print('disallowed successfully')
pass
# vim: expandtab tabstop=4 shiftwidth=4

View File

@ -7,7 +7,7 @@ cnf -: DBG: Found path suites_dir as [PATH]/selftest/suite_test
- no suite.conf
cnf -: DBG: Found path suites_dir as [PATH]/selftest/suite_test
cnf empty_dir: DBG: reading suite.conf
--- [PATH]/selftest/suite_test/empty_dir/suite.conf: ERR: FileNotFoundError: [Errno 2] No such file or directory: '[PATH]/selftest/suite_test/empty_dir/suite.conf' [empty_dir↪[PATH]/selftest/suite_test/empty_dir/suite.conf]
--- [PATH]/selftest/suite_test/empty_dir/suite.conf: ERR: FileNotFoundError: [Errno 2] No such file or directory: '[PATH]/selftest/suite_test/empty_dir/suite.conf'
- valid suite dir
cnf -: DBG: Found path suites_dir as [PATH]/selftest/suite_test
cnf test_suite: DBG: reading suite.conf
@ -24,7 +24,10 @@ resources:
- run hello world test
cnf -: DBG: Found config file resources.conf as [PATH]/selftest/suite_test/resources.conf in ./suite_test which is [PATH]/selftest/suite_test
cnf -: DBG: Found path state_dir as [PATH]/selftest/suite_test/test_work/state_dir
tst test_suite: Suite run start
---------------------------------------------------------------------
trial test_suite
---------------------------------------------------------------------
tst test_suite: reserving resources in [PATH]/selftest/suite_test/test_work/state_dir ...
tst test_suite: DBG: {combining='resources'}
tst test_suite: DBG: {definition_conf={bts=[{'times': '1'}], ip_address=[{'times': '1'}], modem=[{'times': '2'}]}} [test_suite↪(combining_scenarios='resources')↪test_suite]
@ -49,37 +52,93 @@ tst test_suite: DBG: Picked - _hash: 19c69e45aa090fb511446bd00797690aa82ff52f
ki: 47FDB2D55CE6A10A85ABDAD034A5B7B3
label: m7802
path: /wavecom_1
tst hello_world.py:[LINENR] START [test_suite↪hello_world.py]
----------------------------------------------
trial test_suite hello_world.py
----------------------------------------------
tst hello_world.py:[LINENR]: hello world [test_suite↪hello_world.py:[LINENR]]
tst hello_world.py:[LINENR]: I am 'test_suite' / 'hello_world.py:[LINENR]' [test_suite↪hello_world.py:[LINENR]]
tst hello_world.py:[LINENR]: one [test_suite↪hello_world.py:[LINENR]]
tst hello_world.py:[LINENR]: two [test_suite↪hello_world.py:[LINENR]]
tst hello_world.py:[LINENR]: three [test_suite↪hello_world.py:[LINENR]]
tst hello_world.py:[LINENR] PASS [test_suite↪hello_world.py]
tst test_suite: PASS
pass: all 6 tests passed (5 skipped).
tst hello_world.py:[LINENR] Test passed (N.N sec) [test_suite↪hello_world.py]
---------------------------------------------------------------------
trial test_suite PASS
---------------------------------------------------------------------
PASS: test_suite (pass: 1, skip: 5)
pass: hello_world.py (N.N sec)
skip: mo_mt_sms.py
skip: mo_sms.py
skip: test_error.py
skip: test_fail.py
skip: test_fail_raise.py
- a test with an error
tst test_suite: Suite run start [suite.py:[LINENR]]
tst test_error.py:[LINENR] START [test_suite↪test_error.py] [suite.py:[LINENR]]
---------------------------------------------------------------------
trial test_suite
---------------------------------------------------------------------
----------------------------------------------
trial test_suite test_error.py
----------------------------------------------
tst test_error.py:[LINENR]: I am 'test_suite' / 'test_error.py:[LINENR]' [test_suite↪test_error.py:[LINENR]] [test_error.py:[LINENR]]
tst test_error.py:[LINENR]: ERR: AssertionError: [test_error.py:[LINENR]: assert False]
tst test_error.py:[LINENR] FAIL (AssertionError) [test_suite↪test_error.py] [suite.py:[LINENR]]
tst test_suite: FAIL [suite.py:[LINENR]]
tst test_error.py:[LINENR]: ERR: AssertionError: test_error.py:[LINENR]: assert False [test_error.py:[LINENR]] [test_suite↪test_error.py:[LINENR]] [suite.py:[LINENR]]
tst test_error.py:[LINENR]: Test FAILED (N.N sec) [test_suite↪test_error.py:[LINENR]] [suite.py:[LINENR]]
---------------------------------------------------------------------
trial test_suite FAIL
---------------------------------------------------------------------
FAIL: test_suite (fail: 1, skip: 5)
skip: hello_world.py (N.N sec)
skip: mo_mt_sms.py
skip: mo_sms.py
FAIL: test_error.py (N.N sec) AssertionError: test_error.py:[LINENR]: assert False [test_error.py:[LINENR]]
skip: test_fail.py
skip: test_fail_raise.py
- a test with a failure
tst test_suite: Suite run start [suite.py:[LINENR]]
tst test_fail.py:[LINENR] START [test_suite↪test_fail.py] [suite.py:[LINENR]]
---------------------------------------------------------------------
trial test_suite
---------------------------------------------------------------------
----------------------------------------------
trial test_suite test_fail.py
----------------------------------------------
tst test_fail.py:[LINENR]: I am 'test_suite' / 'test_fail.py:[LINENR]' [test_suite↪test_fail.py:[LINENR]] [test_fail.py:[LINENR]]
tst test_fail.py:[LINENR] FAIL (EpicFail) [test_suite↪test_fail.py] [suite.py:[LINENR]]
tst test_suite: FAIL [suite.py:[LINENR]]
tst test_fail.py:[LINENR]: ERR: EpicFail: This failure is expected [test_suite↪test_fail.py:[LINENR]] [suite.py:[LINENR]]
tst test_fail.py:[LINENR]: Test FAILED (N.N sec) [test_suite↪test_fail.py:[LINENR]] [suite.py:[LINENR]]
---------------------------------------------------------------------
trial test_suite FAIL
---------------------------------------------------------------------
FAIL: test_suite (fail: 1, skip: 5)
skip: hello_world.py (N.N sec)
skip: mo_mt_sms.py
skip: mo_sms.py
skip: test_error.py (N.N sec)
FAIL: test_fail.py (N.N sec) EpicFail: This failure is expected
skip: test_fail_raise.py
- a test with a raised failure
tst test_suite: Suite run start [suite.py:[LINENR]]
tst test_fail_raise.py:[LINENR] START [test_suite↪test_fail_raise.py] [suite.py:[LINENR]]
tst test_fail_raise.py:[LINENR]: I am 'test_suite' / 'test_fail_raise.py:[LINENR]' [test_suite↪test_fail_raise.py:[LINENR]] [test_fail_raise.py:[LINENR]]
tst test_fail_raise.py:[LINENR]: ERR: Failure: ('EpicFail', 'This failure is expected') [test_fail_raise.py:[LINENR]: raise Failure('EpicFail', 'This failure is expected')]
tst test_fail_raise.py:[LINENR] FAIL (EpicFail) [test_suite↪test_fail_raise.py] [suite.py:[LINENR]]
tst test_suite: FAIL [suite.py:[LINENR]]
---------------------------------------------------------------------
trial test_suite
---------------------------------------------------------------------
----------------------------------------------
trial test_suite test_fail_raise.py
----------------------------------------------
tst test_fail_raise.py:[LINENR]: ERR: ExpectedFail: This failure is expected [test_fail_raise.py:[LINENR]] [test_suite↪test_fail_raise.py:[LINENR]] [suite.py:[LINENR]]
tst test_fail_raise.py:[LINENR]: Test FAILED (N.N sec) [test_suite↪test_fail_raise.py:[LINENR]] [suite.py:[LINENR]]
---------------------------------------------------------------------
trial test_suite FAIL
---------------------------------------------------------------------
FAIL: test_suite (fail: 1, skip: 5)
skip: hello_world.py (N.N sec)
skip: mo_mt_sms.py
skip: mo_sms.py
skip: test_error.py (N.N sec)
skip: test_fail.py (N.N sec)
FAIL: test_fail_raise.py (N.N sec) ExpectedFail: This failure is expected [test_fail_raise.py:[LINENR]]
- graceful exit.

View File

@ -1,2 +1,3 @@
/[^ ]*/selftest/ [PATH]/selftest/
\.py:[0-9]* .py:[LINENR]
\([0-9.]+ sec\) (N.N sec)

View File

@ -20,7 +20,8 @@ assert(isinstance(s_def, suite.SuiteDefinition))
print(config.tostr(s_def.conf))
print('- run hello world test')
s = suite.SuiteRun(None, 'test_suite', s_def)
trial = log.Origin('trial')
s = suite.SuiteRun(trial, 'test_suite', s_def)
results = s.run_tests('hello_world.py')
print(report.suite_to_text(s))
@ -29,26 +30,17 @@ log.style_change(src=True)
print('\n- a test with an error')
results = s.run_tests('test_error.py')
output = report.suite_to_text(s)
assert 'FAIL: [test_suite] 1 failed ' in output
assert 'FAIL: [test_error.py]' in output
assert "type:'AssertionError' message: AssertionError()" in output
assert 'assert False' in output
print(output)
print('\n- a test with a failure')
results = s.run_tests('test_fail.py')
output = report.suite_to_text(s)
assert 'FAIL: [test_suite] 1 failed ' in output
assert 'FAIL: [test_fail.py]' in output
assert "type:'EpicFail' message: This failure is expected" in output
assert "test.set_fail('EpicFail', 'This failure is expected')" in output
print(output)
print('\n- a test with a raised failure')
results = s.run_tests('test_fail_raise.py')
output = report.suite_to_text(s)
assert 'FAIL: [test_suite] 1 failed ' in output
assert 'FAIL: [test_fail_raise.py]' in output
assert "type:'EpicFail' message: This failure is expected" in output
assert "raise Failure('EpicFail', 'This failure is expected')" in output
print(output)
print('\n- graceful exit.')
# vim: expandtab tabstop=4 shiftwidth=4

View File

@ -1,6 +1,3 @@
#!/usr/bin/env python3
from osmo_gsm_tester.test import *
print('I am %r / %r' % (suite.name(), test.name()))
raise Failure('EpicFail', 'This failure is expected')
class ExpectedFail(Exception):
pass
raise ExpectedFail('This failure is expected')

View File

@ -98,7 +98,7 @@ def main():
# is easiest to maintain.
parser.add_argument('-V', '--version', action='store_true',
help='Show version')
parser.add_argument('trial_package', nargs='+',
parser.add_argument('trial_package',
help='Directory containing binaries to test')
parser.add_argument('-s', '--suite-scenario', dest='suite_scenario', action='append',
help='''A suite-scenarios combination
@ -128,7 +128,7 @@ OSMO_GSM_TESTER_CONF env and default locations)''')
print('combinations:', repr(args.suite_scenario))
print('series:', repr(args.series))
print('trials:', repr(args.trial_package))
print('trial:', repr(args.trial_package))
print('tests:', repr(args.test))
# create a default log to stdout
@ -157,72 +157,52 @@ OSMO_GSM_TESTER_CONF env and default locations)''')
if not combination_strs:
raise RuntimeError('Need at least one suite:scenario or series to run')
# make sure all suite:scenarios exist
suite_scenarios = []
for combination_str in combination_strs:
suite_scenarios.append(suite.load_suite_scenario_str(combination_str))
# pick tests and make sure they exist
test_names = []
for test_name in (args.test or []):
found = False
if test_name.startswith('=') and not test_name.endswith('.py'):
test_name = test_name + '.py'
for suite_scenario_str, suite_def, scenarios in suite_scenarios:
for test in suite_def.tests:
for def_test_name in suite_def.test_basenames:
if test_name.startswith('='):
match = test_name[1:] == test.name()
match = test_name[1:] == def_test_name
else:
match = test_name in test.name()
match = test_name in def_test_name
if match:
found = True
test_names.append(test.name())
test_names.append(def_test_name)
if not found:
raise RuntimeError('No test found for %r' % test_name)
if test_names:
test_names = sorted(set(test_names))
print(repr(test_names))
trials = []
for trial_package in args.trial_package:
t = trial.Trial(trial_package)
t.verify()
trials.append(t)
with trial.Trial(args.trial_package) as current_trial:
current_trial.verify()
for suite_scenario_str, suite_def, scenarios in suite_scenarios:
current_trial.add_suite_run(suite_scenario_str, suite_def, scenarios)
current_trial.run_suites(test_names)
trials_run = []
any_failed = False
for current_trial in trials:
try:
with current_trial:
for suite_scenario_str, suite_def, scenarios in suite_scenarios:
suite_run = suite.SuiteRun(current_trial, suite_scenario_str, suite_def, scenarios)
current_trial.add_suite(suite_run)
status = current_trial.run_suites(test_names)
if status == trial.Trial.FAIL:
any_failed = True
trials_run.append(current_trial)
except Exception:
# Do not catch here subclasses of BaseException such as SystemExit, let them finish the program
any_failed = True
current_trial.log_exn()
sys.stderr.flush()
sys.stdout.flush()
if not any_failed:
log.large_separator('All trials passed:\n ' + ('\n '.join(mytrial.name() for mytrial in trials_run)))
else:
for mytrial in trials_run:
log.large_separator('Trial Report for %s' % mytrial.name())
mytrial.log_report()
exit(1)
if current_trial.status != trial.Trial.PASS:
return 1
return 0
if __name__ == '__main__':
rc = 2
try:
main()
rc = main()
except:
# Tell the log to show the exception, then terminate the program with the exception anyway.
# Since exceptions within test runs should be caught and evaluated, this is basically about
# exceptions during command line parsing and such, so it's appropriate to abort immediately.
log.log_exn()
raise
exit(rc)
# vim: expandtab tabstop=4 shiftwidth=4

View File

@ -206,13 +206,16 @@ class LogTarget:
log_str = log_str + '\n'
self.log_write_func(log_str)
def large_separator(self, *msgs):
def large_separator(self, *msgs, sublevel=1, space_above=True):
sublevel = max(1, min(3, sublevel))
msg = ' '.join(msgs)
sep = '-' * int(23 * (5 - sublevel))
if not msg:
msg = '------------------------------------------'
self.log_write_func('------------------------------------------\n'
'%s\n'
'------------------------------------------\n' % msg)
msg = sep
lines = [sep, msg, sep, '']
if space_above:
lines.insert(0, '')
self.log_write_func('\n'.join(lines))
def level_str(level):
if level == L_TRACEBACK:
@ -231,9 +234,9 @@ def _log_all_targets(origin, category, level, src, messages, named_items=None):
for target in LogTarget.all_targets:
target.log(origin, category, level, src, messages, named_items)
def large_separator(*msgs):
def large_separator(*msgs, sublevel=1, space_above=True):
for target in LogTarget.all_targets:
target.large_separator(*msgs)
target.large_separator(*msgs, sublevel=sublevel, space_above=space_above)
def get_src_from_caller(levels_up=1):
caller = getframeinfo(stack()[levels_up][0])
@ -327,6 +330,9 @@ class Origin:
def err(self, *messages, **named_items):
self._log(L_ERR, messages, named_items)
def trace(self, *messages, **named_items):
self._log(L_TRACEBACK, messages, named_items)
def log_exn(self, exc_info=None):
log_exn(self, self._log_category, exc_info)
@ -373,10 +379,8 @@ class Origin:
def set_child_of(self, parent_origin):
# avoid loops
if self._parent_origin is not None:
return False
if parent_origin == self:
return False
assert self._parent_origin is None
assert parent_origin is not self
self._parent_origin = parent_origin
return True

View File

@ -41,7 +41,7 @@ def suite_to_junit(suite):
testsuite.set('timestamp', datetime.fromtimestamp(round(suite.start_timestamp)).isoformat())
testsuite.set('time', str(math.ceil(suite.duration)))
testsuite.set('tests', str(len(suite.tests)))
testsuite.set('failures', str(suite.test_failed_ctr))
testsuite.set('failures', str(suite.count_test_results()[2]))
for test in suite.tests:
testcase = test_to_junit(test)
testsuite.append(testcase)
@ -54,31 +54,56 @@ def test_to_junit(test):
if test.status == suite.Test.SKIP:
skip = et.SubElement(testcase, 'skipped')
elif test.status == suite.Test.FAIL:
failure = et.SubElement(testcase, 'failure')
failure.set('type', test.fail_type)
failure.text = test.fail_message
failure = et.SubElement(testcase, 'failure')
failure.set('type', test.fail_type or 'failure')
failure.text = test.fail_message
if test.fail_tb:
system_err = et.SubElement(testcase, 'system-err')
system_err.text = test.fail_tb
return testcase
def trial_to_text(trial):
msg = '\n%s [%s]\n ' % (trial.status, trial.name())
msg += '\n '.join(suite_to_text(result) for result in trial.suites)
return msg
suite_failures = []
count_fail = 0
count_pass = 0
for suite in trial.suites:
if suite.passed():
count_pass += 1
else:
count_fail += 1
suite_failures.append(suite_to_text(suite))
summary = ['%s: %s' % (trial.name(), trial.status)]
if count_fail:
summary.append('%d suites failed' % count_fail)
if count_pass:
summary.append('%d suites passed' % count_pass)
msg = [', '.join(summary)]
msg.extend(suite_failures)
return '\n'.join(msg)
def suite_to_text(suite):
if suite.test_failed_ctr:
return 'FAIL: [%s] %d failed out of %d tests run (%d skipped):\n %s' % (
suite.name(), suite.test_failed_ctr, len(suite.tests), suite.test_skipped_ctr,
'\n '.join([test_to_text(t) for t in suite.tests]))
if not suite.tests:
return 'no tests were run.'
return 'pass: all %d tests passed (%d skipped).' % (len(suite.tests), suite.test_skipped_ctr)
passed, skipped, failed = suite.count_test_results()
details = []
if failed:
details.append('fail: %d' % failed)
if passed:
details.append('pass: %d' % passed)
if skipped:
details.append('skip: %d' % skipped)
msgs = ['%s: %s (%s)' % (suite.status, suite.name(), ', '.join(details))]
msgs.extend([test_to_text(t) for t in suite.tests])
return '\n '.join(msgs)
def test_to_text(test):
ret = "%s: [%s]" % (test.status, test.name())
if test.status != suite.Test.SKIP:
ret += " (%s, %d sec)" % (datetime.fromtimestamp(round(test.start_timestamp)).isoformat(), test.duration)
msgs = ['%s: %s' % (test.status, test.name())]
if test.start_timestamp:
msgs.append('(%.1f sec)' % test.duration)
if test.status == suite.Test.FAIL:
ret += " type:'%s' message: %s" % (test.fail_type, test.fail_message.replace('\n', '\n '))
return ret
msgs.append('%s: %s' % (test.fail_type, test.fail_message))
return ' '.join(msgs)
# vim: expandtab tabstop=4 shiftwidth=4

View File

@ -31,16 +31,6 @@ from . import test
class Timeout(Exception):
pass
class Failure(Exception):
'''Test failure exception, provided to be raised by tests. fail_type is
usually a keyword used to quickly identify the type of failure that
occurred. fail_msg is a more extensive text containing information about
the issue.'''
def __init__(self, fail_type, fail_msg):
self.fail_type = fail_type
self.fail_msg = fail_msg
class SuiteDefinition(log.Origin):
'''A test suite reserves resources for a number of tests.
Each test requires a specific number of modems, BTSs etc., which are
@ -65,43 +55,32 @@ class SuiteDefinition(log.Origin):
self.read_conf()
def read_conf(self):
with self:
self.dbg('reading %s' % SuiteDefinition.CONF_FILENAME)
if not os.path.isdir(self.suite_dir):
raise RuntimeError('No such directory: %r' % self.suite_dir)
self.conf = config.read(os.path.join(self.suite_dir,
SuiteDefinition.CONF_FILENAME),
SuiteDefinition.CONF_SCHEMA)
self.load_tests()
self.dbg('reading %s' % SuiteDefinition.CONF_FILENAME)
if not os.path.isdir(self.suite_dir):
raise RuntimeError('No such directory: %r' % self.suite_dir)
self.conf = config.read(os.path.join(self.suite_dir,
SuiteDefinition.CONF_FILENAME),
SuiteDefinition.CONF_SCHEMA)
self.load_test_basenames()
def load_tests(self):
with self:
self.tests = []
for basename in sorted(os.listdir(self.suite_dir)):
if not basename.endswith('.py'):
continue
self.tests.append(Test(self, basename))
def load_test_basenames(self):
self.test_basenames = []
for basename in sorted(os.listdir(self.suite_dir)):
if not basename.endswith('.py'):
continue
self.test_basenames.append(basename)
def add_test(self, test):
with self:
if not isinstance(test, Test):
raise ValueError('add_test(): pass a Test() instance, not %s' % type(test))
if test.suite is None:
test.suite = self
if test.suite is not self:
raise ValueError('add_test(): test already belongs to another suite')
self.tests.append(test)
class Test(log.Origin):
UNKNOWN = 'UNKNOWN'
SKIP = 'SKIP'
PASS = 'PASS'
SKIP = 'skip'
PASS = 'pass'
FAIL = 'FAIL'
def __init__(self, suite, test_basename):
self.suite = suite
def __init__(self, suite_run, test_basename):
self.suite_run = suite_run
self.basename = test_basename
self.path = os.path.join(self.suite.suite_dir, self.basename)
self.path = os.path.join(self.suite_run.definition.suite_dir, self.basename)
super().__init__(self.path)
self.set_name(self.basename)
self.set_log_category(log.C_TST)
@ -111,38 +90,38 @@ class Test(log.Origin):
self.fail_type = None
self.fail_message = None
def run(self, suite_run):
assert self.suite is suite_run.definition
def run(self):
try:
with self:
log.large_separator(self.suite_run.trial.name(), self.suite_run.name(), self.name(), sublevel=3)
self.status = Test.UNKNOWN
self.start_timestamp = time.time()
test.setup(suite_run, self, ofono_client, sys.modules[__name__], event_loop)
self.log('START')
test.setup(self.suite_run, self, ofono_client, sys.modules[__name__], event_loop)
with self.redirect_stdout():
util.run_python_file('%s.%s' % (self.suite.name(), self.name()),
util.run_python_file('%s.%s' % (self.suite_run.definition.name(), self.basename),
self.path)
if self.status == Test.UNKNOWN:
self.set_pass()
except Exception as e:
self.log_exn()
if isinstance(e, Failure):
ftype = e.fail_type
fmsg = e.fail_msg + '\n' + traceback.format_exc().rstrip()
if hasattr(e, 'msg'):
msg = e.msg
else:
ftype = type(e).__name__
fmsg = repr(e) + '\n' + traceback.format_exc().rstrip()
if isinstance(e, resource.NoResourceExn):
fmsg += suite_run.resource_status_str()
self.set_fail(ftype, fmsg, False)
finally:
if self.status == Test.PASS or self.status == Test.SKIP:
self.log(self.status)
else:
self.log('%s (%s)' % (self.status, self.fail_type))
return self.status
msg = str(e)
if isinstance(e, AssertionError):
# AssertionError lacks further information on what was
# asserted. Find the line where the code asserted:
msg += log.get_src_from_tb(sys.exc_info()[2])
# add source file information to failure report
if hasattr(e, 'origins'):
msg += ' [%s]' % e.origins
tb_str = traceback.format_exc()
if isinstance(e, resource.NoResourceExn):
tb_str += self.suite_run.resource_status_str()
self.set_fail(type(e).__name__, msg, tb_str)
except BaseException as e:
# when the program is aborted by a signal (like Ctrl-C), escalate to abort all.
self.err('TEST RUN ABORTED: %s' % type(e).__name__)
raise
def name(self):
l = log.get_line_for_src(self.path)
@ -150,17 +129,26 @@ class Test(log.Origin):
return '%s:%s' % (self._name, l)
return super().name()
def set_fail(self, fail_type, fail_message, tb=True):
def set_fail(self, fail_type, fail_message, tb_str=None):
self.status = Test.FAIL
self.duration = time.time() - self.start_timestamp
self.fail_type = fail_type
self.fail_message = fail_message
if tb:
self.fail_message += '\n' + ''.join(traceback.format_stack()[:-1]).rstrip()
if tb_str is None:
# populate an exception-less call to set_fail() with traceback info
tb_str = ''.join(traceback.format_stack()[:-1])
self.fail_tb = tb_str
self.err('%s: %s' % (self.fail_type, self.fail_message))
if self.fail_tb:
self.trace(self.fail_tb)
self.log('Test FAILED (%.1f sec)' % self.duration)
def set_pass(self):
self.status = Test.PASS
self.duration = time.time() - self.start_timestamp
self.log('Test passed (%.1f sec)' % self.duration)
def set_skip(self):
self.status = Test.SKIP
@ -172,6 +160,7 @@ class SuiteRun(log.Origin):
FAIL = 'FAIL'
trial = None
status = None
resources_pool = None
reserved_resources = None
objects_to_clean_up = None
@ -179,13 +168,20 @@ class SuiteRun(log.Origin):
_config = None
_processes = None
def __init__(self, current_trial, suite_scenario_str, suite_definition, scenarios=[]):
self.trial = current_trial
def __init__(self, trial, suite_scenario_str, suite_definition, scenarios=[]):
self.trial = trial
self.definition = suite_definition
self.scenarios = scenarios
self.set_name(suite_scenario_str)
self.set_log_category(log.C_TST)
self.resources_pool = resource.ResourcesPool()
self.status = SuiteRun.UNKNOWN
self.load_tests()
def load_tests(self):
self.tests = []
for test_basename in self.definition.test_basenames:
self.tests.append(Test(self, test_basename))
def register_for_cleanup(self, *obj):
assert all([hasattr(o, 'cleanup') for o in obj])
@ -198,11 +194,8 @@ class SuiteRun(log.Origin):
obj.cleanup()
def mark_start(self):
self.tests = []
self.start_timestamp = time.time()
self.duration = 0
self.test_failed_ctr = 0
self.test_skipped_ctr = 0
self.status = SuiteRun.UNKNOWN
def combined(self, conf_name):
@ -233,27 +226,27 @@ class SuiteRun(log.Origin):
if self.reserved_resources:
raise RuntimeError('Attempt to reserve resources twice for a SuiteRun')
self.log('reserving resources in', self.resources_pool.state_dir, '...')
with self:
self.reserved_resources = self.resources_pool.reserve(self, self.resource_requirements())
self.reserved_resources = self.resources_pool.reserve(self, self.resource_requirements())
def run_tests(self, names=None):
self.log('Suite run start')
try:
self.mark_start()
event_loop.register_poll_func(self.poll)
if not self.reserved_resources:
self.reserve_resources()
for test in self.definition.tests:
if names and not test.name() in names:
test.set_skip()
self.test_skipped_ctr += 1
self.tests.append(test)
continue
with self:
st = test.run(self)
if st == Test.FAIL:
self.test_failed_ctr += 1
self.tests.append(test)
with self:
log.large_separator(self.trial.name(), self.name(), sublevel=2)
self.mark_start()
event_loop.register_poll_func(self.poll)
if not self.reserved_resources:
self.reserve_resources()
for test in self.tests:
if names and not test.name() in names:
test.set_skip()
continue
test.run()
except Exception:
self.log_exn()
except BaseException as e:
# when the program is aborted by a signal (like Ctrl-C), escalate to abort all.
self.err('SUITE RUN ABORTED: %s' % type(e).__name__)
raise
finally:
# if sys.exit() called from signal handler (e.g. SIGINT), SystemExit
# base exception is raised. Make sure to stop processes in this
@ -261,14 +254,33 @@ class SuiteRun(log.Origin):
self.stop_processes()
self.objects_cleanup()
self.free_resources()
event_loop.unregister_poll_func(self.poll)
self.duration = time.time() - self.start_timestamp
if self.test_failed_ctr:
self.status = SuiteRun.FAIL
else:
self.status = SuiteRun.PASS
self.log(self.status)
return self.status
event_loop.unregister_poll_func(self.poll)
self.duration = time.time() - self.start_timestamp
passed, skipped, failed = self.count_test_results()
# if no tests ran, count it as failure
if passed and not failed:
self.status = SuiteRun.PASS
else:
self.status = SuiteRun.FAIL
log.large_separator(self.trial.name(), self.name(), self.status, sublevel=2, space_above=False)
def passed(self):
return self.status == SuiteRun.PASS
def count_test_results(self):
passed = 0
skipped = 0
failed = 0
for test in self.tests:
if test.status == Test.PASS:
passed += 1
elif test.status == Test.FAIL:
failed += 1
else:
skipped += 1
return (passed, skipped, failed)
def remember_to_stop(self, process):
if self._processes is None:

View File

@ -33,7 +33,6 @@ sleep = None
poll = None
prompt = None
Timeout = None
Failure = None
def setup(suite_run, _test, ofono_client, suite_module, event_module):
global trial, suite, test, resources, log, dbg, err, wait, wait_no_raise, sleep, poll, prompt, Failure, Timeout
@ -49,7 +48,6 @@ def setup(suite_run, _test, ofono_client, suite_module, event_module):
sleep = lambda *args, **kwargs: event_module.sleep(suite_run, *args, **kwargs)
poll = event_module.poll
prompt = suite_run.prompt
Failure = suite_module.Failure
Timeout = suite_module.Timeout
# vim: expandtab tabstop=4 shiftwidth=4

View File

@ -79,15 +79,15 @@ class Trial(log.Origin):
log.FileLogTarget(run_dir.new_child(FILE_LOG_BRIEF))
.style_change(src=False, all_origins_on_levels=(log.L_ERR, log.L_TRACEBACK))
]
self.log('Trial start')
log.large_separator(self.name(), sublevel=1)
self.log('Detailed log at', detailed_log)
self.take()
super().__enter__()
return self
def __exit__(self, *exc_info):
super().__exit__(*exc_info)
self.log('Trial end')
self.log_report()
for lt in self.log_targets:
lt.remove()
self.log_targets = None
@ -182,25 +182,30 @@ class Trial(log.Origin):
except:
pass
def add_suite(self, suite_run):
def add_suite_run(self, suite_scenario_str, suite_def, scenarios):
suite_run = suite.SuiteRun(self, suite_scenario_str, suite_def, scenarios)
self.suites.append(suite_run)
def run_suites(self, names=None):
self.status = Trial.UNKNOWN
for suite_run in self.suites:
log.large_separator(self.name(), suite_run.name())
st = suite_run.run_tests(names)
if st == suite.SuiteRun.FAIL:
self.status = Trial.FAIL
elif self.status == Trial.UNKNOWN:
self.status = Trial.PASS
self.log(self.status)
for suite_run in self.suites:
try:
suite_run.run_tests(names)
except BaseException as e:
# when the program is aborted by a signal (like Ctrl-C), escalate to abort all.
self.err('TRIAL RUN ABORTED: %s' % type(e).__name__)
raise
finally:
if suite_run.status != suite.SuiteRun.PASS:
self.status = Trial.FAIL
if self.status == Trial.UNKNOWN:
self.status = Trial.PASS
junit_path = self.get_run_dir().new_file(self.name()+'.xml')
self.log('Storing JUnit report in', junit_path)
report.trial_to_junit_write(self, junit_path)
return self.status
def log_report(self):
log.large_separator(self.name(), self.status)
self.log(report.trial_to_text(self))
# vim: expandtab tabstop=4 shiftwidth=4

View File

@ -2,4 +2,4 @@
from osmo_gsm_tester.test import *
# This can be used to verify that a test failure is reported properly.
test.set_fail('EpicFail', 'This failure is expected')
test.set_fail('ExpectedFail', 'This failure is expected')

View File

@ -1,5 +1,8 @@
#!/usr/bin/env python3
from osmo_gsm_tester.test import *
class ExpectedExn(Exception):
pass
# This can be used to verify that a test failure is reported properly.
raise Failure('EpicFail', 'This failure is expected')
raise ExpectedExn('This failure is expected')