add compare-results.sh, call from start-testsuite.sh
Compare current test results to the expected results, and exit in error on
discrepancies.
Add compare-results.sh: (trivially) grep the junit xml output to determine which
tests passed and which didn't, and compare against an expected-results.log,
another junit file from a previous run. Summarize and determine success.
Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL".
In various subdirs, copy the current jenkins jobs' junit xml outputs as
expected-results.log, so that we will start getting useful output in both
jenkins runs and manual local runs.
In start-testsuite.sh, after running the tests, invoke the results comparison.
Because the script parses the junit output line by line, it does not yet
distinguish between error and failure. I doubt that we actually need that
distinction though.
Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
2018-04-05 14:56:38 +00:00
<?xml version="1.0"?>
<testsuite name='Titan' tests='36' failures='5' errors='0' skipped='0' inconc='0' time='MASKED'>
<testcase classname='HLR_Tests' name='TC_gsup_sai_err_invalid_imsi' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_sai' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_sai_num_auth_vectors' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_sai_eps' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_ul_unknown_imsi' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_ul_unknown_imsi_via_proxy' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_sai_err_unknown_imsi' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_ul' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_ul_via_proxy' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_vty' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_vty_msisdn_isd' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_purge_cs' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_purge_ps' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_purge_unknown' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_unknown' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_euse_disc' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_iuse_imsi' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_iuse_imsi_via_proxy' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_iuse_msisdn' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_iuse_msisdn_via_proxy' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_euse' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_euse_continue' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_ussd_euse_defaultroute' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_mo_sss_reject' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_check_imei' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_check_imei_via_proxy' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_check_imei_invalid_len' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_gsup_check_imei_unknown_imsi' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_subscr_create_on_demand_check_imei_early' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_subscr_create_on_demand_ul' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_subscr_create_on_demand_sai' time='MASKED'/>
<testcase classname='HLR_Tests' name='TC_MSLookup_mDNS_service_other_home' time='MASKED'>
<failure type='fail-verdict'>OsmoHLR did not answer to mDNS query
HLR_Tests.ttcn:MASKED HLR_Tests control part
HLR_Tests.ttcn:MASKED TC_MSLookup_mDNS_service_other_home testcase
</failure>
</testcase>
<testcase classname='HLR_Tests' name='TC_MSLookup_GSUP_proxy' time='MASKED'>
<failure type='fail-verdict'>Timeout
HLR_Tests.ttcn:MASKED HLR_Tests control part
HLR_Tests.ttcn:MASKED TC_MSLookup_GSUP_proxy testcase
</failure>
</testcase>
<testcase classname='HLR_Tests' name='TC_MSLookup_mDNS_service_GSUP_HLR_home' time='MASKED'>
<failure type='fail-verdict'>OsmoHLR did not answer to mDNS query
HLR_Tests.ttcn:MASKED HLR_Tests control part
HLR_Tests.ttcn:MASKED TC_MSLookup_mDNS_service_GSUP_HLR_home testcase
</failure>
</testcase>
<testcase classname='HLR_Tests' name='TC_MSLookup_mDNS_service_GSUP_HLR_proxy' time='MASKED'>
<failure type='fail-verdict'>Timeout
HLR_Tests.ttcn:MASKED HLR_Tests control part
HLR_Tests.ttcn:MASKED TC_MSLookup_mDNS_service_GSUP_HLR_proxy testcase
</failure>
</testcase>
<testcase classname='HLR_Tests' name='TC_MSLookup_mDNS_service_other_proxy' time='MASKED'>
<failure type='fail-verdict'>Timeout
HLR_Tests.ttcn:MASKED HLR_Tests control part
HLR_Tests.ttcn:MASKED TC_MSLookup_mDNS_service_other_proxy testcase
</failure>
</testcase>
</testsuite>