add compare-results.sh, call from start-testsuite.sh

Compare current test results against the expected results, and exit in error
on discrepancies.

Add compare-results.sh: (trivially) grep the JUnit XML output to determine
which tests passed and which didn't, and compare against an
expected-results.log, another JUnit file from a previous run. Summarize and
determine success.

Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL".

In various subdirs, copy the current jenkins jobs' JUnit XML outputs as
expected-results.log, so that we start getting useful output in both jenkins
runs and manual local runs.

In start-testsuite.sh, after running the tests, invoke the results comparison.

Because of the single-line parsing, the script so far does not distinguish
between error and failure; we probably do not need that distinction.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
2018-04-05 14:56:38 +00:00
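The single-line grep-and-diff approach described in the commit message can be sketched roughly as follows. This is an illustrative approximation, not the real compare-results.sh; the file names, the sed patterns, and the stand-in JUnit snippets are all assumptions made for the sketch.

```shell
#!/bin/sh
# Sketch of the comparison approach: extract one verdict line per test from
# two JUnit XML files and diff the verdict lists. NOT the real
# compare-results.sh; names and regexes here are illustrative guesses.

# Two tiny stand-in JUnit logs: "expected" (from a previous run) and "current".
cat > expected.log <<'EOF'
<testcase classname='MSC_Tests' name='TC_a' time='1.0'/>
<testcase classname='MSC_Tests' name='TC_b' time='1.0'>
<failure type='fail-verdict'>some failure</failure>
</testcase>
EOF
cp expected.log current.log

# Single-line parse: a self-closing <testcase .../> counts as a pass, while a
# <testcase ...> that opens a failure/error block counts as a failure.
# (This is why the script cannot distinguish 'error' from 'failure'.)
verdicts() {
	sed -n "s/.*<testcase .* name='\([^']*\)'.*\/>.*/pass \1/p
		s/.*<testcase .* name='\([^']*\)'[^/]*>.*/FAIL \1/p" "$1"
}

verdicts expected.log > expected.txt
verdicts current.log > current.txt

# A test failing both now and in the expected results would be an "xfail";
# any difference between the two verdict lists is a discrepancy.
if diff -u expected.txt current.txt; then
	echo "all results match the expected results"
else
	echo "DISCREPANCY: results differ from expected results" >&2
	exit 1
fi
```

A real implementation would additionally rewrite matching failures as "xfail" in the summary instead of diffing raw lists, but the pass/fail extraction idea is the same.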
<?xml version="1.0"?>
<testsuite name='Titan' tests='82' failures='4' errors='19' skipped='0' inconc='0' time='MASKED'>
<testcase classname='MSC_Tests' name='TC_cr_before_reset' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_noauth_tmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_noauth_notmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_timeout_gsup' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cmserv_imsi_unknown' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_auth_sai_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_auth_sai_err' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_clear_request' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_disconnect' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_by_imei' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_by_tmsi_noauth_unknown' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_imsi_detach_by_imsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_imsi_detach_by_tmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_imsi_detach_by_imei' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_emerg_call_imei_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_emerg_call_imsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_serv_req_vgcs_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_serv_req_vbs_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_serv_req_lcs_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_reest_req_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_auth_2G_fail' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_13_13' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cl3_no_payload' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cl3_rnd_payload' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_establish_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_setup_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_crcx_ran_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_crcx_ran_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mt_crcx_ran_reject' time='MASKED'>
<failure type='fail-verdict'>Timeout waiting for channel release
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_mt_crcx_ran_reject testcase
</failure>
</testcase>
<testcase classname='MSC_Tests' name='TC_mo_setup_and_dtmf_dup' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_cancel' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_1_13' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_13' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_1' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_1_no_cm' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_13_2' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_013_2' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_release_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_call_no_dlcx_resp' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_reset_two' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_sms_paging_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_smpp_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_smpp_mt_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mo_smma' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mt_sms_ack' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mt_sms_err' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mt_multi_part_sms' time='MASKED'>
<failure type='fail-verdict'>Tguard timeout
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_gsup_mt_multi_part_sms testcase
</failure>
</testcase>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_ussd_single_request' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_ussd_notification' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_ussd_during_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_ussd_during_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_ussd_mo_release' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_ss_session_timeout' time='MASKED'>
<error type='DTE'>Dynamic test case error: testcase.stop</error>
</testcase>
<testcase classname='MSC_Tests' name='TC_cipher_complete_with_invalid_cipher' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_reset' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_lu' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_lu_imsi_reject' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_lu_and_nothing' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_expl_imsi_det_eps' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_expl_imsi_det_noneps' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_rej' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_subscr_rej' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_ue_unr' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_and_nothing' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_and_lu' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_mt_sms' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_mo_sms' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_mt_sms_and_nothing' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_mt_sms_and_reject' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_unexp_ud' time='MASKED'>
<failure type='fail-verdict'>Error in CTRL GET "fsm.SGs-UE.id.imsi:262420000002145.state": "Error"
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_sgsap_unexp_ud testcase
</failure>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_unsol_ud' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_bssap_lu_sgsap_lu_and_mt_call' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_sgsap_lu_and_mt_call' time='MASKED'>
<error type='DTE'></error>
</testcase>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_1_log_msc_debug' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_cc_bssmap_clear' time='MASKED'>
<failure type='fail-verdict'>Tguard timeout
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_mo_cc_bssmap_clear testcase
</failure>
</testcase>
</testsuite>