add compare-results.sh, call from start-testsuite.sh

Compare current test results to the expected results, and exit with an error
on discrepancies.

Add compare-results.sh: (trivially) grep the junit xml output to determine
which tests passed and which didn't, and compare against an
expected-results.log, another junit file from a previous run. Summarize and
determine success.
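
For illustration, a minimal sketch of that single-line parsing; this is not
the actual compare-results.sh, and the helper name and file names are made
up:

  # Hypothetical: emit one "pass NAME" / "FAIL NAME" line per test,
  # relying on each <testcase> element opening on a line of its own,
  # as in the expected-results file below.
  junit_verdicts() {
      grep '<testcase' "$1" | while read -r line; do
          name="$(echo "$line" | sed "s/.*name='\([^']*\)'.*/\1/")"
          case "$line" in
          *'/>'*) echo "pass $name" ;;  # self-closing element: test passed
          *)      echo "FAIL $name" ;;  # opening tag: a <failure> follows
          esac
      done
  }

  # Compare a current run against the stored expectations:
  junit_verdicts junit-xml-current.log > current.txt
  junit_verdicts expected-results.log > expected.txt
  diff expected.txt current.txt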

Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL".

In various subdirs, copy the current jenkins jobs' junit xml outputs as
expected-results.log, so that we will start getting useful output in both
jenkins runs and manual local runs.

In start-testsuite.sh, after running the tests, invoke the results comparison.
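
The call site could look roughly like this (hypothetical; the exact file
names and the argument order of compare-results.sh are illustrative):

  # After the test run has produced its junit xml log:
  if [ -e expected-results.log ]; then
      ./compare-results.sh expected-results.log junit-xml-*.log || exit 1
  fi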

Due to its single-line parsing, the script so far does not distinguish
between error and failure. I doubt that we actually need that distinction,
though.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d

<?xml version="1.0"?>
<testsuite name='Titan' tests='57' failures='3' errors='0' skipped='0' inconc='0' time='MASKED'>
  <testcase classname='SGSN_Tests' name='TC_attach' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_mnc3' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_umts_aka_umts_res' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_umts_aka_gsm_sres' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_auth_id_timeout' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_auth_sai_timeout' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_auth_sai_reject' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_gsup_lu_timeout' time='MASKED'>
    <failure type='fail-verdict'>
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_gsup_lu_timeout testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_gsup_lu_reject' time='MASKED'>
    <failure type='fail-verdict'>
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_gsup_lu_reject testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_combined' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_accept_all' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_closed' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_no_imei_response' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_no_imsi_response' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_closed_add_vty' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_check_subscriber_list' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_detach_check_subscriber_list' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_check_complete_resend' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_update' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_withdraw' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_unknown_subscriber_withdraw' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_unknown_subscriber_update' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_rau_unknown' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_rau' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_rau_a_a' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_rau_a_b' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_usim_resync' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_unknown_nopoweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_unknown_poweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_nopoweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_poweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_pdp_act_unattached' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_ggsn_reject' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user_deact_mo' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user_deact_mt' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_deact_dup' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_second_attempt' time='MASKED'>
    <failure type='fail-verdict'>Tguard timeout
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_second_attempt testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_echo_timeout' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_restart_ctr_echo' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_restart_ctr_create' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_deact_mt_t3395_expire' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_deact_gtp_retrans' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_deact_gtp_retrans_resp' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user_error_ind_ggsn' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_gmm_detach' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_gmm_attach_req_while_gmm_attach' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_xid_empty_l3' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_xid_n201u' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_llc_null' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_llc_sabm_dm_llgmm' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_llc_sabm_dm_ll5' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_req_id_req_ra_update' time='MASKED'/>
  <testcase classname='SGSN_Tests_Iu' name='TC_iu_attach' time='MASKED'/>
  <testcase classname='SGSN_Tests_Iu' name='TC_iu_attach_geran_rau' time='MASKED'/>
  <testcase classname='SGSN_Tests_Iu' name='TC_geran_attach_iu_rau' time='MASKED'/>
</testsuite>