<?xml version="1.0"?>
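<!--
  Expected-results file (JUnit XML from a previous run) used by
  compare-results.sh, which is invoked from start-testsuite.sh after the
  tests complete (see OS#3136). Testcases that carry a <failure> element
  here are expected to fail and are reported as "xfail"; any failure not
  listed here is reported as "FAIL" and makes the comparison exit in error.
-->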
<testsuite name='Titan' tests='82' failures='3' errors='0' skipped='0' inconc='0' time='MASKED'>
<testcase classname='MSC_Tests' name='TC_cr_before_reset' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_noauth_tmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_noauth_notmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_timeout_gsup' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cmserv_imsi_unknown' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_auth_sai_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_auth_sai_err' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_clear_request' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_disconnect' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_by_imei' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_by_tmsi_noauth_unknown' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_imsi_detach_by_imsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_imsi_detach_by_tmsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_imsi_detach_by_imei' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_emerg_call_imei_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_emerg_call_imsi' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_serv_req_vgcs_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_serv_req_vbs_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_serv_req_lcs_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cm_reest_req_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_auth_2G_fail' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_13_13' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cl3_no_payload' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cl3_rnd_payload' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_establish_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_setup_and_nothing' time='MASKED'>
<failure type='fail-verdict'>Timeout waiting for ClearCommand/Release
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_mo_setup_and_nothing testcase
</failure>
</testcase>
<testcase classname='MSC_Tests' name='TC_mo_crcx_ran_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_crcx_ran_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mt_crcx_ran_reject' time='MASKED'>
<failure type='fail-verdict'>Timeout waiting for channel release
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_mt_crcx_ran_reject testcase
</failure>
</testcase>
<testcase classname='MSC_Tests' name='TC_mo_setup_and_dtmf_dup' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_cancel' time='MASKED'>
<failure type='fail-verdict'>Received unexpected BSSAP instead of CM SERV REJ
MSC_Tests.ttcn:MASKED MSC_Tests control part
MSC_Tests.ttcn:MASKED TC_gsup_cancel testcase
</failure>
</testcase>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_1_13' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_13' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_1' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_1_no_cm' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_13_2' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_013_2' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_release_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_call_no_dlcx_resp' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_reset_two' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_sms_paging_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_smpp_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_smpp_mt_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mo_smma' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mt_sms_ack' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mt_sms_err' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_gsup_mt_multi_part_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_ussd_single_request' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_ussd_notification' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_ussd_during_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mt_ussd_during_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_mo_ussd_mo_release' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_and_ss_session_timeout' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_cipher_complete_with_invalid_cipher' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_lu_imsi_auth_tmsi_encr_3_1_log_msc_debug' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_mo_cc_bssmap_clear' time='MASKED'/>
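<!-- SGs interface testcases: RESET, LU, DETACH, PAGING, SMS and CSFB
     procedures (see OS#3645). -->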
<testcase classname='MSC_Tests' name='TC_sgsap_reset' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_lu' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_lu_imsi_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_lu_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_expl_imsi_det_eps' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_expl_imsi_det_noneps' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_rej' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_subscr_rej' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_ue_unr' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_paging_and_lu' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_unexp_ud' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_unsol_ud' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_mt_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_mo_sms' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_mt_sms_and_nothing' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_mt_sms_and_reject' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_bssap_lu_sgsap_lu_and_mt_call' time='MASKED'/>
<testcase classname='MSC_Tests' name='TC_sgsap_lu_and_mt_call' time='MASKED'/>
</testsuite>