add compare-results.sh, call from start-testsuite.sh

Compare the current test results against the expected results, and exit
with an error on discrepancies.

Add compare-results.sh: (trivially) grep the JUnit XML output to determine
which tests passed and which didn't, and compare against an
expected-results.log, another JUnit XML file from a previous run. Summarize
the outcome and determine overall success.

Include an "xfail" feature: failing tests that are expected to fail are
marked as "xfail", unexpected failures as "FAIL".

In various subdirs, copy the current Jenkins jobs' JUnit XML outputs as
expected-results.log, so that we start getting useful output from both
Jenkins runs and manual local runs.

In start-testsuite.sh, invoke the results comparison after running the tests.

Due to the single-line parsing, the script does not (yet) distinguish
between error and failure; I doubt we actually need that distinction, though.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
2018-04-05 14:56:38 +00:00
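
To make that concrete, here is a minimal sketch of the single-line parsing
and the xfail classification, assuming Titan's one-tag-per-line JUnit XML as
in the file quoted below. The script name, temp paths and output format are
illustrative, not the actual compare-results.sh:

  #!/bin/sh
  # Sketch: compare a current JUnit XML against an expected one.
  # Usage: compare-sketch.sh CURRENT_JUNIT_XML EXPECTED_JUNIT_XML
  current="$1"
  expected="$2"

  # Single-line parsing: a passing test is a self-closing <testcase .../>;
  # a failing or erroring test leaves the tag open for a nested <failure>
  # or <error> element, so both end up lumped together as FAIL.
  verdicts() {
      grep '<testcase' "$1" \
      | sed -e "s,.*name='\([^']*\)'.*/>.*,pass \1," \
            -e "s,.*name='\([^']*\)'.*>.*,FAIL \1,"
  }

  verdicts "$current"  > /tmp/current_verdicts
  verdicts "$expected" > /tmp/expected_verdicts

  rc=0
  while read -r got name; do
      # Look up what the expected-results file says about this test.
      want="$(sed -n "s/^\(.*\) $name\$/\1/p" /tmp/expected_verdicts)"
      case "$got/$want" in
      pass/pass) echo " pass: $name" ;;
      FAIL/FAIL) echo "xfail: $name" ;;        # expected failure
      FAIL/*)    echo " FAIL: $name"; rc=1 ;;  # unexpected failure
      pass/FAIL) echo "xpass: $name" ;;        # passed unexpectedly
      pass/*)    echo "  new: $name" ;;        # not in expected results
      esac
  done < /tmp/current_verdicts
  exit $rc

The two sed expressions rely on substitution order: the first one consumes
the passing (self-closing) testcase lines, so the second only ever rewrites
the remaining open tags.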
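
The hook in start-testsuite.sh then boils down to a few lines after the test
run; again a sketch with illustrative file names, omitting suite-specific
details:

  # After the TTCN-3 run has written its JUnit XML, let the comparison
  # decide the overall verdict. Placed last, so that the comparison's
  # exit status becomes the script's exit status.
  if [ -f expected-results.log ]; then
      "$(dirname "$0")/compare-results.sh" junit-output.xml expected-results.log
  fi

For reference, this is the kind of file that gets checked in as the
expectation: the SGSN suite's expected results are simply the JUnit XML of a
previous run, with volatile values (times, line numbers) masked out as
MASKED: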
<?xml version="1.0"?>
<testsuite name='Titan' tests='27' failures='4' errors='3' skipped='0' inconc='0' time='MASKED'>
  <testcase classname='SGSN_Tests' name='TC_attach' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_mnc3' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_umts_aka_umts_res' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_umts_aka_gsm_sres' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_auth_id_timeout' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_auth_sai_timeout' time='MASKED'>
    <failure type='fail-verdict'>Tguard timeout
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_auth_sai_timeout testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_auth_sai_reject' time='MASKED'>
    <failure type='fail-verdict'>Tguard timeout
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_auth_sai_reject testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_gsup_lu_timeout' time='MASKED'>
    <failure type='fail-verdict'>Tguard timeout
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_gsup_lu_timeout testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_gsup_lu_reject' time='MASKED'>
    <failure type='fail-verdict'>Tguard timeout
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_attach_gsup_lu_reject testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_combined' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_accept_all' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_closed' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_update' time='MASKED'>
    <failure type='fail-verdict'>Unexpected GMM Detach Request
      SGSN_Tests.ttcn:MASKED SGSN_Tests control part
      SGSN_Tests.ttcn:MASKED TC_hlr_location_cancel_request_update testcase
    </failure>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_withdraw' time='MASKED'>
    <error type='DTE'>Dynamic test case error: Guard timer has expired. Execution of current test case will be interrupted.</error>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_unknown_subscriber_withdraw' time='MASKED'>
    <error type='DTE'>Dynamic test case error: Guard timer has expired. Execution of current test case will be interrupted.</error>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_hlr_location_cancel_request_unknown_subscriber_update' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_rau_unknown' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_rau' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_unknown_nopoweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_unknown_poweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_nopoweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_detach_poweroff' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_pdp_act_unattached' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_ggsn_reject' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user_deact_mo' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_pdp_act_user_deact_mt' time='MASKED'/>
  <testcase classname='SGSN_Tests' name='TC_attach_second_attempt' time='MASKED'>
    <error type='DTE'></error>
  </testcase>
  <testcase classname='SGSN_Tests' name='TC_attach_restart_ctr_dettach' time='MASKED'/>
</testsuite>
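Since the expectation is nothing more than a previous run's JUnit XML,
adopting a new baseline after an intentional change of behavior is a plain
copy, e.g. (output file name illustrative):

  cp junit-output.xml expected-results.log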