add compare-results.sh, call from start-testsuite.sh

Compare current test results to the expected results, and exit in error on
discrepancies.

Add compare-results.sh: (trivially) grep JUnit XML output to determine which
tests passed and which didn't, and compare against an expected-results.log,
another JUnit XML file from a previous run. Summarize and determine success.

Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL".

In various subdirs, copy the current jenkins jobs' JUnit XML outputs as
expected-results.log, so that we start getting useful output in both
jenkins runs and manual local runs.

In start-testsuite.sh, after running the tests, invoke the results comparison.

Because the parsing is line-based, the script so far does not distinguish
between error and failure; I doubt that we actually need that distinction,
though.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
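For illustration, a minimal sketch of the line-based comparison described
above (hypothetical names and structure; the actual compare-results.sh may
well differ):

#!/bin/sh
# Sketch only: compare current JUnit XML results against expected results.
current="$1"   # JUnit XML output of the run that just finished
expected="$2"  # expected-results.log: JUnit XML from a previous run

# Line-based parsing: a <testcase .../> that opens and closes on one line
# counts as "pass"; a <testcase ...> left open (followed by <failure> or
# <error> on later lines) counts as "fail" -- which is why error and
# failure verdicts are not distinguished here.
results() {
	sed -n \
	    -e "s,.*<testcase[^>]*name= *'\([^']*\)'[^>]*/>.*,\1 pass,p" \
	    -e "s,.*<testcase[^>]*name= *'\([^']*\)'[^>]*[^/]>.*,\1 fail,p" \
	    "$1"
}

cur_tmp="$(mktemp)"; exp_tmp="$(mktemp)"
results "$current" > "$cur_tmp"
results "$expected" > "$exp_tmp"

rc=0
while read -r name verdict; do
	exp_verdict="$(awk -v t="$name" '$1 == t { print $2; exit }' "$exp_tmp")"
	case "$verdict/$exp_verdict" in
	pass/*)    echo "pass $name" ;;       # passing is always fine
	fail/fail) echo "xfail $name" ;;      # expected failure
	fail/*)    echo "FAIL $name"; rc=1 ;; # unexpected failure
	esac
done < "$cur_tmp"
rm -f "$cur_tmp" "$exp_tmp"
exit $rc

start-testsuite.sh would then invoke this after the test run, along the
lines of (file name illustrative)

  compare-results.sh current-junit.xml expected-results.log || exit 1

and propagate the non-zero exit code on unexpected failures. For reference,
the SGSN testsuite's expected-results.log as captured from a current jenkins
run (timing values masked):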
<?xml version="1.0"?>
<testsuite name= 'Titan' tests= '24' failures= '5' errors= '2' skipped= '0' inconc= '0' time= 'MASKED' >
<testcase classname= 'SGSN_Tests' name= 'TC_attach' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_mnc3' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_umts_aka_umts_res' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_umts_aka_gsm_sres' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_auth_id_timeout' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_auth_sai_timeout' time= 'MASKED' >
<failure type= 'fail-verdict' > Tguard timeout
SGSN_Tests.ttcn:MASKED SGSN_Tests control part
SGSN_Tests.ttcn:MASKED TC_attach_auth_sai_timeout testcase
</failure>
</testcase>
<testcase classname= 'SGSN_Tests' name= 'TC_attach_auth_sai_reject' time= 'MASKED' >
<failure type= 'fail-verdict' > Tguard timeout
SGSN_Tests.ttcn:MASKED SGSN_Tests control part
SGSN_Tests.ttcn:MASKED TC_attach_auth_sai_reject testcase
</failure>
</testcase>
<testcase classname= 'SGSN_Tests' name= 'TC_attach_gsup_lu_timeout' time= 'MASKED' >
<failure type= 'fail-verdict' > Tguard timeout
SGSN_Tests.ttcn:MASKED SGSN_Tests control part
SGSN_Tests.ttcn:MASKED TC_attach_gsup_lu_timeout testcase
</failure>
</testcase>
<testcase classname= 'SGSN_Tests' name= 'TC_attach_gsup_lu_reject' time= 'MASKED' >
<failure type= 'fail-verdict' > Tguard timeout
SGSN_Tests.ttcn:MASKED SGSN_Tests control part
SGSN_Tests.ttcn:MASKED TC_attach_gsup_lu_reject testcase
</failure>
</testcase>
<testcase classname= 'SGSN_Tests' name= 'TC_attach_combined' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_accept_all' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_closed' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_rau_unknown' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_rau' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_detach_unknown_nopoweroff' time= 'MASKED' >
<error type= 'DTE' > Dynamic test case error: Error message was received from MC: The connect operation refers to test component with component reference 77, which has already terminated.</error>
</testcase>
<testcase classname= 'SGSN_Tests' name= 'TC_detach_unknown_poweroff' time= 'MASKED' >
<error type= 'DTE' > Dynamic test case error: Error message was received from MC: The connect operation refers to test component with component reference 83, which has already terminated.</error>
</testcase>
<testcase classname= 'SGSN_Tests' name= 'TC_detach_nopoweroff' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_detach_poweroff' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_pdp_act' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_pdp_act_unattached' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_pdp_act_user' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_pdp_act_ggsn_reject' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_pdp_act_user_deact_mo' time= 'MASKED' />
<testcase classname= 'SGSN_Tests' name= 'TC_attach_pdp_act_user_deact_mt' time= 'MASKED' >
<failure type= 'fail-verdict' > Tguard timeout
SGSN_Tests.ttcn:MASKED SGSN_Tests control part
SGSN_Tests.ttcn:MASKED TC_attach_pdp_act_user_deact_mt testcase
</failure>
</testcase>
</testsuite>