add compare-results.sh, call from start-testsuite.sh

Compare the current test results to the expected results, and exit with an
error on discrepancies.

Add compare-results.sh: (trivially) grep the junit XML output to determine
which tests passed and which didn't, and compare against an
expected-results.log, another junit file from a previous run. Summarize and
determine success.

Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL".

In various subdirs, copy the current jenkins jobs' junit XML outputs as
expected-results.log, so that we start getting useful output in both jenkins
runs and manual local runs.

In start-testsuite.sh, after running the tests, invoke the results comparison.

Due to the single-line parsing, the script so far does not distinguish
between error and failure. I doubt that we actually need that distinction,
though.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
2018-04-05 14:56:38 +00:00
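The grep-based comparison the commit message describes could be sketched roughly as follows. This is a minimal illustration, not the actual compare-results.sh; the `classify`/`compare` helper names, the temp-file paths, and the demo data are all made up:

```shell
#!/bin/sh
# Sketch of the single-line junit comparison idea; NOT the real
# compare-results.sh. A passing TTCN-3 test case is emitted as a
# self-closing <testcase .../> line, a failing one as an open <testcase>
# tag (its <failure>/<error> children follow on later lines), so one
# grep plus a per-line case is enough to classify results.

# classify FILE: print one "NAME pass" or "NAME FAIL" line per test case
classify() {
	grep '<testcase' "$1" | while IFS= read -r line; do
		name=$(printf '%s\n' "$line" | sed "s/.*name='\([^']*\)'.*/\1/")
		case "$line" in
		*'/>'*) printf '%s pass\n' "$name" ;;
		*)      printf '%s FAIL\n' "$name" ;;
		esac
	done
}

# compare CURRENT EXPECTED: a test that fails now and also failed in the
# expected results is only an "xfail"; a new failure is a real "FAIL"
# and makes the overall run unsuccessful (nonzero exit).
compare() {
	classify "$1" > /tmp/cur.$$
	classify "$2" > /tmp/exp.$$
	rc=0
	while read -r name status; do
		expected=$(awk -v n="$name" '$1 == n { print $2 }' /tmp/exp.$$)
		if [ "$status" = pass ]; then
			echo "pass $name"
		elif [ "$expected" = FAIL ]; then
			echo "xfail $name"
		else
			echo "FAIL $name"
			rc=1
		fi
	done < /tmp/cur.$$
	rm -f /tmp/cur.$$ /tmp/exp.$$
	return $rc
}

# tiny demo with made-up data: TC_b also failed before (xfail),
# TC_c is a new, unexpected failure (FAIL)
cat > /tmp/demo-cur.xml <<'EOF'
<testcase classname='X' name='TC_a' time='1'/>
<testcase classname='X' name='TC_b' time='1'>
<testcase classname='X' name='TC_c' time='1'>
EOF
cat > /tmp/demo-exp.log <<'EOF'
<testcase classname='X' name='TC_a' time='1'/>
<testcase classname='X' name='TC_b' time='1'>
<testcase classname='X' name='TC_c' time='1'/>
EOF
compare /tmp/demo-cur.xml /tmp/demo-exp.log || echo "overall: FAIL"
```

As the commit message notes, this per-line view cannot tell a `<failure>` child from an `<error>` child, since both leave the `<testcase>` tag non-self-closing.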
<?xml version="1.0"?>
<testsuite name='GGSN_Tests' tests='26' failures='1' errors='0' skipped='0' inconc='0' time='MASKED'>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact_ipcp' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact_ipcp_pap_broken' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact_pcodns' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact_gtpu_access' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_clients_interact_with_txseq' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_clients_interact_without_txseq' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact_with_single_dns' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp4_act_deact_with_separate_dns' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp6_act_deact' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp6_act_deact_pcodns' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp6_act_deact_icmp6' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp6_act_deact_gtpu_access' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp6_clients_interact' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact_ipcp' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact_icmp6' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact_pcodns4' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact_pcodns6' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact_gtpu_access' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_clients_interact' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp46_act_deact_apn4' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_echo_req_resp' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp_act2_recovery' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_act_deact_retrans_duplicate' time='MASKED'/>
<testcase classname='GGSN_Tests' name='TC_pdp_act_restart_ctr_echo' time='MASKED'/>
</testsuite>
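The XML above is an example of such an expected-results.log as checked in for the GGSN tests. The hook in start-testsuite.sh that consumes it could look roughly like this; a hedged sketch only, where the `run_testsuite` stub and all file names are stand-ins for illustration, not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of the tail end of start-testsuite.sh; run_testsuite
# and the file names are made up, not taken from the real script.

run_testsuite() {
	# stand-in for the TTCN-3 run, which writes a junit log into $1
	printf "<testcase classname='X' name='TC_a' time='1'/>\n" \
		> "$1/junit-results.log"
}

workdir=$(mktemp -d)
run_testsuite "$workdir"

# After the run: if this suite ships an expected-results.log, compare
# against it and let the comparison's exit code decide overall success,
# so both jenkins and manual local runs get a meaningful verdict.
if [ -e expected-results.log ]; then
	./compare-results.sh "$workdir/junit-results.log" expected-results.log
	exit $?
fi
echo "no expected-results.log, skipping comparison"
```

Exiting with the comparison's status is what lets jenkins mark a build failed on unexpected test failures while tolerating the known "xfail" cases.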