add compare-results.sh, call from start-testsuite.sh

Compare current test results to the expected results, and exit in error on
discrepancies.

Add compare-results.sh: (trivially) grep the junit xml output to determine
which tests passed and which didn't, and compare against expected-results.log,
another junit file from a previous run. Summarize and determine success.
Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL". A sketch of how such a comparison can
work is shown below.

In various subdirs, copy the current jenkins jobs' junit xml outputs as
expected-results.log, so that we will start getting useful output in both
jenkins runs and manual local runs.

In start-testsuite.sh, after running the tests, invoke the results comparison.

Because it parses single lines only, the script so far does not distinguish
between an error and a failure; I doubt that we actually need to do that,
though.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
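For illustration, here is a minimal sketch of the kind of grep-based
comparison described above. This is not the actual compare-results.sh: the
command-line interface, helper names and output format are invented here,
and skipped tests are simply reported as not passing.

#!/bin/sh
# Hypothetical sketch of a single-line junit comparison (not the real
# compare-results.sh). Usage: compare-sketch.sh expected-results.log current.xml
set -u

expected="$1"   # junit file from a known-good earlier run
current="$2"    # junit file produced by the run that just finished

# Reduce a junit file to one "testname verdict" line per test.
# Single-line parsing: a self-closing <testcase .../> counts as a pass,
# an opening <testcase ...> (whose child elements carry the failure/error/
# skipped details) counts as not passing.
verdicts() {
    grep '<testcase' "$1" \
    | sed -e "s#.*name='\([^']*\)'.*/>.*#\1 pass#" \
          -e "s#.*name='\([^']*\)'[^/]*>.*#\1 fail#"
}

exp_tmp="$(mktemp)"
cur_tmp="$(mktemp)"
verdicts "$expected" | sort > "$exp_tmp"
verdicts "$current" | sort > "$cur_tmp"

rc=0
while read -r name verdict; do
    was="$(awk -v n="$name" '$1 == n { print $2 }' "$exp_tmp")"
    if [ "$verdict" = pass ]; then
        echo "pass  $name"
    elif [ "$was" = fail ]; then
        echo "xfail $name"   # failed, but was already failing in expected results
    else
        echo "FAIL  $name"   # unexpected failure: fail the whole run
        rc=1
    fi
done < "$cur_tmp"

rm -f "$exp_tmp" "$cur_tmp"
exit $rc

Such a script exits nonzero only when a test fails that passes in
expected-results.log, so known-broken tests show up as "xfail" without
breaking the run. The junit XML below is one such expected-results.log
snapshot, taken from the MGCP_Test suite.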
<?xml version="1.0"?>
<testsuite name='Titan' tests='35' failures='0' errors='0' skipped='1' inconc='0' time='MASKED'>
<testcase classname='MGCP_Test' name='TC_selftest' time='MASKED'>
<skipped>no verdict</skipped>
</testcase>
<testcase classname='MGCP_Test' name='TC_crcx' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_no_lco' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_noprefix' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_unsupp_mode' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_early_bidir_mode' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_unsupp_param' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_missing_callid' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_missing_mode' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_unsupp_packet_intv' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_illegal_double_lco' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_sdp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_wildcarded' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_wildcarded_exhaust' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_mdcx_without_crcx' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_dlcx_without_crcx' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_mdcx_wildcarded' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_dlcx_wildcarded' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_and_dlcx_ep_callid_connid' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_and_dlcx_ep_callid' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_and_dlcx_ep' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_and_dlcx_ep_callid_inval' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_and_dlcx_ep_callid_connid_inval' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_and_dlcx_retrans' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_crcx_dlcx_30ep' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_rtpem_selftest' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_one_crcx_receive_only_rtp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_one_crcx_loopback_rtp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_and_rtp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_and_rtp_bidir' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_diff_pt_and_rtp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_diff_pt_and_rtp_bidir' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_mdcx_and_rtp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_and_unsolicited_rtp' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_two_crcx_and_one_mdcx_rtp_ho' time='MASKED'/>
<testcase classname='MGCP_Test' name='TC_ts101318_rfc5993_rtp_conversion' time='MASKED'/>
</testsuite>