2018-01-29 10:55:20 +00:00
deps/*/

2017-04-12 10:15:38 +00:00
*.o
*.log

add compare-results.sh, call from start-testsuite.sh

Compare the current test results to the expected results, and exit in error
on any discrepancies.

Add compare-results.sh: (trivially) grep the junit xml output to determine
which tests passed and which didn't, and compare against an
expected-results.log, another junit file from a previous run. Summarize the
results and determine success.

Include an "xfail" feature: tests that are expected to fail are marked as
"xfail", unexpected failures as "FAIL".

In various subdirs, copy the current jenkins jobs' junit xml outputs as
expected-results.log, so that we start getting useful output in both jenkins
runs and manual local runs.

In start-testsuite.sh, invoke the results comparison after running the tests.

Because of the single-line parsing, the script so far does not distinguish
between error and failure; we probably don't need that distinction anyway.

Related: OS#3136
Change-Id: I87d62a8be73d73a5eeff61a842e7c27a0066079d
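The commit message above only outlines how compare-results.sh works. A
minimal sketch of that idea follows, with hypothetical function names, and
assuming (as the "single-line parsing" remark suggests) that each
`<testcase>` element, including any embedded `<failure>`, sits on one line
of the junit xml file:

```shell
#!/bin/sh
# Sketch only: function names and file layout are assumptions, not the
# actual script from this commit.

junit_results() {
    # Emit "NAME pass" or "NAME fail" for each <testcase> line in file $1.
    # Assumes one testcase per line, with any <failure> on that same line.
    grep '<testcase ' "$1" | while IFS= read -r line; do
        name=$(printf '%s\n' "$line" | sed 's/.*name="\([^"]*\)".*/\1/')
        case "$line" in
        *'<failure'*) echo "$name fail" ;;
        *)            echo "$name pass" ;;
        esac
    done
}

compare_results() {
    # $1 = expected-results log (junit file from a previous run),
    # $2 = current junit output.
    # Expected failures print as "xfail"; unexpected ones as "FAIL" and
    # make the function exit nonzero, so the caller can fail the run.
    exp=$(mktemp); cur=$(mktemp)
    junit_results "$1" > "$exp"
    junit_results "$2" > "$cur"
    rc=0
    while read -r name status; do
        expected=$(awk -v n="$name" '$1 == n { print $2 }' "$exp")
        if [ "$status" = fail ]; then
            if [ "$expected" = fail ]; then
                echo "xfail: $name"
            else
                echo "FAIL: $name"
                rc=1
            fi
        else
            echo "pass: $name"
        fi
    done < "$cur"
    rm -f "$exp" "$cur"
    return $rc
}
```

As in the commit message, this treats error and failure alike: any
`<failure` (or similar) text on the testcase line counts as a failed test.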
2018-04-05 14:56:38 +00:00
!expected-results.log

2017-12-12 13:10:36 +00:00
*.so
compile

2018-03-15 21:14:38 +00:00
*/.gitignore

2018-03-15 21:27:33 +00:00
*.cc
*.hh
!library/*.cc
!library/*.hh

2018-03-15 21:36:05 +00:00
*/*Test
*/*_Tests
selftest/Selftest
*/Makefile
!bin/Makefile
!deps/Makefile

2018-04-05 15:48:55 +00:00
.*.sw?

2019-10-09 13:36:46 +00:00
*.netcat.stderr

2020-03-03 15:47:38 +00:00
*.d

2021-06-01 03:28:28 +00:00
*.merged

2021-06-20 20:54:49 +00:00
sms.db