From: Mark Wielaard
To: elfutils-devel@sourceware.org
Cc: Mark Wielaard
Subject: [COMMITTED] tests: Cleanup error handling and don't share cache between servers/client
Date: Thu, 9 Sep 2021 18:58:10 +0200
Message-Id: <20210909165810.26719-1-mark@klomp.org>

There were still three tests that shared a cache between the servers
and the client that queried those servers. Give them all separate
caches.

Also, the error handler for the debuginfod tests wasn't called when a
command inside a function failed. Since testrun is a function, no
metrics or error log files were listed when a testrun command failed,
making it hard to see what went wrong. Fix this by using
set -o errtrace.

Signed-off-by: Mark Wielaard
---
 tests/ChangeLog                            | 11 +++++++++++
 tests/debuginfod-subr.sh                   | 14 +++++++++-----
 tests/run-debuginfod-federation-link.sh    |  7 ++++---
 tests/run-debuginfod-federation-metrics.sh | 18 ++++++++----------
 tests/run-debuginfod-federation-sqlite.sh  | 23 ++++++++++-------------
 5 files changed, 42 insertions(+), 31 deletions(-)

diff --git a/tests/ChangeLog b/tests/ChangeLog
index 85dca442..05b31fd8 100644
--- a/tests/ChangeLog
+++ b/tests/ChangeLog
@@ -1,3 +1,14 @@
+2021-09-09  Mark Wielaard
+
+	* debuginfod-subr.sh: set -o errtrace.
+	(cleanup): Don't fail kill or wait. Only trap on normal exit.
+	(err): Don't fail curl metrics. Call cleanup.
+	* run-debuginfod-federation-link.sh: Use separate client caches
+	for both servers and debuginfod client. Remove duplicate valgrind
+	disabling.
+	* run-debuginfod-federation-metrics.sh: Likewise.
+	* run-debuginfod-federation-sqlite.sh: Likewise.
+
 2021-09-06  Dmitry V. Levin
 
 	* elfcopy.c (copy_elf): Remove cast of malloc return value.
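For reference, the debuginfod-subr.sh change below relies on bash's errtrace
behavior: without "set -o errtrace" an ERR trap is not inherited by shell
functions, so under set -e a command failing inside a function such as testrun
exits the script without ever running the trap. A minimal standalone sketch of
the difference (not part of the patch; the function name and echo text are made
up for illustration):

  #!/bin/bash
  set -e                          # the tests run with errexit (test-subr.sh sets it)
  set -o errtrace                 # drop this line and the ERR trap below never runs
  trap 'echo "error report would go here"' ERR

  testish() {                     # stand-in for a helper function like testrun
    false                         # a command failing inside the function
  }
  testish

With errtrace the ERR trap fires before the shell exits; without it the script
just dies silently, which is the problem the commit message describes.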
diff --git a/tests/debuginfod-subr.sh b/tests/debuginfod-subr.sh
index 7d238436..c21b7b8a 100755
--- a/tests/debuginfod-subr.sh
+++ b/tests/debuginfod-subr.sh
@@ -16,6 +16,9 @@
 
 # sourced from run-debuginfod-*.sh tests (must be bash scripts)
 
+# We trap ERR and like commands that fail in function to also trap
+set -o errtrace
+
 . $srcdir/test-subr.sh  # includes set -e
 
 type curl 2>/dev/null || (echo "need curl"; exit 77)
@@ -27,14 +30,14 @@ echo "zstd=$zstd bsdtar=`bsdtar --version`"
 
 cleanup()
 {
-  if [ $PID1 -ne 0 ]; then kill $PID1; wait $PID1; fi
-  if [ $PID2 -ne 0 ]; then kill $PID2; wait $PID2; fi
+  if [ $PID1 -ne 0 ]; then kill $PID1 || : ; wait $PID1 || :; fi
+  if [ $PID2 -ne 0 ]; then kill $PID2 || : ; wait $PID2 || :; fi
   rm -rf F R D L Z ${PWD}/foobar ${PWD}/mocktree ${PWD}/.client_cache* ${PWD}/tmp*
   exit_cleanup
 }
 
-# clean up trash if we were aborted early
-trap cleanup 0 1 2 3 5 9 15
+# clean up trash if we exit
+trap cleanup 0
 
 errfiles_list=
 err() {
@@ -42,7 +45,7 @@ err() {
   for port in $PORT1 $PORT2
   do
     echo ERROR REPORT $port metrics
-    curl -s http://127.0.0.1:$port/metrics
+    curl -s http://127.0.0.1:$port/metrics || :
    echo
   done
  for x in $errfiles_list
@@ -51,6 +54,7 @@ err() {
    cat $x
    echo
  done
+  cleanup
  false # trigger set -e
 }
 trap err ERR
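One note on the cleanup() change above: cleanup is now also reached from the err
handler while set -e is active, and the debuginfod servers may already have
exited, so a failing kill or wait there would otherwise risk aborting the rest
of the cleanup. Appending "|| :" makes those steps best-effort, roughly
(illustration only, not part of the patch):

  kill $PID1 || :   # ignore "no such process" if the server already exited
  wait $PID1 || :   # ignore the server's non-zero exit status (e.g. killed by signal)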
diff --git a/tests/run-debuginfod-federation-link.sh b/tests/run-debuginfod-federation-link.sh
index 050bcbcf..1347e7b8 100755
--- a/tests/run-debuginfod-federation-link.sh
+++ b/tests/run-debuginfod-federation-link.sh
@@ -98,6 +98,9 @@ wait_ready $PORT2 'thread_busy{role="http-metrics"}' 1
 
 # have clients contact the new server
 export DEBUGINFOD_URLS=http://127.0.0.1:$PORT2
+# Use fresh cache for debuginfod-find client requests
+export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
+mkdir -p $DEBUGINFOD_CACHE_PATH
 
 if type bsdtar 2>/dev/null; then
     # copy in the deb files
@@ -117,7 +120,6 @@ if type bsdtar 2>/dev/null; then
     archive_test f17a29b5a25bd4960531d82aa6b07c8abe84fa66 "" ""
 fi
 
-rm -rf $DEBUGINFOD_CACHE_PATH
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # send a request to stress XFF and User-Agent federation relay;
@@ -148,8 +150,7 @@ export DEBUGINFOD_URLS=127.0.0.1:$PORT2
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # test parallel queries in client
-export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
-mkdir -p $DEBUGINFOD_CACHE_PATH
+rm -rf $DEBUGINFOD_CACHE_PATH
 export DEBUGINFOD_URLS="BAD http://127.0.0.1:$PORT1 127.0.0.1:$PORT1 http://127.0.0.1:$PORT2 DNE"
 
 testrun ${abs_builddir}/debuginfod_build_id_find -e F/prog 1
diff --git a/tests/run-debuginfod-federation-metrics.sh b/tests/run-debuginfod-federation-metrics.sh
index 0cc4c2f7..2d0fd6d4 100755
--- a/tests/run-debuginfod-federation-metrics.sh
+++ b/tests/run-debuginfod-federation-metrics.sh
@@ -92,6 +92,10 @@ wait_ready $PORT2 'thread_busy{role="http-metrics"}' 1
 
 # have clients contact the new server
 export DEBUGINFOD_URLS=http://127.0.0.1:$PORT2
+# Use fresh cache for debuginfod-find client requests
+export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
+mkdir -p $DEBUGINFOD_CACHE_PATH
+
 if type bsdtar 2>/dev/null; then
     # copy in the deb files
     cp -rvp ${abs_srcdir}/debuginfod-debs/*deb D
@@ -110,7 +114,6 @@ if type bsdtar 2>/dev/null; then
     archive_test f17a29b5a25bd4960531d82aa6b07c8abe84fa66 "" ""
 fi
 
-rm -rf $DEBUGINFOD_CACHE_PATH
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # send a request to stress XFF and User-Agent federation relay;
@@ -171,20 +174,15 @@ curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/debuginfo > /dev/null || true
 curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/badtype > /dev/null || true
 (curl -s http://127.0.0.1:$PORT2/metrics | grep 'badtype') && false
 
-# DISABLE VALGRIND checking because valgrind might use debuginfod client
-# requests itself, causing confusion about who put what in the cache.
-# It stays disabled till the end of this test.
-unset VALGRIND_CMD
-
 # Confirm that reused curl connections survive 404 errors.
-# The rm's force an uncached fetch
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+# The rm's force an uncached fetch (in both servers and client cache)
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # Confirm that some debuginfod client pools are being used
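The sqlite variant below repeats the same cache separation. Stripped of the
test harness, the idea is just to point the debuginfod-find client at the
front-end server and give it a cache directory nobody else writes to. A rough
sketch (the port number and directory name are made up; $BUILDID stands for a
real build-id hex string):

  export DEBUGINFOD_URLS=http://127.0.0.1:9000          # hypothetical front-end server
  export DEBUGINFOD_CACHE_PATH=$PWD/.client_cache_demo  # cache used only by this client
  mkdir -p $DEBUGINFOD_CACHE_PATH
  debuginfod-find debuginfo $BUILDID                    # fetched via the server, cached locally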
diff --git a/tests/run-debuginfod-federation-sqlite.sh b/tests/run-debuginfod-federation-sqlite.sh
index 5a18b4bb..45761ed7 100755
--- a/tests/run-debuginfod-federation-sqlite.sh
+++ b/tests/run-debuginfod-federation-sqlite.sh
@@ -78,7 +78,11 @@ wait_ready $PORT2 'thread_work_total{role="traverse"}' 1
 # And initial groom cycle
 wait_ready $PORT1 'thread_work_total{role="groom"}' 1
 
-export DEBUGINFOD_URLS='http://127.0.0.1:'$PORT2
+export DEBUGINFOD_URLS='http://127.0.0.1:'$PORT2
+# Use fresh cache for debuginfod-find client requests
+export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
+mkdir -p $DEBUGINFOD_CACHE_PATH
+
 if type bsdtar 2>/dev/null; then
     # copy in the deb files
     cp -rvp ${abs_srcdir}/debuginfod-debs/*deb D
@@ -97,7 +101,6 @@ if type bsdtar 2>/dev/null; then
     archive_test f17a29b5a25bd4960531d82aa6b07c8abe84fa66 "" ""
 fi
 
-rm -rf $DEBUGINFOD_CACHE_PATH
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # send a request to stress XFF and User-Agent federation relay;
@@ -127,8 +130,7 @@ rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo # drop 000-perm negative-hit fil
 export DEBUGINFOD_URLS=127.0.0.1:$PORT2
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # test parallel queries in client
-export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
-mkdir -p $DEBUGINFOD_CACHE_PATH
+rm -rf $DEBUGINFOD_CACHE_PATH
 export DEBUGINFOD_URLS="BAD http://127.0.0.1:$PORT1 127.0.0.1:$PORT1 http://127.0.0.1:$PORT2 DNE"
 testrun ${abs_builddir}/debuginfod_build_id_find -e F/prog 1
@@ -142,20 +144,15 @@ curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/debuginfo > /dev/null || true
 curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/badtype > /dev/null || true
 (curl -s http://127.0.0.1:$PORT2/metrics | grep 'badtype') && false
 
-# DISABLE VALGRIND checking because valgrind might use debuginfod client
-# requests itself, causing confusion about who put what in the cache.
-# It stays disabled till the end of this test.
-unset VALGRIND_CMD
-
 # Confirm that reused curl connections survive 404 errors.
-# The rm's force an uncached fetch
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+# The rm's force an uncached fetch (in both servers and client cache)
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 # Trigger a flood of requests against the same archive content file.
 # Use a file that hasn't been previously extracted in to make it
-- 
2.18.4