From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christophe Lyon <christophe.lyon@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM-AArch64/testsuite Neon intrinsics 11/20] Add vrsra_n tests.
Date: Wed, 27 May 2015 20:17:00 -0000
Message-Id: <1432757747-4891-12-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1432757747-4891-1-git-send-email-christophe.lyon@linaro.org>
References: <1432757747-4891-1-git-send-email-christophe.lyon@linaro.org>

diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vrsra_n.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vrsra_n.c
new file mode 100644
index 0000000..a9eda22
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vrsra_n.c
@@ -0,0 +1,553 @@
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+
+/* Expected results.
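+   vrsra_n(a, b, n) rounds then accumulates: each lane is computed as
+   a + ((b + (1 << (n-1))) >> n), truncated to the element width.
+   Below, "vector" is loaded from "buffer" (first int8 lane 0xf0, i.e. -16)
+   and "vector2" is set to 0x11 and shifted right by 1, so the first lane is
+   -16 + ((0x11 + 0x1) >> 1) = -16 + 9 = -7 = 0xf9.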
+   */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0xf9, 0xfa, 0xfb, 0xfc,
+        0xfd, 0xfe, 0xff, 0x0 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0xfffffffd, 0xfffffffe };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x5, 0x6, 0x7, 0x8,
+        0x9, 0xa, 0xb, 0xc };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xfffd, 0xfffe, 0xffff, 0x0 };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xfffffff4, 0xfffffff5 };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0xf9, 0xfa, 0xfb, 0xfc,
+        0xfd, 0xfe, 0xff, 0x0,
+        0x1, 0x2, 0x3, 0x4,
+        0x5, 0x6, 0x7, 0x8 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+        0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0xfffffffd, 0xfffffffe,
+        0xffffffff, 0x0 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0xfffffffffffffff0, 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x5, 0x6, 0x7, 0x8,
+        0x9, 0xa, 0xb, 0xc,
+        0xd, 0xe, 0xf, 0x10,
+        0x11, 0x12, 0x13, 0x14 };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xfffd, 0xfffe, 0xffff, 0x0,
+        0x1, 0x2, 0x3, 0x4 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xfffffff4, 0xfffffff5,
+        0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0xfffffffffffffff0,
+        0xfffffffffffffff1 };
+
+/* Expected results with max input and shift by 1.  */
+VECT_VAR_DECL(expected_max_sh1,int,8,8) [] = { 0x40, 0x40, 0x40, 0x40,
+        0x40, 0x40, 0x40, 0x40 };
+VECT_VAR_DECL(expected_max_sh1,int,16,4) [] = { 0x4000, 0x4000, 0x4000, 0x4000 };
+VECT_VAR_DECL(expected_max_sh1,int,32,2) [] = { 0x40000000, 0x40000000 };
+VECT_VAR_DECL(expected_max_sh1,int,64,1) [] = { 0x4000000000000000 };
+VECT_VAR_DECL(expected_max_sh1,uint,8,8) [] = { 0x80, 0x80, 0x80, 0x80,
+        0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_max_sh1,uint,16,4) [] = { 0x8000, 0x8000,
+        0x8000, 0x8000 };
+VECT_VAR_DECL(expected_max_sh1,uint,32,2) [] = { 0x80000000, 0x80000000 };
+VECT_VAR_DECL(expected_max_sh1,uint,64,1) [] = { 0x8000000000000000 };
+VECT_VAR_DECL(expected_max_sh1,int,8,16) [] = { 0x40, 0x40, 0x40, 0x40,
+        0x40, 0x40, 0x40, 0x40,
+        0x40, 0x40, 0x40, 0x40,
+        0x40, 0x40, 0x40, 0x40 };
+VECT_VAR_DECL(expected_max_sh1,int,16,8) [] = { 0x4000, 0x4000, 0x4000, 0x4000,
+        0x4000, 0x4000, 0x4000, 0x4000 };
+VECT_VAR_DECL(expected_max_sh1,int,32,4) [] = { 0x40000000, 0x40000000,
+        0x40000000, 0x40000000 };
+VECT_VAR_DECL(expected_max_sh1,int,64,2) [] = { 0x4000000000000000,
+        0x4000000000000000 };
+VECT_VAR_DECL(expected_max_sh1,uint,8,16) [] = { 0x80, 0x80, 0x80, 0x80,
+        0x80, 0x80, 0x80, 0x80,
+        0x80, 0x80, 0x80, 0x80,
+        0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_max_sh1,uint,16,8) [] = { 0x8000, 0x8000,
+        0x8000, 0x8000,
+        0x8000, 0x8000,
+        0x8000, 0x8000 };
+VECT_VAR_DECL(expected_max_sh1,uint,32,4) [] = { 0x80000000, 0x80000000,
+        0x80000000, 0x80000000 };
+VECT_VAR_DECL(expected_max_sh1,uint,64,2) [] = { 0x8000000000000000,
+        0x8000000000000000 };
+
+/* Expected results with max input and shift by 3.
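+   With the accumulator cleared and maximal inputs, the rounding constant is
+   1 << 2 = 4: (0x7f + 4) >> 3 = 0x10 for int8, (0xff + 4) >> 3 = 0x20 for
+   uint8, and likewise for the wider element types.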
+   */
+VECT_VAR_DECL(expected_max_sh3,int,8,8) [] = { 0x10, 0x10, 0x10, 0x10,
+        0x10, 0x10, 0x10, 0x10 };
+VECT_VAR_DECL(expected_max_sh3,int,16,4) [] = { 0x1000, 0x1000, 0x1000, 0x1000 };
+VECT_VAR_DECL(expected_max_sh3,int,32,2) [] = { 0x10000000, 0x10000000 };
+VECT_VAR_DECL(expected_max_sh3,int,64,1) [] = { 0x1000000000000000 };
+VECT_VAR_DECL(expected_max_sh3,uint,8,8) [] = { 0x20, 0x20, 0x20, 0x20,
+        0x20, 0x20, 0x20, 0x20 };
+VECT_VAR_DECL(expected_max_sh3,uint,16,4) [] = { 0x2000, 0x2000,
+        0x2000, 0x2000 };
+VECT_VAR_DECL(expected_max_sh3,uint,32,2) [] = { 0x20000000, 0x20000000 };
+VECT_VAR_DECL(expected_max_sh3,uint,64,1) [] = { 0x2000000000000000 };
+VECT_VAR_DECL(expected_max_sh3,int,8,16) [] = { 0x10, 0x10, 0x10, 0x10,
+        0x10, 0x10, 0x10, 0x10,
+        0x10, 0x10, 0x10, 0x10,
+        0x10, 0x10, 0x10, 0x10 };
+VECT_VAR_DECL(expected_max_sh3,int,16,8) [] = { 0x1000, 0x1000, 0x1000, 0x1000,
+        0x1000, 0x1000, 0x1000, 0x1000 };
+VECT_VAR_DECL(expected_max_sh3,int,32,4) [] = { 0x10000000, 0x10000000,
+        0x10000000, 0x10000000 };
+VECT_VAR_DECL(expected_max_sh3,int,64,2) [] = { 0x1000000000000000,
+        0x1000000000000000 };
+VECT_VAR_DECL(expected_max_sh3,uint,8,16) [] = { 0x20, 0x20, 0x20, 0x20,
+        0x20, 0x20, 0x20, 0x20,
+        0x20, 0x20, 0x20, 0x20,
+        0x20, 0x20, 0x20, 0x20 };
+VECT_VAR_DECL(expected_max_sh3,uint,16,8) [] = { 0x2000, 0x2000,
+        0x2000, 0x2000,
+        0x2000, 0x2000,
+        0x2000, 0x2000 };
+VECT_VAR_DECL(expected_max_sh3,uint,32,4) [] = { 0x20000000, 0x20000000,
+        0x20000000, 0x20000000 };
+VECT_VAR_DECL(expected_max_sh3,uint,64,2) [] = { 0x2000000000000000,
+        0x2000000000000000 };
+
+/* Expected results with max input and shift by type width.  */
+VECT_VAR_DECL(expected_max_shmax,int,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,int,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,int,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,int,64,1) [] = { 0x0 };
+VECT_VAR_DECL(expected_max_shmax,uint,8,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_max_shmax,uint,16,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_max_shmax,uint,32,2) [] = { 0x1, 0x1 };
+VECT_VAR_DECL(expected_max_shmax,uint,64,1) [] = { 0x1 };
+VECT_VAR_DECL(expected_max_shmax,int,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,int,32,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,int,64,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_max_shmax,uint,8,16) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_max_shmax,uint,16,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_max_shmax,uint,32,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_max_shmax,uint,64,2) [] = { 0x1, 0x1 };
+
+/* Expected results with min negative input and shift by 1.
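+   Only the signed variants are re-run with the minimum negative input:
+   (-128 + 1) >> 1 = -64 = 0xc0 for int8, and similarly for wider types.
+   The unsigned lanes keep the 0x1 results of the previous shift-by-max
+   run, hence the 0x1 entries below.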
+   */
+VECT_VAR_DECL(expected_min_sh1,int,8,8) [] = { 0xc0, 0xc0, 0xc0, 0xc0,
+        0xc0, 0xc0, 0xc0, 0xc0 };
+VECT_VAR_DECL(expected_min_sh1,int,16,4) [] = { 0xc000, 0xc000, 0xc000, 0xc000 };
+VECT_VAR_DECL(expected_min_sh1,int,32,2) [] = { 0xc0000000, 0xc0000000 };
+VECT_VAR_DECL(expected_min_sh1,int,64,1) [] = { 0xc000000000000000 };
+VECT_VAR_DECL(expected_min_sh1,uint,8,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh1,uint,16,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh1,uint,32,2) [] = { 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh1,uint,64,1) [] = { 0x1 };
+VECT_VAR_DECL(expected_min_sh1,int,8,16) [] = { 0xc0, 0xc0, 0xc0, 0xc0,
+        0xc0, 0xc0, 0xc0, 0xc0,
+        0xc0, 0xc0, 0xc0, 0xc0,
+        0xc0, 0xc0, 0xc0, 0xc0 };
+VECT_VAR_DECL(expected_min_sh1,int,16,8) [] = { 0xc000, 0xc000, 0xc000, 0xc000,
+        0xc000, 0xc000, 0xc000, 0xc000 };
+VECT_VAR_DECL(expected_min_sh1,int,32,4) [] = { 0xc0000000, 0xc0000000,
+        0xc0000000, 0xc0000000 };
+VECT_VAR_DECL(expected_min_sh1,int,64,2) [] = { 0xc000000000000000,
+        0xc000000000000000 };
+VECT_VAR_DECL(expected_min_sh1,uint,8,16) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh1,uint,16,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh1,uint,32,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh1,uint,64,2) [] = { 0x1, 0x1 };
+
+/* Expected results with min negative input and shift by 3.  */
+VECT_VAR_DECL(expected_min_sh3,int,8,8) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
+        0xf0, 0xf0, 0xf0, 0xf0 };
+VECT_VAR_DECL(expected_min_sh3,int,16,4) [] = { 0xf000, 0xf000, 0xf000, 0xf000 };
+VECT_VAR_DECL(expected_min_sh3,int,32,2) [] = { 0xf0000000, 0xf0000000 };
+VECT_VAR_DECL(expected_min_sh3,int,64,1) [] = { 0xf000000000000000 };
+VECT_VAR_DECL(expected_min_sh3,uint,8,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh3,uint,16,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh3,uint,32,2) [] = { 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh3,uint,64,1) [] = { 0x1 };
+VECT_VAR_DECL(expected_min_sh3,int,8,16) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
+        0xf0, 0xf0, 0xf0, 0xf0,
+        0xf0, 0xf0, 0xf0, 0xf0,
+        0xf0, 0xf0, 0xf0, 0xf0 };
+VECT_VAR_DECL(expected_min_sh3,int,16,8) [] = { 0xf000, 0xf000, 0xf000, 0xf000,
+        0xf000, 0xf000, 0xf000, 0xf000 };
+VECT_VAR_DECL(expected_min_sh3,int,32,4) [] = { 0xf0000000, 0xf0000000,
+        0xf0000000, 0xf0000000 };
+VECT_VAR_DECL(expected_min_sh3,int,64,2) [] = { 0xf000000000000000,
+        0xf000000000000000 };
+VECT_VAR_DECL(expected_min_sh3,uint,8,16) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh3,uint,16,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh3,uint,32,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_sh3,uint,64,2) [] = { 0x1, 0x1 };
+
+/* Expected results with min negative input and shift by type width.
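+   Shifting by the full type width adds a rounding constant of half the
+   range, e.g. (-128 + 0x80) >> 8 = 0 for int8, so all signed lanes are 0.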
+   */
+VECT_VAR_DECL(expected_min_shmax,int,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,int,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,int,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,int,64,1) [] = { 0x0 };
+VECT_VAR_DECL(expected_min_shmax,uint,8,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_shmax,uint,16,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_shmax,uint,32,2) [] = { 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_shmax,uint,64,1) [] = { 0x1 };
+VECT_VAR_DECL(expected_min_shmax,int,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,int,32,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,int,64,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_min_shmax,uint,8,16) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_shmax,uint,16,8) [] = { 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_shmax,uint,32,4) [] = { 0x1, 0x1, 0x1, 0x1 };
+VECT_VAR_DECL(expected_min_shmax,uint,64,2) [] = { 0x1, 0x1 };
+
+#define TEST_MSG "VRSRA_N"
+void exec_vrsra_n (void)
+{
+  /* Basic test: y=vrsra_n(x,v), then store the result.  */
+#define TEST_VRSRA_N(Q, T1, T2, W, N, V) \
+  VECT_VAR(vector_res, T1, W, N) = \
+    vrsra##Q##_n_##T2##W(VECT_VAR(vector, T1, W, N), \
+                         VECT_VAR(vector2, T1, W, N), \
+                         V); \
+  vst1##Q##_##T2##W(VECT_VAR(result, T1, W, N), VECT_VAR(vector_res, T1, W, N))
+
+  DECL_VARIABLE_ALL_VARIANTS(vector);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  clean_results ();
+
+  /* Initialize input "vector" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
+
+  /* Choose arbitrary initialization values.  */
+  VDUP(vector2, , int, s, 8, 8, 0x11);
+  VDUP(vector2, , int, s, 16, 4, 0x22);
+  VDUP(vector2, , int, s, 32, 2, 0x33);
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 8, 8, 0x55);
+  VDUP(vector2, , uint, u, 16, 4, 0x66);
+  VDUP(vector2, , uint, u, 32, 2, 0x77);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+
+  VDUP(vector2, q, int, s, 8, 16, 0x11);
+  VDUP(vector2, q, int, s, 16, 8, 0x22);
+  VDUP(vector2, q, int, s, 32, 4, 0x33);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 8, 16, 0x55);
+  VDUP(vector2, q, uint, u, 16, 8, 0x66);
+  VDUP(vector2, q, uint, u, 32, 4, 0x77);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+  /* Choose shift amount arbitrarily.
+     */
+  TEST_VRSRA_N(, int, s, 8, 8, 1);
+  TEST_VRSRA_N(, int, s, 16, 4, 12);
+  TEST_VRSRA_N(, int, s, 32, 2, 2);
+  TEST_VRSRA_N(, int, s, 64, 1, 32);
+  TEST_VRSRA_N(, uint, u, 8, 8, 2);
+  TEST_VRSRA_N(, uint, u, 16, 4, 3);
+  TEST_VRSRA_N(, uint, u, 32, 2, 5);
+  TEST_VRSRA_N(, uint, u, 64, 1, 33);
+
+  TEST_VRSRA_N(q, int, s, 8, 16, 1);
+  TEST_VRSRA_N(q, int, s, 16, 8, 12);
+  TEST_VRSRA_N(q, int, s, 32, 4, 2);
+  TEST_VRSRA_N(q, int, s, 64, 2, 32);
+  TEST_VRSRA_N(q, uint, u, 8, 16, 2);
+  TEST_VRSRA_N(q, uint, u, 16, 8, 3);
+  TEST_VRSRA_N(q, uint, u, 32, 4, 5);
+  TEST_VRSRA_N(q, uint, u, 64, 2, 33);
+
+#define CMT ""
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected, CMT);
+
+
+  /* Initialize the accumulator with 0.  */
+  VDUP(vector, , int, s, 8, 8, 0);
+  VDUP(vector, , int, s, 16, 4, 0);
+  VDUP(vector, , int, s, 32, 2, 0);
+  VDUP(vector, , int, s, 64, 1, 0);
+  VDUP(vector, , uint, u, 8, 8, 0);
+  VDUP(vector, , uint, u, 16, 4, 0);
+  VDUP(vector, , uint, u, 32, 2, 0);
+  VDUP(vector, , uint, u, 64, 1, 0);
+  VDUP(vector, q, int, s, 8, 16, 0);
+  VDUP(vector, q, int, s, 16, 8, 0);
+  VDUP(vector, q, int, s, 32, 4, 0);
+  VDUP(vector, q, int, s, 64, 2, 0);
+  VDUP(vector, q, uint, u, 8, 16, 0);
+  VDUP(vector, q, uint, u, 16, 8, 0);
+  VDUP(vector, q, uint, u, 32, 4, 0);
+  VDUP(vector, q, uint, u, 64, 2, 0);
+
+  /* Initialize with max values to check overflow.  */
+  VDUP(vector2, , int, s, 8, 8, 0x7F);
+  VDUP(vector2, , int, s, 16, 4, 0x7FFF);
+  VDUP(vector2, , int, s, 32, 2, 0x7FFFFFFF);
+  VDUP(vector2, , int, s, 64, 1, 0x7FFFFFFFFFFFFFFFLL);
+  VDUP(vector2, , uint, u, 8, 8, 0xFF);
+  VDUP(vector2, , uint, u, 16, 4, 0xFFFF);
+  VDUP(vector2, , uint, u, 32, 2, 0xFFFFFFFF);
+  VDUP(vector2, , uint, u, 64, 1, 0xFFFFFFFFFFFFFFFFULL);
+  VDUP(vector2, q, int, s, 8, 16, 0x7F);
+  VDUP(vector2, q, int, s, 16, 8, 0x7FFF);
+  VDUP(vector2, q, int, s, 32, 4, 0x7FFFFFFF);
+  VDUP(vector2, q, int, s, 64, 2, 0x7FFFFFFFFFFFFFFFLL);
+  VDUP(vector2, q, uint, u, 8, 16, 0xFF);
+  VDUP(vector2, q, uint, u, 16, 8, 0xFFFF);
+  VDUP(vector2, q, uint, u, 32, 4, 0xFFFFFFFF);
+  VDUP(vector2, q, uint, u, 64, 2, 0xFFFFFFFFFFFFFFFFULL);
+
+  /* Shift by 1 to check overflow with rounding constant.
+     */
+  TEST_VRSRA_N(, int, s, 8, 8, 1);
+  TEST_VRSRA_N(, int, s, 16, 4, 1);
+  TEST_VRSRA_N(, int, s, 32, 2, 1);
+  TEST_VRSRA_N(, int, s, 64, 1, 1);
+  TEST_VRSRA_N(, uint, u, 8, 8, 1);
+  TEST_VRSRA_N(, uint, u, 16, 4, 1);
+  TEST_VRSRA_N(, uint, u, 32, 2, 1);
+  TEST_VRSRA_N(, uint, u, 64, 1, 1);
+  TEST_VRSRA_N(q, int, s, 8, 16, 1);
+  TEST_VRSRA_N(q, int, s, 16, 8, 1);
+  TEST_VRSRA_N(q, int, s, 32, 4, 1);
+  TEST_VRSRA_N(q, int, s, 64, 2, 1);
+  TEST_VRSRA_N(q, uint, u, 8, 16, 1);
+  TEST_VRSRA_N(q, uint, u, 16, 8, 1);
+  TEST_VRSRA_N(q, uint, u, 32, 4, 1);
+  TEST_VRSRA_N(q, uint, u, 64, 2, 1);
+
+#undef CMT
+#define CMT " (checking overflow: shift by 1, max input)"
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_max_sh1, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_max_sh1, CMT);
+
+
+  /* Shift by 3 to check overflow with rounding constant.  */
+  TEST_VRSRA_N(, int, s, 8, 8, 3);
+  TEST_VRSRA_N(, int, s, 16, 4, 3);
+  TEST_VRSRA_N(, int, s, 32, 2, 3);
+  TEST_VRSRA_N(, int, s, 64, 1, 3);
+  TEST_VRSRA_N(, uint, u, 8, 8, 3);
+  TEST_VRSRA_N(, uint, u, 16, 4, 3);
+  TEST_VRSRA_N(, uint, u, 32, 2, 3);
+  TEST_VRSRA_N(, uint, u, 64, 1, 3);
+  TEST_VRSRA_N(q, int, s, 8, 16, 3);
+  TEST_VRSRA_N(q, int, s, 16, 8, 3);
+  TEST_VRSRA_N(q, int, s, 32, 4, 3);
+  TEST_VRSRA_N(q, int, s, 64, 2, 3);
+  TEST_VRSRA_N(q, uint, u, 8, 16, 3);
+  TEST_VRSRA_N(q, uint, u, 16, 8, 3);
+  TEST_VRSRA_N(q, uint, u, 32, 4, 3);
+  TEST_VRSRA_N(q, uint, u, 64, 2, 3);
+
+#undef CMT
+#define CMT " (checking overflow: shift by 3, max input)"
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_max_sh3, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_max_sh3, CMT);
+
+
+  /* Shift by max to check overflow with rounding constant.
+     */
+  TEST_VRSRA_N(, int, s, 8, 8, 8);
+  TEST_VRSRA_N(, int, s, 16, 4, 16);
+  TEST_VRSRA_N(, int, s, 32, 2, 32);
+  TEST_VRSRA_N(, int, s, 64, 1, 64);
+  TEST_VRSRA_N(, uint, u, 8, 8, 8);
+  TEST_VRSRA_N(, uint, u, 16, 4, 16);
+  TEST_VRSRA_N(, uint, u, 32, 2, 32);
+  TEST_VRSRA_N(, uint, u, 64, 1, 64);
+  TEST_VRSRA_N(q, int, s, 8, 16, 8);
+  TEST_VRSRA_N(q, int, s, 16, 8, 16);
+  TEST_VRSRA_N(q, int, s, 32, 4, 32);
+  TEST_VRSRA_N(q, int, s, 64, 2, 64);
+  TEST_VRSRA_N(q, uint, u, 8, 16, 8);
+  TEST_VRSRA_N(q, uint, u, 16, 8, 16);
+  TEST_VRSRA_N(q, uint, u, 32, 4, 32);
+  TEST_VRSRA_N(q, uint, u, 64, 2, 64);
+
+#undef CMT
+#define CMT " (checking overflow: shift by max, max input)"
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_max_shmax, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_max_shmax, CMT);
+
+
+  /* Initialize with min values to check overflow.  */
+  VDUP(vector2, , int, s, 8, 8, 0x80);
+  VDUP(vector2, , int, s, 16, 4, 0x8000);
+  VDUP(vector2, , int, s, 32, 2, 0x80000000);
+  VDUP(vector2, , int, s, 64, 1, 0x8000000000000000LL);
+  VDUP(vector2, q, int, s, 8, 16, 0x80);
+  VDUP(vector2, q, int, s, 16, 8, 0x8000);
+  VDUP(vector2, q, int, s, 32, 4, 0x80000000);
+  VDUP(vector2, q, int, s, 64, 2, 0x8000000000000000ULL);
+
+  /* Shift by 1 to check overflow with rounding constant.
+     */
+  TEST_VRSRA_N(, int, s, 8, 8, 1);
+  TEST_VRSRA_N(, int, s, 16, 4, 1);
+  TEST_VRSRA_N(, int, s, 32, 2, 1);
+  TEST_VRSRA_N(, int, s, 64, 1, 1);
+  TEST_VRSRA_N(q, int, s, 8, 16, 1);
+  TEST_VRSRA_N(q, int, s, 16, 8, 1);
+  TEST_VRSRA_N(q, int, s, 32, 4, 1);
+  TEST_VRSRA_N(q, int, s, 64, 2, 1);
+
+#undef CMT
+#define CMT " (checking overflow: shift by 1, min negative input)"
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_min_sh1, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_min_sh1, CMT);
+
+
+  /* Shift by 3 to check overflow with rounding constant.  */
+  TEST_VRSRA_N(, int, s, 8, 8, 3);
+  TEST_VRSRA_N(, int, s, 16, 4, 3);
+  TEST_VRSRA_N(, int, s, 32, 2, 3);
+  TEST_VRSRA_N(, int, s, 64, 1, 3);
+  TEST_VRSRA_N(q, int, s, 8, 16, 3);
+  TEST_VRSRA_N(q, int, s, 16, 8, 3);
+  TEST_VRSRA_N(q, int, s, 32, 4, 3);
+  TEST_VRSRA_N(q, int, s, 64, 2, 3);
+
+#undef CMT
+#define CMT " (checking overflow: shift by 3, min negative input)"
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_min_sh3, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_min_sh3, CMT);
+
+
+  /* Shift by max to check overflow with rounding constant.
+     */
+  TEST_VRSRA_N(, int, s, 8, 8, 8);
+  TEST_VRSRA_N(, int, s, 16, 4, 16);
+  TEST_VRSRA_N(, int, s, 32, 2, 32);
+  TEST_VRSRA_N(, int, s, 64, 1, 64);
+  TEST_VRSRA_N(q, int, s, 8, 16, 8);
+  TEST_VRSRA_N(q, int, s, 16, 8, 16);
+  TEST_VRSRA_N(q, int, s, 32, 4, 32);
+  TEST_VRSRA_N(q, int, s, 64, 2, 64);
+
+#undef CMT
+#define CMT " (checking overflow: shift by max, min negative input)"
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_min_shmax, CMT);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_min_shmax, CMT);
+}
+
+int main (void)
+{
+  exec_vrsra_n ();
+  return 0;
+}
-- 
2.1.4