From: "Roger Sayle"
To: "'GCC Patches'"
Cc: "'Uros Bizjak'", "'Jakub Jelinek'"
Subject: [PATCH Take #2] x86_64: Expand ashrv1ti (and PR target/102986)
Date: Sun, 31 Oct 2021 10:02:02 -0000
Message-ID: <030001d7ce3e$5b4742d0$11d5c870$@nextmovesoftware.com>
List-Id: Gcc-patches mailing list

Very many thanks to Jakub for proof-reading my patch, catching my silly
GNU-style mistakes and making excellent suggestions.  This revised patch
incorporates all of his feedback, and has been tested on x86_64-pc-linux-gnu
with make bootstrap and make -k check with no new failures.

2021-10-31  Roger Sayle
	    Jakub Jelinek

gcc/ChangeLog
	PR target/102986
	* config/i386/i386-expand.c (ix86_expand_v1ti_to_ti,
	ix86_expand_ti_to_v1ti): New helper functions.
	(ix86_expand_v1ti_shift): Check if the amount operand is an
	integer constant, and expand as a TImode shift if it isn't.
	(ix86_expand_v1ti_rotate): Check if the amount operand is an
	integer constant, and expand as a TImode rotate if it isn't.
	(ix86_expand_v1ti_ashiftrt): New function to expand arithmetic
	right shifts of V1TImode quantities.
	* config/i386/i386-protos.h (ix86_expand_v1ti_ashiftrt): Prototype.
	* config/i386/sse.md (ashlv1ti3, lshrv1ti3): Change constraints
	to QImode general_operand, and let the helper functions lower
	shifts by non-constant operands, as TImode shifts.  Make
	conditional on TARGET_64BIT.
	(ashrv1ti3): New expander calling ix86_expand_v1ti_ashiftrt.
	(rotlv1ti3, rotrv1ti3): Change shift operand to QImode.  Make
	conditional on TARGET_64BIT.

gcc/testsuite/ChangeLog
	PR target/102986
	* gcc.target/i386/sse2-v1ti-ashiftrt-1.c: New test case.
	* gcc.target/i386/sse2-v1ti-ashiftrt-2.c: New test case.
	* gcc.target/i386/sse2-v1ti-ashiftrt-3.c: New test case.
	* gcc.target/i386/sse2-v1ti-shift-2.c: New test case.
	* gcc.target/i386/sse2-v1ti-shift-3.c: New test case.

Thanks.
Roger
--

-----Original Message-----
From: Jakub Jelinek
Sent: 30 October 2021 11:30
To: Roger Sayle
Cc: 'GCC Patches'; 'Uros Bizjak'
Subject: Re: [PATCH] x86_64: Expand ashrv1ti (and PR target/102986)

On Sat, Oct 30, 2021 at 11:16:41AM +0100, Roger Sayle wrote:
> 2021-10-30  Roger Sayle
>
> gcc/ChangeLog
> 	PR target/102986
> 	* config/i386/i386-expand.c (ix86_expand_v1ti_to_ti,
> 	ix86_expand_ti_to_v1ti): New helper functions.
> 	(ix86_expand_v1ti_shift): Check if the amount operand is an
> 	integer constant, and expand as a TImode shift if it isn't.
> 	(ix86_expand_v1ti_rotate): Check if the amount operand is an
> 	integer constant, and expand as a TImode rotate if it isn't.
> 	(ix86_expand_v1ti_ashiftrt): New function to expand arithmetic
> 	right shifts of V1TImode quantities.
> 	* config/i386/i386-protos.h (ix86_expand_v1ti_ashift): Prototype.
> 	* config/i386/sse.md (ashlv1ti3, lshrv1ti3): Change constraints
> 	to QImode general_operand, and let the helper functions lower
> 	shifts by non-constant operands, as TImode shifts.
> 	(ashrv1ti3): New expander calling ix86_expand_v1ti_ashiftrt.
> 	(rotlv1ti3, rotrv1ti3): Change shift operand to QImode.
>
> gcc/testsuite/ChangeLog
> 	PR target/102986
> 	* gcc.target/i386/sse2-v1ti-ashiftrt-1.c: New test case.
> 	* gcc.target/i386/sse2-v1ti-ashiftrt-2.c: New test case.
> 	* gcc.target/i386/sse2-v1ti-ashiftrt-3.c: New test case.
> 	* gcc.target/i386/sse2-v1ti-shift-2.c: New test case.
> 	* gcc.target/i386/sse2-v1ti-shift-3.c: New test case.
>
> Sorry again for the breakage in my last patch.  I wasn't testing things
> that shouldn't have been affected/changed.

Not a review, will defer that to Uros, but just nits:

> +/* Expand move of V1TI mode register X to a new TI mode register.  */
> +static rtx ix86_expand_v1ti_to_ti (rtx x)

ix86_expand_v1ti_to_ti should be at the start of the next line, so
static rtx
ix86_expand_v1ti_to_ti (rtx x)
Ditto for other functions, and also in the functions you added in the
previous patch.

> +  emit_insn (code == ASHIFT ? gen_ashlti3(tmp2, tmp1, operands[2])
> +			    : gen_lshrti3(tmp2, tmp1, operands[2]));

Space before ( twice.

> +  emit_insn (code == ROTATE ? gen_rotlti3(tmp2, tmp1, operands[2])
> +			    : gen_rotrti3(tmp2, tmp1, operands[2]));

Likewise.

> +  emit_insn (gen_ashrti3(tmp2, tmp1, operands[2]));

Similarly.

Also, I wonder whether all these patterns (previously and now added)
shouldn't have && TARGET_64BIT in their conditions.  I mean, we don't
really support scalar TImode for ia32, but VALID_SSE_REG_MODE includes
V1TImode, and while the constant shifts can be done, I think the variable
shifts can't, as there are no TImode shift patterns...
Jakub ------=_NextPart_000_0301_01D7CE3E.5B49B3D0 Content-Type: text/plain; name="patchv4.txt" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="patchv4.txt" diff --git a/gcc/config/i386/i386-expand.c = b/gcc/config/i386/i386-expand.c=0A= index 4c3800e..db967e4 100644=0A= --- a/gcc/config/i386/i386-expand.c=0A= +++ b/gcc/config/i386/i386-expand.c=0A= @@ -6157,12 +6157,52 @@ ix86_split_lshr (rtx *operands, rtx scratch, = machine_mode mode)=0A= }=0A= }=0A= =0A= +/* Expand move of V1TI mode register X to a new TI mode register. */=0A= +static rtx=0A= +ix86_expand_v1ti_to_ti (rtx x)=0A= +{=0A= + rtx result =3D gen_reg_rtx (TImode);=0A= + emit_move_insn (result, gen_lowpart (TImode, x));=0A= + return result;=0A= +}=0A= +=0A= +/* Expand move of TI mode register X to a new V1TI mode register. */=0A= +static rtx=0A= +ix86_expand_ti_to_v1ti (rtx x)=0A= +{=0A= + rtx result =3D gen_reg_rtx (V1TImode);=0A= + if (TARGET_SSE2)=0A= + {=0A= + rtx lo =3D gen_lowpart (DImode, x);=0A= + rtx hi =3D gen_highpart (DImode, x);=0A= + rtx tmp =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_vec_concatv2di (tmp, lo, hi));=0A= + emit_move_insn (result, gen_lowpart (V1TImode, tmp));=0A= + }=0A= + else=0A= + emit_move_insn (result, gen_lowpart (V1TImode, x));=0A= + return result;=0A= +}=0A= +=0A= /* Expand V1TI mode shift (of rtx_code CODE) by constant. */=0A= -void ix86_expand_v1ti_shift (enum rtx_code code, rtx operands[])=0A= +void=0A= +ix86_expand_v1ti_shift (enum rtx_code code, rtx operands[])=0A= {=0A= - HOST_WIDE_INT bits =3D INTVAL (operands[2]) & 127;=0A= rtx op1 =3D force_reg (V1TImode, operands[1]);=0A= =0A= + if (!CONST_INT_P (operands[2]))=0A= + {=0A= + rtx tmp1 =3D ix86_expand_v1ti_to_ti (op1);=0A= + rtx tmp2 =3D gen_reg_rtx (TImode);=0A= + emit_insn (code =3D=3D ASHIFT ? 
gen_ashlti3 (tmp2, tmp1, = operands[2])=0A= + : gen_lshrti3 (tmp2, tmp1, operands[2]));=0A= + rtx tmp3 =3D ix86_expand_ti_to_v1ti (tmp2);=0A= + emit_move_insn (operands[0], tmp3);=0A= + return;=0A= + }=0A= +=0A= + HOST_WIDE_INT bits =3D INTVAL (operands[2]) & 127;=0A= +=0A= if (bits =3D=3D 0)=0A= {=0A= emit_move_insn (operands[0], op1);=0A= @@ -6173,7 +6213,7 @@ void ix86_expand_v1ti_shift (enum rtx_code code, = rtx operands[])=0A= {=0A= rtx tmp =3D gen_reg_rtx (V1TImode);=0A= if (code =3D=3D ASHIFT)=0A= - emit_insn (gen_sse2_ashlv1ti3 (tmp, op1, GEN_INT (bits)));=0A= + emit_insn (gen_sse2_ashlv1ti3 (tmp, op1, GEN_INT (bits)));=0A= else=0A= emit_insn (gen_sse2_lshrv1ti3 (tmp, op1, GEN_INT (bits)));=0A= emit_move_insn (operands[0], tmp);=0A= @@ -6228,11 +6268,24 @@ void ix86_expand_v1ti_shift (enum rtx_code code, = rtx operands[])=0A= }=0A= =0A= /* Expand V1TI mode rotate (of rtx_code CODE) by constant. */=0A= -void ix86_expand_v1ti_rotate (enum rtx_code code, rtx operands[])=0A= +void=0A= +ix86_expand_v1ti_rotate (enum rtx_code code, rtx operands[])=0A= {=0A= - HOST_WIDE_INT bits =3D INTVAL (operands[2]) & 127;=0A= rtx op1 =3D force_reg (V1TImode, operands[1]);=0A= =0A= + if (!CONST_INT_P (operands[2]))=0A= + {=0A= + rtx tmp1 =3D ix86_expand_v1ti_to_ti (op1);=0A= + rtx tmp2 =3D gen_reg_rtx (TImode);=0A= + emit_insn (code =3D=3D ROTATE ? gen_rotlti3 (tmp2, tmp1, = operands[2])=0A= + : gen_rotrti3 (tmp2, tmp1, operands[2]));=0A= + rtx tmp3 =3D ix86_expand_ti_to_v1ti (tmp2);=0A= + emit_move_insn (operands[0], tmp3);=0A= + return;=0A= + }=0A= +=0A= + HOST_WIDE_INT bits =3D INTVAL (operands[2]) & 127;=0A= +=0A= if (bits =3D=3D 0)=0A= {=0A= emit_move_insn (operands[0], op1);=0A= @@ -6320,6 +6373,469 @@ void ix86_expand_v1ti_rotate (enum rtx_code = code, rtx operands[])=0A= emit_move_insn (operands[0], tmp4);=0A= }=0A= =0A= +/* Expand V1TI mode ashiftrt by constant. 
*/=0A= +void=0A= +ix86_expand_v1ti_ashiftrt (rtx operands[])=0A= +{=0A= + rtx op1 =3D force_reg (V1TImode, operands[1]);=0A= +=0A= + if (!CONST_INT_P (operands[2]))=0A= + {=0A= + rtx tmp1 =3D ix86_expand_v1ti_to_ti (op1);=0A= + rtx tmp2 =3D gen_reg_rtx (TImode);=0A= + emit_insn (gen_ashrti3 (tmp2, tmp1, operands[2]));=0A= + rtx tmp3 =3D ix86_expand_ti_to_v1ti (tmp2);=0A= + emit_move_insn (operands[0], tmp3);=0A= + return;=0A= + }=0A= +=0A= + HOST_WIDE_INT bits =3D INTVAL (operands[2]) & 127;=0A= +=0A= + if (bits =3D=3D 0)=0A= + {=0A= + emit_move_insn (operands[0], op1);=0A= + return;=0A= + }=0A= +=0A= + if (bits =3D=3D 127)=0A= + {=0A= + /* Two operations. */=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_sse2_pshufd (tmp2, tmp1, GEN_INT (0xff)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp2, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V1TImode, tmp3));=0A= + emit_move_insn (operands[0], tmp4);=0A= + return;=0A= + }=0A= +=0A= + if (bits =3D=3D 64)=0A= + {=0A= + /* Three operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_sse2_pshufd (tmp2, tmp1, GEN_INT (0xff)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp2, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V2DImode, tmp1));=0A= + emit_move_insn (tmp5, gen_lowpart (V2DImode, tmp3));=0A= + emit_insn (gen_vec_interleave_highv2di (tmp6, tmp4, tmp5));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V1TImode, tmp6));=0A= + emit_move_insn (operands[0], tmp7);=0A= + return;=0A= + }=0A= +=0A= + if (bits =3D=3D 96)=0A= + {=0A= + /* Three operations. */=0A= + rtx tmp3 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_ashrv4si3 (tmp2, tmp1, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp3, gen_lowpart (V2DImode, tmp1));=0A= + emit_move_insn (tmp4, gen_lowpart (V2DImode, tmp2));=0A= + emit_insn (gen_vec_interleave_highv2di (tmp5, tmp3, tmp4));=0A= +=0A= + rtx tmp6 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp7 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp6, gen_lowpart (V4SImode, tmp5));=0A= + emit_insn (gen_sse2_pshufd (tmp7, tmp6, GEN_INT (0xfd)));=0A= +=0A= + rtx tmp8 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp8, gen_lowpart (V1TImode, tmp7));=0A= + emit_move_insn (operands[0], tmp8);=0A= + return;=0A= + }=0A= +=0A= + if (TARGET_AVX2 || TARGET_SSE4_1)=0A= + {=0A= + /* Three operations. 
*/=0A= + if (bits =3D=3D 32)=0A= + {=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_ashrv4si3 (tmp2, tmp1, GEN_INT (31)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp3, op1, GEN_INT (32)));=0A= +=0A= + if (TARGET_AVX2)=0A= + {=0A= + rtx tmp4 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V4SImode, tmp3));=0A= + emit_insn (gen_avx2_pblenddv4si (tmp5, tmp2, tmp4,=0A= + GEN_INT (7)));=0A= +=0A= + rtx tmp6 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp6, gen_lowpart (V1TImode, tmp5));=0A= + emit_move_insn (operands[0], tmp6);=0A= + }=0A= + else=0A= + {=0A= + rtx tmp4 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V8HImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V8HImode, tmp2));=0A= + emit_move_insn (tmp5, gen_lowpart (V8HImode, tmp3));=0A= + emit_insn (gen_sse4_1_pblendw (tmp6, tmp4, tmp5,=0A= + GEN_INT (0x3f)));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V1TImode, tmp6));=0A= + emit_move_insn (operands[0], tmp7);=0A= + }=0A= + return;=0A= + }=0A= +=0A= + /* Three operations. 
*/=0A= + if (bits =3D=3D 8 || bits =3D=3D 16 || bits =3D=3D 24)=0A= + {=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_ashrv4si3 (tmp2, tmp1, GEN_INT (bits)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp3, op1, GEN_INT (bits)));=0A= +=0A= + if (TARGET_AVX2)=0A= + {=0A= + rtx tmp4 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V4SImode, tmp3));=0A= + emit_insn (gen_avx2_pblenddv4si (tmp5, tmp2, tmp4,=0A= + GEN_INT (7)));=0A= +=0A= + rtx tmp6 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp6, gen_lowpart (V1TImode, tmp5));=0A= + emit_move_insn (operands[0], tmp6);=0A= + }=0A= + else=0A= + {=0A= + rtx tmp4 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V8HImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V8HImode, tmp2));=0A= + emit_move_insn (tmp5, gen_lowpart (V8HImode, tmp3));=0A= + emit_insn (gen_sse4_1_pblendw (tmp6, tmp4, tmp5,=0A= + GEN_INT (0x3f)));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V1TImode, tmp6));=0A= + emit_move_insn (operands[0], tmp7);=0A= + }=0A= + return;=0A= + }=0A= + }=0A= +=0A= + if (bits > 96)=0A= + {=0A= + /* Four operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_ashrv4si3 (tmp2, tmp1, GEN_INT (bits - 96)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp1, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V2DImode, tmp2));=0A= + emit_move_insn (tmp5, gen_lowpart (V2DImode, tmp3));=0A= + emit_insn (gen_vec_interleave_highv2di (tmp6, tmp4, tmp5));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp8 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V4SImode, tmp6));=0A= + emit_insn (gen_sse2_pshufd (tmp8, tmp7, GEN_INT (0xfd)));=0A= +=0A= + rtx tmp9 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp9, gen_lowpart (V1TImode, tmp8));=0A= + emit_move_insn (operands[0], tmp9);=0A= + return;=0A= + }=0A= +=0A= + if (TARGET_SSE4_1 && (bits =3D=3D 48 || bits =3D=3D 80))=0A= + {=0A= + /* Four operations. */=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_sse2_pshufd (tmp2, tmp1, GEN_INT (0xff)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp2, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp4, op1, GEN_INT (bits)));=0A= +=0A= + rtx tmp5 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp7 =3D gen_reg_rtx (V8HImode);=0A= + emit_move_insn (tmp5, gen_lowpart (V8HImode, tmp3));=0A= + emit_move_insn (tmp6, gen_lowpart (V8HImode, tmp4));=0A= + emit_insn (gen_sse4_1_pblendw (tmp7, tmp5, tmp6,=0A= + GEN_INT (bits =3D=3D 48 ? 
0x1f : 0x07)));=0A= +=0A= + rtx tmp8 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp8, gen_lowpart (V1TImode, tmp7));=0A= + emit_move_insn (operands[0], tmp8);=0A= + return;=0A= + }=0A= +=0A= + if ((bits & 7) =3D=3D 0)=0A= + {=0A= + /* Five operations. */=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_sse2_pshufd (tmp2, tmp1, GEN_INT (0xff)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp2, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp4, op1, GEN_INT (bits)));=0A= +=0A= + rtx tmp5 =3D gen_reg_rtx (V1TImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp5, gen_lowpart (V1TImode, tmp3));=0A= + emit_insn (gen_sse2_ashlv1ti3 (tmp6, tmp5, GEN_INT (128 - bits)));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp8 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp9 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V2DImode, tmp4));=0A= + emit_move_insn (tmp8, gen_lowpart (V2DImode, tmp6));=0A= + emit_insn (gen_iorv2di3 (tmp9, tmp7, tmp8));=0A= +=0A= + rtx tmp10 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp10, gen_lowpart (V1TImode, tmp9));=0A= + emit_move_insn (operands[0], tmp10);=0A= + return;=0A= + }=0A= +=0A= + if (TARGET_AVX2 && bits < 32)=0A= + {=0A= + /* Six operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_ashrv4si3 (tmp2, tmp1, GEN_INT (bits)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp3, op1, GEN_INT (64)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V2DImode, op1));=0A= + emit_insn (gen_lshrv2di3 (tmp5, tmp4, GEN_INT (bits)));=0A= +=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp7 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp6, gen_lowpart (V2DImode, tmp3));=0A= + emit_insn (gen_ashlv2di3 (tmp7, tmp6, GEN_INT (64 - bits)));=0A= +=0A= + rtx tmp8 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp8, tmp5, tmp7));=0A= +=0A= + rtx tmp9 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp10 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp9, gen_lowpart (V4SImode, tmp8));=0A= + emit_insn (gen_avx2_pblenddv4si (tmp10, tmp2, tmp9, GEN_INT (7)));=0A= +=0A= + rtx tmp11 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp11, gen_lowpart (V1TImode, tmp10));=0A= + emit_move_insn (operands[0], tmp11);=0A= + return;=0A= + }=0A= +=0A= + if (TARGET_SSE4_1 && bits < 15)=0A= + {=0A= + /* Six operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_ashrv4si3 (tmp2, tmp1, GEN_INT (bits)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp3, op1, GEN_INT (64)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V2DImode, op1));=0A= + emit_insn (gen_lshrv2di3 (tmp5, tmp4, GEN_INT (bits)));=0A= +=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp7 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp6, gen_lowpart (V2DImode, tmp3));=0A= + emit_insn (gen_ashlv2di3 (tmp7, tmp6, GEN_INT (64 - bits)));=0A= +=0A= + rtx tmp8 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp8, tmp5, tmp7));=0A= +=0A= + rtx tmp9 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp10 =3D gen_reg_rtx (V8HImode);=0A= + rtx tmp11 =3D gen_reg_rtx (V8HImode);=0A= + emit_move_insn (tmp9, gen_lowpart (V8HImode, tmp2));=0A= + emit_move_insn (tmp10, gen_lowpart (V8HImode, tmp8));=0A= + emit_insn (gen_sse4_1_pblendw (tmp11, tmp9, tmp10, GEN_INT = (0x3f)));=0A= +=0A= + rtx tmp12 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp12, gen_lowpart (V1TImode, tmp11));=0A= + emit_move_insn (operands[0], tmp12);=0A= + return;=0A= + }=0A= +=0A= + if (bits =3D=3D 1)=0A= + {=0A= + /* Eight operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp1, op1, GEN_INT (64)));=0A= +=0A= + rtx tmp2 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp3 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp2, gen_lowpart (V2DImode, op1));=0A= + emit_insn (gen_lshrv2di3 (tmp3, tmp2, GEN_INT (1)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp4, gen_lowpart (V2DImode, tmp1));=0A= + emit_insn (gen_ashlv2di3 (tmp5, tmp4, GEN_INT (63)));=0A= +=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp6, tmp3, tmp5));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_lshrv2di3 (tmp7, tmp2, GEN_INT (63)));=0A= +=0A= + rtx tmp8 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp9 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp8, gen_lowpart (V4SImode, tmp7));=0A= + emit_insn (gen_sse2_pshufd (tmp9, tmp8, GEN_INT (0xbf)));=0A= +=0A= + rtx tmp10 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp11 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp10, gen_lowpart (V2DImode, tmp9));=0A= + emit_insn (gen_ashlv2di3 (tmp11, tmp10, GEN_INT (31)));=0A= +=0A= + rtx tmp12 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp12, tmp6, tmp11));=0A= +=0A= + rtx tmp13 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp13, gen_lowpart (V1TImode, tmp12));=0A= + emit_move_insn (operands[0], tmp13);=0A= + return;=0A= + }=0A= +=0A= + if (bits > 64)=0A= + {=0A= + /* Eight operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_sse2_pshufd (tmp2, tmp1, GEN_INT (0xff)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp2, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp4, op1, GEN_INT (64)));=0A= +=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp5, gen_lowpart (V2DImode, tmp4));=0A= + emit_insn (gen_lshrv2di3 (tmp6, tmp5, GEN_INT (bits - 64)));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V1TImode);=0A= + rtx tmp8 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V1TImode, tmp3));=0A= + emit_insn (gen_sse2_ashlv1ti3 (tmp8, tmp7, GEN_INT (64)));=0A= + =0A= + rtx tmp9 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp10 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp9, gen_lowpart (V2DImode, tmp3));=0A= + emit_insn (gen_ashlv2di3 (tmp10, tmp9, GEN_INT (128 - bits)));=0A= +=0A= + rtx tmp11 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp12 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp11, gen_lowpart (V2DImode, tmp8));=0A= + emit_insn (gen_iorv2di3 (tmp12, tmp10, tmp11));=0A= +=0A= + rtx tmp13 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp13, tmp6, tmp12));=0A= +=0A= + rtx tmp14 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp14, gen_lowpart (V1TImode, tmp13));=0A= + emit_move_insn (operands[0], tmp14);=0A= + }=0A= + else=0A= + {=0A= + /* Nine operations. 
*/=0A= + rtx tmp1 =3D gen_reg_rtx (V4SImode);=0A= + rtx tmp2 =3D gen_reg_rtx (V4SImode);=0A= + emit_move_insn (tmp1, gen_lowpart (V4SImode, op1));=0A= + emit_insn (gen_sse2_pshufd (tmp2, tmp1, GEN_INT (0xff)));=0A= +=0A= + rtx tmp3 =3D gen_reg_rtx (V4SImode);=0A= + emit_insn (gen_ashrv4si3 (tmp3, tmp2, GEN_INT (31)));=0A= +=0A= + rtx tmp4 =3D gen_reg_rtx (V1TImode);=0A= + emit_insn (gen_sse2_lshrv1ti3 (tmp4, op1, GEN_INT (64)));=0A= +=0A= + rtx tmp5 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp6 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp5, gen_lowpart (V2DImode, op1));=0A= + emit_insn (gen_lshrv2di3 (tmp6, tmp5, GEN_INT (bits)));=0A= +=0A= + rtx tmp7 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp8 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp7, gen_lowpart (V2DImode, tmp4));=0A= + emit_insn (gen_ashlv2di3 (tmp8, tmp7, GEN_INT (64 - bits)));=0A= +=0A= + rtx tmp9 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp9, tmp6, tmp8));=0A= +=0A= + rtx tmp10 =3D gen_reg_rtx (V1TImode);=0A= + rtx tmp11 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp10, gen_lowpart (V1TImode, tmp3));=0A= + emit_insn (gen_sse2_ashlv1ti3 (tmp11, tmp10, GEN_INT (64)));=0A= +=0A= + rtx tmp12 =3D gen_reg_rtx (V2DImode);=0A= + rtx tmp13 =3D gen_reg_rtx (V2DImode);=0A= + emit_move_insn (tmp12, gen_lowpart (V2DImode, tmp11));=0A= + emit_insn (gen_ashlv2di3 (tmp13, tmp12, GEN_INT (64 - bits)));=0A= +=0A= + rtx tmp14 =3D gen_reg_rtx (V2DImode);=0A= + emit_insn (gen_iorv2di3 (tmp14, tmp9, tmp13));=0A= +=0A= + rtx tmp15 =3D gen_reg_rtx (V1TImode);=0A= + emit_move_insn (tmp15, gen_lowpart (V1TImode, tmp14));=0A= + emit_move_insn (operands[0], tmp15);=0A= + }=0A= +}=0A= +=0A= /* Return mode for the memcpy/memset loop counter. Prefer SImode over=0A= DImode for constant loop counts. 
*/=0A= =0A= diff --git a/gcc/config/i386/i386-protos.h = b/gcc/config/i386/i386-protos.h=0A= index 9918a28..bd52450 100644=0A= --- a/gcc/config/i386/i386-protos.h=0A= +++ b/gcc/config/i386/i386-protos.h=0A= @@ -161,6 +161,7 @@ extern void ix86_split_ashr (rtx *, rtx, = machine_mode);=0A= extern void ix86_split_lshr (rtx *, rtx, machine_mode);=0A= extern void ix86_expand_v1ti_shift (enum rtx_code, rtx[]);=0A= extern void ix86_expand_v1ti_rotate (enum rtx_code, rtx[]);=0A= +extern void ix86_expand_v1ti_ashiftrt (rtx[]);=0A= extern rtx ix86_find_base_term (rtx);=0A= extern bool ix86_check_movabs (rtx, int);=0A= extern bool ix86_check_no_addr_space (rtx);=0A= diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md=0A= index bdc6067..3307c1b 100644=0A= --- a/gcc/config/i386/sse.md=0A= +++ b/gcc/config/i386/sse.md=0A= @@ -15079,8 +15079,8 @@=0A= [(set (match_operand:V1TI 0 "register_operand")=0A= (ashift:V1TI=0A= (match_operand:V1TI 1 "register_operand")=0A= - (match_operand:SI 2 "const_int_operand")))]=0A= - "TARGET_SSE2"=0A= + (match_operand:QI 2 "general_operand")))]=0A= + "TARGET_SSE2 && TARGET_64BIT"=0A= {=0A= ix86_expand_v1ti_shift (ASHIFT, operands);=0A= DONE;=0A= @@ -15090,19 +15090,30 @@=0A= [(set (match_operand:V1TI 0 "register_operand")=0A= (lshiftrt:V1TI=0A= (match_operand:V1TI 1 "register_operand")=0A= - (match_operand:SI 2 "const_int_operand")))]=0A= - "TARGET_SSE2"=0A= + (match_operand:QI 2 "general_operand")))]=0A= + "TARGET_SSE2 && TARGET_64BIT"=0A= {=0A= ix86_expand_v1ti_shift (LSHIFTRT, operands);=0A= DONE;=0A= })=0A= =0A= +(define_expand "ashrv1ti3"=0A= + [(set (match_operand:V1TI 0 "register_operand")=0A= + (ashiftrt:V1TI=0A= + (match_operand:V1TI 1 "register_operand")=0A= + (match_operand:QI 2 "general_operand")))]=0A= + "TARGET_SSE2 && TARGET_64BIT"=0A= +{=0A= + ix86_expand_v1ti_ashiftrt (operands);=0A= + DONE;=0A= +})=0A= +=0A= (define_expand "rotlv1ti3"=0A= [(set (match_operand:V1TI 0 "register_operand")=0A= (rotate:V1TI=0A= 
         (match_operand:V1TI 1 "register_operand")
-        (match_operand:SI 2 "const_int_operand")))]
-  "TARGET_SSE2"
+        (match_operand:QI 2 "const_int_operand")))]
+  "TARGET_SSE2 && TARGET_64BIT"
 {
   ix86_expand_v1ti_rotate (ROTATE, operands);
   DONE;
@@ -15112,8 +15123,8 @@
   [(set (match_operand:V1TI 0 "register_operand")
        (rotatert:V1TI
         (match_operand:V1TI 1 "register_operand")
-        (match_operand:SI 2 "const_int_operand")))]
-  "TARGET_SSE2"
+        (match_operand:QI 2 "const_int_operand")))]
+  "TARGET_SSE2 && TARGET_64BIT"
 {
   ix86_expand_v1ti_rotate (ROTATERT, operands);
   DONE;
diff --git a/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-1.c b/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-1.c
new file mode 100644
index 0000000..05869bf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-1.c
@@ -0,0 +1,167 @@
+/* { dg-do run { target int128 } } */
+/* { dg-options "-O2 -msse2" } */
+/* { dg-require-effective-target sse2 } */
+
+typedef __int128 v1ti __attribute__ ((__vector_size__ (16)));
+typedef __int128 ti;
+
+ti ashr(ti x, unsigned int i) { return x >> i; }
+
+v1ti ashr_1(v1ti x) { return x >> 1; }
+v1ti ashr_2(v1ti x) { return x >> 2; }
+v1ti ashr_7(v1ti x) { return x >> 7; }
+v1ti ashr_8(v1ti x) { return x >> 8; }
+v1ti ashr_9(v1ti x) { return x >> 9; }
+v1ti ashr_15(v1ti x) { return x >> 15; }
+v1ti ashr_16(v1ti x) { return x >> 16; }
+v1ti ashr_17(v1ti x) { return x >> 17; }
+v1ti ashr_23(v1ti x) { return x >> 23; }
+v1ti ashr_24(v1ti x) { return x >> 24; }
+v1ti ashr_25(v1ti x) { return x >> 25; }
+v1ti ashr_31(v1ti x) { return x >> 31; }
+v1ti ashr_32(v1ti x) { return x >> 32; }
+v1ti ashr_33(v1ti x) { return x >> 33; }
+v1ti ashr_47(v1ti x) { return x >> 47; }
+v1ti ashr_48(v1ti x) { return x >> 48; }
+v1ti ashr_49(v1ti x) { return x >> 49; }
+v1ti ashr_63(v1ti x) { return x >> 63; }
+v1ti ashr_64(v1ti x) { return x >> 64; }
+v1ti ashr_65(v1ti x) { return x >> 65; }
+v1ti ashr_72(v1ti x) { return x >> 72; }
+v1ti ashr_79(v1ti x) { return x >> 79; }
+v1ti ashr_80(v1ti x) { return x >> 80; }
+v1ti ashr_81(v1ti x) { return x >> 81; }
+v1ti ashr_95(v1ti x) { return x >> 95; }
+v1ti ashr_96(v1ti x) { return x >> 96; }
+v1ti ashr_97(v1ti x) { return x >> 97; }
+v1ti ashr_111(v1ti x) { return x >> 111; }
+v1ti ashr_112(v1ti x) { return x >> 112; }
+v1ti ashr_113(v1ti x) { return x >> 113; }
+v1ti ashr_119(v1ti x) { return x >> 119; }
+v1ti ashr_120(v1ti x) { return x >> 120; }
+v1ti ashr_121(v1ti x) { return x >> 121; }
+v1ti ashr_126(v1ti x) { return x >> 126; }
+v1ti ashr_127(v1ti x) { return x >> 127; }
+
+typedef v1ti (*fun)(v1ti);
+
+struct {
+  unsigned int i;
+  fun ashr;
+} table[35] = {
+  { 1, ashr_1 },
+  { 2, ashr_2 },
+  { 7, ashr_7 },
+  { 8, ashr_8 },
+  { 9, ashr_9 },
+  { 15, ashr_15 },
+  { 16, ashr_16 },
+  { 17, ashr_17 },
+  { 23, ashr_23 },
+  { 24, ashr_24 },
+  { 25, ashr_25 },
+  { 31, ashr_31 },
+  { 32, ashr_32 },
+  { 33, ashr_33 },
+  { 47, ashr_47 },
+  { 48, ashr_48 },
+  { 49, ashr_49 },
+  { 63, ashr_63 },
+  { 64, ashr_64 },
+  { 65, ashr_65 },
+  { 72, ashr_72 },
+  { 79, ashr_79 },
+  { 80, ashr_80 },
+  { 81, ashr_81 },
+  { 95, ashr_95 },
+  { 96, ashr_96 },
+  { 97, ashr_97 },
+  { 111, ashr_111 },
+  { 112, ashr_112 },
+  { 113, ashr_113 },
+  { 119, ashr_119 },
+  { 120, ashr_120 },
+  { 121, ashr_121 },
+  { 126, ashr_126 },
+  { 127, ashr_127 }
+};
+
+void test(ti x)
+{
+  unsigned int i;
+  v1ti t = (v1ti)x;
+
+  for (i=0; i<(sizeof(table)/sizeof(table[0])); i++) {
+    if ((ti)(*table[i].ashr)(t) != ashr(x,table[i].i))
+      __builtin_abort();
+  }
+}
+
+int main()
+{
+  ti x;
+
+  x = ((ti)0x0011223344556677ull)<<64 | 0x8899aabbccddeeffull;
+  test(x);
+  x = ((ti)0xffeeddccbbaa9988ull)<<64 | 0x7766554433221100ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = 0;
+  test(x);
+  x = 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64 | 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0x5a5a5a5a5a5a5a5aull)<<64 | 0x5a5a5a5a5a5a5a5aull;
+  test(x);
+  x = ((ti)0xa5a5a5a5a5a5a5a5ull)<<64 | 0xa5a5a5a5a5a5a5a5ull;
+  test(x);
+  x = 0xffull;
+  test(x);
+  x = 0xff00ull;
+  test(x);
+  x = 0xff0000ull;
+  test(x);
+  x = 0xff000000ull;
+  test(x);
+  x = 0xff00000000ull;
+  test(x);
+  x = 0xff0000000000ull;
+  test(x);
+  x = 0xff000000000000ull;
+  test(x);
+  x = 0xff00000000000000ull;
+  test(x);
+  x = ((ti)0xffull)<<64;
+  test(x);
+  x = ((ti)0xff00ull)<<64;
+  test(x);
+  x = ((ti)0xff0000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000ull)<<64;
+  test(x);
+  x = ((ti)0xff0000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000000000ull)<<64;
+  test(x);
+  x = 0xdeadbeefcafebabeull;
+  test(x);
+  x = ((ti)0xdeadbeefcafebabeull)<<64;
+  test(x);
+
+  return 0;
+}
+
diff --git a/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-2.c b/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-2.c
new file mode 100644
index 0000000..b3d0aa3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-2.c
@@ -0,0 +1,166 @@
+/* { dg-do compile { target int128 } } */
+/* { dg-options "-O2 -msse2 -mavx2 " } */
+
+typedef __int128 v1ti __attribute__ ((__vector_size__ (16)));
+typedef __int128 ti;
+
+ti ashr(ti x, unsigned int i) { return x >> i; }
+
+v1ti ashr_1(v1ti x) { return x >> 1; }
+v1ti ashr_2(v1ti x) { return x >> 2; }
+v1ti ashr_7(v1ti x) { return x >> 7; }
+v1ti ashr_8(v1ti x) { return x >> 8; }
+v1ti ashr_9(v1ti x) { return x >> 9; }
+v1ti ashr_15(v1ti x) { return x >> 15; }
+v1ti ashr_16(v1ti x) { return x >> 16; }
+v1ti ashr_17(v1ti x) { return x >> 17; }
+v1ti ashr_23(v1ti x) { return x >> 23; }
+v1ti ashr_24(v1ti x) { return x >> 24; }
+v1ti ashr_25(v1ti x) { return x >> 25; }
+v1ti ashr_31(v1ti x) { return x >> 31; }
+v1ti ashr_32(v1ti x) { return x >> 32; }
+v1ti ashr_33(v1ti x) { return x >> 33; }
+v1ti ashr_47(v1ti x) { return x >> 47; }
+v1ti ashr_48(v1ti x) { return x >> 48; }
+v1ti ashr_49(v1ti x) { return x >> 49; }
+v1ti ashr_63(v1ti x) { return x >> 63; }
+v1ti ashr_64(v1ti x) { return x >> 64; }
+v1ti ashr_65(v1ti x) { return x >> 65; }
+v1ti ashr_72(v1ti x) { return x >> 72; }
+v1ti ashr_79(v1ti x) { return x >> 79; }
+v1ti ashr_80(v1ti x) { return x >> 80; }
+v1ti ashr_81(v1ti x) { return x >> 81; }
+v1ti ashr_95(v1ti x) { return x >> 95; }
+v1ti ashr_96(v1ti x) { return x >> 96; }
+v1ti ashr_97(v1ti x) { return x >> 97; }
+v1ti ashr_111(v1ti x) { return x >> 111; }
+v1ti ashr_112(v1ti x) { return x >> 112; }
+v1ti ashr_113(v1ti x) { return x >> 113; }
+v1ti ashr_119(v1ti x) { return x >> 119; }
+v1ti ashr_120(v1ti x) { return x >> 120; }
+v1ti ashr_121(v1ti x) { return x >> 121; }
+v1ti ashr_126(v1ti x) { return x >> 126; }
+v1ti ashr_127(v1ti x) { return x >> 127; }
+
+typedef v1ti (*fun)(v1ti);
+
+struct {
+  unsigned int i;
+  fun ashr;
+} table[35] = {
+  { 1, ashr_1 },
+  { 2, ashr_2 },
+  { 7, ashr_7 },
+  { 8, ashr_8 },
+  { 9, ashr_9 },
+  { 15, ashr_15 },
+  { 16, ashr_16 },
+  { 17, ashr_17 },
+  { 23, ashr_23 },
+  { 24, ashr_24 },
+  { 25, ashr_25 },
+  { 31, ashr_31 },
+  { 32, ashr_32 },
+  { 33, ashr_33 },
+  { 47, ashr_47 },
+  { 48, ashr_48 },
+  { 49, ashr_49 },
+  { 63, ashr_63 },
+  { 64, ashr_64 },
+  { 65, ashr_65 },
+  { 72, ashr_72 },
+  { 79, ashr_79 },
+  { 80, ashr_80 },
+  { 81, ashr_81 },
+  { 95, ashr_95 },
+  { 96, ashr_96 },
+  { 97, ashr_97 },
+  { 111, ashr_111 },
+  { 112, ashr_112 },
+  { 113, ashr_113 },
+  { 119, ashr_119 },
+  { 120, ashr_120 },
+  { 121, ashr_121 },
+  { 126, ashr_126 },
+  { 127, ashr_127 }
+};
+
+void test(ti x)
+{
+  unsigned int i;
+  v1ti t = (v1ti)x;
+
+  for (i=0; i<(sizeof(table)/sizeof(table[0])); i++) {
+    if ((ti)(*table[i].ashr)(t) != ashr(x,table[i].i))
+      __builtin_abort();
+  }
+}
+
+int main()
+{
+  ti x;
+
+  x = ((ti)0x0011223344556677ull)<<64 | 0x8899aabbccddeeffull;
+  test(x);
+  x = ((ti)0xffeeddccbbaa9988ull)<<64 | 0x7766554433221100ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = 0;
+  test(x);
+  x = 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64 | 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0x5a5a5a5a5a5a5a5aull)<<64 | 0x5a5a5a5a5a5a5a5aull;
+  test(x);
+  x = ((ti)0xa5a5a5a5a5a5a5a5ull)<<64 | 0xa5a5a5a5a5a5a5a5ull;
+  test(x);
+  x = 0xffull;
+  test(x);
+  x = 0xff00ull;
+  test(x);
+  x = 0xff0000ull;
+  test(x);
+  x = 0xff000000ull;
+  test(x);
+  x = 0xff00000000ull;
+  test(x);
+  x = 0xff0000000000ull;
+  test(x);
+  x = 0xff000000000000ull;
+  test(x);
+  x = 0xff00000000000000ull;
+  test(x);
+  x = ((ti)0xffull)<<64;
+  test(x);
+  x = ((ti)0xff00ull)<<64;
+  test(x);
+  x = ((ti)0xff0000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000ull)<<64;
+  test(x);
+  x = ((ti)0xff0000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000000000ull)<<64;
+  test(x);
+  x = 0xdeadbeefcafebabeull;
+  test(x);
+  x = ((ti)0xdeadbeefcafebabeull)<<64;
+  test(x);
+
+  return 0;
+}
+
diff --git a/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-3.c b/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-3.c
new file mode 100644
index 0000000..61d4f4c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/sse2-v1ti-ashiftrt-3.c
@@ -0,0 +1,166 @@
+/* { dg-do compile { target int128 } } */
+/* { dg-options "-O2 -msse2 -msse4.1" } */
+
+typedef __int128 v1ti __attribute__ ((__vector_size__ (16)));
+typedef __int128 ti;
+
+ti ashr(ti x, unsigned int i) { return x >> i; }
+
+v1ti ashr_1(v1ti x) { return x >> 1; }
+v1ti ashr_2(v1ti x) { return x >> 2; }
+v1ti ashr_7(v1ti x) { return x >> 7; }
+v1ti ashr_8(v1ti x) { return x >> 8; }
+v1ti ashr_9(v1ti x) { return x >> 9; }
+v1ti ashr_15(v1ti x) { return x >> 15; }
+v1ti ashr_16(v1ti x) { return x >> 16; }
+v1ti ashr_17(v1ti x) { return x >> 17; }
+v1ti ashr_23(v1ti x) { return x >> 23; }
+v1ti ashr_24(v1ti x) { return x >> 24; }
+v1ti ashr_25(v1ti x) { return x >> 25; }
+v1ti ashr_31(v1ti x) { return x >> 31; }
+v1ti ashr_32(v1ti x) { return x >> 32; }
+v1ti ashr_33(v1ti x) { return x >> 33; }
+v1ti ashr_47(v1ti x) { return x >> 47; }
+v1ti ashr_48(v1ti x) { return x >> 48; }
+v1ti ashr_49(v1ti x) { return x >> 49; }
+v1ti ashr_63(v1ti x) { return x >> 63; }
+v1ti ashr_64(v1ti x) { return x >> 64; }
+v1ti ashr_65(v1ti x) { return x >> 65; }
+v1ti ashr_72(v1ti x) { return x >> 72; }
+v1ti ashr_79(v1ti x) { return x >> 79; }
+v1ti ashr_80(v1ti x) { return x >> 80; }
+v1ti ashr_81(v1ti x) { return x >> 81; }
+v1ti ashr_95(v1ti x) { return x >> 95; }
+v1ti ashr_96(v1ti x) { return x >> 96; }
+v1ti ashr_97(v1ti x) { return x >> 97; }
+v1ti ashr_111(v1ti x) { return x >> 111; }
+v1ti ashr_112(v1ti x) { return x >> 112; }
+v1ti ashr_113(v1ti x) { return x >> 113; }
+v1ti ashr_119(v1ti x) { return x >> 119; }
+v1ti ashr_120(v1ti x) { return x >> 120; }
+v1ti ashr_121(v1ti x) { return x >> 121; }
+v1ti ashr_126(v1ti x) { return x >> 126; }
+v1ti ashr_127(v1ti x) { return x >> 127; }
+
+typedef v1ti (*fun)(v1ti);
+
+struct {
+  unsigned int i;
+  fun ashr;
+} table[35] = {
+  { 1, ashr_1 },
+  { 2, ashr_2 },
+  { 7, ashr_7 },
+  { 8, ashr_8 },
+  { 9, ashr_9 },
+  { 15, ashr_15 },
+  { 16, ashr_16 },
+  { 17, ashr_17 },
+  { 23, ashr_23 },
+  { 24, ashr_24 },
+  { 25, ashr_25 },
+  { 31, ashr_31 },
+  { 32, ashr_32 },
+  { 33, ashr_33 },
+  { 47, ashr_47 },
+  { 48, ashr_48 },
+  { 49, ashr_49 },
+  { 63, ashr_63 },
+  { 64, ashr_64 },
+  { 65, ashr_65 },
+  { 72, ashr_72 },
+  { 79, ashr_79 },
+  { 80, ashr_80 },
+  { 81, ashr_81 },
+  { 95, ashr_95 },
+  { 96, ashr_96 },
+  { 97, ashr_97 },
+  { 111, ashr_111 },
+  { 112, ashr_112 },
+  { 113, ashr_113 },
+  { 119, ashr_119 },
+  { 120, ashr_120 },
+  { 121, ashr_121 },
+  { 126, ashr_126 },
+  { 127, ashr_127 }
+};
+
+void test(ti x)
+{
+  unsigned int i;
+  v1ti t = (v1ti)x;
+
+  for (i=0; i<(sizeof(table)/sizeof(table[0])); i++) {
+    if ((ti)(*table[i].ashr)(t) != ashr(x,table[i].i))
+      __builtin_abort();
+  }
+}
+
+int main()
+{
+  ti x;
+
+  x = ((ti)0x0011223344556677ull)<<64 | 0x8899aabbccddeeffull;
+  test(x);
+  x = ((ti)0xffeeddccbbaa9988ull)<<64 | 0x7766554433221100ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = 0;
+  test(x);
+  x = 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64 | 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0x5a5a5a5a5a5a5a5aull)<<64 | 0x5a5a5a5a5a5a5a5aull;
+  test(x);
+  x = ((ti)0xa5a5a5a5a5a5a5a5ull)<<64 | 0xa5a5a5a5a5a5a5a5ull;
+  test(x);
+  x = 0xffull;
+  test(x);
+  x = 0xff00ull;
+  test(x);
+  x = 0xff0000ull;
+  test(x);
+  x = 0xff000000ull;
+  test(x);
+  x = 0xff00000000ull;
+  test(x);
+  x = 0xff0000000000ull;
+  test(x);
+  x = 0xff000000000000ull;
+  test(x);
+  x = 0xff00000000000000ull;
+  test(x);
+  x = ((ti)0xffull)<<64;
+  test(x);
+  x = ((ti)0xff00ull)<<64;
+  test(x);
+  x = ((ti)0xff0000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000ull)<<64;
+  test(x);
+  x = ((ti)0xff0000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000000000ull)<<64;
+  test(x);
+  x = 0xdeadbeefcafebabeull;
+  test(x);
+  x = ((ti)0xdeadbeefcafebabeull)<<64;
+  test(x);
+
+  return 0;
+}
+
diff --git a/gcc/testsuite/gcc.target/i386/sse2-v1ti-shift-2.c b/gcc/testsuite/gcc.target/i386/sse2-v1ti-shift-2.c
new file mode 100644
index 0000000..18da2ef
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/sse2-v1ti-shift-2.c
@@ -0,0 +1,13 @@
+/* PR target/102986 */
+/* { dg-do compile { target int128 } } */
+/* { dg-options "-O2 -msse2" } */
+
+typedef unsigned __int128 uv1ti __attribute__ ((__vector_size__ (16)));
+typedef __int128 sv1ti __attribute__ ((__vector_size__ (16)));
+
+uv1ti ashl(uv1ti x, unsigned int i) { return x << i; }
+uv1ti lshr(uv1ti x, unsigned int i) { return x >> i; }
+sv1ti ashr(sv1ti x, unsigned int i) { return x >> i; }
+uv1ti rotr(uv1ti x, unsigned int i) { return (x >> i) | (x << (128-i)); }
+uv1ti rotl(uv1ti x, unsigned int i) { return (x << i) | (x >> (128-i)); }
+
diff --git a/gcc/testsuite/gcc.target/i386/sse2-v1ti-shift-3.c b/gcc/testsuite/gcc.target/i386/sse2-v1ti-shift-3.c
new file mode 100644
index 0000000..8d5c122
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/sse2-v1ti-shift-3.c
@@ -0,0 +1,113 @@
+/* PR target/102986 */
+/* { dg-do run { target int128 } } */
+/* { dg-options "-O2 -msse2" } */
+/* { dg-require-effective-target sse2 } */
+
+typedef unsigned __int128 uv1ti __attribute__ ((__vector_size__ (16)));
+typedef __int128 sv1ti __attribute__ ((__vector_size__ (16)));
+typedef __int128 v1ti __attribute__ ((__vector_size__ (16)));
+
+typedef unsigned __int128 uti;
+typedef __int128 sti;
+typedef __int128 ti;
+
+uv1ti ashl_v1ti(uv1ti x, unsigned int i) { return x << i; }
+uv1ti lshr_v1ti(uv1ti x, unsigned int i) { return x >> i; }
+sv1ti ashr_v1ti(sv1ti x, unsigned int i) { return x >> i; }
+uv1ti rotr_v1ti(uv1ti x, unsigned int i) { return (x >> i) | (x << (128-i)); }
+uv1ti rotl_v1ti(uv1ti x, unsigned int i) { return (x << i) | (x >> (128-i)); }
+
+uti ashl_ti(uti x, unsigned int i) { return x << i; }
+uti lshr_ti(uti x, unsigned int i) { return x >> i; }
+sti ashr_ti(sti x, unsigned int i) { return x >> i; }
+uti rotr_ti(uti x, unsigned int i) { return (x >> i) | (x << (128-i)); }
+uti rotl_ti(uti x, unsigned int i) { return (x << i) | (x >> (128-i)); }
+
+void test(ti x)
+{
+  unsigned int i;
+  uv1ti ut = (uv1ti)x;
+  sv1ti st = (sv1ti)x;
+
+  for (i=0; i<128; i++) {
+    if ((ti)ashl_v1ti(ut,i) != (ti)ashl_ti(x,i))
+      __builtin_abort();
+    if ((ti)lshr_v1ti(ut,i) != (ti)lshr_ti(x,i))
+      __builtin_abort();
+    if ((ti)ashr_v1ti(st,i) != (ti)ashr_ti(x,i))
+      __builtin_abort();
+    if ((ti)rotr_v1ti(ut,i) != (ti)rotr_ti(x,i))
+      __builtin_abort();
+    if ((ti)rotl_v1ti(ut,i) != (ti)rotl_ti(x,i))
+      __builtin_abort();
+  }
+}
+
+int main()
+{
+  ti x;
+
+  x = ((ti)0x0011223344556677ull)<<64 | 0x8899aabbccddeeffull;
+  test(x);
+  x = ((ti)0xffeeddccbbaa9988ull)<<64 | 0x7766554433221100ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0x0123456789abcdefull)<<64 | 0xfedcba9876543210ull;
+  test(x);
+  x = ((ti)0xfedcba9876543210ull)<<64 | 0x0123456789abcdefull;
+  test(x);
+  x = 0;
+  test(x);
+  x = 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64;
+  test(x);
+  x = ((ti)0xffffffffffffffffull)<<64 | 0xffffffffffffffffull;
+  test(x);
+  x = ((ti)0x5a5a5a5a5a5a5a5aull)<<64 | 0x5a5a5a5a5a5a5a5aull;
+  test(x);
+  x = ((ti)0xa5a5a5a5a5a5a5a5ull)<<64 | 0xa5a5a5a5a5a5a5a5ull;
+  test(x);
+  x = 0xffull;
+  test(x);
+  x = 0xff00ull;
+  test(x);
+  x = 0xff0000ull;
+  test(x);
+  x = 0xff000000ull;
+  test(x);
+  x = 0xff00000000ull;
+  test(x);
+  x = 0xff0000000000ull;
+  test(x);
+  x = 0xff000000000000ull;
+  test(x);
+  x = 0xff00000000000000ull;
+  test(x);
+  x = ((ti)0xffull)<<64;
+  test(x);
+  x = ((ti)0xff00ull)<<64;
+  test(x);
+  x = ((ti)0xff0000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000ull)<<64;
+  test(x);
+  x = ((ti)0xff0000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff000000000000ull)<<64;
+  test(x);
+  x = ((ti)0xff00000000000000ull)<<64;
+  test(x);
+  x = 0xdeadbeefcafebabeull;
+  test(x);
+  x = ((ti)0xdeadbeefcafebabeull)<<64;
+  test(x);
+
+  return 0;
+}
+