From: Yuri Rumyantsev <ysrumyan@gmail.com>
To: "Uros Bizjak" <ubizjak@gmail.com>,
gcc-patches <gcc-patches@gcc.gnu.org>,
"Richard Biener" <richard.guenther@gmail.com>,
"Igor Zamyatin" <izamyatin@gmail.com>,
"Ilya Enkovich" <enkovich.gnu@gmail.com>
Subject: Re: [off-list] Re: [PATCH PR68542]
Date: Fri, 29 Jan 2016 14:13:00 -0000 [thread overview]
Message-ID: <CAEoMCqQc_kbYCHzd6241EPP7oZExA7MYKPA5Y-DaVnMYJL3LPA@mail.gmail.com> (raw)
In-Reply-To: <CAFULd4bA1f085BsrPO87gX3iw34dVjvF2Rz5DA=r2E66TywuJQ@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 3746 bytes --]
Uros,
Here is an updated patch which includes (1) a couple of changes proposed
by Richard in tree-vect-loop.c and (2) the back-end changes proposed
by you.
Is it OK for trunk?
Bootstrap and regression testing did not show any new failures.
ChangeLog:
2016-01-29 Yuri Rumyantsev <ysrumyan@gmail.com>
PR middle-end/68542
* config/i386/i386.c (ix86_expand_branch): Add support for conditional
branch with vector comparison.
* config/i386/sse.md (Vi48_AVX): New mode iterator.
(define_expand "cbranch<mode>4"): Add support for conditional branch
with vector comparison.
* tree-vect-loop.c (optimize_mask_stores): New function.
* tree-vect-stmts.c (vectorizable_mask_load_store): Initialize
has_mask_store field of vect_info.
* tree-vectorizer.c (vectorize_loops): Invoke optimize_mask_stores for
vectorized loops having masked stores after vec_info destroy.
* tree-vectorizer.h (loop_vec_info): Add new has_mask_store field and
corresponding macros.
(optimize_mask_stores): Add prototype.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-mask-store-move-1.c: New test.
* gcc.target/i386/avx2-vect-mask-store-move1.c: Likewise.
2016-01-29 15:26 GMT+03:00 Uros Bizjak <ubizjak@gmail.com>:
> On Fri, Jan 29, 2016 at 1:20 PM, Yuri Rumyantsev <ysrumyan@gmail.com> wrote:
>> Uros,
>>
>> Thanks for your comments.
>> I deleted the swap of operands as you suggested.
>> Let me explain my point in adding support for conditional branches
>> with vector comparison.
>> This feature is used to put vectorized masked stores and their
>> producers under a guard that checks that the mask is not zero, i.e. if
>> the mask, which is the result of other vector computations, is zero,
>> we don't need to execute the corresponding masked store and its
>> producers if they don't have other uses. It means that only integer
>> 128-bit and 256-bit vectors must be accepted as operands of cbranch.
>> I did not introduce a new iterator but simply used the existing
>> iterator VI48_AVX2. BTW, you proposed to add a new iterator VI_AVX,
>> but it would be better to add VI48_AVX as
>>
>> (define_mode_iterator Vi48_AVX
>> [(V4SI "TARGET_AVX") (V2DI "TARGET_AVX")
>> (V8SI "TARGET_AVX") (V4DI "TARGET_AVX")])
>>
>> I also don't think that we need to add support in expand_compare since
>> such comparisons are not generated.
>
> OK with me. If there is no need for a cstore pattern, then the
> comparison can be integrated with the existing code in expand_branch by
> using "goto simple" as is already the case there.
>
> BR,
> Uros.
>
>> 2016-01-28 20:08 GMT+03:00 Uros Bizjak <ubizjak@gmail.com>:
>>> Yuri,
>>>
>>> please find attached a target-dependent patch that illustrates my
>>> review remarks. The patch is lightly tested, and it produces desired
>>> ptest insns on the testcases you provided.
>>>
>>> Some further remarks:
>>>
>>> + tmp = gen_rtx_fmt_ee (code, VOIDmode, flag, const0_rtx);
>>> + if (code == EQ)
>>> + tmp = gen_rtx_IF_THEN_ELSE (VOIDmode, tmp,
>>> + gen_rtx_LABEL_REF (VOIDmode, label), pc_rtx);
>>> + else
>>> + tmp = gen_rtx_IF_THEN_ELSE (VOIDmode, tmp,
>>> + pc_rtx, gen_rtx_LABEL_REF (VOIDmode, label));
>>> + emit_jump_insn (gen_rtx_SET (pc_rtx, tmp));
>>> + return;
>>>
>>> The above code is IMO wrong. You don't need to swap the arms of the
>>> target, since "code" will generate je or jne. Please see the attached
>>> patch.
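[As a reference for the PTEST usage discussed above - an editorial sketch, not part of the thread or patch: PTEST sets ZF when the bitwise AND of its operands is zero, so emitting t = op0 ^ op1 and then PTEST t, t makes ZF signal op0 == op1, which is exactly the condition je/jne branch on. A scalar C model of that flag:]

```c
#include <assert.h>
#include <stdint.h>

/* Editorial model of the flag computed by the patch: it emits
   t = op0 ^ op1 and then PTEST t, t.  PTEST sets ZF when the bitwise
   AND of its operands is zero, and (t & t) == 0 iff t == 0 iff
   op0 == op1.  The vector is modeled here as NWORDS 64-bit words.  */
int
vec_eq_zf (const uint64_t *op0, const uint64_t *op1, int nwords)
{
  uint64_t t = 0;
  for (int i = 0; i < nwords; i++)
    t |= op0[i] ^ op1[i];     /* accumulate the XOR difference */
  return t == 0;              /* ZF as set by PTEST t, t */
}
```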
>>>
>>> BTW: Maybe we can introduce a corresponding cstore pattern to use ptest
>>> in order to more efficiently vectorize code like:
>>>
>>> --cut here--
>>> int a[256];
>>>
>>> int foo (void)
>>> {
>>> int ret = 0;
>>> int i;
>>>
>>> for (i = 0; i < 256; i++)
>>> {
>>> if (a[i] != 0)
>>> ret = 1;
>>> }
>>> return ret;
>>> }
>>> --cut here--
>>>
>>> Uros.
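[To make the suggestion concrete - an editorial sketch, not part of the thread: with a ptest-based cstore, the loop above reduces to an OR reduction of the elements followed by a single zero test. The scalar equivalent of that reduction:]

```c
#include <assert.h>

/* Editorial sketch: scalar equivalent of foo() above after the
   reduction a ptest-based cstore would enable.  A vectorized version
   would OR the chunks together in vector registers and set the flags
   with one PTEST; here the reduction is modeled with scalar ORs.  */
int
foo_reduced (const int *a, int n)
{
  unsigned acc = 0;
  for (int i = 0; i < n; i++)
    acc |= (unsigned) a[i];   /* vector OR reduction */
  return acc != 0;            /* PTEST acc, acc; setne */
}
```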
[-- Attachment #2: PR68542.patch.3 --]
[-- Type: application/octet-stream, Size: 13938 bytes --]
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 733d0ab..ea90150 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -21669,6 +21669,30 @@ ix86_expand_branch (enum rtx_code code, rtx op0, rtx op1, rtx label)
machine_mode mode = GET_MODE (op0);
rtx tmp;
+ /* Handle a special case - vector comparison with boolean result; transform
+ it using the ptest instruction. */
+ if (GET_MODE_CLASS (mode) == MODE_VECTOR_INT)
+ {
+ rtx flag = gen_rtx_REG (CCZmode, FLAGS_REG);
+ machine_mode p_mode = GET_MODE_SIZE (mode) == 32 ? V4DImode : V2DImode;
+
+ gcc_assert (code == EQ || code == NE);
+ /* Generate XOR since we can't check that one operand is zero vector. */
+ tmp = gen_reg_rtx (mode);
+ emit_insn (gen_rtx_SET (tmp, gen_rtx_XOR (mode, op0, op1)));
+ tmp = gen_lowpart (p_mode, tmp);
+ emit_insn (gen_rtx_SET (gen_rtx_REG (CCmode, FLAGS_REG),
+ gen_rtx_UNSPEC (CCmode,
+ gen_rtvec (2, tmp, tmp),
+ UNSPEC_PTEST)));
+ tmp = gen_rtx_fmt_ee (code, VOIDmode, flag, const0_rtx);
+ tmp = gen_rtx_IF_THEN_ELSE (VOIDmode, tmp,
+ gen_rtx_LABEL_REF (VOIDmode, label),
+ pc_rtx);
+ emit_jump_insn (gen_rtx_SET (pc_rtx, tmp));
+ return;
+ }
+
switch (mode)
{
case SFmode:
diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index 6740edf..e6e2685 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -305,6 +305,10 @@
(V8SI "TARGET_AVX") (V4DI "TARGET_AVX")
(V8SF "TARGET_AVX") (V4DF"TARGET_AVX")])
+(define_mode_iterator Vi48_AVX
+ [(V4SI "TARGET_AVX") (V2DI "TARGET_AVX")
+ (V8SI "TARGET_AVX") (V4DI "TARGET_AVX")])
+
(define_mode_iterator VI8
[(V8DI "TARGET_AVX512F") (V4DI "TARGET_AVX") V2DI])
@@ -18350,6 +18354,23 @@
(match_operand:<avx512fmaskmode> 2 "register_operand")))]
"TARGET_AVX512BW")
+(define_expand "cbranch<mode>4"
+ [(set (reg:CC FLAGS_REG)
+ (compare:CC (match_operand:Vi48_AVX 1 "register_operand")
+ (match_operand:Vi48_AVX 2 "nonimmediate_operand")))
+ (set (pc) (if_then_else
+ (match_operator 0 "bt_comparison_operator"
+ [(reg:CC FLAGS_REG) (const_int 0)])
+ (label_ref (match_operand 3))
+ (pc)))]
+ "TARGET_SSE4_1"
+{
+ ix86_expand_branch (GET_CODE (operands[0]),
+ operands[1], operands[2], operands[3]);
+ DONE;
+})
+
+
(define_insn_and_split "avx_<castmode><avxsizesuffix>_<castmode>"
[(set (match_operand:AVX256MODE2P 0 "nonimmediate_operand" "=x,m")
(unspec:AVX256MODE2P
diff --git a/gcc/testsuite/gcc.dg/vect/vect-mask-store-move-1.c b/gcc/testsuite/gcc.dg/vect/vect-mask-store-move-1.c
new file mode 100644
index 0000000..e575f6d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-mask-store-move-1.c
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+/* { dg-additional-options "-mavx2" { target { i?86-*-* x86_64-*-* } } } */
+
+#define N 256
+int p1[N], p2[N], p3[N];
+int c[N];
+void foo (int n)
+{
+ int i;
+ for (i=0; i<n; i++)
+ if (c[i])
+ {
+ p1[i] += 1;
+ p2[i] = p3[i] +2;
+ }
+}
+
+/* { dg-final { scan-tree-dump-times "Move stmt to created bb" 6 "vect" } } */
diff --git a/gcc/testsuite/gcc.target/i386/avx2-vect-mask-store-move1.c b/gcc/testsuite/gcc.target/i386/avx2-vect-mask-store-move1.c
new file mode 100644
index 0000000..2a10560
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/avx2-vect-mask-store-move1.c
@@ -0,0 +1,79 @@
+/* { dg-options "-O3 -mavx2 -fdump-tree-vect-details" } */
+/* { dg-require-effective-target avx2 } */
+
+#include "avx2-check.h"
+#define N 32
+int *p1, *p2, *p3;
+int c[N];
+int p1ref[N], p2ref[N];
+
+__attribute__((noinline, noclone)) void foo (int n)
+{
+ int i;
+ for (i=0; i<n; i++)
+ if (c[i])
+ {
+ p1[i] += 1;
+ p2[i] = p3[i] +2;
+ }
+}
+
+void init ()
+{
+ p1ref[0]=1; p2ref[0]=2;
+ p1ref[1]=3; p2ref[1]=5;
+ p1ref[2]=5; p2ref[2]=8;
+ p1ref[3]=7; p2ref[3]=11;
+ p1ref[4]=9; p2ref[4]=14;
+ p1ref[5]=11; p2ref[5]=17;
+ p1ref[6]=13; p2ref[6]=20;
+ p1ref[7]=15; p2ref[7]=23;
+ p1ref[8]=16; p2ref[8]=8;
+ p1ref[9]=18; p2ref[9]=9;
+ p1ref[10]=20; p2ref[10]=10;
+ p1ref[11]=22; p2ref[11]=11;
+ p1ref[12]=24; p2ref[12]=12;
+ p1ref[13]=26; p2ref[13]=13;
+ p1ref[14]=28; p2ref[14]=14;
+ p1ref[15]=30; p2ref[15]=15;
+ p1ref[16]=33; p2ref[16]=50;
+ p1ref[17]=35; p2ref[17]=53;
+ p1ref[18]=37; p2ref[18]=56;
+ p1ref[19]=39; p2ref[19]=59;
+ p1ref[20]=41; p2ref[20]=62;
+ p1ref[21]=43; p2ref[21]=65;
+ p1ref[22]=45; p2ref[22]=68;
+ p1ref[23]=47; p2ref[23]=71;
+ p1ref[24]=48; p2ref[24]=24;
+ p1ref[25]=50; p2ref[25]=25;
+ p1ref[26]=52; p2ref[26]=26;
+ p1ref[27]=54; p2ref[27]=27;
+ p1ref[28]=56; p2ref[28]=28;
+ p1ref[29]=58; p2ref[29]=29;
+ p1ref[30]=60; p2ref[30]=30;
+ p1ref[31]=62; p2ref[31]=31;
+}
+
+static void
+avx2_test (void)
+{
+ int * P = malloc (N * 3 * sizeof (int));
+ int i;
+
+ p1 = &P[0];
+ p2 = &P[N];
+ p3 = &P[2 * N];
+ for (i=0; i<N; i++) {
+ p1[i] = i + i;
+ p3[i] = i * 3;
+ p2[i] = i;
+ c[i] = (i >> 3) & 1? 0: 1;
+ }
+ init ();
+ foo (N);
+ for (i=0; i<N;i++)
+ if (p1[i] != p1ref[i] || p2[i] != p2ref[i])
+ abort ();
+}
+
+/* { dg-final { scan-tree-dump-times "Move stmt to created bb" 6 "vect" } } */
diff --git a/gcc/tree-vect-loop.c b/gcc/tree-vect-loop.c
index 77ad760..60b0a09 100644
--- a/gcc/tree-vect-loop.c
+++ b/gcc/tree-vect-loop.c
@@ -6927,3 +6927,195 @@ vect_transform_loop (loop_vec_info loop_vinfo)
dump_printf (MSG_NOTE, "\n");
}
}
+
+/* The code below tries to perform a simple optimization - revert
+ if-conversion for masked stores, i.e. if the mask of a store is zero,
+ do not perform the store, nor its stored-value producers if possible.
+ For example,
+ for (i=0; i<n; i++)
+ if (c[i])
+ {
+ p1[i] += 1;
+ p2[i] = p3[i] +2;
+ }
+ this transformation will produce the following semi-hammock:
+
+ if (!mask__ifc__42.18_165 == { 0, 0, 0, 0, 0, 0, 0, 0 })
+ {
+ vect__11.19_170 = MASK_LOAD (vectp_p1.20_168, 0B, mask__ifc__42.18_165);
+ vect__12.22_172 = vect__11.19_170 + vect_cst__171;
+ MASK_STORE (vectp_p1.23_175, 0B, mask__ifc__42.18_165, vect__12.22_172);
+ vect__18.25_182 = MASK_LOAD (vectp_p3.26_180, 0B, mask__ifc__42.18_165);
+ vect__19.28_184 = vect__18.25_182 + vect_cst__183;
+ MASK_STORE (vectp_p2.29_187, 0B, mask__ifc__42.18_165, vect__19.28_184);
+ }
+*/
+
+void
+optimize_mask_stores (struct loop *loop)
+{
+ basic_block *bbs = get_loop_body (loop);
+ unsigned nbbs = loop->num_nodes;
+ unsigned i;
+ basic_block bb;
+ gimple_stmt_iterator gsi;
+ gimple *stmt, *stmt1 = NULL;
+ auto_vec<gimple *> worklist;
+
+ vect_location = find_loop_location (loop);
+ /* Pick up all masked stores in loop if any. */
+ for (i = 0; i < nbbs; i++)
+ {
+ bb = bbs[i];
+ for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);
+ gsi_next (&gsi))
+ {
+ stmt = gsi_stmt (gsi);
+ if (is_gimple_call (stmt)
+ && gimple_call_internal_p (stmt)
+ && gimple_call_internal_fn (stmt) == IFN_MASK_STORE)
+ worklist.safe_push (stmt);
+ }
+ }
+
+ free (bbs);
+ if (worklist.is_empty ())
+ return;
+
+ /* Loop has masked stores. */
+ while (!worklist.is_empty ())
+ {
+ gimple *last, *last_store;
+ edge e, efalse;
+ tree mask;
+ basic_block store_bb, join_bb;
+ gimple_stmt_iterator gsi_to;
+ tree vdef, new_vdef;
+ gphi *phi;
+ tree vectype;
+ tree zero;
+
+ last = worklist.pop ();
+ mask = gimple_call_arg (last, 2);
+ bb = gimple_bb (last);
+ /* Create new bb. */
+ e = split_block (bb, last);
+ join_bb = e->dest;
+ store_bb = create_empty_bb (bb);
+ add_bb_to_loop (store_bb, loop);
+ e->flags = EDGE_TRUE_VALUE;
+ efalse = make_edge (bb, store_bb, EDGE_FALSE_VALUE);
+ /* Put STORE_BB on the unlikely path. */
+ efalse->probability = PROB_UNLIKELY;
+ store_bb->frequency = PROB_ALWAYS - EDGE_FREQUENCY (efalse);
+ make_edge (store_bb, join_bb, EDGE_FALLTHRU);
+ if (dom_info_available_p (CDI_DOMINATORS))
+ set_immediate_dominator (CDI_DOMINATORS, store_bb, bb);
+ if (dump_enabled_p ())
+ dump_printf_loc (MSG_NOTE, vect_location,
+ "Create new block %d to sink mask stores.",
+ store_bb->index);
+ /* Create vector comparison with boolean result. */
+ vectype = TREE_TYPE (mask);
+ zero = build_zero_cst (vectype);
+ stmt = gimple_build_cond (EQ_EXPR, mask, zero, NULL_TREE, NULL_TREE);
+ gsi = gsi_last_bb (bb);
+ gsi_insert_after (&gsi, stmt, GSI_SAME_STMT);
+ /* Create new PHI node for vdef of the last masked store:
+ .MEM_2 = VDEF <.MEM_1>
+ will be converted to
+ .MEM.3 = VDEF <.MEM_1>
+ and new PHI node will be created in join bb
+ .MEM_2 = PHI <.MEM_1, .MEM_3>
+ */
+ vdef = gimple_vdef (last);
+ new_vdef = make_ssa_name (gimple_vop (cfun), last);
+ gimple_set_vdef (last, new_vdef);
+ phi = create_phi_node (vdef, join_bb);
+ add_phi_arg (phi, new_vdef, EDGE_SUCC (store_bb, 0), UNKNOWN_LOCATION);
+
+ /* Put all masked stores with the same mask to STORE_BB if possible. */
+ while (true)
+ {
+ gimple_stmt_iterator gsi_from;
+ /* Move masked store to STORE_BB. */
+ last_store = last;
+ gsi = gsi_for_stmt (last);
+ gsi_from = gsi;
+ /* Shift GSI to the previous stmt for further traversal. */
+ gsi_prev (&gsi);
+ gsi_to = gsi_start_bb (store_bb);
+ gsi_move_before (&gsi_from, &gsi_to);
+ /* Setup GSI_TO to the non-empty block start. */
+ gsi_to = gsi_start_bb (store_bb);
+ if (dump_enabled_p ())
+ {
+ dump_printf_loc (MSG_NOTE, vect_location,
+ "Move stmt to created bb\n");
+ dump_gimple_stmt (MSG_NOTE, TDF_SLIM, last, 0);
+ }
+ /* Move all stored value producers if possible. */
+ while (!gsi_end_p (gsi))
+ {
+ tree lhs;
+ imm_use_iterator imm_iter;
+ use_operand_p use_p;
+ bool res;
+ stmt1 = gsi_stmt (gsi);
+ /* Do not consider statements writing to memory. */
+ if (gimple_vdef (stmt1))
+ break;
+ gsi_from = gsi;
+ gsi_prev (&gsi);
+ lhs = gimple_get_lhs (stmt1);
+ if (!lhs)
+ break;
+
+ /* LHS of vectorized stmt must be SSA_NAME. */
+ if (TREE_CODE (lhs) != SSA_NAME)
+ break;
+
+ /* Skip scalar statements. */
+ if (!VECTOR_TYPE_P (TREE_TYPE (lhs)))
+ continue;
+
+ /* Check that LHS does not have uses outside of STORE_BB. */
+ res = true;
+ FOR_EACH_IMM_USE_FAST (use_p, imm_iter, lhs)
+ {
+ gimple *use_stmt;
+ use_stmt = USE_STMT (use_p);
+ if (gimple_bb (use_stmt) != store_bb)
+ {
+ res = false;
+ break;
+ }
+ }
+ if (!res)
+ break;
+
+ if (gimple_vuse (stmt1)
+ && gimple_vuse (stmt1) != gimple_vuse (last_store))
+ break;
+
+ /* Can move STMT1 to STORE_BB. */
+ if (dump_enabled_p ())
+ {
+ dump_printf_loc (MSG_NOTE, vect_location,
+ "Move stmt to created bb\n");
+ dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt1, 0);
+ }
+ gsi_move_before (&gsi_from, &gsi_to);
+ /* Shift GSI_TO for further insertion. */
+ gsi_prev (&gsi_to);
+ }
+ /* Put other masked stores with the same mask to STORE_BB. */
+ if (worklist.is_empty ()
+ || gimple_call_arg (worklist.last (), 2) != mask
+ || worklist.last () != stmt1)
+ break;
+ last = worklist.pop ();
+ }
+ add_phi_arg (phi, gimple_vuse (last_store), e, UNKNOWN_LOCATION);
+ }
+}
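[To summarize the transformation implemented above - an editorial sketch; the function name and the scalar modeling are illustrative, not from the patch: after optimize_mask_stores, each vector iteration first tests the whole mask (one PTEST-based branch) and enters the store block only when some lane is active. The runtime effect, modeled in scalar C with vector factor VF:]

```c
#include <assert.h>

/* Editorial model of one vector iteration after the transformation:
   the "mask != {0,...}" test guards the semi-hammock containing the
   masked loads/stores and their sunk producers.  */
void
foo_after (int *p1, int *p2, const int *p3, const int *c, int i, int vf)
{
  int any = 0;
  for (int l = 0; l < vf; l++)   /* whole-mask zero test (one PTEST) */
    any |= (c[i + l] != 0);
  if (any)                       /* semi-hammock guarding the stores */
    for (int l = 0; l < vf; l++)
      if (c[i + l])
        {
          p1[i + l] += 1;
          p2[i + l] = p3[i + l] + 2;
        }
}
```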
diff --git a/gcc/tree-vect-stmts.c b/gcc/tree-vect-stmts.c
index 7c6fa73..645257b 100644
--- a/gcc/tree-vect-stmts.c
+++ b/gcc/tree-vect-stmts.c
@@ -2019,6 +2019,7 @@ vectorizable_mask_load_store (gimple *stmt, gimple_stmt_iterator *gsi,
{
tree vec_rhs = NULL_TREE, vec_mask = NULL_TREE;
prev_stmt_info = NULL;
+ LOOP_VINFO_HAS_MASK_STORE (loop_vinfo) = true;
for (i = 0; i < ncopies; i++)
{
unsigned align, misalign;
diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
index c496c4b..3d8fbac 100644
--- a/gcc/tree-vectorizer.c
+++ b/gcc/tree-vectorizer.c
@@ -604,12 +604,18 @@ vectorize_loops (void)
for (i = 1; i < vect_loops_num; i++)
{
loop_vec_info loop_vinfo;
+ bool has_mask_store;
loop = get_loop (cfun, i);
if (!loop)
continue;
loop_vinfo = (loop_vec_info) loop->aux;
+ has_mask_store = false;
+ if (loop_vinfo)
+ has_mask_store = LOOP_VINFO_HAS_MASK_STORE (loop_vinfo);
destroy_loop_vec_info (loop_vinfo, true);
+ if (has_mask_store)
+ optimize_mask_stores (loop);
loop->aux = NULL;
}
diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
index ac68750..b68ecd8 100644
--- a/gcc/tree-vectorizer.h
+++ b/gcc/tree-vectorizer.h
@@ -333,6 +333,9 @@ typedef struct _loop_vec_info : public vec_info {
loop version without if-conversion. */
struct loop *scalar_loop;
+ /* Mark loops having masked stores. */
+ bool has_mask_store;
+
} *loop_vec_info;
/* Access Functions. */
@@ -368,6 +371,7 @@ typedef struct _loop_vec_info : public vec_info {
#define LOOP_VINFO_PEELING_FOR_NITER(L) (L)->peeling_for_niter
#define LOOP_VINFO_NO_DATA_DEPENDENCIES(L) (L)->no_data_dependencies
#define LOOP_VINFO_SCALAR_LOOP(L) (L)->scalar_loop
+#define LOOP_VINFO_HAS_MASK_STORE(L) (L)->has_mask_store
#define LOOP_VINFO_SCALAR_ITERATION_COST(L) (L)->scalar_cost_vec
#define LOOP_VINFO_SINGLE_SCALAR_ITERATION_COST(L) (L)->single_scalar_iteration_cost
@@ -1010,6 +1014,7 @@ extern void vect_get_vec_defs (tree, tree, gimple *, vec<tree> *,
vec<tree> *, slp_tree, int);
extern tree vect_gen_perm_mask_any (tree, const unsigned char *);
extern tree vect_gen_perm_mask_checked (tree, const unsigned char *);
+extern void optimize_mask_stores (struct loop*);
/* In tree-vect-data-refs.c. */
extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
Thread overview: 11+ messages
[not found] <CAFULd4ZVSDCEyyeT7fdF1Jccch3RjkkOZYsfzhxC_Ze9YuaYnA@mail.gmail.com>
[not found] ` <CAEoMCqSWx2AKqmzQYpm880BG0YsAtV2MmF1aVsSrJ-Oy+Dq7Jw@mail.gmail.com>
[not found] ` <CAFULd4bA1f085BsrPO87gX3iw34dVjvF2Rz5DA=r2E66TywuJQ@mail.gmail.com>
2016-01-29 14:13 ` Yuri Rumyantsev [this message]
2016-01-29 15:48 ` Uros Bizjak
2016-02-08 13:07 ` [Patch] Gate vect-mask-store-move-1.c correctly, and actually output the dump James Greenhalgh
2016-02-08 13:29 ` Yuri Rumyantsev
2016-02-08 13:40 ` James Greenhalgh
2016-02-08 14:23 ` Yuri Rumyantsev
2016-02-08 14:24 ` Richard Biener
2016-02-09 11:25 ` James Greenhalgh
2016-02-09 11:26 ` Richard Biener
2016-02-09 11:29 ` Jakub Jelinek
2016-02-08 16:08 ` Jeff Law