Date: Mon, 26 Aug 2019 21:12:00 -0000
From: Michael Meissner
To: Michael Meissner, gcc-patches@gcc.gnu.org, segher@kernel.crashing.org, dje.gcc@gmail.com
Subject: [PATCH, V3, #2 of 10], Improve rs6000_setup_addr_mask
Message-ID: <20190826201022.GB11790@ibm-toto.the-meissners.org>
In-Reply-To: <20190826173320.GA7958@ibm-toto.the-meissners.org>

This patch is an optional patch.
In the reviews of the previous patches, people said that the
rs6000_setup_reg_addr_masks function was hard to understand: it was not clear
exactly which address mask bits were being set for each mode.  This patch
attempts to make that clearer by moving the settings for GPRs, FPRs, and
traditional Altivec registers into separate helper functions.  Along the way,
I discovered two things that arguably should be changed.  In this patch, I
opted to set the address masks to be the same as the previous compiler, since
this is supposed to be an optional drop-in replacement.

The first oddity is that systems without Altivec support still set the address
masks to indicate valid Altivec settings for the V1TImode type, due to a
missing test.  I have provided this fix as a separate patch, and it is
currently awaiting approval:

https://gcc.gnu.org/ml/gcc-patches/2019-08/msg01432.html

The second issue is that SDmode indicates it can do PRE_INCREMENT,
PRE_DECREMENT, and PRE_MODIFY addressing in the floating point registers.  It
can't, since you need to use the LFIWZX instruction to load SDmode, and that
instruction does not have a pre-increment form.  I was not able to make a test
case that actually failed with SDmode, so to keep my comparison simple I
return the same information that the current compiler uses.  If you prefer, I
can change it so the address mask does not indicate that the mode can do
pre-increment, etc.

I tested this on a little endian power8 system, doing a bootstrap and make
check.  There were no regressions.  Can I check this into the trunk once
patch #1 has been checked in?

In addition, I did the same test as I did in patch #1, and verified that all
of the address masks were the same:

| In addition, I built a big endian cross compiler and I used
| -mdebug=reg to dump out the address masks for each of the modes that
| we care about.  I did:
|
|	-mcpu=power5/power6/power7/power8/power9/future for big endian -m32
|	-mcpu=power5/power6/power7/power8/power9/future for big endian -m64
|	-mcpu=power8/power9/future for little endian -m64 -mabi=elfv2
|
| All of the cpu/endian/bit-size targets use the same address masks as
| before except for little endian future.  There the -mdebug=reg code
| prints whether a mode needs to use the DS instruction format.  I didn't
| print the DS information earlier, so that I could more easily do the
| comparison.  For the future/le combinations, I verified that each of
| the modes that says it needs to use a DS instruction format did
| actually have that requirement for the traditional instruction.
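To give a quick feel for the new structure before the patch below, here is a
small standalone toy model of the pattern the patch uses (illustration only;
this is not GCC code and not part of the patch, and all of the names in it are
made up): each register class gets its own helper that returns an address-mask
bitmask, and the top-level loop simply records the per-class masks and ORs
them into an "any" mask.

/* Toy model (not GCC code) of the pattern this patch uses: one helper per
   register class computes an address-mask bitmask for a mode, and the
   top-level loop just stores and ORs the results together.  */
#include <stdio.h>

typedef unsigned int toy_addr_mask;

#define TOY_VALID   0x1
#define TOY_INDEXED 0x2
#define TOY_OFFSET  0x4

enum toy_mode { TOY_SI, TOY_DI, TOY_V4SI, TOY_NUM_MODES };

/* Scalar registers: every mode is valid; scalar modes also get REG+OFFSET.  */
static toy_addr_mask
toy_masks_gpr (enum toy_mode mode)
{
  toy_addr_mask mask = TOY_VALID | TOY_INDEXED;
  if (mode != TOY_V4SI)
    mask |= TOY_OFFSET;
  return mask;
}

/* Vector registers: only the vector mode is valid, and only REG+REG.  */
static toy_addr_mask
toy_masks_vmx (enum toy_mode mode)
{
  return (mode == TOY_V4SI) ? (TOY_VALID | TOY_INDEXED) : 0;
}

int
main (void)
{
  for (int m = 0; m < TOY_NUM_MODES; m++)
    {
      toy_addr_mask gpr = toy_masks_gpr ((enum toy_mode) m);
      toy_addr_mask vmx = toy_masks_vmx ((enum toy_mode) m);
      printf ("mode %d: gpr=0x%x vmx=0x%x any=0x%x\n", m, gpr, vmx, gpr | vmx);
    }
  return 0;
}

The real helpers in the patch below follow the same shape, just with the
actual rs6000 reload mask bits and register-class tests.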
2019-08-26  Michael Meissner

	* config/rs6000/rs6000.c (FIRST_RELOAD_REG_CLASS): Delete.
	(LAST_RELOAD_REG_CLASS): Delete.
	(mode_uses_full_vector_reg): New helper function.
	(setup_reg_addr_masks_pre_incdec): New helper function.
	(setup_reg_addr_masks_gpr): New helper function.
	(setup_reg_addr_masks_fpr): New helper function.
	(setup_reg_addr_masks_altivec): New helper function.
	(rs6000_setup_reg_addr_masks): Move most of the code into the 3
	specific helper functions that deal with the specific address
	masks for GPRs, FPRs, and traditional Altivec registers.

Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c	(revision 274870)
+++ gcc/config/rs6000/rs6000.c	(working copy)
@@ -328,12 +328,6 @@ enum rs6000_reload_reg_type {
   N_RELOAD_REG
 };
 
-/* For setting up register classes, loop through the 3 register classes mapping
-   into real registers, and skip the ANY class, which is just an OR of the
-   bits.  */
-#define FIRST_RELOAD_REG_CLASS RELOAD_REG_GPR
-#define LAST_RELOAD_REG_CLASS RELOAD_REG_VMX
-
 /* Map reload register type to a register in the register class.  */
 struct reload_reg_map_type {
   const char *name;		/* Register class name.  */
@@ -2532,156 +2526,304 @@ rs6000_debug_reg_global (void)
 }
 
 
-/* Update the addr mask bits in reg_addr to help secondary reload and go if
-   legitimate address support to figure out the appropriate addressing to
-   use.  */
+/* Return true if the mode is a type that uses the full vector register (like
+   V2DImode or KFmode).  Do not return true for 128-bit types like TDmode or
+   IFmode.  */
 
-static void
-rs6000_setup_reg_addr_masks (void)
+static bool
+mode_uses_full_vector_reg (machine_mode mode)
 {
-  ssize_t rc, reg, m, nregs;
-  addr_mask_type any_addr_mask, addr_mask;
+  if (GET_MODE_SIZE (mode) < 16)
+    return false;
 
-  for (m = 0; m < NUM_MACHINE_MODES; ++m)
+  if (TARGET_VSX)
+    return (VECTOR_MODE_P (mode)
+	    || FLOAT128_VECTOR_P (mode)
+	    || mode == TImode);
+
+  if (TARGET_ALTIVEC)
+    return ALTIVEC_VECTOR_MODE (mode);
+
+  return false;
+}
+
+/* Figure out if we can do PRE_INC, PRE_DEC, or PRE_MODIFY addressing for a
+   given MODE.  If we allow scalars into Altivec registers, don't allow
+   PRE_INC, PRE_DEC, or PRE_MODIFY.
+
+   For VSX systems, we don't allow update addressing for DFmode/SFmode if those
+   registers can go in both the traditional floating point registers and
+   Altivec registers.  The load/store instructions for the Altivec registers do
+   not have update forms.  If we allowed update addressing, it seems to break
+   IV-OPT code using floating point if the index type is int instead of long
+   (PR target/81550 and target/84042).  */
+
+/* Return the address mask bits for whether we allow PRE_INCREMENT,
+   PRE_DECREMENT, and PRE_MODIFY for a given MODE.  */
+
+static addr_mask_type
+setup_reg_addr_masks_pre_incdec (machine_mode mode)
+{
+  addr_mask_type addr_mask = 0;
+
+  if (TARGET_UPDATE
+      && GET_MODE_SIZE (mode) <= 8
+      && !VECTOR_MODE_P (mode)
+      && !FLOAT128_VECTOR_P (mode)
+      && !COMPLEX_MODE_P (mode)
+      && (mode != E_DFmode || !TARGET_VSX)
+      && (mode != E_SFmode || !TARGET_P8_VECTOR))
+    {
+      addr_mask |= RELOAD_REG_PRE_INCDEC;
+
+      /* PRE_MODIFY is more restricted than PRE_INC/PRE_DEC in that we don't
+	 allow PRE_MODIFY for some multi-register operations.  */
+      switch (mode)
+	{
+	default:
+	  addr_mask |= RELOAD_REG_PRE_MODIFY;
+	  break;
+
+	case E_DImode:
+	  if (TARGET_POWERPC64)
+	    addr_mask |= RELOAD_REG_PRE_MODIFY;
+	  break;
+
+	case E_DFmode:
+	case E_DDmode:
+	  if (TARGET_HARD_FLOAT)
+	    addr_mask |= RELOAD_REG_PRE_MODIFY;
+	  break;
+	}
+    }
+
+  return addr_mask;
+}
+
+/* Helper function for rs6000_setup_reg_addr_masks to set up the address masks
+   for GPR registers.  */
+
+static addr_mask_type
+setup_reg_addr_masks_gpr (machine_mode mode)
+{
+  addr_mask_type addr_mask = 0;
+
+  /* Can mode values go in the GPR registers?  */
+  if (rs6000_hard_regno_mode_ok_p[mode][FIRST_GPR_REGNO])
     {
-      machine_mode m2 = (machine_mode) m;
+      size_t mode_size = GET_MODE_SIZE (mode);
+      size_t reg_size = TARGET_POWERPC64 ? 8 : 4;
+      machine_mode mode_inner = mode;
       bool complex_p = false;
-      bool small_int_p = (m2 == QImode || m2 == HImode || m2 == SImode);
-      size_t msize;
 
-      if (COMPLEX_MODE_P (m2))
+      if (COMPLEX_MODE_P (mode_inner))
 	{
 	  complex_p = true;
-	  m2 = GET_MODE_INNER (m2);
+	  mode_inner = GET_MODE_INNER (mode_inner);
 	}
 
-      msize = GET_MODE_SIZE (m2);
+      size_t mode_size_inner = GET_MODE_SIZE (mode_inner);
+      ssize_t nregs = rs6000_hard_regno_nregs[mode][FIRST_GPR_REGNO];
+
+      /* Indicate if the mode takes more than 1 physical register.  If it takes
+	 a single register, indicate it can do REG+REG addressing.  */
+      if (nregs > 1 || mode == BLKmode || complex_p)
+	addr_mask |= RELOAD_REG_MULTIPLE;
+      else if (mode_size <= reg_size)
+	addr_mask |= RELOAD_REG_INDEXED;
+
+      /* GPR registers can do REG+OFFSET addressing for small scalar types.
+	 For vectors, VSX registers can do REG+OFFSET addressing if ISA 3.0
+	 instructions are enabled.  The offset for 128-bit VSX registers is
+	 only 12-bits.  While GPRs can handle the full offset range, VSX
+	 registers can only handle the restricted range.
+
+	 SDmode is special in that we want to access it only via REG+REG
+	 addressing on power7 and above.  This is because the natural way to
+	 load a SDmode is to use the indexed LFIWZX instruction.  In power6, we
+	 had to load it up in a GPR, store it on the stack and then load it up
+	 into a FPR.  Don't allow SDmode to use offset addressing in power7 or
+	 later, even in GPRs (to prevent the register allocator from using a
+	 GPR to load the value).  */
+
+      bool indexed_only = (mode == SDmode && TARGET_LFIWZX);
+
+      if (!indexed_only
+	  && (mode_size_inner <= 8
+	      || (mode_size_inner == 16 && TARGET_P9_VECTOR
+		  && mode_uses_full_vector_reg (mode_inner))))
+	{
+	  addr_mask |= RELOAD_REG_OFFSET;
 
-      /* SDmode is special in that we want to access it only via REG+REG
-	 addressing on power7 and above, since we want to use the LFIWZX and
-	 STFIWZX instructions to load it.  */
-      bool indexed_only_p = (m == SDmode && TARGET_NO_SDMODE_STACK);
+	  /* Set the DS format bit if we have 64-bit loads/stores on a 64-bit
+	     system.  */
+	  if (TARGET_POWERPC64 && mode_size_inner >= 8)
+	    addr_mask |= RELOAD_REG_DS_OFFSET;
+	}
+
+      /* Do we support pre_increment, pre_decrement, or pre_modify?  */
+      addr_mask |= setup_reg_addr_masks_pre_incdec (mode);
+
+      /* Set the valid bit.  */
+      addr_mask |= RELOAD_REG_VALID;
+    }
+
+  return addr_mask;
+}
 
-      any_addr_mask = 0;
-      for (rc = FIRST_RELOAD_REG_CLASS; rc <= LAST_RELOAD_REG_CLASS; rc++)
+/* Helper function for rs6000_setup_reg_addr_masks to set up the address masks
+   for traditional FPR registers.  */
+
+static addr_mask_type
+setup_reg_addr_masks_fpr (machine_mode mode)
+{
+  addr_mask_type addr_mask = 0;
+
+  /* Can mode values go in the FPR registers?  */
+  if (rs6000_hard_regno_mode_ok_p[mode][FIRST_FPR_REGNO])
+    {
+      size_t mode_size = GET_MODE_SIZE (mode);
+      machine_mode mode_inner = mode;
+      bool complex_p = false;
+
+      if (COMPLEX_MODE_P (mode_inner))
 	{
-	  addr_mask = 0;
-	  reg = reload_reg_map[rc].reg;
+	  complex_p = true;
+	  mode_inner = GET_MODE_INNER (mode_inner);
+	}
 
-	  /* Can mode values go in the GPR/FPR/Altivec registers?  */
-	  if (reg >= 0 && rs6000_hard_regno_mode_ok_p[m][reg])
-	    {
-	      bool small_int_vsx_p = (small_int_p
-				      && (rc == RELOAD_REG_FPR
-					  || rc == RELOAD_REG_VMX));
-
-	      nregs = rs6000_hard_regno_nregs[m][reg];
-	      addr_mask |= RELOAD_REG_VALID;
-
-	      /* Indicate if the mode takes more than 1 physical register.  If
-		 it takes a single register, indicate it can do REG+REG
-		 addressing.  Small integers in VSX registers can only do
-		 REG+REG addressing.  */
-	      if (small_int_vsx_p)
-		addr_mask |= RELOAD_REG_INDEXED;
-	      else if (nregs > 1 || m == BLKmode || complex_p)
-		addr_mask |= RELOAD_REG_MULTIPLE;
-	      else
-		addr_mask |= RELOAD_REG_INDEXED;
-
-	      /* Figure out if we can do PRE_INC, PRE_DEC, or PRE_MODIFY
		 addressing.  If we allow scalars into Altivec registers,
-		 don't allow PRE_INC, PRE_DEC, or PRE_MODIFY.
-
-		 For VSX systems, we don't allow update addressing for
-		 DFmode/SFmode if those registers can go in both the
-		 traditional floating point registers and Altivec registers.
-		 The load/store instructions for the Altivec registers do not
-		 have update forms.  If we allowed update addressing, it seems
-		 to break IV-OPT code using floating point if the index type is
-		 int instead of long (PR target/81550 and target/84042).  */
-
-	      if (TARGET_UPDATE
-		  && (rc == RELOAD_REG_GPR || rc == RELOAD_REG_FPR)
-		  && msize <= 8
-		  && !VECTOR_MODE_P (m2)
-		  && !FLOAT128_VECTOR_P (m2)
-		  && !complex_p
-		  && (m != E_DFmode || !TARGET_VSX)
-		  && (m != E_SFmode || !TARGET_P8_VECTOR)
-		  && !small_int_vsx_p)
-		{
-		  addr_mask |= RELOAD_REG_PRE_INCDEC;
+      size_t mode_size_inner = GET_MODE_SIZE (mode_inner);
+      ssize_t nregs = rs6000_hard_regno_nregs[mode][FIRST_FPR_REGNO];
 
-		  /* PRE_MODIFY is more restricted than PRE_INC/PRE_DEC in that
-		     we don't allow PRE_MODIFY for some multi-register
-		     operations.  */
-		  switch (m)
-		    {
-		    default:
-		      addr_mask |= RELOAD_REG_PRE_MODIFY;
-		      break;
-
-		    case E_DImode:
-		      if (TARGET_POWERPC64)
-			addr_mask |= RELOAD_REG_PRE_MODIFY;
-		      break;
-
-		    case E_DFmode:
-		    case E_DDmode:
-		      if (TARGET_HARD_FLOAT)
-			addr_mask |= RELOAD_REG_PRE_MODIFY;
-		      break;
-		    }
-		}
-	    }
+      /* Indicate if the mode takes more than 1 physical register.  If it takes
+	 a single register, indicate it can do REG+REG addressing.  */
+      if (nregs > 1 || mode == BLKmode || complex_p)
+	addr_mask |= RELOAD_REG_MULTIPLE;
+
+      else if (mode == SFmode || mode_size == 8
+	       || mode_uses_full_vector_reg (mode_inner)
+	       || (TARGET_LFIWAX && (mode == SImode || mode == SDmode))
+	       || (TARGET_P9_VECTOR && (mode == QImode || mode == HImode)))
+	addr_mask |= RELOAD_REG_INDEXED;
 
-	  /* GPR and FPR registers can do REG+OFFSET addressing, except
-	     possibly for SDmode.  ISA 3.0 (i.e. power9) adds D-form addressing
-	     for 64-bit scalars and 32-bit SFmode to altivec registers.  */
-	  if ((addr_mask != 0) && !indexed_only_p
-	      && msize <= 8
-	      && (rc == RELOAD_REG_GPR
-		  || ((msize == 8 || m2 == SFmode)
-		      && (rc == RELOAD_REG_FPR
-			  || (rc == RELOAD_REG_VMX && TARGET_P9_VECTOR)))))
-	    addr_mask |= RELOAD_REG_OFFSET;
-
-	  /* VSX registers can do REG+OFFSET addresssing if ISA 3.0
-	     instructions are enabled.  The offset for 128-bit VSX registers is
-	     only 12-bits.  While GPRs can handle the full offset range, VSX
-	     registers can only handle the restricted range.  */
-	  else if ((addr_mask != 0) && !indexed_only_p
-		   && msize == 16 && TARGET_P9_VECTOR
-		   && (ALTIVEC_OR_VSX_VECTOR_MODE (m2)
-		       || (m2 == TImode && TARGET_VSX)))
-	    {
-	      addr_mask |= RELOAD_REG_OFFSET;
-	      if (rc == RELOAD_REG_FPR || rc == RELOAD_REG_VMX)
-		addr_mask |= RELOAD_REG_QUAD_OFFSET;
-	    }
+      /* FPR registers can do REG+OFFSET addressing for SFmode/DFmode.  */
+      if (mode_inner == SFmode || mode_size_inner == 8)
+	{
+	  addr_mask |= RELOAD_REG_OFFSET;
 
-	  /* VMX registers can do (REG & -16) and ((REG+REG) & -16)
-	     addressing on 128-bit types.  */
-	  if (rc == RELOAD_REG_VMX && msize == 16
-	      && (addr_mask & RELOAD_REG_VALID) != 0)
-	    addr_mask |= RELOAD_REG_AND_M16;
-
-	  /* 64-bit and larger values on GPRs need DS format instructions.  All
-	     non-vector offset instructions in Altivec registers need the DS
-	     format instructions.  */
-	  const addr_mask_type quad_flags = (RELOAD_REG_OFFSET
-					     | RELOAD_REG_QUAD_OFFSET);
-
-	  if ((addr_mask & quad_flags) == RELOAD_REG_OFFSET
-	      && ((rc == RELOAD_REG_GPR && msize >= 8 && TARGET_POWERPC64)
-		  || (rc == RELOAD_REG_VMX)))
-	    addr_mask |= RELOAD_REG_DS_OFFSET;
+	  /* Do we support pre_increment, pre_decrement, or pre_modify?  */
+	  addr_mask |= setup_reg_addr_masks_pre_incdec (mode);
+	}
+
+      /* It is weird that previous versions of GCC supported pre increment,
+	 etc. forms of addressing for SDmode, when you could only use an
+	 indexed instruction, but allow it for now.  Previous versions of GCC
+	 also set the indexed flag for SDmode, even though there was no direct
+	 instruction to load it.  */
+      else if (mode_inner == SDmode)
+	addr_mask |= (RELOAD_REG_INDEXED
+		      | RELOAD_REG_PRE_INCDEC
+		      | RELOAD_REG_PRE_MODIFY);
+
+      /* FPR registers can do REG+OFFSET addressing for vectors if ISA 3.0
+	 instructions are enabled.  The offset for 128-bit VSX registers is
+	 only 12-bits.  */
+      else if (TARGET_P9_VECTOR && mode_uses_full_vector_reg (mode_inner))
+	addr_mask |= RELOAD_REG_OFFSET | RELOAD_REG_QUAD_OFFSET;
+
+      /* Set valid bit.  */
+      addr_mask |= RELOAD_REG_VALID;
+    }
+
+  return addr_mask;
+}
+
+/* Helper function for rs6000_setup_reg_addr_masks to set up the address masks
+   for traditional Altivec registers.  */
+
+static addr_mask_type
+setup_reg_addr_masks_altivec (machine_mode mode)
+{
+  addr_mask_type addr_mask = 0;
 
-	  reg_addr[m].addr_mask[rc] = addr_mask;
-	  any_addr_mask |= (addr_mask & ~RELOAD_REG_AND_M16);
+  /* Can mode values go in the Altivec registers?  */
+  if (rs6000_hard_regno_mode_ok_p[mode][FIRST_ALTIVEC_REGNO])
+    {
+      size_t mode_size = GET_MODE_SIZE (mode);
+      machine_mode mode_inner = mode;
+      bool complex_p = false;
+
+      if (COMPLEX_MODE_P (mode_inner))
+	{
+	  complex_p = true;
+	  mode_inner = GET_MODE_INNER (mode_inner);
 	}
+      size_t mode_size_inner = GET_MODE_SIZE (mode_inner);
+      bool vector_p = mode_uses_full_vector_reg (mode_inner);
+      ssize_t nregs = rs6000_hard_regno_nregs[mode][FIRST_ALTIVEC_REGNO];
+
+      /* Indicate if the mode takes more than 1 physical register.  If it takes
+	 a single register, indicate it can do REG+REG addressing.  */
+      if (nregs > 1 || mode == BLKmode || complex_p)
+	addr_mask |= RELOAD_REG_MULTIPLE;
+
+      else if (mode == SFmode || mode_size == 8 || vector_p
+	       || (TARGET_P8_VECTOR && mode == SImode)
+	       || (TARGET_P9_VECTOR && (mode == QImode || mode == HImode)))
+	addr_mask |= RELOAD_REG_INDEXED;
+
+      /* Starting with ISA 3.0, Altivec registers can do REG+OFFSET addressing
+	 for SFmode/DFmode.  Vectors also support REG+OFFSET, but the offset is
+	 limited to DQ format unless prefixed memory instructions are used.
+	 Pre increment/decrement/modify is not supported.
+
+	 All Altivec scalar load/store instructions with an offset use the DS
+	 format.  */
+      if (TARGET_P9_VECTOR)
+	{
+	  if (mode_inner == SFmode || mode_size_inner == 8)
+	    addr_mask |= RELOAD_REG_OFFSET | RELOAD_REG_DS_OFFSET;
+
+	  else if (vector_p)
+	    addr_mask |= RELOAD_REG_OFFSET | RELOAD_REG_QUAD_OFFSET;
+	}
+
+      /* Vectors can use Altivec memory instructions to support omitting the
+	 bottom 4 bits in addition to normal indexed addressing.  */
+      if (vector_p)
+	addr_mask |= RELOAD_REG_AND_M16;
+
+      /* Set valid bit.  */
+      addr_mask |= RELOAD_REG_VALID;
+    }
+
+  return addr_mask;
+}
+
+/* Update the addr mask bits in reg_addr to help secondary reload and go if
+   legitimate address support to figure out the appropriate addressing to
+   use.  */
+
+static void
+rs6000_setup_reg_addr_masks (void)
+{
+  for (ssize_t m = 0; m < NUM_MACHINE_MODES; ++m)
+    {
+      machine_mode mode = (machine_mode)m;
+
+      addr_mask_type addr_mask = setup_reg_addr_masks_gpr (mode);
+      addr_mask_type any_addr_mask = addr_mask;
+      reg_addr[m].addr_mask[RELOAD_REG_GPR] = addr_mask;
+
+      addr_mask = setup_reg_addr_masks_fpr (mode);
+      any_addr_mask |= addr_mask;
+      reg_addr[m].addr_mask[RELOAD_REG_FPR] = addr_mask;
+
+      addr_mask = setup_reg_addr_masks_altivec (mode);
+      any_addr_mask |= (addr_mask & ~RELOAD_REG_AND_M16);
+      reg_addr[m].addr_mask[RELOAD_REG_VMX] = addr_mask;
+
       reg_addr[m].any_addr_mask = any_addr_mask;
 
       /* Figure out what the default reload register set that should be used
@@ -2701,15 +2843,18 @@ rs6000_setup_reg_addr_masks (void)
 	rc_order[rc_max++] = RELOAD_REG_VMX;
 
       /* Normal vectors and software IEEE 128-bit can use either floating point
-	 registers or Altivec registers.  */
-      else if (TARGET_VSX && (VECTOR_MODE_P (m) || FLOAT128_IEEE_P (m)))
+	 registers or Altivec registers.  Don't favor TImode for vector
+	 registers at this time.  */
+      else if (TARGET_VSX && m != E_TImode
+	       && mode_uses_full_vector_reg ((machine_mode) m))
 	{
 	  rc_order[rc_max++] = RELOAD_REG_FPR;
 	  rc_order[rc_max++] = RELOAD_REG_VMX;
 	}
 
       /* Altivec only vectors use the Altivec registers.  */
-      else if (TARGET_ALTIVEC && !TARGET_VSX && VECTOR_MODE_P (m))
+      else if (TARGET_ALTIVEC && !TARGET_VSX
+	       && mode_uses_full_vector_reg ((machine_mode) m))
 	rc_order[rc_max++] = RELOAD_REG_VMX;
 
       /* For scalar binary/decimal floating point, prefer FPRs over altivec

-- 
Michael Meissner, IBM
IBM, M/S 2506R, 550 King Street, Littleton, MA 01460-6245, USA
email: meissner@linux.ibm.com, phone: +1 (978) 899-4797