From: Bernd Edlinger
To: Richard Biener
CC: "gcc-patches@gcc.gnu.org", Richard Earnshaw, Ramana Radhakrishnan,
 Kyrill Tkachov, Eric Botcazou, Jeff Law, Jakub Jelinek
Subject: Re: [PATCHv4] Fix not 8-byte aligned ldrd/strd on ARMv5 (PR 89544)
Date: Thu, 15 Aug 2019 12:38:00 -0000

On 8/15/19 10:55 AM, Richard Biener wrote:
> On Wed, 14 Aug 2019, Bernd Edlinger wrote:
>
>> On 8/14/19 2:00 PM, Richard Biener wrote:
>>
>> Well, yes, but I was scared away by the complexity of emit_move_insn_1.
>>
>> It could be done, but for the moment I would be happy to have these
>> checks on one major strict-alignment target.  ARM is a good candidate,
>> since most instructions work even if they accidentally use unaligned
>> arguments, so middle-end errors are not always visible in ordinary
>> tests.  Nevertheless it is a blatant violation of the contract between
>> middle-end and back-end, which should be avoided.
>
> Fair enough.
>
>>>> Several struct-layout-1.dg testcases tripped over misaligned
>>>> complex_cst constants, fixed by varasm.c (align_variable).
>>>> This is likely a wrong-code bug, because misaligned complex
>>>> constants are expanded to a misaligned MEM_REF, but the
>>>> expansion cannot handle misaligned constants, only packed
>>>> structure fields.
>>>
>>> Hmm.  So your patch overrides user alignment here.  Wouldn't it
>>> be better to do that more consciously by
>>>
>>>   if (! DECL_USER_ALIGN (decl)
>>>       || (align < GET_MODE_ALIGNMENT (DECL_MODE (decl))
>>>           && targetm.slow_unaligned_access (DECL_MODE (decl), align)))
>>>
>>> ?

I don't know why that would be better.  If the value is underaligned,
no matter why, pretend it was declared as naturally aligned if that
causes wrong code otherwise.  That was the idea here.

>>> And why is the movmisalign optab support missing here?
>>>
>>
>> Yes, I wanted to replicate what we have in assign_parm_adjust_stack_rtl:
>>
>>   /* If we can't trust the parm stack slot to be aligned enough for its
>>      ultimate type, don't use that slot after entry.  We'll make another
>>      stack slot, if we need one.  */
>>   if (stack_parm
>>       && ((GET_MODE_ALIGNMENT (data->nominal_mode) > MEM_ALIGN (stack_parm)
>>            && targetm.slow_unaligned_access (data->nominal_mode,
>>                                              MEM_ALIGN (stack_parm)))
>>
>> which also makes a variable more aligned than it is declared.
>> But maybe both should also check the movmisalign optab in
>> addition to slow_unaligned_access?
>
> Quite possible.
>

Will do, see attached new version of the patch.
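Both places then use a check of the same shape; sketching it here for the
varasm.c case (the attached patch has the actual hunks):

  if (align < GET_MODE_ALIGNMENT (DECL_MODE (decl))
      && ((optab_handler (movmisalign_optab, DECL_MODE (decl))
	   != CODE_FOR_nothing)
	  || targetm.slow_unaligned_access (DECL_MODE (decl), align)))
    align = GET_MODE_ALIGNMENT (DECL_MODE (decl));

i.e. the alignment is only raised when the mode either has a
misaligned-move pattern or is slow to access at the declared alignment.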
>>> IMHO whatever code later fails to properly use unaligned loads
>>> should be fixed instead rather than ignoring user-requested alignment.
>>>
>>> Can you quote a short testcase that explains what exactly goes wrong?
>>> The struct-layout ones are awkward to look at...
>>>
>>
>> Sure,
>>
>> $ cat test.c
>> _Complex float __attribute__((aligned(1))) cf;
>>
>> void foo (void)
>> {
>>   cf = 1.0i;
>> }
>>
>> $ arm-linux-gnueabihf-gcc -S test.c
>> during RTL pass: expand
>> test.c: In function 'foo':
>> test.c:5:6: internal compiler error: in gen_movsf, at config/arm/arm.md:7003
>>     5 |   cf = 1.0i;
>>       |   ~~~^~~~~~
>> 0x7ba475 gen_movsf(rtx_def*, rtx_def*)
>> 	../../gcc-trunk/gcc/config/arm/arm.md:7003
>> 0xa49587 insn_gen_fn::operator()(rtx_def*, rtx_def*) const
>> 	../../gcc-trunk/gcc/recog.h:318
>> 0xa49587 emit_move_insn_1(rtx_def*, rtx_def*)
>> 	../../gcc-trunk/gcc/expr.c:3695
>> 0xa49914 emit_move_insn(rtx_def*, rtx_def*)
>> 	../../gcc-trunk/gcc/expr.c:3791
>> 0xa494f7 emit_move_complex_parts(rtx_def*, rtx_def*)
>> 	../../gcc-trunk/gcc/expr.c:3490
>> 0xa49914 emit_move_insn(rtx_def*, rtx_def*)
>> 	../../gcc-trunk/gcc/expr.c:3791
>> 0xa5106f store_expr(tree_node*, rtx_def*, int, bool, bool)
>> 	../../gcc-trunk/gcc/expr.c:5855
>> 0xa51cc0 expand_assignment(tree_node*, tree_node*, bool)
>> 	../../gcc-trunk/gcc/expr.c:5441
>
> Huh, so why didn't it trigger
>
>       /* Handle misaligned stores.  */
>       mode = TYPE_MODE (TREE_TYPE (to));
>       if ((TREE_CODE (to) == MEM_REF
>            || TREE_CODE (to) == TARGET_MEM_REF)
>           && mode != BLKmode
>           && !mem_ref_refers_to_non_mem_p (to)
>           && ((align = get_object_alignment (to))
>               < GET_MODE_ALIGNMENT (mode))
>           && (((icode = optab_handler (movmisalign_optab, mode))
>                != CODE_FOR_nothing)
>               || targetm.slow_unaligned_access (mode, align)))
>         {
>
> ?  (_Complex float is 32bit aligned it seems, the DECL_RTL for the
> var is (mem/c:SC (symbol_ref:SI ("cf") [flags 0x2]
> <var_decl 0x2aaaaaad1240 cf>) [1 cf+0 S8 A8]), SCmode is 32bit aligned.)
>
> Ah, 'to' is a plain DECL here so the above handling is incomplete.
> IIRC component refs like __real cf = 0.f should be handled fine
> again(?).  So, does adding || DECL_P (to) fix the case as well?
>

So I tried this instead of the varasm.c change:

Index: expr.c
===================================================================
--- expr.c	(revision 274487)
+++ expr.c	(working copy)
@@ -5002,9 +5002,10 @@ expand_assignment (tree to, tree from, bool nontem
   /* Handle misaligned stores.  */
   mode = TYPE_MODE (TREE_TYPE (to));
   if ((TREE_CODE (to) == MEM_REF
-       || TREE_CODE (to) == TARGET_MEM_REF)
+       || TREE_CODE (to) == TARGET_MEM_REF
+       || DECL_P (to))
       && mode != BLKmode
-      && !mem_ref_refers_to_non_mem_p (to)
+      && (DECL_P (to) || !mem_ref_refers_to_non_mem_p (to))
       && ((align = get_object_alignment (to))
 	  < GET_MODE_ALIGNMENT (mode))
       && (((icode = optab_handler (movmisalign_optab, mode))

Result: yes, it fixes this test case, but when I run all of
struct-layout-1.exp there are still cases where we have problems:

In file included from /home/ed/gnu/gcc-build-arm-linux-gnueabihf-linux64/gcc/testsuite/gcc/gcc.dg-struct-layout-1//t024_x.c:8:
/home/ed/gnu/gcc-build-arm-linux-gnueabihf-linux64/gcc/testsuite/gcc/gcc.dg-struct-layout-1//t024_test.h: In function 'test2112':
/home/ed/gnu/gcc-trunk/gcc/testsuite/gcc.dg/compat/struct-layout-1_x1.h:23:10: internal compiler error: in gen_movdf, at config/arm/arm.md:7107
/home/ed/gnu/gcc-trunk/gcc/testsuite/gcc.dg/compat/struct-layout-1_x1.h:62:3: note: in definition of macro 'TX'
/home/ed/gnu/gcc-build-arm-linux-gnueabihf-linux64/gcc/testsuite/gcc/gcc.dg-struct-layout-1//t024_test.h:113:1: note: in expansion of macro 'TCI'
/home/ed/gnu/gcc-build-arm-linux-gnueabihf-linux64/gcc/testsuite/gcc/gcc.dg-struct-layout-1//t024_test.h:113:294: note: in expansion of macro 'F'
0x7ba377 gen_movdf(rtx_def*, rtx_def*)
	../../gcc-trunk/gcc/config/arm/arm.md:7107
0xa494c7 insn_gen_fn::operator()(rtx_def*, rtx_def*) const
	../../gcc-trunk/gcc/recog.h:318
0xa494c7 emit_move_insn_1(rtx_def*, rtx_def*)
	../../gcc-trunk/gcc/expr.c:3695
0xa49854 emit_move_insn(rtx_def*, rtx_def*)
	../../gcc-trunk/gcc/expr.c:3791
0xa49437 emit_move_complex_parts(rtx_def*, rtx_def*)
	../../gcc-trunk/gcc/expr.c:3490
0xa49854 emit_move_insn(rtx_def*, rtx_def*)
	../../gcc-trunk/gcc/expr.c:3791
0xa50faf store_expr(tree_node*, rtx_def*, int, bool, bool)
	../../gcc-trunk/gcc/expr.c:5856
0xa51f34 expand_assignment(tree_node*, tree_node*, bool)
	../../gcc-trunk/gcc/expr.c:5302
0xa51f34 expand_assignment(tree_node*, tree_node*, bool)
	../../gcc-trunk/gcc/expr.c:4983
0x9338af expand_gimple_stmt_1
	../../gcc-trunk/gcc/cfgexpand.c:3777
0x9338af expand_gimple_stmt
	../../gcc-trunk/gcc/cfgexpand.c:3875
0x939221 expand_gimple_basic_block
	../../gcc-trunk/gcc/cfgexpand.c:5915
0x93af86 execute
	../../gcc-trunk/gcc/cfgexpand.c:6538
Please submit a full bug report,

My personal gut feeling is that this will be more fragile than
over-aligning the constants.

>> 0xa51cc0 expand_assignment(tree_node*, tree_node*, bool)
>> 	../../gcc-trunk/gcc/expr.c:4983
>> 0x93396f expand_gimple_stmt_1
>> 	../../gcc-trunk/gcc/cfgexpand.c:3777
>> 0x93396f expand_gimple_stmt
>> 	../../gcc-trunk/gcc/cfgexpand.c:3875
>> 0x9392e1 expand_gimple_basic_block
>> 	../../gcc-trunk/gcc/cfgexpand.c:5915
>> 0x93b046 execute
>> 	../../gcc-trunk/gcc/cfgexpand.c:6538
>> Please submit a full bug report,
>> with preprocessed source if appropriate.
>> Please include the complete backtrace with any bug report.
>> See <https://gcc.gnu.org/bugs/> for instructions.
>>
>> Without the hunk in varasm.c of course.
>>
>> What happens is that expand_expr_real_2 returns an unaligned mem_ref here:
>>
>>     case COMPLEX_CST:
>>       /* Handle evaluating a complex constant in a CONCAT target.  */
>>       if (original_target && GET_CODE (original_target) == CONCAT)
>>         {
>>           [... this path not taken ...]

BTW: this code block executes when the other ICE happens.

>>         }
>>
>>       /* fall through */
>>
>>     case STRING_CST:
>>       temp = expand_expr_constant (exp, 1, modifier);
>>
>>       /* temp contains a constant address.
>>          On RISC machines where a constant address isn't valid,
>>          make some insns to get that address into a register.  */
>>       if (modifier != EXPAND_CONST_ADDRESS
>>           && modifier != EXPAND_INITIALIZER
>>           && modifier != EXPAND_SUM
>>           && ! memory_address_addr_space_p (mode, XEXP (temp, 0),
>>                                             MEM_ADDR_SPACE (temp)))
>>         return replace_equiv_address (temp,
>>                                       copy_rtx (XEXP (temp, 0)));
>>       return temp;
>>
>> The result of expand_expr_real (..., EXPAND_NORMAL) ought to be usable
>> by emit_move_insn; that is expected just *everywhere* and can't be
>> changed.
>>
>> This could probably be fixed in an ugly way in the COMPLEX_CST handler,
>> but OTOH I don't see any reason why this constant has to be misaligned
>> when it can be easily aligned, which avoids the need for a misaligned
>> access.
>
> If the COMPLEX_CST happens to end up in unaligned memory then that's
> of course a bug (unless the target requests that for all COMPLEX_CSTs).
> That is, if the unalignment is triggered because the store is to an
> unaligned decl.
>
> But I think the issue is the above one?
>

Yes, initially the constant seems to be unaligned.  Then it is expanded,
and there is no special handling for unaligned constants in
expand_expr_real, and then probably expand_assignment or store_expr are
not fully prepared for this either.

>>>> Furthermore gcc.dg/Warray-bounds-33.c was fixed by the
>>>> change in expr.c (expand_expr_real_1).  Certainly it is invalid
>>>> to read memory at a function address, but it should not ICE.
>>>> The problem here is that the MEM_REF has no valid MEM_ALIGN; it looks
>>>> like A32, so the misaligned code path is not taken, but it is
>>>> set to A8 below, and then we hit an ICE if the result is used:
>>>
>>> So the user accessed it as A32.
>>>
>>>>     /* Don't set memory attributes if the base expression is
>>>>        SSA_NAME that got expanded as a MEM.  In that case, we should
>>>>        just honor its original memory attributes.  */
>>>>     if (TREE_CODE (tem) != SSA_NAME || !MEM_P (orig_op0))
>>>>       set_mem_attributes (op0, exp, 0);
>>>
>>> Huh, I don't understand this.  'tem' should never be SSA_NAME.
>>
>> tem is the result of get_inner_reference, why can't that be an SSA_NAME?
>
> We can't subset an SSA_NAME.  I have really no idea what this intended
> to do...
>

Nice, so would you do a patch to change that to a
gcc_checking_assert (TREE_CODE (tem) != SSA_NAME)?  Maybe with a small
explanation?
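Something like this, say (just a sketch of what I understood the
suggestion to be):

	  /* The base object returned by get_inner_reference can never
	     be an SSA_NAME here, since we cannot subset an SSA_NAME.  */
	  gcc_checking_assert (TREE_CODE (tem) != SSA_NAME);
	  set_mem_attributes (op0, exp, 0);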
>>> But set_mem_attributes_minus_bitpos uses get_object_alignment_1
>>> and that has special treatment for FUNCTION_DECLs that is not
>>> covered by
>>>
>>>       /* When EXP is an actual memory reference then we can use
>>>          TYPE_ALIGN of a pointer indirection to derive alignment.
>>>          Do so only if get_pointer_alignment_1 did not reveal absolute
>>>          alignment knowledge and if using that alignment would
>>>          improve the situation.  */
>>>       unsigned int talign;
>>>       if (!addr_p && !known_alignment
>>>           && (talign = min_align_of_type (TREE_TYPE (exp)) * BITS_PER_UNIT)
>>>           && talign > align)
>>>         align = talign;
>>>
>>> which could be moved out of the if-cascade.
>>>
>>> That said, setting A8 should eventually result in appropriate
>>> unaligned expansion, so it seems odd this triggers the assert...
>>>
>>
>> The function pointer is really 32-byte aligned in ARM mode to start
>> with...
>>
>> The problem is that the code that handles this misaligned access
>> is skipped because the mem_rtx initially has no MEM_ATTRS, and therefore
>> MEM_ALIGN == 32, and therefore the code that handles the unaligned
>> access is not taken.  BUT before the mem_rtx is returned it is
>> set to MEM_ALIGN = 8 by set_mem_attributes, and we have an assertion,
>> because the result of expand_expr_real (..., EXPAND_NORMAL) ought to be
>> usable with emit_move_insn.
>
> Yes, as said, the _access_ determines the address should be aligned,
> so we shouldn't end up setting MEM_ALIGN to 8 but to 32 according
> to the access type/mode.  But we can't trust DECL_ALIGN of
> FUNCTION_DECLs, but we _can_ trust users writing *(int *)fn
> (maybe for actual accesses we _can_ trust DECL_ALIGN, it's just
> we may not compute nonzero bits for the actual address because
> of function pointer mangling)
> (for accessing function code I'd say this would be premature
> optimization, but ...)
>

Not a very nice solution, but it is not worth spending much effort on
optimizing undefined behavior; I just want to avoid the ICE at this time
and would not trust the DECL_ALIGN either.

>>>> Finally gcc.dg/torture/pr48493.c required the change
>>>> in assign_parm_setup_stack.  This is just not using the
>>>> correct MEM_ALIGN attribute value, while the memory is
>>>> actually aligned.
>>>
>>> But doesn't
>>>
>>>           int align = STACK_SLOT_ALIGNMENT (data->passed_type,
>>>                                             GET_MODE (data->entry_parm),
>>>                                             TYPE_ALIGN (data->passed_type));
>>> +         if (align < (int)GET_MODE_ALIGNMENT (GET_MODE (data->entry_parm))
>>> +             && targetm.slow_unaligned_access (GET_MODE (data->entry_parm),
>>> +                                               align))
>>> +           align = GET_MODE_ALIGNMENT (GET_MODE (data->entry_parm));
>>>
>>> hint at that STACK_SLOT_ALIGNMENT is simply bogus for the target?
>>> That is, the target says, for natural alignment 64 the stack slot
>>> alignment can only be guaranteed 32.  You can't then simply up it
>>> but have to use unaligned accesses (or the target/middle-end needs
>>> to do dynamic stack alignment).
>>>
>> Yes, maybe, but STACK_SLOT_ALIGNMENT is used in a few other places as
>> well, and none of them have a problem, probably because they use
>> expand_expr, but here we use emit_move_insn:
>>
>>       if (MEM_P (src))
>>         {
>>           [...]
>>         }
>>       else
>>         {
>>           if (!REG_P (src))
>>             src = force_reg (GET_MODE (src), src);
>>           emit_move_insn (dest, src);
>>         }
>>
>> So I could restrict that to
>>
>>           if (!MEM_P (data->entry_parm)
>>               && align < (int)GET_MODE_ALIGNMENT (GET_MODE (data->entry_parm))
>>               && ((optab_handler (movmisalign_optab,
>>                                   GET_MODE (data->entry_parm))
>>                    != CODE_FOR_nothing)
>>                   || targetm.slow_unaligned_access (GET_MODE (data->entry_parm),
>>                                                     align)))
>>             align = GET_MODE_ALIGNMENT (GET_MODE (data->entry_parm));
>>
>> But OTOH even for arguments arriving in unaligned stack slots where
>> emit_block_move could handle it, that would just work against the
>> intention of assign_parm_adjust_stack_rtl.
>>
>> Of course there are limits to how much alignment assign_stack_local
>> can handle, and that would result in an assertion in emit_move_insn.
>> But in the end, if that happens it is just an impossible target
>> configuration.
>
> Still I think you can't simply override STACK_SLOT_ALIGNMENT just because
> of the mode of an entry param, can you?  If you can assume a bigger
> alignment then STACK_SLOT_ALIGNMENT should return it.
>

I don't see a real problem here.  All targets except i386 and gcn
(whatever that is) use the default for STACK_SLOT_ALIGNMENT, which simply
allows any (large) align value to rule the effective STACK_SLOT_ALIGNMENT.
The user could have simply declared the local variable with the alignment
that results in better code, FWIW.
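(For reference, the fallback definition of STACK_SLOT_ALIGNMENT, quoting
gcc/defaults.h from memory, so worth double-checking, just defers to
LOCAL_ALIGNMENT and otherwise passes the requested alignment through:

  #ifndef STACK_SLOT_ALIGNMENT
  #define STACK_SLOT_ALIGNMENT(TYPE, MODE, ALIGN) \
    ((TYPE) ? LOCAL_ALIGNMENT ((TYPE), (ALIGN)) : (ALIGN))
  #endif

so on targets using the default, a larger requested ALIGN simply wins.)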
If the stack alignment is too high, that is capped in assign_stack_local:

  /* Ignore alignment if it exceeds MAX_SUPPORTED_STACK_ALIGNMENT.  */
  if (alignment_in_bits > MAX_SUPPORTED_STACK_ALIGNMENT)
    {
      alignment_in_bits = MAX_SUPPORTED_STACK_ALIGNMENT;
      alignment = MAX_SUPPORTED_STACK_ALIGNMENT / BITS_PER_UNIT;
    }

I, for one, would just assume that MAX_SUPPORTED_STACK_ALIGNMENT should
be sufficient for all modes that need movmisalign_optab and friends.
If it is not, an ICE would be just fine.

>>>
>>>> Note that set_mem_attributes does not
>>>> always preserve the MEM_ALIGN of the ref, since:
>>>
>>> set_mem_attributes sets _all_ attributes from an expression or type.
>>>
>>
>> Not really:
>>
>>   refattrs = MEM_ATTRS (ref);
>>   if (refattrs)
>>     {
>>       /* ??? Can this ever happen?  Calling this routine on a MEM that
>>          already carries memory attributes should probably be invalid.  */
>>       [...]
>>       attrs.align = refattrs->align;
>>     }
>>   else
>>     [...]
>>
>>   if (objectp || TREE_CODE (t) == INDIRECT_REF)
>>     attrs.align = MAX (attrs.align, TYPE_ALIGN (type));
>>
>>>>   /* Default values from pre-existing memory attributes if present.  */
>>>>   refattrs = MEM_ATTRS (ref);
>>>>   if (refattrs)
>>>>     {
>>>>       /* ??? Can this ever happen?  Calling this routine on a MEM that
>>>>          already carries memory attributes should probably be invalid.  */
>>>>       attrs.expr = refattrs->expr;
>>>>       attrs.offset_known_p = refattrs->offset_known_p;
>>>>       attrs.offset = refattrs->offset;
>>>>       attrs.size_known_p = refattrs->size_known_p;
>>>>       attrs.size = refattrs->size;
>>>>       attrs.align = refattrs->align;
>>>>     }
>>>>
>>>> but if we happen to set_mem_align to _exactly_ the MODE_ALIGNMENT
>>>> the MEM_ATTRS are zero, and a smaller alignment may result.
>>>
>>> Not sure what you are saying here.  That
>>>
>>>   set_mem_align (MEM:SI A32, 32)
>>>
>>> produces a NULL MEM_ATTRS and thus set_mem_attributes does not inherit
>>> the A32 but eventually computes something lower?  Yeah, that's probably
>>> an interesting "hole" here.  I'm quite sure that if we'd do
>>>
>>>   refattrs = MEM_ATTRS (ref)
>>>              ? MEM_ATTRS (ref) : mem_mode_attrs[(int) GET_MODE (ref)];
>>>
>>> we run into issues exactly on strict-align targets ...
>>>
>>
>> Yeah, that's scary...
>>
>>> @@ -3291,6 +3306,23 @@ assign_parm_setup_reg (struct assign_parm_data_all
>>>
>>>        did_conversion = true;
>>>      }
>>> +  else if (MEM_P (data->entry_parm)
>>> +          && GET_MODE_ALIGNMENT (promoted_nominal_mode)
>>> +             > MEM_ALIGN (data->entry_parm)
>>> +          && (((icode = optab_handler (movmisalign_optab,
>>> +                                       promoted_nominal_mode))
>>> +               != CODE_FOR_nothing)
>>> +              || targetm.slow_unaligned_access (promoted_nominal_mode,
>>> +                                                MEM_ALIGN (data->entry_parm))))
>>> +    {
>>> +      if (icode != CODE_FOR_nothing)
>>> +       emit_insn (GEN_FCN (icode) (parmreg, validated_mem));
>>> +      else
>>> +       rtl = parmreg = extract_bit_field (validated_mem,
>>> +                       GET_MODE_BITSIZE (promoted_nominal_mode), 0,
>>> +                       unsignedp, parmreg,
>>> +                       promoted_nominal_mode, VOIDmode, false, NULL);
>>> +    }
>>>    else
>>>      emit_move_insn (parmreg, validated_mem);
>>>
>>> This hunk would be obvious to me if we'd use MEM_ALIGN (validated_mem) /
>>> GET_MODE (validated_mem) instead of MEM_ALIGN (data->entry_parm)
>>> and promoted_nominal_mode.
>>>
>>
>> Yes, the idea is just to save some cycles, since
>>
>>   parmreg = gen_reg_rtx (promoted_nominal_mode);
>>
>> we know that parmreg will also have that mode, plus
>> emit_move_insn (parmreg, validated_mem), which would be called here,
>> asserts that:
>>
>>   gcc_assert (mode != BLKmode
>>               && (GET_MODE (y) == mode || GET_MODE (y) == VOIDmode));
>>
>> so GET_MODE (validated_mem) == GET_MODE (parmreg) == promoted_nominal_mode.
>>
>> I still like the current version with promoted_nominal_mode slightly
>> better, both because of performance and the 80-column restriction. :)
>
> So if you say they are 1:1 equivalent then go for it (for this hunk,
> approved as "obvious").
>

Okay.  Thanks, so I committed that hunk as r274531.

Here is what I have right now; boot-strapped and reg-tested on
x86_64-pc-linux-gnu and arm-linux-gnueabihf (still running, but looks
good so far).

Is it OK for trunk?


Thanks
Bernd.


[attachment: patch-arm-align-abi.diff]

2019-08-05  Bernd Edlinger  <bernd.edlinger@hotmail.de>

	PR middle-end/89544
	* expr.c (expand_expr_real_1): Handle FUNCTION_DECL as unaligned.
	* function.c (assign_parm_find_stack_rtl): Use larger alignment
	when possible.
	(assign_parm_adjust_stack_rtl): Check movmisalign optab too.
	(assign_parm_setup_stack): Allocate properly aligned stack slots.
	* varasm.c (align_variable): Align constants of misaligned types.
	* config/arm/arm.md (movdi, movsi, movhi, movhf, movsf, movdf): Check
	strict alignment restrictions on memory addresses.
	* config/arm/neon.md (movti, mov<VSTRUCT>, mov<VH>): Likewise.
	* config/arm/vec-common.md (mov<VALL>): Likewise.

testsuite:
2019-08-05  Bernd Edlinger  <bernd.edlinger@hotmail.de>

	PR middle-end/89544
	* gcc.target/arm/unaligned-argument-1.c: New test.
	* gcc.target/arm/unaligned-argument-2.c: New test.

Index: gcc/config/arm/arm.md
===================================================================
--- gcc/config/arm/arm.md	(revision 274531)
+++ gcc/config/arm/arm.md	(working copy)
@@ -5838,6 +5838,12 @@
 	(match_operand:DI 1 "general_operand"))]
   "TARGET_EITHER"
   "
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (DImode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (DImode));
   if (can_create_pseudo_p ())
     {
       if (!REG_P (operands[0]))
@@ -6014,6 +6020,12 @@
   {
   rtx base, offset, tmp;
 
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (SImode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (SImode));
   if (TARGET_32BIT || TARGET_HAVE_MOVT)
     {
       /* Everything except mem = const or mem = mem can be done easily.  */
@@ -6503,6 +6515,12 @@
 	(match_operand:HI 1 "general_operand"))]
   "TARGET_EITHER"
   "
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (HImode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (HImode));
   if (TARGET_ARM)
     {
       if (can_create_pseudo_p ())
@@ -6912,6 +6930,12 @@
 	(match_operand:HF 1 "general_operand"))]
   "TARGET_EITHER"
   "
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (HFmode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (HFmode));
   if (TARGET_32BIT)
     {
       if (MEM_P (operands[0]))
@@ -6976,6 +7000,12 @@
 	(match_operand:SF 1 "general_operand"))]
   "TARGET_EITHER"
   "
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (SFmode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (SFmode));
   if (TARGET_32BIT)
     {
       if (MEM_P (operands[0]))
@@ -7071,6 +7101,12 @@
 	(match_operand:DF 1 "general_operand"))]
   "TARGET_EITHER"
   "
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (DFmode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (DFmode));
   if (TARGET_32BIT)
     {
       if (MEM_P (operands[0]))
Index: gcc/config/arm/neon.md
===================================================================
--- gcc/config/arm/neon.md	(revision 274531)
+++ gcc/config/arm/neon.md	(working copy)
@@ -127,6 +127,12 @@
 	(match_operand:TI 1 "general_operand"))]
   "TARGET_NEON"
 {
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (TImode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (TImode));
   if (can_create_pseudo_p ())
     {
       if (!REG_P (operands[0]))
@@ -139,6 +145,12 @@
 	(match_operand:VSTRUCT 1 "general_operand"))]
   "TARGET_NEON"
 {
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (<MODE>mode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (<MODE>mode));
   if (can_create_pseudo_p ())
     {
       if (!REG_P (operands[0]))
@@ -151,6 +163,12 @@
 	(match_operand:VH 1 "s_register_operand"))]
   "TARGET_NEON"
{
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (<MODE>mode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (<MODE>mode));
   if (can_create_pseudo_p ())
     {
       if (!REG_P (operands[0]))
Index: gcc/config/arm/vec-common.md
===================================================================
--- gcc/config/arm/vec-common.md	(revision 274531)
+++ gcc/config/arm/vec-common.md	(working copy)
@@ -26,6 +26,12 @@
   "TARGET_NEON
    || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode))"
 {
+  gcc_checking_assert (!MEM_P (operands[0])
+		       || MEM_ALIGN (operands[0])
+			  >= GET_MODE_ALIGNMENT (<MODE>mode));
+  gcc_checking_assert (!MEM_P (operands[1])
+		       || MEM_ALIGN (operands[1])
+			  >= GET_MODE_ALIGNMENT (<MODE>mode));
   if (can_create_pseudo_p ())
     {
       if (!REG_P (operands[0]))
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	(revision 274531)
+++ gcc/expr.c	(working copy)
@@ -10796,6 +10796,14 @@ expand_expr_real_1 (tree exp, rtx target, machine_
 	    MEM_VOLATILE_P (op0) = 1;
 	  }
 
+	if (MEM_P (op0) && TREE_CODE (tem) == FUNCTION_DECL)
+	  {
+	    if (op0 == orig_op0)
+	      op0 = copy_rtx (op0);
+
+	    set_mem_align (op0, BITS_PER_UNIT);
+	  }
+
 	/* In cases where an aligned union has an unaligned object
 	   as a field, we might be extracting a BLKmode value from
 	   an integer-mode (e.g., SImode) object.  Handle this case
Index: gcc/function.c
===================================================================
--- gcc/function.c	(revision 274531)
+++ gcc/function.c	(working copy)
@@ -2697,8 +2697,23 @@ assign_parm_find_stack_rtl (tree parm, struct assi
     intentionally forcing upward padding.  Otherwise we have to come
     up with a guess at the alignment based on OFFSET_RTX.  */
   poly_int64 offset;
-  if (data->locate.where_pad != PAD_DOWNWARD || data->entry_parm)
+  if (data->locate.where_pad == PAD_NONE || data->entry_parm)
     align = boundary;
+  else if (data->locate.where_pad == PAD_UPWARD)
+    {
+      align = boundary;
+      /* If the argument offset is actually more aligned than the nominal
+	 stack slot boundary, take advantage of that excess alignment.
+	 Don't make any assumptions if STACK_POINTER_OFFSET is in use.  */
+      if (poly_int_rtx_p (offset_rtx, &offset)
+	  && STACK_POINTER_OFFSET == 0)
+	{
+	  unsigned int offset_align = known_alignment (offset) * BITS_PER_UNIT;
+	  if (offset_align == 0 || offset_align > STACK_BOUNDARY)
+	    offset_align = STACK_BOUNDARY;
+	  align = MAX (align, offset_align);
+	}
+    }
   else if (poly_int_rtx_p (offset_rtx, &offset))
     {
       align = least_bit_hwi (boundary);
@@ -2812,8 +2827,10 @@ assign_parm_adjust_stack_rtl (struct assign_parm_d
     stack slot, if we need one.  */
   if (stack_parm
       && ((GET_MODE_ALIGNMENT (data->nominal_mode) > MEM_ALIGN (stack_parm)
-	   && targetm.slow_unaligned_access (data->nominal_mode,
-					     MEM_ALIGN (stack_parm)))
+	   && ((optab_handler (movmisalign_optab, data->nominal_mode)
+		!= CODE_FOR_nothing)
+	       || targetm.slow_unaligned_access (data->nominal_mode,
+						 MEM_ALIGN (stack_parm))))
 	  || (data->nominal_type
 	      && TYPE_ALIGN (data->nominal_type) > MEM_ALIGN (stack_parm)
 	      && MEM_ALIGN (stack_parm) < PREFERRED_STACK_BOUNDARY)))
@@ -3466,11 +3483,20 @@ assign_parm_setup_stack (struct assign_parm_data_a
 	  int align = STACK_SLOT_ALIGNMENT (data->passed_type,
 					    GET_MODE (data->entry_parm),
 					    TYPE_ALIGN (data->passed_type));
+	  if (align < (int)GET_MODE_ALIGNMENT (GET_MODE (data->entry_parm))
+	      && ((optab_handler (movmisalign_optab,
+				  GET_MODE (data->entry_parm))
+		   != CODE_FOR_nothing)
+		  || targetm.slow_unaligned_access (GET_MODE (data->entry_parm),
+						    align)))
+	    align = GET_MODE_ALIGNMENT (GET_MODE (data->entry_parm));
 	  data->stack_parm
 	    = assign_stack_local (GET_MODE (data->entry_parm),
 				  GET_MODE_SIZE (GET_MODE (data->entry_parm)),
 				  align);
+	  align = MEM_ALIGN (data->stack_parm);
 	  set_mem_attributes (data->stack_parm, parm, 1);
+	  set_mem_align (data->stack_parm, align);
 	}
 
       dest = validize_mem (copy_rtx (data->stack_parm));
Index: gcc/testsuite/gcc.target/arm/unaligned-argument-1.c
===================================================================
--- gcc/testsuite/gcc.target/arm/unaligned-argument-1.c	(revision 0)
+++ gcc/testsuite/gcc.target/arm/unaligned-argument-1.c	(working copy)
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_arm_ok } */
+/* { dg-require-effective-target arm_ldrd_strd_ok } */
+/* { dg-options "-marm -mno-unaligned-access -O3" } */
+
+struct s {
+  int a, b;
+} __attribute__((aligned(8)));
+
+struct s f0;
+
+void f(int a, int b, int c, int d, struct s f)
+{
+  f0 = f;
+}
+
+/* { dg-final { scan-assembler-times "ldrd" 1 } } */
+/* { dg-final { scan-assembler-times "strd" 1 } } */
+/* { dg-final { scan-assembler-times "stm" 0 } } */
Index: gcc/testsuite/gcc.target/arm/unaligned-argument-2.c
===================================================================
--- gcc/testsuite/gcc.target/arm/unaligned-argument-2.c	(revision 0)
+++ gcc/testsuite/gcc.target/arm/unaligned-argument-2.c	(working copy)
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_arm_ok } */
+/* { dg-require-effective-target arm_ldrd_strd_ok } */
+/* { dg-options "-marm -mno-unaligned-access -O3" } */
+
+struct s {
+  int a, b;
+} __attribute__((aligned(8)));
+
+struct s f0;
+
+void f(int a, int b, int c, int d, int e, struct s f)
+{
+  f0 = f;
+}
+
+/* { dg-final { scan-assembler-times "ldrd" 0 } } */
+/* { dg-final { scan-assembler-times "strd" 0 } } */
+/* { dg-final { scan-assembler-times "stm" 1 } } */
Index: gcc/varasm.c
===================================================================
--- gcc/varasm.c	(revision 274531)
+++ gcc/varasm.c	(working copy)
@@ -47,6 +47,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "stmt.h"
 #include "expr.h"
 #include "expmed.h"
+#include "optabs.h"
 #include "output.h"
 #include "langhooks.h"
 #include "debug.h"
@@ -1085,6 +1086,12 @@ align_variable (tree decl, bool dont_output_data)
 	}
     }
 
+  if (align < GET_MODE_ALIGNMENT (DECL_MODE (decl))
+      && ((optab_handler (movmisalign_optab, DECL_MODE (decl))
+	   != CODE_FOR_nothing)
+	  || targetm.slow_unaligned_access (DECL_MODE (decl), align)))
+    align = GET_MODE_ALIGNMENT (DECL_MODE (decl));
+
   /* Reset the alignment in case we have made it tighter, so we can benefit
      from it in get_pointer_alignment.  */
   SET_DECL_ALIGN (decl, align);
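(For reference: in a tree configured for arm-linux-gnueabihf, the two new
tests can be exercised individually with the usual DejaGnu invocation from
the gcc build directory; something like

  make check-gcc RUNTESTFLAGS="arm.exp=unaligned-argument-*.c"

should run just these tests.)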