Subject: Re: [PATCH 1/2] i386: Generate lfence with load/indirect branch/ret [CVE-2020-0551]
From: Jan Beulich <jbeulich@suse.com>
To: Hongtao Liu
Cc: "H.J. Lu", Binutils, "Lu, Hongjiu"
Date: Wed, 25 Mar 2020 11:03:37 +0100
Message-ID: <70acc1ec-ce95-9df7-cdce-1d6c2305b1b4@suse.com>
References: <20200310160528.303613-1-hjl.tools@gmail.com> <20200310160528.303613-2-hjl.tools@gmail.com> <6dcb50a3-d3ad-6abc-8a5f-703373df95a1@suse.com>
List-Id: Binutils mailing list

On 25.03.2020 10:27, Hongtao Liu wrote:
> On Thu, Mar 12, 2020 at 12:17 AM H.J. Lu wrote:
>> On Wed, Mar 11, 2020 at 3:55 AM Jan Beulich wrote:
>>> On 10.03.2020 17:05, H.J. Lu wrote:
>>>> +  dest = i.operands - 1;
>>>> +
>>>> +  /* Check fake imm8 operand and 3 source operands.  */
>>>> +  if ((i.tm.opcode_modifier.immext
>>>> +       || i.tm.opcode_modifier.vexsources == VEX3SOURCES)
>>>> +      && i.types[dest].bitfield.imm8)
>>>> +    dest--;
>>>> +
>>>> +  /* add, or, adc, sbb, and, sub, xor, cmp, test, xchg, xadd  */
>>>> +  if (!any_vex_p
>>>> +      && (i.tm.base_opcode == 0x0
>>>> +          || i.tm.base_opcode == 0x1
>>>> +          || i.tm.base_opcode == 0x8
>>>> +          || i.tm.base_opcode == 0x9
>>>> +          || i.tm.base_opcode == 0x10
>>>> +          || i.tm.base_opcode == 0x11
>>>> +          || i.tm.base_opcode == 0x18
>>>> +          || i.tm.base_opcode == 0x19
>>>> +          || i.tm.base_opcode == 0x20
>>>> +          || i.tm.base_opcode == 0x21
>>>> +          || i.tm.base_opcode == 0x28
>>>> +          || i.tm.base_opcode == 0x29
>>>> +          || i.tm.base_opcode == 0x30
>>>> +          || i.tm.base_opcode == 0x31
>>>> +          || i.tm.base_opcode == 0x38
>>>> +          || i.tm.base_opcode == 0x39
>>>> +          || (i.tm.base_opcode >= 0x84 && i.tm.base_opcode <= 0x87)
>>>> +          || i.tm.base_opcode == 0xfc0
>>>> +          || i.tm.base_opcode == 0xfc1))
>>>> +    return 1;
>>>
>>> Don't quite a few of these fit very well with ...
>>
>> Changed.
>>
>>>> +  /* Check for load instruction.  */
>>>> +  return (i.types[dest].bitfield.class != ClassNone
>>>> +          || i.types[dest].bitfield.instance == Accum);
>>>
>>> ... this generic expression? It would seem to me that only TEST
>>> and XCHG need special casing, for allowing either operand order.
>>> Same seems to apply to quite a few of the special cases in the
>>> big "else" block further up, and even its if() [vldmxcsr] part.
>>
>> Hongtao, can you look into it?
>
> Many instructions take a mem operand both as input and output; they
> also need to be handled. But they don't fit well into generic
> expressions, because they either have only 1 operand, or the mem
> operand is in the dest place.

Well, my earlier reply wasn't quite precise enough, I think. AIUI what
you're trying to exclude are insns which only write their memory
operand.
With the pretty long list of excluded opcodes I wonder whether this can
be re-arranged to use a common pattern (memory operand is destination)
and only exclude the few ones which don't also read this operand. Then
again, by using a few & and | the list above could be shrunk
significantly, and hence may no longer look this odd. (I notice the
committed version has this reduced a little, but not quite as much as
would be possible.)

>>>> +  if (lfence_before_ret != lfence_before_ret_none
>>>> +      && (i.tm.base_opcode == 0xc2
>>>> +          || i.tm.base_opcode == 0xc3
>>>> +          || i.tm.base_opcode == 0xca
>>>> +          || i.tm.base_opcode == 0xcb))
>>>> +    {
>>>> +      if (last_insn.kind != last_insn_other
>>>> +          && last_insn.seg == now_seg)
>>>> +        {
>>>> +          as_warn_where (last_insn.file, last_insn.line,
>>>> +                         _("`%s` skips -mlfence-before-ret on `%s`"),
>>>> +                         last_insn.name, i.tm.name);
>>>> +          return;
>>>> +        }
>>>> +      if (lfence_before_ret == lfence_before_ret_or)
>>>> +        {
>>>> +          /* orl: 0x830c2400.  */
>>>> +          p = frag_more ((flag_code == CODE_64BIT ? 1 : 0) + 4 + 3);
>>>> +          if (flag_code == CODE_64BIT)
>>>> +            *p++ = 0x48;
>>>
>>> Shouldn't this depend on RET's operand size? Likewise wouldn't you
>>> also need to insert 0x66/0x67 in certain cases?
>>
>> Hongtao, can you look into it?
>
> I suppose you mean OR's operand size?

Not exactly - I mean RET's operand size ought to affect the one chosen
for OR.

Jan