public inbox for gcc-bugs@sourceware.org
From: "gjl at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug rtl-optimization/51374] [avr] insn combine reorders volatile memory accesses
Date: Thu, 01 Dec 2011 09:58:00 -0000	[thread overview]
Message-ID: <bug-51374-4-iR8KlzMFLb@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-51374-4@http.gcc.gnu.org/bugzilla/>

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51374

Georg-Johann Lay <gjl at gcc dot gnu.org> changed:

           What    |Removed             |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED         |NEW
      Known to work|                    |4.7.0
           Keywords|                    |wrong-code
   Last reconfirmed|                    |2011-12-01
          Component|c                   |rtl-optimization
                 CC|                    |gjl at gcc dot gnu.org
               Host|i386-redhat-linux   |
     Ever confirmed|0                   |1
            Summary|Volatile access     |[avr] insn combine reorders
                   |reordered.          |volatile memory accesses
   Target Milestone|---                 |4.6.3
      Known to fail|                    |4.6.2

--- Comment #1 from Georg-Johann Lay <gjl at gcc dot gnu.org> 2011-12-01 09:57:16 UTC ---

When you report a bug, please supply the information needed to reproduce it,
such as the compiler switches used, as explained in
http://gcc.gnu.org/bugs.html/#need. Thanks.
To reproduce, optimization must be turned on:

$ avr-gcc-4.6.2 test.c -Os -dp -S

__vector_18:
	in r24,44-0x20	 ;  8	*movqi/4	[length = 1]
	sbis 43-0x20,4	 ;  12	*sbix_branch	[length = 2]
	rjmp .L1
	lds r24,slot.1198	 ;  14	*movhi/2	[length = 4]
	lds r25,slot.1198+1
	ldi r18,hi8(-2)	 ;  15	*cmphi/5	[length = 3]
	cpi r24,lo8(-2)
	cpc r25,r18
	brne .L1	 ;  16	branch	[length = 1]
	ldi r24,lo8(-1)	 ;  18	*movhi/4	[length = 2]
	ldi r25,hi8(-1)
	sts slot.1198+1,r25	 ;  19	*movhi/3	[length = 4]
	sts slot.1198,r24
.L1:
	ret	 ;  29	return	[length = 1]

avr-gcc-4.7.0 (trunk 181838) compiles correctly with -O1/-O2/-O3/-Os, here
with -Os:

__vector_18:
	in r24,0xb	 ;  6	movqi_insn/4	[length = 1]
	in r25,0xc	 ;  8	movqi_insn/4	[length = 1]
	sbrs r24,4	 ;  11	*sbrx_branchqi	[length = 2]
	rjmp .L1
	lds r24,slot.1321	 ;  13	*movhi/3	[length = 4]
	lds r25,slot.1321+1
	adiw r24,2	 ;  14	*cmphi/7	[length = 1]
	brne .L1	 ;  15	branch	[length = 1]
	ldi r24,lo8(-1)	 ;  17	*movhi/5	[length = 2]
	ldi r25,lo8(-1)
	sts slot.1321+1,r25	 ;  18	*movhi/4	[length = 4]
	sts slot.1321,r24
.L1:
	ret	 ;  35	return	[length = 1]
Thread overview: 13+ messages

 2011-12-01  5:01 [Bug c/51374] New: Volatile access reordered andyw at pobox dot com
 2011-12-01  9:58 ` gjl at gcc dot gnu.org [this message]
 2011-12-01  9:59 ` [Bug rtl-optimization/51374] [avr] insn combine reorders volatile memory accesses gjl at gcc dot gnu.org
 2011-12-01 10:08 ` gjl at gcc dot gnu.org
 2011-12-01 13:23 ` andyw at pobox dot com
 2011-12-08 16:48 ` gjl at gcc dot gnu.org
 2011-12-18 20:11 ` gjl at gcc dot gnu.org
 2012-01-13 16:00 ` gjl at gcc dot gnu.org
 2012-01-13 16:18 ` gjl at gcc dot gnu.org
 2012-02-01 11:36 ` gjl at gcc dot gnu.org
 2012-02-01 12:41 ` gjl at gcc dot gnu.org
 2012-02-01 12:47 ` gjl at gcc dot gnu.org
 2012-02-01 12:57 ` gjl at gcc dot gnu.org