public inbox for gcc-bugs@sourceware.org
From: "cvs-commit at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug libstdc++/77691] [8/9/10/11 regression] experimental/memory_resource/resource_adaptor.cc FAILs
Date: Wed, 13 May 2020 07:51:41 +0000
Message-ID: <bug-77691-4-lEAtdUWUO4@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-77691-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77691

--- Comment #42 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Alexandre Oliva <aoliva@gcc.gnu.org>:

https://gcc.gnu.org/g:883246530f1bb10d854f455e1c3d55b93675690a

commit r11-347-g883246530f1bb10d854f455e1c3d55b93675690a
Author: Alexandre Oliva <oliva@adacore.com>
Date:   Wed May 13 04:49:00 2020 -0300

    x86-vxworks malloc aligns to 8 bytes like solaris

    VxWorks 7's malloc, like Solaris', only ensures 8-byte alignment of
    returned pointers on 32-bit x86, though GCC's stddef.h defines
    max_align_t with 16-byte alignment for __float128.

    This patch enables on x86-vxworks the same memory_resource workaround
    used for x86-solaris.

    The testsuite also had a workaround, defining BAD_MAX_ALIGN_T and
    xfailing the test; extend those to x86-vxworks as well, and remove
    the check for char-aligned requested allocation to be aligned like
    max_align_t.  With that change, the test passes on x86-vxworks; I'm
    guessing that's the same reason for the test not to pass on
    x86-solaris (and on x86_64-solaris -m32), so with the fix, I'm
    tentatively removing the xfail.

    for libstdc++-v3/ChangeLog

        PR libstdc++/77691
        * include/experimental/memory_resource
        (__resource_adaptor_imp::do_allocate): Handle max_align_t on
        x86-vxworks as on x86-solaris.
        (__resource_adaptor_imp::do_deallocate): Likewise.
        * testsuite/experimental/memory_resource/new_delete_resource.cc:
        Drop xfail.
        (BAD_MAX_ALIGN_T): Define on x86-vxworks as on x86-solaris.
        (test03): Drop max-align test for char-aligned alloc.
Thread overview: 17+ messages

[not found] <bug-77691-4@http.gcc.gnu.org/bugzilla/>
2020-05-13  7:51 ` cvs-commit at gcc dot gnu.org [this message]
2020-05-26 10:15 ` cvs-commit at gcc dot gnu.org
2020-10-31 21:16 ` redi at gcc dot gnu.org
2021-02-17 16:30 ` danglin at gcc dot gnu.org
2021-02-17 16:46 ` redi at gcc dot gnu.org
2021-02-17 16:58 ` dave.anglin at bell dot net
2021-03-25 15:13 ` seurer at gcc dot gnu.org
2021-05-14  9:48 ` [Bug libstdc++/77691] [9/10/11/12 regression] jakub at gcc dot gnu.org
2021-06-01  8:08 ` rguenth at gcc dot gnu.org
2022-05-27  9:36 ` [Bug libstdc++/77691] [10/11/12/13 regression] rguenth at gcc dot gnu.org
2022-06-28 10:32 ` jakub at gcc dot gnu.org
2023-01-05 21:26 ` danglin at gcc dot gnu.org
2023-01-08 23:10 ` danglin at gcc dot gnu.org
2023-01-09 11:20 ` redi at gcc dot gnu.org
2023-01-09 22:14 ` dave.anglin at bell dot net
2023-01-12 20:59 ` cvs-commit at gcc dot gnu.org
2023-07-07 10:31 ` [Bug libstdc++/77691] [11/12/13/14 regression] rguenth at gcc dot gnu.org