From mboxrd@z Thu Jan 1 00:00:00 1970
From: "danielmicay at gmail dot com"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug c/67999] Wrong optimization of pointer comparisons
Date: Mon, 19 Oct 2015 08:41:00 -0000
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
X-SW-Source: 2015-10/txt/msg01465.txt.bz2

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #9 from Daniel Micay ---
(In reply to Florian Weimer from comment #8)
> (In reply to Alexander Cherepanov from comment #4)
>
> > Am I right that the C standards do not allow for such a limitation (and
> > hence this should not be reported to glibc as a bug) and gcc is not
> > standards-compliant in this regard? Or am I missing something?
>
> The standard explicitly acknowledges the possibility of arrays that have
> more than PTRDIFF_MAX elements (it says that the difference of two pointers
> within the same array is not necessarily representable in ptrdiff_t).
> I'm hesitant to put artificial limits into glibc because in the past,
> there was significant demand for huge mappings in 32-bit programs (to the
> degree that Red Hat even shipped special kernels for this purpose).

I don't think there's much of a use case for a single >2G allocation in a
3G or 4G address space. It has a high chance of failure simply due to
virtual memory fragmentation, especially since the kernel's mmap allocation
algorithm is so naive (it keeps going downwards and ignores holes until it
runs out of space, rather than using first-best-fit). Was the demand for a
larger address space, or was it really for the ability to allocate all that
memory in one go?