public inbox for gcc-prs@sourceware.org
* Re: libstdc++/10029: Large files (> 2GB) work with C stdio, not with iostreams
@ 2003-03-11 17:14 paolo
0 siblings, 0 replies; 2+ messages in thread
From: paolo @ 2003-03-11 17:14 UTC (permalink / raw)
To: bert, gcc-bugs, gcc-prs, nobody
Synopsis: Large files (> 2GB) work with C stdio, not with iostreams
State-Changed-From-To: open->closed
State-Changed-By: paolo
State-Changed-When: Tue Mar 11 17:14:48 2003
State-Changed-Why:
Known problem, duplicate of libstdc++/8610.
http://gcc.gnu.org/cgi-bin/gnatsweb.pl?cmd=view%20audit-trail&database=gcc&pr=10029
* libstdc++/10029: Large files (> 2GB) work with C stdio, not with iostreams
@ 2003-03-11 16:06 bert
0 siblings, 0 replies; 2+ messages in thread
From: bert @ 2003-03-11 16:06 UTC (permalink / raw)
To: gcc-gnats
>Number: 10029
>Category: libstdc++
>Synopsis: Large files (> 2GB) work with C stdio, not with iostreams
>Confidential: no
>Severity: critical
>Priority: medium
>Responsible: unassigned
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Tue Mar 11 16:06:02 UTC 2003
>Closed-Date:
>Last-Modified:
>Originator: Bert Bril
>Release: g++ 3.2.2 and gcc 3.2
>Organization:
>Environment:
Linux 2.4.x and Solaris 2.8
>Description:
I have a test program that writes about 5 GB to a file in C:
#include <stdio.h>
#include <string.h>  /* for memset */

#define CHUNKSZ 1048576

int main( int argc, char** argv )
{
    int imb;
    FILE* fp;
    char buf[CHUNKSZ];
    memset( buf, 0, CHUNKSZ );
    if ( argc < 2 )
    { fprintf( stderr, "Usage: %s temp_file_name\n", argv[0] ); return 1; }
    fp = fopen( argv[1], "w" );
    for ( imb=0; imb<5000; imb++ )
    {
        if ( imb % 100 == 0 )
        { fprintf( stderr, "." ); fflush( stderr ); }
        fwrite( buf, CHUNKSZ, 1, fp );
    }
    fclose( fp );
    return 0;
}
gcc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g3 -o tstlargefile_c tstlargefile_c.c
This works fine: it delivers a 5 GB file, on both Solaris and Linux.
Rewritten with C++ iostreams, it becomes:
#include <cstring>   // for memset
#include <fstream>
#include <iostream>

using namespace std;

int main( int argc, char** argv )
{
    const int chunksz = 1048576;
    char buf[chunksz];
    memset( buf, 0, chunksz );
    if ( argc < 2 )
    { cerr << "Usage: " << argv[0] << " temp_file_name" << endl; return 1; }
    ofstream strm( argv[1] );
    for ( int imb=0; imb<5000; imb++ )
    {
        if ( imb % 100 == 0 )
        { cerr << '.'; cerr.flush(); }
        strm.write( buf, chunksz );
    }
    return 0;
}
g++ -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g3 -o tstlargefile_cc tstlargefile_cc.cc
Now, the output is (on Linux):
# ./tstlargefile_cc lf_cc.out
.....................File size limit exceeded
# ls -l lf_cc.out
-rw-rw-r-- 1 bert users 2147483647 2003-02-28 12:04 lf_cc.out
Thus:
(1) This cannot be a 'ulimit', Linux kernel, or filesystem problem: the file size limit is set to 'unlimited', I am using SuSE Linux 8.1 with reiserfs, and if it were such a problem the C variant would fail as well.
(2) On Solaris 2.8 I get similar results, except that there the program fails quietly.
[Note: this is not purely academic: I'd really like to be able to read and write arbitrarily large files (at least up to 100 GB) using iostreams]
>How-To-Repeat:
Ehh, see the description ...
>Fix:
-
Note that I posted this to gnu.g++.bug a week ago, and later to gnu.g++.help. No reaction.
>Release-Note:
>Audit-Trail:
>Unformatted: