From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 28700 invoked by alias); 11 Mar 2003 16:06:03 -0000
Mailing-List: contact gcc-prs-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Archive:
List-Post:
List-Help:
Sender: gcc-prs-owner@gcc.gnu.org
Received: (qmail 28574 invoked by uid 71); 11 Mar 2003 16:06:02 -0000
Resent-Date: 11 Mar 2003 16:06:02 -0000
Resent-Message-ID: <20030311160602.28572.qmail@sources.redhat.com>
Resent-From: gcc-gnats@gcc.gnu.org (GNATS Filer)
Resent-Cc: gcc-prs@gcc.gnu.org, gcc-bugs@gcc.gnu.org
Resent-Reply-To: gcc-gnats@gcc.gnu.org, bert@dgb.nl
Received: (qmail 25256 invoked by uid 48); 11 Mar 2003 15:58:42 -0000
Message-Id: <20030311155842.25255.qmail@sources.redhat.com>
Date: Tue, 11 Mar 2003 16:06:00 -0000
From: bert@dgb.nl
Reply-To: bert@dgb.nl
To: gcc-gnats@gcc.gnu.org
X-Send-Pr-Version: gnatsweb-2.9.3 (1.1.1.1.2.31)
Subject: libstdc++/10029: Large files (> 2GB) work with C stdio, not with iostreams
X-SW-Source: 2003-03/txt/msg00576.txt.bz2
List-Id:

>Number:         10029
>Category:       libstdc++
>Synopsis:       Large files (> 2GB) work with C stdio, not with iostreams
>Confidential:   no
>Severity:       critical
>Priority:       medium
>Responsible:    unassigned
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Tue Mar 11 16:06:02 UTC 2003
>Closed-Date:
>Last-Modified:
>Originator:     Bert Bril
>Release:        g++ 3.2.2 and gcc 3.2
>Organization:
>Environment:
Linux 2.4.x and Solaris 2.8
>Description:
I have a test program that writes about 5 GB to a file in C:

#include <stdio.h>
#include <string.h>

#define CHUNKSZ 1048576

int main( int argc, char** argv )
{
    int imb;
    FILE* fp;
    char buf[CHUNKSZ];
    memset( buf, 0, CHUNKSZ );

    if ( argc < 2 )
    {
        fprintf( stderr, "Usage: %s temp_file_name\n", argv[0] );
        return 1;
    }

    fp = fopen( argv[1], "w" );
    for ( imb=0; imb<5000; imb++ )
    {
        if ( imb % 100 == 0 )
            { fprintf( stderr, "." ); fflush( stderr ); }
        fwrite( buf, CHUNKSZ, 1, fp );
    }

    return 0;
}

Compiled with:

gcc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g3 -o tstlargefile_c tstlargefile_c.c

This works fine: it delivers a file of 5 GB, both on Solaris and on Linux.

Rewritten in C++ with iostreams, the equivalent is:

#include <fstream>
#include <iostream>
#include <cstring>
using namespace std;

int main( int argc, char** argv )
{
    const int chunksz = 1048576;
    char buf[chunksz];
    memset( buf, 0, chunksz );

    if ( argc < 2 )
    {
        cerr << "Usage: " << argv[0] << " temp_file_name" << endl;
        return 1;
    }

    ofstream strm( argv[1] );
    for ( int imb=0; imb<5000; imb++ )
    {
        if ( imb % 100 == 0 )
            { cerr << '.'; cerr.flush(); }
        strm.write( buf, chunksz );
    }

    return 0;
}

Compiled with:

g++ -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g3 -o tstlargefile_cc tstlargefile_cc.cc

Now the output is (on Linux):

# ./tstlargefile_cc lf_cc.out
.....................File size limit exceeded
# ls -l lf_cc.out
-rw-rw-r--    1 bert   users   2147483647 2003-02-28 12:04 lf_cc.out

Thus:

(1) This cannot be a 'ulimit', Linux kernel, or filesystem problem: the file
    size limit is 'unlimited' and I am using SuSE Linux 8.1 with reiserfs, and
    if any of those were the cause, the C variant would fail as well.
(2) On Solaris 2.8 I get similar results, only there the program crashes
    quietly.

[Note: this is not purely academic: I'd really like to be able to read & write
arbitrarily large files (at least up to 100 GB) using iostreams.]
>How-To-Repeat:
Ehh, see the description ...
>Fix:
- Note that I posted this on gnu.g++.bug, and later on gnu.g++.help, a week
ago. No reaction.
>Release-Note:
>Audit-Trail:
>Unformatted:
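
A possible interim workaround, sketched under the assumption that routing the
writes through C stdio (which, as the C test above shows, does grow past 2 GB
when built with -D_FILE_OFFSET_BITS=64) avoids the offset limit hit by
basic_filebuf: wrap a FILE* in a minimal std::streambuf and hand that to an
ostream. The class name stdio_obuf below is invented for this sketch and is
not part of libstdc++.

#include <cstdio>
#include <cstring>
#include <iostream>
#include <streambuf>

// Minimal output-only streambuf that forwards everything to a C FILE*.
class stdio_obuf : public std::streambuf
{
public:
    explicit stdio_obuf( std::FILE* fp ) : fp_(fp) {}

protected:
    // Single characters (the streambuf keeps no buffer of its own).
    virtual int overflow( int c )
    {
        if ( c == EOF ) return 0;
        return std::fputc( c, fp_ ) == EOF ? EOF : c;
    }
    // Block writes, as issued by ostream::write().
    virtual std::streamsize xsputn( const char* s, std::streamsize n )
    {
        return std::fwrite( s, 1, n, fp_ );
    }

private:
    std::FILE* fp_;
};

int main( int argc, char** argv )
{
    const int chunksz = 1048576;
    char buf[chunksz];
    std::memset( buf, 0, chunksz );

    if ( argc < 2 )
    {
        std::cerr << "Usage: " << argv[0] << " temp_file_name" << std::endl;
        return 1;
    }

    // fopen() honours -D_FILE_OFFSET_BITS=64, so the file can grow past 2 GB.
    std::FILE* fp = std::fopen( argv[1], "w" );
    if ( !fp ) { std::perror( "fopen" ); return 1; }

    stdio_obuf sb( fp );
    std::ostream strm( &sb );
    for ( int imb=0; imb<5000; imb++ )
        strm.write( buf, chunksz );

    std::fclose( fp );
    return 0;
}

Built with the same g++ command line as above, this should (if the assumption
holds) write the full 5 GB; it is of course only a stopgap until basic_filebuf
itself handles 64-bit offsets.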