From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (qmail 18508 invoked by alias); 15 Apr 2011 09:11:09 -0000
Received: (qmail 18490 invoked by uid 22791); 15 Apr 2011 09:11:07 -0000
X-SWARE-Spam-Status: No, hits=-2.9 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00
X-Spam-Check-By: sourceware.org
Received: from localhost (HELO gcc.gnu.org) (127.0.0.1) by sourceware.org (qpsmtpd/0.43rc1) with ESMTP; Fri, 15 Apr 2011 09:11:04 +0000
From: "burnus at gcc dot gnu.org" 
To: gcc-bugs@gcc.gnu.org
Subject: [Bug fortran/25829] [F2003] Asynchronous IO support
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: fortran
X-Bugzilla-Keywords: 
X-Bugzilla-Severity: enhancement
X-Bugzilla-Who: burnus at gcc dot gnu.org
X-Bugzilla-Status: ASSIGNED
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: jb at gcc dot gnu.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Changed-Fields: 
Message-ID: 
In-Reply-To: 
References: 
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
Content-Type: text/plain; charset="UTF-8"
MIME-Version: 1.0
Date: Fri, 15 Apr 2011 09:11:00 -0000
Mailing-List: contact gcc-bugs-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Id: 
List-Archive: 
List-Post: 
List-Help: 
Sender: gcc-bugs-owner@gcc.gnu.org
X-SW-Source: 2011-04/txt/msg01483.txt.bz2

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25829

--- Comment #21 from Tobias Burnus 2011-04-15 09:10:56 UTC ---
(In reply to comment #19)
> A brute-force method would be to add a __sync_synchronize

Actually, this idea does not work properly - neither for
INQUIRE(...,PENDING=) nor for ASYNCHRONOUS with MPI 3. (Cf. link below.)

(In reply to comment #20)
> If ASYNCHRONOUS expands to volatile, no barrier should be necessary.

Well, VOLATILE has the wrong semantics, i.e. it will only partially solve
the problem. Additionally, it creates huge missed-optimization issues.
I have now asked at gcc@ (and fortran@) for suggestions:
http://gcc.gnu.org/ml/fortran/2011-04/msg00143.html

(There is currently also a lively discussion about ASYNCHRONOUS and
nonblocking MPI calls on J3's interop and the MPI Forum's MPI3-Fortran
mailing lists.)