Date: Thu, 27 Sep 2018 13:32:00 -0000
From: Jorge D'Elia
To: Richard Biener
Cc: mckinstry@debian.org, Toon Moene, Jerry DeLisle, Damian Rouson,
    Thomas Koenig, "Stubbs, Andrew", Janne Blomqvist, GCC Patches,
    fortran@gcc.gnu.org
Subject: Re: OpenCoarrays integration with gfortran

----- Original Message -----
> From: "Richard Biener"
> To: mckinstry@debian.org
> Cc: "Toon Moene", "Jerry DeLisle", "Damian Rouson", "Thomas Koenig",
>     "Stubbs, Andrew", "Janne Blomqvist", "GCC Patches", fortran@gcc.gnu.org
> Sent: Thursday, September 27, 2018 9:28:47
> Subject: Re: OpenCoarrays integration with gfortran

> On Mon, Sep 24, 2018 at 12:58 PM Alastair McKinstry wrote:
>>
>> On 23/09/2018 10:46, Toon Moene wrote:
>> > On 09/22/2018 01:23 AM, Jerry DeLisle wrote:
>> >
>> > I just installed opencoarrays on my system at home (Debian Testing):
>> >
>> > root@moene:~# apt-get install libcoarrays-openmpi-dev
>> > ...
>> > Setting up libcaf-openmpi-3:amd64 (2.2.0-3) ...
>> > Setting up libcoarrays-openmpi-dev:amd64 (2.2.0-3) ...
>> > Processing triggers for libc-bin (2.27-6) ...
>> >
>> > [ previously this led to apt errors, but not now. ]
>> >
>> > and moved my own installation of the OpenCoarrays-2.2.0.tar.gz out of
>> > the way:
>> >
>> > toon@moene:~$ ls -ld *pen*
>> > drwxr-xr-x 6 toon toon 4096 Aug 10 16:01 OpenCoarrays-2.2.0.opzij
>> > drwxr-xr-x 8 toon toon 4096 Sep 15 11:26 opencoarrays-build.opzij
>> > drwxr-xr-x 6 toon toon 4096 Sep 15 11:26 opencoarrays.opzij
>> >
>> > and recompiled my stuff:
>> >
>> > gfortran -g -fbacktrace -fcoarray=lib random-weather.f90
>> >   -L/usr/lib/x86_64-linux-gnu/open-coarrays/openmpi/lib -lcaf_mpi
>> >
>> > [ Yes, the location of the libs is quite experimental, but OK for the
>> > "Testing" variant of Debian ... ]
>> >
>> > I couldn't find cafrun, but mpirun works just fine:
>> >
>> > toon@moene:~/src$ echo ' &config /' | mpirun --oversubscribe --bind-to
>> > none -np 20 ./a.out
>> > Decomposition information on image  7 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image  6 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 11 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 15 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image  1 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 13 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 12 is 4 * 5 slabs with 21 * 18 grid cells on this image.
>> > Decomposition information on image 20 is 4 * 5 slabs with 21 * 18 grid cells on this image.
>> > Decomposition information on image  9 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 14 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 16 is 4 * 5 slabs with 21 * 18 grid cells on this image.
>> > Decomposition information on image 17 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 18 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image  2 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image  4 is 4 * 5 slabs with 21 * 18 grid cells on this image.
>> > Decomposition information on image  5 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image  3 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image  8 is 4 * 5 slabs with 21 * 18 grid cells on this image.
>> > Decomposition information on image 10 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> > Decomposition information on image 19 is 4 * 5 slabs with 23 * 18 grid cells on this image.
>> >
>> > ... etc. (see http://moene.org/~toon/random-weather.f90).
>> >
>> > I presume other Linux distributors will follow shortly (this *is*
>> > Debian Testing, which can be a bit testy at times - but I do trust my
>> > main business at home on it for over 15 years now).
>> >
>> > Kind regards,
>> >
>> Thanks, good to see it being tested (I'm the Debian/Ubuntu packager).
>>
>> caf / cafrun has been dropped (for the moment?) in favour of mpirun,
>> but I've added pkg-config caf packages so that becomes an option.
>>
>> $ pkg-config caf-mpich --libs
>> -L/usr/lib/x86_64-linux-gnu/open-coarrays/mpich/lib -lcaf_mpich -Wl,-z,relro
>> -lmpich -lm -lbacktrace -lpthread -lrt
>>
>> (My thinking is that, for libraries in particular, the user need not know
>> whether CAF is being used, and if lib foobar uses CAF, then adding a:
>>
>> Requires: caf
>>
>> into the pkg-config file gives you the correct linking transparently.)
>>
>> The "strange" paths are due to Debian's multiarch: it is possible to
>> install libraries for multiple architectures simultaneously. This works
>> fine with pkg-config, cmake, etc. (which allow you to set PKG_CONFIG_PATH
>> and have multiple pkgconfig files for different libs simultaneously), but
>> it currently breaks wrappers such as caf / cafrun.
>>
>> I can add a new package for caf / cafrun but would rather not. (We
>> currently don't do non-MPI CAF builds.)
>>
>> There are currently pkg-config files 'caf-mpich' and 'caf-openmpi' for
>> testing, and I'm adding a default alias caf -> caf-$(default-MPI).
>
> So I've tried packaging OpenCoarrays for SUSE and noticed a few things:
>
> - caf by default links libcaf_mpi statically (why?)
> - the build system makes the libcaf_mpi SONAME dependent on the compiler
>   version(?); I once got libcaf_mpi2 and once libcaf_mpi3 (gcc7 vs. gcc8)
>
> Different SONAMEs definitely make packaging difficult. Of course, given the
> first point, I may very well elide the shared library altogether...?
>
> Other than that it seems to "work" (OBS home:rguenther/OpenCoarrays).
>
> Richard.

The issue of the statically linked libcaf_mpi reminded me of an old email
(from Lisandro Dalcin and me) about coarray and gfortran integration
strategies. I include one part of it below (possibly somewhat outdated);
maybe it can contribute something.

Regards,
Jorge.

#begin
About an MPI-based library for the coarray model:

Nowadays, the coarray model is included in the Fortran 2008 standard.
However, since several Fortran compilers are built on top of C/C++ ones,
there is no native way to add the coarray model. One option is to build an
MPI-based library that supports the coarray model. In this case, the final
library should be as neutral as possible with respect to the end user.
However, because MPI binaries are not yet compatible across the available
MPI implementations, this issue has to be overcome somehow. Among other
possibilities, the following alternatives can be considered, from the
simplest to the more elaborate ones:

(1) A static library libcaf_mpi.a that uses the specific MPI implementation
available when the library is built (e.g. OpenMPI or MPICH). However, it
only works with the MPI implementation available when the library is built.

(2) A dynamic library libcaf_mpi.so that uses the specific MPI
implementation available when the library is built. However, again, it only
works with the MPI implementation available when the library is built. From
a practical point of view, this option is not very different from the
previous one.

(3) A symbolic link libcaf_mpi.so that points to the dynamic library
libcaf_mpich.so or libcaf_openmpi.so. The sysadmin can manage the link
using system tools such as the alternatives mechanism. The GNU/Linux
distributions usually manage this infrastructure by themselves, without
additional work required from the compiler side. However, regular
(non-root) users cannot switch the backend MPI. This is only available on
POSIX systems.
(4) Different dynamic libraries, each named libcaf_mpi.so, built for each
MPI implementation and installed in different directories, e.g.
/mpich/libcaf_mpi.so or /openmpi/libcaf_mpi.so. By using the environment
modules tool, users can select the preferred MPI implementation, e.g.
"module load mpich2-x86_64" in Fedora. This works by adding entries to the
LD_LIBRARY_PATH environment variable. The GNU/Linux distributions usually
manage this infrastructure by themselves and do not require additional work
from the Fortran compiler side. Regular users are able to choose the
preferred MPI implementation, and the dynamic linker loads the appropriate
libcaf_mpi.so.

(5) A dynamic library libcaf_mpi.so built around the dlopen() mechanism.
This option could be practical if the number of MPI functions involved were
not too large, and it can be built on both GNU/Linux and Windows OS.

(6) A dynamic library libcaf_mpi.so that is not linked against the MPI
library itself, but uses dlopen() and dlsym() to load and access the
contents of a specific "MPI-linked" dynamic library, e.g.
libcaf_mpi_mpich2-openmpi-other.so. In this way, libcaf_mpi.so does not
depend on any MPI implementation, but acts as a thin wrapper around the
specific CAF+MPI library. By using environment variables or rc
configuration files, the user can choose the preferred library to open at
runtime with dlopen() on POSIX systems, or via similar mechanisms on
Windows OS. (A minimal sketch of this idea is given after this excerpt.)
#end
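
To make alternative (6) a bit more concrete, below is a minimal sketch in C
of such a thin wrapper. It only illustrates the dlopen()/dlsym() idea: the
environment variable CAF_MPI_BACKEND, the entry point caf_init, and the
backend file names are placeholders, not the actual libcaf_mpi interface.

  /* Hypothetical sketch of alternative (6): a thin libcaf_mpi wrapper that
   * is not linked against MPI itself, but forwards each call to a backend
   * CAF+MPI library chosen at run time.  All names are illustrative. */
  #include <dlfcn.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void *backend_handle = NULL;

  /* Open the backend library once, selected via an environment variable
   * (which could in turn be set from an rc configuration file). */
  static void *backend(void)
  {
      if (!backend_handle) {
          const char *name = getenv("CAF_MPI_BACKEND");
          if (!name)
              name = "libcaf_mpi_openmpi.so";   /* illustrative default */
          backend_handle = dlopen(name, RTLD_NOW | RTLD_GLOBAL);
          if (!backend_handle) {
              fprintf(stderr, "libcaf_mpi: cannot load %s: %s\n",
                      name, dlerror());
              exit(EXIT_FAILURE);
          }
      }
      return backend_handle;
  }

  /* Wrapper for one entry point: look up the real symbol in the backend
   * and forward the call.  A real wrapper would repeat (or generate) this
   * pattern for every routine of the CAF runtime interface. */
  void caf_init(int *argc, char ***argv)
  {
      void (*real_init)(int *, char ***) =
          (void (*)(int *, char ***)) dlsym(backend(), "caf_init");
      if (!real_init) {
          fprintf(stderr, "libcaf_mpi: missing symbol caf_init: %s\n",
                  dlerror());
          exit(EXIT_FAILURE);
      }
      real_init(argc, argv);
  }

A program linked against such a wrapper could then select its MPI backend
at run time (for example by exporting the environment variable before
mpirun) without being relinked; the cost is one forwarding stub per CAF
runtime routine.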