public inbox for gcc-bugs@sourceware.org
* [Bug fortran/37644]  New: compiler Segmentation fault
@ 2008-09-24 23:16 rlnaff at usgs dot gov
  2008-09-28 19:58 ` [Bug fortran/37644] " pinskia at gcc dot gnu dot org
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: rlnaff at usgs dot gov @ 2008-09-24 23:16 UTC (permalink / raw)
  To: gcc-bugs

OpenMP directives in conjunction with LAM MPI calls are causing the compiler to
fail:


(bash) niwot.pts/3%  export
LAMHF77=/z/stoch/home/rlnaff/usr/local/bin/gfortran4.3.2
(bash) niwot.pts/3% mpif77 -g -fopenmp -c reorder_parent.f90
reorder_parent.f90:470: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.

(bash) niwot.pts/3% mpif77 -v -save-temps -g -fopenmp -c reorder_parent.f90
Using built-in specs.
Target: i686-pc-linux-gnu
Configured with: /z/stoch/home/rlnaff/gcc_source/gcc-4.3.2//configure
--prefix=/z/stoch/home/rlnaff/usr/local --program-suffix=4.3.2
-with-local-prefix=/z/stoch/home/rlnaff/usr/local --with-gnu-as --with-gnu-ld
--enable-threads --with-cpu-32=cpu --enable-languages=all --with-gmp
--with-mpfr
Thread model: posix
gcc version 4.3.2 (GCC) 
COLLECT_GCC_OPTIONS='-I/usr/local/include' '-pthread' '-v' '-save-temps' '-g'
'-fopenmp' '-c' '-mtune=generic' '-pthread'
 /a/stoch/home/rlnaff/usr/local/bin/../libexec/gcc/i686-pc-linux-gnu/4.3.2/f951
reorder_parent.f90 -quiet -dumpbase reorder_parent.f90 -mtune=generic -auxbase
reorder_parent -g -version -fopenmp -I/usr/local/include
-fintrinsic-modules-path
/a/stoch/home/rlnaff/usr/local/bin/../lib/gcc/i686-pc-linux-gnu/4.3.2/finclude
-o reorder_parent.s
GNU F95 (GCC) version 4.3.2 (i686-pc-linux-gnu)
        compiled by GNU C version 4.3.2, GMP version 4.2.2, MPFR version 2.3.1.
GGC heuristics: --param ggc-min-expand=99 --param ggc-min-heapsize=129368
reorder_parent.f90:470: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.


-- 
           Summary: compiler Segmentation fault
           Product: gcc
           Version: 4.3.2
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: fortran
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: rlnaff at usgs dot gov
 GCC build triplet: see description below
  GCC host triplet: see description below
GCC target triplet: see description below


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] compiler Segmentation fault
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
@ 2008-09-28 19:58 ` pinskia at gcc dot gnu dot org
  2008-09-29 13:56 ` rlnaff at usgs dot gov
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: pinskia at gcc dot gnu dot org @ 2008-09-28 19:58 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #1 from pinskia at gcc dot gnu dot org  2008-09-28 19:57 -------
We need the source to figure out what is going wrong here.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] compiler Segmentation fault
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
  2008-09-28 19:58 ` [Bug fortran/37644] " pinskia at gcc dot gnu dot org
@ 2008-09-29 13:56 ` rlnaff at usgs dot gov
  2008-11-05  4:12 ` kargl at gcc dot gnu dot org
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: rlnaff at usgs dot gov @ 2008-09-29 13:56 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #2 from rlnaff at usgs dot gov  2008-09-29 13:54 -------
Subject: Re:  compiler Segmentation fault

This was an experiment with the combined usage of OpenMP and MPI, and may
not be feasible.  Even so, the compiler should not simply fail.   R. Naff

pinskia at gcc dot gnu dot org wrote:
> ------- Comment #1 from pinskia at gcc dot gnu dot org  2008-09-28 19:57 -------
> We need the source to figure out what is going wrong here.
>
>
>   

module common_parameters
  implicit none
  ! OMPI? include '/usr/include/mpif.h'
  ! include 'mpif.h'
  ! ... kv: precision of variables used in assembly
  integer, parameter :: kv=selected_real_kind(p=10)
  ! ... common numbers
  real(kind=kv), parameter :: n0=0.0_kv, n1=1.0_kv, n2=2.0_kv, n3=3.0_kv, &
       n4=4.0_kv, n5=5.0_kv, n6=6.0_kv, n7=7.0_kv, n8=8.0_kv, n9=9.0_kv, &
       n10=10.0_kv, n100=100.0_kv
  ! ... common fractions
  real(kind=kv), parameter :: f2=0.5_kv, f3=n1/n3, f4=0.25_kv, f5=0.2_kv, &
       f6=n1/n6, f7=n1/n7, f8=0.125_kv, f9=n1/n9, f10=0.1_kv
  ! ... machine smallest number
  real(kind=kv), parameter :: machine_epsilon=epsilon(n0)
  real(kind=kv), parameter :: small=n100*machine_epsilon
  real(kind=kv), save :: MZ=tiny(n0)
  ! ... interim print
  character(len=32) :: file_name
  integer :: interim_print, data_print
end module common_parameters



! ... File shared_common_parent.f90
! ...
! ... Version last modified: R.L. Naff, 07/06
! ... Purpose: Allow for the transfer of information between modules
! ... and subroutines of "parent" type.
! ...
! ... Utilization: use "module name"
! ...
! ... Modules herein:
! ...   common_input_types_parent
! ...   common_partition_types_parent
! ...   common_MPI_types_parent
! ...   

module common_input_types_parent
  use common_parameters
  implicit none
  integer :: n_x, n_y, n_z
end module common_input_types_parent

module common_partition_types_parent
  use common_parameters
  implicit none
  integer, save :: max_part, npx, npy, npz
  integer, save :: ind_rot_rn, dim, no_rows
  integer, save :: tot_variables
  integer, save :: no_partitions, red_count, max_C
  integer, save :: red_part_count, red_node_count, black_node_count
  integer, save :: max_nodes_A, max_nx, max_ny, max_nz
  ! ... arrays
  integer, dimension(:), allocatable :: perm_p, inv_perm_p
  integer, dimension(:), allocatable :: part_end
  integer, dimension(:), allocatable :: perm
end module common_partition_types_parent

module common_reorder_types_parent
  use common_parameters
  implicit none
  integer, dimension(:), allocatable, target :: ii_1, ii_2, ii_3
  real(kind=kv), dimension(:), allocatable, target :: C1, C2, C3, &
         CC_1, CC_2, CC_3, coef
end module common_reorder_types_parent

module common_MPI_types_parent
  integer :: pc_intracomm, pc_intra_root
end module common_MPI_types_parent

module reorder_parent
  ! ... Version last modified: R.L. Naff, 02/07
  ! ... Purpose: reorder stiffness coefficients into partitions and 
  ! ... send coefficients to children (slaves).
  ! ...
  ! ... Subroutines herein:
  ! ...   subroutine AC_reorder
  ! ...     Called from subroutine MS_PCG_solve, module MS_PCG_parent
  ! ...     Sends or BCasts to Child: coef, C1, C2, C3, 
  ! ...       CC_1, CC_2, CC_3 (surrogates for hcoef, C_x, C_y, C_z 
  ! ...       and part_con arrays).
  ! ...
  ! ...
  use omp_lib
  use common_parameters
  use common_input_types_parent
  ! ... n_x, n_y, n_z
  use common_partition_types_parent
  use common_reorder_types_parent
  use common_MPI_types_parent
  !tmp use utilities_parent
  !tmp use error_handler
  ! ... pointer arrays holding incoming coefficients
  real, save, pointer, dimension(:) :: Cii, Cjj, Ckk, hcoef
  ! ... Arrays pointed in MS_PCG_solve
  ! ...
contains

  subroutine AC_reorder(i_bound, ib0_count)
    ! ... Based on domain partitioning, rearrange coefficients and 
    ! ... assign to a process.
    ! ...
    ! ... Argument list
    ! ...
    integer, intent(out) :: ib0_count
    integer, dimension(:), intent(in) :: i_bound
    ! ...
    ! ... local variables
    ! ...
    integer :: p, i, j, k, ii, jj, kk, i_org, xyz_loc
    integer :: i_1, i_2, i_3, np1, np2, np3
    integer :: d_1s, d_2s, d_3s, i_range=range(n1)
    integer :: n_1, n_2, n_3, d_1, d_2, d_3
    integer :: node, row_ct, level_ct, A_nodes
    integer :: pn_count, ls1, ls2, ls3, e_1, e_2, e_3
    integer :: ierr, error, tag_out, a_size
    integer :: i11, i22, i33, int_real_type
    integer, dimension(1:3) :: C_count
    integer, pointer, dimension(:) :: I_point
    real(kind=kv) :: C11, C22, C33, t_num
    real :: one=1.0, neg_one=-1.0
    real(kind=kv), pointer, dimension(:) :: R_point
    character(len=64) :: err_loc_message
    ! ........................................................................
    ! ... allocate work space
    error=0; t_num=n10**(-i_range/2)
    nullify (R_point)
    ! ...
    call rot_rn(1, n_x, n_x*n_y, e_1, e_2, e_3)
    ! ...
    call rot_rn(n_x, n_y, n_z, n_1, n_2, n_3)
    call rot_rn(npx, npy, npz, np1, np2, np3)
    d_1s=nint(real(n_1)/np1)
    d_2s=nint(real(n_2)/np2)
    d_3s=nint(real(n_3)/np3)
    ! ...
    ! ... main partitions
    ! ...
    pn_count=0; ib0_count=0
    call OMP_SET_NUM_THREADS(4)
!$OMP PARALLEL DEFAULT(private) SHARED(no_partitions, ind_rot_rn) &
!$OMP SHARED(npx, npy, npz, np1, np2, np3) &
!$OMP SHARED(n_1, n_2, n_3, e_1, e_2, e_3, d_1s, d_2s, d_3s) &
!$OMP SHARED(inv_perm_p, perm, i_bound, Cii, Cjj, Ckk, hcoef) 
!$OMP DO
    do p=1, no_partitions
       C_count=0
       ! ... (ii, jj, kk): regular, x first z last, partition numbering
       xyz_loc=inv_perm_p(p)
       kk=(xyz_loc-1)/(npx*npy)+1
       jj=(xyz_loc-(kk-1)*npx*npy-1)/npx+1
       ii=xyz_loc-(kk-1)*npx*npy-(jj-1)*npx
       call rot_rn(ii, jj, kk, i_1, i_2, i_3)
       ! ...
       d_1=d_1s
       if (i_1==np1) then
          d_1=n_1-d_1s*(np1-1)
          C_count(1)=-1
       endif
       d_2=d_2s
       if (i_2==np2) then
          d_2=n_2-d_2s*(np2-1)
          C_count(2)=-1
       endif
       d_3=d_3s
       if (i_3==np3) then
          d_3=n_3-d_3s*(np3-1)
          C_count(3)=-1
       endif
       ! ... 
       do k=1, d_3
          level_ct=d_1*d_2*(k-1)
          ls3=1 
          if (k==d_3)then
             if (i_3==np3) then
                ! ... external boundary
                ls3=0
             else
                ! ... internal boundary
                ls3=2
             endif
          endif
          do j=1, d_2
             row_ct=d_1*(j-1)
             ls2=1
             if (j==d_2) then
                if (i_2==np2) then
                   ! ... external boundary
                   ls2=0
                else
                   ! ... internal boundary
                   ls2=2
                endif
             endif
             do i=1, d_1
                ls1=1
                if (i==d_1) then
                   if (i_1==np1) then
                      ! ... external boundary
                      ls1=0
                   else
                      ! ... internal boundary
                      ls1=2
                   endif
                endif
                node=level_ct+row_ct+i
                i_org=perm(node+pn_count)
                ! ... Assign higher precision to coef
                coef(node)=hcoef(i_org)
                C11=n0; C22=n0; C33=n0
                ! ... Assign higher precision to C11, C22 and C33
                ! ... Note: all coefficients on external boundaries
                ! ...   assigned null value.
                if (i_bound(i_org)>0) then
                   if (ls1>0) then
                      if (i_bound(i_org+e_1)/=0) C11=sign(Cii(i_org),one)
                   endif
                   if (ls2>0) then
                      if (i_bound(i_org+e_2)/=0) C22=sign(Cjj(i_org),one)
                   endif
                   if (ls3>0) then
                      if (i_bound(i_org+e_3)/=0) C33=sign(Ckk(i_org),one)
                   endif
                elseif (i_bound(i_org)==0) then
                   ib0_count=ib0_count+1
                else
                   ! ... assign constant-value cells a negative value
                   if (ls1>0) then
                      if (i_bound(i_org+e_1)/=0) C11=sign(Cii(i_org),neg_one)
                   endif
                   if (ls2>0) then
                      if (i_bound(i_org+e_2)/=0) C22=sign(Cjj(i_org),neg_one)
                   endif
                   if (ls3>0) then
                      if (i_bound(i_org+e_3)/=0) C33=sign(Ckk(i_org),neg_one)
                   endif
                   if (C11==n0.and.C22==n0.and.C33==n0) then
                      ! ... If no nonzero coefficients present, then assign 
                      ! ...  C11 a very small negative number.
                      C11=-t_num; ls1=1
                   endif
                endif
                ! ... Assign values to internal-boundary indicator array.
                ! ... Value assigned indicates node type across boundary.
                i11=1; i22=1; i33=1
                if (ls1==2) then
                   if (i_bound(i_org+e_1)==0) then
                      i11=0
                   elseif (i_bound(i_org+e_1)<0) then
                      i11=-1
                   endif
                endif
                if (ls2==2) then
                   if (i_bound(i_org+e_2)==0) then
                      i22=0
                   elseif (i_bound(i_org+e_2)<0) then
                      i22=-1
                   endif
                endif
                if (ls3==2) then
                   if (i_bound(i_org+e_3)==0) then
                      i33=0
                   elseif (i_bound(i_org+e_3)<0) then
                      i33=-1
                   endif
                endif                   
                ! ... Insert final coefficients into C1, C2 and C3 arrays
                ! ... or into connector coefficient arrays (lsx=2).
                call insert_coef(C11, C22, C33, i11, i22, i33, node)
                ! ... 
             enddo
          enddo
       enddo
       ! ... 
       A_nodes=d_1*d_2*d_3
       pn_count=pn_count+A_nodes
       ! xxx if (C_count(1)/=d_2*d_3) print*,'C_count(1)=',C_count(1), &
       ! xxx      ' face value=',d_2*d_3
       ! xxx if (C_count(2)/=d_1*d_3) print*,'C_count(2)=',C_count(2), &
       ! xxx      ' face value=',d_1*d_3
       ! xxx if (C_count(3)/=d_1*d_2) print*,'C_count(3)=',C_count(3), &
       ! xxx      ' face value=',d_1*d_2
       ! ...
       ! ... Send A partitions to spawned processes
       ! ... Received in subroutine coef_recv, module PCG_solve_child
       ! ...
       tag_out=1211
       R_point=>coef(1:A_nodes)
       call MPI_SSEND(R_point, A_nodes, MPI_DOUBLE_PRECISION, p, &
            tag_out, pc_intracomm, ierr)
       call error_class(pc_intracomm, ierr)
       tag_out=1212
       R_point=>C1(1:A_nodes)
       call MPI_SSEND(R_point, A_nodes, MPI_DOUBLE_PRECISION, p, &
            tag_out, pc_intracomm, ierr)
       call error_class(pc_intracomm, ierr)
       C1=n0
       if (dim>1) then
          tag_out=1213
          R_point=>C2(1:A_nodes)
          call MPI_SSEND(R_point, A_nodes, MPI_DOUBLE_PRECISION, p, &
               tag_out, pc_intracomm, ierr)
          call error_class(pc_intracomm, ierr)
          C2=n0
          if (dim>2) then
             tag_out=1214
             R_point=>C3(1:A_nodes)
             call MPI_SSEND(R_point, A_nodes, MPI_DOUBLE_PRECISION, p, &
                  tag_out, pc_intracomm, ierr)
             call error_class(pc_intracomm, ierr)
             C3=n0
          endif
       endif
       ! ...
       ! ... Send connectors to partitions
       ! ... Received in subroutine coef_recv, module PCG_solve_child
       ! ...
       if (C_count(1)>0) then
          err_loc_message='reorder_parent AC_reorder MPI_SEND 1'
          R_point=>CC_1(1:C_count(1))
          I_point=>ii_1(1:C_count(1))
          ! ...
          ! ??? 02/14/07
          ! ??? The following MPI structure is malfunctioning for unknown
          ! ??? reasons; using MPI sends 12150 and 12151 instead.
          ! ??? 04/04/07 now functioning
          tag_out=1215
          int_real_type=MPI_struct_int_real_array(I_point,R_point)
          call MPI_SSEND(I_point, 1, int_real_type, p, &
               tag_out, pc_intracomm, ierr)
          call error_class(pc_intracomm, ierr, err_loc_message)
          call MPI_TYPE_FREE(int_real_type,ierr)
          ! ...
          ! xxx tag_out=12150
          ! xxx call MPI_SEND(I_point, C_count(1), MPI_INTEGER, p, &
          ! xxx      tag_out, pc_intracomm, ierr)
          ! xxx call error_class(pc_intracomm, ierr, err_loc_message)
          ! xxx tag_out=12151
          ! xxx call MPI_SEND(R_point, C_count(1), MPI_DOUBLE_PRECISION, p, &
          ! xxx      tag_out, pc_intracomm, ierr)
          ! xxx call error_class(pc_intracomm, ierr, err_loc_message)
          ! ...
          CC_1=n0; II_1=0
       endif
       ! ...
       if (C_count(2)>0) then
          err_loc_message='reorder_parent AC_reorder MPI_SEND 2'
          R_point=>CC_2(1:C_count(2))
          I_point=>ii_2(1:C_count(2))
          ! ... 
          ! ??? 02/14/07
          ! ??? The following MPI structure is malfunctioning for unknown
          ! ??? reasons; using MPI sends 12160 and 12161 instead.
          ! ??? 04/04/07 now functioning
          int_real_type=MPI_struct_int_real_array(I_point,R_point)
          tag_out=1216
          call MPI_SSEND(I_point, 1, int_real_type, &
               p, tag_out, pc_intracomm, ierr)
          call error_class(pc_intracomm, ierr, err_loc_message)
          call MPI_TYPE_FREE(int_real_type,ierr)
          ! ...
          ! xxx tag_out=12160
          ! xxx call MPI_SEND(I_point, C_count(2), MPI_INTEGER, p, &
          ! xxx      tag_out, pc_intracomm, ierr)
          ! xxx call error_class(pc_intracomm, ierr, err_loc_message)
          ! xxx tag_out=12161
          ! xxx call MPI_SEND(R_point, C_count(2), MPI_DOUBLE_PRECISION, p, &
          ! xxx      tag_out, pc_intracomm, ierr)
          ! xxx call error_class(pc_intracomm, ierr, err_loc_message)
          ! ...
          CC_2=n0; II_2=0
       endif
       ! ...
       if (C_count(3)>0) then
          err_loc_message='reorder_parent AC_reorder MPI_SEND 3'
          R_point=>CC_3(1:C_count(3))
          I_point=>ii_3(1:C_count(3))
          ! ...
          ! ??? 02/14/07
          ! ??? The following MPI structure is malfunctioning for unknown
          ! ??? reasons; using MPI sends 12170 and 12171 instead.
          ! ??? 04/04/07 now functioning
          tag_out=1217
          int_real_type=MPI_struct_int_real_array(I_point,R_point)
          call MPI_SSEND(I_point, 1, int_real_type, p, &
               tag_out, pc_intracomm, ierr)
          call error_class(pc_intracomm, ierr, err_loc_message)
          call MPI_TYPE_FREE(int_real_type,ierr)
          ! ...
          ! xxx tag_out=12170
          ! xxx call MPI_SEND(I_point, C_count(3), MPI_INTEGER, p, &
          ! xxx      tag_out, pc_intracomm, ierr)
          ! xxx call error_class(pc_intracomm, ierr, err_loc_message)
          ! xxx tag_out=12171
          ! xxx call MPI_SEND(R_point, C_count(3), MPI_DOUBLE_PRECISION, p, &
          ! xxx      tag_out, pc_intracomm, ierr)
          ! xxx call error_class(pc_intracomm, ierr, err_loc_message)
          ! ...
          CC_3=n0; II_3=0 
       endif
       nullify (I_point, R_point)
       ! ...
       if (p<no_partitions) then
          C1=n0; C2=n0; C3=n0
          ! xxx ii_1=1; ii_2=1; ii_1=1; CC_1=n0; CC_2=n0; CC_3=n0
       endif
    enddo ! end outer partition loop
!$OMP END DO
!$OMP END PARALLEL 
    nullify (R_point)
    ! ...
  contains
    ! ...
    subroutine insert_coef(C11,C22,C33,i11,i22,i33,node)
      ! ...
      ! ... Argument list
      ! ...
      real(kind=kv), intent(in) :: C11,C22,C33
      integer, intent(in) :: i11,i22,i33
      integer, intent(in) :: node
      ! ......................................................
      ! ... 
      if (dim>2) then
         ! ... 3-D
         if (ls1==1) then
            C1(node)=C11
         elseif (ls1==2) then
            C_count(1)=C_count(1)+1
            CC_1(C_count(1))=C11
            ii_1(C_count(1))=i11
         endif
         if (ls2==1) then
            C2(node)=C22
         elseif (ls2==2) then
            C_count(2)=C_count(2)+1
            CC_2(C_count(2))=C22
            ii_2(C_count(2))=i22
         endif
         if (ls3==1) then
            C3(node)=C33
         elseif (ls3==2) then
            C_count(3)=C_count(3)+1
            CC_3(C_count(3))=C33
            ii_3(C_count(3))=i33
         endif
      elseif (dim>1) then
         ! ... 2-D slice
         if (ls1==1) then
            C1(node)=C11
         elseif (ls1==2) then
            C_count(1)=C_count(1)+1
            CC_1(C_count(1))=C11
            ii_1(C_count(1))=i11
         endif
         if (ls2==1) then
            C2(node)=C22
         elseif (ls2==2) then
            C_count(2)=C_count(2)+1
            CC_2(C_count(2))=C22
            ii_2(C_count(2))=i22
         endif
      else
         ! ... 1-D
         if (ls1==1) then
            C1(node)=C11
         elseif (ls1==2) then
            C_count(1)=C_count(1)+1
            CC_1(C_count(1))=C11
            ii_1(C_count(1))=i11
         endif
      endif
      ! ...
    end subroutine insert_coef
    ! ...
  end subroutine AC_reorder

  function MPI_struct_int_real_array(indx,value) result(type_int_real)
    ! ... Purpose: Build an MPI structure consisting of an integer array and 
    ! ... a double precision real array.
    ! ... Explicit interface required: assumed-shape arrays INDX and VALUE.
    ! ...
    ! ... argument list
    ! ...
    integer, dimension(:) :: indx
    real(kind=kv), dimension(:) :: value
    ! ...
    ! ... result
    ! ...
    integer :: type_int_real
    ! ...  
    ! ... local variables
    ! ...
    integer, dimension(1:2) :: blks, types, displs
    integer :: ierr, i_size, v_size, start_address, address
    character(len=64) :: err_loc_message= &
         'PCG_solve_child MPI_struct_int_real_array MPI_TYPE_COMMIT 1'
    ! .......................................................................
    type_int_real=0
    i_size=size(indx); v_size=size(value)
    blks=(/i_size, v_size/)
    types=(/MPI_INTEGER, MPI_DOUBLE_PRECISION/)
    displs(1)=0
    call MPI_ADDRESS(indx(1), start_address, ierr)
    call MPI_ADDRESS(value(1), address, ierr)
    displs(2)=address-start_address
    call MPI_TYPE_STRUCT(2, blks, displs, types, type_int_real, ierr)
    call MPI_TYPE_COMMIT(type_int_real,ierr)
    call error_class(pc_intracomm, ierr, err_loc_message)
    ! ...
  end function MPI_struct_int_real_array

end module reorder_parent
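
A side note on MPI_struct_int_real_array above: MPI_ADDRESS and
MPI_TYPE_STRUCT are the deprecated MPI-1 forms, and their default-INTEGER
displacements can overflow on 64-bit hosts.  Below is a minimal sketch of the
same construction in MPI-2 style, assuming a library that provides the mpi
module with MPI_GET_ADDRESS and MPI_TYPE_CREATE_STRUCT; this is an editor's
illustration, not part of the reported code:

function struct_int_real_mpi2(indx, value) result(newtype)
  use mpi            ! assumption: an MPI-2 library supplying the 'mpi' module
  implicit none
  integer, dimension(:), intent(in) :: indx
  double precision, dimension(:), intent(in) :: value
  integer :: newtype
  ! ... local variables
  integer :: ierr
  integer, dimension(2) :: blks, types
  integer(kind=MPI_ADDRESS_KIND) :: base, addr
  integer(kind=MPI_ADDRESS_KIND), dimension(2) :: displs
  blks  = (/ size(indx), size(value) /)
  types = (/ MPI_INTEGER, MPI_DOUBLE_PRECISION /)
  ! ... displacements measured from the start of the integer array
  call MPI_GET_ADDRESS(indx(1), base, ierr)
  call MPI_GET_ADDRESS(value(1), addr, ierr)
  displs = (/ 0_MPI_ADDRESS_KIND, addr - base /)
  call MPI_TYPE_CREATE_STRUCT(2, blks, displs, types, newtype, ierr)
  call MPI_TYPE_COMMIT(newtype, ierr)
end function struct_int_real_mpi2

The only substantive change from the original is carrying displacements in
INTEGER(MPI_ADDRESS_KIND) rather than default INTEGER.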


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] compiler Segmentation fault
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
  2008-09-28 19:58 ` [Bug fortran/37644] " pinskia at gcc dot gnu dot org
  2008-09-29 13:56 ` rlnaff at usgs dot gov
@ 2008-11-05  4:12 ` kargl at gcc dot gnu dot org
  2008-11-05  7:05 ` burnus at gcc dot gnu dot org
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: kargl at gcc dot gnu dot org @ 2008-11-05  4:12 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #3 from kargl at gcc dot gnu dot org  2008-11-05 04:10 -------
Can you attach the code instead of embedding it in a message?

It's a PITA to strip the HTML out of your code when I save it as a file.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] compiler Segmentation fault
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
                   ` (2 preceding siblings ...)
  2008-11-05  4:12 ` kargl at gcc dot gnu dot org
@ 2008-11-05  7:05 ` burnus at gcc dot gnu dot org
  2008-11-05 16:10 ` rlnaff at usgs dot gov
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: burnus at gcc dot gnu dot org @ 2008-11-05  7:05 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #4 from burnus at gcc dot gnu dot org  2008-11-05 07:03 -------
I can reproduce the ICE (segmentation fault) with 4.3.x, but not with 4.4.0.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] compiler Segmentation fault
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
                   ` (3 preceding siblings ...)
  2008-11-05  7:05 ` burnus at gcc dot gnu dot org
@ 2008-11-05 16:10 ` rlnaff at usgs dot gov
  2008-11-05 16:15 ` rlnaff at usgs dot gov
  2009-03-28 12:07 ` [Bug fortran/37644] [4.3 only] ICE on valid OpenMP code fxcoudert at gcc dot gnu dot org
  6 siblings, 0 replies; 8+ messages in thread
From: rlnaff at usgs dot gov @ 2008-11-05 16:10 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #5 from rlnaff at usgs dot gov  2008-11-05 16:08 -------
Subject: Re:  compiler Segmentation fault

Compiled with gfortran 4.3.2


(bash) stoch.pts/10% export 
LAMHF77=/z/stoch/home/rlnaff/usr/local/bin/gfortran4.3.2
(bash) stoch.pts/10% mpif77 -c -fopenmp reorder_parent.f90
reorder_parent.f90:470: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.



kargl at gcc dot gnu dot org wrote:
> ------- Comment #3 from kargl at gcc dot gnu dot org  2008-11-05 04:10 -------
> Can you attach the code instead of embedded in a message?  
>
> It's a PITA to strip out HTML from your code when I save it as a file.
>
>
>   

[The full source listing from comment #2 was repeated here verbatim.]


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] compiler Segmentation fault
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
                   ` (4 preceding siblings ...)
  2008-11-05 16:10 ` rlnaff at usgs dot gov
@ 2008-11-05 16:15 ` rlnaff at usgs dot gov
  2009-03-28 12:07 ` [Bug fortran/37644] [4.3 only] ICE on valid OpenMP code fxcoudert at gcc dot gnu dot org
  6 siblings, 0 replies; 8+ messages in thread
From: rlnaff at usgs dot gov @ 2008-11-05 16:15 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #6 from rlnaff at usgs dot gov  2008-11-05 16:14 -------
Subject: Re:  compiler Segmentation fault

I installed version 4.3.2 for this test; it was what was available at
the time:

(bash) stoch.pts/10% ls -l /z/stoch/home/rlnaff/usr/local/bin/gfortran4.3.2
-rwxr-xr-x 2 rlnaff users 473210 2008-09-24 13:05 
/z/stoch/home/rlnaff/usr/local/bin/gfortran4.3.2*

(bash) stoch.pts/10% export 
LAMHF77=/z/stoch/home/rlnaff/usr/local/bin/gfortran4.3.2
(bash) stoch.pts/10% mpif77 -c -fopenmp reorder_parent.f90
reorder_parent.f90:470: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.


I didn't have access to 4.4.0.

R. Naff

burnus at gcc dot gnu dot org wrote:
> ------- Comment #4 from burnus at gcc dot gnu dot org  2008-11-05 07:03 -------
> I can reproduce the ICE (seg. fault) with 4.3.x, but not with 4.4.0
>
>
>   


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644



* [Bug fortran/37644] [4.3 only] ICE on valid OpenMP code
  2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
                   ` (5 preceding siblings ...)
  2008-11-05 16:15 ` rlnaff at usgs dot gov
@ 2009-03-28 12:07 ` fxcoudert at gcc dot gnu dot org
  6 siblings, 0 replies; 8+ messages in thread
From: fxcoudert at gcc dot gnu dot org @ 2009-03-28 12:07 UTC (permalink / raw)
  To: gcc-bugs



------- Comment #7 from fxcoudert at gcc dot gnu dot org  2009-03-28 12:07 -------
This is a reduced test case; the ICE is confirmed on x86_64-linux with
gfortran 4.3.2, at both -m32 and -m64:

module m
  integer :: n 
contains
  subroutine AC_reorder
    integer :: p
    !$OMP PARALLEL DEFAULT(private)
    !$OMP DO
    do p = 1, n
    enddo
    !$OMP END DO
    !$OMP END PARALLEL 
  contains
    subroutine insert_coef
    end subroutine insert_coef
  end subroutine AC_reorder
end module m

It works with the 4.4 branch and trunk at both -m32 and -m64. The backtrace for
the 4.3.2 segfault is:

#0  get_frame_type (info=0x0) at ../../gcc-4.3.2/gcc/tree-nested.c:198
#1  0x000000000066d0f8 in get_chain_decl (info=0x1504e4c0)
    at ../../gcc-4.3.2/gcc/tree-nested.c:304
#2  0x000000000066e12d in get_nonlocal_debug_decl (info=0x1504e4c0, 
    decl=0x2acef1a6baa0) at ../../gcc-4.3.2/gcc/tree-nested.c:904
#3  0x000000000066f986 in convert_nonlocal_omp_clauses (
    pclauses=<value optimized out>, wi=0x7fffbc6d6e50)
    at ../../gcc-4.3.2/gcc/tree-nested.c:1181
#4  0x000000000066f6a8 in convert_nonlocal_reference (
    tp=<value optimized out>, walk_subtrees=<value optimized out>, 
    data=<value optimized out>) at ../../gcc-4.3.2/gcc/tree-nested.c:1099
#5  0x000000000072e5d7 in walk_tree_1 (tp=0x2acef1a587d0, 
    func=0x66f440 <convert_nonlocal_reference>, data=0x7fffbc6d6e50, pset=0x0, 
    lh=0) at ../../gcc-4.3.2/gcc/tree.c:8415
#6  0x000000000066e5ca in walk_stmts (wi=0x7fffbc6d6e50, tp=0x2acef1a587d0)
    at ../../gcc-4.3.2/gcc/tree-nested.c:640
#7  0x000000000066e774 in walk_stmts (wi=0x7fffbc6d6e50, 
    tp=<value optimized out>) at ../../gcc-4.3.2/gcc/tree-nested.c:575
#8  0x000000000066e61c in walk_stmts (wi=0x7fffbc6d6e50, tp=0x2acef1a70098)
    at ../../gcc-4.3.2/gcc/tree-nested.c:594
#9  0x000000000066e81b in walk_body (callback=<value optimized out>, 
    info=<value optimized out>, stmt_p=0x0)
    at ../../gcc-4.3.2/gcc/tree-nested.c:657

which shows that it is a duplicate of PR36632.

To Richard: in conclusion, this is fixed in 4.4 and will likely not be fixed
on the 4.3.x series, so your best bet is to update to 4.4.0 when it comes out,
which should be soon (a matter of days or a few weeks).
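
For anyone who must stay on 4.3.x in the meantime, one possible workaround,
suggested by the tree-nested.c backtrace above but untested here, is to hoist
the internal procedure out of the subroutine containing the OpenMP region, so
that no nested function is left for tree-nested.c to process.  Applied to the
reduced test case (in the full code, insert_coef would additionally need its
host-associated variables, e.g. ls1/ls2/ls3 and C_count, passed as arguments):

module m
  implicit none
  integer :: n
contains
  subroutine AC_reorder
    integer :: p
    !$OMP PARALLEL DEFAULT(private) SHARED(n)
    !$OMP DO
    do p = 1, n
    enddo
    !$OMP END DO
    !$OMP END PARALLEL
  end subroutine AC_reorder

  ! hoisted from AC_reorder to module scope
  subroutine insert_coef
  end subroutine insert_coef
end module m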

*** This bug has been marked as a duplicate of 36632 ***


-- 

fxcoudert at gcc dot gnu dot org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |RESOLVED
  GCC build triplet|see description below       |
   GCC host triplet|see description below       |
 GCC target triplet|see description below       |
           Keywords|                            |ice-on-valid-code
      Known to fail|                            |4.3.2
      Known to work|                            |4.4.0 4.5.0
         Resolution|                            |DUPLICATE
            Summary|compiler Segmentation fault |[4.3 only] ICE on valid
                   |                            |OpenMP code


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37644




Thread overview: 8+ messages
2008-09-24 23:16 [Bug fortran/37644] New: compiler Segmentation fault rlnaff at usgs dot gov
2008-09-28 19:58 ` [Bug fortran/37644] " pinskia at gcc dot gnu dot org
2008-09-29 13:56 ` rlnaff at usgs dot gov
2008-11-05  4:12 ` kargl at gcc dot gnu dot org
2008-11-05  7:05 ` burnus at gcc dot gnu dot org
2008-11-05 16:10 ` rlnaff at usgs dot gov
2008-11-05 16:15 ` rlnaff at usgs dot gov
2009-03-28 12:07 ` [Bug fortran/37644] [4.3 only] ICE on valid OpenMP code fxcoudert at gcc dot gnu dot org
