It has been a while since my last contribution. The following patch allows GCC's optimizers to more aggressively eliminate and optimize Java array bounds checks. The results are quite impressive: for example, a 26% performance improvement on the sieve.java benchmark given at http://keithlea.com/javabench/ on x86_64-pc-linux-gnu, reducing the runtime for 1 million iterations from 31.5 seconds on trunk to 25.0s with this patch, in fact eliminating all array bounds checks. This is close to the 22.3s of an equivalent C/C++ implementation, and significantly closes the gap to the Java HotSpot(TM) JIT at 23.0 seconds.

The approach is to provide sufficient information in the gimple generated by the gcj front-end to allow the optimizers to do their thing. For array allocations of constant length, I propose generating an additional (cheap) write to the array length field returned from _Jv_NewPrimArray, which is then sufficient to allow this value to propagate throughout the optimizers. This is probably best explained by a simple example.
Consider the array initializer below:

    private static int mk1[] = { 71, 85, 95 };

which gets compiled into the Java byte code sequence below:

     0: iconst_3
     1: newarray       int
     3: dup
     4: iconst_0
     5: bipush        71
     7: iastore
     8: dup
     9: iconst_1
    10: bipush        85
    12: iastore
    13: dup
    14: iconst_2
    15: bipush        95
    17: iastore
    18: putstatic     #3        // Field mk1:[I
    21: return

Currently, the .004t.gimple generated by gcj for the array allocation is the cryptic:

    #slot#0#0 = 3;
    #ref#0#2 = _Jv_NewPrimArray (&_Jv_intClass, #slot#0#0);
    #ref#1#4 = #ref#0#2;
    _ref_1_4.6 = #ref#1#4;

which unfortunately doesn't provide many clues for the middle-end, so we end up generating the following .210t.optimized:

    void * _3 = _Jv_NewPrimArray (&_Jv_intClass, 3);
    int _4 = MEM[(struct int[] *)_3].length;
    unsigned int _5 = (unsigned int) _4;
    if (_4 == 0) goto <...>; else goto <...>;
  <...>:
    _Jv_ThrowBadArrayIndex (0);
  <...>:
    MEM[(int *)_3 + 12B] = 71;
    if (_5 == 1) goto <...>; else goto <...>;
  <...>:
    _Jv_ThrowBadArrayIndex (1);
  <...>:
    MEM[(int *)_3 + 16B] = 85;
    if (_5 == 2) goto <...>; else goto <...>;
  <...>:
    _Jv_ThrowBadArrayIndex (2);
  <...>:
    MEM[(int *)_3 + 20B] = 95;
    mk1 = _3;
    return;

which obviously contains three run-time array bounds checks. These same checks appear in the x86_64 assembly language:

        subq    $8, %rsp
        xorl    %eax, %eax
        movl    $3, %esi
        movl    $_Jv_intClass, %edi
        call    _Jv_NewPrimArray
        movl    8(%rax), %edx
        testl   %edx, %edx
        je      .L13
        cmpl    $1, %edx
        movl    $71, 12(%rax)
        je      .L14
        cmpl    $2, %edx
        movl    $85, 16(%rax)
        je      .L15
        movl    $95, 20(%rax)
        movq    %rax, _ZN9TestArray3mk1E(%rip)
        addq    $8, %rsp
        ret
    .L13:
        xorl    %edi, %edi
        xorl    %eax, %eax
        call    _Jv_ThrowBadArrayIndex
    .L15:
        movl    $2, %edi
        xorl    %eax, %eax
        call    _Jv_ThrowBadArrayIndex
    .L14:
        movl    $1, %edi
        xorl    %eax, %eax
        call    _Jv_ThrowBadArrayIndex

With the patch below, we now generate the much more informative .004t.gimple for this:

    D.926 = _Jv_NewPrimArray (&_Jv_intClass, 3);
    D.926->length = 3;

This additional write to the array length field of the newly allocated array enables much more simplification.
The resulting .210t.optimized for our array initialization now becomes:

    struct int[3] * _3;
    _3 = _Jv_NewPrimArray (&_Jv_intClass, 3);
    MEM[(int *)_3 + 8B] = { 3, 71, 85, 95 };
    mk1 = _3;
    return;

And the x86_64 assembly code is also much prettier:

        subq    $8, %rsp
        movl    $3, %esi
        movl    $_Jv_intClass, %edi
        xorl    %eax, %eax
        call    _Jv_NewPrimArray
        movdqa  .LC0(%rip), %xmm0
        movq    %rax, _ZN9TestArray3mk1E(%rip)
        movups  %xmm0, 8(%rax)
        addq    $8, %rsp
        ret
    .LC0:
        .long   3
        .long   71
        .long   85
        .long   95

Achieving this result required two minor tweaks. The first is to allow the array length constant to reach the newarray call, by allowing constants to remain on the quickstack. This allows the call to _Jv_NewPrimArray to have a constant integer argument instead of the opaque #slot#0#0. Then, in the code that constructs the call to _Jv_NewPrimArray, we wrap it in a COMPOUND_EXPR, allowing us to insert the superfluous, but helpful, write to the length field.

Whilst working on this improvement I also noticed that the array bounds checks we were initially generating could themselves be improved. Currently, an array bounds check in .004t.gimple looks like:

    D.925 = MEM[(struct int[] *)_ref_1_4.6].length;
    D.926 = (unsigned int) D.925;
    if (_slot_2_5.9 >= D.926) goto <...>; else goto <...>;
  <...>:
    _Jv_ThrowBadArrayIndex (_slot_2_5.8);
    if (0 != 0) goto <...>; else goto <...>;
  <...>:
    iftmp.7 = 1;
    goto <...>;
  <...>:
    iftmp.7 = 0;
  <...>:

Notice the unnecessary "0 != 0" and the dead assignments to iftmp.7 (which is unused). With the patch below, we not only avoid this conditional but also use __builtin_expect to inform the compiler that throwing a BadArrayIndex exception is typically unlikely, i.e.:

    D.930 = MEM[(struct int[] *)_ref_1_4.4].length;
    D.931 = D.930 <= 1;
    D.932 = __builtin_expect (D.931, 0);
    if (D.932 != 0) goto <...>; else goto <...>;
  <...>:
    _Jv_ThrowBadArrayIndex (0);
  <...>:

The following patch has been tested on x86_64-pc-linux-gnu with a full make bootstrap and make check, with no new failures/regressions.
Please let me know what you think (for stage 1 once it reopens)?

Roger
--
Roger Sayle, Ph.D.
CEO and founder
NextMove Software Limited
Registered in England No. 07588305
Registered Office: Innovation Centre (Unit 23), Cambridge Science Park,
Cambridge CB4 0EY

2016-02-21  Roger Sayle

	* expr.c (push_value): Only call flush_quick_stack for
	non-constant arguments.
	(build_java_throw_out_of_bounds_exception): No longer wrap calls
	to _Jv_ThrowBadArrayIndex in a COMPOUND_EXPR as no longer needed.
	(java_check_reference): Annotate COND_EXPR with __builtin_expect
	to indicate that calling _Jv_ThrowNullPointerException is
	unlikely.
	(build_java_arrayaccess): Construct an unlikely COND_EXPR instead
	of a TRUTH_ANDIF_EXPR in a COMPOUND_EXPR.  Only generate array
	index MULT_EXPR when size_exp is not unity.
	(build_array_length_annotation): When optimizing, generate a
	write to the allocated array's length field to expose constant
	lengths to GCC's optimizers.
	(build_newarray): Call new build_array_length_annotation.
	(build_anewarray): Likewise.