From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (qmail 17992 invoked by alias); 8 Mar 2006 00:46:40 -0000
Received: (qmail 17970 invoked by uid 48); 8 Mar 2006 00:46:39 -0000
Date: Wed, 08 Mar 2006 00:46:00 -0000
Message-ID: <20060308004639.17969.qmail@sourceware.org>
X-Bugzilla-Reason: CC
References: 
Subject: [Bug libgcj/26483] Wrong parsing of doubles when interpreted on ia64
In-Reply-To: 
Reply-To: gcc-bugzilla@gcc.gnu.org
To: java-prs@gcc.gnu.org
From: "wilson at gcc dot gnu dot org" 
Mailing-List: contact java-prs-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Subscribe: 
List-Archive: 
List-Post: 
List-Help: 
Sender: java-prs-owner@gcc.gnu.org
X-SW-Source: 2006-q1/txt/msg00331.txt.bz2
List-Id: 

------- Comment #10 from wilson at gcc dot gnu dot org  2006-03-08 00:46 -------
I missed the denorm angle, obviously.  And the answer to the question of what
differs between native and interpreted execution is, of course, libffi, which
took me far longer to remember than it should have.

Anyway, looking at libffi, the issue appears to be the stf_spill function in
src/ia64/ffi.c.  This function spills an FP value to the stack, and it takes
its argument as a __float80 value, which is effectively a long double.  So
when we pass the denormal double to stf_spill, it gets normalized to a long
double, and this normalization appears to be causing all of the trouble.

This long double value then gets passed to dtoa in fdlibm, which expects a
double argument, and dtoa then fails.  I didn't debug dtoa to see exactly why
it fails, but it seems clear that if we pass it an argument of the wrong type,
we are asking for trouble.

On IA-64, FP values are always stored in the FP registers as a long double
value rounded to the appropriate type, so the normalization should have no
effect except on denormal values, I think.  That means only single-denormal
and double-denormal argument values are broken, which is something that would
be easy to miss without a testcase.

Stepping through ffi_call in gdb at the point where stf_spill is called, I see
that the incoming value is

  f6  4.9406564584124654417656879286822137e-324
      (raw 0x000000000000fc010000000000000800)

which has the minimum double exponent (fc01) and a denormal fraction (0...800).
After the conversion to __float80, we have

  f6  4.9406564584124654417656879286822137e-324
      (raw 0x000000000000fbcd8000000000000000)

which now has an exponent invalid for a double (fbcd) and a normalized
fraction (800...0).

If I rewrite stf_spill as a macro instead of a function, so that the argument
type conversion is avoided, then the testcase works for both gcj and gij (a
sketch of the macro form follows below).  ldf_spill appears to have the same
problem and needs the same fix.


-- 


wilson at gcc dot gnu dot org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |wilson at gcc dot gnu dot
                   |                            |org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26483
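
To illustrate the macro rewrite described in comment #10 above, here is a
minimal sketch.  It assumes a 16-byte fpreg spill-slot type and the stf.spill
inline-asm form; the fpreg layout, the stf_spill_fn name, and the exact
declarations are illustrative stand-ins rather than the code actually
committed to src/ia64/ffi.c.

/* Sketch only, not the committed patch.  The fpreg layout below is an
   assumption standing in for the spill-slot type used in src/ia64/ffi.c.  */

typedef struct
{
  unsigned long long x[2] __attribute__ ((aligned (16)));
} fpreg;

/* Function form (problematic): the __float80 prototype makes the caller
   convert a float/double argument before the call.  On IA-64 that
   conversion renormalizes denormals, so stf.spill no longer sees the
   original double bit pattern (fc01/...800 becomes fbcd/800...).  */
static void
stf_spill_fn (fpreg *addr, __float80 value)
{
  asm ("stf.spill %0 = %1%P0" : "=m" (*addr) : "f" (value));
}

/* Macro form (fix): no prototype, hence no argument conversion.  The
   operand is passed to the asm with whatever type the caller already
   has, and stf.spill stores the raw FP register image, denormals
   included.  */
#define stf_spill(addr, value) \
  asm ("stf.spill %0 = %1%P0" : "=m" (*(addr)) : "f" (value))

The load-side helper mentioned at the end of the comment would presumably get
the same treatment: turning it into a macro keeps the spill/fill pair working
on raw register images with no intervening type conversion.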