* [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support @ 2013-01-23 3:09 Maciej W. Rozycki 2013-01-23 5:05 ` Mike Frysinger 2013-01-23 17:06 ` Joseph S. Myers 0 siblings, 2 replies; 15+ messages in thread From: Maciej W. Rozycki @ 2013-01-23 3:09 UTC (permalink / raw) To: libc-ports

Hi,

We have an issue with the INTERNAL_SYSCALL_NCS wrapper in that it does not respect the kernel's syscall restart convention.

That convention requires the instruction immediately preceding SYSCALL to initialize $v0 with the syscall number. Then if a restart triggers, $v0 will have been clobbered by the interrupted syscall, and needs to be reinitialized. The kernel will decrement the PC by 4 before switching back to user mode so that $v0 is reloaded before SYSCALL is executed again. This implies the place $v0 is loaded from must be preserved across a syscall, e.g. an immediate, a static register, a stack slot, etc.

We use two wrapper macros to dispatch syscalls to the relevant pieces of code: INTERNAL_SYSCALL and INTERNAL_SYSCALL_NCS. Both ultimately cause a piece of inline assembly to be emitted. In the former case the piece starts with an LI instruction that loads $v0 with the immediate number of the syscall required; a SYSCALL instruction then immediately follows. In the latter case $v0 is arranged to have been preloaded and the piece starts with a SYSCALL instruction.

That works the first time the syscall is executed, because the compiler will have arranged for $v0 to contain the correct value. It does not work in the case of a syscall restart, as the compiler-generated instruction immediately preceding SYSCALL is not necessarily one that loads $v0 with the value required.

The failure mode is unlikely to trigger, as the INTERNAL_SYSCALL_NCS wrapper is only used in a couple of places, and the offending syscall would then have to be restarted as well. The symptom would usually be an intermittent program failure that is hard to debug.
The issue was noticed by code inspection while making changes in this area.

Here is a change to address the problem. It rearranges the wrappers such that there is always an instruction to reload $v0 immediately before the SYSCALL instruction. I have chosen $s0 as the place to preserve the syscall number, per the usual ABI conventions, and consequently a MOVE instruction to move it into place. That required a further arrangement where the library is built as microMIPS code -- microMIPS MOVE is normally encoded by GAS as a short two-byte (16-bit) instruction. That breaks the PC calculation done by the kernel, as the restart would then happen either two instructions before SYSCALL (if the second previous instruction was 16-bit too) or, worse yet, in the middle of the second previous instruction (if that happened to be a 32-bit instruction). In microMIPS mode the MOVE instruction is therefore forcibly encoded in its 32-bit form with the use of an instruction size override suffix. This is what the MOVE32 macro is about.

The change was regression-tested successfully for the following configurations (compiler flag/multilib options), for both endiannesses each (the -EB and -EL compiler option, respectively):

* standard MIPS ISA, o32 (-mabi=32),
* standard MIPS ISA, n64 (-mabi=64),
* standard MIPS ISA, n32 (-mabi=n32),
* standard MIPS ISA, o32, soft-float (-mabi=32 -msoft-float),
* standard MIPS ISA, n64, soft-float (-mabi=64 -msoft-float),
* standard MIPS ISA, n32, soft-float (-mabi=n32 -msoft-float),
* microMIPS ISA, o32 (-mmicromips -mabi=32),
* microMIPS ISA, o32, soft-float (-mmicromips -mabi=32 -msoft-float),

with the MIPS32r2 or MIPS64r2 ISA level selected as applicable.

Please apply.

2013-01-23  Maciej W. Rozycki  <macro@codesourcery.com>

	[BZ #15054]
	* sysdeps/unix/sysv/linux/mips/mips32/sysdep.h (MOVE32): New
	macro.
	(INTERNAL_SYSCALL_NCS): Use it.  Rewrite to respect the syscall
	restart convention.
	(INTERNAL_SYSCALL): Rewrite to respect the syscall restart
	convention.
(internal_syscall0, internal_syscall1): Likewise. (internal_syscall2, internal_syscall3): Likewise. (internal_syscall4, internal_syscall5): Likewise. (internal_syscall6, internal_syscall7): Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h (MOVE32): New macro. (INTERNAL_SYSCALL_NCS): Use it. Rewrite to respect the syscall restart convention. (INTERNAL_SYSCALL): Rewrite to respect the syscall restart convention. (internal_syscall0, internal_syscall1): Likewise. (internal_syscall2, internal_syscall3): Likewise. (internal_syscall4, internal_syscall5): Likewise. (internal_syscall6): Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h (MOVE32): New macro. (INTERNAL_SYSCALL_NCS): Use it. Rewrite to respect the syscall restart convention. (INTERNAL_SYSCALL): Rewrite to respect the syscall restart convention. (internal_syscall0, internal_syscall1): Likewise. (internal_syscall2, internal_syscall3): Likewise. (internal_syscall4, internal_syscall5): Likewise. (internal_syscall6): Likewise. Maciej glibc-mips-syscall-restart.diff Index: ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h =================================================================== --- ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h 2013-01-17 00:52:34.000000000 +0000 +++ ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h 2013-01-17 01:12:40.907765959 +0000 @@ -67,25 +67,45 @@ #undef INTERNAL_SYSCALL_ERRNO #define INTERNAL_SYSCALL_ERRNO(val, err) ((void) (err), val) +/* Note that the Linux syscall restart convention requires the instruction + immediately preceding SYSCALL to initialize $v0 with the syscall number. + Then if a restart triggers, $v0 will have been clobbered by the syscall + interrupted, and needs to be reinititalized. The kernel will decrement + the PC by 4 before switching back to the user mode so that $v0 has been + reloaded before SYSCALL is executed again. This implies the place $v0 + is loaded from must be preserved across a syscall, e.g. 
an immediate, + static register, stack slot, etc. This also means we have to force a + 32-bit encoding of the microMIPS MOVE instruction if one is used. */ + +#ifdef __mips_micromips +#define MOVE32 "move32" +#else +#define MOVE32 "move" +#endif + #undef INTERNAL_SYSCALL -#define INTERNAL_SYSCALL(name, err, nr, args...) \ - internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \ - "i" (SYS_ify (name)), err, args) +#define INTERNAL_SYSCALL(name, err, nr, args...) \ + internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \ + "IK" (SYS_ify (name)), \ + 0, err, args) #undef INTERNAL_SYSCALL_NCS -#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ - internal_syscall##nr (= number, , "r" (__v0), err, args) +#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ + internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \ + "r" (__s0), \ + number, err, args) -#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \ +#define internal_syscall0(v0_init, input, number, err, dummy...) 
\ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -97,17 +117,18 @@ _sys_result; \ }) -#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \ +#define internal_syscall1(v0_init, input, number, err, arg1) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -119,20 +140,21 @@ _sys_result; \ }) -#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \ +#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ : input, "r" (__a0), "r" (__a1) \ : __SYSCALL_CLOBBERS); \ @@ -142,21 +164,23 @@ _sys_result; \ }) -#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3)\ +#define internal_syscall3(v0_init, input, number, err, \ + arg1, arg2, arg3) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = 
(long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2) \ : __SYSCALL_CLOBBERS); \ @@ -166,21 +190,23 @@ _sys_result; \ }) -#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4)\ +#define internal_syscall4(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ register long __a3 asm("$7") = (long) (arg4); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2) \ : __SYSCALL_CLOBBERS); \ @@ -197,13 +223,15 @@ #define FORCE_FRAME_POINTER \ void *volatile __fp_force __attribute__ ((unused)) = alloca (4) -#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5)\ +#define internal_syscall5(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5) \ ({ \ long _sys_result; \ \ FORCE_FRAME_POINTER; \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -212,10 +240,10 @@ ".set\tnoreorder\n\t" \ "subu\t$29, 32\n\t" \ "sw\t%6, 16($29)\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ "addiu\t$29, 32\n\t" \ - ".set\treorder" \ + 
".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2), \ "r" ((long) (arg5)) \ @@ -226,13 +254,15 @@ _sys_result; \ }) -#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6)\ +#define internal_syscall6(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6) \ ({ \ long _sys_result; \ \ FORCE_FRAME_POINTER; \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -242,10 +272,10 @@ "subu\t$29, 32\n\t" \ "sw\t%6, 16($29)\n\t" \ "sw\t%7, 20($29)\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ "addiu\t$29, 32\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2), \ "r" ((long) (arg5)), "r" ((long) (arg6)) \ @@ -256,13 +286,15 @@ _sys_result; \ }) -#define internal_syscall7(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6, arg7)\ +#define internal_syscall7(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6, arg7) \ ({ \ long _sys_result; \ \ FORCE_FRAME_POINTER; \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -273,10 +305,10 @@ "sw\t%6, 16($29)\n\t" \ "sw\t%7, 20($29)\n\t" \ "sw\t%8, 24($29)\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ "addiu\t$29, 32\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2), \ "r" ((long) (arg5)), "r" ((long) (arg6)), "r" ((long) (arg7)) \ Index: ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h 
=================================================================== --- ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h 2013-01-17 00:52:34.000000000 +0000 +++ ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h 2013-01-17 01:12:40.907765959 +0000 @@ -71,25 +71,45 @@ #undef INTERNAL_SYSCALL_ERRNO #define INTERNAL_SYSCALL_ERRNO(val, err) ((void) (err), val) +/* Note that the Linux syscall restart convention requires the instruction + immediately preceding SYSCALL to initialize $v0 with the syscall number. + Then if a restart triggers, $v0 will have been clobbered by the syscall + interrupted, and needs to be reinititalized. The kernel will decrement + the PC by 4 before switching back to the user mode so that $v0 has been + reloaded before SYSCALL is executed again. This implies the place $v0 + is loaded from must be preserved across a syscall, e.g. an immediate, + static register, stack slot, etc. This also means we have to force a + 32-bit encoding of the microMIPS MOVE instruction if one is used. */ + +#ifdef __mips_micromips +#define MOVE32 "move32" +#else +#define MOVE32 "move" +#endif + #undef INTERNAL_SYSCALL -#define INTERNAL_SYSCALL(name, err, nr, args...) \ - internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \ - "i" (SYS_ify (name)), err, args) +#define INTERNAL_SYSCALL(name, err, nr, args...) \ + internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \ + "IK" (SYS_ify (name)), \ + 0, err, args) #undef INTERNAL_SYSCALL_NCS -#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ - internal_syscall##nr (= number, , "r" (__v0), err, args) +#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ + internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \ + "r" (__s0), \ + number, err, args) -#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \ +#define internal_syscall0(v0_init, input, number, err, dummy...) 
\ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -101,17 +121,18 @@ _sys_result; \ }) -#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \ +#define internal_syscall1(v0_init, input, number, err, arg1) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -123,18 +144,19 @@ _sys_result; \ }) -#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \ +#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -146,19 +168,21 @@ _sys_result; \ }) -#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3) \ +#define internal_syscall3(v0_init, input, number, err, \ + arg1, arg2, arg3) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long 
__a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -170,19 +194,21 @@ _sys_result; \ }) -#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4) \ +#define internal_syscall4(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ register long long __a3 asm("$7") = ARGIFY (arg4); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ @@ -194,12 +220,14 @@ _sys_result; \ }) -#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5) \ +#define internal_syscall5(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ @@ -207,7 +235,7 @@ register long long __a4 asm("$8") = ARGIFY (arg5); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ @@ -219,12 +247,14 @@ _sys_result; \ }) -#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6) \ +#define 
internal_syscall6(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) = number; \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ @@ -233,7 +263,7 @@ register long long __a5 asm("$9") = ARGIFY (arg6); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ Index: ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h =================================================================== --- ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h 2013-01-17 00:52:34.000000000 +0000 +++ ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h 2013-01-17 01:12:40.907765959 +0000 @@ -67,25 +67,45 @@ #undef INTERNAL_SYSCALL_ERRNO #define INTERNAL_SYSCALL_ERRNO(val, err) ((void) (err), val) +/* Note that the Linux syscall restart convention requires the instruction + immediately preceding SYSCALL to initialize $v0 with the syscall number. + Then if a restart triggers, $v0 will have been clobbered by the syscall + interrupted, and needs to be reinititalized. The kernel will decrement + the PC by 4 before switching back to the user mode so that $v0 has been + reloaded before SYSCALL is executed again. This implies the place $v0 + is loaded from must be preserved across a syscall, e.g. an immediate, + static register, stack slot, etc. This also means we have to force a + 32-bit encoding of the microMIPS MOVE instruction if one is used. */ + +#ifdef __mips_micromips +#define MOVE32 "move32" +#else +#define MOVE32 "move" +#endif + #undef INTERNAL_SYSCALL -#define INTERNAL_SYSCALL(name, err, nr, args...) 
\ - internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \ - "i" (SYS_ify (name)), err, args) +#define INTERNAL_SYSCALL(name, err, nr, args...) \ + internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \ + "IK" (SYS_ify (name)), \ + 0, err, args) #undef INTERNAL_SYSCALL_NCS -#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ - internal_syscall##nr (= number, , "r" (__v0), err, args) +#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ + internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \ + "r" (__s0), \ + number, err, args) -#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \ +#define internal_syscall0(v0_init, input, number, err, dummy...) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -97,17 +117,18 @@ _sys_result; \ }) -#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \ +#define internal_syscall1(v0_init, input, number, err, arg1) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -119,18 +140,19 @@ _sys_result; \ }) -#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \ +#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long 
__a1 asm("$5") = (long) (arg2); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -142,19 +164,21 @@ _sys_result; \ }) -#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3) \ +#define internal_syscall3(v0_init, input, number, err, \ + arg1, arg2, arg3) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -166,19 +190,21 @@ _sys_result; \ }) -#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4) \ +#define internal_syscall4(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ register long __a3 asm("$7") = (long) (arg4); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ @@ -190,12 +216,14 @@ _sys_result; \ }) -#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5) \ +#define internal_syscall5(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register 
long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -203,7 +231,7 @@ register long __a4 asm("$8") = (long) (arg5); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ @@ -215,12 +243,14 @@ _sys_result; \ }) -#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6) \ +#define internal_syscall6(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__((unused)) = number; \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -229,7 +259,7 @@ register long __a5 asm("$9") = (long) (arg6); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-23 3:09 [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support Maciej W. Rozycki @ 2013-01-23 5:05 ` Mike Frysinger 2013-01-23 5:40 ` Maciej W. Rozycki 2013-01-23 17:06 ` Joseph S. Myers 1 sibling, 1 reply; 15+ messages in thread From: Mike Frysinger @ 2013-01-23 5:05 UTC (permalink / raw) To: libc-ports; +Cc: Maciej W. Rozycki [-- Attachment #1: Type: Text/Plain, Size: 941 bytes --] On Tuesday 22 January 2013 22:09:03 Maciej W. Rozycki wrote: > We have an issue with the INTERNAL_SYSCALL_NCS wrapper in that it does > not respect the kernel's syscall restart convention. > > That convention requires the instruction immediately preceding SYSCALL to > initialize $v0 with the syscall number. Then if a restart triggers, $v0 > will have been clobbered by the syscall interrupted, and needs to be > reinititalized. The kernel will decrement the PC by 4 before switching > back to the user mode so that $v0 has been reloaded before SYSCALL is > executed again. This implies the place $v0 is loaded from must be > preserved across a syscall, e.g. an immediate, static register, stack > slot, etc. naïvely, but why can't the mips kernel paths take care of the reload itself ? other arches have scratch space in their pt_regs for doing just this (a bunch of arches use the orig_<reg> convention). -mike [-- Attachment #2: This is a digitally signed message part. --] [-- Type: application/pgp-signature, Size: 836 bytes --] ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-23 5:05 ` Mike Frysinger @ 2013-01-23 5:40 ` Maciej W. Rozycki 2013-01-23 18:13 ` Mike Frysinger 0 siblings, 1 reply; 15+ messages in thread From: Maciej W. Rozycki @ 2013-01-23 5:40 UTC (permalink / raw) To: Mike Frysinger; +Cc: Ralf Baechle, libc-ports On Wed, 23 Jan 2013, Mike Frysinger wrote: > > We have an issue with the INTERNAL_SYSCALL_NCS wrapper in that it does > > not respect the kernel's syscall restart convention. > > > > That convention requires the instruction immediately preceding SYSCALL to > > initialize $v0 with the syscall number. Then if a restart triggers, $v0 > > will have been clobbered by the syscall interrupted, and needs to be > > reinititalized. The kernel will decrement the PC by 4 before switching > > back to the user mode so that $v0 has been reloaded before SYSCALL is > > executed again. This implies the place $v0 is loaded from must be > > preserved across a syscall, e.g. an immediate, static register, stack > > slot, etc. > > naïvely, but why can't the mips kernel paths take care of the reload itself ? > other arches have scratch space in their pt_regs for doing just this (a bunch > of arches use the orig_<reg> convention). I agree it would be the most reasonable approach if designing from scratch; unfortunately what we have is how the ABI has been set back in 1994. You won't be able to patch up all the kernel binaries out there, sigh... OTOH, the cost of hardcoding the extra instruction to precede SYSCALL is not something I would bend backwards to get rid of, especially given how rarely we make syscalls whose number is not a compilation-time constant. As a matter of curiosity I've run `objdump' across the set of shared libraries we build and found just two such places, in libpthread: sighandler_setxid and __nptl_setxid, out of 243 SYSCALL instances total. I don't suppose the number is going to rise dramatically anytime soon either. 
  Maciej
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-23 5:40 ` Maciej W. Rozycki @ 2013-01-23 18:13 ` Mike Frysinger 2013-01-29 18:12 ` Maciej W. Rozycki 0 siblings, 1 reply; 15+ messages in thread From: Mike Frysinger @ 2013-01-23 18:13 UTC (permalink / raw) To: Maciej W. Rozycki; +Cc: Ralf Baechle, libc-ports

[-- Attachment #1: Type: Text/Plain, Size: 1592 bytes --]

On Wednesday 23 January 2013 00:40:24 Maciej W. Rozycki wrote:
> On Wed, 23 Jan 2013, Mike Frysinger wrote:
> > > We have an issue with the INTERNAL_SYSCALL_NCS wrapper in that it does
> > > not respect the kernel's syscall restart convention.
> > >
> > > That convention requires the instruction immediately preceding SYSCALL
> > > to initialize $v0 with the syscall number. Then if a restart triggers,
> > > $v0 will have been clobbered by the syscall interrupted, and needs to
> > > be reinititalized. The kernel will decrement the PC by 4 before
> > > switching back to the user mode so that $v0 has been reloaded before
> > > SYSCALL is executed again. This implies the place $v0 is loaded from
> > > must be preserved across a syscall, e.g. an immediate, static
> > > register, stack slot, etc.
> >
> > naïvely, but why can't the mips kernel paths take care of the reload
> > itself ? other arches have scratch space in their pt_regs for doing just
> > this (a bunch of arches use the orig_<reg> convention).
>
> I agree it would be the most reasonable approach if designing from
> scratch; unfortunately what we have is how the ABI has been set back in
> 1994. You won't be able to patch up all the kernel binaries out there,
> sigh...

sure, you won't be able to retroactively fix kernels. but you'll be able to make future kernels more robust against shady userlands. as you've pointed out, this is an extremely subtle bug that can easily go unnoticed for a long time which simply injects random flakiness into the runtime system.
-mike
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-23 18:13 ` Mike Frysinger @ 2013-01-29 18:12 ` Maciej W. Rozycki 2013-01-29 19:04 ` Ralf Baechle 0 siblings, 1 reply; 15+ messages in thread From: Maciej W. Rozycki @ 2013-01-29 18:12 UTC (permalink / raw) To: Ralf Baechle, Mike Frysinger; +Cc: libc-ports On Wed, 23 Jan 2013, Mike Frysinger wrote: > > > > We have an issue with the INTERNAL_SYSCALL_NCS wrapper in that it does > > > > not respect the kernel's syscall restart convention. > > > > > > > > That convention requires the instruction immediately preceding SYSCALL > > > > to initialize $v0 with the syscall number. Then if a restart triggers, > > > > $v0 will have been clobbered by the syscall interrupted, and needs to > > > > be reinititalized. The kernel will decrement the PC by 4 before > > > > switching back to the user mode so that $v0 has been reloaded before > > > > SYSCALL is executed again. This implies the place $v0 is loaded from > > > > must be preserved across a syscall, e.g. an immediate, static > > > > register, stack slot, etc. > > > > > > naïvely, but why can't the mips kernel paths take care of the reload > > > itself ? other arches have scratch space in their pt_regs for doing just > > > this (a bunch of arches use the orig_<reg> convention). > > > > I agree it would be the most reasonable approach if designing from > > scratch; unfortunately what we have is how the ABI has been set back in > > 1994. You won't be able to patch up all the kernel binaries out there, > > sigh... > > sure, you won't be able to retroactively fixing kernels. but you'll be able to > make future kernels more robust against shady userlands. as you've pointed > out, this is an extremely subtle bug that can easily go unnoticed for a long > time which simply injects random flakiness into the runtime system. That's not unreasonable, I agree. Ralf, what do you think? Maciej ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-29 18:12 ` Maciej W. Rozycki @ 2013-01-29 19:04 ` Ralf Baechle 2013-01-29 19:12 ` Maciej W. Rozycki 0 siblings, 1 reply; 15+ messages in thread From: Ralf Baechle @ 2013-01-29 19:04 UTC (permalink / raw) To: Maciej W. Rozycki; +Cc: Mike Frysinger, libc-ports On Tue, Jan 29, 2013 at 06:12:07PM +0000, Maciej W. Rozycki wrote: > > > > > We have an issue with the INTERNAL_SYSCALL_NCS wrapper in that it does > > > > > not respect the kernel's syscall restart convention. > > > > > > > > > > That convention requires the instruction immediately preceding SYSCALL > > > > > to initialize $v0 with the syscall number. Then if a restart triggers, > > > > > $v0 will have been clobbered by the syscall interrupted, and needs to > > > > > be reinititalized. The kernel will decrement the PC by 4 before > > > > > switching back to the user mode so that $v0 has been reloaded before > > > > > SYSCALL is executed again. This implies the place $v0 is loaded from > > > > > must be preserved across a syscall, e.g. an immediate, static > > > > > register, stack slot, etc. > > > > > > > > naïvely, but why can't the mips kernel paths take care of the reload > > > > itself ? other arches have scratch space in their pt_regs for doing just > > > > this (a bunch of arches use the orig_<reg> convention). > > > > > > I agree it would be the most reasonable approach if designing from > > > scratch; unfortunately what we have is how the ABI has been set back in > > > 1994. You won't be able to patch up all the kernel binaries out there, > > > sigh... > > > > sure, you won't be able to retroactively fixing kernels. but you'll be able to > > make future kernels more robust against shady userlands. as you've pointed > > out, this is an extremely subtle bug that can easily go unnoticed for a long > > time which simply injects random flakiness into the runtime system. > > That's not unreasonable, I agree. 
Ralf, what do you think? Kernel commit 8f5a00eb422ed86e77bb8f67e08b9fe6d30f679a [MIPS: Sanitize restart logics] dated September 28, 2010 has already fixed this issue for Linux 2.6.36. The linux-mips.org kernel tree contains backports of this patch to all -stable branches all the way back to 2.6.16. Ralf ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-29 19:04 ` Ralf Baechle @ 2013-01-29 19:12 ` Maciej W. Rozycki 2013-01-30 1:26 ` Ralf Baechle 0 siblings, 1 reply; 15+ messages in thread From: Maciej W. Rozycki @ 2013-01-29 19:12 UTC (permalink / raw) To: Ralf Baechle; +Cc: Mike Frysinger, libc-ports On Tue, 29 Jan 2013, Ralf Baechle wrote: > > > sure, you won't be able to retroactively fixing kernels. but you'll be able to > > > make future kernels more robust against shady userlands. as you've pointed > > > out, this is an extremely subtle bug that can easily go unnoticed for a long > > > time which simply injects random flakiness into the runtime system. > > > > That's not unreasonable, I agree. Ralf, what do you think? > > Kernel commit 8f5a00eb422ed86e77bb8f67e08b9fe6d30f679a [MIPS: Sanitize > restart logics] dated September 28, 2010 has already fixed this issue > for Linux 2.6.36. > > The linux-mips.org kernel tree contains backports of this patch to all > -stable branches all the way back to 2.6.16. Good to know, thanks. This means we won't have to be concerned about this issue once we have stopped supporting kernel versions below 2.6.36. Maciej ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-29 19:12 ` Maciej W. Rozycki @ 2013-01-30 1:26 ` Ralf Baechle 2013-01-30 6:50 ` Mike Frysinger 0 siblings, 1 reply; 15+ messages in thread From: Ralf Baechle @ 2013-01-30 1:26 UTC (permalink / raw) To: Maciej W. Rozycki; +Cc: Mike Frysinger, libc-ports On Tue, Jan 29, 2013 at 07:12:29PM +0000, Maciej W. Rozycki wrote: > > The linux-mips.org kernel tree contains backports of this patch to all > > -stable branches all the way back to 2.6.16. > > Good to know, thanks. This means we won't have to be concerned about > this issue once we have stopped supporting kernel versions below 2.6.36. You may want to update the comments of your proposed libc patch to reflect the 96187fb0bc30cd7919759d371d810e928048249d fix. What is the minimum kernel version required for current libcs? Ralf ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-30 1:26 ` Ralf Baechle @ 2013-01-30 6:50 ` Mike Frysinger 0 siblings, 0 replies; 15+ messages in thread From: Mike Frysinger @ 2013-01-30 6:50 UTC (permalink / raw) To: Ralf Baechle; +Cc: Maciej W. Rozycki, libc-ports On Tuesday 29 January 2013 20:26:23 Ralf Baechle wrote: > On Tue, Jan 29, 2013 at 07:12:29PM +0000, Maciej W. Rozycki wrote: > > > The linux-mips.org kernel tree contains backports of this patch to all > > > -stable branches all the way back to 2.6.16. > > > > Good to know, thanks. This means we won't have to be concerned about > > this issue once we have stopped supporting kernel versions below 2.6.36. > > You may want to update the comments of your proposed libc patch to reflect > the 96187fb0bc30cd7919759d371d810e928048249d fix. > > What is the minimum kernel version required for current libcs? glibc-2.17 requires linux-2.6.16+ -mike
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-23 3:09 [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support Maciej W. Rozycki 2013-01-23 5:05 ` Mike Frysinger @ 2013-01-23 17:06 ` Joseph S. Myers 2013-01-24 12:47 ` Maciej W. Rozycki 1 sibling, 1 reply; 15+ messages in thread From: Joseph S. Myers @ 2013-01-23 17:06 UTC (permalink / raw) To: Maciej W. Rozycki; +Cc: libc-ports On Wed, 23 Jan 2013, Maciej W. Rozycki wrote: > +#ifdef __mips_micromips > +#define MOVE32 "move32" > +#else > +#define MOVE32 "move" > +#endif Indent preprocessor directives inside #if, so "# define", here and in the other instances of this code in the patch. > + register long __s0 asm("$16") __attribute__((unused)) = number; \ Space between __attribute__ and ((unused)), everywhere this construct appears in this patch. OK with those whitespace fixes; remember to update the list of fixed bugs in the NEWS file (a single list is used for both libc and ports bugs) as part of your commit of the fix, and to close the bug as fixed afterwards. -- Joseph S. Myers joseph@codesourcery.com ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-23 17:06 ` Joseph S. Myers @ 2013-01-24 12:47 ` Maciej W. Rozycki 2013-01-24 16:31 ` Joseph S. Myers 0 siblings, 1 reply; 15+ messages in thread From: Maciej W. Rozycki @ 2013-01-24 12:47 UTC (permalink / raw) To: Joseph S. Myers; +Cc: libc-ports On Wed, 23 Jan 2013, Joseph S. Myers wrote: > > +#ifdef __mips_micromips > > +#define MOVE32 "move32" > > +#else > > +#define MOVE32 "move" > > +#endif > > Indent preprocessor directives inside #if, so "# define", here and in the > other instances of this code in the patch. > > > + register long __s0 asm("$16") __attribute__((unused)) = number; \ > > Space between __attribute__ and ((unused)), everywhere this construct > appears in this patch. Oops, sorry about this oversight -- presumably there needs to be a space between asm and () as well, right? It looks like we don't respect this requirement at all right now throughout the files concerned. > OK with those whitespace fixes; remember to update the list of fixed bugs > in the NEWS file (a single list is used for both libc and ports bugs) as > part of your commit of the fix, and to close the bug as fixed afterwards. I'll do, thanks for the tips. Maciej ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-24 12:47 ` Maciej W. Rozycki @ 2013-01-24 16:31 ` Joseph S. Myers 2013-01-29 18:10 ` Maciej W. Rozycki 0 siblings, 1 reply; 15+ messages in thread From: Joseph S. Myers @ 2013-01-24 16:31 UTC (permalink / raw) To: Maciej W. Rozycki; +Cc: libc-ports On Thu, 24 Jan 2013, Maciej W. Rozycki wrote: > > > + register long __s0 asm("$16") __attribute__((unused)) = number; \ > > > > Space between __attribute__ and ((unused)), everywhere this construct > > appears in this patch. > > Oops, sorry about this oversight -- presumably there needs to be a space > between asm and () as well, right? It looks like we don't respect this > requirement at all right now throughout the files concerned. Yes, there should be such a space for asm as well. -- Joseph S. Myers joseph@codesourcery.com ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support 2013-01-24 16:31 ` Joseph S. Myers @ 2013-01-29 18:10 ` Maciej W. Rozycki 2013-01-29 23:59 ` Joseph S. Myers 0 siblings, 1 reply; 15+ messages in thread From: Maciej W. Rozycki @ 2013-01-29 18:10 UTC (permalink / raw) To: Joseph S. Myers; +Cc: libc-ports On Thu, 24 Jan 2013, Joseph S. Myers wrote: > > > > + register long __s0 asm("$16") __attribute__((unused)) = number; \ > > > > > > Space between __attribute__ and ((unused)), everywhere this construct > > > appears in this patch. > > > > Oops, sorry about this oversight -- presumably there needs to be a space > > between asm and () as well, right? It looks like we don't respect this > > requirement at all right now throughout the files concerned. > > Yes, there should be such a space for asm as well. OK, thanks for confirming, I'll post a change to address this separately. Here's a new version of the syscall wrappers fix, posted because I decided to make a further adjustment throughout, that is to parenthesize "number", following commit f59cba71d8486e4749742f1a703424a08a2be8a7. I hope I got the formatting right with this change. I made no further changes beyond this and ones you previously requested. OK in this form? While at it I'll mention that the patch changes the constraint used for INTERNAL_SYSCALL from "i" to "IK" -- this is because "i" accepts any constant integer and if one produced by GCC happens to be outside the range supported by hardware instructions, then GAS will be happy to expand the LI macro into a multiple-instruction sequence. This is not going to work with the restart convention concerned. The use of "I" and "K" that accept integers that are within the range of the immediate value accepted by the ADDIU and the ORI hardware instructions used by GAS in LI macro expansion guarantees that only a single instruction will ever be generated to load the integer into $v0. 
The compiler will signal an error when an integer outside the range of these constraints is used. This is currently a safeguard only as we don't have syscalls at the moment that have their numbers outside of the said range. The range currently allocated for Linux syscalls is [4000,6999]. And finally, unlike with the MOVE instruction, there is no need to force the 32-bit encoding of the LI instruction in the microMIPS mode, because the 16-bit microMIPS LI hardware instruction only supports numbers in the [-1,126] range that lies entirely outside the Linux syscall range quoted above. Therefore in the microMIPS mode the LI macro will expand to ADDIU or ORI just as in the standard MIPS mode; the instructions likewise have the same ranges of their immediate argument respectively. For the record -- the numbers below 4000 are not ever expected to be used in Linux, because they were originally used by other MIPS OSes (presumably RISC/OS or IRIX) and were intended to be kept reserved in Linux for the purpose of foreign ABI emulation (similarly to the iBCS subsystem and x86/Linux). That as we know has never happened and it is unlikely to ever change though. 2013-01-29 Maciej W. Rozycki <macro@codesourcery.com> * sysdeps/unix/sysv/linux/mips/mips32/sysdep.h (MOVE32): New macro. (INTERNAL_SYSCALL_NCS): Use it. Rewrite to respect the syscall restart convention. (INTERNAL_SYSCALL): Rewrite to respect the syscall restart convention. (internal_syscall0, internal_syscall1): Likewise. (internal_syscall2, internal_syscall3): Likewise. (internal_syscall4, internal_syscall5): Likewise. (internal_syscall6, internal_syscall7): Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h (MOVE32): New macro. (INTERNAL_SYSCALL_NCS): Use it. Rewrite to respect the syscall restart convention. (INTERNAL_SYSCALL): Rewrite to respect the syscall restart convention. (internal_syscall0, internal_syscall1): Likewise. (internal_syscall2, internal_syscall3): Likewise.
(internal_syscall4, internal_syscall5): Likewise. (internal_syscall6): Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h (MOVE32): New macro. (INTERNAL_SYSCALL_NCS): Use it. Rewrite to respect the syscall restart convention. (INTERNAL_SYSCALL): Rewrite to respect the syscall restart convention. (internal_syscall0, internal_syscall1): Likewise. (internal_syscall2, internal_syscall3): Likewise. (internal_syscall4, internal_syscall5): Likewise. (internal_syscall6): Likewise. Maciej glibc-mips-syscall-restart.diff Index: ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h =================================================================== --- glibc-fsf-trunk-quilt.orig/ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h 2013-01-29 11:21:13.000000000 +0000 +++ ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h 2013-01-29 16:36:53.127475050 +0000 @@ -67,25 +67,46 @@ #undef INTERNAL_SYSCALL_ERRNO #define INTERNAL_SYSCALL_ERRNO(val, err) ((void) (err), val) +/* Note that the Linux syscall restart convention requires the instruction + immediately preceding SYSCALL to initialize $v0 with the syscall number. + Then if a restart triggers, $v0 will have been clobbered by the syscall + interrupted, and needs to be reinititalized. The kernel will decrement + the PC by 4 before switching back to the user mode so that $v0 has been + reloaded before SYSCALL is executed again. This implies the place $v0 + is loaded from must be preserved across a syscall, e.g. an immediate, + static register, stack slot, etc. This also means we have to force a + 32-bit encoding of the microMIPS MOVE instruction if one is used. */ + +#ifdef __mips_micromips +# define MOVE32 "move32" +#else +# define MOVE32 "move" +#endif + #undef INTERNAL_SYSCALL -#define INTERNAL_SYSCALL(name, err, nr, args...) \ - internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \ - "i" (SYS_ify (name)), err, args) +#define INTERNAL_SYSCALL(name, err, nr, args...) 
\ + internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \ + "IK" (SYS_ify (name)), \ + 0, err, args) #undef INTERNAL_SYSCALL_NCS -#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ - internal_syscall##nr (= number, , "r" (__v0), err, args) +#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ + internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \ + "r" (__s0), \ + number, err, args) -#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \ +#define internal_syscall0(v0_init, input, number, err, dummy...) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -97,17 +118,19 @@ _sys_result; \ }) -#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \ +#define internal_syscall1(v0_init, input, number, err, arg1) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -119,20 +142,22 @@ _sys_result; \ }) -#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \ +#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + 
v0_init \ "syscall\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ : input, "r" (__a0), "r" (__a1) \ : __SYSCALL_CLOBBERS); \ @@ -142,21 +167,24 @@ _sys_result; \ }) -#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3)\ +#define internal_syscall3(v0_init, input, number, err, \ + arg1, arg2, arg3) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ register long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2) \ : __SYSCALL_CLOBBERS); \ @@ -166,21 +194,24 @@ _sys_result; \ }) -#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4)\ +#define internal_syscall4(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4) \ ({ \ long _sys_result; \ \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ register long __a3 asm("$7") = (long) (arg4); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2) \ : __SYSCALL_CLOBBERS); \ @@ -197,13 +228,16 @@ #define FORCE_FRAME_POINTER \ void *volatile __fp_force __attribute__ ((unused)) = alloca (4) -#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5)\ +#define internal_syscall5(v0_init, input, number, 
err, \ + arg1, arg2, arg3, arg4, arg5) \ ({ \ long _sys_result; \ \ FORCE_FRAME_POINTER; \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -212,10 +246,10 @@ ".set\tnoreorder\n\t" \ "subu\t$29, 32\n\t" \ "sw\t%6, 16($29)\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ "addiu\t$29, 32\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2), \ "r" ((long) (arg5)) \ @@ -226,13 +260,16 @@ _sys_result; \ }) -#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6)\ +#define internal_syscall6(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6) \ ({ \ long _sys_result; \ \ FORCE_FRAME_POINTER; \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -242,10 +279,10 @@ "subu\t$29, 32\n\t" \ "sw\t%6, 16($29)\n\t" \ "sw\t%7, 20($29)\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ "addiu\t$29, 32\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2), \ "r" ((long) (arg5)), "r" ((long) (arg6)) \ @@ -256,13 +293,16 @@ _sys_result; \ }) -#define internal_syscall7(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6, arg7)\ +#define internal_syscall7(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6, arg7) \ ({ \ long _sys_result; \ \ FORCE_FRAME_POINTER; \ { \ - register long __v0 asm("$2") ncs_init; \ + register long __s0 asm("$16") __attribute__ ((unused)) \ + = (number); \ + 
register long __v0 asm("$2"); \ register long __a0 asm("$4") = (long) (arg1); \ register long __a1 asm("$5") = (long) (arg2); \ register long __a2 asm("$6") = (long) (arg3); \ @@ -273,10 +313,10 @@ "sw\t%6, 16($29)\n\t" \ "sw\t%7, 20($29)\n\t" \ "sw\t%8, 24($29)\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ "addiu\t$29, 32\n\t" \ - ".set\treorder" \ + ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ : input, "r" (__a0), "r" (__a1), "r" (__a2), \ "r" ((long) (arg5)), "r" ((long) (arg6)), "r" ((long) (arg7)) \ Index: ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h =================================================================== --- glibc-fsf-trunk-quilt.orig/ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h 2013-01-29 13:40:36.000000000 +0000 +++ ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h 2013-01-29 16:38:56.436531593 +0000 @@ -71,25 +71,46 @@ #undef INTERNAL_SYSCALL_ERRNO #define INTERNAL_SYSCALL_ERRNO(val, err) ((void) (err), val) +/* Note that the Linux syscall restart convention requires the instruction + immediately preceding SYSCALL to initialize $v0 with the syscall number. + Then if a restart triggers, $v0 will have been clobbered by the syscall + interrupted, and needs to be reinititalized. The kernel will decrement + the PC by 4 before switching back to the user mode so that $v0 has been + reloaded before SYSCALL is executed again. This implies the place $v0 + is loaded from must be preserved across a syscall, e.g. an immediate, + static register, stack slot, etc. This also means we have to force a + 32-bit encoding of the microMIPS MOVE instruction if one is used. */ + +#ifdef __mips_micromips +# define MOVE32 "move32" +#else +# define MOVE32 "move" +#endif + #undef INTERNAL_SYSCALL -#define INTERNAL_SYSCALL(name, err, nr, args...) \ - internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \ - "i" (SYS_ify (name)), err, args) +#define INTERNAL_SYSCALL(name, err, nr, args...) 
\ + internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \ + "IK" (SYS_ify (name)), \ + 0, err, args) #undef INTERNAL_SYSCALL_NCS -#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ - internal_syscall##nr (= number, , "r" (__v0), err, args) +#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \ + internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \ + "r" (__s0), \ + number, err, args) -#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \ +#define internal_syscall0(v0_init, input, number, err, dummy...) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ + register long long __v0 asm("$2"); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -101,17 +122,19 @@ _sys_result; \ }) -#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \ +#define internal_syscall1(v0_init, input, number, err, arg1) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set reorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -123,18 +146,20 @@ _sys_result; \ }) -#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \ +#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a3 
asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -146,19 +171,22 @@ _sys_result; \ }) -#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3) \ +#define internal_syscall3(v0_init, input, number, err, \ + arg1, arg2, arg3) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ register long long __a3 asm("$7"); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "=r" (__a3) \ @@ -170,19 +198,22 @@ _sys_result; \ }) -#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4) \ +#define internal_syscall4(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ register long long __a3 asm("$7") = ARGIFY (arg4); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ @@ -194,12 +225,15 @@ _sys_result; \ }) -#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5) \ +#define internal_syscall5(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ 
+ register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ @@ -207,7 +241,7 @@ register long long __a4 asm("$8") = ARGIFY (arg5); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ @@ -219,12 +253,15 @@ _sys_result; \ }) -#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6) \ +#define internal_syscall6(v0_init, input, number, err, \ + arg1, arg2, arg3, arg4, arg5, arg6) \ ({ \ long _sys_result; \ \ { \ - register long long __v0 asm("$2") ncs_init; \ + register long long __s0 asm("$16") __attribute__((unused)) \ + = (number); \ + register long long __v0 asm("$2"); \ register long long __a0 asm("$4") = ARGIFY (arg1); \ register long long __a1 asm("$5") = ARGIFY (arg2); \ register long long __a2 asm("$6") = ARGIFY (arg3); \ @@ -233,7 +270,7 @@ register long long __a5 asm("$9") = ARGIFY (arg6); \ __asm__ volatile ( \ ".set\tnoreorder\n\t" \ - cs_init \ + v0_init \ "syscall\n\t" \ ".set\treorder" \ : "=r" (__v0), "+r" (__a3) \ Index: ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h =================================================================== --- glibc-fsf-trunk-quilt.orig/ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h 2013-01-29 13:40:36.000000000 +0000 +++ ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h 2013-01-29 16:39:20.336868564 +0000 @@ -67,25 +67,46 @@ #undef INTERNAL_SYSCALL_ERRNO #define INTERNAL_SYSCALL_ERRNO(val, err) ((void) (err), val) +/* Note that the Linux syscall restart convention requires the instruction + immediately preceding SYSCALL to initialize $v0 with the syscall number. + Then if a restart triggers, $v0 will have been clobbered by the syscall + interrupted, and needs to be reinititalized. 
   The kernel will decrement
+   the PC by 4 before switching back to the user mode so that $v0 has been
+   reloaded before SYSCALL is executed again.  This implies the place $v0
+   is loaded from must be preserved across a syscall, e.g. an immediate,
+   static register, stack slot, etc.  This also means we have to force a
+   32-bit encoding of the microMIPS MOVE instruction if one is used.  */
+
+#ifdef __mips_micromips
+# define MOVE32 "move32"
+#else
+# define MOVE32 "move"
+#endif
+
 #undef INTERNAL_SYSCALL
-#define INTERNAL_SYSCALL(name, err, nr, args...) \
-	internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \
-			      "i" (SYS_ify (name)), err, args)
+#define INTERNAL_SYSCALL(name, err, nr, args...) \
+	internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \
+			      "IK" (SYS_ify (name)), \
+			      0, err, args)

 #undef INTERNAL_SYSCALL_NCS
-#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
-	internal_syscall##nr (= number, , "r" (__v0), err, args)
+#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
+	internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \
+			      "r" (__s0), \
+			      number, err, args)

-#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \
+#define internal_syscall0(v0_init, input, number, err, dummy...) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -97,17 +118,19 @@
 	_sys_result; \
 })

-#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \
+#define internal_syscall1(v0_init, input, number, err, arg1) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -119,18 +142,20 @@
 	_sys_result; \
 })

-#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \
+#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -142,19 +167,22 @@
 	_sys_result; \
 })

-#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3) \
+#define internal_syscall3(v0_init, input, number, err, \
+			  arg1, arg2, arg3) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -166,19 +194,22 @@
 	_sys_result; \
 })

-#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4) \
+#define internal_syscall4(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
 	register long __a3 asm("$7") = (long) (arg4); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
@@ -190,12 +221,15 @@
 	_sys_result; \
 })

-#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5) \
+#define internal_syscall5(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -203,7 +237,7 @@
 	register long __a4 asm("$8") = (long) (arg5); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
@@ -215,12 +249,15 @@
 	_sys_result; \
 })

-#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6) \
+#define internal_syscall6(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5, arg6) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -229,7 +266,7 @@
 	register long __a5 asm("$9") = (long) (arg6); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support
  2013-01-29 18:10 ` Maciej W. Rozycki
@ 2013-01-29 23:59 ` Joseph S. Myers
  2013-02-05 15:26 ` Maciej W. Rozycki
  0 siblings, 1 reply; 15+ messages in thread
From: Joseph S. Myers @ 2013-01-29 23:59 UTC (permalink / raw)
To: Maciej W. Rozycki; +Cc: libc-ports

On Tue, 29 Jan 2013, Maciej W. Rozycki wrote:

> Here's a new version of the syscall wrappers fix, posted because I
> decided to make a further adjustment throughout, that is to parenthesize
> "number", following commit f59cba71d8486e4749742f1a703424a08a2be8a7.  I
> hope I got the formatting right with this change.  I made no further
> changes beyond this and ones you previously requested.  OK in this form?

You still need to correct the spacing with __attribute__ in the n32
version of the file.  OK with that change.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 15+ messages in thread
* Re: [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support
  2013-01-29 23:59 ` Joseph S. Myers
@ 2013-02-05 15:26 ` Maciej W. Rozycki
  0 siblings, 0 replies; 15+ messages in thread
From: Maciej W. Rozycki @ 2013-02-05 15:26 UTC (permalink / raw)
To: Joseph S. Myers; +Cc: Ralf Baechle, libc-ports

On Tue, 29 Jan 2013, Joseph S. Myers wrote:

> > Here's a new version of the syscall wrappers fix, posted because I
> > decided to make a further adjustment throughout, that is to parenthesize
> > "number", following commit f59cba71d8486e4749742f1a703424a08a2be8a7.  I
> > hope I got the formatting right with this change.  I made no further
> > changes beyond this and ones you previously requested.  OK in this form?
>
> You still need to correct the spacing with __attribute__ in the n32
> version of the file.  OK with that change.

Here's the final version I committed.  Beyond the change requested it
includes comment updates to add information supplied by Ralf, as we
agreed off-list.

  Maciej

commit b82ba2f011fc4628ceece07412846d0b4d50cac2
Author: Maciej W. Rozycki <macro@codesourcery.com>
Date:   Tue Feb 5 14:41:32 2013 +0000

    MIPS: Respect the legacy syscall restart convention.

    That convention requires the instruction immediately preceding SYSCALL
    to initialize $v0 with the syscall number.  Then if a restart triggers,
    $v0 will have been clobbered by the syscall interrupted, and needs to
    be reinitialized.  The kernel will decrement the PC by 4 before
    switching back to the user mode so that $v0 has been reloaded before
    SYSCALL is executed again.  This implies the place $v0 is loaded from
    must be preserved across a syscall, e.g. an immediate, static register,
    stack slot, etc.

    The restriction was lifted with the Linux 2.6.36 kernel release and no
    special requirements are placed around the SYSCALL instruction anymore,
    however we still support older kernel binaries.
diff --git a/NEWS b/NEWS
index b5c465d..d6798a4 100644
--- a/NEWS
+++ b/NEWS
@@ -10,7 +10,7 @@ Version 2.18
 * The following bugs are resolved with this release:

   13951, 14142, 14200, 14317, 14327, 14496, 14964, 14981, 14982, 14985,
-  14994, 14996, 15003, 15020, 15023, 15036, 15062.
+  14994, 14996, 15003, 15020, 15023, 15036, 15054, 15062.
 \f
 Version 2.17

diff --git a/ports/ChangeLog.mips b/ports/ChangeLog.mips
index 65d4206..c5a2cb9 100644
--- a/ports/ChangeLog.mips
+++ b/ports/ChangeLog.mips
@@ -1,3 +1,37 @@
+2013-02-05  Maciej W. Rozycki  <macro@codesourcery.com>
+
+	[BZ #15054]
+	* sysdeps/unix/sysv/linux/mips/mips32/sysdep.h (MOVE32):
+	New macro.
+	(INTERNAL_SYSCALL_NCS): Use it.  Rewrite to respect the syscall
+	restart convention.
+	(INTERNAL_SYSCALL): Rewrite to respect the syscall restart
+	convention.
+	(internal_syscall0, internal_syscall1): Likewise.
+	(internal_syscall2, internal_syscall3): Likewise.
+	(internal_syscall4, internal_syscall5): Likewise.
+	(internal_syscall6, internal_syscall7): Likewise.
+	* sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h (MOVE32):
+	New macro.
+	(INTERNAL_SYSCALL_NCS): Use it.  Rewrite to respect the syscall
+	restart convention.
+	(INTERNAL_SYSCALL): Rewrite to respect the syscall restart
+	convention.
+	(internal_syscall0, internal_syscall1): Likewise.
+	(internal_syscall2, internal_syscall3): Likewise.
+	(internal_syscall4, internal_syscall5): Likewise.
+	(internal_syscall6): Likewise.
+	* sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h (MOVE32):
+	New macro.
+	(INTERNAL_SYSCALL_NCS): Use it.  Rewrite to respect the syscall
+	restart convention.
+	(INTERNAL_SYSCALL): Rewrite to respect the syscall restart
+	convention.
+	(internal_syscall0, internal_syscall1): Likewise.
+	(internal_syscall2, internal_syscall3): Likewise.
+	(internal_syscall4, internal_syscall5): Likewise.
+	(internal_syscall6): Likewise.
+
 2013-02-04  Joseph Myers  <joseph@codesourcery.com>

 	[BZ #13550]

diff --git a/ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h b/ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
index e79fda9..51ae813 100644
--- a/ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
+++ b/ports/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
@@ -67,25 +67,57 @@
 #undef INTERNAL_SYSCALL_ERRNO
 #define INTERNAL_SYSCALL_ERRNO(val, err)	((void) (err), val)

+/* Note that the original Linux syscall restart convention required the
+   instruction immediately preceding SYSCALL to initialize $v0 with the
+   syscall number.  Then if a restart triggered, $v0 would have been
+   clobbered by the syscall interrupted, and needed to be reinitialized.
+   The kernel would decrement the PC by 4 before switching back to the
+   user mode so that $v0 had been reloaded before SYSCALL was executed
+   again.  This implied the place $v0 was loaded from must have been
+   preserved across a syscall, e.g. an immediate, static register, stack
+   slot, etc.
+
+   The convention was relaxed in Linux with a change applied to the kernel
+   GIT repository as commit 96187fb0bc30cd7919759d371d810e928048249d, that
+   first appeared in the 2.6.36 release.  Since then the kernel has had
+   code that reloads $v0 upon syscall restart and resumes right at the
+   SYSCALL instruction, so no special arrangement is needed anymore.
+
+   For backwards compatibility with existing kernel binaries we support
+   the old convention by choosing the instruction preceding SYSCALL
+   carefully.  This also means we have to force a 32-bit encoding of the
+   microMIPS MOVE instruction if one is used.  */
+
+#ifdef __mips_micromips
+# define MOVE32 "move32"
+#else
+# define MOVE32 "move"
+#endif
+
 #undef INTERNAL_SYSCALL
-#define INTERNAL_SYSCALL(name, err, nr, args...) \
-	internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \
-			      "i" (SYS_ify (name)), err, args)
+#define INTERNAL_SYSCALL(name, err, nr, args...) \
+	internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \
+			      "IK" (SYS_ify (name)), \
+			      0, err, args)

 #undef INTERNAL_SYSCALL_NCS
-#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
-	internal_syscall##nr (= number, , "r" (__v0), err, args)
+#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
+	internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \
+			      "r" (__s0), \
+			      number, err, args)

-#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \
+#define internal_syscall0(v0_init, input, number, err, dummy...) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -97,17 +129,19 @@
 	_sys_result; \
 })

-#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \
+#define internal_syscall1(v0_init, input, number, err, arg1) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -119,20 +153,22 @@
 	_sys_result; \
 })

-#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \
+#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
-	".set\treorder" \
+	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
 	: input, "r" (__a0), "r" (__a1) \
 	: __SYSCALL_CLOBBERS); \
@@ -142,21 +178,24 @@
 	_sys_result; \
 })

-#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3)\
+#define internal_syscall3(v0_init, input, number, err, \
+			  arg1, arg2, arg3) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
-	".set\treorder" \
+	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
 	: input, "r" (__a0), "r" (__a1), "r" (__a2) \
 	: __SYSCALL_CLOBBERS); \
@@ -166,21 +205,24 @@
 	_sys_result; \
 })

-#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4)\
+#define internal_syscall4(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
 	register long __a3 asm("$7") = (long) (arg4); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
-	".set\treorder" \
+	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
 	: input, "r" (__a0), "r" (__a1), "r" (__a2) \
 	: __SYSCALL_CLOBBERS); \
@@ -197,13 +239,16 @@
 #define FORCE_FRAME_POINTER \
   void *volatile __fp_force __attribute__ ((unused)) = alloca (4)

-#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5)\
+#define internal_syscall5(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5) \
 ({ \
 	long _sys_result; \
 \
 	FORCE_FRAME_POINTER; \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -212,10 +257,10 @@
 	".set\tnoreorder\n\t" \
 	"subu\t$29, 32\n\t" \
 	"sw\t%6, 16($29)\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	"addiu\t$29, 32\n\t" \
-	".set\treorder" \
+	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
 	: input, "r" (__a0), "r" (__a1), "r" (__a2), \
 	  "r" ((long) (arg5)) \
@@ -226,13 +271,16 @@
 	_sys_result; \
 })

-#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6)\
+#define internal_syscall6(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5, arg6) \
 ({ \
 	long _sys_result; \
 \
 	FORCE_FRAME_POINTER; \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -242,10 +290,10 @@
 	"subu\t$29, 32\n\t" \
 	"sw\t%6, 16($29)\n\t" \
 	"sw\t%7, 20($29)\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	"addiu\t$29, 32\n\t" \
-	".set\treorder" \
+	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
 	: input, "r" (__a0), "r" (__a1), "r" (__a2), \
 	  "r" ((long) (arg5)), "r" ((long) (arg6)) \
@@ -256,13 +304,16 @@
 	_sys_result; \
 })

-#define internal_syscall7(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6, arg7)\
+#define internal_syscall7(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5, arg6, arg7) \
 ({ \
 	long _sys_result; \
 \
 	FORCE_FRAME_POINTER; \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -273,10 +324,10 @@
 	"sw\t%6, 16($29)\n\t" \
 	"sw\t%7, 20($29)\n\t" \
 	"sw\t%8, 24($29)\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	"addiu\t$29, 32\n\t" \
-	".set\treorder" \
+	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
 	: input, "r" (__a0), "r" (__a1), "r" (__a2), \
 	  "r" ((long) (arg5)), "r" ((long) (arg6)), "r" ((long) (arg7)) \

diff --git a/ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h b/ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h
index 3ebbf89..41a6f22 100644
--- a/ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h
+++ b/ports/sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h
@@ -71,25 +71,57 @@
 #undef INTERNAL_SYSCALL_ERRNO
 #define INTERNAL_SYSCALL_ERRNO(val, err)	((void) (err), val)

+/* Note that the original Linux syscall restart convention required the
+   instruction immediately preceding SYSCALL to initialize $v0 with the
+   syscall number.  Then if a restart triggered, $v0 would have been
+   clobbered by the syscall interrupted, and needed to be reinitialized.
+   The kernel would decrement the PC by 4 before switching back to the
+   user mode so that $v0 had been reloaded before SYSCALL was executed
+   again.  This implied the place $v0 was loaded from must have been
+   preserved across a syscall, e.g. an immediate, static register, stack
+   slot, etc.
+
+   The convention was relaxed in Linux with a change applied to the kernel
+   GIT repository as commit 96187fb0bc30cd7919759d371d810e928048249d, that
+   first appeared in the 2.6.36 release.  Since then the kernel has had
+   code that reloads $v0 upon syscall restart and resumes right at the
+   SYSCALL instruction, so no special arrangement is needed anymore.
+
+   For backwards compatibility with existing kernel binaries we support
+   the old convention by choosing the instruction preceding SYSCALL
+   carefully.  This also means we have to force a 32-bit encoding of the
+   microMIPS MOVE instruction if one is used.  */
+
+#ifdef __mips_micromips
+# define MOVE32 "move32"
+#else
+# define MOVE32 "move"
+#endif
+
 #undef INTERNAL_SYSCALL
-#define INTERNAL_SYSCALL(name, err, nr, args...) \
-	internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \
-			      "i" (SYS_ify (name)), err, args)
+#define INTERNAL_SYSCALL(name, err, nr, args...) \
+	internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \
+			      "IK" (SYS_ify (name)), \
+			      0, err, args)

 #undef INTERNAL_SYSCALL_NCS
-#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
-	internal_syscall##nr (= number, , "r" (__v0), err, args)
+#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
+	internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \
+			      "r" (__s0), \
+			      number, err, args)

-#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \
+#define internal_syscall0(v0_init, input, number, err, dummy...) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -101,17 +133,19 @@
 	_sys_result; \
 })

-#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \
+#define internal_syscall1(v0_init, input, number, err, arg1) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a0 asm("$4") = ARGIFY (arg1); \
 	register long long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -123,18 +157,20 @@
 	_sys_result; \
 })

-#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \
+#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a0 asm("$4") = ARGIFY (arg1); \
 	register long long __a1 asm("$5") = ARGIFY (arg2); \
 	register long long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -146,19 +182,22 @@
 	_sys_result; \
 })

-#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3) \
+#define internal_syscall3(v0_init, input, number, err, \
+			  arg1, arg2, arg3) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a0 asm("$4") = ARGIFY (arg1); \
 	register long long __a1 asm("$5") = ARGIFY (arg2); \
 	register long long __a2 asm("$6") = ARGIFY (arg3); \
 	register long long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -170,19 +209,22 @@
 	_sys_result; \
 })

-#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4) \
+#define internal_syscall4(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a0 asm("$4") = ARGIFY (arg1); \
 	register long long __a1 asm("$5") = ARGIFY (arg2); \
 	register long long __a2 asm("$6") = ARGIFY (arg3); \
 	register long long __a3 asm("$7") = ARGIFY (arg4); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
@@ -194,12 +236,15 @@
 	_sys_result; \
 })

-#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5) \
+#define internal_syscall5(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a0 asm("$4") = ARGIFY (arg1); \
 	register long long __a1 asm("$5") = ARGIFY (arg2); \
 	register long long __a2 asm("$6") = ARGIFY (arg3); \
@@ -207,7 +252,7 @@
 	register long long __a4 asm("$8") = ARGIFY (arg5); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
@@ -219,12 +264,15 @@
 	_sys_result; \
 })

-#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6) \
+#define internal_syscall6(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5, arg6) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long long __v0 asm("$2") ncs_init; \
+	register long long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long long __v0 asm("$2"); \
 	register long long __a0 asm("$4") = ARGIFY (arg1); \
 	register long long __a1 asm("$5") = ARGIFY (arg2); \
 	register long long __a2 asm("$6") = ARGIFY (arg3); \
@@ -233,7 +281,7 @@
 	register long long __a5 asm("$9") = ARGIFY (arg6); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \

diff --git a/ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h b/ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h
index 9d94995..fecd3e4 100644
--- a/ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h
+++ b/ports/sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h
@@ -67,25 +67,57 @@
 #undef INTERNAL_SYSCALL_ERRNO
 #define INTERNAL_SYSCALL_ERRNO(val, err)	((void) (err), val)

+/* Note that the original Linux syscall restart convention required the
+   instruction immediately preceding SYSCALL to initialize $v0 with the
+   syscall number.  Then if a restart triggered, $v0 would have been
+   clobbered by the syscall interrupted, and needed to be reinitialized.
+   The kernel would decrement the PC by 4 before switching back to the
+   user mode so that $v0 had been reloaded before SYSCALL was executed
+   again.  This implied the place $v0 was loaded from must have been
+   preserved across a syscall, e.g. an immediate, static register, stack
+   slot, etc.
+
+   The convention was relaxed in Linux with a change applied to the kernel
+   GIT repository as commit 96187fb0bc30cd7919759d371d810e928048249d, that
+   first appeared in the 2.6.36 release.  Since then the kernel has had
+   code that reloads $v0 upon syscall restart and resumes right at the
+   SYSCALL instruction, so no special arrangement is needed anymore.
+
+   For backwards compatibility with existing kernel binaries we support
+   the old convention by choosing the instruction preceding SYSCALL
+   carefully.  This also means we have to force a 32-bit encoding of the
+   microMIPS MOVE instruction if one is used.  */
+
+#ifdef __mips_micromips
+# define MOVE32 "move32"
+#else
+# define MOVE32 "move"
+#endif
+
 #undef INTERNAL_SYSCALL
-#define INTERNAL_SYSCALL(name, err, nr, args...) \
-	internal_syscall##nr (, "li\t$2, %2\t\t\t# " #name "\n\t", \
-			      "i" (SYS_ify (name)), err, args)
+#define INTERNAL_SYSCALL(name, err, nr, args...) \
+	internal_syscall##nr ("li\t%0, %2\t\t\t# " #name "\n\t", \
+			      "IK" (SYS_ify (name)), \
+			      0, err, args)

 #undef INTERNAL_SYSCALL_NCS
-#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
-	internal_syscall##nr (= number, , "r" (__v0), err, args)
+#define INTERNAL_SYSCALL_NCS(number, err, nr, args...) \
+	internal_syscall##nr (MOVE32 "\t%0, %2\n\t", \
+			      "r" (__s0), \
+			      number, err, args)

-#define internal_syscall0(ncs_init, cs_init, input, err, dummy...) \
+#define internal_syscall0(v0_init, input, number, err, dummy...) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -97,17 +129,19 @@
 	_sys_result; \
 })

-#define internal_syscall1(ncs_init, cs_init, input, err, arg1) \
+#define internal_syscall1(v0_init, input, number, err, arg1) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set reorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -119,18 +153,20 @@
 	_sys_result; \
 })

-#define internal_syscall2(ncs_init, cs_init, input, err, arg1, arg2) \
+#define internal_syscall2(v0_init, input, number, err, arg1, arg2) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -142,19 +178,22 @@
 	_sys_result; \
 })

-#define internal_syscall3(ncs_init, cs_init, input, err, arg1, arg2, arg3) \
+#define internal_syscall3(v0_init, input, number, err, \
+			  arg1, arg2, arg3) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
 	register long __a3 asm("$7"); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "=r" (__a3) \
@@ -166,19 +205,22 @@
 	_sys_result; \
 })

-#define internal_syscall4(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4) \
+#define internal_syscall4(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
 	register long __a3 asm("$7") = (long) (arg4); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
@@ -190,12 +232,15 @@
 	_sys_result; \
 })

-#define internal_syscall5(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5) \
+#define internal_syscall5(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -203,7 +248,7 @@
 	register long __a4 asm("$8") = (long) (arg5); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \
@@ -215,12 +260,15 @@
 	_sys_result; \
 })

-#define internal_syscall6(ncs_init, cs_init, input, err, arg1, arg2, arg3, arg4, arg5, arg6) \
+#define internal_syscall6(v0_init, input, number, err, \
+			  arg1, arg2, arg3, arg4, arg5, arg6) \
 ({ \
 	long _sys_result; \
 \
 	{ \
-	register long __v0 asm("$2") ncs_init; \
+	register long __s0 asm("$16") __attribute__ ((unused)) \
+	  = (number); \
+	register long __v0 asm("$2"); \
 	register long __a0 asm("$4") = (long) (arg1); \
 	register long __a1 asm("$5") = (long) (arg2); \
 	register long __a2 asm("$6") = (long) (arg3); \
@@ -229,7 +277,7 @@
 	register long __a5 asm("$9") = (long) (arg6); \
 	__asm__ volatile ( \
 	".set\tnoreorder\n\t" \
-	cs_init \
+	v0_init \
 	"syscall\n\t" \
 	".set\treorder" \
 	: "=r" (__v0), "+r" (__a3) \

^ permalink raw reply	[flat|nested] 15+ messages in thread
end of thread, other threads:[~2013-02-05 15:26 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-23  3:09 [PATCH][BZ #15054] MIPS: Fix syscall wrappers for syscall restart support Maciej W. Rozycki
2013-01-23  5:05 ` Mike Frysinger
2013-01-23  5:40 ` Maciej W. Rozycki
2013-01-23 18:13 ` Mike Frysinger
2013-01-29 18:12 ` Maciej W. Rozycki
2013-01-29 19:04 ` Ralf Baechle
2013-01-29 19:12 ` Maciej W. Rozycki
2013-01-30  1:26 ` Ralf Baechle
2013-01-30  6:50 ` Mike Frysinger
2013-01-23 17:06 ` Joseph S. Myers
2013-01-24 12:47 ` Maciej W. Rozycki
2013-01-24 16:31 ` Joseph S. Myers
2013-01-29 18:10 ` Maciej W. Rozycki
2013-01-29 23:59 ` Joseph S. Myers
2013-02-05 15:26 ` Maciej W. Rozycki