public inbox for systemtap@sourceware.org
* [PATCH 3/4] x86: add kprobe-booster to X86_64
@ 2007-12-17 22:27 Harvey Harrison
  2007-12-18 11:30 ` Ingo Molnar
  0 siblings, 1 reply; 12+ messages in thread
From: Harvey Harrison @ 2007-12-17 22:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ananth N Mavinakayanahalli, Jim Keniston, Roland McGrath,
	Arjan van de Ven, prasanna, anil.s.keshavamurthy, davem,
	systemtap-ml, LKML, Andrew Morton

Sorry, I missed an #endif in this patch in the following hunk:

@@ -183,6 +185,9 @@ retry:
        }
 
        switch (opcode & 0xf0) {
+#ifdef X86_64
+       case 0x40:
+               goto retry; /* REX prefix is boostable */
        case 0x60:
                if (0x63 < opcode && opcode < 0x67)
                        goto retry; /* prefixes */

Just add the missing #endif so the #ifdef covers only case 0x40:

@@ -183,6 +185,10 @@ retry:
        }
 
        switch (opcode & 0xf0) {
+#ifdef X86_64
+       case 0x40:
+               goto retry; /* REX prefix is boostable */
+#endif
        case 0x60:
                if (0x63 < opcode && opcode < 0x67)
                        goto retry; /* prefixes */
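
For context (an illustrative aside, not part of the patch itself): on
x86-64 any opcode byte in the 0x40-0x4f range is a REX prefix, so
can_boost() can simply skip it and retry on the following byte, while on
32-bit those same bytes encode the one-byte inc/dec instructions, hence
the #ifdef.  A hypothetical standalone check mirroring the new case:

/* Illustration only: bytes 0x40-0x4f are the REX.W/R/X/B prefix
 * combinations on x86-64 and are never instructions there. */
static int is_rex_prefix(unsigned char byte)
{
        return (byte & 0xf0) == 0x40;
}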

Cheers,

Harvey


* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-17 22:27 [PATCH 3/4] x86: add kprobe-booster to X86_64 Harvey Harrison
@ 2007-12-18 11:30 ` Ingo Molnar
  2007-12-18 11:42   ` Harvey Harrison
  0 siblings, 1 reply; 12+ messages in thread
From: Ingo Molnar @ 2007-12-18 11:30 UTC (permalink / raw)
  To: Harvey Harrison
  Cc: Ananth N Mavinakayanahalli, Jim Keniston, Roland McGrath,
	Arjan van de Ven, prasanna, anil.s.keshavamurthy, davem,
	systemtap-ml, LKML, Andrew Morton, Masami Hiramatsu


* Harvey Harrison <harvey.harrison@gmail.com> wrote:

> Sorry, I missed an #endif in this patch in the following hunk:

could you resend your kprobes cleanups against current x86.git? They
have been conceptually acked by Masami. This cuts out the unification
part of your queue which is bad luck but the effort has been duplicated
already so there's not much we can do about it i guess.

Your other 17 cleanup and unification patches are still queued up in
x86.git and passed a lot of testing, so they will likely go into
v2.6.25. Nice work!

	Ingo


* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-18 11:30 ` Ingo Molnar
@ 2007-12-18 11:42   ` Harvey Harrison
  2007-12-18 13:51     ` Masami Hiramatsu
  2007-12-18 14:00     ` [PATCH 3/4] x86: add kprobe-booster to X86_64 Ingo Molnar
  0 siblings, 2 replies; 12+ messages in thread
From: Harvey Harrison @ 2007-12-18 11:42 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ananth N Mavinakayanahalli, Jim Keniston, Roland McGrath,
	Arjan van de Ven, prasanna, anil.s.keshavamurthy, davem,
	systemtap-ml, LKML, Andrew Morton, Masami Hiramatsu

On Tue, 2007-12-18 at 12:29 +0100, Ingo Molnar wrote:
> * Harvey Harrison <harvey.harrison@gmail.com> wrote:
> 
> > Sorry, I missed an #endif in this patch in the following hunk:
> 
> could you resend your kprobes cleanups against current x86.git? They
> have been conceptually acked by Masami. This cuts out the unification
> part of your queue which is bad luck but the effort has been duplicated
> already so there's not much we can do about it i guess.
> 
> Your other 17 cleanup and unification patches are still queued up in
> x86.git and passed a lot of testing, so they will likely go into
> v2.6.25. Nice work!
> 
> 	Ingo

Ingo,

I'd suggest just tossing my kprobes cleanups.  I just sent you a rollup
of anything I saw left in mine that was still worthwhile
after Masami's, included below for reference.  There wasn't much
left, so I rolled it all together:

Subject: [PATCH] x86: kprobes leftover cleanups

Eliminate __always_inline; all of these static functions are
only called once.  Minor whitespace cleanup.  Eliminate one
superfluous return at the end of a void function.  Reverse the sense
of #ifndef to #ifdef to show the case only affects X86_32.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
---
 arch/x86/kernel/kprobes.c |   14 ++++++--------
 1 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index 9aadd4d..1a0d96d 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -159,7 +159,7 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
 const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
 
 /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
-static __always_inline void set_jmp_op(void *from, void *to)
+static void set_jmp_op(void *from, void *to)
 {
 	struct __arch_jmp_op {
 		char op;
@@ -174,7 +174,7 @@ static __always_inline void set_jmp_op(void *from, void *to)
  * Returns non-zero if opcode is boostable.
  * RIP relative instructions are adjusted at copying time in 64 bits mode
  */
-static __always_inline int can_boost(kprobe_opcode_t *opcodes)
+static int can_boost(kprobe_opcode_t *opcodes)
 {
 	kprobe_opcode_t opcode;
 	kprobe_opcode_t *orig_opcodes = opcodes;
@@ -392,13 +392,13 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
 		kcb->kprobe_saved_flags &= ~IF_MASK;
 }
 
-static __always_inline void clear_btf(void)
+static void clear_btf(void)
 {
 	if (test_thread_flag(TIF_DEBUGCTLMSR))
 		wrmsr(MSR_IA32_DEBUGCTLMSR, 0, 0);
 }
 
-static __always_inline void restore_btf(void)
+static void restore_btf(void)
 {
 	if (test_thread_flag(TIF_DEBUGCTLMSR))
 		wrmsr(MSR_IA32_DEBUGCTLMSR, current->thread.debugctlmsr, 0);
@@ -409,7 +409,7 @@ static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 	clear_btf();
 	regs->flags |= TF_MASK;
 	regs->flags &= ~IF_MASK;
-	/*single step inline if the instruction is an int3*/
+	/* single step inline if the instruction is an int3 */
 	if (p->opcode == BREAKPOINT_INSTRUCTION)
 		regs->ip = (unsigned long)p->addr;
 	else
@@ -767,7 +767,7 @@ static void __kprobes resume_execution(struct kprobe *p,
 	case 0xe8:	/* call relative - Fix return addr */
 		*tos = orig_ip + (*tos - copy_ip);
 		break;
-#ifndef CONFIG_X86_64
+#ifdef CONFIG_X86_32
 	case 0x9a:	/* call absolute -- same as call absolute, indirect */
 		*tos = orig_ip + (*tos - copy_ip);
 		goto no_change;
@@ -813,8 +813,6 @@ static void __kprobes resume_execution(struct kprobe *p,
 
 no_change:
 	restore_btf();
-
-	return;
 }
 
 /*
-- 
1.5.4.rc0.1143.g1a8a




* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-18 11:42   ` Harvey Harrison
@ 2007-12-18 13:51     ` Masami Hiramatsu
  2007-12-19  2:30       ` Harvey Harrison
  2007-12-19  5:27       ` [PATCH] x86: __kprobes annotations Harvey Harrison
  2007-12-18 14:00     ` [PATCH 3/4] x86: add kprobe-booster to X86_64 Ingo Molnar
  1 sibling, 2 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2007-12-18 13:51 UTC (permalink / raw)
  To: Harvey Harrison
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton

Hi Harvey,

Thank you for cleaning this up.

Harvey Harrison wrote:
> Subject: [PATCH] x86: kprobes leftover cleanups
> 
> Eliminate __always_inline; all of these static functions are
> only called once.  Minor whitespace cleanup.  Eliminate one
> superfluous return at the end of a void function.  Reverse the sense
> of #ifndef to #ifdef to show the case only affects X86_32.

Unfortunately, to prevent recursive kprobe hits, all functions which
are called from kprobes must be inlined or marked __kprobes.
If the __always_inline macro still works, I prefer to use it. If not,
they must have the __kprobes attribute, as below.
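
For reference, a rough sketch of the mechanism (approximate, era-specific
definitions): __kprobes places a function in the protected .kprobes.text
section, and register_kprobe() refuses to plant a probe at any address
inside that section, which is what prevents the recursion.

/* Roughly as in include/linux/kprobes.h of that era: functions marked
 * __kprobes are linked into the .kprobes.text section. */
#define __kprobes       __attribute__((__section__(".kprobes.text")))

/* Roughly as in kernel/kprobes.c: register_kprobe() rejects any address
 * inside .kprobes.text (section bounds come from the linker script), so
 * helpers used by the kprobes core can never themselves be probed. */
static int __kprobes in_kprobes_functions(unsigned long addr)
{
        if (addr >= (unsigned long)__kprobes_text_start &&
            addr < (unsigned long)__kprobes_text_end)
                return -EINVAL;
        return 0;
}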

> Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
> ---
>  arch/x86/kernel/kprobes.c |   14 ++++++--------
>  1 files changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index 9aadd4d..1a0d96d 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -159,7 +159,7 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
>  const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
>  
>  /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
> -static __always_inline void set_jmp_op(void *from, void *to)
> +static void set_jmp_op(void *from, void *to)

+static void __kprobes set_jmp_op(void *from, void *to)

>  {
>  	struct __arch_jmp_op {
>  		char op;
> @@ -174,7 +174,7 @@ static __always_inline void set_jmp_op(void *from, void *to)
>   * Returns non-zero if opcode is boostable.
>   * RIP relative instructions are adjusted at copying time in 64 bits mode
>   */
> -static __always_inline int can_boost(kprobe_opcode_t *opcodes)
> +static int can_boost(kprobe_opcode_t *opcodes)

+static int __kprobes can_boost(kprobe_opcode_t *opcodes)


>  {
>  	kprobe_opcode_t opcode;
>  	kprobe_opcode_t *orig_opcodes = opcodes;
> @@ -392,13 +392,13 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
>  		kcb->kprobe_saved_flags &= ~IF_MASK;
>  }
>  
> -static __always_inline void clear_btf(void)
> +static void clear_btf(void)

+static void __kprobes clear_btf(void)

>  {
>  	if (test_thread_flag(TIF_DEBUGCTLMSR))
>  		wrmsr(MSR_IA32_DEBUGCTLMSR, 0, 0);
>  }
>  
> -static __always_inline void restore_btf(void)
> +static void restore_btf(void)

+static void __kprobes restore_btf(void)

>  {
>  	if (test_thread_flag(TIF_DEBUGCTLMSR))
>  		wrmsr(MSR_IA32_DEBUGCTLMSR, current->thread.debugctlmsr, 0);
> @@ -409,7 +409,7 @@ static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
>  	clear_btf();
>  	regs->flags |= TF_MASK;
>  	regs->flags &= ~IF_MASK;
> -	/*single step inline if the instruction is an int3*/
> +	/* single step inline if the instruction is an int3 */
>  	if (p->opcode == BREAKPOINT_INSTRUCTION)
>  		regs->ip = (unsigned long)p->addr;
>  	else
> @@ -767,7 +767,7 @@ static void __kprobes resume_execution(struct kprobe *p,
>  	case 0xe8:	/* call relative - Fix return addr */
>  		*tos = orig_ip + (*tos - copy_ip);
>  		break;
> -#ifndef CONFIG_X86_64
> +#ifdef CONFIG_X86_32
>  	case 0x9a:	/* call absolute -- same as call absolute, indirect */
>  		*tos = orig_ip + (*tos - copy_ip);
>  		goto no_change;
> @@ -813,8 +813,6 @@ static void __kprobes resume_execution(struct kprobe *p,
>  
>  no_change:
>  	restore_btf();
> -
> -	return;
>  }
>  
>  /*

Thanks again!

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com, masami.hiramatsu.pt@hitachi.com


* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-18 11:42   ` Harvey Harrison
  2007-12-18 13:51     ` Masami Hiramatsu
@ 2007-12-18 14:00     ` Ingo Molnar
  1 sibling, 0 replies; 12+ messages in thread
From: Ingo Molnar @ 2007-12-18 14:00 UTC (permalink / raw)
  To: Harvey Harrison
  Cc: Ananth N Mavinakayanahalli, Jim Keniston, Roland McGrath,
	Arjan van de Ven, prasanna, anil.s.keshavamurthy, davem,
	systemtap-ml, LKML, Andrew Morton, Masami Hiramatsu


* Harvey Harrison <harvey.harrison@gmail.com> wrote:

> On Tue, 2007-12-18 at 12:29 +0100, Ingo Molnar wrote:
> > * Harvey Harrison <harvey.harrison@gmail.com> wrote:
> > 
> > > Sorry, I missed an #endif in this patch in the following hunk:
> > 
> > could you resend your kprobes cleanups against current x86.git? They
> > have been conceptually acked by Masami. This cuts out the unification
> > part of your queue which is bad luck but the effort has been duplicated
> > already so there's not much we can do about it i guess.
> > 
> > Your other 17 cleanup and unification patches are still queued up in
> > x86.git and passed a lot of testing, so they will likely go into
> > v2.6.25. Nice work!
> > 
> > 	Ingo
> 
> Ingo,
> 
> I'd suggest just tossing my kprobes cleanups.  I just sent you a rollup
> of anything I saw left in mine that was still worthwhile
> after Masami's, included below for reference.  There wasn't much
> left, so I rolled it all together:
> 
> Subject: [PATCH] x86: kprobes leftover cleanups
> 
> Eliminate __always_inline; all of these static functions are
> only called once.  Minor whitespace cleanup.  Eliminate one
> superfluous return at the end of a void function.  Reverse the sense
> of #ifndef to #ifdef to show the case only affects X86_32.

thanks, i've applied them.

	Ingo


* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-18 13:51     ` Masami Hiramatsu
@ 2007-12-19  2:30       ` Harvey Harrison
  2007-12-19  4:44         ` Masami Hiramatsu
  2007-12-19  5:27       ` [PATCH] x86: __kprobes annotations Harvey Harrison
  1 sibling, 1 reply; 12+ messages in thread
From: Harvey Harrison @ 2007-12-19  2:30 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton

On Tue, 2007-12-18 at 08:50 -0500, Masami Hiramatsu wrote:
> Hi Harvey,
> 
> Thank you for cleaning this up.
> 
> Harvey Harrison wrote:
> > Subject: [PATCH] x86: kprobes leftover cleanups
> > 
> > Eliminate __always_inline; all of these static functions are
> > only called once.  Minor whitespace cleanup.  Eliminate one
> > superfluous return at the end of a void function.  Reverse the sense
> > of #ifndef to #ifdef to show the case only affects X86_32.
> 
> Unfortunately, to prevent recursive kprobe hits, all functions which
> are called from kprobes must be inlined or marked __kprobes.
> If the __always_inline macro still works, I prefer to use it. If not,
> they must have the __kprobes attribute, as below.

I thought all static functions that were only called once were
automatically inlined these days?  Otherwise __always_inline and
inline are exactly the same in the kernel.
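
For reference, the definitions behind that observation, roughly as the
compiler headers of that era spelled them (treat the exact text as
approximate):

/* include/linux/compiler-gcc.h (approximate): */
#define __always_inline inline __attribute__((always_inline))

/* include/linux/compiler-gcc4.h with CONFIG_FORCED_INLINING (approximate):
 * plain "inline" is forced the same way, so in such builds the two
 * really do behave identically. */
#define inline          inline __attribute__((always_inline))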

Harvey


* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-19  2:30       ` Harvey Harrison
@ 2007-12-19  4:44         ` Masami Hiramatsu
  2007-12-19  5:21           ` Harvey Harrison
  0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2007-12-19  4:44 UTC (permalink / raw)
  To: Harvey Harrison
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton

Harvey Harrison wrote:
> On Tue, 2007-12-18 at 08:50 -0500, Masami Hiramatsu wrote:
>> Hi Harvey,
>>
>> Thank you for cleaning this up.
>>
>> Harvey Harrison wrote:
>>> Subject: [PATCH] x86: kprobes leftover cleanups
>>>
>>> Eliminate __always_inline; all of these static functions are
>>> only called once.  Minor whitespace cleanup.  Eliminate one
>>> superfluous return at the end of a void function.  Reverse the sense
>>> of #ifndef to #ifdef to show the case only affects X86_32.
>> Unfortunately, to prevent recursive kprobe hits, all functions which
>> are called from kprobes must be inlined or marked __kprobes.
>> If the __always_inline macro still works, I prefer to use it. If not,
>> they must have the __kprobes attribute, as below.
> 
> I thought all static functions that were only called once were
> automatically inlined these days?  Otherwise __always_inline and
> inline are exactly the same in the kernel.

Yes, it will currently be inlined (though not obviously so).
However, IMHO, that is not fail-safe coding.

I think we had better take care of whoever modifies this code in the
future. If they call those functions from somewhere else, the functions
will no longer be inlined and may end up outside .kprobes.text.
In that case, we cannot prevent kprobes from being inserted into them.

Thus, I recommend adding __kprobes to those functions.
That indicates which functions are used by kprobes and gives a hint
on how to write functions that will be called from kprobes.
(It also simplifies the coding rule.)

Thank you,

> 
> Harvey
> 

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com, masami.hiramatsu.pt@hitachi.com


* Re: [PATCH 3/4] x86: add kprobe-booster to X86_64
  2007-12-19  4:44         ` Masami Hiramatsu
@ 2007-12-19  5:21           ` Harvey Harrison
  0 siblings, 0 replies; 12+ messages in thread
From: Harvey Harrison @ 2007-12-19  5:21 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton

On Tue, 2007-12-18 at 23:43 -0500, Masami Hiramatsu wrote:
> Harvey Harrison wrote:
> > On Tue, 2007-12-18 at 08:50 -0500, Masami Hiramatsu wrote:
> >> Hi Harvey,
> >>
> >> Thank you for cleaning this up.
> >>
> >> Harvey Harrison wrote:
> >>> Subject: [PATCH] x86: kprobes leftover cleanups
> >>>
> >>> Eliminate __always_inline; all of these static functions are
> >>> only called once.  Minor whitespace cleanup.  Eliminate one
> >>> superfluous return at the end of a void function.  Reverse the sense
> >>> of #ifndef to #ifdef to show the case only affects X86_32.
> >> Unfortunately, to prevent recursive kprobe hits, all functions which
> >> are called from kprobes must be inlined or marked __kprobes.
> >> If the __always_inline macro still works, I prefer to use it. If not,
> >> they must have the __kprobes attribute, as below.
> > 
> > I thought all static functions that were only called once were
> > automatically inlined these days?  Otherwise __always_inline and
> > inline are exactly the same in the kernel.
> 
> Yes, it will currently be inlined (though not obviously so).
> However, IMHO, that is not fail-safe coding.
> 

Fair enough, you seem to have a deeper understanding of the code than
I do.  I'd suggest __kprobes as a better annotation for this purpose, though.

> I think we had better take care of whoever modifies this code in the
> future. If they call those functions from somewhere else, the functions
> will no longer be inlined and may end up outside .kprobes.text.
> In that case, we cannot prevent kprobes from being inserted into them.
> 
> Thus, I recommend adding __kprobes to those functions.
> That indicates which functions are used by kprobes and gives a hint
> on how to write functions that will be called from kprobes.
> (It also simplifies the coding rule.)

Patch forthcoming.

Harvey


* [PATCH] x86: __kprobes annotations
  2007-12-18 13:51     ` Masami Hiramatsu
  2007-12-19  2:30       ` Harvey Harrison
@ 2007-12-19  5:27       ` Harvey Harrison
  2007-12-19  5:43         ` Masami Hiramatsu
  2007-12-19  9:28         ` Ingo Molnar
  1 sibling, 2 replies; 12+ messages in thread
From: Harvey Harrison @ 2007-12-19  5:27 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton

__always_inline on some static functions was to ensure they ended
up in the .kprobes.text section. Mark this explicitly.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
---
 arch/x86/kernel/kprobes.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c7a26be..521a469 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -159,7 +159,7 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
 const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
 
 /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
-static void set_jmp_op(void *from, void *to)
+static void __kprobes set_jmp_op(void *from, void *to)
 {
 	struct __arch_jmp_op {
 		char op;
@@ -174,7 +174,7 @@ static void set_jmp_op(void *from, void *to)
  * Returns non-zero if opcode is boostable.
  * RIP relative instructions are adjusted at copying time in 64 bits mode
  */
-static int can_boost(kprobe_opcode_t *opcodes)
+static int __kprobes can_boost(kprobe_opcode_t *opcodes)
 {
 	kprobe_opcode_t opcode;
 	kprobe_opcode_t *orig_opcodes = opcodes;
@@ -392,13 +392,13 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
 		kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
 }
 
-static void clear_btf(void)
+static void __kprobes clear_btf(void)
 {
 	if (test_thread_flag(TIF_DEBUGCTLMSR))
 		wrmsr(MSR_IA32_DEBUGCTLMSR, 0, 0);
 }
 
-static void restore_btf(void)
+static void __kprobes restore_btf(void)
 {
 	if (test_thread_flag(TIF_DEBUGCTLMSR))
 		wrmsr(MSR_IA32_DEBUGCTLMSR, current->thread.debugctlmsr, 0);
-- 
1.5.4.rc0.1143.g1a8a




* Re: [PATCH] x86: __kprobes annotations
  2007-12-19  5:27       ` [PATCH] x86: __kprobes annotations Harvey Harrison
@ 2007-12-19  5:43         ` Masami Hiramatsu
  2007-12-19  9:28         ` Ingo Molnar
  1 sibling, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2007-12-19  5:43 UTC (permalink / raw)
  To: Harvey Harrison
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton

Harvey Harrison wrote:
> __always_inline on some static functions was to ensure they ended
> up in the .kprobes.text section. Mark this explicitly.

It looks good to me.
Thanks!

> 
> Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>

> ---
>  arch/x86/kernel/kprobes.c |    8 ++++----
>  1 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c7a26be..521a469 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -159,7 +159,7 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
>  const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
>  
>  /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
> -static void set_jmp_op(void *from, void *to)
> +static void __kprobes set_jmp_op(void *from, void *to)
>  {
>  	struct __arch_jmp_op {
>  		char op;
> @@ -174,7 +174,7 @@ static void set_jmp_op(void *from, void *to)
>   * Returns non-zero if opcode is boostable.
>   * RIP relative instructions are adjusted at copying time in 64 bits mode
>   */
> -static int can_boost(kprobe_opcode_t *opcodes)
> +static int __kprobes can_boost(kprobe_opcode_t *opcodes)
>  {
>  	kprobe_opcode_t opcode;
>  	kprobe_opcode_t *orig_opcodes = opcodes;
> @@ -392,13 +392,13 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
>  		kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
>  }
>  
> -static void clear_btf(void)
> +static void __kprobes clear_btf(void)
>  {
>  	if (test_thread_flag(TIF_DEBUGCTLMSR))
>  		wrmsr(MSR_IA32_DEBUGCTLMSR, 0, 0);
>  }
>  
> -static void restore_btf(void)
> +static void __kprobes restore_btf(void)
>  {
>  	if (test_thread_flag(TIF_DEBUGCTLMSR))
>  		wrmsr(MSR_IA32_DEBUGCTLMSR, current->thread.debugctlmsr, 0);

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com, masami.hiramatsu.pt@hitachi.com


* Re: [PATCH] x86: __kprobes annotations
  2007-12-19  5:27       ` [PATCH] x86: __kprobes annotations Harvey Harrison
  2007-12-19  5:43         ` Masami Hiramatsu
@ 2007-12-19  9:28         ` Ingo Molnar
  1 sibling, 0 replies; 12+ messages in thread
From: Ingo Molnar @ 2007-12-19  9:28 UTC (permalink / raw)
  To: Harvey Harrison
  Cc: Masami Hiramatsu, Ananth N Mavinakayanahalli, Jim Keniston,
	Roland McGrath, Arjan van de Ven, prasanna, anil.s.keshavamurthy,
	davem, systemtap-ml, LKML, Andrew Morton


* Harvey Harrison <harvey.harrison@gmail.com> wrote:

> __always_inline on some static functions was to ensure they ended up 
> in the .kprobes.text section. Mark this explicitly.

thanks, applied. I rolled this back into your cleanup patch to make sure 
we have a correct, bisectable kernel at every commit point.

	Ingo 


* [PATCH 3/4] x86: add kprobe-booster to X86_64
@ 2007-12-17 21:27 Harvey Harrison
  0 siblings, 0 replies; 12+ messages in thread
From: Harvey Harrison @ 2007-12-17 21:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ananth N Mavinakayanahalli, Jim Keniston, Roland McGrath,
	Arjan van de Ven, prasanna, anil.s.keshavamurthy, davem,
	systemtap-ml, LKML, Andrew Morton

Based on the X86_32 implementation, mostly by un-ifdeffing code.

Based on a patch from Masami Hiramatsu <mhiramat@redhat.com>

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
---
 arch/x86/kernel/kprobes.c |   57 +++++++++++++++++++++++----------------------
 include/asm-x86/kprobes.h |   12 +++++----
 2 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index 64c702c..47bae2c 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -151,15 +151,17 @@ twobyte_has_modrm[256 / (sizeof(unsigned long) * 8)] = {
 #undef R4
 #undef RF
 
-/* insert a jmp code */
+/*
+ * Insert a jump instruction at address 'from' which jumps to address 'to' */
 static inline void set_jmp_op(void *from, void *to)
 {
 	struct __arch_jmp_op {
 		char op;
-		long raddr;
-	} __attribute__((packed)) *jop;
+		s32 raddr;
+	} __attribute__((packed)) * jop;
 	jop = (struct __arch_jmp_op *)from;
-	jop->raddr = (long)(to) - ((long)(from) + 5);
+
+	jop->raddr = (s32)((long)(to) - ((long)(from) + 5));
 	jop->op = RELATIVEJUMP_INSTRUCTION;
 }
 
@@ -183,6 +185,9 @@ retry:
 	}
 
 	switch (opcode & 0xf0) {
+#ifdef X86_64
+	case 0x40:
+		goto retry; /* REX prefix is boostable */
 	case 0x60:
 		if (0x63 < opcode && opcode < 0x67)
 			goto retry; /* prefixes */
@@ -202,7 +207,7 @@ retry:
 	case 0xf0:
 		if ((opcode & 0x0c) == 0 && opcode != 0xf1)
 			goto retry; /* lock/rep(ne) prefix */
-		/* clear and set flags can be boost */
+		/* clear and set flags are boostable */
 		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
 	default:
 		if (opcode == 0x26 || opcode == 0x36 || opcode == 0x3e)
@@ -221,6 +226,10 @@ static s32 __kprobes *is_riprel(u8 *insn)
 {
 	int need_modrm;
 
+#ifdef CONFIG_X86_32
+	return NULL;
+#endif
+
 	/* Skip legacy instruction prefixes.  */
 	while (1) {
 		switch (*insn) {
@@ -266,18 +275,10 @@ static s32 __kprobes *is_riprel(u8 *insn)
 
 static void __kprobes arch_copy_kprobe(struct kprobe *p)
 {
-#ifdef CONFIG_X86_32
-	memcpy(p->ainsn.insn, p->addr,
-	       (MAX_INSN_SIZE + 1) * sizeof(kprobe_opcode_t));
-	p->opcode = *p->addr;
-	if (can_boost(p->addr)) {
-		p->ainsn.boostable = 0;
-	} else {
-		p->ainsn.boostable = -1;
-	}
-#else
 	s32 *ripdisp;
-	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE);
+	memcpy(p->ainsn.insn, p->addr,
+	       MAX_INSN_SIZE + sizeof(kprobe_opcode_t));
+
 	ripdisp = is_riprel(p->ainsn.insn);
 	if (ripdisp) {
 		/*
@@ -297,8 +298,13 @@ static void __kprobes arch_copy_kprobe(struct kprobe *p)
 		BUG_ON((s64) (s32) disp != disp); /* Sanity check.  */
 		*ripdisp = disp;
 	}
+
 	p->opcode = *p->addr;
-#endif
+	if (can_boost(p->addr)) {
+		p->ainsn.boostable = 0;
+	} else {
+		p->ainsn.boostable = -1;
+	}
 }
 
 /*
@@ -343,11 +349,7 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
 void __kprobes arch_remove_kprobe(struct kprobe *p)
 {
 	mutex_lock(&kprobe_mutex);
-#ifdef CONFIG_X86_32
 	free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1));
-#else
-	free_insn_slot(p->ainsn.insn, 0);
-#endif
 	mutex_unlock(&kprobe_mutex);
 }
 
@@ -544,7 +546,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 		return 1;
 
 ss_probe:
-#if defined(CONFIG_X86_32) && (!defined(CONFIG_PREEMPT) || defined(CONFIG_PM))
+#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
 	if (p->ainsn.boostable == 1 && !p->post_handler){
 		/* Boost up -- we can execute copied instructions directly */
 		reset_current_kprobe();
@@ -722,6 +724,11 @@ void *__kprobes trampoline_handler(struct pt_regs *regs)
  * that is atop the stack is the address following the copied instruction.
  * We need to make it the address following the original instruction.
  *
+ * If this is the first time we've single-stepped the instruction at
+ * this probepoint, and the instruction is boostable, boost it: add a
+ * jump instruction after the copied instruction, that jumps to the next
+ * instruction after the probepoint.
+ *
  * This function also checks instruction size for preparing direct execution.
  */
 static void __kprobes resume_execution(struct kprobe *p,
@@ -754,10 +761,8 @@ static void __kprobes resume_execution(struct kprobe *p,
 	case 0xcb:
 	case 0xcf:
 	case 0xea:		/* jmp absolute -- ip is correct */
-#ifdef CONFIG_X86_32
 		/* ip is already adjusted, no more changes required */
 		p->ainsn.boostable = 1;
-#endif
 		goto no_change;
 	case 0xe8:		/* call relative - Fix return addr */
 		*tos = orig_ip + (*tos - copy_ip);
@@ -777,10 +782,8 @@ static void __kprobes resume_execution(struct kprobe *p,
 		} else if (((insn[1] & 0x31) == 0x20) ||	/* jmp near, absolute indirect */
 			   ((insn[1] & 0x31) == 0x21)) {	/* jmp far, absolute indirect */
 			/* ip is correct. */
-#ifdef CONFIG_X86_32
 			/* And this is boostable */
 			p->ainsn.boostable = 1;
-#endif
 			goto no_change;
 		}
 		break;
@@ -788,7 +791,6 @@ static void __kprobes resume_execution(struct kprobe *p,
 		break;
 	}
 
-#ifdef CONFIG_X86_32
 	if (p->ainsn.boostable == 0) {
 		if ((regs->ip > copy_ip) &&
 		    (regs->ip - copy_ip) + 5 < (MAX_INSN_SIZE + 1)) {
@@ -803,7 +805,6 @@ static void __kprobes resume_execution(struct kprobe *p,
 			p->ainsn.boostable = -1;
 		}
 	}
-#endif
 	regs->ip = orig_ip + (regs->ip - copy_ip);
 
 no_change:
diff --git a/include/asm-x86/kprobes.h b/include/asm-x86/kprobes.h
index 7319c62..f9a4fd2 100644
--- a/include/asm-x86/kprobes.h
+++ b/include/asm-x86/kprobes.h
@@ -58,13 +58,15 @@ void kretprobe_trampoline(void);
 struct arch_specific_insn {
 	/* copy of the original instruction */
 	kprobe_opcode_t *insn;
-#ifdef CONFIG_X86_32
 	/*
-	 * If this flag is not 0, this kprobe can be boost when its
-	 * post_handler and break_handler is not set.
+	 * boostable = -1: This instruction type is not boostable.
+	 * boostable = 0: This instruction type is boostable.
+	 * boostable = 1: This instruction has been boosted: we have
+	 * added a relative jump after the instruction copy in insn,
+	 * so no single-step and fixup are needed (unless there's
+	 * a post_handler or break_handler).
 	 */
-	int boostable;
-#endif
+	 int boostable;
 };
 
 struct prev_kprobe {
-- 
1.5.4.rc0.1083.gf568
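
For illustration, a hedged userspace sketch of the 5-byte booster jump
that set_jmp_op() in the patch above emits: RELATIVEJUMP_INSTRUCTION
(0xe9) followed by a signed 32-bit displacement measured from the end of
the jump.  The function name here is hypothetical; the kernel packs the
same bytes through struct __arch_jmp_op.

#include <stdint.h>
#include <string.h>

/* Write "jmp rel32" at 'from' so execution continues at 'to'; boosting
 * appends exactly this after the copied instruction in the insn slot. */
static void emit_jmp_rel32(void *from, void *to)
{
        unsigned char *p = from;
        int32_t rel = (int32_t)((long)to - ((long)from + 5));

        p[0] = 0xe9;
        memcpy(&p[1], &rel, sizeof(rel));
}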



Thread overview: 12+ messages
2007-12-17 22:27 [PATCH 3/4] x86: add kprobe-booster to X86_64 Harvey Harrison
2007-12-18 11:30 ` Ingo Molnar
2007-12-18 11:42   ` Harvey Harrison
2007-12-18 13:51     ` Masami Hiramatsu
2007-12-19  2:30       ` Harvey Harrison
2007-12-19  4:44         ` Masami Hiramatsu
2007-12-19  5:21           ` Harvey Harrison
2007-12-19  5:27       ` [PATCH] x86: __kprobes annotations Harvey Harrison
2007-12-19  5:43         ` Masami Hiramatsu
2007-12-19  9:28         ` Ingo Molnar
2007-12-18 14:00     ` [PATCH 3/4] x86: add kprobe-booster to X86_64 Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2007-12-17 21:27 Harvey Harrison
