From: Masami Hiramatsu
Subject: [PATCH -tip v6 04/11] kprobes: Jump optimization sysctl interface
To: Frederic Weisbecker, Ingo Molnar, Ananth N Mavinakayanahalli, lkml
Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
    Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
    Steven Rostedt, Frederic Weisbecker, "H. Peter Anvin", Anders Kaseorg,
    Tim Abbott, Andi Kleen, Jason Baron, Mathieu Desnoyers
Date: Wed, 25 Nov 2009 16:55:00 -0000
Message-ID: <20091125165541.6073.43956.stgit@harusame>
In-Reply-To: <20091125165510.6073.48721.stgit@harusame>
References: <20091125165510.6073.48721.stgit@harusame>
User-Agent: StGIT/0.14.3
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Add a /proc/sys/debug/kprobes-optimization sysctl which enables and
disables kprobes jump optimization on the fly, for debugging.

Changes in v6:
 - Update comments and coding style.

Signed-off-by: Masami Hiramatsu
Cc: Ananth N Mavinakayanahalli
Cc: Ingo Molnar
Cc: Jim Keniston
Cc: Srikar Dronamraju
Cc: Christoph Hellwig
Cc: Steven Rostedt
Cc: Frederic Weisbecker
Cc: H. Peter Anvin
Cc: Anders Kaseorg
Cc: Tim Abbott
Cc: Andi Kleen
Cc: Jason Baron
Cc: Mathieu Desnoyers
---
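[Editorial note, not part of the patch: for illustration, with
CONFIG_OPTPROBES=y and CONFIG_SYSCTL=y the knob added below can be
toggled at run time, e.g.

    # check whether jump optimization is currently allowed (1 = allowed)
    cat /proc/sys/debug/kprobes-optimization
    # prohibit optimization; optimized probes fall back to breakpoints
    echo 0 > /proc/sys/debug/kprobes-optimization
    # allow optimization again
    echo 1 > /proc/sys/debug/kprobes-optimization
]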
 include/linux/kprobes.h |    8 ++++
 kernel/kprobes.c        |   89 +++++++++++++++++++++++++++++++++++++++++++++--
 kernel/sysctl.c         |   13 +++++++
 3 files changed, 107 insertions(+), 3 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index aed1f95..e7d1b2e 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -283,6 +283,14 @@ extern int arch_within_optimized_kprobe(struct optimized_kprobe *op,
                                         unsigned long addr);

 extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
+
+#ifdef CONFIG_SYSCTL
+extern int sysctl_kprobes_optimization;
+extern int proc_kprobes_optimization_handler(struct ctl_table *table,
+                                             int write, void __user *buffer,
+                                             size_t *length, loff_t *ppos);
+#endif
+
 #endif /* CONFIG_OPTPROBES */

 /* Get the kprobe at this addr (if any) - called with preemption disabled */
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 273fba7..d88f4c1 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include <linux/sysctl.h>
 #include
 #include
 #include
@@ -383,6 +384,9 @@ static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
 }

 #ifdef CONFIG_OPTPROBES
+/* NOTE: change this value only with kprobe_mutex held */
+static bool kprobes_allow_optimization;
+
 /*
  * Call all pre_handler on the list, but ignores its return value.
  * This must be called from arch-dep optimized caller.
@@ -447,7 +451,7 @@ static __kprobes void kprobe_optimizer(struct work_struct *work)
 	/* Lock modules while optimizing kprobes */
 	mutex_lock(&module_mutex);
 	mutex_lock(&kprobe_mutex);
-	if (kprobes_all_disarmed)
+	if (kprobes_all_disarmed || !kprobes_allow_optimization)
 		goto end;

 	/*
@@ -490,7 +494,7 @@ static __kprobes void optimize_kprobe(struct kprobe *p)
 	struct optimized_kprobe *op;

 	/* Check if the kprobe is disabled or not ready for optimization. */
-	if (!kprobe_optready(p) ||
+	if (!kprobe_optready(p) || !kprobes_allow_optimization ||
 	    (kprobe_disabled(p) || kprobes_all_disarmed))
 		return;

@@ -606,6 +610,81 @@ static __kprobes void try_to_optimize_kprobe(struct kprobe *p)
 	init_aggr_kprobe(ap, p);
 	optimize_kprobe(ap);
 }
+
+#ifdef CONFIG_SYSCTL
+static void __kprobes optimize_all_kprobes(void)
+{
+	struct hlist_head *head;
+	struct hlist_node *node;
+	struct kprobe *p;
+	unsigned int i;
+
+	/* If optimization is already allowed, just return */
+	if (kprobes_allow_optimization)
+		return;
+
+	kprobes_allow_optimization = true;
+	mutex_lock(&text_mutex);
+	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
+		head = &kprobe_table[i];
+		hlist_for_each_entry_rcu(p, node, head, hlist)
+			if (!kprobe_disabled(p))
+				optimize_kprobe(p);
+	}
+	mutex_unlock(&text_mutex);
+	printk(KERN_INFO "Kprobes globally optimized\n");
+}
+
+static void __kprobes unoptimize_all_kprobes(void)
+{
+	struct hlist_head *head;
+	struct hlist_node *node;
+	struct kprobe *p;
+	unsigned int i;
+
+	/* If optimization is already prohibited, just return */
+	if (!kprobes_allow_optimization)
+		return;
+
+	kprobes_allow_optimization = false;
+	printk(KERN_INFO "Kprobes globally unoptimized\n");
+	get_online_cpus();	/* For avoiding text_mutex deadlock */
+	mutex_lock(&text_mutex);
+	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
+		head = &kprobe_table[i];
+		hlist_for_each_entry_rcu(p, node, head, hlist) {
+			if (!kprobe_disabled(p))
+				unoptimize_kprobe(p);
+		}
+	}
+
+	mutex_unlock(&text_mutex);
+	put_online_cpus();
+	/* Allow all currently running kprobes to complete */
+	synchronize_sched();
+}
+
+int sysctl_kprobes_optimization;
+int proc_kprobes_optimization_handler(struct ctl_table *table, int write,
+				      void __user *buffer, size_t *length,
+				      loff_t *ppos)
+{
+	int ret;
+
+	mutex_lock(&kprobe_mutex);
+	sysctl_kprobes_optimization = kprobes_allow_optimization ? 1 : 0;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+
+	if (sysctl_kprobes_optimization)
+		optimize_all_kprobes();
+	else
+		unoptimize_all_kprobes();
+	mutex_unlock(&kprobe_mutex);
+
+	return ret;
+}
+#endif /* CONFIG_SYSCTL */
+
 #else /* !CONFIG_OPTPROBES */
 #define get_optimized_kprobe(addr)	(NULL)
 #define optimize_kprobe(p)		do {} while (0)
@@ -1621,10 +1700,14 @@ static int __init init_kprobes(void)
 		}
 	}

-#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
+#if defined(CONFIG_OPTPROBES)
+#if defined(__ARCH_WANT_KPROBES_INSN_SLOT)
 	/* Init kprobe_optinsn_slots */
 	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
 #endif
+	/* By default, kprobes can be optimized */
+	kprobes_allow_optimization = true;
+#endif

 	/* By default, kprobes are armed */
 	kprobes_all_disarmed = false;
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 0060ce7..330aa70 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -51,6 +51,7 @@
 #include
 #include
 #include
+#include <linux/kprobes.h>
 #include
 #include
@@ -1621,6 +1622,18 @@ static struct ctl_table debug_table[] = {
 		.proc_handler	= proc_dointvec
 	},
 #endif
+#if defined(CONFIG_OPTPROBES)
+	{
+		.ctl_name	= CTL_UNNUMBERED,
+		.procname	= "kprobes-optimization",
+		.data		= &sysctl_kprobes_optimization,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_kprobes_optimization_handler,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
+#endif
 	{ .ctl_name = 0 }
 };
-- 
Masami Hiramatsu
Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division
e-mail: mhiramat@redhat.com