From: Prasanna S Panchamukhi <prasanna@in.ibm.com>
To: systemtap@sources.redhat.com
Subject: Re: [PATCH 3/3] User space probes - single stepping out-of-line take3 - RFC
Date: Mon, 06 Mar 2006 15:11:00 -0000
Message-ID: <20060306151315.GE8589@in.ibm.com>
In-Reply-To: <20060306151153.GD8589@in.ibm.com>
References: <20060306151038.GC8589@in.ibm.com> <20060306151153.GD8589@in.ibm.com>

This patch provides a mechanism for handling probes and executing the user-specified handlers. Each user-space probe is uniquely identified by the combination of inode and offset, so at registration time the inode and offset pair is added to the uprobes hash table. When a breakpoint instruction is hit, the uprobes hash table is looked up for a matching inode and offset, and if multiple probes are registered there the pre_handlers are called in sequence.

Like kprobes, uprobes adopts single stepping out-of-line, so that probe misses in an SMP environment can be avoided. For user-space probes, however, an instruction copied into kernel address space cannot be single stepped; the instruction must be copied into user address space instead. The approach is to find free space in the current process's address space, copy the original instruction there, and single step that copy. User processes use stack space to store local variables, arguments and return values, and the stack space just below (or above) the stack pointer is normally free. Because the instruction being single stepped may itself modify the stack, sufficient stack space is left untouched before the copy is placed. The instruction is copied to the bottom of the page, after checking that the copy does not cross the page boundary (as sketched below), and the copy is then single stepped.
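To make the free-space check concrete, here is a minimal stand-alone sketch for a downward-growing stack. It is illustrative only: the function name, the fixed page and slot sizes, and the example addresses are my own assumptions, not the patch's code (the in-kernel equivalent is copy_insn_onstack() in the patch below).

/*
 * Illustrative sketch only -- not the patch's code.  It mirrors the
 * check performed by copy_insn_onstack() for a downward-growing
 * stack: the instruction copy is placed at the bottom (lowest
 * address) of the page containing the stack pointer, and is rejected
 * if it would reach into the redzone of sizeof(long long) bytes kept
 * free just below the stack pointer.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
/* size of the instruction slot, e.g. MAX_INSN_SIZE * sizeof(kprobe_opcode_t) */
#define SLOT_SIZE	16UL

static int fits_on_stack_page(unsigned long esp, unsigned long *copy_addr)
{
	unsigned long page_addr = esp & PAGE_MASK;

	if (esp - sizeof(long long) < page_addr + SLOT_SIZE)
		return 0;	/* not enough free space on this stack page */

	*copy_addr = page_addr;	/* the copy goes to the bottom of the page */
	return 1;
}

int main(void)
{
	unsigned long copy_addr;

	/* esp well inside its page: the 16-byte slot fits at 0xbfffe000 */
	if (fits_on_stack_page(0xbfffe830UL, &copy_addr))
		printf("single step slot at 0x%lx\n", copy_addr);

	/* esp too close to the bottom of its page: fall back */
	if (!fits_on_stack_page(0xbfffe008UL, &copy_addr))
		printf("slot would cross the redzone, try another page\n");

	return 0;
}

When this check fails, the patch falls back to other free space in the stack vma (copy_insn_on_new_page), then to extending the vma (copy_insn_onexpstack), and finally to single stepping the original instruction inline.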
Several architectures do not allow instructions to be executed from stack pages, because the no-exec bit is set on those pages. On such architectures, the page table entry corresponding to the stack page is located and its no-exec bit is cleared, so that the instruction copied onto that stack page can be executed. There are also situations where even the free stack space is not enough for the instruction to be copied and single stepped. In such situations, the stack's virtual memory area (vma) is expanded beyond the current stack vma, and the expanded stack is used to copy the original instruction and single step it out-of-line. If the vma cannot be extended either, the instruction is executed inline, by replacing the breakpoint instruction with the original instruction.

TODO list
---------
1. Execution of the probe handlers is serialized by a uprobe_mutex; this needs to be made scalable.
2. As Yanmin mentioned, user-space probes can re-enter through signal handlers; re-entrancy needs to be supported, similar to kprobes.
3. Synchronize use of the stack between signal handlers and the user-space probe, or prevent the signal handler from using the stack space until the copied instruction has been single stepped.
4. Insert probes on copy-on-write pages: track all COW copies of the page containing the specified probe point and insert/remove the probes on every copy of that page.
5. Optimize insertion of probes through the readpage hooks: identify all the probes that fall on the page being read and insert them at once.
6. Add wrapper routines to calculate a probe's offset from the beginning of the probed file. For a dynamic shared library, the offset is obtained by subtracting the mapped base address of the file from the address of the probe point (see the sketch after this list).
7. Robust fault handling, to support faults taken while single stepping the original instruction.
8. Handle probes on instructions such as "sub $64,%esp", "sub %esi,%esp" or "mov %esi,%esp", which can grow the stack.
9. A mutex is currently taken during probe processing, which allows sleeping; sleeping while processing a probe needs to be avoided.
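As a companion to TODO item 6, the following user-space sketch shows one way the file-relative offset of a probe address could be computed by walking /proc/<pid>/maps. It is purely illustrative: the helper name and the use of /proc are my assumptions and are not part of this patch.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Hypothetical helper (not part of the patch): return the offset of
 * 'probe_addr' from the beginning of the file mapped at that address
 * in process 'pid', or -1 if no mapping covers the address.
 *
 * Each line of /proc/<pid>/maps has the form
 *   start-end perms file_offset dev inode pathname
 * so the probe's file offset is file_offset + (probe_addr - start).
 */
long probe_file_offset(pid_t pid, unsigned long probe_addr)
{
	char path[64], line[512];
	unsigned long start, end, file_off;
	long result = -1;
	FILE *fp;

	snprintf(path, sizeof(path), "/proc/%ld/maps", (long)pid);
	fp = fopen(path, "r");
	if (!fp)
		return -1;

	while (fgets(line, sizeof(line), fp)) {
		if (sscanf(line, "%lx-%lx %*s %lx",
			   &start, &end, &file_off) != 3)
			continue;
		if (probe_addr >= start && probe_addr < end) {
			result = (long)(file_off + (probe_addr - start));
			break;
		}
	}
	fclose(fp);
	return result;
}

int main(void)
{
	/* e.g. the offset of this program's own main() in its executable */
	long off = probe_file_offset(getpid(), (unsigned long)&main);

	if (off >= 0)
		printf("probe offset within mapped file: 0x%lx\n", off);
	return 0;
}

Together with the inode of the mapped file, such an offset identifies the probe point in the way the registration path described above expects.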
Signed-off-by: Prasanna S Panchamukhi arch/i386/mm/fault.c | 3 include/linux/kprobes.h | 8 diff -puN arch/i386/kernel/kprobes.c~kprobes_userspace_probes-ss-out-of-line arch/i386/kernel/kprobes.c arch/i386/kernel/kprobes.c | 531 ++++++++++++++++++++++++++++++++++++++++++++- arch/i386/mm/fault.c | 3 include/asm-i386/kprobes.h | 18 + include/linux/kprobes.h | 8 4 files changed, 556 insertions(+), 4 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes_userspace_probes-ss-out-of-line arch/i386/kernel/kprobes.c --- linux-2.6.16-rc5-mm2/arch/i386/kernel/kprobes.c~kprobes_userspace_probes-ss-out-of-line 2006-03-06 19:16:45.000000000 +0530 +++ linux-2.6.16-rc5-mm2-prasanna/arch/i386/kernel/kprobes.c 2006-03-06 19:28:07.000000000 +0530 @@ -40,6 +40,8 @@ void jprobe_return_end(void); DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL; DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk); +static struct uprobe_ctlblk uprobe_ctlblk; +struct uprobe *current_uprobe; /* insert a jmp code */ static inline void set_jmp_op(void *from, void *to) @@ -111,6 +113,22 @@ int __kprobes arch_alloc_insn(struct kpr return 0; } +void __kprobes arch_disarm_uprobe(struct kprobe *p, kprobe_opcode_t *address) +{ + *address = p->opcode; +} + +void __kprobes arch_arm_uprobe(kprobe_opcode_t *address) +{ + *address = BREAKPOINT_INSTRUCTION; +} + +void __kprobes arch_copy_uprobe(struct kprobe *p, kprobe_opcode_t *address) +{ + memcpy(p->ainsn.insn, address, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); + p->opcode = *(kprobe_opcode_t *)address; +} + int __kprobes arch_prepare_kprobe(struct kprobe *p) { /* insn: must be on special executable page on i386. */ @@ -579,8 +597,8 @@ int __kprobes kprobe_exceptions_notify(s struct die_args *args = (struct die_args *)data; int ret = NOTIFY_DONE; - if (user_mode(args->regs)) - return ret; + if (args->regs && user_mode(args->regs)) + return uprobe_exceptions_notify(self, val, data); switch (val) { case DIE_INT3: @@ -671,6 +689,515 @@ int __kprobes longjmp_break_handler(stru return 0; } +/** + * This routines get the pte of the page containing the specified address. + */ +static pte_t __kprobes *get_uprobe_pte(unsigned long address) +{ + pgd_t *pgd; + pud_t *pud; + pmd_t *pmd; + pte_t *pte = NULL; + + pgd = pgd_offset(current->mm, address); + if (!pgd) + goto out; + + pud = pud_offset(pgd, address); + if (!pud) + goto out; + + pmd = pmd_offset(pud, address); + if (!pmd) + goto out; + + pte = pte_alloc_map(current->mm, pmd, address); + +out: + return pte; +} + +/** + * This routine check for space in the current process's stack + * address space. If enough address space is found, copy the original + * instruction on that page for single stepping out-of-line. 
+ */ +static int __kprobes copy_insn_on_new_page(struct uprobe *uprobe , + struct pt_regs *regs, struct vm_area_struct *vma) +{ + unsigned long addr, stack_addr = regs->esp; + int size = MAX_INSN_SIZE * sizeof(kprobe_opcode_t); + + if (vma->vm_flags & VM_GROWSDOWN) { + if (((stack_addr - sizeof(long long))) < + (vma->vm_start + size)) + return -ENOMEM; + addr = vma->vm_start; + } else if (vma->vm_flags & VM_GROWSUP) { + if ((vma->vm_end - size) < (stack_addr + sizeof(long long))) + return -ENOMEM; + addr = vma->vm_end - size; + } else + return -EFAULT; + + vma->vm_flags |= VM_LOCKED; + + if (__copy_to_user_inatomic((unsigned long *)addr, + (unsigned long *)uprobe->kp.ainsn.insn, size)) + return -EFAULT; + + regs->eip = addr; + + return 0; +} + +/** + * This routine expands the stack beyond the present process address + * space and copies the instruction to that location, so that + * processor can single step out-of-line. + */ +static int __kprobes copy_insn_onexpstack(struct uprobe *uprobe, + struct pt_regs *regs, struct vm_area_struct *vma) +{ + unsigned long addr, vm_addr; + int size = MAX_INSN_SIZE * sizeof(kprobe_opcode_t); + struct vm_area_struct *new_vma; + struct mm_struct *mm = current->mm; + + + if (!down_read_trylock(¤t->mm->mmap_sem)) + return -ENOMEM; + + if (vma->vm_flags & VM_GROWSDOWN) + vm_addr = vma->vm_start - size; + else if (vma->vm_flags & VM_GROWSUP) + vm_addr = vma->vm_end + size; + else { + up_read(¤t->mm->mmap_sem); + return -EFAULT; + } + + new_vma = find_extend_vma(mm, vm_addr); + if (!new_vma) { + up_read(¤t->mm->mmap_sem); + return -ENOMEM; + } + + if (new_vma->vm_flags & VM_GROWSDOWN) + addr = new_vma->vm_start; + else + addr = new_vma->vm_end - size; + + new_vma->vm_flags |= VM_LOCKED; + up_read(¤t->mm->mmap_sem); + + if (__copy_to_user_inatomic((unsigned long *)addr, + (unsigned long *)uprobe->kp.ainsn.insn, size)) + return -EFAULT; + + regs->eip = addr; + + return 0; +} + +/** + * This routine checks for stack free space below the stack pointer + * and then copies the instructions at that location so that the + * processor can single step out-of-line. If there is not enough stack + * space or if copy_to_user fails or if the vma is invalid, it returns + * error. + */ +static int __kprobes copy_insn_onstack(struct uprobe *uprobe, + struct pt_regs *regs, unsigned long flags) +{ + unsigned long page_addr, stack_addr = regs->esp; + int size = MAX_INSN_SIZE * sizeof(kprobe_opcode_t); + unsigned long *source = (unsigned long *)uprobe->kp.ainsn.insn; + + if (flags & VM_GROWSDOWN) { + page_addr = stack_addr & PAGE_MASK; + + if (((stack_addr - sizeof(long long))) < (page_addr + size)) + return -ENOMEM; + + if (__copy_to_user_inatomic((unsigned long *)page_addr, + source, size)) + return -EFAULT; + + regs->eip = page_addr; + } else if (flags & VM_GROWSUP) { + page_addr = stack_addr & PAGE_MASK; + + if (page_addr == stack_addr) + return -ENOMEM; + else + page_addr += PAGE_SIZE; + + if ((page_addr - size) < (stack_addr + sizeof(long long))) + return -ENOMEM; + + if (__copy_to_user_inatomic( + (unsigned long *)(page_addr - size), source, size)) + return -EFAULT; + + regs->eip = page_addr - size; + } else + return -EINVAL; + + return 0; +} + +/** + * This routines get the page containing the probe, maps it and + * replaced the instruction at the probed address with specified + * opcode. 
+ */ +void __kprobes replace_original_insn(struct uprobe *uprobe, + struct pt_regs *regs, kprobe_opcode_t opcode) +{ + kprobe_opcode_t *addr; + struct page *page; + + page = find_get_page(uprobe->inode->i_mapping, + uprobe->offset >> PAGE_CACHE_SHIFT); + BUG_ON(!page); + + __lock_page(page); + + addr = (kprobe_opcode_t *)kmap_atomic(page, KM_USER1); + addr = (kprobe_opcode_t *)((unsigned long)addr + + (unsigned long)(uprobe->offset & ~PAGE_MASK)); + *addr = opcode; + /*TODO: flush vma ? */ + kunmap_atomic(addr, KM_USER1); + + unlock_page(page); + + if (page) + page_cache_release(page); + regs->eip = (unsigned long)uprobe->kp.addr; +} + +/** + * This routine provides the functionality of single stepping + * out-of-line. If single stepping out-of-line cannot be achieved, + * it replaces with the original instruction allowing it to single + * step inline. + */ +static inline int prepare_singlestep_uprobe(struct uprobe *uprobe, + struct uprobe_ctlblk *ucb, struct pt_regs *regs) +{ + unsigned long stack_addr = regs->esp, flags; + struct vm_area_struct *vma = NULL; + int err = 0; + + vma = find_vma(current->mm, (stack_addr & PAGE_MASK)); + if (!vma) { + /* TODO: Need better error reporting? */ + goto no_vma; + } + flags = vma->vm_flags; + + regs->eflags |= TF_MASK; + regs->eflags &= ~IF_MASK; + + /* + * Copy_insn_on_stack tries to find some room for the instruction slot + * in the same page as the current esp. + */ + err = copy_insn_onstack(uprobe, regs, flags); + + /* + * If copy_insn_on_stack() fails, copy_insn_on_new_page() is called to + * try to find some room in the next pages below the current esp; + */ + if (err) + err = copy_insn_on_new_page(uprobe, regs, vma); + /* + * If copy_insn_on_new_pagek() fails, copy_insn_on_expstack() is called to + * try to grow the stack's VM area by one page. + */ + if (err) + err = copy_insn_onexpstack(uprobe, regs, vma); + + ucb->uprobe_status = UPROBE_HIT_SS; + + if (!err) { + ucb->upte = get_uprobe_pte(regs->eip); + if (!ucb->upte) + goto no_vma; + ucb->upage = pte_page(*ucb->upte); + __lock_page(ucb->upage); + } +no_vma: + if (err) { + replace_original_insn(uprobe, regs, uprobe->kp.opcode); + ucb->uprobe_status = UPROBE_SS_INLINE; + } + + ucb->singlestep_addr = regs->eip; + + return 0; +} + +/* + * uprobe_handler() executes the user specified handler and setup for + * single stepping the original instruction either out-of-line or inline. + */ +static int __kprobes uprobe_handler(struct pt_regs *regs) +{ + struct kprobe *p; + int ret = 0; + kprobe_opcode_t *addr = NULL; + struct uprobe_ctlblk *ucb = &uprobe_ctlblk; + unsigned long limit; + + spin_lock_irqsave(&uprobe_lock, ucb->flags); + /* preemption is disabled, remains disabled + * untill we single step on original instruction. + */ + preempt_disable(); + + addr = (kprobe_opcode_t *)(get_segment_eip(regs, &limit) - 1); + + p = get_uprobe(addr); + if (!p) { + + if (*addr != BREAKPOINT_INSTRUCTION) { + /* + * The breakpoint instruction was removed right + * after we hit it. Another cpu has removed + * either a probepoint or a debugger breakpoint + * at this address. In either case, no further + * handling of this interrupt is appropriate. + * Back up over the (now missing) int3 and run + * the original instruction. 
+ */ + regs->eip -= sizeof(kprobe_opcode_t); + ret = 1; + } + /* Not one of ours: let kernel handle it */ + goto no_uprobe; + } + + ucb->curr_p = p; + ucb->uprobe_status = UPROBE_HIT_ACTIVE; + ucb->uprobe_saved_eflags = (regs->eflags & (TF_MASK | IF_MASK)); + ucb->uprobe_old_eflags = (regs->eflags & (TF_MASK | IF_MASK)); + if (is_IF_modifier(p->opcode)) + ucb->uprobe_saved_eflags &= ~IF_MASK; + + if (p->pre_handler && p->pre_handler(p, regs)) + /* handler has already set things up, so skip ss setup */ + return 1; + + prepare_singlestep_uprobe(current_uprobe, ucb, regs); + /* + * Avoid scheduling the current while returning from + * kernel to user mode. + */ + clear_need_resched(); + return 1; + +no_uprobe: + spin_unlock_irqrestore(&uprobe_lock, ucb->flags); + preempt_enable_no_resched(); + + return ret; +} + +/* + * Called after single-stepping. p->addr is the address of the + * instruction whose first byte has been replaced by the "int 3" + * instruction. To avoid the SMP problems that can occur when we + * temporarily put back the original opcode to single-step, we + * single-stepped a copy of the instruction. The address of this + * copy is p->ainsn.insn. + * + * This function prepares to return from the post-single-step + * interrupt. We have to fix up the stack as follows: + * + * 0) Typically, the new eip is relative to the copied instruction. We + * need to make it relative to the original instruction. Exceptions are + * return instructions and absolute or indirect jump or call instructions. + * + * 1) If the single-stepped instruction was pushfl, then the TF and IF + * flags are set in the just-pushed eflags, and may need to be cleared. + * + * 2) If the single-stepped instruction was a call, the return address + * that is atop the stack is the address following the copied instruction. + * We need to make it the address following the original instruction. + */ +static void __kprobes resume_execution_user(struct kprobe *p, + struct pt_regs *regs, struct uprobe_ctlblk *ucb) +{ + unsigned long *tos = (unsigned long *)regs->esp; + unsigned long next_eip = 0; + unsigned long copy_eip = ucb->singlestep_addr; + unsigned long orig_eip = (unsigned long)p->addr; + + switch (p->ainsn.insn[0]) { + case 0x9c: /* pushfl */ + *tos &= ~(TF_MASK | IF_MASK); + *tos |= ucb->uprobe_old_eflags; + break; + case 0xc3: /* ret/lret */ + case 0xcb: + case 0xc2: + case 0xca: + regs->eflags &= ~TF_MASK; + next_eip = regs->eip; + /* eip is already adjusted, no more changes required*/ + return; + break; + case 0xe8: /* call relative - Fix return addr */ + *tos = orig_eip + (*tos - copy_eip); + break; + case 0xff: + if ((p->ainsn.insn[1] & 0x30) == 0x10) { + /* call absolute, indirect */ + /* Fix return addr; eip is correct. */ + next_eip = regs->eip; + *tos = orig_eip + (*tos - copy_eip); + } else if (((p->ainsn.insn[1] & 0x31) == 0x20) || + ((p->ainsn.insn[1] & 0x31) == 0x21)) { + /* jmp near or jmp far absolute indirect */ + /* eip is correct. */ + next_eip = regs->eip; + } + break; + case 0xea: /* jmp absolute -- eip is correct */ + next_eip = regs->eip; + break; + default: + break; + } + + regs->eflags &= ~TF_MASK; + if (next_eip) + regs->eip = next_eip; + else + regs->eip = orig_eip + (regs->eip - copy_eip); +} + +/* + * post_uprobe_handler(), executes the user specified handlers and + * resumes with the normal execution. 
+ */ +static inline int post_uprobe_handler(struct pt_regs *regs) +{ + struct kprobe *cur; + struct uprobe_ctlblk *ucb; + + if (!current_uprobe) + return 0; + + ucb = &uprobe_ctlblk; + cur = ucb->curr_p; + + if (!cur) + return 0; + + if (cur->post_handler) { + if (ucb->uprobe_status == UPROBE_SS_INLINE) + ucb->uprobe_status = UPROBE_SSDONE_INLINE; + else + ucb->uprobe_status = UPROBE_HIT_SSDONE; + cur->post_handler(cur, regs, 0); + } + + resume_execution_user(cur, regs, ucb); + regs->eflags |= ucb->uprobe_saved_eflags; + + if (ucb->uprobe_status == UPROBE_SSDONE_INLINE) + replace_original_insn(current_uprobe, regs, + BREAKPOINT_INSTRUCTION); + else { + unlock_page(ucb->upage); + pte_unmap(ucb->upte); + } + current_uprobe = NULL; + spin_unlock_irqrestore(&uprobe_lock, ucb->flags); + preempt_enable_no_resched(); + /* + * if somebody else is singlestepping across a probe point, eflags + * will have TF set, in which case, continue the remaining processing + * of do_debug, as if this is not a probe hit. + */ + if (regs->eflags & TF_MASK) + return 0; + + return 1; +} + +static inline int uprobe_fault_handler(struct pt_regs *regs, int trapnr) +{ + struct kprobe *cur; + struct uprobe_ctlblk *ucb; + int ret = 0; + + if (!current_uprobe) + return 0; + + ucb = &uprobe_ctlblk; + cur = ucb->curr_p; + + if (!cur) + return 0; + + if ((ucb->uprobe_status == UPROBE_HIT_SS) || + (ucb->uprobe_status == UPROBE_SS_INLINE)) { + if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr)) + return 1; + + regs->eip = (unsigned long)cur->addr; + regs->eflags |= ucb->uprobe_old_eflags; + regs->eflags &= ~TF_MASK; + replace_original_insn(current_uprobe, regs, + BREAKPOINT_INSTRUCTION); + current_uprobe = NULL; + ret = 1; + spin_unlock_irqrestore(&uprobe_lock, ucb->flags); + preempt_enable_no_resched(); + } + return ret; +} + +/* + * Wrapper routine to for handling exceptions. + */ +int __kprobes uprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data) +{ + struct die_args *args = (struct die_args *)data; + int ret = NOTIFY_DONE; + + if (args->regs->eflags & VM_MASK) { + /* We are in virtual-8086 mode. Return NOTIFY_DONE */ + return ret; + } + + switch (val) { + case DIE_INT3: + if (uprobe_handler(args->regs)) + ret = NOTIFY_STOP; + break; + case DIE_DEBUG: + if (post_uprobe_handler(args->regs)) + ret = NOTIFY_STOP; + break; + case DIE_GPF: + case DIE_PAGE_FAULT: + if (current_uprobe && + uprobe_fault_handler(args->regs, args->trapnr)) + ret = NOTIFY_STOP; + break; + default: + break; + } + return ret; +} + int __init arch_init_kprobes(void) { return 0; diff -puN arch/i386/mm/fault.c~kprobes_userspace_probes-ss-out-of-line arch/i386/mm/fault.c --- linux-2.6.16-rc5-mm2/arch/i386/mm/fault.c~kprobes_userspace_probes-ss-out-of-line 2006-03-06 19:16:45.000000000 +0530 +++ linux-2.6.16-rc5-mm2-prasanna/arch/i386/mm/fault.c 2006-03-06 19:16:45.000000000 +0530 @@ -71,8 +71,7 @@ void bust_spinlocks(int yes) * * This is slow, but is very rarely executed. 
*/ -static inline unsigned long get_segment_eip(struct pt_regs *regs, - unsigned long *eip_limit) +unsigned long get_segment_eip(struct pt_regs *regs, unsigned long *eip_limit) { unsigned long eip = regs->eip; unsigned seg = regs->xcs & 0xffff; diff -puN include/asm-i386/kprobes.h~kprobes_userspace_probes-ss-out-of-line include/asm-i386/kprobes.h --- linux-2.6.16-rc5-mm2/include/asm-i386/kprobes.h~kprobes_userspace_probes-ss-out-of-line 2006-03-06 19:16:45.000000000 +0530 +++ linux-2.6.16-rc5-mm2-prasanna/include/asm-i386/kprobes.h 2006-03-06 19:16:45.000000000 +0530 @@ -26,6 +26,7 @@ */ #include #include +#include #define __ARCH_WANT_KPROBES_INSN_SLOT @@ -77,6 +78,18 @@ struct kprobe_ctlblk { struct prev_kprobe prev_kprobe; }; +/* per user probe control block */ +struct uprobe_ctlblk { + unsigned long uprobe_status; + unsigned long uprobe_saved_eflags; + unsigned long uprobe_old_eflags; + unsigned long singlestep_addr; + unsigned long flags; + struct kprobe *curr_p; + pte_t *upte; + struct page *upage; +}; + /* trap3/1 are intr gates for kprobes. So, restore the status of IF, * if necessary, before executing the original int3/1 (trap) handler. */ @@ -88,4 +101,9 @@ static inline void restore_interrupts(st extern int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, void *data); +int uprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data); +extern unsigned long get_segment_eip(struct pt_regs *regs, + unsigned long *eip_limit); + #endif /* _ASM_KPROBES_H */ diff -puN include/linux/kprobes.h~kprobes_userspace_probes-ss-out-of-line include/linux/kprobes.h --- linux-2.6.16-rc5-mm2/include/linux/kprobes.h~kprobes_userspace_probes-ss-out-of-line 2006-03-06 19:16:45.000000000 +0530 +++ linux-2.6.16-rc5-mm2-prasanna/include/linux/kprobes.h 2006-03-06 19:16:45.000000000 +0530 @@ -51,6 +51,13 @@ #define KPROBE_REENTER 0x00000004 #define KPROBE_HIT_SSDONE 0x00000008 +/* uprobe_status settings */ +#define UPROBE_HIT_ACTIVE 0x00000001 +#define UPROBE_HIT_SS 0x00000002 +#define UPROBE_HIT_SSDONE 0x00000004 +#define UPROBE_SS_INLINE 0x00000008 +#define UPROBE_SSDONE_INLINE 0x00000010 + /* Attach to insert probes on any functions which should be ignored*/ #define __kprobes __attribute__((__section__(".kprobes.text"))) @@ -183,6 +190,7 @@ struct kretprobe_instance { struct task_struct *task; }; +extern spinlock_t uprobe_lock; extern spinlock_t kretprobe_lock; extern struct mutex kprobe_mutex; extern int arch_prepare_kprobe(struct kprobe *p); _ -- Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Email: prasanna@in.ibm.com Ph: 91-80-51776329