From: Yao Qi <yao@codesourcery.com>
To: gdb-patches@sourceware.org
Subject: [PATCH 3/5] set/show code-cache
Date: Wed, 23 Oct 2013 08:29:00 -0000
Message-ID: <1382516855-32218-4-git-send-email-yao@codesourcery.com>
In-Reply-To: <1382516855-32218-1-git-send-email-yao@codesourcery.com>
References: <1382516855-32218-1-git-send-email-yao@codesourcery.com>

Similar to the stack cache, this patch adds TARGET_OBJECT_CODE_MEMORY so
that reads of target code can be cached separately, and adds a new option
"set code-cache on|off" to control whether that cache is used.
Invalidation of the code cache mirrors the stack cache: with this patch,
target_dcache_invalidate invalidates both caches, while "set
{stack,code}-cache" invalidates only the corresponding one.

gdb:

2013-10-23  Yao Qi  <yao@codesourcery.com>

	* target.c (struct target_dcache) <code>: New field.
	(target_dcache_alloc): Initialize field 'code'.
	(set_stack_cache_enabled_p): Invalidate the corresponding dcache.
	(code_cache_enabled_p_1): New.
	(code_cache_enabled_p): New.
	(set_code_cache_enabled_p): New function.
	(show_code_cache_enabled_p): New function.
	(target_dcache_xfree): Free the code cache.
	(target_dcache_invalidate): Invalidate the code cache.
	(memory_xfer_partial_1): Handle TARGET_OBJECT_CODE_MEMORY and
	code_cache_enabled_p.
	(target_xfer_partial): Likewise.
	(target_read_code): New function.
	(initialize_targets): Register command.
	* target.h (enum target_object) <TARGET_OBJECT_CODE_MEMORY>: New.
	(target_read_code): Declare.
---
 gdb/target.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++++--------
 gdb/target.h |  5 +++
 2 files changed, 89 insertions(+), 13 deletions(-)

diff --git a/gdb/target.c b/gdb/target.c
index 2ae42a8..624d41a 100644
--- a/gdb/target.c
+++ b/gdb/target.c
@@ -218,6 +218,9 @@ struct target_dcache
   /* Cache the memory on stack and areas specified by memory
      attributes.  */
   DCACHE *general;
+
+  /* Cache the code.  */
+  DCACHE *code;
 };
 
 /* Allocate an instance of struct target_dcache.  */
@@ -228,6 +231,7 @@ target_dcache_alloc (void)
   struct target_dcache *dcache = xmalloc (sizeof (*dcache));
 
   dcache->general = dcache_init ();
+  dcache->code = dcache_init ();
 
   return dcache;
 }
@@ -240,6 +244,7 @@ target_dcache_xfree (struct target_dcache *dcache)
   if (dcache != NULL)
     {
       dcache_free (dcache->general);
+      dcache_free (dcache->code);
       xfree (dcache);
     }
 }
@@ -289,7 +294,7 @@ set_stack_cache_enabled_p (char *args, int from_tty,
                            struct cmd_list_element *c)
 {
   if (stack_cache_enabled_p != stack_cache_enabled_p_1)
-    target_dcache_invalidate ();
+    dcache_invalidate (target_dcache_get ()->general);
 
   stack_cache_enabled_p = stack_cache_enabled_p_1;
 }
@@ -301,6 +306,35 @@ show_stack_cache_enabled_p (struct ui_file *file, int from_tty,
   fprintf_filtered (file, _("Cache use for stack accesses is %s.\n"), value);
 }
 
+/* The option sets this.  */
+static int code_cache_enabled_p_1 = 1;
+/* And set_code_cache_enabled_p updates this.
+   The reason for the separation is so that we don't flush the cache for
+   on->on transitions.  */
+static int code_cache_enabled_p = 1;
+
+/* This is called *after* the code-cache has been set.
+   Flush the cache for off->on and on->off transitions.
+   There's no real need to flush the cache for on->off transitions,
+   except cleanliness.  */
+
+static void
+set_code_cache_enabled_p (char *args, int from_tty,
+                          struct cmd_list_element *c)
+{
+  if (code_cache_enabled_p != code_cache_enabled_p_1)
+    dcache_invalidate (target_dcache_get ()->code);
+
+  code_cache_enabled_p = code_cache_enabled_p_1;
+}
+
+static void
+show_code_cache_enabled_p (struct ui_file *file, int from_tty,
+                           struct cmd_list_element *c, const char *value)
+{
+  fprintf_filtered (file, _("Cache use for code accesses is %s.\n"), value);
+}
+
 /* Invalidate the target dcache.  */
 
 void
@@ -309,6 +343,7 @@ target_dcache_invalidate (void)
   struct target_dcache *td = target_dcache_get ();
 
   dcache_invalidate (td->general);
+  dcache_invalidate (td->code);
 }
 
 /* The user just typed 'target' without the name of a target.  */
@@ -1651,17 +1686,24 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
          the collected memory range fails.  */
       && get_traceframe_number () == -1
       && (region->attrib.cache
-          || (stack_cache_enabled_p && object == TARGET_OBJECT_STACK_MEMORY)))
+          || (stack_cache_enabled_p && object == TARGET_OBJECT_STACK_MEMORY)
+          || (code_cache_enabled_p && object == TARGET_OBJECT_CODE_MEMORY)))
     {
+      struct target_dcache *td = target_dcache_get ();
+      DCACHE *dcache = NULL;
+
+      if (code_cache_enabled_p && object == TARGET_OBJECT_CODE_MEMORY)
+        dcache = td->code;
+      else
+        dcache = td->general;
+
       if (readbuf != NULL)
-        res = dcache_xfer_memory (ops, target_dcache_get ()->general,
-                                  memaddr, readbuf, reg_len, 0);
+        res = dcache_xfer_memory (ops, dcache, memaddr, readbuf, reg_len, 0);
       else
        /* FIXME drow/2006-08-09: If we're going to preserve const
           correctness dcache_xfer_memory should take readbuf and
           writebuf.  */
-        res = dcache_xfer_memory (ops, target_dcache_get ()->general,
-                                  memaddr, (void *) writebuf,
+        res = dcache_xfer_memory (ops, dcache, memaddr, (void *) writebuf,
                                   reg_len, 1);
       if (res <= 0)
         return -1;
@@ -1701,13 +1743,17 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
 
   if (res > 0
       && inf != NULL
-      && writebuf != NULL
-      && !region->attrib.cache
-      && stack_cache_enabled_p
-      && object != TARGET_OBJECT_STACK_MEMORY)
+      && writebuf != NULL)
     {
-      dcache_update (target_dcache_get ()->general, memaddr,
-                     (void *) writebuf, res);
+      struct target_dcache *td = target_dcache_get ();
+
+      if (!region->attrib.cache
+          && stack_cache_enabled_p
+          && object != TARGET_OBJECT_STACK_MEMORY)
+        dcache_update (td->general, memaddr, (void *) writebuf, res);
+
+      if (code_cache_enabled_p && object != TARGET_OBJECT_CODE_MEMORY)
+        dcache_update (td->code, memaddr, (void *) writebuf, res);
     }
 
   /* If we still haven't got anything, return the last error.  We
@@ -1792,7 +1838,8 @@ target_xfer_partial (struct target_ops *ops,
   /* If this is a memory transfer, let the memory-specific code
      have a look at it instead.  Memory transfers are more
      complicated.  */
-  if (object == TARGET_OBJECT_MEMORY || object == TARGET_OBJECT_STACK_MEMORY)
+  if (object == TARGET_OBJECT_MEMORY || object == TARGET_OBJECT_STACK_MEMORY
+      || object == TARGET_OBJECT_CODE_MEMORY)
     retval = memory_xfer_partial (ops, object, readbuf, writebuf,
                                   offset, len);
   else
@@ -1894,6 +1941,19 @@ target_read_stack (CORE_ADDR memaddr, gdb_byte *myaddr, ssize_t len)
     return TARGET_XFER_E_IO;
 }
 
+/* Like target_read_memory, but specify explicitly that this is a read from
+   the target's code.  This may trigger different cache behavior.  */
+
+int
+target_read_code (CORE_ADDR memaddr, gdb_byte *myaddr, ssize_t len)
+{
+  if (target_read (current_target.beneath, TARGET_OBJECT_CODE_MEMORY, NULL,
+                   myaddr, memaddr, len) == len)
+    return 0;
+  else
+    return TARGET_XFER_E_IO;
+}
+
 /* Write LEN bytes from MYADDR to target memory at address MEMADDR.
    Returns either 0 for success or a target_xfer_error value if any
    error occurs.  If an error occurs, no guarantee is made about how
@@ -5199,6 +5259,17 @@ By default, caching for stack access is on."),
                            show_stack_cache_enabled_p,
                            &setlist, &showlist);
 
+  add_setshow_boolean_cmd ("code-cache", class_support,
+                           &code_cache_enabled_p_1, _("\
+Set cache use for code access."), _("\
+Show cache use for code access."), _("\
+When on, use the data cache for all code access, regardless of any\n\
+configured memory regions.  This improves remote performance significantly.\n\
+By default, caching for code access is on."),
+                           set_code_cache_enabled_p,
+                           show_code_cache_enabled_p,
+                           &setlist, &showlist);
+
   add_setshow_boolean_cmd ("may-write-registers", class_support,
                            &may_write_registers_1, _("\
 Set permission to write into registers."), _("\
diff --git a/gdb/target.h b/gdb/target.h
index 56ca40c..eb00b83 100644
--- a/gdb/target.h
+++ b/gdb/target.h
@@ -144,6 +144,9 @@ enum target_object
      if it is not in a region marked as such, since it is known to be
      "normal" RAM.  */
   TARGET_OBJECT_STACK_MEMORY,
+  /* Memory known to be part of the target code.  This is cached even
+     if it is not in a region marked as such.  */
+  TARGET_OBJECT_CODE_MEMORY,
   /* Kernel Unwind Table.  See "ia64-tdep.c".  */
   TARGET_OBJECT_UNWIND_TABLE,
   /* Transfer auxilliary vector.  */
@@ -1052,6 +1055,8 @@ extern int target_read_memory (CORE_ADDR memaddr, gdb_byte *myaddr,
 
 extern int target_read_stack (CORE_ADDR memaddr, gdb_byte *myaddr, ssize_t len);
 
+extern int target_read_code (CORE_ADDR memaddr, gdb_byte *myaddr, ssize_t len);
+
 extern int target_write_memory (CORE_ADDR memaddr, const gdb_byte *myaddr,
                                 ssize_t len);
 
-- 
1.7.7.6
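
(Not part of the patch, just an illustration for reviewers.)  The sketch
below shows how a consumer -- say, an instruction decoder -- might go
through the new interface; the helper name read_one_insn_byte is invented
for this example, while target_read_code, TARGET_OBJECT_CODE_MEMORY and
TARGET_XFER_E_IO are the pieces this patch actually adds or uses.

/* Illustrative caller of the new interface (hypothetical helper).  */

static int
read_one_insn_byte (CORE_ADDR pc, gdb_byte *byte)
{
  /* Reads issued via target_read_code are tagged TARGET_OBJECT_CODE_MEMORY,
     so they are served from (and fill) the code dcache while
     "set code-cache on" is in effect; with "set code-cache off" the same
     call behaves like an ordinary memory read.  Returns 0 on success or
     TARGET_XFER_E_IO on failure, mirroring target_read_stack.  */
  return target_read_code (pc, byte, 1);
}

Repeated instruction fetches over a slow remote link (e.g. while
disassembling) are the case where keeping "set code-cache on", the
default, should pay off.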