From: Ankur Saini <arsenic.secondary@gmail.com>
To: David Malcolm <dmalcolm@redhat.com>
Cc: gcc@gcc.gnu.org
Subject: progress update
Date: Tue, 15 Jun 2021 19:42:08 +0530
Message-ID: <DB58F8E1-9BB5-4A64-93CB-BE13B4E943BF@gmail.com>
In-Reply-To: <2f0f10959bae2881354522af81688de0076ea920.camel@redhat.com>



> On 13-Jun-2021, at 8:22 PM, David Malcolm <dmalcolm@redhat.com> wrote:
> 
> On Sun, 2021-06-13 at 19:11 +0530, Ankur Saini wrote:
>> 
>> 
>>> On 08-Jun-2021, at 11:24 PM, David Malcolm <dmalcolm@redhat.com> wrote:
>>> 
>>> Is there a URL for your branch?
>> 
>> No, currently it is only a local branch on my machine. Should I upload
>> it to a hosting site (like GitHub), or can I create a branch on the
>> remote as well?
> 
> At some point we want you to be able to push patches to trunk, so as a
> step towards that I think it would be good for you to have a personal
> branch on the gcc git repository.
> 
> A guide to getting access is here:
>  https://gcc.gnu.org/gitwrite.html
> 
> I will sponsor you.

I have filled in the form.

> 
>> 
>>> The issue is that the analyzer currently divides calls into
>>> (a) calls where GCC's middle-end "knows" which function is called,
>>> and thus the call site has a cgraph_node.
>>> (b) calls where GCC's middle-end doesn't "know" which function is
>>> called.
>>> 
>>> The analyzer handles
>>>  (a) by building call and return edges in the supergraph, and
>>> processing them, and
>>>  (b) with an "unknown call" handler, which conservatively sets lots
>>> of state to "unknown" to handle the effects of an arbitrary call,
>>> and where the call doesn't get its own exploded_edge.
>> 
>>> 
>>> In this bug we have a variant of (b), let's call it (c): GCC's
>>> middle-end doesn't know which function is called, but the analyzer's
>>> region_model *does* know at a particular exploded_node.
>> 
>> But how will we know this at the time the supergraph is created?
>> Aren't the exploded graph and region_model created after the supergraph?
> 
> You are correct.
> 
> What I'm thinking is that when we create the supergraph we should split
> the nodes at more calls, not just at those calls that have a
> cgraph_edge, but also at those that are calls to an unknown function
> pointer (or maybe even split them at *all* calls).
> 
> Then, later, when engine.cc is building the exploded_graph, the
> supergraph will have a superedge for those calls, and we can create an
> exploded_edge representing the call.  That way if we discover the
> function pointer then (rather than having it from a cgraph_edge), we
> can build exploded nodes and exploded edges that are similar to the
> "we had a cgraph_edge" case.  You may need to generalize some of the
> event-handling code to do this.
> 
> Does that make sense?
> 
> You might want to try building some really simple examples of this, to
> make it as easy as possible to see what's happening, and to debug.

OK, let me see what I can do.
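
Maybe I can start with a really simple test case like this one (my
sketch of case (c); whether the analyzer currently reports the leak
here, and the exact diagnostic, are assumptions on my part):

—
/* Minimal sketch of case (c): the middle-end only sees an indirect
   call, but the analyzer's region_model knows that "fn" points to
   "called_by_ptr" at the call site.  */

#include <stdlib.h>

static void called_by_ptr (void)
{
  void *p = malloc (16);
  /* No free (p): ideally -fanalyzer would report a leak here, but
     that requires following the call through the function pointer.  */
}

void test (void)
{
  void (*fn) (void) = called_by_ptr; /* region_model can track this binding.  */
  fn (); /* No cgraph_edge here: the callee is unknown to the middle-end,
            but known to the analyzer at this exploded_node.  */
}
—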

[...]

> Great.
> 
> Let me know how you get on.
> 
> As I understand it, Google recommends that we exchange emails about
> our GSoC project at least twice a week, so please do continue to
> report in, whether you're making progress, or if you feel you're
> stuck on something.

OK, I will be more active from now on.
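
Also, to check my understanding of the supergraph change you described,
here is a rough sketch of the kind of predicate I think would need to
change (the helper name "split_at_stmt_p" is made up by me, and I am
only guessing from reading supergraph.cc, so please treat this as a
sketch of the idea, not a patch):

—
/* (Imagined as living inside gcc/analyzer/supergraph.cc, where
   supergraph.h etc. are already included.)

   Today, as I understand it, nodes are split only where
   supergraph_call_edge () finds a cgraph_edge (case (a)).  The idea
   would be to also split at calls with no known fndecl (cases
   (b)/(c)), so that a superedge exists if the region_model later
   resolves the function pointer.  */

static bool
split_at_stmt_p (function *fun, gimple *stmt) /* hypothetical helper */
{
  /* Case (a): a direct call known to the middle-end.  */
  if (supergraph_call_edge (fun, stmt))
    return true;

  /* Cases (b)/(c): a call through a function pointer; no cgraph_edge,
     but split anyway so engine.cc can later create call/return
     exploded_edges once the pointer value is known.  */
  if (gcall *call = dyn_cast <gcall *> (stmt))
    if (!gimple_call_fndecl (call))
      return true;

  return false;
}
—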

—

btw, while using gdb on "xgcc", for some reason the debugger is not tracing the call to "run_checkers()" and is jumping directly from "pass_analyzer::execute()" to some instruction inside "ana::dump_analyzer_json()".

I am invoking the debugger like this:

—
$ ./xgcc /Users/ankursaini/Desktop/test.c -fanalyzer -B . -wrapper gdb,--args
—
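
(As an aside, I believe an equivalent way, instead of -wrapper, is to
ask the driver for the exact cc1 command line with -### and run
"gdb --args" on that command directly; the grep is just my shorthand:

—
$ ./xgcc /Users/ankursaini/Desktop/test.c -fanalyzer -B . -### 2>&1 | grep cc1
$ gdb --args <the cc1 command line printed above>
—
)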

Then, when I put a breakpoint on "ana::run_checkers()", gdb places 2 breakpoints (one at the correct position and another, weirdly, inside a different function in the same file):

—
(gdb) br ana::run_checkers() 
Breakpoint 3 at 0x101640990 (2 locations)

(gdb) info br
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x000000010174ade7 in fancy_abort(char const*, int, char const*) at ../../gcc-source/gcc/diagnostic.c:1915
2       breakpoint     keep y   0x000000010174ee01 in internal_error(char const*, ...) at ../../gcc-source/gcc/diagnostic.c:1835

3       breakpoint     keep y   <MULTIPLE>         
3.1                                  y   0x0000000101640990 <ana::dump_analyzer_json(ana::supergraph const&, ana::exploded_graph const&)+48>
3.2                                  y   0x0000000101640ba0 in ana::run_checkers() at ../../gcc-source/gcc/analyzer/engine.cc:4918
—

but during execution it only hits breakpoint 3.1 (which is inside "ana::dump_analyzer_json()", a function that, as far as I can tell, is called from "impl_run_checkers()" after the analysis completes, to dump the results in JSON format).

Looking at the backtrace, I can see the call to "pass_analyzer::execute()", where "run_checkers()" should be called, but no such call (nor a call to "impl_run_checkers()") is visible there.

Here is the backtrace when the debugger hits breakpoint 3.1:
—
(gdb) c
Continuing.
[New Thread 0x1c17 of process 2392]

Thread 2 hit Breakpoint 3, 0x0000000101640990 in ana::dump_analyzer_json (sg=..., eg=...) at ../../gcc-source/gcc/analyzer/engine.cc:4751
4751	  char *filename = concat (dump_base_name, ".analyzer.json.gz", NULL);

(gdb) bt
#0  0x0000000101640990 in ana::dump_analyzer_json (sg=..., eg=...) at ../../gcc-source/gcc/analyzer/engine.cc:4751
#1  0x000000010161a919 in (anonymous namespace)::pass_analyzer::execute (this=0x142b0a660) at ../../gcc-source/gcc/analyzer/analyzer-pass.cc:87
#2  0x000000010106319c in execute_one_pass (pass=<optimized out>, pass@entry=<opt_pass* 0x0>) at ../../gcc-source/gcc/passes.c:2567
#3  0x0000000101064e1c in execute_ipa_pass_list (pass=<opt_pass* 0x0>) at ../../gcc-source/gcc/passes.c:2996
#4  0x0000000100a89065 in symbol_table::output_weakrefs (this=<optimized out>) at ../../gcc-source/gcc/cgraphunit.c:2262
#5  0x0000000102038600 in ?? ()
#6  0x0000000000000000 in ?? ()
—

But at the end I can see the analyzer doing its work and generating the required warning as intended.

I never experienced this problem earlier, when using the debugger on a fully bootstrapped build, so it looks like I am missing something here.

Thanks, 

- Ankur


