public inbox for gcc@gcc.gnu.org
* Multi-Threading GCC Compiler Internal Data
@ 2019-09-16 18:25 Nicholas Krause
  2019-09-17  6:37 ` Richard Biener
  0 siblings, 1 reply; 5+ messages in thread
From: Nicholas Krause @ 2019-09-16 18:25 UTC (permalink / raw)
  To: rguenther; +Cc: gcc

Greetings Richard,

I don't know if it's currently possible, but what's the best way to go
about exposing shared state at both the GIMPLE and RTL levels, either by
hand or with a tool?  This would allow us to figure out much better which
algorithms or data structures to choose so that this scales much better
than the current prototype.


Thanks,


Nick


* Re: Multi-Threading GCC Compiler Internal Data
  2019-09-16 18:25 Multi-Threading GCC Compiler Internal Data Nicholas Krause
@ 2019-09-17  6:37 ` Richard Biener
  2019-09-17 15:20   ` Nicholas Krause
  0 siblings, 1 reply; 5+ messages in thread
From: Richard Biener @ 2019-09-17  6:37 UTC (permalink / raw)
  To: Nicholas Krause; +Cc: gcc


On Mon, 16 Sep 2019, Nicholas Krause wrote:

> Greetings Richard,
> 
> I don't know if it's currently possible, but what's the best way to go
> about exposing shared state at both the GIMPLE and RTL levels, either by
> hand or with a tool?  This would allow us to figure out much better which
> algorithms or data structures to choose so that this scales much better
> than the current prototype.

You are mixing independent issues.  Shared state needs to be identified
and protected for correctness reasons.  In some cases changing the
data structure to be protected can make it cheaper to do so.  The
scaling of the current prototype is limited by the fraction of the
compilation we parallelize as well as the granularity.
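
To make the data-structure point concrete, here is a minimal sketch
(hypothetical C++, not actual GCC internals): the same per-key counter
can either stay shared and be guarded by a mutex, or be restructured
into per-thread instances that are merged once at the end, which takes
the lock off the hot path.

// Hypothetical illustration only -- not GCC code.
#include <mutex>
#include <unordered_map>
#include <vector>

// Option 1: keep the structure shared and protect every access.
struct shared_counter_table
{
  std::mutex m;
  std::unordered_map<int, int> counts;

  void record (int key)
  {
    std::lock_guard<std::mutex> guard (m);  // serializes all workers
    ++counts[key];
  }
};

// Option 2: change the data structure -- each worker owns its own
// table, so the common case needs no synchronization at all.
struct per_thread_counter_table
{
  std::unordered_map<int, int> counts;      // thread-local, unsynchronized

  void record (int key) { ++counts[key]; }
};

// Single-threaded merge step, run after the workers have joined.
static void
merge_counts (std::unordered_map<int, int> &global,
              const std::vector<per_thread_counter_table> &locals)
{
  for (const auto &table : locals)
    for (const auto &entry : table.counts)
      global[entry.first] += entry.second;
}

The second variant is the kind of change that makes protection cheaper:
the shared state shrinks to a single merge after the parallel region.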

Going forward the most useful things are a) reducing the amount of
state that ends up being shared when we parallelize, and b) increasing
the fraction of the compilation we parallelize by tackling
RTL optimizations and the early GIMPLE pipeline.
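
For intuition on why the parallelized fraction dominates, Amdahl's law
gives the upper bound on the speedup.  A tiny, purely illustrative
program (the fractions are made up, not GCC measurements):

// Amdahl's law: if only a fraction P of the work is parallelized
// across N threads, the best possible speedup is 1 / ((1 - P) + P/N).
#include <cstdio>

static double
amdahl_speedup (double p, int n)
{
  return 1.0 / ((1.0 - p) + p / n);
}

int
main ()
{
  for (double p : { 0.30, 0.50, 0.80, 0.95 })
    std::printf ("parallel fraction %.2f: %.2fx on 8 threads, "
                 "%.2fx on 64 threads\n",
                 p, amdahl_speedup (p, 8), amdahl_speedup (p, 64));
  return 0;
}

Parallelizing only half of the compilation, for example, caps the
speedup below 2x no matter how many threads are used, which is why
widening the parallelized portion matters more than adding threads.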

The prototype showed that parallelization is beneficial and that it
can be done with a reasonable amount of work.

Richard.

-- 
Richard Biener <rguenther@suse.de>
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imendörffer; HRB 247165 (AG München)


* Re: Multi-Threading GCC Compiler Internal Data
  2019-09-17  6:37 ` Richard Biener
@ 2019-09-17 15:20   ` Nicholas Krause
  2019-09-18  8:01     ` Richard Biener
  0 siblings, 1 reply; 5+ messages in thread
From: Nicholas Krause @ 2019-09-17 15:20 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc


On 9/17/19 2:37 AM, Richard Biener wrote:
> On Mon, 16 Sep 2019, Nicholas Krause wrote:
>
>> Greetings Richard,
>>
>> I don't know if it's currently possible, but what's the best way to go
>> about exposing shared state at both the GIMPLE and RTL levels, either by
>> hand or with a tool?  This would allow us to figure out much better which
>> algorithms or data structures to choose so that this scales much better
>> than the current prototype.
> You are mixing independent issues.  Shared state needs to be identified
> and protected for correctness reasons.  In some cases changing the
> data structure to be protected can make it cheaper to do so.  The
> scaling of the current prototype is limited by the fraction of the
> compilation we parallelize as well as the granularity.
>
> Going forward the most useful things are a) reducing the amount of
> state that ends up being shared when we parallelize, and b) increasing
> the fraction of the compilation we parallelize by tackling
> RTL optimizations and the early GIMPLE pipeline.
>
> The prototype showed that parallelization is beneficial and that it
> can be done with a reasonable amount of work.
>
> Richard.
>
Richard,

Sorry, I think you're misunderstanding me.  I was asking what the best
way is to write a tool to expose where and how the shared state is being
used.  From experience, it seems the best way forward is to figure out
what we have in terms of shared state and then write a core set of
classes or an API for scaling that shared state.  If we had a tool for
collecting that data, this would be much easier.
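
For example, such a core class could be a thin wrapper that forces
every access to a piece of shared state through one locking point, so
the shared accesses are easy to find and the locking policy can be
changed in one place.  A purely hypothetical sketch, not an existing
GCC API:

// Hypothetical sketch of a shared-state wrapper -- not existing GCC code.
#include <mutex>
#include <utility>

template <typename T>
class shared_state
{
  T value_;
  std::mutex lock_;

public:
  template <typename... Args>
  explicit shared_state (Args &&... args)
    : value_ (std::forward<Args> (args)...)
  {
  }

  // Run FN with exclusive access to the wrapped value; every use of
  // the shared object goes through this single choke point.
  template <typename Fn>
  auto with_lock (Fn fn) -> decltype (fn (std::declval<T &> ()))
  {
    std::lock_guard<std::mutex> guard (lock_);
    return fn (value_);
  }
};

// Usage sketch (symbol_table here is a placeholder type, not a real one):
//   shared_state<symbol_table> symtab;
//   symtab.with_lock ([] (symbol_table &t) { t.insert (some_decl); });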

My reasoning for this is threefold:

1. It removes the issue of each pass needing to be scaled separately.

2. It lets us handle future passes being parallelizable as they are
   added.

3. It allows us to get data about scaling from other jobs like make -j
   without issues, the way the user would assume it works.  This was
   discussed at Cauldron, and other people seem to agree that working
   well with make is a good idea.

Hopefully that explains it better,

Nick


* Re: Multi-Threading GCC Compiler Internal Data
  2019-09-17 15:20   ` Nicholas Krause
@ 2019-09-18  8:01     ` Richard Biener
  2019-09-18 20:09       ` Nicholas Krause
  0 siblings, 1 reply; 5+ messages in thread
From: Richard Biener @ 2019-09-18  8:01 UTC (permalink / raw)
  To: Nicholas Krause; +Cc: gcc


On Tue, 17 Sep 2019, Nicholas Krause wrote:

> 
> On 9/17/19 2:37 AM, Richard Biener wrote:
> > On Mon, 16 Sep 2019, Nicholas Krause wrote:
> >
> >> Greetings Richard,
> >>
> >> I don't know if it's currently possible, but what's the best way to go
> >> about exposing shared state at both the GIMPLE and RTL levels, either by
> >> hand or with a tool?  This would allow us to figure out much better which
> >> algorithms or data structures to choose so that this scales much better
> >> than the current prototype.
> > You are mixing independent issues.  Shared state needs to be identified
> > and protected for correctness reasons.  In some cases changing the
> > data structure to be protected can make it cheaper to do so.  The
> > scaling of the current prototype is limited by the fraction of the
> > compilation we parallelize as well as the granularity.
> >
> > Going forward the most useful things are a) reducing the amount of
> > state that ends up being shared when we parallelize, and b) increasing
> > the fraction of the compilation we parallelize by tackling
> > RTL optimizations and the early GIMPLE pipeline.
> >
> > The prototype showed that parallelization is beneficial and that it
> > can be done with a reasonable amount of work.
> >
> > Richard.
> >
> Richard,
> 
> Sorry, I think you're misunderstanding me.  I was asking what the best
> way is to write a tool to expose where and how the shared state is
> being used.

Such a tool would need to solve the halting problem, so it cannot exist.

Richard.


* Re: Multi-Threading GCC Compiler Internal Data
  2019-09-18  8:01     ` Richard Biener
@ 2019-09-18 20:09       ` Nicholas Krause
  0 siblings, 0 replies; 5+ messages in thread
From: Nicholas Krause @ 2019-09-18 20:09 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc


On 9/18/19 4:01 AM, Richard Biener wrote:
> On Tue, 17 Sep 2019, Nicholas Krause wrote:
>
>> On 9/17/19 2:37 AM, Richard Biener wrote:
>>> On Mon, 16 Sep 2019, Nicholas Krause wrote:
>>>
>>>> Greetings Richard,
>>>>
>>>> I don't know if it's currently possible, but what's the best way to go
>>>> about exposing shared state at both the GIMPLE and RTL levels, either by
>>>> hand or with a tool?  This would allow us to figure out much better which
>>>> algorithms or data structures to choose so that this scales much better
>>>> than the current prototype.
>>> You are mixing independent issues.  Shared state needs to be identified
>>> and protected for correctness reasons.  In some cases changing the
>>> data structure to be protected can make it cheaper to do so.  The
>>> scaling of the current prototype is limited by the fraction of the
>>> compilation we parallelize as well as the granularity.
>>>
>>> Going forward the most useful things are a) reducing the amount of
>>> state that ends up being shared when we parallelize, and b) increasing
>>> the fraction of the compilation we parallelize by tackling
>>> RTL optimizations and the early GIMPLE pipeline.
>>>
>>> The prototype showed that parallelization is beneficial and that it
>>> can be done with a reasonable amount of work.
>>>
>>> Richard.
>>>
>> Richard,
>>
>> Sorry, I think you're misunderstanding me.  I was asking what the best
>> way is to write a tool to expose where and how the shared state is
>> being used.
> Such a tool would need to solve the halting problem, so it cannot exist.
>
> Richard.

I figured as much, but is it still possible to get the data out of perf?
I'm not sure what the best way is to get the data for GCC directly, or
out of a profiler, as I assume that even just splitting out some of the
larger GIMPLE functions may help as a start.  So, to your knowledge, what
tools or approaches are the easiest for profiling make -jx?

That's where I'd start in order to profile and get real data for the
work,


Nick

