public inbox for gcc-help@gcc.gnu.org
 help / color / mirror / Atom feed
* Compilation of lengthy C++ Files
@ 2023-10-18 16:04 Kai Song
  2023-10-18 21:59 ` Jonathan Wakely
  2023-10-19 12:47 ` David Brown
  0 siblings, 2 replies; 20+ messages in thread
From: Kai Song @ 2023-10-18 16:04 UTC (permalink / raw)
  To: gcc-help

[-- Attachment #1: Type: text/plain, Size: 2891 bytes --]

Dear GCC Developers,

I am unsuccessfully using g++ 12.0.4 to compile lengthy c++ codes. Those
codes are automatically generated from my own code-generator tools that
depend on parameters p.
Typical applications are:
- Taylor series of order p inserted into consistency conditions of
numerical schemes, to determine optimal method parameters (think of, e.g.,
Runge-Kutta methods)
- recursive automatic code transformation (think of adjoints of adjoints of
adjoints...) of recursion level p
- Hilbert curves or other space-filling curves to generate code that
simulates cache utilization in a Monte-Carlo context

I verify that for small p the codes compile and execute to the expected
result. However, there is always a threshold for p so that the generated
cpp file is so long that the compiler will just terminate after ~10min
without monitor output but return the value +1.
My cpp files range from 600k LOC up to 1Bio LOC. Often, the file comprises
of one single c++ template class member function definition that relies on
a few thousand lines of template-classes.

I would like to know:
1) Am I doing something wrong in that GCC should be able to compile lengthy
codes?
2) Is it known that GCC is unable to compile lengthy codes?
3) Is it acknowledged that a compiler's ability to compile large files is
relevant?
4) Are possible roots known for this inability and are these deliberate?

To give just one glimpse at the relevance: I analyzed methods for solving
F(x)=0 by using k stages -DF(x)*vk= F(x+sum_i bki*vi).
Compiling a Taylor series code with p=3, I was able to verify that three
iterations of the Chord method converge of fourth order, thus surpassing
Newton's method (same cost, but only quadratic order).
Checking whether any four-stage method exists that converges of fifth order
necessitates the compilation of a single cpp file function that is 1.2Mio
lines of code long. Since 2018 I am occasionally trying to find a way to
compile it.
I also used this code to automatically generate Butcher-type tableaus for
generalizations to initial-value problem solvers. One could really only
dream of what scientific leaps become possible only if lengthy codes were
compilable.

Right now, whenever a situation of this kind occurs, I have to give up, cut
my (possibly months of work) losses, and move on to working on something
else -- because it seems there is nothing I can do about it.
Even spitting a code of 1Bio LOC into chunks of 100k LOC and then
automating to compile and link these in turn seems hopeless; and probably
the linker has limitations as well..
But since this problem is to frequent to my life, I would really like to
find out how to fix it once and for all.
This is a problem that I have been suffering from severely for years
because it basically cripples how I can put my mathematical expertise to
use.

Thank you for getting in touch.

Kind regards,
Kai

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-18 16:04 Compilation of lengthy C++ Files Kai Song
@ 2023-10-18 21:59 ` Jonathan Wakely
  2023-10-19  8:36   ` Andrew Haley
  2023-10-19 12:47 ` David Brown
  1 sibling, 1 reply; 20+ messages in thread
From: Jonathan Wakely @ 2023-10-18 21:59 UTC (permalink / raw)
  To: Kai Song; +Cc: gcc-help

[-- Attachment #1: Type: text/plain, Size: 3647 bytes --]

On Wed, 18 Oct 2023, 17:05 Kai Song via Gcc-help, <gcc-help@gcc.gnu.org>
wrote:

> Dear GCC Developers,
>
> I am unsuccessfully using g++ 12.0.4 to compile lengthy c++ codes. Those
> codes are automatically generated from my own code-generator tools that
> depend on parameters p.
> Typical applications are:
> - Taylor series of order p inserted into consistency conditions of
> numerical schemes, to determine optimal method parameters (think of, e.g.,
> Runge-Kutta methods)
> - recursive automatic code transformation (think of adjoints of adjoints of
> adjoints...) of recursion level p
> - Hilbert curves or other space-filling curves to generate code that
> simulates cache utilization in a Monte-Carlo context
>
> I verify that for small p the codes compile and execute to the expected
> result. However, there is always a threshold for p so that the generated
> cpp file is so long that the compiler will just terminate after ~10min
> without monitor output but return the value +1.
> My cpp files range from 600k LOC up to 1Bio LOC. Often, the file comprises
> of one single c++ template class member function definition that relies on
> a few thousand lines of template-classes.
>
> I would like to know:
> 1) Am I doing something wrong in that GCC should be able to compile lengthy
> codes?
>

Do you have enough RAM?

2) Is it known that GCC is unable to compile lengthy codes?
>

Yes, there are loads of bug reports about this kind of thing, some fixed,
some not.

3) Is it acknowledged that a compiler's ability to compile large files is
> relevant?
>

Yes, within reason. Some generated code is just silly and could be written
differently.

4) Are possible roots known for this inability and are these deliberate?
>

It depends on the code. Sometimes there are quadratic (or worse) algorithms
involved. Sometimes GCC's intermediate representation is just too memory
hungry.


> To give just one glimpse at the relevance: I analyzed methods for solving
> F(x)=0 by using k stages -DF(x)*vk= F(x+sum_i bki*vi).
> Compiling a Taylor series code with p=3, I was able to verify that three
> iterations of the Chord method converge of fourth order, thus surpassing
> Newton's method (same cost, but only quadratic order).
> Checking whether any four-stage method exists that converges of fifth order
> necessitates the compilation of a single cpp file function that is 1.2Mio
> lines of code long.


That's going to use insane amounts of memory just to represent the
function, before even trying to analyse it out optimise it.

Since 2018 I am occasionally trying to find a way to
> compile it.
> I also used this code to automatically generate Butcher-type tableaus for
> generalizations to initial-value problem solvers. One could really only
> dream of what scientific leaps become possible only if lengthy codes were
> compilable.
>
> Right now, whenever a situation of this kind occurs, I have to give up, cut
> my (possibly months of work) losses, and move on to working on something
> else -- because it seems there is nothing I can do about it.
>

Have you tried using clang instead?

Even spitting a code of 1Bio LOC into chunks of 100k LOC and then
> automating to compile and link these in turn seems hopeless; and probably
> the linker has limitations as well..
>

Very different ones though.

But since this problem is to frequent to my life, I would really like to
> find out how to fix it once and for all.
> This is a problem that I have been suffering from severely for years
> because it basically cripples how I can put my mathematical expertise to
> use.
>
> Thank you for getting in touch.
>
> Kind regards,
> Kai
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-18 21:59 ` Jonathan Wakely
@ 2023-10-19  8:36   ` Andrew Haley
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Haley @ 2023-10-19  8:36 UTC (permalink / raw)
  To: gcc-help

> On Wed, 18 Oct 2023, 17:05 Kai Song via Gcc-help, <gcc-help@gcc.gnu.org>
> wrote:
> 
>> But since this problem is to frequent to my life, I would really like to
>> find out how to fix it once and for all.
>> This is a problem that I have been suffering from severely for years
>> because it basically cripples how I can put my mathematical expertise to
>> use.

It's time to generate subroutines. When you output a slab of code,
don't just spit it all out inline, but put it into a subroutine.

-- 
Andrew Haley  (he/him)
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
https://keybase.io/andrewhaley
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-18 16:04 Compilation of lengthy C++ Files Kai Song
  2023-10-18 21:59 ` Jonathan Wakely
@ 2023-10-19 12:47 ` David Brown
  2023-10-19 12:47   ` David Brown
  2023-10-19 14:16   ` Kai Song
  1 sibling, 2 replies; 20+ messages in thread
From: David Brown @ 2023-10-19 12:47 UTC (permalink / raw)
  To: Kai Song, gcc-help

On 18/10/2023 18:04, Kai Song via Gcc-help wrote:
> Dear GCC Developers,
> 
> I am unsuccessfully using g++ 12.0.4 to compile lengthy c++ codes. Those
> codes are automatically generated from my own code-generator tools that
> depend on parameters p.
> Typical applications are:
> - Taylor series of order p inserted into consistency conditions of
> numerical schemes, to determine optimal method parameters (think of, e.g.,
> Runge-Kutta methods)
> - recursive automatic code transformation (think of adjoints of adjoints of
> adjoints...) of recursion level p
> - Hilbert curves or other space-filling curves to generate code that
> simulates cache utilization in a Monte-Carlo context
> 
> I verify that for small p the codes compile and execute to the expected
> result. However, there is always a threshold for p so that the generated
> cpp file is so long that the compiler will just terminate after ~10min
> without monitor output but return the value +1.
> My cpp files range from 600k LOC up to 1Bio LOC. Often, the file comprises
> of one single c++ template class member function definition that relies on
> a few thousand lines of template-classes.
> 
> I would like to know:
> 1) Am I doing something wrong in that GCC should be able to compile lengthy
> codes?
> 2) Is it known that GCC is unable to compile lengthy codes?
> 3) Is it acknowledged that a compiler's ability to compile large files is
> relevant?
> 4) Are possible roots known for this inability and are these deliberate?
> 

I am curious to know why you are generating code like this.  I can see 
how some code generators for mathematical code could easily produce 
large amounts of code, but it is rarely ideal for real-world uses.  Such 
flattened code can reduce overheads and improve optimisation 
opportunities (like inlining, constant folding, function cloning, etc.) 
for small cases, but then they get impractical for compiling while the 
costs for cache misses outweigh the overhead cost for the loops or 
recursion needed for general solutions.

Any compiler is going to be targeted and tuned towards "normal" or 
"typical" code.  That means primarily hand-written code, or smaller 
generated code.  I know that some systems generate very large functions 
or large files, but those are primarily C code, and the code is often 
very simple and "flat".  (Applications here include compilers that use C 
as a intermediary target language, and simulators of various kinds.)  It 
typically makes sense to disable certain optimisation passes here, and a 
number of passes scale badly (quadratic or perhaps worse) with function 
size.

However, if you are generating huge templates in C++, you are going a 
big step beyond that - templates are, in a sense, code generators 
themselves that run at compile time as an interpreted meta-language.  I 
don't expect that there has been a deliberate decision to limit GCC's 
handling of larger files, but I can't imagine that huge templates are a 
major focus for the compiler development.  And I would expect enormous 
memory use and compile times even when it does work.



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 12:47 ` David Brown
@ 2023-10-19 12:47   ` David Brown
  2023-10-19 14:16   ` Kai Song
  1 sibling, 0 replies; 20+ messages in thread
From: David Brown @ 2023-10-19 12:47 UTC (permalink / raw)
  To: gcc-help

On 18/10/2023 18:04, Kai Song via Gcc-help wrote:
> Dear GCC Developers,
> 
> I am unsuccessfully using g++ 12.0.4 to compile lengthy c++ codes. Those
> codes are automatically generated from my own code-generator tools that
> depend on parameters p.
> Typical applications are:
> - Taylor series of order p inserted into consistency conditions of
> numerical schemes, to determine optimal method parameters (think of, e.g.,
> Runge-Kutta methods)
> - recursive automatic code transformation (think of adjoints of adjoints of
> adjoints...) of recursion level p
> - Hilbert curves or other space-filling curves to generate code that
> simulates cache utilization in a Monte-Carlo context
> 
> I verify that for small p the codes compile and execute to the expected
> result. However, there is always a threshold for p so that the generated
> cpp file is so long that the compiler will just terminate after ~10min
> without monitor output but return the value +1.
> My cpp files range from 600k LOC up to 1Bio LOC. Often, the file comprises
> of one single c++ template class member function definition that relies on
> a few thousand lines of template-classes.
> 
> I would like to know:
> 1) Am I doing something wrong in that GCC should be able to compile lengthy
> codes?
> 2) Is it known that GCC is unable to compile lengthy codes?
> 3) Is it acknowledged that a compiler's ability to compile large files is
> relevant?
> 4) Are possible roots known for this inability and are these deliberate?
> 

I am curious to know why you are generating code like this.  I can see 
how some code generators for mathematical code could easily produce 
large amounts of code, but it is rarely ideal for real-world uses.  Such 
flattened code can reduce overheads and improve optimisation 
opportunities (like inlining, constant folding, function cloning, etc.) 
for small cases, but then they get impractical for compiling while the 
costs for cache misses outweigh the overhead cost for the loops or 
recursion needed for general solutions.

Any compiler is going to be targeted and tuned towards "normal" or 
"typical" code.  That means primarily hand-written code, or smaller 
generated code.  I know that some systems generate very large functions 
or large files, but those are primarily C code, and the code is often 
very simple and "flat".  (Applications here include compilers that use C 
as a intermediary target language, and simulators of various kinds.)  It 
typically makes sense to disable certain optimisation passes here, and a 
number of passes scale badly (quadratic or perhaps worse) with function 
size.

However, if you are generating huge templates in C++, you are going a 
big step beyond that - templates are, in a sense, code generators 
themselves that run at compile time as an interpreted meta-language.  I 
don't expect that there has been a deliberate decision to limit GCC's 
handling of larger files, but I can't imagine that huge templates are a 
major focus for the compiler development.  And I would expect enormous 
memory use and compile times even when it does work.




^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 12:47 ` David Brown
  2023-10-19 12:47   ` David Brown
@ 2023-10-19 14:16   ` Kai Song
  2023-10-19 14:26     ` Jonathan Wakely
  2023-10-19 15:15     ` David Brown
  1 sibling, 2 replies; 20+ messages in thread
From: Kai Song @ 2023-10-19 14:16 UTC (permalink / raw)
  To: gcc-help

[-- Attachment #1: Type: text/plain, Size: 7555 bytes --]

Thank you for your feedback!

I forward the requested information to everyone as it may be relevant.

>> Jonathan & David: How much RAM? Quadratic time-complexity.
I have 256GB of RAM and 64 cores. I launch g++ from the Windows 10
Terminal. From my information, Windows 10 will not limit the resources
given to the Terminal.
I am perfectly fine if compile times are in the time-span of months. For
the Runge-Kutta method, I would be happy with compile times of years as
this would mean I have a faster method in a few years.

>> Probably everyone: Is your code reasonable?
When trying to code-gen into shorter code, there are a few considerations:
1) Will any generated application be limitable into a certain volume of
source-code? Certainly not, so why even argue on one particular instance of
volume?
2) May it necessitate significant amounts of work to enable a
code-generator to produce shorter code? Certainly.
3) Suppose a code-generator exploits structure of a given user-provided
problem instance for generation of shorter code. Will this mean the set of
feasible problems treatable by the code-generator shinks? Likely.
4) Is 99% of the code trivial and should be compilable in read-time? I
believe so, given my naive information.
5) Should months of work be invested into generating shorter code in order
for that to be able to compile in 5 minutes when really this code is
compiled and used only once in a lifetime? I am unsure about this.

The stinging issue of argument may be on aspect 4), which makes it seem as
though the other 99% should be wrappable into functions.
The, if we call it so, mis-expectation is that then clean interfaces to
these functions are generatable or may even not exist. To give an example,
suppose a skein of code-strings in which each invoked 1% non-trivial code
is changing one string so that it induces chaos into the naming of all
other temporary variable names, causing mismatch between contiguity between
data at code-gen time and generated-code time. That is the central issue
that I --admittedly-- brute-force by committing into lengthy code.

>> Jonathan: Have you tried using clang instead?

I tried ICC (won't install successfully on my Windows 10 PC) and then CLANG
17.0.0 .

CLANG requires a provided STL, so I used the ones from Microsoft Visual
Studio 2022, which I referenced via the -isystem flag.
In order to compile it, I had to make tons of changes that g++ would not
have complained about; like putting a "d" at the end of a floating-point
value.

To test all these changes, I chose a smaller codegen parameter for which
g++ is able to compile the code and yields the expected result.
On that smaller code, clang instead fails with the following error:

// source code from line 201: therein, k is a template parameter, nx is a
static constexpr std::array<const size_t>, and mx is a static constexpr
size_t .
random_generator.template setRandn<(size_t)(mx-nx[k])>(  x+nx[k]);
random_generator.template setRandn<(size_t)(mx-nx[k])>(  xp+nx[k]);
random_generator.template setRandn<(size_t)(mb-nb[k])>( Ax+nb[k]);
random_generator.template setRandn<(size_t)(mb-nb[k])>( Axp+nb[k]);
random_generator.template setRandn<(size_t)(mx-nx[k])>( dx+nx[k]);
random_generator.template setRandn<(size_t)(mb-nb[k])>(dAx+nb[k]);
for(size_t i=nc[k];i<mc;++i){ calc.template setRandn<numPages>(gc[i]);
calc.template setRandn<numPages>(gc0[i]); }
random_generator.template setRandn<(size_t)(mx-nx[k])>( x0+nx[k]);

// clang compiler error message
error: no matching member function for call to 'setRandn'
206 | random_generator.template setRandn<(size_t)(mx-nx[k])>( x0+nx[k]);

The error appears unsensical to me, particularly since g++ was able to
compile it, and since clang accepts the same expression five lines earlier
but not in the present line.
The code snippet is from a test class that adds noise to vector-components
that should be uninvolved in certain test computations.

If it helps, I can work on making it compilable to clang in an effort to
identify up to which size either compiler is capable of compiling it.
Please let me know what helps.

Kind regards,
Kai

On Thu, Oct 19, 2023 at 2:47 PM David Brown <david@westcontrol.com> wrote:

> On 18/10/2023 18:04, Kai Song via Gcc-help wrote:
> > Dear GCC Developers,
> >
> > I am unsuccessfully using g++ 12.0.4 to compile lengthy c++ codes. Those
> > codes are automatically generated from my own code-generator tools that
> > depend on parameters p.
> > Typical applications are:
> > - Taylor series of order p inserted into consistency conditions of
> > numerical schemes, to determine optimal method parameters (think of,
> e.g.,
> > Runge-Kutta methods)
> > - recursive automatic code transformation (think of adjoints of adjoints
> of
> > adjoints...) of recursion level p
> > - Hilbert curves or other space-filling curves to generate code that
> > simulates cache utilization in a Monte-Carlo context
> >
> > I verify that for small p the codes compile and execute to the expected
> > result. However, there is always a threshold for p so that the generated
> > cpp file is so long that the compiler will just terminate after ~10min
> > without monitor output but return the value +1.
> > My cpp files range from 600k LOC up to 1Bio LOC. Often, the file
> comprises
> > of one single c++ template class member function definition that relies
> on
> > a few thousand lines of template-classes.
> >
> > I would like to know:
> > 1) Am I doing something wrong in that GCC should be able to compile
> lengthy
> > codes?
> > 2) Is it known that GCC is unable to compile lengthy codes?
> > 3) Is it acknowledged that a compiler's ability to compile large files is
> > relevant?
> > 4) Are possible roots known for this inability and are these deliberate?
> >
>
> I am curious to know why you are generating code like this.  I can see
> how some code generators for mathematical code could easily produce
> large amounts of code, but it is rarely ideal for real-world uses.  Such
> flattened code can reduce overheads and improve optimisation
> opportunities (like inlining, constant folding, function cloning, etc.)
> for small cases, but then they get impractical for compiling while the
> costs for cache misses outweigh the overhead cost for the loops or
> recursion needed for general solutions.
>
> Any compiler is going to be targeted and tuned towards "normal" or
> "typical" code.  That means primarily hand-written code, or smaller
> generated code.  I know that some systems generate very large functions
> or large files, but those are primarily C code, and the code is often
> very simple and "flat".  (Applications here include compilers that use C
> as a intermediary target language, and simulators of various kinds.)  It
> typically makes sense to disable certain optimisation passes here, and a
> number of passes scale badly (quadratic or perhaps worse) with function
> size.
>
> However, if you are generating huge templates in C++, you are going a
> big step beyond that - templates are, in a sense, code generators
> themselves that run at compile time as an interpreted meta-language.  I
> don't expect that there has been a deliberate decision to limit GCC's
> handling of larger files, but I can't imagine that huge templates are a
> major focus for the compiler development.  And I would expect enormous
> memory use and compile times even when it does work.
>
>
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 14:16   ` Kai Song
@ 2023-10-19 14:26     ` Jonathan Wakely
  2023-10-19 15:11       ` Kai Song
  2023-10-19 15:15     ` David Brown
  1 sibling, 1 reply; 20+ messages in thread
From: Jonathan Wakely @ 2023-10-19 14:26 UTC (permalink / raw)
  To: Kai Song; +Cc: gcc-help

On Thu, 19 Oct 2023 at 15:17, Kai Song via Gcc-help
<gcc-help@gcc.gnu.org> wrote:
>
> Thank you for your feedback!
>
> I forward the requested information to everyone as it may be relevant.
>
> >> Jonathan & David: How much RAM? Quadratic time-complexity.
> I have 256GB of RAM and 64 cores. I launch g++ from the Windows 10
> Terminal. From my information, Windows 10 will not limit the resources
> given to the Terminal.
> I am perfectly fine if compile times are in the time-span of months. For
> the Runge-Kutta method, I would be happy with compile times of years as
> this would mean I have a faster method in a few years.
>
> >> Probably everyone: Is your code reasonable?
> When trying to code-gen into shorter code, there are a few considerations:
> 1) Will any generated application be limitable into a certain volume of
> source-code? Certainly not, so why even argue on one particular instance of
> volume?

I have no idea what this means.

> 2) May it necessitate significant amounts of work to enable a
> code-generator to produce shorter code? Certainly.

Well at the moment your code doesn't even compile. So you can keep
generating billion line functions that don't work, or you could try
refactoring it. It might not be easy, but if it compiles and works,
surely that's better than billion line functions that don't work?


> 3) Suppose a code-generator exploits structure of a given user-provided
> problem instance for generation of shorter code. Will this mean the set of
> feasible problems treatable by the code-generator shinks? Likely.

I don't see why that should be true.

> 4) Is 99% of the code trivial and should be compilable in read-time? I
> believe so, given my naive information.

A billion lines of trivial operations still consumes ridiculous
amounts of resources to compile. Your naive view doesn't seem
relevant.

> 5) Should months of work be invested into generating shorter code in order
> for that to be able to compile in 5 minutes when really this code is
> compiled and used only once in a lifetime? I am unsure about this.
>
> The stinging issue of argument may be on aspect 4), which makes it seem as
> though the other 99% should be wrappable into functions.
> The, if we call it so, mis-expectation is that then clean interfaces to
> these functions are generatable or may even not exist. To give an example,
> suppose a skein of code-strings in which each invoked 1% non-trivial code
> is changing one string so that it induces chaos into the naming of all
> other temporary variable names, causing mismatch between contiguity between
> data at code-gen time and generated-code time. That is the central issue
> that I --admittedly-- brute-force by committing into lengthy code.

I don't understand this either, sorry.

Nothing you have said actually explains why you can't refactor the
code into utility functions that avoid doing everything in one huge
function.

>
> >> Jonathan: Have you tried using clang instead?
>
> I tried ICC (won't install successfully on my Windows 10 PC) and then CLANG
> 17.0.0 .
>
> CLANG requires a provided STL, so I used the ones from Microsoft Visual
> Studio 2022, which I referenced via the -isystem flag.
> In order to compile it, I had to make tons of changes that g++ would not
> have complained about; like putting a "d" at the end of a floating-point
> value.

Eh?!

You must be doing something wrong. Clang does not require that.

> To test all these changes, I chose a smaller codegen parameter for which
> g++ is able to compile the code and yields the expected result.
> On that smaller code, clang instead fails with the following error:
>
> // source code from line 201: therein, k is a template parameter, nx is a
> static constexpr std::array<const size_t>, and mx is a static constexpr
> size_t .
> random_generator.template setRandn<(size_t)(mx-nx[k])>(  x+nx[k]);
> random_generator.template setRandn<(size_t)(mx-nx[k])>(  xp+nx[k]);
> random_generator.template setRandn<(size_t)(mb-nb[k])>( Ax+nb[k]);
> random_generator.template setRandn<(size_t)(mb-nb[k])>( Axp+nb[k]);
> random_generator.template setRandn<(size_t)(mx-nx[k])>( dx+nx[k]);
> random_generator.template setRandn<(size_t)(mb-nb[k])>(dAx+nb[k]);
> for(size_t i=nc[k];i<mc;++i){ calc.template setRandn<numPages>(gc[i]);
> calc.template setRandn<numPages>(gc0[i]); }
> random_generator.template setRandn<(size_t)(mx-nx[k])>( x0+nx[k]);
>
> // clang compiler error message
> error: no matching member function for call to 'setRandn'
> 206 | random_generator.template setRandn<(size_t)(mx-nx[k])>( x0+nx[k]);
>
> The error appears unsensical to me, particularly since g++ was able to
> compile it, and since clang accepts the same expression five lines earlier
> but not in the present line.
> The code snippet is from a test class that adds noise to vector-components
> that should be uninvolved in certain test computations.
>
> If it helps, I can work on making it compilable to clang in an effort to
> identify up to which size either compiler is capable of compiling it.
> Please let me know what helps.
>
> Kind regards,
> Kai
>
> On Thu, Oct 19, 2023 at 2:47 PM David Brown <david@westcontrol.com> wrote:
>
> > On 18/10/2023 18:04, Kai Song via Gcc-help wrote:
> > > Dear GCC Developers,
> > >
> > > I am unsuccessfully using g++ 12.0.4 to compile lengthy c++ codes. Those
> > > codes are automatically generated from my own code-generator tools that
> > > depend on parameters p.
> > > Typical applications are:
> > > - Taylor series of order p inserted into consistency conditions of
> > > numerical schemes, to determine optimal method parameters (think of,
> > e.g.,
> > > Runge-Kutta methods)
> > > - recursive automatic code transformation (think of adjoints of adjoints
> > of
> > > adjoints...) of recursion level p
> > > - Hilbert curves or other space-filling curves to generate code that
> > > simulates cache utilization in a Monte-Carlo context
> > >
> > > I verify that for small p the codes compile and execute to the expected
> > > result. However, there is always a threshold for p so that the generated
> > > cpp file is so long that the compiler will just terminate after ~10min
> > > without monitor output but return the value +1.
> > > My cpp files range from 600k LOC up to 1Bio LOC. Often, the file
> > comprises
> > > of one single c++ template class member function definition that relies
> > on
> > > a few thousand lines of template-classes.
> > >
> > > I would like to know:
> > > 1) Am I doing something wrong in that GCC should be able to compile
> > lengthy
> > > codes?
> > > 2) Is it known that GCC is unable to compile lengthy codes?
> > > 3) Is it acknowledged that a compiler's ability to compile large files is
> > > relevant?
> > > 4) Are possible roots known for this inability and are these deliberate?
> > >
> >
> > I am curious to know why you are generating code like this.  I can see
> > how some code generators for mathematical code could easily produce
> > large amounts of code, but it is rarely ideal for real-world uses.  Such
> > flattened code can reduce overheads and improve optimisation
> > opportunities (like inlining, constant folding, function cloning, etc.)
> > for small cases, but then they get impractical for compiling while the
> > costs for cache misses outweigh the overhead cost for the loops or
> > recursion needed for general solutions.
> >
> > Any compiler is going to be targeted and tuned towards "normal" or
> > "typical" code.  That means primarily hand-written code, or smaller
> > generated code.  I know that some systems generate very large functions
> > or large files, but those are primarily C code, and the code is often
> > very simple and "flat".  (Applications here include compilers that use C
> > as a intermediary target language, and simulators of various kinds.)  It
> > typically makes sense to disable certain optimisation passes here, and a
> > number of passes scale badly (quadratic or perhaps worse) with function
> > size.
> >
> > However, if you are generating huge templates in C++, you are going a
> > big step beyond that - templates are, in a sense, code generators
> > themselves that run at compile time as an interpreted meta-language.  I
> > don't expect that there has been a deliberate decision to limit GCC's
> > handling of larger files, but I can't imagine that huge templates are a
> > major focus for the compiler development.  And I would expect enormous
> > memory use and compile times even when it does work.
> >
> >
> >

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 14:26     ` Jonathan Wakely
@ 2023-10-19 15:11       ` Kai Song
  2023-10-19 16:03         ` David Brown
  0 siblings, 1 reply; 20+ messages in thread
From: Kai Song @ 2023-10-19 15:11 UTC (permalink / raw)
  To: gcc-help, jwakely.gcc

[-- Attachment #1: Type: text/plain, Size: 6361 bytes --]

Dear everyone,

Now CLANG does compile the lengthy code of 600k LOC into a functioning
object that can be linked and yields the expected result.
Clang was just reluctant to make a certain pointer type-cast by itself.

Given that CLANG compiles the object, what can be inferred?

Further questions:
- Are there any pragmas that can help the compiler identify code portions
with linear compile-complexity?
- What compiler flag combination should I try to increase chances at
successful compilation of lengthy files?


@Jonathan, I am happy to clarify:

> >
> > 1) Will any generated application be limitable into a certain volume of
> > source-code? Certainly not, so why even argue on one particular instance
> of
> > volume?
>
> I have no idea what this means.
>

Is every program on earth writable into X lines of code? Then, which value
is X?
If X exists and is known, one can discuss whether source code of length Y
should exist.


> > 2) May it necessitate significant amounts of work to enable a
> > code-generator to produce shorter code? Certainly.
>
> Well at the moment your code doesn't even compile. So you can keep
> generating billion line functions that don't work, or you could try
> refactoring it. It might not be easy, but if it compiles and works,
> surely that's better than billion line functions that don't work?
>

But certainly, executing a code of length N is an easier problem (linear in
N) than figuring out whether a code of length N can be reduced into length
M (exponentially hard in N).
Also, being able to execute a code of length N is by far more useful than
knowing for one niche class of problems by how much N can be reduced, not
even mentioning whether M is smaller enough..


> > 3) Suppose a code-generator exploits structure of a given user-provided
> > problem instance for generation of shorter code. Will this mean the set
> of
> > feasible problems treatable by the code-generator shinks? Likely.
>
> I don't see why that should be true.
>

In order to seek a structure that the code-generator would track from the
user-provided instance, that information-processor would have to be
implemented, verified, executed, and function reliably.
Whenever the structure does not apply, the benefit cannot be stroked. Then
still the program length won't reduce and all effort was in vain.

In the context of information theory, the general hyperbola applies that
then attempting to solve problems more "smartly", this results in a solver
that is more efficient but less robust.
Efficacy, robustness, and genericity do always form a Pareto curve. Think
of higher-order numerical methods, which always add overhead while only
paying off when instances are sufficiently smooth.
Or think of advanced alpha-beta-pruning-schemes that only work beneficially
for highly structured niche problems, such as chess.


>
> > 4) Is 99% of the code trivial and should be compilable in read-time? I
> > believe so, given my naive information.
>
> A billion lines of trivial operations still consumes ridiculous
> amounts of resources to compile. Your naive view doesn't seem
> relevant.
>

I am a prisoner of myself, so I must respect that you say I cannot know.
I do know that I would know how to compute my particular code in linear
time of length with pencil and paper -- just my human machine-speed is 9
orders of magnitude too slow.


> > 5) Should months of work be invested into generating shorter code in
> order
> > for that to be able to compile in 5 minutes when really this code is
> > compiled and used only once in a lifetime? I am unsure about this.
> >
> > The stinging issue of argument may be on aspect 4), which makes it seem
> as
> > though the other 99% should be wrappable into functions.
> > The, if we call it so, mis-expectation is that then clean interfaces to
> > these functions are generatable or may even not exist. To give an
> example,
> > suppose a skein of code-strings in which each invoked 1% non-trivial code
> > is changing one string so that it induces chaos into the naming of all
> > other temporary variable names, causing mismatch between contiguity
> between
> > data at code-gen time and generated-code time. That is the central issue
> > that I --admittedly-- brute-force by committing into lengthy code.
>
> I don't understand this either, sorry.
>

Argument 5 was on the proportionality between work time and compile time. I
should rather spend 5 minutes into coding and 5 months into compiling than
1 week into coding and 5min into compiling, unless the code must be
compiled and run multiple times.

The elaboration on four gives a plastical description of code pattern that
results in chaos, supporting the hypothesis that the problem of reducing a
program of length N is in O(exp(N)) whereas running it is in O(N).


> Nothing you have said actually explains why you can't refactor the
> code into utility functions that avoid doing everything in one huge
> function.
>

Because there are no two pieces of code that are exactly identical. The
relative distance of two variables being involved into an identical formula
will change with every line.
Example:
Line 1000: tmp[100]=foo(tmp[101],tmp[102]);
Line 2000: tmp[200]=foo(tmp[201],tmp[203]); // dang it, not tmp[202] but
tmp[203]
It is like with penrose tiling. It seems all identical but the details
break it. You just do not find two identical local areas. No-where. And if,
you have to particularly search for them by brute-force, and that should
become useless whenever the particular pattern you try to find does just
not exist in that particular user's instance.


>
> >
> > >> Jonathan: Have you tried using clang instead?
> >
> > I tried ICC (won't install successfully on my Windows 10 PC) and then
> CLANG
> > 17.0.0 .
> >
> > CLANG requires a provided STL, so I used the ones from Microsoft Visual
> > Studio 2022, which I referenced via the -isystem flag.
> > In order to compile it, I had to make tons of changes that g++ would not
> > have complained about; like putting a "d" at the end of a floating-point
> > value.
>
> Eh?!
>
> You must be doing something wrong. Clang does not require that.
>

As I now found, ICC might now work because my CPU is AMD.
I was told LLVM does not come with its own STL. I understand you say it
does.
While g++ accepts Tfloat a=0.1d; the same appeared untrue for CLANG.

Kind regards,
Kai

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 14:16   ` Kai Song
  2023-10-19 14:26     ` Jonathan Wakely
@ 2023-10-19 15:15     ` David Brown
  1 sibling, 0 replies; 20+ messages in thread
From: David Brown @ 2023-10-19 15:15 UTC (permalink / raw)
  To: Kai Song, gcc-help

On 19/10/2023 16:16, Kai Song via Gcc-help wrote:
> Thank you for your feedback!
> 
> I forward the requested information to everyone as it may be relevant.

(Note - I am not a GCC developer, I am just a user, whereas Jonathan is 
one of the key developers of GCC.  So if he ever writes something that 
contradicts me, believe Jonathan, not me!)

> 
>>> Jonathan & David: How much RAM? Quadratic time-complexity.
> I have 256GB of RAM and 64 cores. I launch g++ from the Windows 10
> Terminal. From my information, Windows 10 will not limit the resources
> given to the Terminal.

I don't want to sound anti-Windows, but I think it is very unusual to 
see someone using Windows with something like this.  My own experience, 
as someone who has both Linux and Windows machines on my desk since I 
have work that uses both systems, is that big builds with GCC can be at 
least twice as fast on Linux than Windows with the same hardware.  Linux 
is simply better than Windows for this kind of thing - it has better 
handling of large amounts of ram, better multi-core handling, better 
file handling and better handling of multiple processes.  (Windows is 
better for some other things, but not this kind of thing.)  What kind of 
difference it would make for you here, I don't know - but with the 
massive compile times you have, it is worth the effort checking.

It also seems crazy to use off-the-shelf builds of GCC here, if that is 
what you are doing - under the same circumstances, I'd be looking at 
building GCC from source with the best choice of flags to shave the last 
few percent off the times.  I'd be using "-march=native" to fine-tune 
GCC to get the best from my 64 cores.  (Consider a Linux distro like 
Gentoo that compiles everything from source if you don't want to learn 
how to build GCC yourself.)

> I am perfectly fine if compile times are in the time-span of months. For
> the Runge-Kutta method, I would be happy with compile times of years as
> this would mean I have a faster method in a few years.
> 

But would the resulting binary, generated after months, actually be 
faster?  A big source code is going to result in a big binary, which is 
likely to be slower in practice due to cache misses.

Also note that there are pretty much no calculations taking years to run 
that are actually worth running.  If a calculation is going to take 2 
years to do, you are better off selling your current computer, investing 
the money in a fund, taking it out in a years time, buying a new 
computer at that point which is three times the speed, and running the 
calculation in 8 months - finishing sooner than if you started today 
with your current machine.

Perhaps I am completely misunderstanding what you are actually trying to 
achieve.

In my mind, I am comparing your task to something simpler (one that 
everyone here can understand, avoiding the post-grad level mathematics). 
  Sometimes it is nice to have bigger integer types available than the 
usual biggest size of 64 bits.  It's reasonably straightforward to make 
a C++ class for a 128 bit integer type with operator overloading, built 
from two 64-bit integers.  Then you can make a 256 bit integer type 
build from two 128 bit integers.  Then a template that does this 
automatically, and your recursive template setup will let you write:

	using uint4096_t = big_uint<14>;	// 2 ** 14 bits

and you are ready to use it in your RSA encryption code.  It sounds 
simple - and works well for 128-bit and 256-bit integers.  But while it 
will compile for 4096-bit integers, and give the right answers, it is 
likely to be terrible in performance.  Long before reaching 4096 bits, 
it will make sense to move to arbitrary-precision integer libraries, 
using loops over 64-bit chunks rather than just throwing a big template 
at the compiler and drinking coffee during compiles.

Your problem sounds like the same principle, only more so.  But as I 
say, maybe I am missing something vital in my understanding of your problem.




^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 15:11       ` Kai Song
@ 2023-10-19 16:03         ` David Brown
  2023-10-20  9:32           ` Kai Song
  0 siblings, 1 reply; 20+ messages in thread
From: David Brown @ 2023-10-19 16:03 UTC (permalink / raw)
  To: Kai Song, gcc-help, jwakely.gcc

On 19/10/2023 17:11, Kai Song via Gcc-help wrote:

(It can be helpful to include attributions when replying, so that it is 
clear who wrote what.  Thus you should have this line included:)

 > On 19/10/2023 16:26, Jonathan Wakely via Gcc-help wrote:


>> Nothing you have said actually explains why you can't refactor the
>> code into utility functions that avoid doing everything in one huge
>> function.
>>
> 
> Because there are no two pieces of code that are exactly identical. The
> relative distance of two variables being involved into an identical formula
> will change with every line.
> Example:
> Line 1000: tmp[100]=foo(tmp[101],tmp[102]);
> Line 2000: tmp[200]=foo(tmp[201],tmp[203]); // dang it, not tmp[202] but
> tmp[203]
> It is like with penrose tiling. It seems all identical but the details
> break it. You just do not find two identical local areas. No-where. And if,
> you have to particularly search for them by brute-force, and that should
> become useless whenever the particular pattern you try to find does just
> not exist in that particular user's instance.
> 

Surely this is screaming out for a table?

const int foo_table[][3] = {
     { 100, 101, 102 },
     { 200, 201, 203 },
	...
};


	for (int i = 0; i < 100000000; i++) {
		tmp[foo_table[i][0]] =
			foo(tmp[foo_table[i][1]], tmp[foo_table[i][2]]);
	}

This will turn your million-line program into a four line program with a 
big table, which will compile quickly and run faster.




^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-19 16:03         ` David Brown
@ 2023-10-20  9:32           ` Kai Song
  2023-10-20 10:19             ` Jonathan Wakely
       [not found]             ` <CACJ51z3rYUSSe7XpcL4d2xfAhMaiVZpxWAnpkqZc1cn2DRf+uA@mail.gmail.com>
  0 siblings, 2 replies; 20+ messages in thread
From: Kai Song @ 2023-10-20  9:32 UTC (permalink / raw)
  To: David Brown; +Cc: gcc-help, jwakely.gcc

[-- Attachment #1: Type: text/plain, Size: 4030 bytes --]

@David: I am sorry there was a misunderstanding with the example. You wrote
four lines of code to be able to replace two lines of code into a for-loop
of length 2, somewhere in a 1Bio line program where this pattern
coincidentally occured once in one generated instance. (In the next
instance, the "pattern" might not occur, so then you just added four
lines.) Also, it is unclear how you would implement in turn the code which
locates the line 200 and decides to replace it with the for-loop. Further,
you left unclear where to place this for-loop (in line 100 or line 200?),
both of which will lead to the wrong result in the generated program due to
false assumption on raise-conditions.

I think my initial question has been mis-correlated. I think so because we
drift into a discussion of speed and what is better and faster and
what-not. But this is not the question at all.
The setting is rather: I have a list of lengthy instructions. Think of a
grocery list. The most convenient language for me to write this grocery
list in is c++.
Now I don't care whether my doctor thinks I want to eat this all by myself
or on a weekly basis, and likes that thought or not. I just need to get the
groceries.
I can get them myself. In this case, it would mean I have to learn to
generate code of linear compile-complexity into assembly directly by myself.

I am not even interested in the compilation per se. I just need to be able
to evaluate the code as written.
I am equally fine with using an interpreted language or a different
language. But interpreters have the exact same issue: They refuse to start
computing a flat program code when the file containing it is too big.
So suppose I have a substitution of Taylor serieses into serieses and it
_does_ become chaotic after reordering of terms. That is after all the work
that the program does. That is what it is made for.
And now I just want some way to evaluate that formula once to tell me
whether y=f(x) yields y=0 for a particular value of x. This is as basic as
it gets.

I suppose the enablement of this capability is not in the scope of GCC. Is
there a name for this problem (evaluating lengthy list of compute
instructions/formulas)? Where should I look to find a solution to that
problem?

On Thu, Oct 19, 2023 at 6:03 PM David Brown <david@westcontrol.com> wrote:

> On 19/10/2023 17:11, Kai Song via Gcc-help wrote:
>
> (It can be helpful to include attributions when replying, so that it is
> clear who wrote what.  Thus you should have this line included:)
>
>  > On 19/10/2023 16:26, Jonathan Wakely via Gcc-help wrote:
>
>
> >> Nothing you have said actually explains why you can't refactor the
> >> code into utility functions that avoid doing everything in one huge
> >> function.
> >>
> >
> > Because there are no two pieces of code that are exactly identical. The
> > relative distance of two variables being involved into an identical
> formula
> > will change with every line.
> > Example:
> > Line 1000: tmp[100]=foo(tmp[101],tmp[102]);
> > Line 2000: tmp[200]=foo(tmp[201],tmp[203]); // dang it, not tmp[202] but
> > tmp[203]
> > It is like with penrose tiling. It seems all identical but the details
> > break it. You just do not find two identical local areas. No-where. And
> if,
> > you have to particularly search for them by brute-force, and that should
> > become useless whenever the particular pattern you try to find does just
> > not exist in that particular user's instance.
> >
>
> Surely this is screaming out for a table?
>
> const int foo_table[][3] = {
>      { 100, 101, 102 },
>      { 200, 201, 203 },
>         ...
> };
>
>
>         for (int i = 0; i < 100000000; i++) {
>                 tmp[foo_table[i][0]] =
>                         foo(tmp[foo_table[i][1]], tmp[foo_table[i][2]]);
>         }
>
> This will turn your million-line program into a four line program with a
> big table, which will compile quickly and run faster.
>
>
>
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-20  9:32           ` Kai Song
@ 2023-10-20 10:19             ` Jonathan Wakely
       [not found]             ` <CACJ51z3rYUSSe7XpcL4d2xfAhMaiVZpxWAnpkqZc1cn2DRf+uA@mail.gmail.com>
  1 sibling, 0 replies; 20+ messages in thread
From: Jonathan Wakely @ 2023-10-20 10:19 UTC (permalink / raw)
  To: Kai Song; +Cc: David Brown, gcc-help

[-- Attachment #1: Type: text/plain, Size: 4643 bytes --]

On Fri, 20 Oct 2023, 10:32 Kai Song, <kaisong1515@gmail.com> wrote:

> @David: I am sorry there was a misunderstanding with the example. You
> wrote four lines of code to be able to replace two lines of code into a
> for-loop of length 2, somewhere in a 1Bio line program where this pattern
> coincidentally occured once in one generated instance. (In the next
> instance, the "pattern" might not occur, so then you just added four
> lines.) Also, it is unclear how you would implement in turn the code which
> locates the line 200 and decides to replace it with the for-loop. Further,
> you left unclear where to place this for-loop (in line 100 or line 200?),
> both of which will lead to the wrong result in the generated program due to
> false assumption on raise-conditions.
>
> I think my initial question has been mis-correlated. I think so because we
> drift into a discussion of speed and what is better and faster and
> what-not. But this is not the question at all.
> The setting is rather: I have a list of lengthy instructions. Think of a
> grocery list. The most convenient language for me to write this grocery
> list in is c++.
> Now I don't care whether my doctor thinks I want to eat this all by myself
> or on a weekly basis, and likes that thought or not. I just need to get the
> groceries.
> I can get them myself. In this case, it would mean I have to learn to
> generate code of linear compile-complexity into assembly directly by myself.
>
> I am not even interested in the compilation per se. I just need to be able
> to evaluate the code as written.
> I am equally fine with using an interpreted language or a different
> language. But interpreters have the exact same issue: They refuse to start
> computing a flat program code when the file containing it is too big.
>

So break it up into smaller functions. You've been given the answer, but
you don't want to accept it and want to argue that it's too difficult.

Maybe that's true, but it's the only way you're going to get the code to
compile.


So suppose I have a substitution of Taylor serieses into serieses and it
> _does_ become chaotic after reordering of terms. That is after all the work
> that the program does. That is what it is made for.
> And now I just want some way to evaluate that formula once to tell me
> whether y=f(x) yields y=0 for a particular value of x. This is as basic as
> it gets.
>
> I suppose the enablement of this capability is not in the scope of GCC. Is
> there a name for this problem (evaluating lengthy list of compute
> instructions/formulas)? Where should I look to find a solution to that
> problem?
>

Use subroutines. Put some of the instructions into a separate function, and
call that function.

The way you're generating code is not going to work at scale.





> On Thu, Oct 19, 2023 at 6:03 PM David Brown <david@westcontrol.com> wrote:
>
>> On 19/10/2023 17:11, Kai Song via Gcc-help wrote:
>>
>> (It can be helpful to include attributions when replying, so that it is
>> clear who wrote what.  Thus you should have this line included:)
>>
>>  > On 19/10/2023 16:26, Jonathan Wakely via Gcc-help wrote:
>>
>>
>> >> Nothing you have said actually explains why you can't refactor the
>> >> code into utility functions that avoid doing everything in one huge
>> >> function.
>> >>
>> >
>> > Because there are no two pieces of code that are exactly identical. The
>> > relative distance of two variables being involved into an identical
>> formula
>> > will change with every line.
>> > Example:
>> > Line 1000: tmp[100]=foo(tmp[101],tmp[102]);
>> > Line 2000: tmp[200]=foo(tmp[201],tmp[203]); // dang it, not tmp[202] but
>> > tmp[203]
>> > It is like with penrose tiling. It seems all identical but the details
>> > break it. You just do not find two identical local areas. No-where. And
>> if,
>> > you have to particularly search for them by brute-force, and that should
>> > become useless whenever the particular pattern you try to find does just
>> > not exist in that particular user's instance.
>> >
>>
>> Surely this is screaming out for a table?
>>
>> const int foo_table[][3] = {
>>      { 100, 101, 102 },
>>      { 200, 201, 203 },
>>         ...
>> };
>>
>>
>>         for (int i = 0; i < 100000000; i++) {
>>                 tmp[foo_table[i][0]] =
>>                         foo(tmp[foo_table[i][1]], tmp[foo_table[i][2]]);
>>         }
>>
>> This will turn your million-line program into a four line program with a
>> big table, which will compile quickly and run faster.
>>
>>
>>
>>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
       [not found]             ` <CACJ51z3rYUSSe7XpcL4d2xfAhMaiVZpxWAnpkqZc1cn2DRf+uA@mail.gmail.com>
@ 2023-10-20 21:08               ` Kai Song
  2023-10-20 22:03                 ` Paul Smith
  0 siblings, 1 reply; 20+ messages in thread
From: Kai Song @ 2023-10-20 21:08 UTC (permalink / raw)
  To: Andrew Bell, gcc-help

[-- Attachment #1: Type: text/plain, Size: 2592 bytes --]

Yes. I have invested time on my end to cause some understanding for an
issue. But as I can confirm onto what has been my experience in that regard
all along, when speaking to niche experts in an interdisciplinary interface
and attempting to do something unconventional or maybe even novel, they are
refuting, denying, obscuring, defaming, fatiguing, reinterpreting,
straw-manning, etc. your problem/situation/fact from who you are, what you
are doing, what you are knowing, what you are even asking, why you are even
asking, the way you are asking, etc. I think what really happens is that
when a person in domain X is contacting to a person in domain Y and asks x
then Y will go all the way to transform x into y. And I would harshly claim
that the motivation in doing so is to pull x out of X into the domain Y
where they can claim with upper hand that what you do makes no sense.
Essentially I have been asking (maybe eight times or so) to execute lengthy
instructions. C++ does not forbid this. You cannot do it. And you give me
flack for it by basically repeating "you must be doing something wrong",
equivalent in saying to <I can add 1 and 1>. I won't participate in further
dialogue.

On Fri, Oct 20, 2023 at 4:15 PM Andrew Bell <andrew.bell.ia@gmail.com>
wrote:

>
> On Fri, Oct 20, 2023 at 5:33 AM Kai Song via Gcc-help <
> gcc-help@gcc.gnu.org> wrote:
>
>> @David: I am sorry there was a misunderstanding with the example. You
>> wrote
>> four lines of code to be able to replace two lines of code into a for-loop
>> of length 2, somewhere in a 1Bio line program where this pattern
>> coincidentally occured once in one generated instance. (In the next
>> instance, the "pattern" might not occur, so then you just added four
>> lines.) Also, it is unclear how you would implement in turn the code which
>> locates the line 200 and decides to replace it with the for-loop. Further,
>> you left unclear where to place this for-loop (in line 100 or line 200?),
>> both of which will lead to the wrong result in the generated program due
>> to
>> false assumption on raise-conditions.
>>
>
> Hi,
>
> You seem out of your depth with regard to this programming task. You are
> conversing with experts who have kindly given their time to help address
> your issue. Perhaps you should hire someone locally to help you turn your
> current effort into something more manageable. There are many ways to solve
> most problems and it seems likely that your current method is not the best.
>
> --
> Andrew Bell
> andrew.bell.ia@gmail.com
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-20 21:08               ` Kai Song
@ 2023-10-20 22:03                 ` Paul Smith
  2023-10-21  6:52                   ` Jonathan Wakely
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Smith @ 2023-10-20 22:03 UTC (permalink / raw)
  To: Kai Song, gcc-help

On Fri, 2023-10-20 at 23:08 +0200, Kai Song via Gcc-help wrote:
> Yes. I have invested time on my end to cause some understanding for
> an issue.

Maybe I can provide context.  Note I'm not a GCC developer although I
have worked on compiler implementations in the past.

The C++ standard defines an abstract definition.  Individual
implementations of compilers for the C++ standard will have limitations
on the abstract definition, obviously: computers are not abstract and
so they are limited.

When a compiler compiles a single function, _especially_ if you enable
optimization, it's not the case that it can simply churn through that
function line by line, process that line and spit out some assembly,
then move to the next line without any context.  Instead, within a
single function the compiler must remember all sorts of things about
the lines in the function that it already parsed.

The longer your function is, the more things the compiler has to
remember about all the previous content that it already parsed.

When programmers write code they naturally break it up into functions
that can managed and understood by other programmers, so compilers
expect that individual functions are of "mangeable" size and aren't
hundreds of thousands of lines long, and implementations make choices
that reflect that expectation.

As a result there will be some limitations on the size of a function
that a compiler can handle with any sort of grace: beyond that the
implementation may well run into asymptotic runtimes due to choices in
internal data structure and algorithms.

Is this unfortunate?  Of course, abstractly it's always unfortunate
when a compiler limits the program you want to write.  But, it's quite
reasonable for the compiler authors to decide to concentrate on the
99.999% of code they expect to see, and not worrying about the 0.001%
of code they don't expect, and of which they probably don't even have
examples to test.

So, you can either decide you want to try to help fix the compiler to
be able to better handle code like yours, by doing performance analysis
and figuring out where the bottlenecks are and providing patches that
help the compiler in these extreme edge cases (without damaging it in
the "normal" cases).

Or, you can change your code generator to output code which is more
like what the compiler expects, where you have many more smaller
functions, potentially even in separate source files, rather than one
gigantic function containing all the code.

Or, I guess, you can look for a different compiler that has different
goals.

So to answer your questions:

> 1) Am I doing something wrong in that GCC should be able to compile
> lengthy codes?

I'm not sure exactly what this question is asking.  If what you mean
is, is there some flag or option you can give to GCC to get it to build
your code then no, I doubt it.  Disabling optimization altogether will
help, but may not be enough; I assume you already tried that.

If you are asking should GCC be able to compile these huge functions,
then I guess the answer is that the GCC authors don't think that this
is an important use-case so they haven't tried to make it work well.

> 2) Is it known that GCC is unable to compile lengthy codes?

I doubt that it's known per se because I'm quite sure that there are no
tests in the compiler test suite that try to compile functions at this
scale.

But if you tell us you do see that it's unable to be done, I doubt that
anyone on this list is particularly surprised by that.

> 3) Is it acknowledged that a compiler's ability to compile large
> files is relevant?

As above, I doubt the GCC developers feel that it's particularly
relevant, no.  Which is not the same thing as saying that, if someone
points out a specific problem and provides a reasonable fix, that they
wouldn't include it.

> 4) Are possible roots known for this inability and are these
> deliberate?

I would say, as above, that the GCC developers do not have such huge
functions in their test suite and so no, they are not aware of exactly
what problems would occur when trying to compile them.  If they were to
sit and think about it for a while, I'm sure various developers could
imagine possible bottlenecks that would cause huge functions to compile
slowly, but as always with performance the only way to really know is
to measure it.

I don't know what you mean by "are they deliberate".  I'm sure they
didn't sit down to write the compiler implementation and say "let's
make it slow for huge functions!"

But it could well be that when making design choices if there was a way
to do it that was much easier to implement or allowed for faster
compilation for medium-sized functions but would be hugely inefficient
for gigantic functions, they may have said "well, these huge functions
are extremely unlikely so we won't optimize for that".

There's no way to know until, as above, someone sits down with such a
huge file and performs performance analysis on the compiler as it tries
to compile the function, and figures out what is going on.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-20 22:03                 ` Paul Smith
@ 2023-10-21  6:52                   ` Jonathan Wakely
  2023-10-21 14:10                     ` Kai Song
  0 siblings, 1 reply; 20+ messages in thread
From: Jonathan Wakely @ 2023-10-21  6:52 UTC (permalink / raw)
  To: Paul Smith; +Cc: Kai Song, gcc-help

[-- Attachment #1: Type: text/plain, Size: 1248 bytes --]

On Fri, 20 Oct 2023, 23:04 Paul Smith via Gcc-help, <gcc-help@gcc.gnu.org>
wrote:

> On Fri, 2023-10-20 at 23:08 +0200, Kai Song via Gcc-help wrote:
> > Yes. I have invested time on my end to cause some understanding for
> > an issue.
>
> Maybe I can provide context.  Note I'm not a GCC developer although I
> have worked on compiler implementations in the past.
>
> The C++ standard defines an abstract definition.  Individual
> implementations of compilers for the C++ standard will have limitations
> on the abstract definition, obviously: computers are not abstract and
> so they are limited.
>

And this is explicitly called out in the C++ standard:
https://eel.is/c++draft/intro.compliance#general-2.1
https://eel.is/c++draft/implimits


Kai, we are not sceptics who are doubting your marvelous theories. It is an
empirical fact that gcc will fail to compile ridiculously large functions
written in your preferred coding style. Some of us have tried to suggest
alternatives that might have more success.

At this point I consider any further engagement in this thread to be a
waste of time. You have answers to your questions now, but you seem
unwilling to take the advice given.

Good luck compiling your ridiculously constructed programs.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-21  6:52                   ` Jonathan Wakely
@ 2023-10-21 14:10                     ` Kai Song
  2023-10-24 14:57                       ` Paul Smith
  0 siblings, 1 reply; 20+ messages in thread
From: Kai Song @ 2023-10-21 14:10 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: Paul Smith, gcc-help

[-- Attachment #1: Type: text/plain, Size: 2892 bytes --]

Thank you to Jonathan and Paul for clarifying the scope of the discussion.
I was under the impression that the goal was to move the subject at "if
your code is too long you have to refactor it", which I interpreted after
all as "you must be doing something wrong". Because we both proceeded by
alternately replying with the same piece of information.

Today I did implement a cpp program that will evaluate a file containing a
formula of length N and its derivative in O(N), compiled with GCC :D. Of
course my program can only treat very trivial files and is slow but for
most of my tasks that does not matter. I understand (and needless to say
couldn't do it better) that it may just be for now a too difficult problem
to actually compile lengthy cpp. The question that I was targeting at is:
What can I realistically do (implying that refactoring in the remaining
project's period appears nonviable)? The formula evaluator programs helps
for a subset of these problems, clearly. Are there languages with similar
capabilities to cpp that I could generate into equivalently and that are as
easy to learn (i.e., excluding assembly or any languages that won't permit
templates or flexible types)?

My initial phrasing of "how can I complie my code?" was wrong as I can see
now. It should have been "how can I somehow just make it so I can evaluate
it?, particularly when I see myself unable for now to transform it into
equivalent shorter code?"

On Sat, Oct 21, 2023 at 8:52 AM Jonathan Wakely <jwakely.gcc@gmail.com>
wrote:

>
>
> On Fri, 20 Oct 2023, 23:04 Paul Smith via Gcc-help, <gcc-help@gcc.gnu.org>
> wrote:
>
>> On Fri, 2023-10-20 at 23:08 +0200, Kai Song via Gcc-help wrote:
>> > Yes. I have invested time on my end to cause some understanding for
>> > an issue.
>>
>> Maybe I can provide context.  Note I'm not a GCC developer although I
>> have worked on compiler implementations in the past.
>>
>> The C++ standard defines an abstract definition.  Individual
>> implementations of compilers for the C++ standard will have limitations
>> on the abstract definition, obviously: computers are not abstract and
>> so they are limited.
>>
>
> And this is explicitly called out in the C++ standard:
> https://eel.is/c++draft/intro.compliance#general-2.1
> https://eel.is/c++draft/implimits
>
>
> Kai, we are not sceptics who are doubting your marvelous theories. It is
> an empirical fact that gcc will fail to compile ridiculously large
> functions written in your preferred coding style. Some of us have tried to
> suggest alternatives that might have more success.
>
> At this point I consider any further engagement in this thread to be a
> waste of time. You have answers to your questions now, but you seem
> unwilling to take the advice given.
>
> Good luck compiling your ridiculously constructed programs.
>
>
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-21 14:10                     ` Kai Song
@ 2023-10-24 14:57                       ` Paul Smith
  2023-10-25 11:09                         ` Richard Earnshaw
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Smith @ 2023-10-24 14:57 UTC (permalink / raw)
  To: Kai Song; +Cc: gcc-help

On Sat, 2023-10-21 at 16:10 +0200, Kai Song via Gcc-help wrote:
> I understand (and needless to say couldn't do it better) that it may
> just be for now a too difficult problem to actually compile lengthy
> cpp.

I just want to be very clear here.  When you say "lengthy cpp" that is
not very precise because "cpp" is not an accurate term.

What is true is the the compiler will have problems compiling extremely
long SINGLE FUNCTIONS.  It doesn't really matter how long the source
file is (how many functions it contains).  What matters is now large
each function is.

In order to make your generated code digestible for compilers, you need
to break up your algorithms into multiple smaller functions.

> The question that I was targeting at is: What can I realistically do
> (implying that refactoring in the remaining project's period appears
> nonviable)?

Of course we cannot decide what is realistic or not.  Your comments
here seem to imply that you would be wiling to rewrite the entire thing
to generate output in a completely different language so it seems you
are willing and able to make very sweeping changes.

It's hard for us to understand why you can't extract parts of your
generated code and put them into separate functions.  Surely you must
have loops in your code: can't the body of some of these loops become
separate functions?  If there are steps to the algorithm can't each of
these steps become a separate function?  You can collect all the state
of the algorithm into one large global structure, rather than having
individual local variables, and all the functions can refer to that
global state structure, so you don't have to pass the data around as
arguments.

> Are there languages with similar capabilities to cpp that I could
> generate into equivalently and that are as easy to learn

Since we don't know which particular capabilities of C++ you are
relying on that's difficult to say.  However, my feeling is that no
matter what language you use you will run into these limitations:
compilers in general simply are not written to have single functions
which are hundreds of thousands of lines long.  Functions help humans
organize their thinking and programming, but they also are critical for
compilers to make their jobs simpler (or even possible).

Rather than learning a new language, I think you should be revisiting
your code generation and attempting to understand how to break it up. 
I don't think there's any feasible alternative.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-24 14:57                       ` Paul Smith
@ 2023-10-25 11:09                         ` Richard Earnshaw
  2023-10-25 14:49                           ` Paul Smith
  0 siblings, 1 reply; 20+ messages in thread
From: Richard Earnshaw @ 2023-10-25 11:09 UTC (permalink / raw)
  To: psmith, Kai Song; +Cc: gcc-help



On 24/10/2023 15:57, Paul Smith via Gcc-help wrote:
> On Sat, 2023-10-21 at 16:10 +0200, Kai Song via Gcc-help wrote:
>> I understand (and needless to say couldn't do it better) that it may
>> just be for now a too difficult problem to actually compile lengthy
>> cpp.
> 
> I just want to be very clear here.  When you say "lengthy cpp" that is
> not very precise because "cpp" is not an accurate term.
> 
> What is true is the the compiler will have problems compiling extremely
> long SINGLE FUNCTIONS.  It doesn't really matter how long the source
> file is (how many functions it contains).  What matters is now large
> each function is.
> 

That's only partially true.  Modern compilers read the whole translation 
unit into memory, parse all of it and then optimize the various 
functions.  Smaller functions helps to reduce the overall memory 
footprint while doing the optimizations, but does not entirely eliminate 
the file size issue.

> In order to make your generated code digestible for compilers, you need
> to break up your algorithms into multiple smaller functions.
> 
>> The question that I was targeting at is: What can I realistically do
>> (implying that refactoring in the remaining project's period appears
>> nonviable)?
> 
> Of course we cannot decide what is realistic or not.  Your comments
> here seem to imply that you would be wiling to rewrite the entire thing
> to generate output in a completely different language so it seems you
> are willing and able to make very sweeping changes.
> 
> It's hard for us to understand why you can't extract parts of your
> generated code and put them into separate functions.  Surely you must
> have loops in your code: can't the body of some of these loops become
> separate functions?  If there are steps to the algorithm can't each of
> these steps become a separate function?  You can collect all the state
> of the algorithm into one large global structure, rather than having
> individual local variables, and all the functions can refer to that
> global state structure, so you don't have to pass the data around as
> arguments.
> 
>> Are there languages with similar capabilities to cpp that I could
>> generate into equivalently and that are as easy to learn
> 
> Since we don't know which particular capabilities of C++ you are
> relying on that's difficult to say.  However, my feeling is that no
> matter what language you use you will run into these limitations:
> compilers in general simply are not written to have single functions
> which are hundreds of thousands of lines long.  Functions help humans
> organize their thinking and programming, but they also are critical for
> compilers to make their jobs simpler (or even possible).
> 
> Rather than learning a new language, I think you should be revisiting
> your code generation and attempting to understand how to break it up.
> I don't think there's any feasible alternative.

The most important consideration doesn't seem to have been mentioned on 
this thread: CPU architectures.  Modern CPUs have a very strong 
preference to repeatedly executing small bits of code.  Linear code will 
kill their performance.  This comes from the fact that the CPU can 
consume instructions far faster than the memory system can supply them. 
So to keep performance up, the system uses caches; but those are tiny 
and they only help if the instructions held in them are re-used. 
Furthermore, branches for most programs are practically free these days, 
especially when you take the benefit of caches into account: the days of 
unrolling loops dozens (or more) times to avoid branch overheads are 
pretty much gone.

Finally, there are practical limits (especially on RISC processors) of 
how large a function can be before you need to take defensive 
compilation measures to handle very large offsets between addresses. 
Those defensive measures further slow down your compiled programs.

R.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-25 11:09                         ` Richard Earnshaw
@ 2023-10-25 14:49                           ` Paul Smith
  2023-10-26 11:19                             ` David Brown
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Smith @ 2023-10-25 14:49 UTC (permalink / raw)
  To: Richard Earnshaw, Kai Song; +Cc: gcc-help

On Wed, 2023-10-25 at 12:09 +0100, Richard Earnshaw wrote:
> Those defensive measures further slow down your compiled programs.

Just to remind, Kai Song has said multiple times in these threads that
they're not asking about improving the performance of the resulting
compiled program.  They don't seem to mind if the resulting program
takes days or weeks to run.

They are asking here about how to get the compiler to successfully
compile the code, not how to make the compiled code faster.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Compilation of lengthy C++ Files
  2023-10-25 14:49                           ` Paul Smith
@ 2023-10-26 11:19                             ` David Brown
  0 siblings, 0 replies; 20+ messages in thread
From: David Brown @ 2023-10-26 11:19 UTC (permalink / raw)
  To: gcc-help

On 25/10/2023 16:49, Paul Smith via Gcc-help wrote:
> On Wed, 2023-10-25 at 12:09 +0100, Richard Earnshaw wrote:
>> Those defensive measures further slow down your compiled programs.
> 
> Just to remind, Kai Song has said multiple times in these threads that
> they're not asking about improving the performance of the resulting
> compiled program.  They don't seem to mind if the resulting program
> takes days or weeks to run.
> 

He has actually said he doesn't mind if the /compile/ time is months, or 
even years!

> They are asking here about how to get the compiler to successfully
> compile the code, not how to make the compiled code faster.
> 

Yes - with an emphasis on wanting to persuade GCC to compile his 
generated code, rather than on wanting to generate code that can be 
compiled by GCC.



^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2023-10-26 11:19 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-18 16:04 Compilation of lengthy C++ Files Kai Song
2023-10-18 21:59 ` Jonathan Wakely
2023-10-19  8:36   ` Andrew Haley
2023-10-19 12:47 ` David Brown
2023-10-19 12:47   ` David Brown
2023-10-19 14:16   ` Kai Song
2023-10-19 14:26     ` Jonathan Wakely
2023-10-19 15:11       ` Kai Song
2023-10-19 16:03         ` David Brown
2023-10-20  9:32           ` Kai Song
2023-10-20 10:19             ` Jonathan Wakely
     [not found]             ` <CACJ51z3rYUSSe7XpcL4d2xfAhMaiVZpxWAnpkqZc1cn2DRf+uA@mail.gmail.com>
2023-10-20 21:08               ` Kai Song
2023-10-20 22:03                 ` Paul Smith
2023-10-21  6:52                   ` Jonathan Wakely
2023-10-21 14:10                     ` Kai Song
2023-10-24 14:57                       ` Paul Smith
2023-10-25 11:09                         ` Richard Earnshaw
2023-10-25 14:49                           ` Paul Smith
2023-10-26 11:19                             ` David Brown
2023-10-19 15:15     ` David Brown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).