public inbox for gcc@gcc.gnu.org
From: Richard Biener <richard.guenther@gmail.com>
To: NightStrike <nightstrike@gmail.com>
Cc: joel@rtems.org, pmenzel+gcc.gnu.org@molgen.mpg.de,
		GCC Development <gcc@gcc.gnu.org>
Subject: Re: How to get GCC on par with ICC?
Date: Thu, 21 Jun 2018 09:20:00 -0000	[thread overview]
Message-ID: <CAFiYyc2mk_U1xoHndtYU4p2jP_Hjw-6esXvVFkJ+EuBiGT7mdg@mail.gmail.com> (raw)
In-Reply-To: <CAF1jjLuoNjPrpWQL4fiwCBs2-xJX2MR7GDdhWYPhWGvXdKG8kA@mail.gmail.com>

On Wed, Jun 20, 2018 at 11:12 PM NightStrike <nightstrike@gmail.com> wrote:
>
> On Wed, Jun 6, 2018 at 11:57 AM, Joel Sherrill <joel@rtems.org> wrote:
> >
> > On Wed, Jun 6, 2018 at 10:51 AM, Paul Menzel <
> > pmenzel+gcc.gnu.org@molgen.mpg.de> wrote:
> >
> > > Dear GCC folks,
> > >
> > >
> > > Some scientists in our organization still want to use the Intel compiler
> > > because, they say, it produces faster code, which is then executed on
> > > clusters. Some resources on the Web [1][2] confirm this. (I am aware that
> > > it’s heavily dependent on the actual program.)
> > >
> >
> > Do they have specific examples where icc is better for them? Or can they point
> > to specific GCC PRs which impact them?
> >
> >
> > GCC versions?
> >
> > Are there specific CPU model variants of concern?
> >
> > What flags are used to compile? Sometimes a bit of advice can produce
> > improvements.
> >
> > Without specific examples, it is hard to set goals.
>
> If I could perhaps jump in here for a moment...  Just today I hit upon
> a series of small (in lines of code) loops that gcc can't vectorize,
> and intel vectorizes like a madman.  They all involve a lot of heavy
> use of std::vector<std::vector<float>>.  Comparisons were with gcc

Ick - C++ ;)

> 8.1, intel 2018.u1, an AMD Opteron 6386 SE, with the program running
> as sched_FIFO, mlockall, affinity set to its own core, and all
> interrupts vectored off that core.  So, as close to not-noisy as
> possible.
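
For readers unfamiliar with that kind of low-noise setup, the sketch below shows one way it is commonly arranged on Linux (pin the thread to one core, switch it to the SCHED_FIFO real-time class, and lock its memory). It is an illustration only, not the poster's actual harness; the priority value is a made-up example, and moving interrupts off the core is done separately through the kernel's IRQ affinity settings.

    // Minimal sketch of a low-noise benchmarking setup (Linux, built with g++,
    // which defines _GNU_SOURCE by default). Needs CAP_SYS_NICE and
    // CAP_IPC_LOCK, or root. The priority 50 is an arbitrary example value.
    #include <sched.h>
    #include <sys/mman.h>

    bool isolate_on_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof set, &set) != 0)    // run only on this core
            return false;

        sched_param sp {};
        sp.sched_priority = 50;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)    // real-time FIFO class
            return false;

        return mlockall(MCL_CURRENT | MCL_FUTURE) == 0;     // avoid paging jitter
    }
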
>
> I was surprised at the results, but using each compiler's method of
> dumping vectorization info, intel wins on two points:
>
> 1) It actually vectorizes
> 2) Its vectorization output is much easier to read
>
> Options were:
>
> gcc -Wall -ggdb3 -std=gnu++17 -flto -Ofast -march=native
>
> vs:
>
> icc -Ofast -std=gnu++14
>
>
> So, not exactly identical flags, but pretty close.
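
(For anyone reproducing the comparison: the vectorization reports mentioned above can typically be requested with options along these lines. These are illustrative, not the poster's actual command lines, and exact spellings and output formats vary by compiler version.)

gcc -Ofast -march=native -fopt-info-vec-missed=vec-missed.log ...

icc -Ofast -qopt-report=5 -qopt-report-phase=vec ...
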
>
>
> So here's an example of a chunk of code (not very readable, sorry
> about that) that intel can vectorize, and subsequently make about 50%
> faster:
>
>         std::size_t nLayers { input.nn.size() };
>         //std::size_t ySize = std::max_element(input.nn.cbegin(), input.nn.cend(),
>         //        [](auto a, auto b){ return a.size() < b.size(); })->size();
>         std::size_t ySize = 0;
>         for (auto const & nn: input.nn)
>                 ySize = std::max(ySize, nn.size());
>
>         float yNorm[ySize];
>         for (auto & y: yNorm)
>                 y = 0.0f;
>         for (std::size_t i = 0; i < xSize; ++i)
>                 yNorm[i] = xNorm[i];
>         for (std::size_t layer = 0; layer < nLayers; ++layer) {
>                 auto & nn = input.nn[layer];
>                 auto & b = nn.back();
>                 float y[ySize];
>                 for (std::size_t i = 0; i < nn[0].size(); ++i) {
>                         y[i] = b[i];
>                         for (std::size_t j = 0; j < nn.size() - 1; ++j)
>                                 y[i] += nn.at(j).at(i) * yNorm[j];
>                 }
>                 for (std::size_t i = 0; i < ySize; ++i) {
>                         if (layer < nLayers - 1)
>                                 y[i] = std::max(y[i], 0.0f);
>                         yNorm[i] = y[i];
>                 }
>         }
>
>
> If I were better at godbolt, I could show the asm, but I'm not. I'm
> willing to learn, though.

A compilable testcase would be more useful - just file a bugzilla.

Richard.
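
For reference, a self-contained version of the quoted snippet (the kind of compilable testcase asked for above) might look roughly like the sketch below. The Input layout, the layer widths, and the random test data are assumptions invented here for illustration, and the stack arrays are replaced by std::vector; treat it as a starting point, not the poster's actual code.

    // Self-contained sketch of the quoted loop nest.
    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <vector>

    struct Input {
        // One entry per layer; within a layer, rows 0..n-2 hold the weights
        // (indexed [input][output]) and the last row holds the biases.
        std::vector<std::vector<std::vector<float>>> nn;
    };

    float run(Input const & input, std::vector<float> const & xNorm)
    {
        std::size_t const xSize   = xNorm.size();
        std::size_t const nLayers = input.nn.size();

        std::size_t ySize = 0;
        for (auto const & nn : input.nn)
            ySize = std::max(ySize, nn.size());

        std::vector<float> yNorm(ySize, 0.0f);
        for (std::size_t i = 0; i < xSize; ++i)
            yNorm[i] = xNorm[i];

        for (std::size_t layer = 0; layer < nLayers; ++layer) {
            auto const & nn = input.nn[layer];
            auto const & b  = nn.back();
            std::vector<float> y(ySize, 0.0f);
            for (std::size_t i = 0; i < nn[0].size(); ++i) {
                y[i] = b[i];
                for (std::size_t j = 0; j < nn.size() - 1; ++j)
                    y[i] += nn.at(j).at(i) * yNorm[j];
            }
            for (std::size_t i = 0; i < ySize; ++i) {
                if (layer < nLayers - 1)
                    y[i] = std::max(y[i], 0.0f);
                yNorm[i] = y[i];
            }
        }
        return yNorm[0];
    }

    int main()
    {
        std::mt19937 gen(42);
        std::uniform_real_distribution<float> dist(-1.0f, 1.0f);

        // Made-up layer widths, just large enough to exercise the loops.
        std::vector<std::size_t> const widths { 8, 16, 16, 4 };

        Input input;
        for (std::size_t l = 0; l + 1 < widths.size(); ++l) {
            std::vector<std::vector<float>> layer(
                widths[l] + 1, std::vector<float>(widths[l + 1]));
            for (auto & row : layer)
                for (auto & w : row)
                    w = dist(gen);
            input.nn.push_back(layer);
        }

        std::vector<float> xNorm(widths[0]);
        for (auto & x : xNorm)
            x = dist(gen);

        return run(input, xNorm) > 1.0f;   // use the result so it is not optimized away
    }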


Thread overview: 20+ messages
2018-06-06 15:57 Paul Menzel
2018-06-06 16:14 ` Joel Sherrill
2018-06-06 16:20   ` Paul Menzel
2018-06-20 22:42   ` NightStrike
2018-06-21  9:20     ` Richard Biener [this message]
2018-06-22  0:48     ` Steve Ellcey
2018-06-06 16:22 ` Bin.Cheng
2018-06-06 18:31 ` Dmitry Mikushin
2018-06-06 21:10   ` Ryan Burn
2018-06-07 10:02     ` Richard Biener
2018-06-06 22:43   ` Zan Lynx
2018-06-07  9:54     ` Richard Biener
2018-06-07 10:06 ` Richard Biener
2018-06-08 22:08   ` Steve Ellcey
2018-06-09 15:32     ` Marc Glisse
2018-06-11 14:50     ` Martin Jambor
2018-06-22 22:41       ` Szabolcs Nagy
2018-06-15 11:48 Wilco Dijkstra
2018-06-15 17:03 ` Jeff Law
2018-06-15 18:01   ` Joseph Myers
