Subject: Re: memcpy performance on skylake server
To: "Ji, Cheng", Libc-help, "H.J. Lu"
From: Adhemerval Zanella
Message-ID: <6ee56912-dbe1-181e-6981-8d286c0325f3@linaro.org>
Date: Wed, 14 Jul 2021 09:58:35 -0300

On 06/07/2021 05:17, Ji, Cheng via Libc-help wrote:
> Hello,
> 
> I found that memcpy is slower on skylake server CPUs during our
> optimization work, and I can't really explain what we got and need some
> guidance here.
> 
> The problem is that memcpy is noticeably slower than a simple for loop when
> copying large chunks of data. This genuinely sounds like an amateur mistake
> in our testing code but here's what we have tried:
> 
> * The test data is large enough: 1GB.
> * We noticed a change quite a while ago regarding skylake and AVX512:
> https://patchwork.ozlabs.org/project/glibc/patch/20170418183712.GA22211@intel.com/
> * We updated glibc from 2.17 to the latest 2.33, we did see memcpy is 5%
> faster but still slower than a simple loop.
> * We tested on multiple bare metal machines with different cpus: Xeon Gold
> 6132, Gold 6252, Silver 4114, as well as a virtual machine on google cloud,
> the result is reproducible.
> * On an older generation Xeon E5-2630 v3, memcpy is about 50% faster than
> the simple loop. On my desktop (i7-7700k) memcpy is also significantly
> faster.
> * numactl is used to ensure everything is running on a single core.
> * The code is compiled by gcc 10.3
> 
> The numbers on a Xeon Gold 6132, with glibc 2.33:
> simple_memcpy 4.18 seconds, 4.79 GiB/s 5.02 GB/s
> simple_copy 3.68 seconds, 5.44 GiB/s 5.70 GB/s
> simple_memcpy 4.18 seconds, 4.79 GiB/s 5.02 GB/s
> simple_copy 3.68 seconds, 5.44 GiB/s 5.71 GB/s
> 
> The result is worse with system provided glibc 2.17:
> simple_memcpy 4.38 seconds, 4.57 GiB/s 4.79 GB/s
> simple_copy 3.68 seconds, 5.43 GiB/s 5.70 GB/s
> simple_memcpy 4.38 seconds, 4.56 GiB/s 4.78 GB/s
> simple_copy 3.68 seconds, 5.44 GiB/s 5.70 GB/s
> 
> The code to generate this result (compiled with g++ -O2 -g, run with: numactl
> --membind 0 --physcpubind 0 -- ./a.out)
> =====
> 
> #include <chrono>
> #include <cstdint>
> #include <cstdio>
> #include <cstring>
> #include <functional>
> #include <string>
> #include <vector>
> 
> class TestCase {
>     using clock_t = std::chrono::high_resolution_clock;
>     using sec_t = std::chrono::duration<double>;
> 
> public:
>     static constexpr size_t NUM_VALUES = 128 * (1 << 20); // 128 million * 8 bytes = 1GiB
> 
>     void init() {
>         vals_.resize(NUM_VALUES);
>         for (size_t i = 0; i < NUM_VALUES; ++i) {
>             vals_[i] = i;
>         }
>         dest_.resize(NUM_VALUES);
>     }
> 
>     void run(std::string name, std::function<void(const int64_t *, int64_t *, size_t)> &&func) {
>         // ignore the result from the first run
>         func(vals_.data(), dest_.data(), vals_.size());
>         constexpr size_t count = 20;
>         auto start = clock_t::now();
>         for (size_t i = 0; i < count; ++i) {
>             func(vals_.data(), dest_.data(), vals_.size());
>         }
>         auto end = clock_t::now();
>         double duration = std::chrono::duration_cast<sec_t>(end - start).count();
>         printf("%s %.2f seconds, %.2f GiB/s, %.2f GB/s\n", name.data(), duration,
>                sizeof(int64_t) * NUM_VALUES / double(1 << 30) * count / duration,
>                sizeof(int64_t) * NUM_VALUES / double(1e9) * count / duration);
>     }
> 
> private:
>     std::vector<int64_t> vals_;
>     std::vector<int64_t> dest_;
> };
> 
> void simple_memcpy(const int64_t *src, int64_t *dest, size_t n) {
>     memcpy(dest, src, n * sizeof(int64_t));
> }
> 
> void simple_copy(const int64_t *src, int64_t *dest, size_t n) {
>     for (size_t i = 0; i < n; ++i) {
>         dest[i] = src[i];
>     }
> }
> 
> int main(int, char **) {
>     TestCase c;
>     c.init();
> 
>     c.run("simple_memcpy", simple_memcpy);
>     c.run("simple_copy", simple_copy);
>     c.run("simple_memcpy", simple_memcpy);
>     c.run("simple_copy", simple_copy);
> }
> 
> =====
> 
> The assembly of simple_copy generated by gcc is very simple:
> Dump of assembler code for function _Z11simple_copyPKlPlm:
>    0x0000000000401440 <+0>:  mov    %rdx,%rcx
>    0x0000000000401443 <+3>:  test   %rdx,%rdx
>    0x0000000000401446 <+6>:  je     0x401460 <_Z11simple_copyPKlPlm+32>
>    0x0000000000401448 <+8>:  xor    %eax,%eax
>    0x000000000040144a <+10>: nopw   0x0(%rax,%rax,1)
>    0x0000000000401450 <+16>: mov    (%rdi,%rax,8),%rdx
>    0x0000000000401454 <+20>: mov    %rdx,(%rsi,%rax,8)
>    0x0000000000401458 <+24>: inc    %rax
>    0x000000000040145b <+27>: cmp    %rax,%rcx
>    0x000000000040145e <+30>: jne    0x401450 <_Z11simple_copyPKlPlm+16>
>    0x0000000000401460 <+32>: retq
> 
> When compiling with -O3, gcc vectorized the loop using xmm0, the
> simple_loop is around 1% faster.

Usually differences of that magnitude fall within measurement noise, or may
be something related to OS jitter.
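One way to tell a ~1% difference apart from that kind of jitter is to time
each iteration separately and compare the minimum (and the spread) across
iterations rather than a single 20-iteration total.  A minimal sketch of what
I mean (the helper below is only illustrative, it is not part of your
benchmark):

#include <algorithm>
#include <chrono>
#include <cstddef>

// Minimal sketch: time each iteration on its own and keep the fastest one,
// which is the measurement least affected by interrupts and migrations.
template <typename Func>
double best_of(std::size_t runs, Func &&func) {
    double best = 1e300;
    for (std::size_t i = 0; i < runs; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        func();
        auto t1 = std::chrono::steady_clock::now();
        best = std::min(best, std::chrono::duration<double>(t1 - t0).count());
    }
    return best;
}

If the per-iteration minima of simple_memcpy and simple_copy still differ by
the same amount, the gap is probably real rather than scheduling noise.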
> 
> I took a brief look at the glibc source code. Though I don't have enough
> knowledge to understand it yet, I'm curious about the underlying mechanism.
> Thanks.

H.J., do you have any idea what might be happening here?
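As for the underlying mechanism, I will let H.J. confirm, but for copies this
large the x86_64 memcpy implementations in glibc switch to non-temporal
(streaming) stores above a cache-size-derived threshold, and those can behave
quite differently from a plain load/store loop depending on the machine.  A
rough illustration of that kind of copy (illustrative only, this is not
glibc's actual code; it assumes an AVX-capable CPU and a 32-byte aligned
destination):

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

// Illustrative only -- not glibc's actual code.  Copies with non-temporal
// (streaming) stores, which write around the cache.  Assumes an AVX-capable
// CPU (compile with -mavx) and a 32-byte aligned destination.
void streaming_copy(const int64_t *src, int64_t *dest, size_t n) {
    size_t bytes = n * sizeof(int64_t);
    size_t i = 0;
    for (; i + 32 <= bytes; i += 32) {
        __m256i v = _mm256_loadu_si256(
            reinterpret_cast<const __m256i *>(reinterpret_cast<const char *>(src) + i));
        _mm256_stream_si256(
            reinterpret_cast<__m256i *>(reinterpret_cast<char *>(dest) + i), v);
    }
    for (; i < bytes; ++i)   // copy any remaining tail bytes
        reinterpret_cast<char *>(dest)[i] = reinterpret_cast<const char *>(src)[i];
    _mm_sfence();            // order streaming stores before later accesses
}

If I recall correctly, on 2.33 the crossover point can be experimented with
through the glibc.cpu.x86_non_temporal_threshold tunable (GLIBC_TUNABLES),
which may be a quicker way to compare the two paths than reading the
generated assembly.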