From: "Ji, Cheng"
Date: Tue, 6 Jul 2021 16:17:14 +0800
Subject: memcpy performance on skylake server
To: libc-help@sourceware.org

Hello,

During some optimization work I found that memcpy is slower than expected on
Skylake server CPUs, and I can't really explain the results, so I need some
guidance here.

The problem is that memcpy is noticeably slower than a simple for loop when
copying large chunks of data. That genuinely sounds like an amateur mistake
in our testing code, but here is what we have tried:

* The test data is large enough: 1 GiB.
* We noticed a change from a while ago regarding Skylake and AVX512 (see the
  gdb check after this list):
  https://patchwork.ozlabs.org/project/glibc/patch/20170418183712.GA22211@intel.com/
* We updated glibc from 2.17 to the latest 2.33; memcpy did get about 5%
  faster, but it is still slower than the simple loop.
* We tested on multiple bare-metal machines with different CPUs (Xeon Gold
  6132, Gold 6252, Silver 4114) as well as a virtual machine on Google Cloud;
  the result is reproducible.
* On an older-generation Xeon E5-2630 v3, memcpy is about 50% faster than the
  simple loop. On my desktop (i7-7700K) memcpy is also significantly faster.
* numactl is used to ensure everything runs on a single core.
* The code is compiled with gcc 10.3.
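One thing we haven't done yet is confirm which IFUNC memcpy variant the
dynamic linker actually selects on these machines (which is what the Skylake
/ AVX512 change linked above seems to be about, as far as I can tell). I
haven't verified the exact gdb steps or the internal symbol names, but I
believe something along these lines should work:

    gdb ./a.out
    (gdb) start
    (gdb) break memcpy
    (gdb) continue

The name of the function it stops in (e.g. one of the __memmove_*avx*
variants) would tell us which glibc code path is actually being measured.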
The numbers on a Xeon Gold 6132, with glibc 2.33:

simple_memcpy 4.18 seconds, 4.79 GiB/s, 5.02 GB/s
simple_copy   3.68 seconds, 5.44 GiB/s, 5.70 GB/s
simple_memcpy 4.18 seconds, 4.79 GiB/s, 5.02 GB/s
simple_copy   3.68 seconds, 5.44 GiB/s, 5.71 GB/s

The result is worse with the system-provided glibc 2.17:

simple_memcpy 4.38 seconds, 4.57 GiB/s, 4.79 GB/s
simple_copy   3.68 seconds, 5.43 GiB/s, 5.70 GB/s
simple_memcpy 4.38 seconds, 4.56 GiB/s, 4.78 GB/s
simple_copy   3.68 seconds, 5.44 GiB/s, 5.70 GB/s

The code to generate this result (compiled with g++ -O2 -g, run with:
numactl --membind 0 --physcpubind 0 -- ./a.out):

=====
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <functional>
#include <string>
#include <vector>

class TestCase {
    using clock_t = std::chrono::high_resolution_clock;
    using sec_t = std::chrono::duration<double>;

public:
    static constexpr size_t NUM_VALUES = 128 * (1 << 20); // 128 million * 8 bytes = 1GiB

    void init() {
        vals_.resize(NUM_VALUES);
        for (size_t i = 0; i < NUM_VALUES; ++i) {
            vals_[i] = i;
        }
        dest_.resize(NUM_VALUES);
    }

    void run(std::string name,
             std::function<void(const int64_t *, int64_t *, size_t)> &&func) {
        // ignore the result from the first run
        func(vals_.data(), dest_.data(), vals_.size());
        constexpr size_t count = 20;
        auto start = clock_t::now();
        for (size_t i = 0; i < count; ++i) {
            func(vals_.data(), dest_.data(), vals_.size());
        }
        auto end = clock_t::now();
        double duration = std::chrono::duration_cast<sec_t>(end - start).count();
        printf("%s %.2f seconds, %.2f GiB/s, %.2f GB/s\n", name.data(), duration,
               sizeof(int64_t) * NUM_VALUES / double(1 << 30) * count / duration,
               sizeof(int64_t) * NUM_VALUES / double(1e9) * count / duration);
    }

private:
    std::vector<int64_t> vals_;
    std::vector<int64_t> dest_;
};

void simple_memcpy(const int64_t *src, int64_t *dest, size_t n) {
    memcpy(dest, src, n * sizeof(int64_t));
}

void simple_copy(const int64_t *src, int64_t *dest, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        dest[i] = src[i];
    }
}

int main(int, char **) {
    TestCase c;
    c.init();
    c.run("simple_memcpy", simple_memcpy);
    c.run("simple_copy", simple_copy);
    c.run("simple_memcpy", simple_memcpy);
    c.run("simple_copy", simple_copy);
}
=====

The assembly of simple_copy generated by gcc is very simple:

Dump of assembler code for function _Z11simple_copyPKlPlm:
   0x0000000000401440 <+0>:   mov    %rdx,%rcx
   0x0000000000401443 <+3>:   test   %rdx,%rdx
   0x0000000000401446 <+6>:   je     0x401460 <_Z11simple_copyPKlPlm+32>
   0x0000000000401448 <+8>:   xor    %eax,%eax
   0x000000000040144a <+10>:  nopw   0x0(%rax,%rax,1)
   0x0000000000401450 <+16>:  mov    (%rdi,%rax,8),%rdx
   0x0000000000401454 <+20>:  mov    %rdx,(%rsi,%rax,8)
   0x0000000000401458 <+24>:  inc    %rax
   0x000000000040145b <+27>:  cmp    %rax,%rcx
   0x000000000040145e <+30>:  jne    0x401450 <_Z11simple_copyPKlPlm+16>
   0x0000000000401460 <+32>:  retq

When compiling with -O3, gcc vectorizes the loop using xmm0; the simple_copy
loop is then around 1% faster.

I took a brief look at the glibc source code. Though I don't have enough
knowledge to understand it yet, I'm curious about the underlying mechanism.

Thanks.
Cheng
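P.S. One guess about the mechanism, from skimming sysdeps/x86_64/multiarch (I
may well be reading it wrong): for copies much larger than the cache, glibc
appears to switch to non-temporal stores. To check whether that alone explains
the gap, I'm thinking of adding a third variant along these lines to the same
test program (just a sketch, not benchmarked; it needs #include <immintrin.h>
and assumes the std::vector buffers are 16-byte aligned, which they should be
for allocations this large):

=====
// Copy using SSE2 non-temporal (streaming) stores, which bypass the cache.
void streaming_copy(const int64_t *src, int64_t *dest, size_t n) {
    size_t i = 0;
    for (; i + 2 <= n; i += 2) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + i));
        // _mm_stream_si128 requires a 16-byte aligned destination
        _mm_stream_si128(reinterpret_cast<__m128i *>(dest + i), v);
    }
    for (; i < n; ++i) { // scalar tail, not reached with the 1 GiB test size
        dest[i] = src[i];
    }
    _mm_sfence(); // make the streaming stores visible before returning
}
=====

If non-temporal stores are indeed the difference, I would expect this variant
to land close to the memcpy numbers rather than the simple loop.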