Subject: Re: [PATCH v2 0/4] malloc: Improve Huge Page support
From: Adhemerval Zanella
To: Siddhesh Poyarekar, libc-alpha@sourceware.org
Cc: Norbert Manthey, Guillaume Morin
Date: Thu, 19 Aug 2021 08:26:45 -0300
Message-ID: <5e37cb66-fd93-5d27-ec7b-28f7cf636246@linaro.org>
References: <20210818142000.128752-1-adhemerval.zanella@linaro.org>

On 18/08/2021 15:11, Siddhesh Poyarekar wrote:
> On 8/18/21 7:49 PM, Adhemerval Zanella wrote:
>> Linux currently supports two ways to use Huge Pages: either by using
>> specific flags directly with the syscall (MAP_HUGETLB for mmap(), or
>> SHM_HUGETLB for shmget()), or by using Transparent Huge Pages (THP),
>> where the kernel will try to move allocated anonymous pages to Huge
>> Page blocks transparently to the application.
>>
>> Also, THP currently supports three different modes [1]: 'never',
>> 'madvise', and 'always'.  'never' is self-explanatory and 'always'
>> enables THP for all anonymous memory.  However, 'madvise' is still
>> the default on some systems, and in that case THP is only used if the
>> memory range is explicitly advised by the program through a
>> madvise(MADV_HUGEPAGE) call.
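To make the 'madvise' policy concrete, this is roughly what the opt-in
looks like from the application side (a minimal illustrative sketch, not
code from the patch; the kernel is still free to ignore the advice):

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int
main (void)
{
  size_t sz = 16 * 1024 * 1024;  /* Large enough to span huge pages.  */

  /* Anonymous mapping; under the 'madvise' policy it starts out backed
     by regular pages.  */
  void *p = mmap (NULL, sz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return EXIT_FAILURE;

  /* Advise the kernel to back this range with transparent huge pages.
     This is only a hint; it may or may not be honored.  */
  if (madvise (p, sz, MADV_HUGEPAGE) != 0)
    perror ("madvise");

  /* Touch the memory so pages are actually faulted in.  */
  for (size_t i = 0; i < sz; i += 4096)
    ((char *) p)[i] = 1;

  munmap (p, sz);
  return 0;
}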
>>
>> This patchset adds two new tunables to improve malloc() support for
>> Huge Pages:
>
> I wonder if this could be done with just the one tunable,
> glibc.malloc.hugepages, where:
>
> 0: Disabled (default)
> 1: Transparent, where we emulate "always" behaviour of THP
> 2: HugeTLB enabled with default hugepage size
> <size>: HugeTLB enabled with the specified page size

I thought about it, and decided to use two tunables because, although
for the mmap() system allocation the two are mutually exclusive (it does
not make sense to madvise() a mmap(MAP_HUGETLB) mapping), we still use
sbrk() on the main arena.  The way I did it for sbrk() is to align the
increment to the THP page size advertised by the kernel, so the tunable
does change the behavior slightly (it is not as 'transparent' as the
madvise call).

So using only one tunable would require either dropping the sbrk()
madvise when MAP_HUGETLB is used, moving it to another tunable value
(say '3: HugeTLB enabled with default hugepage size and madvise() on
sbrk()'), or assuming it whenever huge pages should be used (and how do
we handle sbrk() with an explicit size?).

If one tunable is preferable I think it would be something like:

0: Disabled (default)
1: Transparent, where we emulate the "always" behaviour of THP;
   sbrk() is also aligned to the huge page size and madvise() is issued
2: HugeTLB enabled with the default hugepage size, with sbrk() handled
   as in 1
<size>: HugeTLB enabled with the specified page size, with sbrk()
   handled as in 1

Forcing the sbrk() alignment and madvise() for all tunable values sets
the expectation that huge pages are used in every possible occasion.

>
> When using HugeTLB, we don't really need to bother with THP, so they
> seem mutually exclusive.
>
>>
>>    - glibc.malloc.thp_madvise: instruct the system allocator to issue
>>      a madvise(MADV_HUGEPAGE) call after a mmap() one, for sizes
>>      larger than the default huge page size.  The default behavior is
>>      to disable it, and if the system does not support THP the tunable
>>      also does not enable the madvise() call.
>>
>>    - glibc.malloc.mmap_hugetlb: instruct the system allocator to round
>>      allocations up to huge page sizes and use the required flags
>>      (MAP_HUGETLB for Linux).  If the memory allocation fails, the
>>      default system page size is used instead.  The default behavior
>>      is to disable it, and a value of 1 uses the default system huge
>>      page size.  A value larger than 1 means to use a specific huge
>>      page size, which is matched against the ones supported by the
>>      system.
>>
>> The 'thp_madvise' tunable also changes the sbrk() usage by malloc
>> on main arenas, where the increment is now aligned to the huge page
>> size instead of the default page size.
>>
>> The 'mmap_hugetlb' tunable aims to replace the 'morecore' callback
>> removed in 2.34 for libhugetlbfs (where the library tries to leverage
>> huge pages by providing a system allocator).  By implementing the
>> support directly in the mmap() code path there is no need to emulate
>> the morecore()/sbrk() semantics, which simplifies the code and makes
>> the memory shrink logic more straightforward.
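As a rough illustration of the 'mmap_hugetlb' fallback behaviour
described above (a sketch only; the function name is made up and this is
not the actual sysmalloc code):

#include <stddef.h>
#include <sys/mman.h>

/* Try to map SIZE bytes backed by huge pages; fall back to the default
   page size if the huge page pool is exhausted or HugeTLB is not
   available.  SIZE is assumed to be already rounded up to a multiple
   of the huge page size, as the tunable description requires.  */
static void *
mmap_hugetlb_with_fallback (size_t size)
{
  void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
  if (p != MAP_FAILED)
    return p;

  /* The huge page allocation failed; retry with normal pages.  */
  p = mmap (NULL, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return p == MAP_FAILED ? NULL : p;
}

int
main (void)
{
  /* 4 MiB: a multiple of the common 2 MiB default huge page size.  */
  void *p = mmap_hugetlb_with_fallback (4 * 1024 * 1024);
  return p == NULL;
}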
>>
>> The performance improvements are really dependent on the workload
>> and the platform; however, a simple testcase can show the possible
>> improvements:
>
> A simple test like below in benchtests would be very useful to at
> least get an initial understanding of the behaviour differences with
> different tunable values.  Later those who care can add more relevant
> workloads.

Yeah, I am open to suggestions on how to properly test it.  The issue
is that we need a specific system configuration, either proper kernel
support (THP) or reserved large pages, to actually test it.

For THP the issue is that it is really 'transparent' to the user, which
means that we would need to poke at specific Linux sysfs information to
check whether huge pages are being used.  And we might not get the
expected answer depending on the system load and memory utilization
(the advised pages might not be moved to large pages if there is not
sufficient memory).
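For illustration only, this is the kind of check involved (a sketch, not
a proposed benchtest): scan /proc/self/smaps and sum the per-mapping
AnonHugePages counters to see how much of the process is currently
backed by transparent huge pages.

#include <stdio.h>

/* Return the amount of anonymous memory (in kB) currently backed by
   transparent huge pages, or -1 on error, by scanning the per-mapping
   AnonHugePages counters in /proc/self/smaps.  */
static long
anon_hugepages_kb (void)
{
  FILE *f = fopen ("/proc/self/smaps", "r");
  if (f == NULL)
    return -1;

  char line[256];
  long total = 0;
  while (fgets (line, sizeof line, f) != NULL)
    {
      long kb;
      if (sscanf (line, "AnonHugePages: %ld kB", &kb) == 1)
        total += kb;
    }
  fclose (f);
  return total;
}

int
main (void)
{
  printf ("AnonHugePages: %ld kB\n", anon_hugepages_kb ());
  return 0;
}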
>
>> $ cat hugepages.cc
>> #include <unordered_map>
>>
>> int
>> main (int argc, char *argv[])
>> {
>>    std::size_t iters = 10000000;
>>    std::unordered_map<std::size_t, std::size_t> ht;
>>    ht.reserve (iters);
>>    for (std::size_t i = 0; i < iters; ++i)
>>      ht.try_emplace (i, i);
>>
>>    return 0;
>> }
>> $ g++ -std=c++17 -O2 hugepages.cc -o hugepages
>>
>> On a x86_64 (Ryzen 9 5900X):
>>
>>   Performance counter stats for 'env GLIBC_TUNABLES=glibc.malloc.thp_madvise=0 ./testrun.sh ./hugepages':
>>
>>              98,874      faults
>>             717,059      dTLB-loads
>>             411,701      dTLB-load-misses          #   57.42% of all dTLB cache accesses
>>           3,754,927      cache-misses              #    8.479 % of all cache refs
>>          44,287,580      cache-references
>>
>>         0.315278378 seconds time elapsed
>>
>>         0.238635000 seconds user
>>         0.076714000 seconds sys
>>
>>   Performance counter stats for 'env GLIBC_TUNABLES=glibc.malloc.thp_madvise=1 ./testrun.sh ./hugepages':
>>
>>               1,871      faults
>>             120,035      dTLB-loads
>>              19,882      dTLB-load-misses          #   16.56% of all dTLB cache accesses
>>           4,182,942      cache-misses              #    7.452 % of all cache refs
>>          56,128,995      cache-references
>>
>>         0.262620733 seconds time elapsed
>>
>>         0.222233000 seconds user
>>         0.040333000 seconds sys
>>
>> On an AArch64 (Cortex-A72):
>>
>>   Performance counter stats for 'env GLIBC_TUNABLES=glibc.malloc.thp_madvise=0 ./testrun.sh ./hugepages':
>>
>>               98835      faults
>>          2007234756      dTLB-loads
>>             4613669      dTLB-load-misses          #    0.23% of all dTLB cache accesses
>>             8831801      cache-misses              #    0.504 % of all cache refs
>>          1751391405      cache-references
>>
>>         0.616782575 seconds time elapsed
>>
>>         0.460946000 seconds user
>>         0.154309000 seconds sys
>>
>>   Performance counter stats for 'env GLIBC_TUNABLES=glibc.malloc.thp_madvise=1 ./testrun.sh ./hugepages':
>>
>>                 955      faults
>>          1787401880      dTLB-loads
>>              224034      dTLB-load-misses          #    0.01% of all dTLB cache accesses
>>             5480917      cache-misses              #    0.337 % of all cache refs
>>          1625937858      cache-references
>>
>>         0.487773443 seconds time elapsed
>>
>>         0.440894000 seconds user
>>         0.046465000 seconds sys
>>
>> And on a powerpc64 (POWER8):
>>
>>   Performance counter stats for 'env GLIBC_TUNABLES=glibc.malloc.thp_madvise=0 ./testrun.sh ./hugepages':
>>
>>                5453      faults
>>                9940      dTLB-load-misses
>>             1338152      cache-misses              #    0.101 % of all cache refs
>>          1326037487      cache-references
>>
>>         1.056355887 seconds time elapsed
>>
>>         1.014633000 seconds user
>>         0.041805000 seconds sys
>>
>>   Performance counter stats for 'env GLIBC_TUNABLES=glibc.malloc.thp_madvise=1 ./testrun.sh ./hugepages':
>>
>>                1016      faults
>>                1746      dTLB-load-misses
>>              399052      cache-misses              #    0.030 % of all cache refs
>>          1316059877      cache-references
>>
>>         1.057810501 seconds time elapsed
>>
>>         1.012175000 seconds user
>>         0.045624000 seconds sys
>>
>> It is worth noting that the powerpc64 machine has 'always' set in
>> '/sys/kernel/mm/transparent_hugepage/enabled'.
>>
>> Norbert Manthey's paper has more information with a more thorough
>> performance analysis.
>>
>> For testing, I ran make check on x86_64-linux-gnu with thp_pagesize=1
>> (set directly in ptmalloc_init() after tunable initialization) and
>> with mmap_hugetlb=1 (also set directly in ptmalloc_init()), both with
>> about 10 large pages (so the fallback mmap() call is used) and with
>> 1024 large pages (so all mmap(MAP_HUGETLB) calls are successful).
>
> You could add tests similar to mcheck and malloc-check, i.e. add
> $(tests-hugepages) to run all malloc tests again with the various
> tunable values.  See tests-mcheck for example.

Ok, I can work with this.  This might not add much if the system is not
configured with either THP or a huge page pool, but at least it adds
some coverage.

>
>> --
>>
>> Changes from previous version:
>>
>>    - Renamed thp_pagesize to thp_madvise and made it a boolean state.
>>    - Added MAP_HUGETLB support for mmap().
>>    - Removed system-specific hooks for the THP huge page size in
>>      favor of a Linux generic implementation.
>>    - Initial program segments need to be page aligned for the
>>      first madvise call.
>>
>> Adhemerval Zanella (4):
>>    malloc: Add madvise support for Transparent Huge Pages
>>    malloc: Add THP/madvise support for sbrk
>>    malloc: Move mmap logic to its own function
>>    malloc: Add Huge Page support for sysmalloc
>>
>>   NEWS                                       |   9 +-
>>   elf/dl-tunables.list                       |   9 +
>>   elf/tst-rtld-list-tunables.exp             |   2 +
>>   include/libc-pointer-arith.h               |  10 +
>>   malloc/arena.c                             |   7 +
>>   malloc/malloc-internal.h                   |   1 +
>>   malloc/malloc.c                            | 263 +++++++++++++++------
>>   manual/tunables.texi                       |  23 ++
>>   sysdeps/generic/Makefile                   |   8 +
>>   sysdeps/generic/malloc-hugepages.c         |  37 +++
>>   sysdeps/generic/malloc-hugepages.h         |  49 ++++
>>   sysdeps/unix/sysv/linux/malloc-hugepages.c | 201 ++++++++++++++++
>>   12 files changed, 542 insertions(+), 77 deletions(-)
>>   create mode 100644 sysdeps/generic/malloc-hugepages.c
>>   create mode 100644 sysdeps/generic/malloc-hugepages.h
>>   create mode 100644 sysdeps/unix/sysv/linux/malloc-hugepages.c
>