From: Duke Abbaddon
Date: Sat, 2 Sep 2023 11:06:23 +0100
Subject: Embedded Hardened Pointer Table Cache for 3D Chips : RS
To: gimp@gnome.org

Embedded Hardened Pointer Table Cache for 3D Chips : RS

Based on PCI Edge RAM and internal-loop dynamic RAM, with internalised DMA memory transfers. The feature can set a page table of 1MB, 2MB, 4MB, 16MB, up to 1TB. The RAM can be written to internally without invoking the ALU or the OS, and pages are allocated.

The GPU is an example: physical pages are allocated in RAM that is set directly by the OS and by firmware/ROM parameters. Internal access to the RAM stays within the allocated page set, but all internal mapping & paging is done directly through the ALU & Memory Management Unit (MMU). With 1MB of cache set aside per feature (not entirely unreasonable these days), most of a process such as SIMD can be carried out on internal loops, depending on cache/RAM space.

The internal dataset size is based on a dynamic RAM variable that is set per use &/or per settings or application. That being said, RAM allocations are best made per session, and remade directly after a setting is changed, on reboot or refresh, with load & unload cycling.
Rupert S

* Temporary HardLinking in Prefetching Matrix Instructions

Gather/Scatter operations cover localised random scattering of information to RAM, and its retrieval:

Gather
for (i = 0; i < N; ++i) x[i] = y[idx[i]];

Scatter
for (i = 0; i < N; ++i) y[idx[i]] = x[i];

Firstly, I read up on statistical gathering & seeding; pre-fetching is a method of anticipating & preloading data. So what do I want to do?

In Vector Matrix Prefetch Logical Gather, I would potentially like to use:

SoftLink (RAM retrieval & multiple values)
HardLink (maths)
Prefetching logic, such as: run-length prefetching; follow & forward-loading cache; entire instruction load & timing pre-fetch, with statistics for loop time & load frequency.

So on any potential layout for a SIMD Matrix, a most likely configuration is:

A B C : FMA
A B = C : Mul or Add

So a logical statement is: A, B Gather/Seed C; directly logical, AKA prefetch A B C D. Logical fields of prefetch are localised to the parameter; they are only likely to draw data from a specific subset of points. Byte swapping is obviously A1 B1,2,3, most specifically if the command is a HardLink.

With A B C, storage is most likely directly linked, like a hard link on an NT HDD: the hard link is direct value fetching from a specific Var table, and most likely a sorted list! If the list is not sorted, we are probably sorting the list.

If we do not HardLink data in a matrix (example): Var = V+n

Table   a     b     c     d
1     [V1]  [V1]  [V1]  [V1]
2     [V2]  [V2]  [V2]  [V2]
3     [V3]  [V3]  [V3]  [V3]
4     [V4]  [V4]  [V4]  [V4]

A Matrix HardLink is a temporary, table-specific logical reading of instructions with direct memory load and save: Registers {A,B,C,D} = v{1,2,3,4}. Directly read the direct-memory table logic, optimise the resulting likely storage or retrieval locations, & SoftLink (pointer table).

Solutions include multiple Gather/Scatter & 'Gather/Scatter Stride' cube-block multi load/save.
* Logical Cache Storage History Pointer Table

Group-sorted RAM save/load by classification: {A,B,C,D} = v{1,2,3,4}

When X + Xa + Xb + Xc,
When Y + a b c,
When Y or X: Prefetch Pointer Table + Data {a, b, c}

Example: Gather/Scatter with logical multiple-variable pointers

var pointer [p1] {a, b, c, d}
var pointer [p2] {1, 2, 3, 4}

Gather
for (i = 0; i < N; ++i) x[i] = y[idx[i]];
fetch y {p1, p2}; {a, b, c, d}:{1, 2, 3, 4}

Scatter
for (i = 0; i < N; ++i) y[idx[i]] = x[i];
send x {p1, p2}; {a, b, c, d}:{1, 2, 3, 4}

Rupert S

Reference: https://en.wikipedia.org/wiki/Gather/scatter_(vector_addressing)

* Pre-Fetching; Statistically Ordered Gather/Scatter & The Scatter/Gather Commands (SIMD)

The gather/scatter commands may seem particularly random, but we can use this in machine learning:

Gather: the equivalent of gathering a group of factors or memories into a group & thinking about them in the context of our code (our thought rules).

Scatter: if we think about scatter, we have to limit the radius of our thought to a small area of brain matter (or RAM), or the process will leave us "scatter-brained".

Statistical Pre-Fetching:

Ordered Scatter: when you know approximately where to scatter.
Ordered Gather: when you know approximately where to gather.

Free Thought: so can we associate scatter & gather as a form of free thought? Yes, but chaotic... so we add order to that chaos! We limit the scattering to a single field.

Stride: stride is the equivalent of following a line in the field. Do we also gather &/or scatter while we stride? Do we simply stride a field? To answer this we simply have to denote motive! In seeding we can scatter; will we do better with an Ordered Scatter? Yes we could!

Statistically Ordered Gather/Scatter & The Scatter/Gather Commands, Pre-Fetched

Rupert S

********

When you need to upscale or sort databases: Matrix-Blas_Libs-Compile https://is.gd/HPC_HIP_CUDA Excellence born!
Look to the links for Python machine learning configurations in HPC, Linux, Android & Windows.

Python Deep Learning configurations:

AndroLinuxML : https://drive.google.com/file/d/1N92h-nHnzO5Vfq1rcJhkF952aZ1PPZGB/view?usp=sharing
Linux : https://drive.google.com/file/d/1u64mj6vqWwq3hLfgt0rHis1Bvdx_o3vL/view?usp=sharing
Windows : https://drive.google.com/file/d/1dVJHPx9kdXxCg5272fPvnpgY8UtIq57p/view?usp=sharing

Matrix-Blas_Libs-Compile https://is.gd/HPC_HIP_CUDA Excellence

Reference operators:

https://science.n-helix.com/2023/06/map.html
https://science.n-helix.com/2022/10/ml.html
https://science.n-helix.com/2023/02/smart-compression.html
https://science.n-helix.com/2022/08/jit-dongle.html
https://science.n-helix.com/2022/06/jit-compiler.html
https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html