* Embedded Hardened Pointer Table Cache for 3D Chips : RS
@ 2023-09-02 10:06 Duke Abbaddon
From: Duke Abbaddon @ 2023-09-02 10:06 UTC (permalink / raw)
  To: gimp

Embedded Hardened Pointer Table Cache for 3D Chips : RS

Based on PCI Edge RAM, Internal Loop Dynamic RAM; with internalised
DMA memory transfers.

In the process the feature has the ability to set a page table: 1MB,
2MB, 4MB, 16MB, up to 1TB. The RAM can be internally written to
without invoking the ALU or the OS,

Pages are allocated; the GPU is an example: physical pages are
allocated in RAM that is directly set by OS & Firmware/ROM
parameters...

Internal access to the RAM is set within the page allocation set, but
all internal mapping & paging is done directly & through the ALU &
Memory Management Unit (MMU).

With 1MB of Cache set aside per feature; not entirely unreasonable these days...

Most of a process such as SIMD can be carried out on internal loops..

Depending on Cache/RAM Space; Based on PCI Edge RAM

Internal DataSet Size is based on a Dynamic RAM Variable; that is set per
use and/or per settings or application,

That being said, RAM allocations are best made per session & directly after
a setting is changed: on reboot or refresh, load & unload cycling.
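
A minimal C sketch of how such a per-feature reservation might be
described; the structure, field names and size table are illustrative
assumptions, not an existing API:

/* Illustrative only: a per-feature embedded page-table reservation.
   All names and sizes are assumptions for discussion, not a real API. */
#include <stddef.h>
#include <stdint.h>

enum page_size {                /* selectable page granularities */
    PAGE_1MB  = 1u << 20,
    PAGE_2MB  = 1u << 21,
    PAGE_4MB  = 1u << 22,
    PAGE_16MB = 1u << 24
};

struct feature_cache {
    uintptr_t base;             /* physical base set by OS / Firmware-ROM parameters */
    size_t    page_size;        /* one of enum page_size */
    size_t    reserved;         /* e.g. 1MB set aside per feature */
};

/* Writes stay inside the reserved window, so no OS round trip is needed;
   assumes reserved >= page_size. */
static inline void *feature_slot(const struct feature_cache *fc, size_t index)
{
    size_t slots = fc->reserved / fc->page_size;
    return (void *)(fc->base + (index % slots) * fc->page_size);
}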

Rupert S

*

Temporary HardLinking in prefetching Matrix instructions;
Gather/Scatter operations for localised random scattering of
information to RAM & its retrieval

Gather
for (i = 0; i < N; ++i)
    x[i] = y[idx[i]];

Scatter
for (i = 0; i < N; ++i)
    y[idx[i]] = x[i];
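
For reference, the two loops above written out as self-contained C
functions; the element and index types are my own choice for
illustration:

#include <stddef.h>

/* Gather: pull scattered elements of y together into contiguous x. */
void gather(double *x, const double *y, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        x[i] = y[idx[i]];
}

/* Scatter: spread contiguous x back out to the positions named by idx. */
void scatter(double *y, const double *x, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[idx[i]] = x[i];
}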

Firstly I read about statistical gathering & Seeding; Pre-Fetching is a method
of anticipating & preloading data.
So what do I want to do? In Vector Matrix Prefetch: Logical Gather

Potentially i would like to use:

Softlink (ram retrieval & multiple value)
HardLink (maths)
Prefetching logic {such as (a sketch follows below),

Run-length prefetching,
Follow & Forward loading Cache,
Entire instruction load & Timing Pre-fetch & Statistics for Loop time &
load frequency
}
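
As a minimal sketch of the "Follow & Forward loading" idea, assuming
GCC's __builtin_prefetch; the look-ahead distance of 8 is an
illustrative guess that would normally be tuned from the loop-time &
load-frequency statistics above:

#include <stddef.h>

#define DIST 8  /* assumed look-ahead distance, to be tuned */

/* Gather with a fixed forward prefetch of the indexed element. */
void gather_prefetch(double *x, const double *y, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        if (i + DIST < n)
            __builtin_prefetch(&y[idx[i + DIST]], 0, 1);  /* read, low locality */
        x[i] = y[idx[i]];
    }
}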

So on any potential layout for a SIMD Matrix, a most likely configuration is:

A B C : FMA
A B = C : Mul or ADD

So a logical statement is: A, B Gather/Seed C; directly logical, AKA Prefetch.
A B C D; logical fields of prefetch are localised to the parameter...

Only likely to draw data from a specific subset of points;
Byte Swapping is obviously A1 B1,2,3
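
As a sketch of the A B C : FMA case with gathered operands (the names
and the use of fma() from <math.h> are my own illustration):

#include <math.h>
#include <stddef.h>

/* a[i] = b[idx[i]] * c[idx[i]] + a[i] : FMA over gathered B and C. */
void gathered_fma(double *a, const double *b, const double *c,
                  const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        a[i] = fma(b[idx[i]], c[idx[i]], a[i]);
}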

Most specifically, if the command is a hardlink with A B C, then most
likely the storage is directly linked; like a HardLink on an HDD in NT,

The hard link is direct value fetching from a specific Var table &
most likely a sorted list!
If the list is not sorted, we are probably sorting the list (a sorting sketch follows below)..
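
A minimal sketch of "sorting the list" before the hard-linked fetch,
using qsort from <stdlib.h>; sorting the index table makes the gather
that follows walk memory in address order (if the original ordering
matters, a permutation table would be kept alongside):

#include <stdlib.h>
#include <stddef.h>

static int cmp_size(const void *a, const void *b)
{
    size_t x = *(const size_t *)a, y = *(const size_t *)b;
    return (x > y) - (x < y);
}

/* Sort the index table so the gather reads the Var table in ascending order. */
void sort_indices(size_t *idx, size_t n)
{
    qsort(idx, n, sizeof idx[0], cmp_size);
}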

If we do not HardLink data in a matrix (Example):

Var = V+n, Table
  a   b   c   d
1[V1][V1][V1][V1]
2[V2][V2][V2][V2]
3[V3][V3][V3][V3]
4[V4][V4][V4][V4]

A Matrix HardLink is a temporary, table-specific, logical reading of
instructions & direct memory load and save,
Registers {A,B,C,D} = v{1,2,3,4}..

Directly read the direct memory table logic & optimise the resulting likely
storage or retrieval locations & Soft Link (pointer table)

Solutions include multiple Gather/Scatter & 'Gather/Scatter Stride'
cube block multi load/save..
Logical Cache Storage History Pointer Table; Group-Sorted RAM
Save/Load by classification {A,B,C,D} = v{1,2,3,4}.
When X + Xa + Xb + Xc, when Y + a b c, when Y or X: Prefetch Pointer
Table + Data { a, b, c }
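
One way to read the Soft Link (pointer table) against the HardLink,
sketched with illustrative names: the registers {A,B,C,D} hold
pointers into the Var table rather than copies, so a later change to
the table is still seen through the link:

#include <stddef.h>

/* Var table v1..v4. */
double var_table[4] = { 1.0, 2.0, 3.0, 4.0 };

/* Soft link: registers {A,B,C,D} = v{1,2,3,4} held as pointers. */
double *reg[4] = { &var_table[0], &var_table[1], &var_table[2], &var_table[3] };

/* Hard link, by contrast: fetch the values out of the table directly. */
double hard_copy[4];
void hardlink_copy(void)
{
    for (size_t i = 0; i < 4; ++i)
        hard_copy[i] = var_table[i];
}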

Example Gather/Scatter logical multiple

var pointer [p1] {a, b, c, d}
var pointer [p2] {1, 2, 3, 4}

Gather
for (i = 0; i < N; ++i)
    x[i] = y[idx[i]];
fetch y {p1, p2}; {a, b, c, d}:{1, 2, 3, 4}

Scatter
for (i = 0; i < N; ++i)
    y[idx[i]] = x[i];
send x {p1, p2}; {a, b, c, d}:{1, 2, 3, 4}
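
A sketch of the "fetch y {p1, p2}" line, assuming p1 selects the class
{a, b, c, d} (the row) and p2 the element {1, 2, 3, 4} within it; the
row width of 4 is my own illustrative choice:

#include <stddef.h>

/* Gather through two index tables: p1 picks the row, p2 the element. */
void gather2(double *x, const double *y,
             const size_t *p1, const size_t *p2, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        x[i] = y[p1[i] * 4 + p2[i]];   /* row width 4 is an assumption */
}

/* Scatter is the mirror image: send x back through the same pair of tables. */
void scatter2(double *y, const double *x,
              const size_t *p1, const size_t *p2, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[p1[i] * 4 + p2[i]] = x[i];
}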

Rupert S : Reference
https://en.wikipedia.org/wiki/Gather/scatter_(vector_addressing)

*

Pre-Fetching; Statistically Ordered Gather/Scatter & The Scatter/Gather Commands

(SiMD) The gather/scatter commands may seem particularly random?
But we can use this in machine learning:

Gather
The equivalent of Gathering a group of factors or memories into a
group & thinking about them in the context of our code! (our thought
rules),

Scatter
Now if we think about scatter, we have to limit the radius of our
thought to a small area of brain matter (or RAM)... Or the process
will leave us "Scatter-Brained"

Statistical Pre-Fetching:

Ordered Scatter
When you know approximately where to scatter

Ordered Gather
When you know approximately where to gather
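
A sketch of an ordered scatter, where every destination is known to
fall inside one bounded field of RAM; the base and field size are
assumed parameters:

#include <stddef.h>

/* Scatter restricted to a single field of `field` elements at `base`;
   assumes field > 0. */
void ordered_scatter(double *y, const double *x, const size_t *idx,
                     size_t n, size_t base, size_t field)
{
    for (size_t i = 0; i < n; ++i)
        y[base + idx[i] % field] = x[i];   /* stays inside the chosen field */
}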

Free Thought
So now we can associate scatter & gather as a form of free thought?
Yes but chaotic...
So we add order to that chaos! We limit the scattering to a single field.

Stride
Stride is the equivalent of following a line in the field; do we also
gather and/or scatter while we stride?
Do we simply stride a field?
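
Stride needs no index table at all; a minimal sketch, with the stride
value left as a parameter:

#include <stddef.h>

/* Strided gather: follow one line through the field at a fixed step. */
void gather_stride(double *x, const double *y, size_t n, size_t stride)
{
    for (size_t i = 0; i < n; ++i)
        x[i] = y[i * stride];
}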

Now to answer this question we simply have to denote motive!
In seeding we can scatter; will we do better with an Ordered Scatter?
Yes we could!

Statistically Ordered Gather/Scatter & The Scatter/Gather Commands
Pre-Fetched

Rupert S


********

When you need to upscale or sort databases: Matrix-Blas_Libs-Compile
https://is.gd/HPC_HIP_CUDA Excellence born! Look to the links for Python
machine learning configurations in HPC, Linux, Android & Windows;
Python deep learning configurations:

AndroLinuxML : https://drive.google.com/file/d/1N92h-nHnzO5Vfq1rcJhkF952aZ1PPZGB/view?usp=sharing

Linux : https://drive.google.com/file/d/1u64mj6vqWwq3hLfgt0rHis1Bvdx_o3vL/view?usp=sharing

Windows : https://drive.google.com/file/d/1dVJHPx9kdXxCg5272fPvnpgY8UtIq57p/view?usp=sharing


Reference operators

https://science.n-helix.com/2023/06/map.html
https://science.n-helix.com/2022/10/ml.html

https://science.n-helix.com/2023/02/smart-compression.html

https://science.n-helix.com/2022/08/jit-dongle.html
https://science.n-helix.com/2022/06/jit-compiler.html

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
