From: Duke Abbaddon <duke.abbaddon@gmail.com>
To: press@google.com
Subject: Inferencing 4Bit, lessons from the RS, Now in the case study we will be edge enhancing with an inferencer..
Date: Wed, 3 Jan 2024 05:10:35 +0000 [thread overview]
Message-ID: <CAHpNFcP4UP+FVdezGHsyY_BNA+pMRrOv5ndyH0Uu9rgamT0JVQ@mail.gmail.com> (raw)
We do not assume 4-bit inference; we assume any bit width.
We do, however, assume that every inference is packed multibyte, so that
we can fill each instruction with
MPi: multibyte parallel instructions, paired as
AC
BD
EG
FH
& so on; for every instruction, inference, or edge: 4-bit, 8-bit, up to N-bit.
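As a minimal sketch of the MPi packing idea above (names and pairings are illustrative, not a real instruction set): two 4-bit values share one byte, so a single byte-wide operation touches two lanes at once, as in the pairs AC and BD.

```python
# Hypothetical sketch: pack pairs of 4-bit inference values into single
# bytes so one byte-wide operation carries two lanes at once.

def pack_nibbles(lo, hi):
    """Pack two 4-bit values (0..15) into one byte: hi in the upper nibble."""
    assert 0 <= lo < 16 and 0 <= hi < 16
    return (hi << 4) | lo

def unpack_nibbles(byte):
    """Split one byte back into its (lo, hi) 4-bit lanes."""
    return byte & 0x0F, (byte >> 4) & 0x0F

# Pairs as in the text: A with C, B with D.
A, B, C, D = 3, 5, 7, 9
ac = pack_nibbles(A, C)   # one byte now carries both A and C
bd = pack_nibbles(B, D)
assert unpack_nibbles(ac) == (3, 7)
assert unpack_nibbles(bd) == (5, 9)
```

The same scheme extends to wider words: four nibbles per 16-bit lane, and so on for any N-bit width.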
Now, I have spoken to you before about edge detection in Python, and
observed that this is, in effect, a sharpening edge detection made to
order!
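To make the "sharpening edge detection" concrete, here is a minimal pure-Python sketch: convolution with a Laplacian-style kernel, which responds only where intensity changes. Real code would use numpy or OpenCV; this is illustrative, not the method from the earlier conversation.

```python
# Laplacian-style kernel: zero response on flat regions, strong
# response across intensity steps (edges).
KERNEL = [[ 0, -1,  0],
          [-1,  4, -1],
          [ 0, -1,  0]]

def edge_detect(img):
    """Return the Laplacian response of a 2D list of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(KERNEL[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A flat region gives zero response; a step edge gives a nonzero one.
flat = [[5] * 4 for _ in range(4)]
assert edge_detect(flat)[1][1] == 0
step = [[0, 0, 9, 9]] * 4
assert edge_detect(step)[1][1] != 0
```

Adding this response back onto the original image is exactly what makes it a *sharpening* filter.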
So what do we do?
4-byte code does: A = B + C (edge interpolation; for training we
assume the rule A + B = C).
We assume that if (A + B)/2 = C, then A and B share the same midpoint C, and then
(A + C)/2 = D & (B + C)/2 = E,
and so on, forever.
Why do we do this? We know A & B form a line or a curve, so why not ask:
is there a G/Z-buffered Polygon {A, B, C, D & so on}? Then:
(A + B)/2 = C & (A + C)/2 = D & (B + C)/2 = E, but also Shape from
Polygon: {A, B, C, D & so on}.
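The midpoint chain above can be sketched as recursive edge subdivision, reading "A + B = (C/2)" as the midpoint rule C = (A + B)/2. The depth limit and point representation are illustrative assumptions.

```python
# Midpoint subdivision of an edge A..B: insert C = (A + B)/2, then
# recurse on both halves, yielding D = (A + C)/2, E = (C + B)/2, etc.
# Points are (x, y) tuples.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def subdivide(a, b, depth):
    """Return the points along edge A..B after `depth` midpoint splits."""
    if depth == 0:
        return [a, b]
    c = midpoint(a, b)                  # C = (A + B)/2
    left = subdivide(a, c, depth - 1)   # yields D = (A + C)/2, ...
    right = subdivide(c, b, depth - 1)  # yields E = (C + B)/2, ...
    return left + right[1:]             # drop the duplicated C

A, B = (0.0, 0.0), (8.0, 0.0)
pts = subdivide(A, B, 2)
assert pts == [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (6.0, 0.0), (8.0, 0.0)]
```

Applied around a polygon {A, B, C, D & so on}, this interpolates each edge without storing anything we can already derive.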
Normally we can, and will!
But we do not inference what we already know; we inference what we do not!
For example: exploding fragment polygons without a buffer (in a shader
in the 64 KB RAM cache),
or a mouse pointer that we do not cache, &/or a DMA device pointer.
Rupert S
Study Subject, Reduction:
https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
https://science.n-helix.com/2022/10/ml.html
https://blog.openvino.ai/blog-posts/q123-technology-update-low-precision-and-model-optimization
https://blog.openvino.ai/blog-posts/q223-technology-update-low-precision-and-model-optimization
https://blog.openvino.ai/blog-posts/q323-technology-update-low-precision-and-model-optimization
https://blog.openvino.ai/blog-posts/q423-technology-update-low-precision-and-model-optimization
https://is.gd/CJS_DictionarySort
Python & JS Configurations
https://is.gd/DictionarySortJS