From: Duke Abbaddon
Date: Sun, 24 Dec 2023 15:39:04 +0000
Subject: ML Classification Bundling for HIM & Her Connection specifics for a better brain; classified by type & example: Human Brain cells have 1000 connections, squid 10000; Each connection does:
To: press@google.com
Content-Type: text/plain; charset="UTF-8"
ML Classification Bundling for HIM & Her

Connection specifics for a better brain; classified by type & example:

Human brain cells have 1000 connections, squid 10000; each connection does:

7Bit regular, 8Bit sharp
9Bit on better effort, 10Bit on clarity & meditation + hard work
6Bit when relaxed, 5Bit when drunk

Connections for dedicated skills such as maths have: Dedication bundling (multiple connections), Multiple Affirmations, A-Synchronous, Synchronous

1: 5Bit to 7Bit
2: 5Bit to 18Bit
3: 7Bit to 26Bit
4: 16Bit to 38Bit
5: 17Bit to 48Bit

Eyes, for example, can bundle 5 with training, for colour purity.. Lower bundling offers more flexibility; high bundling offers assurance, speed & retention.

RS Python & JS Configurations
https://is.gd/DictionarySortJS

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
https://science.n-helix.com/2022/10/ml.html

*********************

Brain Bit Precision Int32 FP32, Int16 FP16, Int8 FP8, Int6 FP6, Int4?

Idealness of Computational Machine Learning ML TOPS for the human brain:

Brain-level Int/Float inferencing is ideally Int8/7 with error bits or float remainders.

Comparison List : RS

48Bit Int+Float Int48+FP48 (many connections, eyes for example) HDR Vision
40Bit Int+Float Int40+FP40 HDR Basic
Int16 FP32
Int8 Float16 (2 Channel, Brain Node) (3% brain study)
Int7 (20% brain study)
Int6 (80% brain study)
Int5 (wolves; some are 6+)
Int4 (sheep & worms)
Int3 (germ biosystems)

Statistically, a science test stated that 80% of human brains quantify at 6Bit and 20% at 7Bit.

Xbox Series X & PlayStation 5 go down to Int4 (quite likely for quick inferencing). Be aware that using 4Bit Int instructions potentially means more instructions used per clock cycle & more micro data transfers.

Int8 is most commonly able to quantify data with minimum error in 8Bit, like the Atari STE or the Nintendo 8Bit.. Colour perception, for example, is many orders of magnitude higher! Otherwise 8Bit EGA colours would be all we use.. 16Bit was not good enough.. but 32Bit suits most people! 10Bit(x4) 40Bit & Dolby 12Bit(x4) 48Bit is a luxury & we love it!

Precision Quality Control in ML:

While nothing is certain, human beings appear to have Integer precision of around 8 & are more surely able to practice Float units.

Bundling is when multiple neuron roots go to the same neuron in sync from the same response cluster of neurons. This feature enhances data integrity & precision by multiplying data transfer & response precision.. Eye neurons are an example, as are feelings from clustered neurons such as hands, feet & the sensory organs, memory & maths calculations.
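As a rough illustration of the bundling idea above, the Python/NumPy sketch below (my own construction; the dither-and-average scheme and all names are assumptions, not taken from this post) quantises a signal onto Int4/Int6/Int8 grids and then averages five independently dithered low-precision "connections", showing how a bundle recovers precision beyond any single channel:

import numpy as np

def quantize(x, bits):
    # Snap values in [0, 1] onto a uniform grid with 2^bits - 1 steps.
    levels = (1 << bits) - 1
    return np.round(x * levels) / levels

def bundled_estimate(x, bits, bundle):
    # Average several independently dithered low-precision "connections";
    # each adds sub-step noise before quantising, so the averaged bundle
    # carries finer detail than one connection alone.
    levels = (1 << bits) - 1
    rng = np.random.default_rng(0)
    samples = [quantize(np.clip(x + rng.uniform(-0.5, 0.5, size=x.shape) / levels, 0, 1), bits)
               for _ in range(bundle)]
    return np.mean(samples, axis=0)

signal = np.linspace(0.0, 1.0, 1000)
for bits in (4, 6, 8):
    single = np.abs(quantize(signal, bits) - signal).mean()
    bundled = np.abs(bundled_estimate(signal, bits, bundle=5) - signal).mean()
    print(f"Int{bits}: single error {single:.5f}, 5-way bundle error {bundled:.5f}")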
(c)Rupert S

https://is.gd/ProcessorLasso

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
https://science.n-helix.com/2022/10/ml.html

Restricted Boltzmann ML Networks : Brain Efficient

I propose that SiMD of large scale width & depth can implement the model:

Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications.

Restricted Boltzmann Machines utilise a percentage correctness based upon the energy levels of multiple node values, which represents a percentage chance of a correct solution. My impression is that annealer machines simply utilise more hidden values per node of a neural network; thus I propose that SiMD of large scale width & depth can implement the model..

A flexible approach is to experiment with percentages from a base value, 100 or 1000; we can therefore attempt to work with percentiles in order to adapt classical computation to the theory of multiplicity. A small sketch of this percentile idea follows after the paper excerpt below.

SiMD in parallel can, as we know with RISC architecture, attempt to run an ideal network composed of many factor & regression learning models.. Once the rules are set, millions of independent IO OPS can be performed in cyclic learning, without sending or receiving data in a way that interferes with the main CPU & GPU function: localised DMA.

Adaptive hyperparameter updating for training restricted Boltzmann machines on:

Quantum annealers
Wide Path SiMD

"Adaptive hyperparameter updating for training restricted Boltzmann machines on quantum annealers"
https://www.nature.com/articles/s41598-021-82197-1.pdf
https://www.nature.com/articles/s41598-021-82197-1

https://science.n-helix.com/2019/06/vulkan-stack.html

"Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications such as image recognition, drug discovery, and materials design. The Boltzmann probability distribution is used as a model to identify network parameters by optimizing the likelihood of predicting an output given hidden states trained on available data. Training such networks often requires sampling over a large probability space that must be approximated during gradient based optimization. Quantum annealing has been proposed as a means to search this space more efficiently which has been experimentally investigated on D-Wave hardware. D-Wave implementation requires selection of an effective inverse temperature or hyperparameter (β) within the Boltzmann distribution which can strongly influence optimization. Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave hardware during neural network training by maximizing the likelihood or minimizing the Shannon entropy. We find both methods improve training RBMs based upon D-Wave hardware experimental validation on an image recognition problem. Neural network image reconstruction errors are evaluated using Bayesian uncertainty analysis which illustrate more than an order magnitude lower image reconstruction error using the maximum likelihood over manually optimizing the hyperparameter. The maximum likelihood method is also shown to out-perform minimizing the Shannon entropy for image reconstruction."

(c)Rupert S
https://science.n-helix.com
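Not taken from the paper quoted above: the NumPy sketch below is a rough stand-in for the proposal that wide SiMD vectors can train a Restricted Boltzmann Machine classically, with sampling done as percentiles against a base value of 1000 rather than on an annealer. The sizes, learning rate and the CD-1 update rule are standard textbook choices, not values from this post; every line is a flat vector or matrix operation of the kind a SiMD/AVX unit processes in parallel.

import numpy as np

rng = np.random.default_rng(1)

# Tiny RBM: 16 visible units, 8 hidden units (sizes chosen for illustration).
n_vis, n_hid = 16, 8
W = rng.normal(0.0, 0.1, size=(n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(prob, base=1000):
    # Percentile sampling from a base of 1000: a unit fires when its
    # probability beats an integer draw, standing in for annealer samples.
    return (rng.integers(0, base, size=prob.shape) < prob * base).astype(float)

def cd1_update(v0, lr=0.05):
    # One step of contrastive divergence (CD-1), entirely as vector/matrix ops.
    h_prob0 = sigmoid(v0 @ W + b_hid)
    h0 = sample(h_prob0)
    v1 = sample(sigmoid(h0 @ W.T + b_vis))
    h_prob1 = sigmoid(v1 @ W + b_hid)
    return lr * (np.outer(v0, h_prob0) - np.outer(v1, h_prob1))

data = rng.integers(0, 2, size=(64, n_vis)).astype(float)   # toy binary patterns
for epoch in range(10):
    for v in data:
        W += cd1_update(v)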
Example ML Statistic Variable Conversion : Super Sampling Virtual Resolutions :

Talking about machine learning & hardware functions to use it/run it, to run within the SiMD & AVX feature-set. For example this works well with fonts & web browsers & consoles or standard input display hubs or User Interfaces, UI & JS & webpage code.

In the old days photo applications did exist that used ML image enhancement on older processors.. So how do they exploit machine learning on hardware with MMX, for example?

Procedural process data analytics: converting large statistics databases; on general Tessellation/Interpolation of images.

The procedural element is writing the code that interpolates data based upon the statistics database...

Associated colours..
Face identity...
Linearity or curvature...
Association of grain & texture...

Databases get large fast & a 2MB to 15MB database makes the most sense... Averages have to be categorised by either being worthy of 2 places in the database or an average.. You can still run ML on a database object & then the points in the table are called nodes!

Indeed you can do both; however, database conversion makes datasets far more manageable to run within the SiMD & AVX feature-set. The matter of inferencing then has to be reduced to statistical averages, & sometimes ML runs fine inferencing this way. Both ways work, whatever is best for you & the specific hardware. A small interpolation sketch follows at the end of this post.

(c)Rupert S

DL-ML slide : Machine Learning DL-ML

By my logic the implementation of a CPU+GPU model would be fluid to both..

Machine Learning : Scientific details relevant to the DL-ML slide (CPU, GPU, SiMD Hash table (M1 Vector Matrix-table + Speed))

The vector logic is compatible with both CPU+GPU+SiMD+AVX. Relevant because we use Vector Matrix Table hardware.. and in notes the Matrix significantly speeds up the process. (Quantum Light Matrix)

The relevance to us is immense with world VM servers.

DL-ML Machine Learning Model compatible with our hardware

By my logic the implementation of a CPU+GPU model would be fluid to both.. The vector logic is compatible with both CPU+GPU. However this is a model we can use & train.. For common core : Rupert S

https://is.gd/ProcessorLasso

https://www.marktechpost.com/2021/04/10/computer-scientists-from-rice-university-display-cpu-algorithm-that-trains-deep-neural-networks-15-times-faster-than-gpu/
https://arxiv.org/pdf/2103.10891.pdf

"State-of-the-art approaches such as OpenMP and OpenCL"
https://is.gd/LEDSource

High performance ML Versatile Compute chips
https://www.xilinx.com/content/dam/xilinx/support/documents/data_sheets/ds950-versal-overview.pdf

https://science.n-helix.com/2022/10/ml.html
https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
https://science.n-helix.com/2023/06/tops.html

Machine learning
https://www.amazon.com/dp/B08V134ZFD

Tokma ML Python & JS Configurations
https://is.gd/DictionarySortJS
https://iopscience.iop.org/article/10.1088/1741-4326/ad142f
https://is.gd/TokmaML
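To make the "statistics database as nodes" idea above concrete, here is a small Python sketch; the bucket count, table contents and weighting rule are invented placeholders rather than the author's data. Local contrast indexes a pre-computed node value in a small table (the kind that stays within a 2MB to 15MB budget at realistic sizes), and that value weights a simple 2x row interpolation; everything is a flat vector pass that maps naturally onto SiMD/AVX.

import numpy as np

BUCKETS = 256                                   # small table stays cache-resident
stats_table = np.linspace(0.5, 0.9, BUCKETS)    # placeholder "node" values per contrast bucket

def upscale_row(row):
    # Interpolate a 1-D row of values in [0, 1] to roughly 2x length; each new
    # midpoint is weighted by the node looked up from its local-contrast bucket.
    left, right = row[:-1], row[1:]
    contrast = np.abs(right - left)
    bucket = np.minimum((contrast * (BUCKETS - 1)).astype(int), BUCKETS - 1)
    w = stats_table[bucket]                     # table lookup, not a live network
    mid = w * np.maximum(left, right) + (1.0 - w) * np.minimum(left, right)
    out = np.empty(row.size * 2 - 1)
    out[0::2] = row
    out[1::2] = mid
    return out

row = np.array([0.10, 0.20, 0.80, 0.85, 0.20])
print(upscale_row(row))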