From: Duke Abbaddon
Date: Wed, 20 Dec 2023 23:09:01 +0000
Subject: The Voyager Tensor Expression 64K (c)RS "Voyager knows of no bounds for surely you have done all the traveling & left but Nothing but footprints in the snows of chaos."
To: Media@xilinx.com

Problems Voyager faces:

So NASA & ESA and the international UN, you may be wondering: how on Earth could we ZFS an old Voyager MIPS? Linux has answers!

20:01 17/12/2023

The application of ZFS & ZLib is very applicable to systems such as Voyager 1 & 2 and newer. The new innovations in ZLib & ZFS mean they can be applied to MIPS (satellites like Voyager). Compressed RAM & clean ROMs mean more RAM to use, at a small processing cost, & the ability to use minimised tensors such as JS & NumPy: https://is.gd/DictionarySortJS

1: Electrophiles & the act of cable; The Sun, as you probably understand, is a high-energy electron place. Those who understand tokamaks already know that tokamaks are a source of high-voltage, e- charged free electrons; any reaction of the nuclear type has the effect of releasing and potentially receiving electrons. High-heat, dry environments also send electrons, because water generally ionises & therefore uses the energy of the electron. However, ionised water is also actively ionic & briefly releases hydrogen & oxygen in ionised states.

2: Now in reference to our Voyager satellites: deep space is a massive source of ionic interference! Inferencing may restore Voyager's sensibility! However, as far as I know, Voyager has 68KB RAM & a tape drive. Those tape drives must not wear out, or we could use a swap file or drive to enhance RAM. Now we could! But shall we?

Now we can use NumPy, for I have heard it claimed to be tensors! But NumPy is a 4MB file? In assembler, with minimal compiled features & rooflined from a Server HPC, we could maybe fit NumPy into an operating system of say 15KB to 35KB! But is that true? How long does tape loading take, in time & load & wear on the fabric? I do have a tiny bit of experience with the Commodore 64 & likewise the Atari cassette, cartridge & ROM. So unless we send a fast recovery interceptor millions of miles, we could not get a cassette to Voyager 1 & 2. But we can now! ZLib & ZFS have been recently improved with Py Directory Sort:

https://is.gd/CJS_DictionarySort
Python & JS Configurations https://is.gd/DictionarySortJS

So we can use the ZLib, GZip, Brotli & BZip compression libraries & application loading from tape (a small comparison sketch follows point 4 below)! So NASA & ESA and the international UN, you may be wondering: how on Earth could we ZFS an old Voyager MIPS? Linux has answers!

3: Problems Voyager faces:

Ion storms
Micro meteors
Micro black holes
UFO BORG ôo

So what can we do with light? #DirectlySeeSend&Recieve #Energiser #Cel414

4: Laser Space Communications. Now you might be wondering: Voyager? Laser comms? Sure?! But we can convert a readable signal from a solar panel into an input receptor for a signal recorded to tape or RAM, or sent by direct PIO/DMA to a storage medium!

So how could we use this today? We could light-beam past the Moon to the L2 point & directly to the solar panel & have a message! We can laser, radio or radar burst back, or to there? Yes. We could power on systems with sustained light redirection? Yes. We could signal with flickered light from a mirror that has LED crystal darkeners (such as an LED TV)?! Yes. We can send energy & therefore receive it from powered-on systems! But from how far? That depends on the size & accuracy of the dish! Even a Hubble could do that; the Romans sure could/would.

So what can we do with light? #DirectlySeeSend&Recieve #Energiser #Cel414

Rupert S 2023-12
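As referenced in point 2, here is a minimal, hedged sketch of what "compress before you load from tape" looks like in practice, using only Python's standard-library codecs (zlib, bz2, lzma; Brotli would need the third-party brotli package, so it is left out). The payload is an illustrative assumption, a small float32 table, not a Voyager figure.

#Python Compression comparison sketch (illustrative assumptions only)

import bz2
import lzma
import zlib

import numpy as np

# Assumed stand-in payload: a 64KB float32 table such as a small weight set.
payload = np.linspace(0.0, 1.0, 16384, dtype=np.float32).tobytes()

candidates = {
    "zlib": lambda b: zlib.compress(b, 9),
    "bz2":  lambda b: bz2.compress(b, compresslevel=9),
    "lzma": lambda b: lzma.compress(b, preset=6),
}

print("raw payload:", len(payload), "bytes")
for name, compress in candidates.items():
    packed = compress(payload)
    print("%5s: %d bytes (ratio %.2fx)" % (name, len(packed), len(payload) / len(packed)))

On a real MIPS-class target the decompression cost matters as much as the ratio, so any such choice would have to be benchmarked on the flight hardware itself.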
*****

The Voyager Tensor Expression 64K (c)RS

"Voyager knows of no bounds for surely you have done all the traveling & left but Nothing but footprints in the snows of chaos."

The thing about tensors is that while Python & JS & C can run & train them, the tensor configuration can be run compressed inside a 62KB ZLib/ZFS store. Now, running a very tight tensor configuration on the budget of a Voyager or a Game Boy, on a 3-CPU set? Entirely possible! But we have to keep the logic pure & mathematical to really run a full POWER CPU, likewise if we have AVX, Vector or Nano & Matrix arrays.

Now we will be training the code on a 32-bit/16-bit float (F16b & F32) full-precision array of GPU & NPU & Matrix hardware. However, we will be funnelling the final version through a test-pattern grid & dumping it into a 64KB, fully memory-resident array, with paging for new tasks in 4KB chunks of ZLib-compressed data. Total footprint under 1MB per data transfer.

Voyager knows of no bounds for surely you have done all the traveling & left but Nothing but footprints in the snows of chaos.

Tensors C, JS & NumPy https://is.gd/DictionarySortJS

Rupert Summerskill
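A minimal sketch of the "64KB resident array, paged in 4KB ZLib chunks" idea described above, in ordinary Python. The 4KB page size comes from the text; the stand-in weight table, its sparsity and the on-demand loader are illustrative assumptions, not a description of any flight software.

#Python 4KB ZLib chunk paging sketch (illustrative assumptions only)

import zlib
import numpy as np

CHUNK = 4096  # assumed page size in bytes (the 4KB chunks described above)

# Assumed stand-in for the 64KB resident tensor: 16384 float32 values (64KB),
# mostly zero so the pages actually compress.
weights = np.zeros(16384, dtype=np.float32)
weights[::8] = 0.5
raw = weights.tobytes()

# Store the array as independently compressed 4KB pages.
pages = [zlib.compress(raw[i:i + CHUNK], 9) for i in range(0, len(raw), CHUNK)]
print("resident size:", sum(len(p) for p in pages), "bytes in", len(pages), "pages")

def load_page(index):
    # Decompress one 4KB page on demand and return it as float32 values.
    return np.frombuffer(zlib.decompress(pages[index]), dtype=np.float32)

# Page chunk 0 back in when a task needs it.
chunk0 = load_page(0)
assert np.array_equal(chunk0, weights[:CHUNK // 4])

Whether a MIPS-class CPU can afford the per-page decompression cost is exactly the kind of question the roofline estimate above would have to answer.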
***************

Python 200 epoch machine learning device query & problem solve

Running machine learning (ML) models for 200 epochs can be computationally expensive, so it's important to consider the hardware and resources you have available. Here are some general recommendations for running ML models with 200 epochs (a small device-query sketch follows the code examples below):

CPU: For smaller models or simpler tasks, you may be able to get away with using a CPU. However, for larger or more complex models, you'll likely need a GPU to provide the necessary processing power.

RAM: You'll also need enough RAM to store the model and the data you're training it on. For 200 epochs, it's a good idea to have at least 16GB of RAM, and preferably 32GB or more.

Storage: You'll need enough storage space for the model, the training data, and any intermediate results. For 200 epochs, you'll likely need at least 1TB of storage, and preferably 2TB or more.

In addition to hardware, you'll also need to consider the following factors when running ML models with 200 epochs:

Model complexity: The more complex your model is, the longer it will take to train, because the model has more parameters to optimise, which requires more computational power.

Data size: The more data you have to train your model on, the longer it will take, because the model has to process more data to find patterns and learn from them.

Learning rate: The learning rate is a parameter that controls how quickly the model learns. A higher learning rate will speed up training, but it may also lead to overfitting. A lower learning rate will be slower, but it may be more likely to find a good solution.

If you're running into problems with ML models at 200 epochs, there are a few things you can try:

Reduce the model complexity: If your model is too complex, it may be taking too long to train. Try simplifying the model by removing some of its layers or neurons.

Reduce the data size: If you have a lot of data, you may be able to speed up training by reducing the amount of data you use. Try using a smaller subset of the data to start with, or use a data-augmentation technique to create more data.

Adjust the learning rate: The learning rate is a crucial parameter for training ML models. If you're not seeing results after 200 epochs, you may need to adjust it. Try increasing or decreasing the learning rate to see if it improves the training process.

By carefully considering your hardware, resources, and model parameters, you can make sure that your ML models are trained efficiently and accurately.

Python code for 200 epoch machine learning device query & problem solve

#Python Machine learning core code

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Dropout

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

x_train = x_train.reshape(-1, 28, 28, 1)  # Add a channel dimension
x_train = x_train.astype('float32')
x_train /= 255  # Normalize values between 0 and 1
x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test.astype('float32')
x_test /= 255

model = tf.keras.Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=200)

test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

#JS Machine learning core code

const { Tensor } = require('ndarrayjs');
const LinearRegression = require('jsml').LinearRegression;

// Load the dataset
const data = [
  [1, 2],
  [2, 3],
  [3, 4],
];

// Split the data into training and testing sets
const X = data.map(x => Tensor([-1, x[0]]).transpose());
const Y = data.map(y => Tensor([y]));
const trainingData = { X, Y };

// Create the machine learning model
const model = new LinearRegression();

// Train the model
for (let i = 0; i < 200; i++) {
  model.fit(trainingData);
}

// Evaluate the model
const testData = [[4], [5]];
const X_test = testData.map(x => Tensor([-1, x[0]]).transpose());
const Y_pred = model.predict(X_test);

console.log('Predicted values:');
console.log(Y_pred);

#C Machine learning core code

#include <stdio.h>
#include <stdlib.h>

typedef struct Neuron {
  float *weights;
  float bias;
} Neuron;

typedef struct Layer {
  Neuron *neurons;
  int numNeurons;
} Layer;

typedef struct Network {
  Layer *layers;
  int numLayers;
} Network;

/* loadDataX, loadDataY, len and predict are assumed helper routines,
   not defined in this sketch. */

// Load training data from CSV file
float *dataX = loadDataX("training_data.csv");
float *dataY = loadDataY("training_labels.csv");

// Shuffle data to improve model generalization
int dataLen = len(dataX);
for (int i = 0; i < dataLen; i++) {
  int j = rand() % dataLen;
  float tempX = dataX[i];
  float tempY = dataY[i];
  dataX[i] = dataX[j];
  dataY[i] = dataY[j];
  dataX[j] = tempX;
  dataY[j] = tempY;
}

for (int epoch = 0; epoch < 200; epoch++) {
  // Forward Pass
  // ...
  // Backpropagation
  // ...
  // Update weights and biases
  // ...
}

// Load testing data from CSV file
float *testDataX = loadDataX("testing_data.csv");
float *testDataY = loadDataY("testing_labels.csv");

// Calculate accuracy
int numCorrect = 0;
for (int i = 0; i < len(testDataX); i++) {
  float predictedY = predict(testDataX[i]);
  if (predictedY == testDataY[i]) {
    numCorrect++;
  }
}

float accuracy = (float)numCorrect / len(testDataX);
printf("Accuracy: %f\n", accuracy);
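As promised above, a minimal device-query sketch for the hardware recommendations (CPU/GPU, RAM, storage). It assumes TensorFlow is installed, as in the Python example above; the RAM figure uses os.sysconf, which is Linux-only, and the 16GB/1TB thresholds are simply the numbers quoted in the text.

#Python Device query sketch (assumes TensorFlow; RAM check is Linux-only)

import os
import shutil

import tensorflow as tf

# GPU check: fall back to CPU-only expectations if none is visible.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible:", [g.name for g in gpus] or "none (CPU only)")

# RAM check (Linux-only sysconf call), compared to the 16GB suggested above.
ram_bytes = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')
print("RAM: %.1f GB (16 GB or more suggested for 200 epochs)" % (ram_bytes / 1e9))

# Storage check on the current working directory, against the 1TB suggestion.
total, used, free = shutil.disk_usage(".")
print("Free storage: %.1f GB (1 TB or more suggested)" % (free / 1e9))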
***************

Compression, Dictionary Sort & Same Size Copy Match & Unite Same with location in 2D Matrix #JS #C #Python

RS 2023

https://is.gd/CJS_DictionarySort
Python & JS Configurations https://is.gd/DictionarySortJS

The code appears complex, but there you go! In assembler it may be 15KB to 64KB; I am not entirely sure. However, the objective is for ZSTD, GZIP, Brotli & codecs such as DSC: the compression rate improves & so does the quality of the compression! Depending on the encryption type, you may also improve the complexity by deviating from commons or by compressing better first.

CPU + Cache, Task & Lists & Tables & Compressed Data, Task RAM & Storage: Ordering data sets

In reference to: https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

Sorting memory load arrays so they align neatly into a block; accounting dates into the same ID & order; ordered identical IDs; taxing blocks, accounting for block loads & saves so that blocks in the same locations & of the same size are loaded & stored neatly. Aligned minimal block formations save space on block systems such as NTFS & EFS & Advanced Format, and with Extended Directories such as on the Amiga & Mac: the ordering of NTFS, EFS & ZFS directory tables & data caches, & the ordering of all RAM load/save cycles in contexts such as storage access & writing, boot partition ordering, etcetera. These & many more uses, depending on speed & parallel code ordering & identification unification, depend entirely on block size & the efficiency of the sort & collation grouping.

* Data flows & parallel streams; identical RAM & Storage workloads.

By ordering the data groups into dictionary content with minimal group identifiers, the code then becomes a compression library, or a group archiver, or a shorthand writer with unified comment commits such as legal point 1, 2, N amendments & common informations (a tiny sketch follows below). Sorting is relatively fast in 128x & 64x & 32x & 16x cubes in a SiMD Vector Matrix; aligning information cubes & commons takes seconds. Many uses exist for these functions & coding excellence is defined by you.

* Machine Learning Additive D!=S + (D!=S)M illness ?

Dictionary Sort can sort your X-rays before classification & after ML & classification distillation; good for robots & humans alike! But most of humanity lacks code compilation skills. You can essentially use Python, JS & C code to collate & sort & identify the identical. You may know, but image classifications of identicals are reduced to commons, like red blood or cell factors, in Python.
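Before the full sort-and-match code below, here is a tiny sketch of the "dictionary content with minimal group identifiers" idea: identical records are collapsed into one dictionary entry and the stream is rewritten as short integer IDs. The record list is an illustrative assumption, not data from the linked configurations.

#Python Dictionary / group-identifier sketch (illustrative records)

records = [
    "red blood cell", "white blood cell", "red blood cell",
    "platelet", "red blood cell", "white blood cell",
]

# Build the dictionary: one entry per distinct record, in first-seen order.
dictionary = {}
ids = []
for record in records:
    if record not in dictionary:
        dictionary[record] = len(dictionary)  # minimal group identifier
    ids.append(dictionary[record])

print("dictionary:", dictionary)  # {'red blood cell': 0, 'white blood cell': 1, 'platelet': 2}
print("id stream: ", ids)         # [0, 1, 0, 2, 0, 1]

# Reconstruct the original stream from the dictionary + id stream.
lookup = {i: r for r, i in dictionary.items()}
assert [lookup[i] for i in ids] == records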
* Modes of Conduct MS-HO²C

#JS

function sortAndMatch(matrix) {
  // Sort each row of the matrix
  const sortedMatrix = matrix.map((row) => row.sort((a, b) => a - b));

  // Find the first occurrence of each value in each row
  const firstOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    const firstOccurrencesRow = {};
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      if (!firstOccurrencesRow.hasOwnProperty(value)) {
        firstOccurrencesRow[value] = j;
      }
    }
    firstOccurrences.push(firstOccurrencesRow);
  }

  // Find the last occurrence of each value in each row
  const lastOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    const lastOccurrencesRow = {};
    for (let j = row.length - 1; j >= 0; j--) {
      const value = row[j];
      if (!lastOccurrencesRow.hasOwnProperty(value)) {
        lastOccurrencesRow[value] = j;
      }
    }
    lastOccurrences.push(lastOccurrencesRow);
  }

  // Find the first and last occurrences of each value in the matrix
  const firstOccurrencesAll = {};
  for (const row of firstOccurrences) {
    for (const value in row) {
      if (!firstOccurrencesAll.hasOwnProperty(value) || firstOccurrencesAll[value] > row[value]) {
        firstOccurrencesAll[value] = row[value];
      }
    }
  }
  const lastOccurrencesAll = {};
  for (const row of lastOccurrences) {
    for (const value in row) {
      if (!lastOccurrencesAll.hasOwnProperty(value) || lastOccurrencesAll[value] < row[value]) {
        lastOccurrencesAll[value] = row[value];
      }
    }
  }

  // Find the rows that contain the same values
  const sameRows = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    let same = true;
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      const firstOccurrence = firstOccurrencesAll[value];
      const lastOccurrence = lastOccurrencesAll[value];
      if (firstOccurrence !== lastOccurrence ||
          firstOccurrences[i][value] !== firstOccurrence ||
          lastOccurrences[i][value] !== lastOccurrence) {
        same = false;
        break;
      }
    }
    if (same) {
      sameRows.push(i);
    }
  }

  // Combine the same rows into a single row
  const combinedMatrix = [];
  for (const row of sameRows) {
    combinedMatrix.push(matrix[row]);
  }

  // Sort the combined matrix
  const sortedCombinedMatrix = combinedMatrix.map((row) => row.sort((a, b) => a - b));

  return { sortedCombinedMatrix, sameRows };
}

const matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]];
const result = sortAndMatch(matrix);
console.log(result.sortedCombinedMatrix);
console.log(result.sameRows);

#C

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct {
  int value;
  int firstOccurrence;
  int lastOccurrence;
} ValueInfo;

void sortAndMatch(int matrix[][3], int rows, int cols, int** sortedCombinedMatrix, int* sameRows, int* sameRowsCount) {
  // Allocate memory for the first and last occurrences of each value in each row
  ValueInfo** firstOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    firstOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }
  ValueInfo** lastOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    lastOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }

  // Find the first and last occurrences of each value in each row
  for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
      int value = matrix[i][j];
      bool foundFirst = false;
      bool foundLast = false;
      for (int k = 0; k < cols; k++) {
        if (matrix[i][k] == value && !foundFirst) {
          firstOccurrences[i][j].value = value;
          firstOccurrences[i][j].firstOccurrence = k;
          foundFirst = true;
        }
        // Keep overwriting so the final match records the last occurrence
        if (matrix[i][k] == value) {
          lastOccurrences[i][j].value = value;
          lastOccurrences[i][j].lastOccurrence = k;
          foundLast = true;
        }
      }
    }
  }

  // Find the first and last occurrences of each value in the matrix
  ValueInfo* firstOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    firstOccurrencesAll[i].value = -1;
    firstOccurrencesAll[i].firstOccurrence = -1;
    firstOccurrencesAll[i].lastOccurrence = -1;
  }
  ValueInfo* lastOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    lastOccurrencesAll[i].

#Python

import numpy as np

def sort_and_match(matrix):
    # Sort each row of the matrix
    sorted_matrix = np.sort(matrix, axis=1)

    # Find the first occurrence of each value in each row
    first_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]):
            if matrix[i, j] not in first_occurrences[i, :j]:
                first_occurrences[i, j] = j

    # Find the last occurrence of each value in each row
    last_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]-1, -1, -1):
            if matrix[i, j] not in last_occurrences[i, j+1:]:
                last_occurrences[i, j] = j

    # Find the first and last occurrences of each value in the matrix
    first_occurrences_all = np.min(first_occurrences, axis=0)
    last_occurrences_all = np.max(last_occurrences, axis=0)

    # Find the rows that contain the same values
    same_rows = []
    for i in range(matrix.shape[0]):
        if np.all(first_occurrences[i, :] == last_occurrences[i, :]):
            same_rows.append(i)

    # Combine the same rows into a single row
    combined_matrix = np.zeros((len(same_rows), matrix.shape[1]))
    for i, row in enumerate(same_rows):
        combined_matrix[i, :] = matrix[row, :]

    # Sort the combined matrix
    sorted_combined_matrix = np.sort(combined_matrix, axis=1)

    return sorted_combined_matrix, same_rows

matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
sorted_combined_matrix, same_rows = sort_and_match(matrix)
print(sorted_combined_matrix)
print(same_rows)