From: Duke Abbaddon
Date: Sun, 24 Dec 2023 19:34:31 +0000
Subject: ML Classification Bundling for HIM & Her

Sorting bundles by priorities such as time to process, similarity & probability (likelihood) improves perception & thought process; logical sort orders..

Required processing order based on sorted requirements (one needs another).

Items that go locally together: { Cleaning, Cooking, Cleanup }
Logical order: { Drink, Power, Computer, Application, Search, Webpage, Notebook, Read, Write }

Saving data caches it & aids processing; but organising it first makes retrieval clean & thought clean. Meditation Logic.
To: press@google.com

Compression, Dictionary Sort & Same Size Copy Match & Unite Same, with location in 2D Matrix #JS #C #Python RS 2023
https://is.gd/CJS_DictionarySort
Python & JS Configurations: https://is.gd/DictionarySortJS

The code appears complex, but there you go! In assembler it may be 15KB to 64KB; I am not entirely sure. However, the objective is for ZSTD, GZIP, Brotli & codecs such as DSC: the compression rate improves, & so does the quality of the compression! Depending on the encryption type, you may also improve the complexity by deviating from commons, or by compressing better first.

CPU + Cache, Task & Lists & Tables & Compressed Data, Task RAM & Storage: ordering data sets.

In reference to: https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

Sorting memory load arrays so they align neatly into a block; accounting dates into the same ID & order; ordered identical IDs; taxing blocks, accounting for block loads & saves so that blocks in the same locations & sizes are loaded & stored neatly.. Aligned minimal block formations save space on block systems such as NTFS & EFS & Advanced Format, with Extended Directories such as on the Amiga & Mac..

The ordering of NTFS, EFS & ZFS directory tables & data caches, & the ordering of all RAM load/save cycles in contexts such as storage access & writing, boot partition ordering, etcetera. These & many more uses, depending on speed, parallel code ordering & identification unification; depending entirely on block size & the efficiency of sort & collation grouping.
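The claim above, that ordering data before compression improves the ratio, can be sanity-checked with a small sketch. Standard-library zlib stands in for ZSTD/GZIP/Brotli here, and the record set is invented for illustration:

```python
import random
import zlib

# Invented sample data: many near-duplicate "records" in random order.
random.seed(42)
records = [f"user-{random.randint(0, 30):04d},action-{random.randint(0, 5)}"
           for _ in range(2000)]

unsorted_blob = "\n".join(records).encode()
# Dictionary sort groups identical & similar lines into adjacent runs.
sorted_blob = "\n".join(sorted(records)).encode()

unsorted_size = len(zlib.compress(unsorted_blob, 9))
sorted_size = len(zlib.compress(sorted_blob, 9))

print(f"unsorted: {unsorted_size} bytes, sorted: {sorted_size} bytes")
```

Grouping the commons first means the codec can encode each repeated record as a short, near-distance back-reference, so the sorted blob compresses smaller than the shuffled one.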
* Data flows & parallel streams; identical RAM & Storage workloads..

By ordering the data groups into dictionary content with minimal group identifiers, the code then becomes a compression library, or a group archiver, or a shorthand writer with unified comment commits, such as legal point 1, 2, N amendments & common informations. Sorting is relatively fast in 128x & 64x & 32x & 16x cubes in SiMD Vector Matrix; aligning information cubes & commons.. Seconds. Many uses exist for these functions, & coding excellence is defined by you.

* Machine Learning Additive

D!=S + (D!=S)M illness? Dictionary Sort can sort your X-rays before classification, & after ML & classification distillation; good for robots & humans alike! But most of humanity lacks the ability to compile code; you can essentially use Python, JS or C code to collate & sort & identify the identical. You may know, but image classifications of the identical are reduced to commons, like red blood or cell factors, in Python.

* Modes of Conduct MS-HO²C

#JS
function sortAndMatch(matrix) {
  // Sort each row of the matrix (Array.prototype.sort mutates the rows in place)
  matrix.forEach((row) => row.sort((a, b) => a - b));

  // Find the first occurrence of each value in each row
  const firstOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    const firstOccurrencesRow = {};
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      if (!firstOccurrencesRow.hasOwnProperty(value)) {
        firstOccurrencesRow[value] = j;
      }
    }
    firstOccurrences.push(firstOccurrencesRow);
  }

  // Find the last occurrence of each value in each row
  const lastOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    const lastOccurrencesRow = {};
    for (let j = row.length - 1; j >= 0; j--) {
      const value = row[j];
      if (!lastOccurrencesRow.hasOwnProperty(value)) {
        lastOccurrencesRow[value] = j;
      }
    }
    lastOccurrences.push(lastOccurrencesRow);
  }

  // Find the first and last occurrences of each value in the matrix
  const firstOccurrencesAll = {};
  for (const row of firstOccurrences) {
    for (const value in row) {
      if (!firstOccurrencesAll.hasOwnProperty(value) || firstOccurrencesAll[value] > row[value]) {
        firstOccurrencesAll[value] = row[value];
      }
    }
  }
  const lastOccurrencesAll = {};
  for (const row of lastOccurrences) {
    for (const value in row) {
      if (!lastOccurrencesAll.hasOwnProperty(value) || lastOccurrencesAll[value] < row[value]) {
        lastOccurrencesAll[value] = row[value];
      }
    }
  }

  // Find the rows that contain the same values
  const sameRows = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    let same = true;
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      const firstOccurrence = firstOccurrencesAll[value];
      const lastOccurrence = lastOccurrencesAll[value];
      if (firstOccurrence !== lastOccurrence ||
          firstOccurrences[i][value] !== firstOccurrence ||
          lastOccurrences[i][value] !== lastOccurrence) {
        same = false;
        break;
      }
    }
    if (same) {
      sameRows.push(i);
    }
  }

  // Combine the same rows into a single matrix
  const combinedMatrix = [];
  for (const row of sameRows) {
    combinedMatrix.push(matrix[row]);
  }

  // Sort the combined matrix
  const sortedCombinedMatrix = combinedMatrix.map((row) => row.sort((a, b) => a - b));

  return { sortedCombinedMatrix, sameRows };
}

const matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]];
const result = sortAndMatch(matrix);
console.log(result.sortedCombinedMatrix);
console.log(result.sameRows);

#C
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct {
  int value;
  int firstOccurrence;
  int lastOccurrence;
} ValueInfo;

void sortAndMatch(int matrix[][3], int rows, int cols,
                  int** sortedCombinedMatrix, int* sameRows, int* sameRowsCount) {
  // Allocate memory for the first and last occurrences of each value in each row
  ValueInfo** firstOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    firstOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }
  ValueInfo** lastOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    lastOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }

  // Find the first and last occurrences of each value in each row
  for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
      int value = matrix[i][j];
      bool foundFirst = false;
      for (int k = 0; k < cols; k++) {
        if (matrix[i][k] == value) {
          if (!foundFirst) {
            firstOccurrences[i][j].value = value;
            firstOccurrences[i][j].firstOccurrence = k;
            foundFirst = true;
          }
          // Keep overwriting so the final k is the last occurrence
          lastOccurrences[i][j].value = value;
          lastOccurrences[i][j].lastOccurrence = k;
        }
      }
    }
  }

  // Find the first and last occurrences of each value in the matrix
  ValueInfo* firstOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    firstOccurrencesAll[i].value = -1;
    firstOccurrencesAll[i].firstOccurrence = -1;
    firstOccurrencesAll[i].lastOccurrence = -1;
  }
  ValueInfo* lastOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    lastOccurrencesAll[i].value = -1;
    lastOccurrencesAll[i].firstOccurrence = -1;
    lastOccurrencesAll[i].lastOccurrence = -1;
  }
  // ... the remainder mirrors the JS version above
}
#Python
import numpy as np

def sort_and_match(matrix):
    # Sort each row of the matrix
    sorted_matrix = np.sort(matrix, axis=1)

    # Find the first occurrence of each value in each row
    first_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]):
            if matrix[i, j] not in matrix[i, :j]:
                first_occurrences[i, j] = j

    # Find the last occurrence of each value in each row
    last_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1] - 1, -1, -1):
            if matrix[i, j] not in matrix[i, j + 1:]:
                last_occurrences[i, j] = j

    # Find the first and last occurrences of each value in the matrix
    first_occurrences_all = np.min(first_occurrences, axis=0)
    last_occurrences_all = np.max(last_occurrences, axis=0)

    # Find the rows that contain the same values
    same_rows = []
    for i in range(matrix.shape[0]):
        if np.all(first_occurrences[i, :] == last_occurrences[i, :]):
            same_rows.append(i)

    # Combine the same rows into a single matrix
    combined_matrix = np.zeros((len(same_rows), matrix.shape[1]))
    for i, row in enumerate(same_rows):
        combined_matrix[i, :] = matrix[row, :]

    # Sort the combined matrix
    sorted_combined_matrix = np.sort(combined_matrix, axis=1)

    return sorted_combined_matrix, same_rows

matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
sorted_combined_matrix, same_rows = sort_and_match(matrix)
print(sorted_combined_matrix)
print(same_rows)

***************************

Coding for 200-epoch machine learning: device query & problem solve

Running machine learning (ML) models for 200 epochs can be computationally expensive, so it's important to consider the hardware and resources you have available. Here are some general recommendations for running ML models with 200 epochs:

CPU: For smaller models or simpler tasks, you may be able to get away with using a CPU.
However, for larger or more complex models, you'll likely need a GPU to provide the necessary processing power.

RAM: You'll also need enough RAM to store the model and the data you're training it on. For 200 epochs, it's a good idea to have at least 16GB of RAM, and preferably 32GB or more.

Storage: You'll need enough storage space for the model, the training data, and any intermediate results. For 200 epochs, you'll likely need at least 1TB of storage, and preferably 2TB or more.

In addition to hardware, you'll also need to consider the following factors when running ML models with 200 epochs:

Model complexity: The more complex your model, the longer it will take to train, because the model has more parameters to optimize, which requires more computational power.

Data size: The more data you have to train your model on, the longer it will take, because the model has to process more data to find patterns and learn from them.

Learning rate: The learning rate is a parameter that controls how quickly the model learns. A higher learning rate will speed up training, but it may also lead to overfitting. A lower learning rate will be slower, but it may be more likely to find a good solution.

If you're running into problems with 200-epoch training, there are a few things you can try:

Reduce the model complexity: If your model is too complex, it may take too long to train. Try simplifying the model by removing some of its layers or neurons.

Reduce the data size: If you have a lot of data, you may be able to speed up training by reducing the amount of data you use. Try using a smaller subset of the data to start with, or try using a data augmentation technique to create more data.

Adjust the learning rate: The learning rate is a crucial parameter for training ML models. If you're not seeing results after 200 epochs, you may need to adjust the learning rate.
Try increasing or decreasing the learning rate to see if it improves the training process.

By carefully considering your hardware, resources, and model parameters, you can make sure that your ML models are trained efficiently and accurately.

**********

Python code for 200-epoch machine learning device query & problem solve

#Python Machine learning core code
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Dropout

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1)  # Add a channel dimension
x_train = x_train.astype('float32')
x_train /= 255  # Normalize values between 0 and 1
x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test.astype('float32')
x_test /= 255

model = tf.keras.Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=200)

test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

#JS Machine learning core code
const { Tensor } = require('ndarrayjs');
const LinearRegression = require('jsml').LinearRegression;

// Load the dataset
const data = [
  [1, 2],
  [2, 3],
  [3, 4],
];

// Split the data into training and testing sets
const X = data.map(x => Tensor([-1, x[0]]).transpose());
const Y = data.map(y => Tensor([y]));
const trainingData = { X, Y };

// Create the machine learning model
const model = new LinearRegression();

// Train the model
for (let i = 0; i < 200; i++) {
  model.fit(trainingData);
}

// Evaluate the model
const testData = [[4], [5]];
const X_test = testData.map(x => Tensor([-1, x[0]]).transpose());
const Y_pred = model.predict(X_test);
console.log('Predicted values:');
console.log(Y_pred);

#C Machine learning core code
#include <stdio.h>
#include <stdlib.h>

typedef struct Neuron {
  float *weights;
  float bias;
} Neuron;

typedef struct Layer {
  Neuron *neurons;
  int numNeurons;
} Layer;

typedef struct Network {
  Layer *layers;
  int numLayers;
} Network;

// Load training data from CSV files
// (loadDataX, loadDataY, len and predict are assumed helper routines, not shown)
float *dataX = loadDataX("training_data.csv");
float *dataY = loadDataY("training_labels.csv");

// Shuffle data to improve model generalization
int dataLen = len(dataX);
for (int i = 0; i < dataLen; i++) {
  int j = rand() % dataLen;
  float tempX = dataX[i];
  float tempY = dataY[i];
  dataX[i] = dataX[j];
  dataY[i] = dataY[j];
  dataX[j] = tempX;
  dataY[j] = tempY;
}

for (int epoch = 0; epoch < 200; epoch++) {
  // Forward pass
  // ...
  // Backpropagation
  // ...
  // Update weights and biases
  // ...
}

// Load testing data from CSV files
float *testDataX = loadDataX("testing_data.csv");
float *testDataY = loadDataY("testing_labels.csv");

// Calculate accuracy
int numCorrect = 0;
for (int i = 0; i < len(testDataX); i++) {
  float predictedY = predict(testDataX[i]);
  if (predictedY == testDataY[i]) {
    numCorrect++;
  }
}
float accuracy = (float)numCorrect / len(testDataX);
printf("Accuracy: %f\n", accuracy);
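The learning-rate trade-off described above can be sketched numerically. This is a minimal gradient-descent fit of y = 2x, separate from the code above; the data and the three learning rates are invented for illustration:

```python
# Minimal sketch: gradient descent on y = 2*x with mean-squared error,
# run for 200 epochs at three learning rates.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def train(lr, epochs=200):
    w = 0.0  # single weight, no bias
    for _ in range(epochs):
        # d/dw of mean((w*x - y)^2) = mean(2*(w*x - y)*x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

for lr in (0.001, 0.01, 0.1):
    w = train(lr)
    print(f"lr={lr}: w={w:.4f}")  # the true weight is 2.0
```

The smallest rate has still not reached the true weight after 200 epochs, while the larger rates converge; push the rate much higher and the updates overshoot and diverge, which is why tuning it is worth the effort.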