* ML Classification Bundling for HIM & Her
@ 2023-12-24 19:34 Duke Abbaddon
  0 siblings, 0 replies; only message in thread
From: Duke Abbaddon @ 2023-12-24 19:34 UTC (permalink / raw)
  To: press

Compression, Dictionary Sort & Same Size Copy Match & Unite Same with
location in 2D Matrix #JS #C #Python RS 2023

https://is.gd/CJS_DictionarySort

Python & JS Configurations
https://is.gd/DictionarySortJS

The code appears complex, but there you go! In assembler it may come
to 15KB to 64KB; I am not entirely sure.
However, the objective is to assist ZSTD, GZIP, Brotli & codecs such as DSC,

The compression rate improves, and so does the quality of the compression!
Depending on the encryption type, you may also improve its complexity,
either by deviating from commons or by compressing better first.
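
As a rough illustration of that claim (a minimal sketch, not the
linked implementation): sorting a stream before handing it to a
standard codec groups identical symbols together, which typically
shrinks the output. The sort permutation must be kept separately if
the original order needs restoring.

#Python

import random
import zlib

# Illustrative data: a pseudo-random stream over a small alphabet
data = bytes(random.choice(b"ABCDEFGH") for _ in range(4096))

plain_compressed = zlib.compress(data, 9)
dictionary_sorted = bytes(sorted(data))  # "dictionary sort" of the stream
sorted_compressed = zlib.compress(dictionary_sorted, 9)

# The sorted stream compresses far better; real codecs such as ZSTD &
# Brotli show the same effect to varying degrees
print(len(plain_compressed), len(sorted_compressed))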

CPU + cache, tasks & lists & tables & compressed data, task RAM &
storage: ordering data sets.
In reference to:
https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

Sorting memory load arrays so they align neatly into a block;
accounting dates into the same ID & order; ordering identical IDs;
taxing blocks,

Accounting for block loads & saves so that blocks in the same
locations & of the same size are loaded & stored neatly..

Aligned minimal block formations save space on block systems such as
NTFS & EFS & Advanced Format, and with extended directories such as on
the Amiga & Mac..

The ordering of NTFS, EFS & ZFS directory tables & data caches, & the
ordering of all RAM load/save cycles in contexts such as
storage access & writing, boot partition ordering, etcetera.

These & many more uses exist, depending on speed, parallel code
ordering & identification unification,

Depending entirely on block size & the efficiency of the sort &
collation grouping.
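
A hypothetical sketch of that block accounting, assuming requests
arrive as (offset, size) pairs; the function name & block size are
illustrative, not from the original post:

#Python

def coalesce_blocks(requests, block=4096):
    # Sort requests by offset, align each down to its containing block,
    # & merge overlapping or adjacent ranges into neat block loads
    merged = []
    for offset, size in sorted(requests):
        start = (offset // block) * block
        end = offset + size
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(coalesce_blocks([(8200, 100), (4096, 4096), (8100, 50)]))
# [(4096, 8300)] : three scattered requests become one aligned load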

*

Data flows & parallel streams; identical RAM & storage workloads..

By ordering the data groups into dictionary content with minimal group
identifiers, the code then becomes a compression library, a group
archiver, or a shorthand writer with unified comment commits such as
legal points 1, 2, N, addendums & common information.
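
A minimal sketch of that "group archiver" idea, assuming the items are
plain strings; the clause texts are illustrative:

#Python

def group_archive(items):
    # Store each distinct item once in a dictionary & replace every
    # repeat with a minimal integer group identifier
    dictionary, ids, index = [], [], {}
    for item in items:
        if item not in index:
            index[item] = len(dictionary)
            dictionary.append(item)
        ids.append(index[item])
    return dictionary, ids

clauses = ["common clause", "addendum 1", "common clause", "addendum 2"]
print(group_archive(clauses))
# (['common clause', 'addendum 1', 'addendum 2'], [0, 1, 0, 2])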

Sorting is relatively fast in 128x & 64x & 32x & 16x cubes in a SIMD
vector matrix; aligning information cubes & commons takes seconds.

Many uses exist for these functions & coding excellence is defined by you.

*

Machine Learning Additive D!=S + (D!=S)M

Illness? Dictionary Sort can sort your X-rays before classification &
after ML & classification distillation; good for robots & humans
alike! But most of humanity lacks the means to compile code,

You can essentially use Python, JS or C code to collate, sort &
identify the identical.
You may know this already, but image classifications of identical
items are reduced to commons, like red blood or cell factors, in Python.
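
A hedged sketch of that collate & identify step, using only the
standard library: hashing the raw bytes groups exact duplicates, so
only one representative per group needs the expensive classification
pass. The file names are hypothetical.

#Python

import hashlib
from collections import defaultdict

def group_identical(paths):
    groups = defaultdict(list)
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        groups[digest].append(path)  # byte-identical files share a key
    return groups

# usage: group_identical(["xray_001.png", "xray_002.png"])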

*

Modes of Conduct MS-HO²C<CL Halflife Olive Compile OpenCL : Microsoft Olive

"Model conversion: translates the base models from PyTorch to ONNX.

Transformer graph optimization: fuses subgraphs into multi-head
attention operators and eliminates inefficiencies from conversion.

Quantization: converts most layers from FP32 to FP16 to reduce the
model's GPU memory footprint and improve performance."

https://community.amd.com/t5/ai/how-to-automatic1111-stable-diffusion-webui-with-directml/ba-p/649027

Modes of Conduct MS-HO²C<CL Halflife Olive Compile OpenCL

What we will do: MS-HO²C<CL

Sort memory lists for identical long chains & merge, with shorthand
conversion in the code IDE.

Small cache list: convert & compare double & single precision to half
precision to check for errors on identical matches.

Compare sorted tensor nodes for common paths through the neural net
for identical & similar response mapping..
Shortening paths & interpolating placement & values.
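
A minimal numpy sketch of that half-precision check; the values &
the idea of guarding merges with a round-trip error are illustrative:

#Python

import numpy as np

weights = np.array([0.1000001, 0.1000002, 1.5, 2.5], dtype=np.float32)
half = weights.astype(np.float16)

# Values that collapse to the same half-precision representation are
# merge candidates; the round-trip error guards against bad collapses
unique_half, groups = np.unique(half, return_inverse=True)
error = np.abs(weights - half.astype(np.float32))

print(groups)       # [0 0 1 2] : the first two weights become identical
print(error.max())  # worst-case conversion error, to check against tolerance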

*

ML Classification Bundling for HIM & Her

Sorting bundles by priorities such as:

Time to process, similarity & probability (likelihood); this improves
perception & the thought process,

Logical sort orders..
Required processing order based on sorted requirements (one needs another)
Items that go locally together, { Cleaning, Cooking, Cleanup }
Logical order, { Drink, Power, Computer, Application, Search, Webpage,
Notebook, Read, Write }

Saving data caches it & aids processing; but organising it first makes
retrieval clean & thought clean: Meditation Logic.
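
A hedged sketch of that bundle ordering: dependency depth first (one
needs another), then likelihood, then time to process. The task list &
scores are illustrative, not from the original post.

#Python

tasks = [
    {"name": "Search",      "needs": {"Application"}, "p": 0.9, "time": 1},
    {"name": "Application", "needs": {"Computer"},    "p": 0.8, "time": 1},
    {"name": "Computer",    "needs": {"Power"},       "p": 1.0, "time": 2},
    {"name": "Power",       "needs": set(),           "p": 1.0, "time": 1},
]
by_name = {t["name"]: t for t in tasks}

def depth(task):
    # Required processing order: a task sorts after everything it needs
    if not task["needs"]:
        return 0
    return 1 + max(depth(by_name[n]) for n in task["needs"])

ordered = sorted(tasks, key=lambda t: (depth(t), -t["p"], t["time"]))
print([t["name"] for t in ordered])
# ['Power', 'Computer', 'Application', 'Search']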

Connection specifics for a better brain; classified by type & example:

Human brain cells have 1000 connections, squid 10000; each connection does:

7Bit: regular
8Bit: sharp
9Bit: on better effort
10Bit: on clarity & meditation + hard work
6Bit: relaxed
5Bit: drunk

Connections for dedicated skills such as maths have:

Dedication bundling (multiple connections)
Multiple Affirmations, A-Synchronous, Synchronous

1: 5Bit to 7Bit
2: 5Bit to 18Bit
3: 7Bit to 26Bit
4: 16Bit to 38Bit
5: 17Bit to 48Bit

Eyes, for example, can bundle 5 with training; colour purity..
Lower bundling offers more flexibility,
Higher bundling offers assurance & speed & retention.

RS

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
https://science.n-helix.com/2022/10/ml.html

Python & JS Configurations
https://is.gd/DictionarySortJS
https://is.gd/DictionarySort

*

Considering space, the ARMv7-M is quite popular!

Now adding Nano1+2 SVE, vector matrix & machine learning features
under common extensions is going to help speed up code under emulation!

Memory needs a True Motive Function List Sort : TMF-Sort

Core function swaps & subgroup emulations..
Dictionary Sort can help sort all instruction fetches for the OS;
you may be wondering, but they are classically not sorted by function
grouping & class.. A sketch follows the reference links below.

https://developer.arm.com/documentation/ddi0403/ee/?lang=en
https://documentation-service.arm.com/static/606dc36485368c4c2b1bf62f?token=
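
A speculative sketch of that TMF-Sort, assuming a fetch trace tagged
by function group; the groups, addresses & the two-level sort key are
all illustrative:

#Python

# Each entry: (function group/class, fetch address)
trace = [
    ("mem",  0x4010), ("fpu", 0x8020), ("mem", 0x4000),
    ("simd", 0xC000), ("fpu", 0x8000),
]

# Sort by group first, then address, so fetches from the same function
# class land together & share cache lines
for group, addr in sorted(trace):
    print(f"{group:5s} 0x{addr:04X}")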

*

Classic oscillator : (D!=S)Merge : DSM

An example use of the Dictionary Sort is the classic oscillator spring:
https://blog.research.google/2023/12/a-new-quantum-algorithm-for-classical.html

Now, as they explain, N springs can be represented by log(N) qubits.
Now we have 2 issues: RAM & qubit allocation based on complexity..

With Dictionary Sort & merge we can provide for common spring
vibrations & synchronous spring motions:
RAM- & qubit-merged datasets that allow us to attribute cause & effect
in a single motion.
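
A minimal sketch of the (D!=S)Merge step in that setting: springs with
a common vibration merge into one representative with a multiplicity,
shrinking the dataset before allocation. The frequencies & rounding
tolerance are illustrative.

#Python

from collections import Counter

frequencies = [1.000, 2.500, 1.0001, 2.500, 3.750]

# Merge synchronous/common spring vibrations into (frequency, count)
merged = Counter(round(f, 3) for f in frequencies)
print(merged)  # e.g. Counter({1.0: 2, 2.5: 2, 3.75: 1})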

RS

Bluetooth dongle LE Protocol
https://drive.google.com/file/d/17csRnAfdceZiTSnQZvhaLqLSwL__zsIG/view?usp=sharing

The Matrix Vector

https://science.n-helix.com/2023/06/map.html

https://science.n-helix.com/2022/09/ovccans.html

https://science.n-helix.com/2022/03/ice-ssrtp.html

https://science.n-helix.com/2022/04/vecsr.html

https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

https://science.n-helix.com/2022/02/interrupt-entropy.html

https://github.com/synfosec/packz

Rupert S

Code for sorting and matching a 2D matrix can vary depending on the
specific assembler architecture and compiler being used. Here's a
general outline of the steps involved:

Load and Initialize Data:

Load the 2D matrix into memory, keeping track of its dimensions (rows
and columns).
Initialize an auxiliary array to store the first and last occurrences
of each value in each row.
Initialize an empty array to store the combined matrix.
Initialize a counter for the number of same rows found.

Sort Each Row:

Iterate through each row of the matrix.
For each row, use a sorting algorithm, such as bubble sort or
insertion sort, to sort the elements in ascending order.

Find First and Last Occurrences:

Iterate through each row of the sorted matrix.
For each element in the row, scan the remaining elements in the row to
find its first and last occurrences.
Store the first and last occurrences in the corresponding auxiliary array.

Identify Same Rows:

Iterate through each row of the matrix.
For each row, check if all elements have the same first and last
occurrences as the corresponding elements in the previous row.
If all elements match, mark the row as a "same row" and increment the
counter.

Combine Same Rows:

Allocate memory for the combined matrix based on the number of same
rows found.
Iterate through the same rows, copying the elements of each same row
into the corresponding row of the combined matrix.

Sort Combined Matrix:

Sort the combined matrix using a sorting algorithm.

Update Output Arrays:

Store the sorted combined matrix in the provided output array.
Store the indices of the same rows in the provided output array.


RS

#JS

function sortAndMatch(matrix) {
  // Sort each row non-destructively (Array.prototype.sort mutates in
  // place); the outline above scans the sorted rows, so alias the
  // parameter to the sorted copy & leave the caller's matrix intact
  const sortedMatrix = matrix.map((row) => [...row].sort((a, b) => a - b));
  matrix = sortedMatrix;

  // Find the first occurrence of each value in each row
  const firstOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    const firstOccurrencesRow = {};
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      if (!firstOccurrencesRow.hasOwnProperty(value)) {
        firstOccurrencesRow[value] = j;
      }
    }
    firstOccurrences.push(firstOccurrencesRow);
  }

  // Find the last occurrence of each value in each row
  const lastOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    const lastOccurrencesRow = {};
    for (let j = row.length - 1; j >= 0; j--) {
      const value = row[j];
      if (!lastOccurrencesRow.hasOwnProperty(value)) {
        lastOccurrencesRow[value] = j;
      }
    }
    lastOccurrences.push(lastOccurrencesRow);
  }

  // Find the first and last occurrences of each value in the matrix
  const firstOccurrencesAll = {};
  for (const row of firstOccurrences) {
    for (const value in row) {
      if (!firstOccurrencesAll.hasOwnProperty(value) ||
          firstOccurrencesAll[value] > row[value]) {
        firstOccurrencesAll[value] = row[value];
      }
    }
  }

  const lastOccurrencesAll = {};
  for (const row of lastOccurrences) {
    for (const value in row) {
      if (!lastOccurrencesAll.hasOwnProperty(value) ||
          lastOccurrencesAll[value] < row[value]) {
        lastOccurrencesAll[value] = row[value];
      }
    }
  }

  // Find the rows that contain the same values
  const sameRows = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = matrix[i];
    let same = true;
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      const firstOccurrence = firstOccurrencesAll[value];
      const lastOccurrence = lastOccurrencesAll[value];
      if (firstOccurrence !== lastOccurrence ||
          firstOccurrences[i][value] !== firstOccurrence ||
          lastOccurrences[i][value] !== lastOccurrence) {
        same = false;
        break;
      }
    }
    if (same) {
      sameRows.push(i);
    }
  }

  // Combine the same rows into a single row
  const combinedMatrix = [];
  for (const row of sameRows) {
    combinedMatrix.push(matrix[row]);
  }

  // Sort the combined matrix
  // Again copy each row before sorting to avoid mutating shared rows
  const sortedCombinedMatrix = combinedMatrix.map((row) =>
    [...row].sort((a, b) => a - b));

  return { sortedCombinedMatrix, sameRows };
}

const matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]];
const result = sortAndMatch(matrix);
console.log(result.sortedCombinedMatrix);
console.log(result.sameRows);

#C

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h> /* for the bool flags used below */

typedef struct {
  int value;
  int firstOccurrence;
  int lastOccurrence;
} ValueInfo;

void sortAndMatch(int matrix[][3], int rows, int cols,
                  int** sortedCombinedMatrix, int* sameRows,
                  int* sameRowsCount) {
  // Allocate memory for the first and last occurrences of each value
  // in each row
  ValueInfo** firstOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    firstOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }

  ValueInfo** lastOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    lastOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }

  // Find the first and last occurrences of each value in each row
  for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
      int value = matrix[i][j];
      bool foundFirst = false;

      for (int k = 0; k < cols; k++) {
        if (matrix[i][k] == value && !foundFirst) {
          firstOccurrences[i][j].value = value;
          firstOccurrences[i][j].firstOccurrence = k;
          foundFirst = true;
        }

        // Overwrite on every match so the final write records the
        // last occurrence
        if (matrix[i][k] == value) {
          lastOccurrences[i][j].value = value;
          lastOccurrences[i][j].lastOccurrence = k;
        }
      }
    }
  }

  // Find the first and last occurrences of each value in the matrix
  ValueInfo* firstOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    firstOccurrencesAll[i].value = -1;
    firstOccurrencesAll[i].firstOccurrence = -1;
    firstOccurrencesAll[i].lastOccurrence = -1;
  }

  ValueInfo* lastOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    lastOccurrencesAll[i].value = -1;
    lastOccurrencesAll[i].firstOccurrence = -1;
    lastOccurrencesAll[i].lastOccurrence = -1;
  }

  /* The remaining steps mirror the JS version above: mark rows whose
     first & last occurrences agree, copy them into *sortedCombinedMatrix,
     record their indices in sameRows, set *sameRowsCount, then free
     the working arrays. */

  for (int i = 0; i < rows; i++) {
    free(firstOccurrences[i]);
    free(lastOccurrences[i]);
  }
  free(firstOccurrences);
  free(lastOccurrences);
  free(firstOccurrencesAll);
  free(lastOccurrencesAll);
}

#Python

import numpy as np

def sort_and_match(matrix):
    # Sort each row of the matrix
    sorted_matrix = np.sort(matrix, axis=1)

    # Record, for each element, the column of the first occurrence of
    # its value within the row (np.where lists all matching columns)
    first_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]):
            first_occurrences[i, j] = np.where(matrix[i] == matrix[i, j])[0][0]

    # Record, for each element, the column of the last occurrence of
    # its value within the row
    last_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]):
            last_occurrences[i, j] = np.where(matrix[i] == matrix[i, j])[0][-1]

    # Find the first and last occurrences of each value in the matrix
    first_occurrences_all = np.min(first_occurrences, axis=0)
    last_occurrences_all = np.max(last_occurrences, axis=0)

    # Find the rows that contain the same values
    same_rows = []
    for i in range(matrix.shape[0]):
        if np.all(first_occurrences[i, :] == last_occurrences[i, :]):
            same_rows.append(i)

    # Combine the same rows into a single row
    combined_matrix = np.zeros((len(same_rows), matrix.shape[1]))
    for i, row in enumerate(same_rows):
        combined_matrix[i, :] = matrix[row, :]

    # Sort the combined matrix
    sorted_combined_matrix = np.sort(combined_matrix, axis=1)

    return sorted_combined_matrix, same_rows

matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
sorted_combined_matrix, same_rows = sort_and_match(matrix)
print(sorted_combined_matrix)
print(same_rows)


***************************

Coding for 200-epoch machine learning: device query & problem solving

Running machine learning (ML) models for 200 epochs can be
computationally expensive, so it's important to consider the hardware
and resources you have available. Here are some general
recommendations for running ML models with 200 epochs:

CPU: For smaller models or simpler tasks, you may be able to get away
with using a CPU. However, for larger or more complex models, you'll
likely need a GPU to provide the necessary processing power.

RAM: You'll also need to have enough RAM to store the model and the
data you're training it on. For 200 epochs, it's a good idea to have
at least 16GB of RAM, and preferably 32GB or more.

Storage: You'll need enough storage space for the model, the data
you're training it on, and any intermediate results such as
checkpoints and logs. The total depends mostly on the dataset and how
often you checkpoint; for a long 200-epoch run, 1TB is a comfortable
margin, and 2TB or more gives headroom.
In addition to hardware, you'll also need to consider the following
factors when running ML models with 200 epochs:

Model complexity: The more complex your model is, the longer it will
take to train it. This is because the model has more parameters to
optimize, which requires more computational power.

Data size: The more data you have to train your model on, the longer
it will take. This is because the model has to process more data to
find patterns and learn from it.

Learning rate: The learning rate is a parameter that controls how
quickly the model learns. A higher learning rate will speed up
training, but it may also overshoot and destabilise it. A lower
learning rate will be slower, but it is more likely to settle on a
good solution.

If you're running into problems with running ML models with 200
epochs, there are a few things you can try:

Reduce the model complexity: If your model is too complex, it may be
taking too long to train. Try simplifying the model by removing some
of its layers or neurons.

Reduce the data size: If you have a lot of data, you may be able to
speed up training by reducing the amount of data you use.

Try using a smaller subset of the data to start with, or use a data
augmentation technique so that a smaller set still covers enough
variation.

Adjust the learning rate: The learning rate is a crucial parameter for
training ML models. If you're not seeing results after 200 epochs, you
may need to adjust the learning rate. Try increasing or decreasing the
learning rate to see if it improves the training process.
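
For instance, with Keras (as used in the Python example below), the
rate can be set by passing an explicit optimizer object instead of the
string 'adam'; the value 1e-4 here is only an example starting point.

#Python

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])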

By carefully considering your hardware, resources, and model
parameters, you can make sure that your ML models are trained
efficiently and accurately.

**********

Python code for 200-epoch machine learning: device query & problem solving

#Python Machine learning core code

import tensorflow as tf
from tensorflow.keras.layers import (Dense, Flatten, Conv2D,
                                     MaxPooling2D, Dropout)

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

x_train = x_train.reshape(-1, 28, 28, 1)  # Add a channel dimension
x_train = x_train.astype('float32')
x_train /= 255  # Normalize values between 0 and 1

x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test.astype('float32')
x_test /= 255

model = tf.keras.Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(pool_size=(2, 2)),

    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),

    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=200)

test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

#JS Machine learning core code

// NB: 'ndarrayjs' & 'jsml' serve as illustrative stand-ins here; the
// calls below assume a simple Tensor/LinearRegression-style API
const { Tensor } = require('ndarrayjs');
const LinearRegression = require('jsml').LinearRegression;

// Load the dataset
const data = [
  [1, 2],
  [2, 3],
  [3, 4],
];

// Split the data into training and testing sets
const X = data.map(x => Tensor([-1, x[0]]).transpose());
const Y = data.map(y => Tensor([y]));
const trainingData = { X, Y };

// Create the machine learning model
const model = new LinearRegression();

// Train the model
for (let i = 0; i < 200; i++) {
  model.fit(trainingData);
}

// Evaluate the model
const testData = [[4], [5]];
const X_test = testData.map(x => Tensor([-1, x[0]]).transpose());
const Y_pred = model.predict(X_test);

console.log('Predicted values:');
console.log(Y_pred);

#C Machine learning core code

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

typedef struct Neuron {
  float *weights;
  float bias;
} Neuron;

typedef struct Layer {
  Neuron *neurons;
  int numNeurons;
} Layer;

typedef struct Network {
  Layer *layers;
  int numLayers;
} Network;

/* Assumed helpers (not shown): the loaders read a CSV column into a
   float array and report its length; predict runs a forward pass. */
float *loadDataX(const char *path, int *count);
float *loadDataY(const char *path, int *count);
float predict(float x);

int main(void) {
  srand((unsigned)time(NULL));

  // Load training data from CSV files
  int dataLen = 0, labelLen = 0;
  float *dataX = loadDataX("training_data.csv", &dataLen);
  float *dataY = loadDataY("training_labels.csv", &labelLen);

  // Shuffle data to improve model generalization
  for (int i = 0; i < dataLen; i++) {
    int j = rand() % dataLen;
    float tempX = dataX[i];
    float tempY = dataY[i];
    dataX[i] = dataX[j];
    dataY[i] = dataY[j];
    dataX[j] = tempX;
    dataY[j] = tempY;
  }

  for (int epoch = 0; epoch < 200; epoch++) {
    // Forward Pass
    // ...

    // Backpropagation
    // ...

    // Update weights and biases
    // ...
  }

  // Load testing data from CSV files
  int testLen = 0, testLabelLen = 0;
  float *testDataX = loadDataX("testing_data.csv", &testLen);
  float *testDataY = loadDataY("testing_labels.csv", &testLabelLen);

  // Calculate accuracy
  int numCorrect = 0;
  for (int i = 0; i < testLen; i++) {
    float predictedY = predict(testDataX[i]);
    if (predictedY == testDataY[i]) {
      numCorrect++;
    }
  }

  float accuracy = (float)numCorrect / testLen;
  printf("Accuracy: %f\n", accuracy);
  return 0;
}
