Subject: The Voyager Tensor Expression 64K (c)RS "Voyager knows of no bounds for surely you have done all the traveling & left but Nothing but footprints in the snows of chaos."
From: Duke Abbaddon @ 2023-12-20 23:09 UTC
To: Media

Problems Voyager faces: So NASA & ESA and the international UN, you
may be wondering: how on Earth could we ZFS an old Voyager MIPS?
Linux has answers! 20:01 17/12/2023

ZFS & ZLib are very applicable to systems such as Voyager 1 & 2 &
newer; the new innovations in ZLib & ZFS mean they can be applied to
MIPS (satellites like Voyager).
Compressed RAM & clean ROMs mean more RAM to use at a small
processing cost, & the ability to use minimised tensors such as JS &
NumPy https://is.gd/DictionarySortJS
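
A minimal sketch of the compressed-RAM idea, using Python's standard
zlib module (the 4KB page size & the telemetry-style payload are
illustrative assumptions, not Voyager values):

#Python compressed-RAM sketch

import zlib

PAGE_SIZE = 4096  # assumed page size, in bytes

# Illustrative payload: repetitive telemetry-like data compresses well.
page = (b"TEMP=0042;VOLT=0030;" * 205)[:PAGE_SIZE]

compressed = zlib.compress(page, level=9)
print(f"raw page: {len(page)} bytes, compressed: {len(compressed)} bytes")

# Pages are decompressed on demand, trading CPU time for spare RAM.
restored = zlib.decompress(compressed)
assert restored == page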

1:
Electrophiles & the act of cable; The sun, as you probably understand,
is a high-energy electron place.

Those who understand tokamaks already know that tokamaks are a source
of high-voltage, negatively charged free electrons.

Any reaction of the nuclear type has the effect of releasing &
potentially receiving electrons.

High-heat, dry environments also send electrons, because water
generally ionises & therefore absorbs the energy of the electron.

However, ionised water is also actively ionic & briefly releases
hydrogen & oxygen in ionised states.

2:
Now, in reference to our Voyager satellites: deep space is a massive
source of ionic interference!

Inferencing may restore Voyager's sensibility! However, as far as I
know, Voyager has 68KB of RAM & a tape drive; those tape drives must
not wear out, or we could use a swap file or drive to enhance the RAM.

Now we could! But shall we?

Now we can use NumPy, for I have heard it claimed to do tensors! Now,
NumPy is a 4MB file? In assembler, with minimal compiled features &
rooflined from a Server HPC, we could maybe fit NumPy into an
operating system of, say, 15KB to 35KB! But is that true?

How long does tape loading last, in time & load & wearing fabric?

I dreadfully have a tiny bit of experience with the Commodore 64 &,
for that matter, the Atari cassette, cartridge & ROM.

So unless we send a fast recovery interceptor millions of miles, we
could not get a cassette to Voyager 1 & 2.

But we can now!

ZLib & ZFS have been recently improved with Py Directory Sort

https://is.gd/CJS_DictionarySort
Python & JS Configurations
https://is.gd/DictionarySortJS

So we can use the ZLib, GZip, Brotli & BZip compression libraries &
application loading from tape!
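
Brotli is not in the Python standard library, but the stdlib codecs
can be compared directly; a small sketch (the repetitive payload is
illustrative, & ratios will vary with real data):

#Python codec comparison sketch

import bz2, gzip, lzma, zlib

payload = b"VOYAGER TELEMETRY FRAME 0001 " * 512  # illustrative data

codecs = {
    "zlib": lambda d: zlib.compress(d, 9),
    "gzip": lambda d: gzip.compress(d, 9),
    "bz2": lambda d: bz2.compress(d, 9),
    "lzma": lambda d: lzma.compress(d),
}
for name, fn in codecs.items():
    print(f"{name}: {len(payload)} bytes -> {len(fn(payload))} bytes")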

So NASA & ESA and the international UN, you may be wondering: how on
Earth could we ZFS an old Voyager MIPS? Linux has answers!

3:

Problems voyager faces:

Ion storms
Micro meteors
Micro Black holes
UFO
BORG ôo

So what can we do with light? #DirectlySeeSend&Receive #Energiser #Cel414

4:
Laser space communications. Now you might be wondering: Voyager?
Laser comms? Sure?!

But we can convert a readable signal from a solar panel into an input
receptor for a signal recorded to TAPE or RAM, or direct PIO/DMA to a
storage medium!

So how could we use this today ?

We could light-beam past the Moon to L2 & directly to the solar
panel & have a message!
We can laser, radio or radar burst back or to there? Yes.
We could power on systems with sustained light redirection? Yes.
We could signal with flickered light from a mirror that has LED
crystal darkeners (such as an LED TV)?! Yes (see the sketch below).

We can send energy & therefore receive it from powered-on systems!
But from how far? That depends on the size & accuracy of the dish!
Even a Hubble could do that; the Romans sure could/would.
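
A toy sketch of that flicker signalling: on-off keying, where a
solar-panel voltage trace is thresholded back into bits (the sample
rate, threshold & trace here are all illustrative assumptions):

#Python on-off keying sketch

# Decode on-off-keyed light pulses from a synthetic solar-panel trace.
samples_per_bit = 4   # assumed: 4 voltage readings per bit period
threshold = 0.5       # assumed volts separating "dark" from "lit"

# Synthetic trace encoding the bits 1, 0, 1, 1 as panel voltages.
trace = [0.9] * 4 + [0.1] * 4 + [0.8] * 4 + [0.9] * 4

bits = []
for i in range(0, len(trace), samples_per_bit):
    window = trace[i:i + samples_per_bit]
    average = sum(window) / len(window)
    bits.append(1 if average > threshold else 0)

print(bits)  # -> [1, 0, 1, 1]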

So what can we do with light? #DirectlySeeSend&Receive #Energiser #Cel414

Rupert S 2023-12

*****

The Voyager Tensor Expression 64K (c)RS

"Voyager knows of no bounds for surely you have done all the traveling
& left but Nothing but footprints in the snows of chaos."

The thing about tensors is that while Python & JS & C can run & train
them, the tensor configuration itself can be run compressed inside a
62KB ZLib/ZFS image.

Now, running a very tight tensor configuration on the budget of a
Voyager or a Gameboy, on a 3-CPU set? Entirely possible!

But we have to keep the logic pure maths to really run a full POWER
CPU, if we have AVX, Vector or Nano & Matrix Array.

Now, we will be training the code on a 32-bit/16-bit float (F16b &
F32) full-precision array of GPU & NPU & Matrix!

However, we will be funneling the final version through a test-pattern
grid & dumping it into a 64KB, fully memory-resident array, with
paging for new tasks in 4KB chunks of ZLib-compressed data.

Total footprint under 1MB per data transfer.
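
A minimal sketch of that 64KB-resident, 4KB-paged layout (the page &
budget sizes come from the text above; the weights blob, the dict
pager & the FIFO eviction are assumptions for illustration):

#Python 64KB paging sketch

import zlib

PAGE = 4096            # 4KB chunks, as above
BUDGET = 64 * 1024     # 64KB memory-resident budget

# Hypothetical model weights, stored as zlib-compressed 4KB pages.
weights = bytes(range(256)) * 1024  # illustrative 256KB blob
pages = [zlib.compress(weights[i:i + PAGE], 9)
         for i in range(0, len(weights), PAGE)]

resident = {}          # page index -> decompressed bytes

def fetch(n):
    # Decompress page n on demand; evict oldest pages past the budget.
    if n not in resident:
        resident[n] = zlib.decompress(pages[n])
        while sum(map(len, resident.values())) > BUDGET:
            resident.pop(next(iter(resident)))
    return resident[n]

chunk = fetch(3)
assert chunk == weights[3 * PAGE:4 * PAGE]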

Voyager knows of no bounds for surely you have done all the traveling
& left but Nothing but footprints in the snows of chaos.

Tensors C, JS & NumPy https://is.gd/DictionarySortJS

Rupert Summerskill

***************

Python 200 epoch machine learning device query & problem solve

Running machine learning (ML) models for 200 epochs can be
computationally expensive, so it's important to consider the hardware
and resources you have available. Here are some general
recommendations for running ML models with 200 epochs:

CPU: For smaller models or simpler tasks, you may be able to get away
with using a CPU. However, for larger or more complex models, you'll
likely need a GPU to provide the necessary processing power.

RAM: You'll also need to have enough RAM to store the model and the
data you're training it on. For 200 epochs, it's a good idea to have
at least 16GB of RAM, and preferably 32GB or more.

Storage: You'll need enough storage space to store the model, the data
you're training it on, and any intermediate results. For 200 epochs,
you'll likely need at least 1TB of storage, and preferably 2TB or
more.

In addition to hardware, you'll also need to consider the following
factors when running ML models with 200 epochs:

Model complexity: The more complex your model is, the longer it will
take to train it. This is because the model has more parameters to
optimize, which requires more computational power.

Data size: The more data you have to train your model on, the longer
it will take. This is because the model has to process more data to
find patterns and learn from it.

Learning rate: The learning rate is a parameter that controls how
quickly the model learns. A higher learning rate will speed up
training, but it may also overshoot & destabilise training. A lower
learning rate will be slower, but it may be more likely to find a
good solution.

If you're running into problems with running ML models with 200
epochs, there are a few things you can try:

Reduce the model complexity: If your model is too complex, it may be
taking too long to train. Try simplifying the model by removing some
of its layers or neurons.

Reduce the data size: If you have a lot of data, you may be able to
speed up training by reducing the amount of data you use.

Try using a smaller subset of the data to start with, or try using a
data augmentation technique to create more data.

Adjust the learning rate: The learning rate is a crucial parameter for
training ML models. If you're not seeing results after 200 epochs, you
may need to adjust the learning rate. Try increasing or decreasing the
learning rate to see if it improves the training process.
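
For the Keras model shown in the next section, adjusting the learning
rate means passing an optimizer instance instead of the string 'adam'
(the 1e-4 value is only an example starting point, & the one-layer
model here is a stand-in):

#Python learning-rate adjustment sketch

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])

# An explicit optimizer instance exposes the learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])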

By carefully considering your hardware, resources, and model
parameters, you can make sure that your ML models are trained
efficiently and accurately.

Python code for 200 epoch machine learning device query & problem solve

#Python Machine learning core code

import tensorflow as tf
from tensorflow.keras.layers import (Dense, Flatten, Conv2D,
                                     MaxPooling2D, Dropout)

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

x_train = x_train.reshape(-1, 28, 28, 1)  # Add a channel dimension
x_train = x_train.astype('float32')
x_train /= 255  # Normalize values between 0 and 1

x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test.astype('float32')
x_test /= 255

model = tf.keras.Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(pool_size=(2, 2)),

    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),

    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=200)

test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

#JS Machine learning core code

// The 'ndarrayjs' & 'jsml' packages from the original draft are not
// standard; this dependency-free stand-in trains the same toy linear
// regression by gradient descent for 200 epochs.

// Load the dataset: pairs of [x, y]
const data = [
  [1, 2],
  [2, 3],
  [3, 4],
];

// Model parameters & learning rate
let w = 0;
let b = 0;
const lr = 0.01;

// Train the model
for (let epoch = 0; epoch < 200; epoch++) {
  let gradW = 0;
  let gradB = 0;
  for (const [x, y] of data) {
    const error = w * x + b - y; // prediction minus target
    gradW += error * x;
    gradB += error;
  }
  w -= (lr * gradW) / data.length;
  b -= (lr * gradB) / data.length;
}

// Evaluate the model
const testData = [4, 5];
const predictions = testData.map((x) => w * x + b);

console.log('Predicted values:');
console.log(predictions);

#C Machine learning core code

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

typedef struct Neuron {
  float *weights;
  float bias;
} Neuron;

typedef struct Layer {
  Neuron *neurons;
  int numNeurons;
} Layer;

typedef struct Network {
  Layer *layers;
  int numLayers;
} Network;

/* Assumed helpers, declared but not implemented here: they load a CSV
   column into a heap array (returning the count via *count) & run the
   trained network on one input. */
float *loadDataX(const char *path, int *count);
float *loadDataY(const char *path, int *count);
float predict(float x);

int main(void) {
  srand((unsigned)time(NULL));

  /* Load training data from CSV files */
  int dataLen = 0, labelLen = 0;
  float *dataX = loadDataX("training_data.csv", &dataLen);
  float *dataY = loadDataY("training_labels.csv", &labelLen);

  /* Shuffle data to improve model generalization */
  for (int i = 0; i < dataLen; i++) {
    int j = rand() % dataLen;
    float tempX = dataX[i];
    float tempY = dataY[i];
    dataX[i] = dataX[j];
    dataY[i] = dataY[j];
    dataX[j] = tempX;
    dataY[j] = tempY;
  }

  for (int epoch = 0; epoch < 200; epoch++) {
    /* Forward pass */
    /* ... */

    /* Backpropagation */
    /* ... */

    /* Update weights and biases */
    /* ... */
  }

  /* Load testing data from CSV files */
  int testLen = 0, testLabelLen = 0;
  float *testDataX = loadDataX("testing_data.csv", &testLen);
  float *testDataY = loadDataY("testing_labels.csv", &testLabelLen);

  /* Calculate accuracy */
  int numCorrect = 0;
  for (int i = 0; i < testLen; i++) {
    float predictedY = predict(testDataX[i]);
    if (predictedY == testDataY[i]) {
      numCorrect++;
    }
  }

  float accuracy = (float)numCorrect / testLen;
  printf("Accuracy: %f\n", accuracy);
  return 0;
}

***************

Compression, Dictionary Sort & Same Size Copy Match & Unite Same with
location in 2D Matrix #JS #C #Python RS 2023

https://is.gd/CJS_DictionarySort

Python & JS Configurations
https://is.gd/DictionarySortJS

The code appears complex; but there you go! In assembler it may be
15KB to 64KB; I am not entirely sure.
However, the objective is for ZSTD, GZIP, Brotli & codecs such as DSC.

The compression rate improves, & so does the quality of the compression!
Depending on the encryption type, you may also improve the complexity
by deviating from commons or by compressing better first.

CPU + cache, tasks & lists & tables & compressed data, task RAM &
storage: ordering data sets.
In reference to:
https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

Sorting memory-load arrays so they align neatly into a block;
accounting dates into the same ID & order; ordered identical IDs;
taxing blocks.

Accounting for block loads & saves so that blocks in the same
locations & of the same size are loaded & stored neatly.

Aligned minimal block formations save space on block systems such as
NTFS & EFS & Advanced Format, with extended directories such as on the
Amiga & Mac.

The ordering of NTFS, EFS & ZFS directory tables & data caches, & the
ordering of all RAM load/save cycles in contexts such as
storage access & writing, boot-partition ordering, etcetera.

These & many more uses, depending on speed & parallel code ordering &
identification unification,

depending entirely on block size & efficiency of sort & collation grouping.
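
A small sketch of the block-alignment point: records padded out to
block boundaries load & store on clean offsets (the 4096-byte block
size & the record sizes are illustrative assumptions):

#Python block alignment sketch

BLOCK = 4096  # assumed filesystem block size, in bytes

records = [b"A" * 1000, b"B" * 3000, b"C" * 500]

# Pack each record at the next block boundary; round sizes up to blocks.
offset = 0
layout = []
for record in records:
    layout.append((offset, len(record)))
    blocks_needed = -(-len(record) // BLOCK)  # ceiling division
    offset += blocks_needed * BLOCK

for start, size in layout:
    print(f"record of {size:>5} bytes at offset {start:>6} "
          f"(block {start // BLOCK})")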

*

Data flows & parallel streams; Identical RAM & Storage workloads..

By ordering the data groups into dictionary content with minimal group
identifiers, the code then becomes a compression library, or a group
archiver, or a shorthand writer with unified comment commits such as
legal point 1, 2, N amendments & common informations.

Sorting is relatively fast in 128x, 64x, 32x & 16x cubes in a SIMD
vector matrix; aligning information cubes & commons: seconds.
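
A tiny sketch of that dictionary move: repeated strings are replaced
with short group identifiers, which is the core of any dictionary
coder (the record strings here are illustrative):

#Python dictionary identifier sketch

# Replace repeated strings with small integer group IDs.
records = ["legal point 1", "legal point 2", "legal point 1",
           "common information", "legal point 2", "legal point 1"]

dictionary = {}  # string -> small integer ID
encoded = []
for item in records:
    if item not in dictionary:
        dictionary[item] = len(dictionary)
    encoded.append(dictionary[item])

print(dictionary)  # {'legal point 1': 0, 'legal point 2': 1, ...}
print(encoded)     # [0, 1, 0, 2, 1, 0]

# Decoding inverts the mapping.
inverse = {v: k for k, v in dictionary.items()}
assert [inverse[i] for i in encoded] == records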

Many uses exist for these functions & coding excellence is defined by you.

*

Machine Learning Additive D!=S + (D!=S)M

Illness? Dictionary Sort can sort your X-rays before classification &
after ML & classification distillation; good for robots & humans
alike! But most of humanity lacks code compilation.

You can essentially use Python, JS, C code to collate & sort &
identify the identical.
You may know, but image classifications of identical images are
reduced to commons, like red blood or cell factors, in Python.
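
One grounded way to identify the identical is content hashing; a
minimal sketch with Python's standard hashlib (the "xrays" directory
& the .png glob are placeholders):

#Python identical-image grouping sketch

import hashlib
from pathlib import Path

# Group byte-identical images under one digest key.
groups = {}
for path in Path("xrays").glob("*.png"):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    groups.setdefault(digest, []).append(path.name)

# Any group with more than one member is a set of identical files.
for digest, names in groups.items():
    if len(names) > 1:
        print(digest[:12], "->", names)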

*

Modes of Conduct MS-HO²C<CL Halflife Olive Compile OpenCL : Microsoft Olive

"Model conversion: translates the base models from PyTorch to ONNX.

Transformer graph optimization: fuses subgraphs into multi-head
attention operators and eliminates inefficiencies from conversion.

Quantization: converts most layers from FP32 to FP16 to reduce the
model's GPU memory footprint and improve performance."

https://community.amd.com/t5/ai/how-to-automatic1111-stable-diffusion-webui-with-directml/ba-p/649027

Modes of Conduct MS-HO²C<CL Halflife Olive Compile OpenCL

What we will do: MS-HO²C<CL

Sort memory lists for identical long chains & merge, with shorthand
conversion, in a code IDE.

Small cache list: convert & compare double & single precision to half,
to check for errors on identical match.

Compare sorted tensor nodes for common paths through the neural net,
for identical & similar response mapping,
shortening paths & interpolating placement & values.
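
A minimal sketch of the precision-compare step with NumPy: cast FP32
values to FP16 & back, then flag entries whose round-trip error stays
inside a tolerance (the tolerance & random weights are illustrative):

#Python FP32-to-FP16 check sketch

import numpy as np

rng = np.random.default_rng(0)
weights32 = rng.standard_normal(8).astype(np.float32)

# Down-convert to half precision & back, then measure round-trip error.
weights16 = weights32.astype(np.float16)
roundtrip = weights16.astype(np.float32)
error = np.abs(weights32 - roundtrip)

tolerance = 1e-3  # illustrative threshold
safe = error < tolerance
print("max round-trip error:", error.max())
print("entries safe to keep in FP16:", int(safe.sum()), "of", safe.size)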

*

Classic oscillator : (D!=S)Merge : DSM

An example use of the Dictionary Sort is the classic oscillating spring:
https://blog.research.google/2023/12/a-new-quantum-algorithm-for-classical.html

Now, as they explain, N springs are represented by log(N) qubits. Now
we have 2 issues: RAM & qubit allocation based on complexity.

With Dictionary Sort & merge we can provide for common spring
vibrations & synchronous spring motions:
RAM- & qubit-merged datasets that allow us to attribute cause & effect
in a single motion.
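
A toy sketch of that merge: springs sharing a vibration signature
collapse into one record, so one entry stands for every synchronous
spring (the signature fields are illustrative assumptions):

#Python spring merge sketch

# Merge springs that share a vibration signature into one record.
springs = [
    {"id": 0, "freq_hz": 2.0, "phase": 0.0},
    {"id": 1, "freq_hz": 2.0, "phase": 0.0},  # synchronous with spring 0
    {"id": 2, "freq_hz": 3.5, "phase": 0.1},
]

merged = {}
for s in springs:
    key = (s["freq_hz"], s["phase"])  # the shared signature
    merged.setdefault(key, []).append(s["id"])

print(merged)  # {(2.0, 0.0): [0, 1], (3.5, 0.1): [2]}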

RS

Bluetooth dongle LE Protocol
https://drive.google.com/file/d/17csRnAfdceZiTSnQZvhaLqLSwL__zsIG/view?usp=sharing

The Matrix Vector

https://science.n-helix.com/2023/06/map.html

https://science.n-helix.com/2022/09/ovccans.html

https://science.n-helix.com/2022/03/ice-ssrtp.html

https://science.n-helix.com/2022/04/vecsr.html

https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

https://science.n-helix.com/2022/02/interrupt-entropy.html

https://github.com/synfosec/packz

Rupert S

Code for sorting and matching a 2D matrix can vary depending on the
specific assembler architecture and compiler being used. Here's a
general outline of the steps involved:

Load and Initialize Data:

Load the 2D matrix into memory, keeping track of its dimensions (rows
and columns).
Initialize an auxiliary array to store the first and last occurrences
of each value in each row.
Initialize an empty array to store the combined matrix.
Initialize a counter for the number of same rows found.

Sort Each Row:

Iterate through each row of the matrix.
For each row, use a sorting algorithm, such as bubble sort or
insertion sort, to sort the elements in ascending order.

Find First and Last Occurrences:

Iterate through each row of the sorted matrix.
For each element in the row, scan the remaining elements in the row to
find its first and last occurrences.
Store the first and last occurrences in the corresponding auxiliary array.

Identify Same Rows:

Iterate through each row of the matrix.
For each row, check if all elements have the same first and last
occurrences as the corresponding elements in the previous row.
If all elements match, mark the row as a "same row" and increment the counter.

Combine Same Rows:

Allocate memory for the combined matrix based on the number of same rows found.
Iterate through the same rows, copying the elements of each same row
into the corresponding row of the combined matrix.

Sort Combined Matrix:

Sort the combined matrix using a sorting algorithm.

Update Output Arrays:

Store the sorted combined matrix in the provided output array.
Store the indices of the same rows in the provided output array.


RS

#JS

function sortAndMatch(matrix) {
  // Sort a copy of each row so the input matrix is not mutated
  const sortedMatrix = matrix.map((row) => [...row].sort((a, b) => a - b));

  // Find the first occurrence of each value in each row
  const firstOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = sortedMatrix[i]; // scan the sorted rows, per the outline
    const firstOccurrencesRow = {};
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      if (!firstOccurrencesRow.hasOwnProperty(value)) {
        firstOccurrencesRow[value] = j;
      }
    }
    firstOccurrences.push(firstOccurrencesRow);
  }

  // Find the last occurrence of each value in each row
  const lastOccurrences = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = sortedMatrix[i];
    const lastOccurrencesRow = {};
    for (let j = row.length - 1; j >= 0; j--) {
      const value = row[j];
      if (!lastOccurrencesRow.hasOwnProperty(value)) {
        lastOccurrencesRow[value] = j;
      }
    }
    lastOccurrences.push(lastOccurrencesRow);
  }

  // Find the first and last occurrences of each value in the matrix
  const firstOccurrencesAll = {};
  for (const row of firstOccurrences) {
    for (const value in row) {
      if (!firstOccurrencesAll.hasOwnProperty(value) ||
          firstOccurrencesAll[value] > row[value]) {
        firstOccurrencesAll[value] = row[value];
      }
    }
  }

  const lastOccurrencesAll = {};
  for (const row of lastOccurrences) {
    for (const value in row) {
      if (!lastOccurrencesAll.hasOwnProperty(value) ||
          lastOccurrencesAll[value] < row[value]) {
        lastOccurrencesAll[value] = row[value];
      }
    }
  }

  // Find the rows that contain the same values
  const sameRows = [];
  for (let i = 0; i < matrix.length; i++) {
    const row = sortedMatrix[i];
    let same = true;
    for (let j = 0; j < row.length; j++) {
      const value = row[j];
      const firstOccurrence = firstOccurrencesAll[value];
      const lastOccurrence = lastOccurrencesAll[value];
      if (firstOccurrence !== lastOccurrence ||
          firstOccurrences[i][value] !== firstOccurrence ||
          lastOccurrences[i][value] !== lastOccurrence) {
        same = false;
        break;
      }
    }
    if (same) {
      sameRows.push(i);
    }
  }

  // Combine the same rows into a single row
  const combinedMatrix = [];
  for (const row of sameRows) {
    combinedMatrix.push(matrix[row]);
  }

  // Sort a copy of the combined matrix (again without mutating rows)
  const sortedCombinedMatrix = combinedMatrix.map((row) =>
    [...row].sort((a, b) => a - b));

  return { sortedCombinedMatrix, sameRows };
}

const matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]];
const result = sortAndMatch(matrix);
console.log(result.sortedCombinedMatrix);
console.log(result.sameRows);

#C

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>  /* for the bool flags used below */

typedef struct {
  int value;
  int firstOccurrence;
  int lastOccurrence;
} ValueInfo;

void sortAndMatch(int matrix[][3], int rows, int cols,
                  int** sortedCombinedMatrix, int* sameRows,
                  int* sameRowsCount) {
  // Allocate memory for the first and last occurrences of each value
  // in each row
  ValueInfo** firstOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    firstOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }

  ValueInfo** lastOccurrences = (ValueInfo**)malloc(sizeof(ValueInfo*) * rows);
  for (int i = 0; i < rows; i++) {
    lastOccurrences[i] = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  }

  // Find the first and last occurrences of each value in each row
  for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
      int value = matrix[i][j];
      bool foundFirst = false;

      for (int k = 0; k < cols; k++) {
        if (matrix[i][k] == value && !foundFirst) {
          firstOccurrences[i][j].value = value;
          firstOccurrences[i][j].firstOccurrence = k;
          foundFirst = true;
        }

        /* Keep overwriting so the final matching k is recorded; the
           original "!foundLast" guard stopped at the first match. */
        if (matrix[i][k] == value) {
          lastOccurrences[i][j].value = value;
          lastOccurrences[i][j].lastOccurrence = k;
        }
      }
    }
  }

  // Find the first and last occurrences of each value in the matrix
  ValueInfo* firstOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    firstOccurrencesAll[i].value = -1;
    firstOccurrencesAll[i].firstOccurrence = -1;
    firstOccurrencesAll[i].lastOccurrence = -1;
  }

  ValueInfo* lastOccurrencesAll = (ValueInfo*)malloc(sizeof(ValueInfo) * cols);
  for (int i = 0; i < cols; i++) {
    lastOccurrencesAll[i].value = -1;
    lastOccurrencesAll[i].firstOccurrence = -1;
    lastOccurrencesAll[i].lastOccurrence = -1;
  }

  /* The original message breaks off at this point; the remaining steps
     (identifying, combining & sorting the same rows) follow the JS
     version above. */
}

#Python

import numpy as np

def sort_and_match(matrix):
    # Sort each row of the matrix
    sorted_matrix = np.sort(matrix, axis=1)

    # Find the first occurrence of each value in each row
    # (check against the row's earlier values, not the index array)
    first_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]):
            if matrix[i, j] not in matrix[i, :j]:
                first_occurrences[i, j] = j

    # Find the last occurrence of each value in each row
    last_occurrences = np.zeros_like(matrix)
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]-1, -1, -1):
            if matrix[i, j] not in matrix[i, j+1:]:
                last_occurrences[i, j] = j

    # Find the first and last occurrences of each value in the matrix
    first_occurrences_all = np.min(first_occurrences, axis=0)
    last_occurrences_all = np.max(last_occurrences, axis=0)

    # Find the rows that contain the same values
    same_rows = []
    for i in range(matrix.shape[0]):
        if np.all(first_occurrences[i, :] == last_occurrences[i, :]):
            same_rows.append(i)

    # Combine the same rows into a single row
    combined_matrix = np.zeros((len(same_rows), matrix.shape[1]))
    for i, row in enumerate(same_rows):
        combined_matrix[i, :] = matrix[row, :]

    # Sort the combined matrix
    sorted_combined_matrix = np.sort(combined_matrix, axis=1)

    return sorted_combined_matrix, same_rows

matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
sorted_combined_matrix, same_rows = sort_and_match(matrix)
print(sorted_combined_matrix)
print(same_rows)
