Models

Run pre-built machine learning models with a single function call!

Pre-Built Models:

linreg - Implements linear regression.

logreg - Implements logistic regression.

nn - Implements a neural network for classification tasks.

Sample Usage:

import numpy as np
import pandas as pd

from nue import nn

# Pre-processing data
data = pd.read_csv('examples/data/mnist_train.csv')
data = np.array(data)  # (60000, 785)
Y_train = data[:, 0].reshape(1, -1)  # (1, 60000)
X_train = data[:, 1:].T / 255  # (784, 60000)

# Instantiating the class
model = nn.NN(X_train, Y_train, 784, 10, 32, 0.1, 1000)

# Single function call
model.model()

See more in the Examples.

Important

Each model expects input data and target labels in the following shape:

Input Features: (n, m), where n is the number of features and m is the number of samples.

Target Labels: (1, m), where m is the number of samples.
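If your data starts out as (samples, features), as CSVs typically load, transpose it before training. A minimal sketch (the array names here are illustrative):

import numpy as np

rows = np.random.rand(500, 10)  # (m, n): 500 samples, 10 features each
targets = np.random.randint(0, 2, 500)  # (m,) labels

X = rows.T  # (n, m) = (10, 500)
Y = targets.reshape(1, -1)  # (1, m) = (1, 500)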

models.linreg

A base class for implementing linear regression!

class models.linreg.LinearRegression(input, labels, num_features, alpha, epochs)[source]

Bases: object

Parameters:
  • input (numpy.ndarray) – The input data of shape (n, m), where n is the number of features and m is the number of samples

  • labels (numpy.ndarray) – The target labels of shape (1, m), where m is the number of samples

  • num_features (int) – The total number of features in the input data per sample

  • alpha (float) – The learning rate for gradient descent

  • epochs (int) – The number of epochs for training
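Putting these parameters together, a minimal usage sketch on synthetic data (the import path is assumed by analogy with the `from nue import nn` sample above, and the data is illustrative):

import numpy as np
from nue import linreg  # import path assumed

X_train = np.random.rand(3, 200)  # (n, m): 3 features, 200 samples
Y_train = (2 * X_train.sum(axis=0) + 1).reshape(1, -1)  # (1, m) targets

model = linreg.LinearRegression(X_train, Y_train, 3, 0.01, 500)
w, b = model.model()  # final weights and bias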

backward()[source]

Perform a backward pass to calculate the gradients of the weights and bias.

Returns:

Tuple containing the gradients of the weights (dw) and bias (db).

Return type:

tuple
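For intuition, the standard linear-regression gradients under (half) mean squared error look like this; a hedged sketch of the math, not necessarily the library's exact code:

import numpy as np

def backward(X, y, y_hat):
    # X: (n, m); y, y_hat: (1, m)
    m = X.shape[1]
    dw = np.dot(y_hat - y, X.T) / m  # (1, n) gradient of the weights
    db = np.sum(y_hat - y) / m       # scalar gradient of the bias
    return dw, db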

forward()[source]

Perform a forward pass to calculate the predicted values.

Returns:

The predicted values.

Return type:

numpy.ndarray
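The forward pass of linear regression is a single affine map; a sketch of the idea (not the library's internals):

import numpy as np

def forward(w, b, X):
    # w: (1, n), b: bias, X: (n, m) -> predictions of shape (1, m)
    return np.dot(w, X) + b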

gradient_descent()[source]

Perform gradient descent to train the linear regression model.

Returns:

Tuple containing the final weights (w) and bias (b).

Return type:

tuple

init_params()[source]

Initialize the parameters (weights and bias) for linear regression.

Returns:

Tuple containing the weights (w) and bias (b).

Return type:

tuple

model()[source]

Run the entire linear regression model.

Returns:

Tuple containing the final weights (w) and bias (b).

Return type:

tuple

mse()[source]

Calculate the mean squared error (MSE) between the predicted and actual values.

Returns:

The mean squared error.

Return type:

float
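For reference, MSE is one line in NumPy; note that some implementations halve it so the gradient loses its factor of 2:

import numpy as np

def mse(y, y_hat):
    # y, y_hat: (1, m); mean squared error over the m samples
    return np.mean((y - y_hat) ** 2)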

update()[source]

Update the weights and bias using gradient descent.

Returns:

Tuple containing the updated weights (w) and bias (b).

Return type:

tuple

models.logreg

A base class for implementing logistic regression!

class models.logreg.LogisticRegression(input, labels, num_features, alpha, epochs)[source]

Bases: object

Parameters:
  • input (numpy.ndarray) – The input data matrix of shape (n, m), where n is the number of features and m is the number of samples

  • labels (numpy.ndarray) – The target labels of shape (1, m), where m is the number of samples.

  • num_features (int) – The number of features in the input data.

  • alpha (float) – The learning rate for gradient descent.

  • epochs (int) – The number of epochs for training.
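As with linreg, a minimal usage sketch on synthetic binary labels (the import path is assumed by analogy with the `from nue import nn` sample above):

import numpy as np
from nue import logreg  # import path assumed

X_train = np.random.rand(4, 300)  # (n, m): 4 features, 300 samples
Y_train = (X_train.sum(axis=0) > 2).astype(int).reshape(1, -1)  # (1, m) binary labels

model = logreg.LogisticRegression(X_train, Y_train, 4, 0.1, 1000)
w, b = model.model()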

backward()[source]

Perform a backward pass to calculate the gradients of the weights and bias.

Returns:

Tuple containing the gradients of the weights (dw) and bias (db).

Return type:

tuple

forward()[source]

Perform a forward pass to calculate the predicted probabilities.

Returns:

The predicted probabilities.

Return type:

numpy.ndarray

gradient_descent()[source]

Perform gradient descent to train the logistic regression model.

Returns:

Tuple containing the final weights (w) and bias (b).

Return type:

tuple

init_params()[source]

Initialize the parameters (weights and bias) for the logistic regression model.

Returns:

Tuple containing the weights (w) and bias (b).

Return type:

tuple

log_loss()[source]

Calculate the logistic loss (cross-entropy) between the predicted and actual labels.

Returns:

The logistic loss.

Return type:

float
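This is the standard binary cross-entropy; a hedged sketch (the eps guard against log(0) is an assumption, not necessarily in the library):

import numpy as np

def log_loss(y, a, eps=1e-10):
    # y: labels (1, m) in {0, 1}; a: predicted probabilities (1, m)
    return -np.mean(y * np.log(a + eps) + (1 - y) * np.log(1 - a + eps))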

model()[source]

Run the entire logistic regression model.

Returns:

Tuple containing the final weights (w) and bias (b).

Return type:

tuple

sigmoid(z)[source]

Calculate the sigmoid function for a given input.

Parameters:

z (float or numpy.ndarray) – The input value.

Returns:

The sigmoid of z.

Return type:

float or numpy.ndarray
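The sigmoid itself is one line in NumPy:

import numpy as np

def sigmoid(z):
    # squashes any real input into (0, 1)
    return 1 / (1 + np.exp(-z))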

update()[source]

Update the weights and bias using gradient descent.

Returns:

Tuple containing the updated weights (w) and bias (b).

Return type:

tuple

models.nn

A base class for implementing a neural network for classification tasks!

Important

This class is geared towards classification tasks and hasn’t been tested for other applications.

It's been validated on MNIST Digits, Fashion MNIST, and CIFAR-10!

Performance on other datasets isn't guaranteed, but feel free to experiment and contribute!

class models.nn.NN(input, labels, num_features, num_classes, hidden_size, alpha, epochs)[source]

Bases: object

Parameters:
  • input (numpy.ndarray) – The input data matrix of shape (n, m), where n is the number of features and m is the number of samples.

  • labels (numpy.ndarray) – The target labels of shape (1, m), where m is the number of samples.

  • num_features (int) – The number of features in the input data.

  • num_classes (int) – The number of classes in the classification task.

  • hidden_size (int) – The number of units in the hidden layer.

  • alpha (float) – The learning rate for gradient descent.

  • epochs (int) – The number of epochs for training.

ReLU(z)[source]

Apply the Rectified Linear Unit (ReLU) activation function element-wise to the input.

Parameters:

z (numpy.ndarray) – The input to the ReLU function.

Returns:

The output of the ReLU function.

Return type:

numpy.ndarray

ReLU_deriv(z)[source]

Compute the derivative of the ReLU function.

Parameters:

z (numpy.ndarray) – The input to the ReLU function.

Returns:

The derivative of the ReLU function.

Return type:

numpy.ndarray
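Both methods above are tiny in NumPy; a sketch of the standard definitions:

import numpy as np

def relu(z):
    return np.maximum(0, z)  # element-wise max(0, z)

def relu_deriv(z):
    return (z > 0).astype(float)  # 1 where z > 0, else 0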

accuracy()[source]

Calculate the accuracy of the model.

Returns:

The accuracy of the model as a percentage.

Return type:

float

backward()[source]

Perform a backward pass through the neural network to compute gradients.

Returns:

Tuple containing the gradients of the weights and biases for the hidden and output layers.

Return type:

tuple

cat_cross_entropy()[source]

Calculate the categorical cross-entropy loss between the predicted and actual labels.

Returns:

The categorical cross-entropy loss.

Return type:

float
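A hedged sketch of categorical cross-entropy over one-hot labels (the eps guard is an assumption, added to avoid log(0)):

import numpy as np

def cat_cross_entropy(one_hot_y, probs, eps=1e-10):
    # one_hot_y, probs: (num_classes, m); average loss over the m samples
    return -np.mean(np.sum(one_hot_y * np.log(probs + eps), axis=0))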

forward()[source]

Perform a forward pass through the neural network.

Returns:

The predicted probabilities for each class.

Return type:

numpy.ndarray
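A sketch of a single-hidden-layer forward pass matching the architecture described here (weight shapes and the function name are illustrative, not the library's internals):

import numpy as np

def nn_forward(X, w1, b1, w2, b2):
    # X: (n, m); w1: (hidden, n); w2: (num_classes, hidden)
    z1 = np.dot(w1, X) + b1  # hidden pre-activation
    a1 = np.maximum(0, z1)   # ReLU
    z2 = np.dot(w2, a1) + b2  # output pre-activation
    exp = np.exp(z2 - z2.max(axis=0, keepdims=True))
    return exp / exp.sum(axis=0, keepdims=True)  # softmax probabilities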

gradient_descent()[source]

Perform gradient descent to train the neural network.

Returns:

Tuple containing the final weights and biases for the hidden and output layers.

Return type:

tuple

init_params()[source]

Initialize the parameters (weights and biases) for the neural network.

Returns:

Tuple containing the weights and biases for the hidden and output layers.

Return type:

tuple

model()[source]

Run the entire neural network model.

Returns:

Tuple containing the final weights and biases for the hidden and output layers.

Return type:

tuple

one_hot()[source]

Convert the target labels into one-hot encoded format.

Returns:

The one-hot encoded labels.

Return type:

numpy.ndarray
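One common way to one-hot encode integer labels into the (num_classes, m) layout used here; a sketch:

import numpy as np

def one_hot(labels, num_classes):
    # labels: (1, m) array of integer class ids
    encoded = np.zeros((num_classes, labels.size))
    encoded[labels.flatten(), np.arange(labels.size)] = 1
    return encoded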

softmax(z)[source]

Apply the softmax activation function to the input.

Parameters:

z (numpy.ndarray) – The input to the softmax function.

Returns:

The output of the softmax function.

Return type:

numpy.ndarray
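A numerically stable softmax sketch; subtracting the per-column max before exponentiating avoids overflow (the stabilization is a standard trick, not necessarily what the library does):

import numpy as np

def softmax(z):
    # z: (num_classes, m) logits
    shifted = z - np.max(z, axis=0, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=0, keepdims=True)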

update()[source]

Update the weights and biases of the neural network using gradient descent.

Returns:

Tuple containing the updated weights and biases for the hidden and output layers.

Return type:

tuple