xbtorch.deployment.correction

Fault-tolerant neural network architectures for crossbar-based accelerators.

This module implements the Collaborative Logistic Classifier (CLC) method from Liu et al., “A Fault-Tolerant Neural Network Architecture”. CLC introduces redundancy into the final classification layer using multiple logistic sub-classifiers and error-correcting codewords. This design improves robustness to hardware defects, device variability, and noise commonly encountered in memristive crossbars.

The workflow includes:

  • Constructing favorable error-correcting codewords from a confusion matrix (dnn_favorable_searching_code).

  • Replacing the model's final classification layer with collaborative logistic classifiers (add_collaborative_logistic_classifiers).

  • Training the modified model with the collaborative loss (train_collaborative, CollaborativeLoss).

  • Evaluating the model and decoding its outputs back to class predictions (test_collaborative, variable_length_decode).

References

Liu et al., “A Fault-Tolerant Neural Network Architecture”.

Functions

add_collaborative_logistic_classifiers(...)

Replace the last layer of a model with collaborative classifiers.

dnn_favorable_searching_code(conf_matrix, ...)

Construct favorable error-correcting codewords for collaborative classifiers.

test_collaborative(model, loader, codewords, ...)

Evaluate a collaborative classifier model.

train_collaborative(model, loader, ...)

Train a collaborative classifier model.

variable_length_decode(output, codewords[, ...])

Decode model outputs to class predictions using codewords.

Classes

CollaborativeLogisticClassifier(input_size, ...)

Collaborative logistic classifier layer.

CollaborativeLoss(codewords, model)

Custom loss for collaborative logistic classifiers.

class xbtorch.deployment.correction.CollaborativeLogisticClassifier(input_size, num_classifiers)[source]

Bases: Module

Collaborative logistic classifier layer.

This layer replaces the final classification layer in a network with multiple logistic classifiers, one per “committee member”. Each classifier outputs a probability (via sigmoid), and the final prediction is decoded using collaborative error-correcting codewords.

Parameters:
  • input_size (int) – Dimension of the input feature vector.

  • num_classifiers (int) – Number of collaborative logistic classifiers.

classifiers

Linear layer with num_classifiers outputs, no bias.

Type: nn.Linear

significance

Trainable parameter vector controlling the relative importance of each classifier.

Type: nn.Parameter

forward(x)[source]

Forward pass through the collaborative classifiers.

Parameters:

x (torch.Tensor) – Input tensor of shape (batch_size, input_size).

Returns: Sigmoid probabilities of shape (batch_size, num_classifiers).

Return type: torch.Tensor
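Putting the description above together, a minimal sketch of such a layer might look like the following. This is illustrative only, not the xbtorch source; in particular, how significance enters the forward pass is an assumption (here it scales the logits).

```python
import torch
import torch.nn as nn

class CollaborativeLogisticClassifier(nn.Module):
    """Illustrative sketch (not the xbtorch source): a bias-free linear
    map producing one logit per sub-classifier, a trainable significance
    vector, and a sigmoid output."""

    def __init__(self, input_size: int, num_classifiers: int):
        super().__init__()
        # One logistic sub-classifier per codeword bit, no bias term.
        self.classifiers = nn.Linear(input_size, num_classifiers, bias=False)
        # Relative importance of each sub-classifier. Assumption: it
        # scales the logits; the real forward pass may use it differently.
        self.significance = nn.Parameter(torch.ones(num_classifiers))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifiers(x) * self.significance
        return torch.sigmoid(logits)

layer = CollaborativeLogisticClassifier(input_size=64, num_classifiers=7)
probs = layer(torch.randn(8, 64))
print(probs.shape)  # torch.Size([8, 7])
```

Each of the 7 output columns plays the role of one committee member's probability; decoding back to a class label happens separately, via the codewords.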

class xbtorch.deployment.correction.CollaborativeLoss(codewords, model)[source]

Bases: Module

Custom loss for collaborative logistic classifiers.

This combines binary cross-entropy (BCE) loss with a penalty term based on the Hamming distance between the predicted and target codewords. The penalty encourages outputs to stay closer to the target codeword in code space, increasing fault tolerance.

Parameters:
  • codewords (torch.Tensor) – Matrix of target codewords, shape (num_classes, num_classifiers).

  • model (nn.Module) – Collaborative model using CollaborativeLogisticClassifier.

reg_lambda

Regularization weight for the Hamming distance penalty (default=0.1).

Type: float

forward(output, target, threshold=0.5)[source]

Compute collaborative loss.

Parameters:
  • output (torch.Tensor) – Model predictions, shape (batch_size, num_classifiers).

  • target (torch.Tensor) – True class indices, shape (batch_size,).

  • threshold (float, optional) – Threshold for binarizing predictions. Default: 0.5.

Returns: Scalar loss value.

Return type: torch.Tensor
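As a concrete illustration, the BCE-plus-Hamming-penalty combination described above can be sketched as follows. The exact form of the combination is an assumption; the documented constructor also receives the model (presumably to regularize its significance parameters), which is omitted here. reg_lambda=0.1 follows the documented default.

```python
import torch
import torch.nn as nn

class CollaborativeLoss(nn.Module):
    """Illustrative sketch: BCE between the sigmoid outputs and each
    sample's target codeword, plus a Hamming-distance penalty on the
    binarized outputs (not the xbtorch source)."""

    def __init__(self, codewords: torch.Tensor, reg_lambda: float = 0.1):
        super().__init__()
        self.codewords = codewords.float()  # (num_classes, num_classifiers)
        self.reg_lambda = reg_lambda
        self.bce = nn.BCELoss()

    def forward(self, output, target, threshold=0.5):
        # Look up each sample's target codeword from its class index.
        target_codes = self.codewords[target]  # (batch, num_classifiers)
        bce = self.bce(output, target_codes)
        # Mean Hamming mismatch between binarized predictions and target
        # codewords (non-differentiable; acts purely as a penalty term).
        hamming = ((output > threshold).float() != target_codes).float().mean()
        return bce + self.reg_lambda * hamming

codewords = torch.tensor([[0., 0., 0.], [1., 1., 1.]])
loss = CollaborativeLoss(codewords)(torch.tensor([[0.2, 0.1, 0.3]]),
                                    torch.tensor([0]))
```

The penalty term pushes the binarized output toward the target codeword even when the BCE term is already small, which is what buys the extra code-space margin.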

xbtorch.deployment.correction.add_collaborative_logistic_classifiers(model, num_classifiers, idx=-2)[source]

Replace the last layer of a model with collaborative classifiers.

Parameters:
  • model (nn.Module) – Patched xbtorch model whose model attribute contains a list of modules.

  • num_classifiers (int) – Number of collaborative logistic classifiers to insert.

  • idx (int, optional (default=-2)) – Index of the layer to replace.

Raises:

ValueError – If the model does not contain a model attribute in the expected format.

xbtorch.deployment.correction.dnn_favorable_searching_code(conf_matrix, num_classifiers, hamming_distance=3)[source]

Construct favorable error-correcting codewords for collaborative classifiers.

Based on a confusion matrix, assigns binary codewords to classes such that inter-class Hamming distances are maximized, improving robustness against errors in individual classifiers.

Parameters:
  • conf_matrix (array-like) – Confusion matrix from training or validation, shape (num_classes, num_classes).

  • num_classifiers (int) – Number of collaborative classifiers (codeword length).

  • hamming_distance (int, optional (default=3)) – Minimum required Hamming distance between codewords.

Returns:

  • dict – Mapping of class index → codeword (as integer).

  • torch.Tensor – Codeword matrix of shape (num_classes, num_classifiers), with binary entries.

Return type: tuple

Notes

  • If no codewords are found at the desired Hamming distance, the distance is iteratively reduced.
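The search strategy can be illustrated with a plain-Python greedy sketch. The confusion-matrix weighting of the real routine is omitted here; only the minimum-distance constraint and the iterative relaxation described in the note above are shown.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two codewords stored as integers."""
    return bin(a ^ b).count("1")

def favorable_codewords(num_classes, num_classifiers, hamming_distance=3):
    """Greedy sketch in the spirit of dnn_favorable_searching_code:
    pick codewords pairwise at least `hamming_distance` apart, relaxing
    the distance when too few codewords exist (illustrative only)."""
    d = hamming_distance
    while d > 0:
        chosen = []
        for cand in range(2 ** num_classifiers):
            if all(hamming(cand, c) >= d for c in chosen):
                chosen.append(cand)
            if len(chosen) == num_classes:
                mapping = {cls: cw for cls, cw in enumerate(chosen)}
                # Binary codeword matrix, one row per class (MSB first).
                matrix = [[(cw >> b) & 1
                           for b in reversed(range(num_classifiers))]
                          for cw in chosen]
                return mapping, matrix
        d -= 1  # not enough codewords: iteratively reduce the distance
    raise ValueError("num_classifiers too small for num_classes")
```

A minimum distance d tolerates up to (d - 1) // 2 flipped sub-classifier outputs per sample, which is where the fault tolerance of the final layer comes from.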

xbtorch.deployment.correction.test_collaborative(model, loader, codewords, device)[source]

Evaluate a collaborative classifier model.

Parameters:
  • model (nn.Module) – Collaborative model.

  • loader (torch.utils.data.DataLoader) – DataLoader providing evaluation batches.

  • codewords (torch.Tensor) – Codeword matrix, shape (num_classes, num_classifiers).

  • device (str or torch.device) – Device for computation.

Returns: Accuracy over the dataset (0–1).

Return type: float

xbtorch.deployment.correction.train_collaborative(model, loader, optimizer, loss_fn, codewords, device)[source]

Train a collaborative classifier model.

Parameters:
  • model (nn.Module) – Collaborative model (with CollaborativeLogisticClassifier).

  • loader (torch.utils.data.DataLoader) – DataLoader providing training batches.

  • optimizer (torch.optim.Optimizer) – Optimizer for training.

  • loss_fn (nn.Module) – Loss function, typically CollaborativeLoss.

  • codewords (torch.Tensor) – Codeword matrix, shape (num_classes, num_classifiers).

  • device (str or torch.device) – Device for computation.
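The training loop implied by this signature can be sketched as follows. Assumptions: targets are class indices consumed directly by loss_fn (matching CollaborativeLoss.forward), and the codewords argument is used to track decoded training accuracy, which the sketch returns for convenience; the documented return value is unspecified.

```python
import torch

def train_collaborative(model, loader, optimizer, loss_fn, codewords, device):
    """Sketch of one training epoch for the documented signature
    (illustrative only, not the xbtorch source)."""
    model.train()
    correct, total = 0, 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        output = model(inputs)            # sigmoid probabilities, (B, num_classifiers)
        loss = loss_fn(output, targets)   # e.g. CollaborativeLoss
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Assumption: codewords are used for nearest-codeword
            # (Hamming) decoding of the running training accuracy.
            bits = (output > 0.5).float()
            dists = (bits.unsqueeze(1) != codewords.float().unsqueeze(0)).sum(-1)
            correct += (dists.argmin(1) == targets).sum().item()
            total += targets.numel()
    return correct / total  # assumed return; the docs leave it unspecified
```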

xbtorch.deployment.correction.variable_length_decode(output, codewords, threshold=0.5)[source]

Decode model outputs to class predictions using codewords.

Parameters:
  • output (torch.Tensor) – Model outputs, shape (batch_size, num_classifiers).

  • codewords (torch.Tensor) – Codeword matrix of shape (num_classes, num_classifiers).

  • threshold (float, optional (default=0.5)) – Threshold for binarizing outputs.

Returns: Predicted class indices, shape (batch_size,).

Return type: torch.Tensor
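The decoding rule described above (binarize at the threshold, then pick the class whose codeword is nearest in Hamming distance) can be sketched as below. This is a fixed-length nearest-codeword sketch; whatever variable-length handling the real function performs is not reproduced here.

```python
import torch

def variable_length_decode(output, codewords, threshold=0.5):
    """Nearest-codeword decoding sketch (illustrative only)."""
    bits = (output > threshold).float()              # (batch, num_classifiers)
    # Hamming distances to every class codeword: (batch, num_classes).
    dists = (bits.unsqueeze(1) != codewords.float().unsqueeze(0)).sum(dim=-1)
    return dists.argmin(dim=1)

codewords = torch.tensor([[0, 0, 0], [1, 1, 1]])
output = torch.tensor([[0.1, 0.2, 0.4], [0.9, 0.8, 0.3]])
print(variable_length_decode(output, codewords))  # tensor([0, 1])
```

Note that the second sample binarizes to [1, 1, 0], one bit away from the codeword of class 1: a single faulty sub-classifier still decodes to the right class.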