xbtorch

XBTorch root package

This module provides the root API for XBTorch, including:

  • The XBParams singleton class for global configuration.

  • Helper functions to get/set parameters and initialize the library.

  • Lists of supported activation layers, parameterized layers, and parameter-less layers.

Functions

get_xbtorch_param(key[, default])

Retrieve a global XBTorch parameter.

initialize(*args, **kwargs)

Initialize the XBTorch library.

Classes

XBParams()

Singleton class to store global XBTorch parameters.

class xbtorch.XBParams[source]

Bases: object

Singleton class to store global XBTorch parameters.

This class maintains a single global configuration dictionary that controls decomposition, device selection, quantization, weight ranges, and inference accelerators.

_global_dict

Dictionary storing all global parameters and flags.

Type: dict

_wage_defaults

Default settings for WAGE quantization.

Type: dict

get_var(key, default=None)[source]

Retrieve a stored global parameter, returning default if key is not set.

initialize(decomposition_algorithm=None, device_type=None, weight_range=(-1, 1), pytorch_device='cpu', wage_quantize=False, wage_params={}, inference_accelerator=None)[source]

Initialize the XBTorch environment.

Sets up decomposition algorithm, device, weight ranges, WAGE quantization, and optional inference accelerators. Also migrates tensors to the selected device.

Parameters:
  • decomposition_algorithm (xbtorch.decomposition.base.GenericDecomposition, optional) – Decomposition algorithm to use for layers (default is None).

  • device_type (xbtorch.devices.base.GenericDevice, optional) – Hardware device abstraction (default is None).

  • weight_range (tuple of float, optional) – Min and max allowed weights, default is (-1, 1).

  • pytorch_device (str or torch.device, optional) – PyTorch device for tensor allocation (default 'cpu').

  • wage_quantize (bool, optional) – Whether to enable WAGE quantization (default False).

  • wage_params (dict, optional) – Overrides for WAGE quantization defaults.

  • inference_accelerator (xbtorch.deployment.base.GenericAccelerator, optional) – Inference accelerator to use.

Raises:

TypeError – If provided decomposition_algorithm, device_type, or inference_accelerator is not of the expected type, or weight_range is invalid.

set_var(key, value)[source]

Store value under key in the global parameter dictionary.

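A minimal sketch of the singleton pattern described above, assuming the standard `__new__`-based approach (an illustration, not xbtorch's actual implementation):

```python
class ParamsSketch:
    """Illustrative singleton holding one shared settings dict."""
    _instance = None

    def __new__(cls):
        # Create the instance (and its dict) only once; every later
        # construction returns the same object, so settings are global.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._global_dict = {}
        return cls._instance

    def set_var(self, key, value):
        self._global_dict[key] = value

    def get_var(self, key, default=None):
        return self._global_dict.get(key, default)

ParamsSketch().set_var("pytorch_device", "cpu")
ParamsSketch().get_var("pytorch_device")  # -> "cpu", from the shared instance
```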
xbtorch.initialize(*args, **kwargs)[source]

Initialize the XBTorch library.

Convenience function that calls XBParams.initialize.

Modules

decomposition

Gradient decomposition algorithms

deployment

Deployment (mapping, encoding, etc.) of solutions to inference accelerators

devices

Device models and device model presets

loss

Custom loss functions and tools for loss landscapes

nn

Models, datasets, and various XBTorch utilities akin to torch.nn

optim

XBTorch-wrapped optimizers with support for WAGE quantization, device-aware updates, and decomposition algorithms

patches

Decorators for patching PyTorch models and optimizers for XBTorch

quant

Tools for various quantization-related tasks