Installation

Rust Crate

Add to your Cargo.toml:

[dependencies]
tropical-gemm = "0.1"

# For GPU acceleration (optional):
tropical-gemm-cuda = "0.1"
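
If you prefer the command line, cargo add (available since Cargo 1.62) does the same thing as editing Cargo.toml by hand:

# Equivalent to the Cargo.toml entries above
cargo add tropical-gemm

# Optional GPU backend
cargo add tropical-gemm-cuda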

Python Package

# Basic installation
pip install tropical-gemm

# With PyTorch support for automatic differentiation
pip install tropical-gemm[torch]

# For development
pip install tropical-gemm[dev]

Optional Dependencies

The Python package has optional extras:

Extra   Command                             Description
torch   pip install tropical-gemm[torch]    PyTorch integration with autograd support
dev     pip install tropical-gemm[dev]      Development dependencies (pytest, torch)
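
A quick way to check whether the torch extra is usable in the current environment is to try importing the PyTorch bridge module. This is only a minimal sketch; it verifies that the import succeeds, nothing more:

# Check whether the optional PyTorch bridge is importable
try:
    from tropical_gemm.pytorch import GPU_AVAILABLE
    print(f"torch extra installed, GPU available: {GPU_AVAILABLE}")
except ImportError:
    print("torch extra not installed; run: pip install tropical-gemm[torch]")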

From Source

# Clone the repository
git clone https://github.com/TensorBFS/tropical-gemm
cd tropical-gemm/crates/tropical-gemm-python

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # Linux/Mac
# .venv\Scripts\activate   # Windows

# Install maturin and build
pip install maturin
maturin develop --release

# With CUDA support
maturin develop --release --features cuda

Verify Installation

import tropical_gemm
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
b = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)

c = tropical_gemm.maxplus_matmul(a, b)
print(c)  # [[5. 6.] [7. 8.]]

# Check GPU availability
print(f"CUDA available: {tropical_gemm.cuda_available()}")

Verify PyTorch Integration

import torch
from tropical_gemm.pytorch import tropical_maxplus_matmul, GPU_AVAILABLE

print(f"GPU available: {GPU_AVAILABLE}")

a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(4, 5, requires_grad=True)

c = tropical_maxplus_matmul(a, b)
c.sum().backward()

print(f"grad_a: {a.grad.shape}")  # (3, 4)
print(f"grad_b: {b.grad.shape}")  # (4, 5)

CUDA Setup

For GPU acceleration, ensure CUDA is properly installed:

# Check CUDA installation
nvcc --version

# If not found, install CUDA toolkit
# Ubuntu:
sudo apt install nvidia-cuda-toolkit

# Or download from NVIDIA:
# https://developer.nvidia.com/cuda-downloads

The CUDA kernels are compiled at runtime using NVRTC, so you don’t need to compile the library with a specific CUDA version.
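
A simple runtime probe follows from this: check cuda_available() and report which path will be used. Whether the GPU and CPU paths are otherwise identical from the caller's point of view is an assumption in this sketch:

import tropical_gemm

# Kernels are JIT-compiled via NVRTC at first use, so the main runtime
# requirement is a working CUDA driver plus the NVRTC library.
if tropical_gemm.cuda_available():
    print("CUDA detected; GPU kernels will be compiled on first use")
else:
    print("CUDA not detected; only the CPU path is available")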

Building Python Package with CUDA

cd crates/tropical-gemm-python

# Build with CUDA feature
maturin develop --features cuda

# Or for release
maturin build --release --features cuda
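
After a CUDA-enabled build, a one-liner confirms that the extension was built with the feature and can see a device:

# Quick post-build check
python -c "import tropical_gemm; print(tropical_gemm.cuda_available())"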

Building from Source

# Clone
git clone https://github.com/TensorBFS/tropical-gemm
cd tropical-gemm

# Build all crates
cargo build --release --workspace

# Run tests
cargo test --workspace

# Build documentation
cargo doc --workspace --no-deps --open
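
To limit a build or test run to a single crate rather than the whole workspace, pass -p with the package name (shown for tropical-gemm; other crates in the workspace follow the same pattern):

# Test one crate instead of the whole workspace
cargo test -p tropical-gemm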

Using the Makefile

A Makefile is provided for common tasks:

make help          # Show all targets
make setup         # Setup development environment
make build         # Build in release mode
make test          # Run all tests
make docs          # Build documentation
make bench         # Run benchmarks