pub struct Tensor<T: Scalar, B: Backend> { /* private fields */ }
A multi-dimensional tensor with stride-based layout.
Tensors support zero-copy view operations (permute, reshape) and automatically make data contiguous when needed for operations like GEMM.
§Type Parameters
T - The scalar element type (f32, f64, etc.)
B - The backend type (Cpu, Cuda)
§Example
use omeinsum::{Tensor, Cpu};
let a = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], &[2, 3]);
let b = a.permute(&[1, 0]); // Zero-copy transpose
let c = b.contiguous(); // Make contiguous copy
Implementations§
impl<T: Scalar, B: Backend> Tensor<T, B>
pub fn contract_binary<A: Algebra<Scalar = T, Index = u32>>(
    &self,
    other: &Self,
    ia: &[usize],
    ib: &[usize],
    iy: &[usize],
) -> Self
where
    T: BackendScalar<B>,
Binary tensor contraction using a reshape-to-GEMM strategy.
§Arguments
other - The other tensor to contract with
ia - Index labels for self
ib - Index labels for other
iy - Output index labels
§Example
use omeinsum::{Tensor, Cpu};
use omeinsum::algebra::MaxPlus;
// A[i,j,k] × B[j,k,l] → C[i,l]
let a = Tensor::<f32, Cpu>::from_data(&(0..24).map(|x| x as f32).collect::<Vec<_>>(), &[2, 3, 4]);
let b = Tensor::<f32, Cpu>::from_data(&(0..60).map(|x| x as f32).collect::<Vec<_>>(), &[3, 4, 5]);
let c = a.contract_binary::<MaxPlus<f32>>(&b, &[0, 1, 2], &[1, 2, 3], &[0, 3]);
assert_eq!(c.shape(), &[2, 5]);
pub fn contract_binary_with_argmax<A: Algebra<Scalar = T, Index = u32>>(
    &self,
    other: &Self,
    ia: &[usize],
    ib: &[usize],
    iy: &[usize],
) -> (Self, Tensor<u32, B>)
where
    T: BackendScalar<B>,
Binary contraction with argmax tracking. Returns the contracted tensor together with a Tensor<u32, B> of argmax indices.
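A minimal sketch, mirroring the contract_binary example above; the interpretation of the returned u32 tensor (one argmax entry per output element over the contracted dimensions) is an assumption based on the return type:
use omeinsum::{Tensor, Cpu};
use omeinsum::algebra::MaxPlus;
// A[i,j,k] × B[j,k,l] → C[i,l], plus argmax indices (assumed: one entry per output element)
let a = Tensor::<f32, Cpu>::from_data(&(0..24).map(|x| x as f32).collect::<Vec<_>>(), &[2, 3, 4]);
let b = Tensor::<f32, Cpu>::from_data(&(0..60).map(|x| x as f32).collect::<Vec<_>>(), &[3, 4, 5]);
let (c, argmax) = a.contract_binary_with_argmax::<MaxPlus<f32>>(&b, &[0, 1, 2], &[1, 2, 3], &[0, 3]);
assert_eq!(c.shape(), &[2, 5]);
let _ = argmax; // u32 index tensor; its exact layout is not asserted here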
impl<T: Scalar, B: Backend> Tensor<T, B>
pub fn from_data(data: &[T], shape: &[usize]) -> Self
where
    B: Default,
Create a tensor from data with the given shape.
Data is assumed to be in column-major (Fortran) order.
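As a small illustration of the column-major convention, using the get accessor documented below:
use omeinsum::{Tensor, Cpu};
// Column-major: data [1, 2, 3, 4] with shape [2, 2] represents [[1, 3], [2, 4]].
let t = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
assert_eq!(t.get(2), 3.0); // linear index 2 = (row 0, column 1)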
pub fn from_data_with_backend(data: &[T], shape: &[usize], backend: B) -> Self
Create a tensor from data with explicit backend.
pub fn zeros_with_backend(shape: &[usize], backend: B) -> Self
Create a zero-filled tensor with explicit backend.
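A small sketch covering both explicit-backend constructors; it assumes Cpu implements Default (as required by from_data above):
use omeinsum::{Tensor, Cpu};
let a = Tensor::<f32, Cpu>::from_data_with_backend(&[1.0, 2.0, 3.0, 4.0], &[2, 2], Cpu::default());
let z = Tensor::<f32, Cpu>::zeros_with_backend(&[2, 3], Cpu::default());
assert_eq!(a.shape(), &[2, 2]);
assert_eq!(z.shape(), &[2, 3]);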
pub fn from_storage(storage: B::Storage<T>, shape: &[usize], backend: B) -> Self
Create a tensor from storage with given shape.
The storage must be contiguous and have exactly shape.iter().product() elements.
pub fn storage(&self) -> Option<&B::Storage<T>>
Get a reference to the underlying storage.
Returns Some(&storage) only if the tensor is contiguous and has no offset.
For non-contiguous tensors, call contiguous() first.
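For example (a sketch; whether a particular view is contiguous depends on its strides):
use omeinsum::{Tensor, Cpu};
let t = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], &[2, 3]);
assert!(t.storage().is_some()); // a freshly created tensor is contiguous with no offset
// A permuted view is generally not contiguous; make a contiguous copy first.
let owned = t.permute(&[1, 0]).contiguous();
assert!(owned.storage().is_some());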
pub fn is_contiguous(&self) -> bool
Check if the tensor is contiguous in memory (row-major).
pub fn get(&self, index: usize) -> T
Get element at linear index (column-major).
This is an O(ndim) operation that directly accesses storage without allocating memory. The linear index is interpreted in column-major order.
§Arguments
index - Linear index into the flattened tensor (column-major order)
§Panics
Panics if index is out of bounds.
§Example
use omeinsum::{Tensor, Cpu};
let t = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
assert_eq!(t.get(0), 1.0);
assert_eq!(t.get(3), 4.0);
pub fn permute(&self, axes: &[usize]) -> Self
Permute dimensions (zero-copy).
§Example
use omeinsum::{Tensor, Cpu};
let data: Vec<f32> = (0..24).map(|x| x as f32).collect();
let a = Tensor::<f32, Cpu>::from_data(&data, &[2, 3, 4]);
let b = a.permute(&[2, 0, 1]); // Shape becomes [4, 2, 3]
assert_eq!(b.shape(), &[4, 2, 3]);
pub fn reshape(&self, new_shape: &[usize]) -> Self
Reshape to a new shape (zero-copy if contiguous).
§Example
use omeinsum::{Tensor, Cpu};
let a = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], &[2, 3]);
let b = a.reshape(&[6]); // Flatten
let c = a.reshape(&[3, 2]); // Different shape, same data
assert_eq!(b.shape(), &[6]);
assert_eq!(c.shape(), &[3, 2]);
pub fn contiguous(&self) -> Self
Make tensor contiguous in memory.
If already contiguous, returns a clone (shared storage). Otherwise, copies data to a new contiguous buffer.
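A small sketch of the permute-then-contiguous pattern, relying on the column-major indexing documented for get:
use omeinsum::{Tensor, Cpu};
// a (column-major) is [[1, 3], [2, 4]]; its transpose is [[1, 2], [3, 4]].
let a = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
let b = a.permute(&[1, 0]).contiguous();
assert!(b.is_contiguous());
assert_eq!(b.get(1), 3.0); // element (1, 0) of the transpose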
pub fn sum_axis<A: Algebra<Scalar = T>>(&self, axis: usize) -> Self
where
    B: Default,
Sum along a specific axis using the algebra’s addition.
The result has one fewer dimension than the input.
§Arguments
axis - The axis to sum over
§Panics
Panics if axis is out of bounds.
§Example
use omeinsum::{Tensor, Cpu, Standard};
// Column-major: data [1, 2, 3, 4] with shape [2, 2] represents:
// [[1, 3],
// [2, 4]]
let t = Tensor::<f32, Cpu>::from_data(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
// Sum over axis 1 (columns): [1+3, 2+4] = [4, 6]
let result = t.sum_axis::<Standard<f32>>(1);
assert_eq!(result.to_vec(), vec![4.0, 6.0]);
Trait Implementations§
Auto Trait Implementations§
impl<T, B> Freeze for Tensor<T, B> where B: Freeze
impl<T, B> RefUnwindSafe for Tensor<T, B>
impl<T, B> Send for Tensor<T, B>
impl<T, B> Sync for Tensor<T, B>
impl<T, B> Unpin for Tensor<T, B> where B: Unpin
impl<T, B> UnwindSafe for Tensor<T, B>
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> DistributionExt for T where T: ?Sized
impl<T> IntoEither for T