# Tensors
Tensors are the fundamental data structure in Lamp++. They're n-dimensional arrays that can live on CPU or GPU, store different data types, and support automatic broadcasting for operations (think of what ATen is to PyTorch, or NumPy with CUDA support). This guide covers everything you need to know to work with them effectively.
The most common way to create a tensor is from a C++ vector:
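A minimal sketch is shown below. The header path, the `lmp` namespace, and the enum names `DeviceType`/`DataType` are assumptions; the four-parameter constructor follows the description after the code.

```cpp
#include <vector>
#include "lamppp/lamppp.hpp"  // assumed header path

int main() {
    // Flat data for a 2x3 tensor, laid out row-major.
    std::vector<double> data = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0};

    // The four constructor parameters: data, shape, device, dtype.
    lmp::Tensor t(data, {2, 3}, lmp::DeviceType::CPU, lmp::DataType::Float64);
    return 0;
}
```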
The constructor takes four parameters:

- **data**: the values, as a flat `std::vector`
- **shape**: the tensor's dimensions (e.g. `{28, 28}` for a 28×28 image)
- **device**: where the tensor lives (`CPU` or `CUDA`)
- **dtype**: the element type (e.g. `Float64`)
Lamp++ supports six data types: `Bool`, `Int16`, `Int32`, `Int64`, `Float32`, and `Float64`.
Tensors can live on CPU or GPU:
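A hedged sketch of moving a tensor between devices, reusing the assumed names from the construction example:

```cpp
// Construct on the CPU, then copy to the GPU and back.
std::vector<float> data = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
lmp::Tensor cpu_t(data, {2, 3}, lmp::DeviceType::CPU, lmp::DataType::Float32);

lmp::Tensor gpu_t = cpu_t.to(lmp::DeviceType::CUDA);  // copies to GPU memory
lmp::Tensor back  = gpu_t.to(lmp::DeviceType::CPU);   // copies back to host
```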
**Important**: The `.to()` method creates a new tensor with copied data, unlike PyTorch, which returns a view.
Every tensor has several properties you can query:
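The accessor names below (`shape()`, `dtype()`, `device()`, `numel()`) are assumptions for illustration, not confirmed API:

```cpp
lmp::Tensor t(std::vector<float>{1, 2, 3, 4, 5, 6}, {2, 3},
              lmp::DeviceType::CPU, lmp::DataType::Float32);

auto s   = t.shape();   // {2, 3}
auto dt  = t.dtype();   // DataType::Float32
auto dev = t.device();  // DeviceType::CPU
auto n   = t.numel();   // 6 elements in total
```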
To get data out of a tensor:
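A sketch assuming a templated accessor; `to_vector<T>()` is a hypothetical name used here for illustration:

```cpp
lmp::Tensor t(std::vector<double>{1.5, 2.5, 3.5}, {3},
              lmp::DeviceType::CPU, lmp::DataType::Float64);

// to_vector<T>() is a hypothetical accessor name: it copies the elements
// into a std::vector<float>, converting from Float64 along the way.
std::vector<float> values = t.to_vector<float>();
```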
This works regardless of the tensor's original data type; the conversion is handled automatically.
Tensors support several operations that change their shape without copying data:
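A sketch of the shape operations named later in this guide (`reshape()`, `squeeze()`, `expand_dims()`, `transpose()`); the exact signatures are assumptions:

```cpp
lmp::Tensor t(std::vector<float>{1, 2, 3, 4, 5, 6}, {2, 3},
              lmp::DeviceType::CPU, lmp::DataType::Float32);

lmp::Tensor r = t.reshape({3, 2});   // {2, 3} -> {3, 2}, same 6 elements
lmp::Tensor e = r.expand_dims(0);    // {3, 2} -> {1, 3, 2}
lmp::Tensor q = e.squeeze(0);        // {1, 3, 2} -> {3, 2}
lmp::Tensor s = t.transpose(0, 1);   // {2, 3} -> {3, 2}, a new tensor
```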
Reshaping is a fast operation that doesn't change the underlying data. Lamp++ does not support non-contiguous tensors.
**Note**: The total number of elements must remain the same.
**Important**: Unlike PyTorch, `transpose()` returns a new tensor, not a view.
All basic arithmetic operations work element-wise and support broadcasting:
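A sketch assuming overloaded arithmetic operators on the tensor type:

```cpp
lmp::Tensor a(std::vector<float>{1, 2, 3, 4}, {2, 2},
              lmp::DeviceType::CPU, lmp::DataType::Float32);
lmp::Tensor b(std::vector<float>{10, 20, 30, 40}, {2, 2},
              lmp::DeviceType::CPU, lmp::DataType::Float32);

lmp::Tensor sum  = a + b;  // {11, 22, 33, 44}
lmp::Tensor diff = a - b;  // {-9, -18, -27, -36}
lmp::Tensor prod = a * b;  // {10, 40, 90, 160}, element-wise
```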
Lamp++ follows NumPy broadcasting rules. When operating on tensors with different shapes, they're automatically aligned:
**Broadcasting rules**:

1. Shapes are aligned starting from the trailing (rightmost) dimension.
2. Two dimensions are compatible if they are equal or if one of them is 1.
3. If one tensor has fewer dimensions, it is treated as having size-1 leading dimensions.
Examples of valid broadcasts:

- `{3, 4}` + `{4}` → both become `{3, 4}`
- `{2, 3, 4}` + `{1, 4}` → both become `{2, 3, 4}`
- `{5, 1, 3}` + `{2, 3}` → both become `{5, 2, 3}`
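A sketch of the first case from the list above, with the same assumed names:

```cpp
// {3, 4} + {4}: the 4-element vector is broadcast across all 3 rows.
lmp::Tensor m(std::vector<float>(12, 1.0f), {3, 4},
              lmp::DeviceType::CPU, lmp::DataType::Float32);
lmp::Tensor v(std::vector<float>{1, 2, 3, 4}, {4},
              lmp::DeviceType::CPU, lmp::DataType::Float32);

lmp::Tensor out = m + v;  // shape {3, 4}; each row is {2, 3, 4, 5}
```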
Reductions compute aggregates along specified axes:
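A sketch with assumed reduction names (`sum()`, `max()`) taking an axis argument; per the note below, results keep their dimensions:

```cpp
lmp::Tensor t(std::vector<float>{1, 2, 3, 4, 5, 6}, {2, 3},
              lmp::DeviceType::CPU, lmp::DataType::Float32);

lmp::Tensor col_sums = t.sum(0);  // shape {1, 3}: {5, 7, 9}
lmp::Tensor row_max  = t.max(1);  // shape {2, 1}: {3, 6}
lmp::Tensor flat     = col_sums.squeeze(0);  // {1, 3} -> {3}
```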
**Note**: All reduction operations keep dimensions by default (like `keepdims=True` in NumPy). Use `squeeze()` to remove singleton dimensions.
For linear algebra operations:
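A sketch assuming a free function `lmp::matmul` (the name is an assumption):

```cpp
lmp::Tensor a(std::vector<float>{1, 2, 3, 4, 5, 6}, {2, 3},
              lmp::DeviceType::CPU, lmp::DataType::Float32);
lmp::Tensor b(std::vector<float>{1, 0, 0, 1, 1, 1}, {3, 2},
              lmp::DeviceType::CPU, lmp::DataType::Float32);

// {2, 3} x {3, 2} -> {2, 2}: {{4, 5}, {10, 11}}
lmp::Tensor c = lmp::matmul(a, b);
```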
Some operations return views (sharing memory):

- `reshape()`, `squeeze()`, `expand_dims()` return views
- `to()` always returns a new tensor
Tensors use row-major (C-style) memory layout:
For a `{2, 3}` tensor: `[row0_col0, row0_col1, row0_col2, row1_col0, row1_col1, row1_col2]`
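The flat offset of element `(i, j)` follows directly from this layout; a plain-C++ illustration (not library API):

```cpp
#include <cstddef>

// Row-major offset for element (i, j) of a tensor with `cols` columns.
std::size_t flat_index(std::size_t i, std::size_t j, std::size_t cols) {
    return i * cols + j;
}
// For a {2, 3} tensor, element (1, 2) lives at offset 1 * 3 + 2 = 5.
```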
When operating on tensors with different types, Lamp++ promotes to the "higher" type:
`Bool` < `Int16` < `Int32` < `Int64` < `Float32` < `Float64`

For example, `Int32` + `Float32` → `Float32`.
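A sketch of promotion in action, with the same assumed constructor:

```cpp
lmp::Tensor i32(std::vector<int>{1, 2}, {2},
                lmp::DeviceType::CPU, lmp::DataType::Int32);
lmp::Tensor f32(std::vector<float>{0.5f, 1.5f}, {2},
                lmp::DeviceType::CPU, lmp::DataType::Float32);

lmp::Tensor r = i32 + f32;  // dtype promotes to Float32: {1.5, 3.5}
```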
Now that you understand tensors, you're ready to learn about automatic differentiation in the Understanding Autograd guide. The autograd system builds on these tensor operations to compute gradients automatically.