Overview
Implement a kernel that computes the running sum of the last 3 positions of 1D LayoutTensor a and stores it in 1D LayoutTensor out.
Note: You have 1 thread per position. You only need 1 global read and 1 global write per thread.
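Spelled out, with the window clipped at the array's left edge, each output position is:

out[0] = a[0]
out[1] = a[0] + a[1]
out[i] = a[i-2] + a[i-1] + a[i]    for 2 <= i < SIZE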
Key concepts
In this puzzle, you'll learn about:
- Using LayoutTensor for sliding window operations
- Managing shared memory with LayoutTensorBuilder that we saw in puzzle_08
- Efficient neighbor access patterns
- Boundary condition handling
The key insight is how LayoutTensor simplifies shared memory management while maintaining efficient window-based operations.
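For contrast, here is roughly what manual shared-memory allocation looks like without the tensor builder; the import paths and stack_allocation signature below are an assumption based on common Mojo GPU code, not something this page specifies:

from memory import stack_allocation
from gpu.memory import AddressSpace

# Raw shared memory: a bare pointer, with no layout or bounds
# information attached (assumed API, shown only for contrast)
shared = stack_allocation[
    TPB,
    Scalar[dtype],
    address_space = AddressSpace.SHARED,
]()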
Configuration
- Array size: SIZE = 8 elements
- Threads per block: TPB = 8
- Window size: 3 elements
- Shared memory: TPB elements
Notes:
- Tensor builder: Use LayoutTensorBuilder[dtype]().row_major[TPB]().shared().alloc() (annotated step by step below)
- Window access: Natural indexing for 3-element windows
- Edge handling: Special cases for the first two positions
- Memory pattern: One shared memory load per thread
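The builder chain from the notes reads naturally step by step; tb in the starter code is assumed to be shorthand for LayoutTensorBuilder:

# Annotated builder chain (tb assumed to alias LayoutTensorBuilder)
shared = (
    tb[dtype]()        # start a builder for float32 elements
    .row_major[TPB]()  # 1D row-major layout of TPB elements
    .shared()          # place the tensor in block-shared memory
    .alloc()           # allocate and return the shared LayoutTensor
)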
Code to complete
# Imports as in the full problem file linked below (paths assumed
# from the surrounding puzzle series)
from gpu import thread_idx, block_idx, block_dim, barrier
from layout import Layout, LayoutTensor
from layout.tensor_builder import LayoutTensorBuilder as tb

alias TPB = 8
alias SIZE = 8
alias BLOCKS_PER_GRID = (1, 1)
alias THREADS_PER_BLOCK = (TPB, 1)
alias dtype = DType.float32
alias layout = Layout.row_major(SIZE)


fn pooling[
    layout: Layout
](
    output: LayoutTensor[mut=True, dtype, layout],
    a: LayoutTensor[mut=True, dtype, layout],
    size: Int,
):
    # Allocate shared memory using tensor builder
    shared = tb[dtype]().row_major[TPB]().shared().alloc()
    global_i = block_dim.x * block_idx.x + thread_idx.x
    local_i = thread_idx.x
    # FILL ME IN (roughly 10 lines)
View full file: problems/p09/p09_layout_tensor.mojo
Tips
- Create shared memory with the tensor builder
- Load data with natural indexing: shared[local_i] = a[global_i] (sketched after this list)
- Handle special cases for the first two elements
- Use shared memory for window operations
- Guard against out-of-bounds access
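A minimal sketch of the load-then-synchronize pattern the tips describe, using only what the tips already state (the window arithmetic is left to the solution below):

# Guarded load: one global read per thread into shared memory
if global_i < size:
    shared[local_i] = a[global_i]

# Make all loads visible before any thread reads its neighbors
barrier()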
Running the code
To test your solution, run one of the following commands in your terminal:

uv run poe p09_layout_tensor

or, with pixi:

pixi run p09_layout_tensor
Your output will look like this if the puzzle isn't solved yet:
out: HostBuffer([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
expected: HostBuffer([0.0, 1.0, 3.0, 6.0, 9.0, 12.0, 15.0, 18.0])
Solution
fn pooling[
    layout: Layout
](
    output: LayoutTensor[mut=True, dtype, layout],
    a: LayoutTensor[mut=True, dtype, layout],
    size: Int,
):
    # Allocate shared memory using tensor builder
    shared = tb[dtype]().row_major[TPB]().shared().alloc()
    global_i = block_dim.x * block_idx.x + thread_idx.x
    local_i = thread_idx.x

    # Load data into shared memory
    if global_i < size:
        shared[local_i] = a[global_i]

    # Synchronize threads within block
    barrier()

    # Handle first two special cases
    if global_i == 0:
        output[0] = shared[0]
    elif global_i == 1:
        output[1] = shared[0] + shared[1]
    # Handle general case
    elif 1 < global_i < size:
        output[global_i] = (
            shared[local_i - 2] + shared[local_i - 1] + shared[local_i]
        )
The solution implements a sliding window sum using LayoutTensor with these key steps:
1. Shared memory setup
   - Tensor builder creates block-local storage: shared = tb[dtype]().row_major[TPB]().shared().alloc()
   - Each thread loads one element:
     Input array:  [0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0]
     Block shared: [0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0]
   - barrier() ensures all data is loaded; without it, a thread could read a neighbor's slot before that neighbor has written it
2. Boundary cases
   - Position 0: single element: out[0] = shared[0] = 0.0
   - Position 1: sum of first two elements: out[1] = shared[0] + shared[1] = 0.0 + 1.0 = 1.0
3. Main window operation
   - For positions 2 and beyond:
     Position 2: shared[0] + shared[1] + shared[2] = 0.0 + 1.0 + 2.0 = 3.0
     Position 3: shared[1] + shared[2] + shared[3] = 1.0 + 2.0 + 3.0 = 6.0
     Position 4: shared[2] + shared[3] + shared[4] = 2.0 + 3.0 + 4.0 = 9.0
     ...
   - Natural indexing with LayoutTensor:
     # Sliding window of 3 elements
     window_sum = shared[i-2] + shared[i-1] + shared[i]
4. Memory access pattern
   - One global read per thread into the shared tensor
   - Efficient neighbor access through shared memory
   - LayoutTensor benefits:
     - Automatic bounds checking
     - Natural window indexing
     - Layout-aware memory access
     - Type safety throughout
This approach combines the performance of shared memory with LayoutTensor's safety and ergonomics:
- Minimizes global memory access
- Simplifies window operations
- Handles boundaries cleanly
- Maintains coalesced access patterns
The final output shows the cumulative window sums:
[0.0, 1.0, 3.0, 6.0, 9.0, 12.0, 15.0, 18.0]
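As a sanity check, here is a minimal CPU reference for the same pooling; this sketch is not part of the puzzle's harness, and pooling_cpu is a made-up name:

fn pooling_cpu(a: List[Float32]) -> List[Float32]:
    # Running sum over a window of the last 3 positions, clipped at
    # the left edge, mirroring the kernel's boundary handling
    var res = List[Float32]()
    for i in range(len(a)):
        var s: Float32 = 0
        for j in range(max(0, i - 2), i + 1):
            s += a[j]
        res.append(s)
    return res

Applied to the input [0.0, 1.0, ..., 7.0], this reproduces exactly the expected [0.0, 1.0, 3.0, 6.0, 9.0, 12.0, 15.0, 18.0].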