# Validation Operations

ONNX test models used by `validate.py` to verify the Raptor compiler + PIM simulator pipeline.

Generated tests can be regenerated with:

```sh
python3 validation/operations/gen_tests.py
```

## Conv

| Test | Directory | Input | Output | Kernel | Stride | Padding | Bias | Notes |
|---|---|---|---|---|---|---|---|---|
| Simple | `conv/simple` | [1,3,3,3] | [1,1,2,2] | 2x2 | 1 | none | no | Basic conv, hand-crafted |
| With constant | `conv/with_constant` | [1,3,3,3] | [1,1,3,3] | 2x2 | 1 | SAME_UPPER | yes | Hand-crafted, constant weight+bias |
| Batch 2 | `conv/batch_2` | [2,3,3,3] | [2,1,3,3] | 2x2 | 1 | SAME_UPPER | yes | Batched input |
| Kernel 3x3 | `conv/kernel_3x3` | [1,1,5,5] | [1,1,3,3] | 3x3 | 1 | none | no | Larger kernel |
| Stride 2 | `conv/stride_2` | [1,1,6,6] | [1,1,2,2] | 3x3 | 2 | none | no | Strided convolution |
| Multi channel | `conv/multi_channel` | [1,3,5,5] | [1,4,3,3] | 3x3 | 1 | none | no | 3 in channels, 4 out channels |
| Pointwise 1x1 | `conv/pointwise_1x1` | [1,8,4,4] | [1,4,4,4] | 1x1 | 1 | none | no | Channel mixing |
| SAME padding 3x3 | `conv/same_padding_3x3` | [1,1,5,5] | [1,1,5,5] | 3x3 | 1 | SAME_UPPER | no | Spatial dims preserved |
| Explicit padding | `conv/explicit_padding` | [1,1,4,4] | [1,1,4,4] | 3x3 | 1 | [1,1,1,1] | no | Symmetric explicit pads |
| With bias 3x3 | `conv/with_bias_3x3` | [1,3,5,5] | [1,2,3,3] | 3x3 | 1 | none | yes | Multi-channel with bias |
| Large spatial | `conv/large_spatial` | [1,1,8,8] | [1,1,6,6] | 3x3 | 1 | none | no | Larger spatial input |
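For reference, the output spatial sizes in the table follow the standard convolution shape arithmetic; a minimal sketch (helper names are illustrative, not part of the test suite):

```python
def conv_out_size(in_size, k, stride=1, pad=0):
    # Symmetric `pad` on each side: floor((in + 2*pad - k) / stride) + 1
    return (in_size + 2 * pad - k) // stride + 1

def same_upper_out(in_size, stride):
    # SAME_UPPER targets out = ceil(in / stride), adding any extra pad bottom/right
    return -(-in_size // stride)
```

For example, `conv_out_size(6, 3, stride=2)` gives 2, matching the "Stride 2" row, and `same_upper_out(5, 1)` gives 5, matching "SAME padding 3x3".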

## Gemm

| Test | Directory | A (input) | W (weight) | Output | transB | alpha | beta | Bias | Notes |
|---|---|---|---|---|---|---|---|---|---|
| Default | `gemm/` | [10,132] | [132,132] | [10,132] | no | 1 | 1 | no | Hand-crafted, square weights |
| Non-square | `gemm/non_square` | [4,128] | [128,64] | [4,64] | no | 1 | 1 | no | K != N |
| With bias | `gemm/with_bias` | [4,128] | [128,128] | [4,128] | no | 1 | 1 | [128] | Bias vector |
| transB | `gemm/transB` | [4,128] | [64,128] | [4,64] | yes | 1 | 1 | no | Transposed weight |
| Alpha/beta | `gemm/alpha_beta` | [4,64] | [64,64] | [4,64] | no | 0.5 | 0.25 | [64] | Scaled matmul + bias |
| Small | `gemm/small` | [2,8] | [8,4] | [2,4] | no | 1 | 1 | no | Tiny matrices |
| Large | `gemm/large` | [8,256] | [256,128] | [8,128] | no | 1 | 1 | no | Larger matrices |
| transB + bias | `gemm/transB_with_bias` | [4,128] | [64,128] | [4,64] | yes | 1 | 1 | [64] | Combined |
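The `transB`, `alpha`, and `beta` columns combine per the ONNX Gemm definition, `Y = alpha * A @ B' + beta * C`; a minimal NumPy reference (function name is mine):

```python
import numpy as np

def gemm_ref(A, B, C=None, alpha=1.0, beta=1.0, transB=False):
    # ONNX Gemm semantics: Y = alpha * A @ B' + beta * C, with C broadcast to [M, N]
    Bm = B.T if transB else B
    Y = alpha * (A @ Bm)
    if C is not None:
        Y = Y + beta * C
    return Y

# "transB" row: [4,128] @ [64,128]' -> [4,64]
Y = gemm_ref(np.ones((4, 128), np.float32), np.ones((64, 128), np.float32), transB=True)
```

With the "Alpha/beta" row's shapes and all-ones tensors, each output element is `0.5 * 64 + 0.25 = 32.25`, which is a handy sanity value when debugging scale handling.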

## Gemv

| Test | Directory | Input | W (weight) | Output | Bias | Notes |
|---|---|---|---|---|---|---|
| Simple | `gemv/simple` | [1,132] | [132,132] | [1,132] | no | Single-sample matmul |
| Constant | `gemv/constant` | (none) | [132,132] | [1,132] | no | All inputs constant |
| Homogeneous const | `gemv/with_homogeneous_constant` | [1,132] | [132,132] | [1,132] | [1,132] | Bias matches output shape |
| Heterogeneous const | `gemv/with_heterogeneous_constant` | [1,132] | [132,132] | [1,132] | [1,132] | Different constant pattern |
| Scalar const | `gemv/with_scalar_constant` | [1,132] | [132,132] | [1,132] | [1,1] | Scalar bias, broadcast |
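The "Scalar const" row exercises broadcasting a `[1,1]` bias over the full `[1,132]` output; a small NumPy sketch of that semantics (values are illustrative):

```python
import numpy as np

x = np.ones((1, 132), dtype=np.float32)
W = np.eye(132, dtype=np.float32)
b = np.full((1, 1), 0.5, dtype=np.float32)  # scalar bias, broadcast to [1,132]
y = x @ W + b
```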

## Pool

| Test | Directory | Input | Output | Kernel | Stride | Padding | Notes |
|---|---|---|---|---|---|---|---|
| Max basic | `pool/max_basic` | [1,1,4,4] | [1,1,3,3] | 2x2 | 1 | none | Basic max pooling |
| Max stride 2 multi-channel | `pool/max_stride2_multichannel` | [1,5,6,6] | [1,5,3,3] | 2x2 | 2 | none | Channel-preserving max pool |
| Max SAME_UPPER | `pool/max_same_upper` | [1,1,5,5] | [1,1,3,3] | 3x3 | 2 | SAME_UPPER | Deprecated auto_pad path |
| Avg basic | `pool/avg_basic` | [1,3,4,4] | [1,3,3,3] | 2x2 | 1 | none | Basic average pooling |
| Avg explicit padding | `pool/avg_explicit_padding` | [1,2,4,4] | [1,2,2,2] | 3x3 | 2 | [1,1,1,1] | count_include_pad=0 |
| Avg include pad | `pool/avg_include_pad` | [1,2,4,4] | [1,2,2,2] | 3x3 | 2 | [1,1,1,1] | count_include_pad=1 |
| Max after Conv | `pool/max_after_conv` | [1,3,6,6] | [1,4,2,2] | Conv 3x3 then Pool 2x2 | 2 | none | Regression for pool(conv(...)) |
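The only difference between the two padded AveragePool tests is the divisor: with `count_include_pad=1` every window divides by `k*k`, while with `count_include_pad=0` it divides by the count of non-padded elements. A single-channel NumPy sketch of that distinction (not the simulator's implementation):

```python
import numpy as np

def avg_pool2d(x, k, stride, pad, count_include_pad):
    # x: [H, W]; zero-pads `pad` elements on every side
    H, W = x.shape
    xp = np.pad(x, pad)
    oh = (H + 2 * pad - k) // stride + 1
    ow = (W + 2 * pad - k) // stride + 1
    out = np.zeros((oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            h0, w0 = i * stride, j * stride
            win = xp[h0:h0 + k, w0:w0 + k]
            if count_include_pad:
                out[i, j] = win.mean()  # divide by k*k, padded zeros included
            else:
                # divide by the number of window cells inside the original image
                vh = min(h0 + k, H + pad) - max(h0, pad)
                vw = min(w0 + k, W + pad) - max(w0, pad)
                out[i, j] = win.sum() / (vh * vw)
    return out
```

On an all-ones 4x4 input with `k=3, stride=2, pad=1` (the shapes of the two Avg tests, per channel), the corner output is `4/9` when padding is counted and `1.0` when it is not.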

## ReduceMean

| Test | Directory | Input | Output | Axes | Keepdims | Notes |
|---|---|---|---|---|---|---|
| Basic | `reduce_mean/basic` | [4,8] | [4,1] | [1] | 1 | Reduce feature dimension, preserving rank |
| Keepdims 0 | `reduce_mean/keepdims_0` | [4,8] | [4] | [1] | 0 | Reduce feature dimension, dropping reduced axis |
| 4D spatial | `reduce_mean/4d_spatial` | [1,3,4,4] | [1,3,1,1] | [2,3] | 1 | Reduce H and W on NCHW input |
| After Conv | `reduce_mean/after_conv` | [1,3,5,5] | [1,2,1,1] | [2,3] | 1 | Conv 3x3 + bias, then spatial ReduceMean |
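The Axes/Keepdims columns map directly onto NumPy's `mean` arguments; a quick shape check against the table:

```python
import numpy as np

x2 = np.ones((4, 8), dtype=np.float32)
x4 = np.ones((1, 3, 4, 4), dtype=np.float32)

y_keep = np.mean(x2, axis=1, keepdims=True)      # [4,1]  -> "Basic"
y_drop = np.mean(x2, axis=1, keepdims=False)     # [4]    -> "Keepdims 0"
y_hw = np.mean(x4, axis=(2, 3), keepdims=True)   # [1,3,1,1] -> "4D spatial"
```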

## Relu

| Test | Directory | Input | Output | Notes |
|---|---|---|---|---|
| Basic | `relu/basic` | [4,8] | [4,8] | Standalone 2D Relu |
| 4D | `relu/4d` | [2,3,4,4] | [2,3,4,4] | Standalone NCHW Relu |
| After Conv | `relu/after_conv` | [1,3,5,5] | [1,2,3,3] | Conv 3x3 + bias, then Relu |
| After Gemm | `relu/after_gemm` | [4,64] | [4,32] | Gemm + bias, then Relu |

## Sigmoid

| Test | Directory | Input | Output | Notes |
|---|---|---|---|---|
| Basic | `sigmoid/basic` | [4,8] | [4,8] | Standalone 2D Sigmoid |
| 4D | `sigmoid/4d` | [2,3,4,4] | [2,3,4,4] | Standalone NCHW Sigmoid |
| After Gemm | `sigmoid/after_gemm` | [4,64] | [4,32] | Gemm + bias, then Sigmoid |
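Both activations above are elementwise and shape-preserving; a NumPy reference of their expected semantics (helper names are mine):

```python
import numpy as np

def relu_ref(x):
    # elementwise max(x, 0); output shape equals input shape
    return np.maximum(x, 0)

def sigmoid_ref(x):
    # elementwise logistic function 1 / (1 + e^-x)
    return 1.0 / (1.0 + np.exp(-x))
```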

## Add

| Test | Directory | Input(s) | Output | Notes |
|---|---|---|---|---|
| Basic | `add/basic` | A:[4,8], B:[4,8] | [4,8] | Elementwise add, same-shape inputs |
| Broadcast row | `add/broadcast_row` | A:[4,8], B:[8] | [4,8] | Row-vector broadcasting via initializer |
| After Gemm | `add/after_gemm` | A:[4,64], D:[32] | [4,32] | Gemm + bias, then Add with broadcast vector |
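The "Broadcast row" case relies on standard NumPy-style broadcasting of a `[8]` vector across the batch dimension; a minimal sketch with illustrative values:

```python
import numpy as np

A = np.arange(32, dtype=np.float32).reshape(4, 8)
B = np.ones(8, dtype=np.float32)  # [8] vector, broadcast over the first dim
Y = A + B                         # -> [4,8]
```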

## Mul

| Test | Directory | Input(s) | Output | Notes |
|---|---|---|---|---|
| Basic | `mul/basic` | A:[4,8], B:[4,8] | [4,8] | Elementwise multiply, same-shape inputs |
| Scalar constant | `mul/scalar_constant` | X:[4,8], S:[1] | [4,8] | Scalar broadcasting via initializer |
| After Conv | `mul/after_conv` | X:[1,3,5,5], S:[1,2,1,1] | [1,2,3,3] | Conv 3x3 + bias, then per-channel scaling |

## Div

| Test | Directory | Input(s) | Output | Notes |
|---|---|---|---|---|
| Basic | `div/basic` | X:[4,8], D:[4,8] | [4,8] | Elementwise divide by same-shape constant tensor |
| Scalar constant | `div/scalar_constant` | X:[4,8], S:[1] | [4,8] | Scalar broadcasting via initializer |
| After Gemm | `div/after_gemm` | A:[4,64], D:[32] | [4,32] | Gemm + bias, then Div with positive broadcast vector |
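Div follows the same broadcasting rules as Add/Mul; the "After Gemm" divisor is kept strictly positive so no test case divides by zero. A minimal sketch with illustrative values:

```python
import numpy as np

A = np.ones((4, 32), dtype=np.float32)
D = np.full(32, 2.0, dtype=np.float32)  # strictly positive broadcast divisor
Y = A / D                               # -> [4,32]
```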