[mlir][vector] Separate bitwidth specific tests out #138071
Conversation
@llvm/pr-subscribers-mlir-vector @llvm/pr-subscribers-mlir

Author: James Newling (newling)

Changes

In #136581 the logic pertaining to bitwidth was removed from the patterns. This PR further factorizes the bitwidth logic out of the main test file.

The number of bitwidth tests (in the new file added in this PR) is now lower than before this PR, because the bitwidth-specific logic is now tested only once (there was a fair amount of redundant testing before).

I didn't do this test refactoring in #136581 because I wanted to make it clear that that change was NFC by leaving the tests unchanged there.

Patch is 33.68 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/138071.diff

2 Files Affected:
diff --git a/mlir/test/Dialect/Vector/linearize-subject-to-bitwidth.mlir b/mlir/test/Dialect/Vector/linearize-subject-to-bitwidth.mlir
new file mode 100644
index 0000000000000..12ec7ec14e4d1
--- /dev/null
+++ b/mlir/test/Dialect/Vector/linearize-subject-to-bitwidth.mlir
@@ -0,0 +1,56 @@
+// RUN: mlir-opt %s -split-input-file -test-bit-width-constrained-vector-linearize=target-vector-bitwidth=128 | FileCheck %s --check-prefixes=ALL,BW-128
+// RUN: mlir-opt %s -split-input-file -test-bit-width-constrained-vector-linearize=target-vector-bitwidth=0 | FileCheck %s --check-prefixes=ALL,BW-0
+
+// A vector<2x2xf32> has inner-most dimension with 64-bits. Check that at
+// bitwidth threshold 128 (>= 64), operations are linearized, and at
+// bitwidth threshold 0 (< 64), operations are not linearized.
+
+// ALL-LABEL: test_result_bitwidth_64
+func.func @test_result_bitwidth_64(%arg0: vector<2x2xf32>) -> vector<2x2xf32> {
+
+ // BW-128: arith.constant {{.*}} vector<4xf32>
+ // BW-0: arith.constant {{.*}} vector<2x2xf32>
+ %0 = arith.constant dense<[[1.0, 2.0], [3.0, 4.0]]> : vector<2x2xf32>
+
+ // BW-128: math.sin {{.*}} vector<4xf32>
+ // BW-0: math.sin {{.*}} vector<2x2xf32>
+ %1 = math.sin %arg0 : vector<2x2xf32>
+
+ return %0 : vector<2x2xf32>
+}
+
+// -----
+
+// Test that operations with vectors of index type are not linearized.
+
+// ALL-LABEL: test_index_no_linearize
+func.func @test_index_no_linearize(%arg0: vector<2x2xindex>, %arg1: vector<2x2xindex>) -> vector<2x2xindex> {
+
+ // BW-128: %[[ADD:.*]] = arith.addi {{.*}} : vector<2x2xindex>
+ // BW-0: %[[ADD:.*]] = arith.addi {{.*}} : vector<2x2xindex>
+ %0 = arith.addi %arg0, %arg1 : vector<2x2xindex>
+ return %0 : vector<2x2xindex>
+}
+
+// -----
+
+// The logic for the insert op with regards to the bitwidth threshold is
+// different to the other ops, so we test it here. Specifically, the logic
+// is based on the bitwidth of the value to store.
+
+// ALL-LABEL: test_vector_insert
+// ALL-SAME: (%[[DEST:.*]]: vector<2x8x4xf32>, %[[SRC:.*]]: vector<8x4xf32>) -> vector<2x8x4xf32> {
+func.func @test_vector_insert(%arg0: vector<2x8x4xf32>, %arg1: vector<8x4xf32>) -> vector<2x8x4xf32> {
+
+ // BW-128-DAG: %[[ARG_SRC:.*]] = vector.shape_cast %[[SRC]] : vector<8x4xf32> to vector<32xf32>
+ // BW-128-DAG: %[[ARG_DEST:.*]] = vector.shape_cast %[[DEST]] : vector<2x8x4xf32> to vector<64xf32>
+ // BW-128: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG_DEST]], %[[ARG_SRC]]
+ // BW-128: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<64xf32> to vector<2x8x4xf32>
+ // BW-128: return %[[RES]] : vector<2x8x4xf32>
+
+ // BW-0: %[[RES:.*]] = vector.insert %[[SRC]], %[[DEST]] [0] : vector<8x4xf32> into vector<2x8x4xf32>
+ // BW-0: return %[[RES]] : vector<2x8x4xf32>
+
+ %0 = vector.insert %arg1, %arg0[0]: vector<8x4xf32> into vector<2x8x4xf32>
+ return %0 : vector<2x8x4xf32>
+}
diff --git a/mlir/test/Dialect/Vector/linearize.mlir b/mlir/test/Dialect/Vector/linearize.mlir
index 06eaf58b225ae..56261103fd908 100644
--- a/mlir/test/Dialect/Vector/linearize.mlir
+++ b/mlir/test/Dialect/Vector/linearize.mlir
@@ -1,115 +1,77 @@
-// RUN: mlir-opt %s -split-input-file -test-vector-linearize -verify-diagnostics | FileCheck %s --check-prefixes=ALL,DEFAULT
+// RUN: mlir-opt %s -split-input-file -test-vector-linearize -verify-diagnostics | FileCheck %s
-// RUN: mlir-opt %s -split-input-file -test-bit-width-constrained-vector-linearize=target-vector-bitwidth=128 -verify-diagnostics | FileCheck %s --check-prefixes=ALL,BW-128
-// RUN: mlir-opt %s -split-input-file -test-bit-width-constrained-vector-linearize=target-vector-bitwidth=0 | FileCheck %s --check-prefixes=ALL,BW-0
-
-// ALL-LABEL: test_linearize
-// ALL-SAME: (%[[ORIG_ARG:.*]]: vector<2x2xf32>)
+// CHECK-LABEL: test_linearize
+// CHECK-SAME: (%[[ORIG_ARG:.*]]: vector<2x2xf32>)
func.func @test_linearize(%arg0: vector<2x2xf32>) -> vector<2x2xf32> {
- // DEFAULT: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
- // DEFAULT: %[[CST:.*]] = arith.constant dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00]> : vector<4xf32>
- // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
-
- // BW-128: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
- // BW-128: %[[CST:.*]] = arith.constant dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00]> : vector<4xf32>
- // BW-128: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
- // BW-0: %[[RES:.*]] = arith.constant dense<{{.*}}> : vector<2x2xf32>
+ // CHECK: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
+ // CHECK: %[[CST:.*]] = arith.constant dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00]> : vector<4xf32>
+ // CHECK: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
%0 = arith.constant dense<[[1.0, 2.0], [3.0, 4.0]]> : vector<2x2xf32>
- // DEFAULT: %{{.*}} = math.sin %[[ARG]] : vector<4xf32>
- // BW-128: %{{.*}} = math.sin %[[ARG]] : vector<4xf32>
- // BW-0: %{{.*}} = math.sin %{{.*}} : vector<2x2xf32>
+ // CHECK: %{{.*}} = math.sin %[[ARG]] : vector<4xf32>
%1 = math.sin %arg0 : vector<2x2xf32>
- // DEFAULT: %{{.*}} = arith.addf %[[ARG]], %[[CST]] : vector<4xf32>
- // BW-128: %{{.*}} = arith.addf %[[ARG]], %[[CST]] : vector<4xf32>
- // BW-0: %{{.*}} = arith.addf %{{.*}} : vector<2x2xf32>
+ // CHECK: %{{.*}} = arith.addf %[[ARG]], %[[CST]] : vector<4xf32>
%2 = arith.addf %arg0, %0 : vector<2x2xf32>
- // ALL: return %[[RES]] : vector<2x2xf32>
+ // CHECK: return %[[RES]] : vector<2x2xf32>
return %0 : vector<2x2xf32>
}
// -----
-// ALL-LABEL: test_linearize_poison
+// CHECK-LABEL: test_linearize_poison
func.func @test_linearize_poison() -> vector<2x2xf32> {
- // DEFAULT: %[[POISON:.*]] = ub.poison : vector<4xf32>
- // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[POISON]] : vector<4xf32> to vector<2x2xf32>
- // BW-128: %[[POISON:.*]] = ub.poison : vector<4xf32>
- // BW-128: %[[RES:.*]] = vector.shape_cast %[[POISON]] : vector<4xf32> to vector<2x2xf32>
-
- // BW-0: %[[RES:.*]] = ub.poison : vector<2x2xf32>
+ // CHECK: %[[POISON:.*]] = ub.poison : vector<4xf32>
+ // CHECK: %[[RES:.*]] = vector.shape_cast %[[POISON]] : vector<4xf32> to vector<2x2xf32>
%0 = ub.poison : vector<2x2xf32>
- // ALL: return %[[RES]] : vector<2x2xf32>
+
+ // CHECK: return %[[RES]] : vector<2x2xf32>
return %0 : vector<2x2xf32>
}
// -----
-// ALL-LABEL: test_partial_linearize
-// ALL-SAME: (%[[ORIG_ARG:.*]]: vector<2x2xf32>, %[[ORIG_ARG2:.*]]: vector<4x4xf32>)
+// CHECK-LABEL: test_partial_linearize
+// CHECK-SAME: (%[[ORIG_ARG:.*]]: vector<2x2xf32>, %[[ORIG_ARG2:.*]]: vector<4x4xf32>)
func.func @test_partial_linearize(%arg0: vector<2x2xf32>, %arg1: vector<4x4xf32>) -> vector<2x2xf32> {
- // DEFAULT-DAG: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
- // DEFAULT-DAG: %[[ARG2:.*]] = vector.shape_cast %[[ORIG_ARG2]] : vector<4x4xf32> to vector<16xf32>
- // DEFAULT: %[[CST:.*]] = arith.constant dense<{{.*}}> : vector<4xf32>
- // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
- // BW-128: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
- // BW-128: %[[CST:.*]] = arith.constant dense<{{.*}}> : vector<4xf32>
- // BW-128: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
-
- // BW-0: %[[RES:.*]] = arith.constant dense<{{.*}}> : vector<2x2xf32>
+ // CHECK-DAG: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
+ // CHECK-DAG: %[[ARG2:.*]] = vector.shape_cast %[[ORIG_ARG2]] : vector<4x4xf32> to vector<16xf32>
+ // CHECK: %[[CST:.*]] = arith.constant dense<{{.*}}> : vector<4xf32>
+ // CHECK: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
%0 = arith.constant dense<[[1.0, 2.0], [3.0, 4.0]]> : vector<2x2xf32>
- // DEFAULT: %[[C2:.*]] = arith.constant dense<{{.*}}> : vector<16xf32>
- // BW-128: %[[C2:.*]] = arith.constant dense<{{.*}}> : vector<4x4xf32>
- // BW-0: %[[C2:.*]] = arith.constant dense<{{.*}}> : vector<4x4xf32>
+ // CHECK: %[[C2:.*]] = arith.constant dense<{{.*}}> : vector<16xf32>
%5 = arith.constant dense<[[1.0, 2.0, 3.0, 4.0], [1.0, 2.0,3.0, 4.0], [1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 5.0, 6.0]]> : vector<4x4xf32>
// Arith and math ops are handled in generic way, check some of them
- // DEFAULT: %[[SIN:.*]] = math.sin %[[ARG]] : vector<4xf32>
- // BW-128: %[[SIN:.*]] = math.sin %[[ARG]] : vector<4xf32>
- // BW-0: %[[SIN:.*]] = math.sin %[[ORIG_ARG]] : vector<2x2xf32>
+ // CHECK: %[[SIN:.*]] = math.sin %[[ARG]] : vector<4xf32>
%1 = math.sin %arg0 : vector<2x2xf32>
- // DEFAULT: %[[SIN1:.*]] = math.sin %[[ARG2]] : vector<16xf32>
- // BW-128: %[[SIN1:.*]] = math.sin %[[ORIG_ARG2]] : vector<4x4xf32>
- // BW-0: %[[SIN1:.*]] = math.sin %[[ORIG_ARG2]] : vector<4x4xf32>
+ // CHECK: %[[SIN1:.*]] = math.sin %[[ARG2]] : vector<16xf32>
%6 = math.sin %arg1 : vector<4x4xf32>
- // DEFAULT: %{{.*}} = arith.addf %[[ARG]], %[[CST]] : vector<4xf32>
- // BW-128: %{{.*}} = arith.addf %[[ARG]], %[[CST]] : vector<4xf32>
- // BW-0: %{{.*}} = arith.addf %{{.*}} : vector<2x2xf32>
+ // CHECK: %{{.*}} = arith.addf %[[ARG]], %[[CST]] : vector<4xf32>
%2 = arith.addf %arg0, %0 : vector<2x2xf32>
- // DEFAULT: %[[ADD2:.*]] = arith.addf %[[ARG2]], %[[C2]] : vector<16xf32>
- // BW-128: %[[ADD2:.*]] = arith.addf %[[ORIG_ARG2]], %[[C2]] : vector<4x4xf32>
- // BW-0: %[[ADD2:.*]] = arith.addf %[[ORIG_ARG2]], %[[C2]] : vector<4x4xf32>
+ // CHECK: %[[ADD2:.*]] = arith.addf %[[ARG2]], %[[C2]] : vector<16xf32>
%7 = arith.addf %arg1, %5 : vector<4x4xf32>
- // ALL: return %[[RES]] : vector<2x2xf32>
+ // CHECK: return %[[RES]] : vector<2x2xf32>
return %0 : vector<2x2xf32>
}
// -----
-// ALL-LABEL: test_index_no_linearize
-func.func @test_index_no_linearize(%arg0: vector<2x2xindex>, %arg1: vector<2x2xindex>) -> vector<2x2xindex> {
- // BW-128: %[[ADD:.*]] = arith.addi {{.*}} : vector<2x2xindex>
- %0 = arith.addi %arg0, %arg1 : vector<2x2xindex>
- return %0 : vector<2x2xindex>
-}
-
-// -----
-
// vectorizable operation (arith.mulf) with tensor result types.
-// ALL-LABEL: test_tensor_no_linearize
+// CHECK-LABEL: test_tensor_no_linearize
func.func @test_tensor_no_linearize(%arg0: tensor<2x2xf32>, %arg1: tensor<2x2xf32>) -> (tensor<2x2xf32>, tensor<2x2xf32>) {
- // ALL: %[[MULF:.*]] = arith.mulf %arg0, %arg1 : tensor<2x2xf32>
+
+ // CHECK: %[[MULF:.*]] = arith.mulf %arg0, %arg1 : tensor<2x2xf32>
%0 = arith.mulf %arg0, %arg1 : tensor<2x2xf32>
return %0, %arg0 : tensor<2x2xf32>, tensor<2x2xf32>
@@ -117,79 +79,67 @@ func.func @test_tensor_no_linearize(%arg0: tensor<2x2xf32>, %arg1: tensor<2x2xf3
// -----
-// ALL-LABEL: func.func @test_scalable_linearize(
-// ALL-SAME: %[[ARG_0:.*]]: vector<2x[2]xf32>) -> vector<2x[2]xf32> {
+// CHECK-LABEL: func.func @test_scalable_linearize(
+// CHECK-SAME: %[[ARG_0:.*]]: vector<2x[2]xf32>) -> vector<2x[2]xf32> {
func.func @test_scalable_linearize(%arg0: vector<2x[2]xf32>) -> vector<2x[2]xf32> {
- // DEFAULT: %[[SC:.*]] = vector.shape_cast %[[ARG_0]] : vector<2x[2]xf32> to vector<[4]xf32>
- // DEFAULT: %[[CST:.*]] = arith.constant dense<3.000000e+00> : vector<[4]xf32>
- // BW-128: %[[SC:.*]] = vector.shape_cast %[[ARG_0]] : vector<2x[2]xf32> to vector<[4]xf32>
- // BW-128: %[[CST:.*]] = arith.constant dense<3.000000e+00> : vector<[4]xf32>
- // BW-0: %[[CST:.*]] = arith.constant dense<3.000000e+00> : vector<2x[2]xf32>
+
+ // CHECK: %[[SC:.*]] = vector.shape_cast %[[ARG_0]] : vector<2x[2]xf32> to vector<[4]xf32>
+ // CHECK: %[[CST:.*]] = arith.constant dense<3.000000e+00> : vector<[4]xf32>
%0 = arith.constant dense<[[3., 3.], [3., 3.]]> : vector<2x[2]xf32>
- // DEFAULT: %[[SIN:.*]] = math.sin %[[SC]] : vector<[4]xf32>
- // BW-128: %[[SIN:.*]] = math.sin %[[SC]] : vector<[4]xf32>
- // BW-0: %[[SIN:.*]] = math.sin %[[ARG_0]] : vector<2x[2]xf32>
+ // CHECK: %[[SIN:.*]] = math.sin %[[SC]] : vector<[4]xf32>
%1 = math.sin %arg0 : vector<2x[2]xf32>
- // DEFAULT: %[[ADDF:.*]] = arith.addf %[[SIN]], %[[CST]] : vector<[4]xf32>
- // BW-128: %[[ADDF:.*]] = arith.addf %[[SIN]], %[[CST]] : vector<[4]xf32>
- // BW-0: %[[RES:.*]] = arith.addf %[[CST]], %[[SIN]] : vector<2x[2]xf32>
+ // CHECK: %[[ADDF:.*]] = arith.addf %[[SIN]], %[[CST]] : vector<[4]xf32>
%2 = arith.addf %0, %1 : vector<2x[2]xf32>
- // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[ADDF]] : vector<[4]xf32> to vector<2x[2]xf32>
- // BW-128: %[[RES:.*]] = vector.shape_cast %[[ADDF]] : vector<[4]xf32> to vector<2x[2]xf32>
- // ALL: return %[[RES]] : vector<2x[2]xf32>
+ // CHECK: %[[RES:.*]] = vector.shape_cast %[[ADDF]] : vector<[4]xf32> to vector<2x[2]xf32>
+ // CHECK: return %[[RES]] : vector<2x[2]xf32>
return %2 : vector<2x[2]xf32>
}
// -----
-// ALL-LABEL: func.func @test_scalable_no_linearize(
-// ALL-SAME: %[[VAL_0:.*]]: vector<[2]x[2]xf32>) -> vector<[2]x[2]xf32> {
+// CHECK-LABEL: func.func @test_scalable_no_linearize(
+// CHECK-SAME: %[[VAL_0:.*]]: vector<[2]x[2]xf32>) -> vector<[2]x[2]xf32> {
func.func @test_scalable_no_linearize(%arg0: vector<[2]x[2]xf32>) -> vector<[2]x[2]xf32> {
- // ALL: %[[CST:.*]] = arith.constant dense<2.000000e+00> : vector<[2]x[2]xf32>
+
+ // CHECK: %[[CST:.*]] = arith.constant dense<2.000000e+00> : vector<[2]x[2]xf32>
%0 = arith.constant dense<[[2., 2.], [2., 2.]]> : vector<[2]x[2]xf32>
- // ALL: %[[SIN:.*]] = math.sin %[[VAL_0]] : vector<[2]x[2]xf32>
+ // CHECK: %[[SIN:.*]] = math.sin %[[VAL_0]] : vector<[2]x[2]xf32>
%1 = math.sin %arg0 : vector<[2]x[2]xf32>
- // ALL: %[[RES:.*]] = arith.addf %[[CST]], %[[SIN]] : vector<[2]x[2]xf32>
+ // CHECK: %[[RES:.*]] = arith.addf %[[CST]], %[[SIN]] : vector<[2]x[2]xf32>
%2 = arith.addf %0, %1 : vector<[2]x[2]xf32>
- // ALL: return %[[RES]] : vector<[2]x[2]xf32>
+ // CHECK: return %[[RES]] : vector<[2]x[2]xf32>
return %2 : vector<[2]x[2]xf32>
}
// -----
-// ALL-LABEL: func.func @test_0d_vector
+// CHECK-LABEL: func.func @test_0d_vector
func.func @test_0d_vector() -> vector<f32> {
- // ALL: %[[CST:.+]] = arith.constant dense<0.000000e+00> : vector<f32>
+
+ // CHECK: %[[CST:.+]] = arith.constant dense<0.000000e+00> : vector<f32>
%0 = arith.constant dense<0.0> : vector<f32>
- // ALL: return %[[CST]]
+
+ // CHECK: return %[[CST]]
return %0 : vector<f32>
}
// -----
-// ALL-LABEL: test_extract_strided_slice_1
-// ALL-SAME: (%[[ORIG_ARG:.*]]: vector<4x8xf32>) -> vector<2x2xf32> {
+// CHECK-LABEL: test_extract_strided_slice_1
+// CHECK-SAME: (%[[ORIG_ARG:.*]]: vector<4x8xf32>) -> vector<2x2xf32> {
func.func @test_extract_strided_slice_1(%arg0 : vector<4x8xf32>) -> vector<2x2xf32> {
- // DEFAULT: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<4x8xf32> to vector<32xf32>
- // DEFAULT: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG]], %[[ARG]]
- // DEFAULT-SAME: [4, 5, 12, 13] : vector<32xf32>, vector<32xf32>
- // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<4xf32> to vector<2x2xf32>
- // DEFAULT: return %[[RES]] : vector<2x2xf32
-
- // BW-128: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<4x8xf32> to vector<32xf32>
- // BW-128: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG]], %[[ARG]]
- // BW-128-SAME: [4, 5, 12, 13] : vector<32xf32>, vector<32xf32>
- // BW-128: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<4xf32> to vector<2x2xf32>
- // BW-128: return %[[RES]] : vector<2x2xf32>
-
- // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ARG:.*]] {offsets = [0, 4], sizes = [2, 2], strides = [1, 1]} : vector<4x8xf32> to vector<2x2xf32>
- // BW-0: return %[[RES]] : vector<2x2xf32>
+
+ // CHECK: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<4x8xf32> to vector<32xf32>
+ // CHECK: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG]], %[[ARG]]
+ // CHECK-SAME: [4, 5, 12, 13] : vector<32xf32>, vector<32xf32>
+ // CHECK: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<4xf32> to vector<2x2xf32>
+ // CHECK: return %[[RES]] : vector<2x2xf32
%0 = vector.extract_strided_slice %arg0 { sizes = [2, 2], strides = [1, 1], offsets = [0, 4]}
: vector<4x8xf32> to vector<2x2xf32>
return %0 : vector<2x2xf32>
@@ -197,36 +147,30 @@ func.func @test_extract_strided_slice_1(%arg0 : vector<4x8xf32>) -> vector<2x2xf
// -----
-// ALL-LABEL: func.func @test_extract_strided_slice_1_scalable(
-// ALL-SAME: %[[VAL_0:.*]]: vector<4x[8]xf32>) -> vector<2x[8]xf32> {
+// CHECK-LABEL: func.func @test_extract_strided_slice_1_scalable(
+// CHECK-SAME: %[[VAL_0:.*]]: vector<4x[8]xf32>) -> vector<2x[8]xf32> {
func.func @test_extract_strided_slice_1_scalable(%arg0: vector<4x[8]xf32>) -> vector<2x[8]xf32> {
- // ALL-NOT: vector.shuffle
- // ALL-NOT: vector.shape_cast
- // ALL: %[[RES:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [1, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x[8]xf32> to vector<2x[8]xf32>
+
+ // CHECK-NOT: vector.shuffle
+ // CHECK-NOT: vector.shape_cast
+ // CHECK: %[[RES:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [1, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x[8]xf32> to vector<2x[8]xf32>
%0 = vector.extract_strided_slice %arg0 { sizes = [2, 8], strides = [1, 1], offsets = [1, 0] } : vector<4x[8]xf32> to vector<2x[8]xf32>
- // ALL: return %[[RES]] : vector<2x[8]xf32>
+
+ // CHECK: return %[[RES]] : vector<2x[8]xf32>
return %0 : vector<2x[8]xf32>
}
// -----
-// ALL-LABEL: test_extract_strided_slice_2
-// ALL-SAME: (%[[ORIG_ARG:.*]]: vector<2x8x2xf32>) -> vector<1x4x2xf32> {
+// CHECK-LABEL: test_extract_strided_slice_2
+// CHECK-SAME: (%[[ORIG_ARG:.*]]: vector<2x8x2xf32>) -> vector<1x4x2xf32> {
func.func @test_extract_strided_slice_2(%arg0 : vector<2x8x2xf32>) -> vector<1x4x2xf32> {
- // DEFAULT: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x8x2xf32> to vector<32xf32>
- // DEFAULT: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG]], %[[ARG]]
- // DEFAULT-SAME: [20, 21, 22, 23, 24, 25, 26, 27] : vector<32xf32>, vector<32xf32>
- // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<8xf32> to vector<1x4x2xf32>
- // DEFAULT: return %[[RES]] : vector<1x4x2xf32>
-
- // BW-128: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x8x2xf32> to vector<32xf32>
- // BW-128: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG]], %[[ARG]]
- // BW-128-SAME: [20, 21, 22, 23, 24, 25, 26, 27] : vector<32xf32>, vector<32xf32>
- // BW-128: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<8xf32> to vector<1x4x2xf32>
- // BW-128: return %[[RES]] : vector<1x4x2xf32>
-
- // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ORIG_ARG]] {offsets = [1, 2], sizes = [1, 4], strides = [1, 1]} : vector<2x8x2xf32> to vector<1x4x2xf32>
- // BW-0: return %[[RES]] : vector<1x4x2xf32>
+
+ // CHECK: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x8x2xf32> to vector<32xf32>
+ // CHECK: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG]], %[[ARG]]
+ // CHECK-SAME: [20, 21, 22, 23, 24, 25, 26, 27] : vector<32xf32>, vector<32xf32>
+ // CHECK: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<8xf32> to vector<1x4x2xf32>
+ // CHECK: return %[[RES]] : vector<1x4x2xf32>
%0 = vector.extract_strided_slice %arg0 { offsets = [1, 2], strides = [1, 1], sizes = [1, 4] }
: vector<2x8x2xf32> to vector<1x4x2xf32>
return %0 : vector<1x4x2xf32>
@@ -234,182 +178,144 @@ func.func @test_extract_strided_slice_2(%arg0 : vector<2x8x2xf32>) -> vector<1x4
// -----
-// ALL-LABEL: test_vector_shuffle
-// ALL-SAME: (%[[ORIG_ARG0:.*]]: vector<4x2xf32>, %[[ORIG_ARG1:.*]]: vector<4x2xf32>) -> vector<8x2xf32> {
+// CHECK-LABEL: test_vector_shuffle
+// CHECK-SAME: (%[[ORIG_ARG0:.*]]: vector<4x2xf32>, %[[ORIG_ARG1:.*]]: vector<4x2xf32>) -> vector<8x2xf32> {
func.func @test_vector_shuffle(%arg0: vector<4x2xf32>, %arg1: vector<4x2xf32>) -> vector<8x2xf32> {
- // DEFAULT-DAG: %[[ARG0:.*]] = vector.shape_cast %[[ORIG_ARG0]] : vector<4x2xf32> to vector<8xf32>
- // DEFAULT-DAG: %[[ARG1:.*]] = vector.shape_cast %[[ORIG_ARG1]] : vector<4x2xf32> to vector<8xf32>
- // DEFAULT: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG0]], %[[ARG1]]
- // DEFAULT-SAME: [0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12,...
[truncated]
FYI @nbpatel
Nice clean-up, thank you! LGTM
> I didn't do this test refactoring in #136581 because I wanted to make it clear that it was NFC by leaving the tests unchanged there
+1
> // Test that operations with vectors of index type are not linearized.
I guess for this to work, we'd need to know the bit-width of index
(i.e. have access to a data layout)? Perhaps it's worth adding a note?
In #136581 I left the comment "The width of the type 'index' is unbounded (and therefore potentially above the target width)." But perhaps saying that it is "platform dependent" would be more accurate?
Source of truth:
> The index type is a signless integer whose size is equal to the natural machine word of the target.
I'll add this note to the test before committing.
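As a side note for readers following this thread: the sketch below is illustrative only, not the helper used by the upstream pass (the function name and structure are assumptions). It shows one conservative way a bitwidth constraint can handle index-typed vectors: since the width of `index` is target-dependent and no data layout is consulted, the check simply refuses to linearize them.

```cpp
#include <cstdint>

#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Hypothetical helper (illustration only): returns true if the innermost
// dimension of `vecTy` is known to fit within `targetBitWidth` bits.
static bool fitsInTargetBitWidth(VectorType vecTy, unsigned targetBitWidth) {
  Type elemTy = vecTy.getElementType();
  // `index` has no fixed width without a data layout, so conservatively
  // report that it does not fit (i.e. do not linearize).
  if (elemTy.isIndex() || !elemTy.isIntOrFloat())
    return false;
  unsigned elemBits = elemTy.getIntOrFloatBitWidth();
  // 0-D vectors carry a single element.
  if (vecTy.getRank() == 0)
    return elemBits <= targetBitWidth;
  uint64_t innerBits = uint64_t(vecTy.getShape().back()) * elemBits;
  return innerBits <= targetBitWidth;
}
```

Under this assumption, the thresholds in the tests behave as expected: with `target-vector-bitwidth=0` nothing fits and no linearization happens, while with 128 the `vector<2x2xf32>` cases (innermost dimension of 64 bits) fit and are linearized, and index-typed vectors are skipped regardless of the threshold.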