Description
Bugzilla Link: 44043
Version: trunk
OS: All
Reporter: LLVM Bugzilla Contributor
CC: @rotateright
Extended Description
A number of comparisons involving bit tests are converted into shifts by InstCombine and DAGCombine. However, shifts are expensive on most 8- and 16-bit targets, where selects are comparatively cheap.
It is desirable that selects are emitted instead of shifts for these targets. The following cases were identified in TargetLowering and DAGCombine and were fixed by:
https://reviews.llvm.org/D69116
https://reviews.llvm.org/D69120
https://reviews.llvm.org/D69326
https://reviews.llvm.org/D70042
Cases in InstCombine remain to be fixed. On llvm-dev it has been suggested that these cases should be fixed by reversing the current canonicalisation. They are shown in this and the following reports:
REPORTED CASE:
Source code:
int testShiftAnd_1( int x ) // (InstCombineSelect foldSelectICmpAnd)
{
return x<0 ? 2 : 0;
}
IR code:
define i16 @testShiftAnd_1(i16 %x) {
entry:
%0 = lshr i16 %x, 14
%1 = and i16 %0, 2
ret i16 %1
}
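For reference, the shift form produced by InstCombine is equivalent to the select in the source: the logical shift right by 14 moves the sign bit (bit 15 of an i16) to bit position 1, and masking with 2 keeps only that bit. A minimal C sketch of the two forms (function names are illustrative, not from the compiler):

```c
#include <assert.h>
#include <stdint.h>

/* Select form, as written in the source. */
static int16_t select_form(int16_t x) {
    return x < 0 ? 2 : 0;
}

/* Shift form produced by InstCombine: lshr by 14 lands the sign
 * bit at bit 1; the mask with 2 keeps only that bit. */
static int16_t shift_form(int16_t x) {
    return (int16_t)(((uint16_t)x >> 14) & 2);
}
```

Both return 2 for negative inputs and 0 otherwise; the cost difference only appears once the 14-bit shift is expanded on targets without a barrel shifter.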
MSP430 Target code:
testShiftAnd_1:
swpb r12
mov.b r12, r12
clrc
rrc r12
rra r12
rra r12
rra r12
rra r12
rra r12
and #2, r12
ret
AVR Target code:
testShiftAnd_1:
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
lsr r25
ror r24
andi r24, 2
andi r25, 0
ret
EXPECTED RESULT:
Source code:
int testShiftAnd_1( int x ) // (InstCombineSelect foldSelectICmpAnd)
{
return x<0 ? 2 : 0;
}
Expected IR code:
define i16 @testShiftAnd_1(i16 %x) {
entry:
%cmp = icmp slt i16 %x, 0
%cond = select i1 %cmp, i16 2, i16 0
ret i16 %cond
}
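The expected select form lowers to a short test-and-branch sequence, which these targets handle cheaply. A hypothetical C model of the expected MSP430 lowering shown below (load a constant, clear it when the input is non-negative, then double it to produce 2):

```c
#include <assert.h>
#include <stdint.h>

/* Branch-based model of the expected select lowering.
 * (Illustrative sketch of the codegen shape, not the actual backend.) */
static int16_t expected_lowering(int16_t x) {
    int16_t result = 1;      /* load constant 1           */
    if (x >= 0)              /* tst / conditional branch  */
        result = 0;          /* clr                       */
    return result + result;  /* add r12, r12: 1 -> 2      */
}
```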
Expected MSP430 Target code:
testShiftAnd_1:
mov r12, r13
mov #1, r12
tst r13
jl .LBB5_2
clr r12
.LBB5_2:
add r12, r12
ret
Expected AVR Target code:
testShiftAnd_1:
tst r25
brmi LBB5_2
ldi r24, 0
ldi r25, 0
ret
LBB5_2:
ldi r24, 2
ldi r25, 0
ret