author     Evan Cheng <evan.cheng@apple.com>    2009-05-28 00:35:15 +0000
committer  Evan Cheng <evan.cheng@apple.com>    2009-05-28 00:35:15 +0000
commit     2f5d3a50b3d4206d83f7ccc4e95c3c0465d6e460 (patch)
tree       6a1c962c410e68929b12e2753749314181570188 /lib/Target/X86
parent     5482b98d8f6581f1900a3c890d823ca1b5150851 (diff)
Added an optimization that narrows load / op / store sequences where 'op' is a bit-twiddling instruction whose second operand is an immediate. If the bits touched by 'op' can be manipulated by a narrower instruction, the widths of the load and store are reduced as well. This comes up frequently in bitfield manipulation code.
e.g.
orl $65536, 8(%rax)
=>
orb $1, 10(%rax)
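
To make the byte arithmetic concrete, here is a minimal standalone sketch of the narrowing computation (an illustration only, not the LLVM code; narrowOrToByte is a hypothetical helper): on a little-endian target, if every set bit of the 32-bit immediate falls within a single byte, the wide OR can become a byte OR at an adjusted offset.

#include <cstdint>
#include <cstdio>

// Hypothetical helper: given a 32-bit immediate ORed into a 32-bit memory
// operand at byte offset `Offset`, check whether all set bits fall inside
// one byte. If so, produce the narrowed byte offset and 8-bit immediate.
static bool narrowOrToByte(uint32_t Imm, unsigned Offset,
                           unsigned &NewOffset, uint8_t &NewImm) {
  for (unsigned Byte = 0; Byte < 4; ++Byte) {
    uint32_t Mask = 0xFFu << (Byte * 8);
    if ((Imm & ~Mask) == 0) {       // all touched bits live in this byte
      NewOffset = Offset + Byte;    // little-endian: byte N sits at +N
      NewImm = uint8_t(Imm >> (Byte * 8));
      return true;
    }
  }
  return false;                     // bits span bytes; keep the wide op
}

int main() {
  unsigned NewOffset;
  uint8_t NewImm;
  if (narrowOrToByte(65536, 8, NewOffset, NewImm))
    // Prints: orl $65536, 8(%rax) -> orb $1, 10(%rax)
    std::printf("orl $65536, 8(%%rax) -> orb $%u, %u(%%rax)\n",
                unsigned(NewImm), NewOffset);
  return 0;
}

For Imm = 65536 (only bit 16 set) at offset 8, this yields byte 2, i.e. offset 10 and immediate 1, matching the transformation above.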
Since narrowing is not always a win (e.g. i32 -> i16 is a loss on x86), the DAG combiner consults the target before performing the optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@72507 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'lib/Target/X86')
 lib/Target/X86/X86ISelLowering.cpp | 5 +++++
 lib/Target/X86/X86ISelLowering.h   | 5 +++++
 2 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/lib/Target/X86/X86ISelLowering.cpp b/lib/Target/X86/X86ISelLowering.cpp
index b89eef0..0136f90 100644
--- a/lib/Target/X86/X86ISelLowering.cpp
+++ b/lib/Target/X86/X86ISelLowering.cpp
@@ -6877,6 +6877,11 @@ bool X86TargetLowering::isZExtFree(MVT VT1, MVT VT2) const {
   return VT1 == MVT::i32 && VT2 == MVT::i64 && Subtarget->is64Bit();
 }
 
+bool X86TargetLowering::isNarrowingProfitable(MVT VT1, MVT VT2) const {
+  // i16 instructions are longer (0x66 prefix) and potentially slower.
+  return !(VT1 == MVT::i32 && VT2 == MVT::i16);
+}
+
 /// isShuffleMaskLegal - Targets can use this to indicate that they only
 /// support *some* VECTOR_SHUFFLE operations, those with specific masks.
 /// By default, if a target supports the VECTOR_SHUFFLE node, all mask values
diff --git a/lib/Target/X86/X86ISelLowering.h b/lib/Target/X86/X86ISelLowering.h
index badbd24..550f8bd 100644
--- a/lib/Target/X86/X86ISelLowering.h
+++ b/lib/Target/X86/X86ISelLowering.h
@@ -466,6 +466,11 @@ namespace llvm {
     virtual bool isZExtFree(const Type *Ty1, const Type *Ty2) const;
     virtual bool isZExtFree(MVT VT1, MVT VT2) const;
 
+    /// isNarrowingProfitable - Return true if it's profitable to narrow
+    /// operations of type VT1 to VT2. e.g. on x86, it's profitable to narrow
+    /// from i32 to i8 but not from i32 to i16.
+    virtual bool isNarrowingProfitable(MVT VT1, MVT VT2) const;
+
     /// isShuffleMaskLegal - Targets can use this to indicate that they only
     /// support *some* VECTOR_SHUFFLE operations, those with specific masks.
     /// By default, if a target supports the VECTOR_SHUFFLE node, all mask
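
For illustration, a self-contained mock of the hook's decision logic (a sketch only: SimpleVT stands in for llvm::MVT here, and the real method is a virtual override on X86TargetLowering as shown in the diff above):

#include <cassert>

// SimpleVT is a stand-in for llvm::MVT, for illustration only.
enum SimpleVT { i8, i16, i32 };

// Mirrors the logic of X86TargetLowering::isNarrowingProfitable above:
// all narrowings are profitable except i32 -> i16, whose encoding needs
// the 0x66 operand-size prefix.
static bool isNarrowingProfitable(SimpleVT VT1, SimpleVT VT2) {
  return !(VT1 == i32 && VT2 == i16);
}

int main() {
  assert(isNarrowingProfitable(i32, i8));    // orl -> orb: worthwhile
  assert(!isNarrowingProfitable(i32, i16));  // orl -> orw: avoided on x86
  return 0;
}

This is why the commit's example narrows to orb rather than orw: the i16 form would carry the 0x66 prefix, making it longer and potentially slower.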