author     Evan Cheng <evan.cheng@apple.com>    2012-12-10 23:21:26 +0000
committer  Evan Cheng <evan.cheng@apple.com>    2012-12-10 23:21:26 +0000
commit     376642ed620ecae05b68c7bc81f79aeb2065abe0
tree       9757b2568050b3ab58af15c234df3bc9f66202b0 /lib/Target/X86/X86ISelLowering.cpp
parent     2b475922e6169098606006a69d765160caa77848
Some enhancements for memcpy / memset inline expansion.
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
bytes. e.g. On x86, use two pairs of movups / movaps for 17 - 31 byte copies.
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
x86 and ARM.
3. When expanding memcpy from a constant string, do *not* replace the load with a
constant if it's not possible to materialize the integer immediate with a single
instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned load / stores more aggressively if the target hooks indicate
they are "fast".
5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 / vst1.8.
Also increase the threshold to something reasonable (8 for memset, 4 pairs
for memcpy).
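Item 1 above is the classic overlapping-tail trick: instead of finishing a 17-31 byte copy with a chain of 8/4/2/1-byte moves, issue a second full-width move that overlaps the first. A minimal scalar sketch in C++ (the helper name is ours; the commit itself implements this at the SelectionDAG level, where the two 16-byte moves become movups / movaps):

```cpp
#include <cstddef>
#include <cstring>

// Copy n bytes (17 <= n <= 32) with two 16-byte moves; the second move
// starts at offset n - 16 and overlaps the first, covering the tail.
static void copy_17_to_32(void *dst, const void *src, std::size_t n) {
    unsigned char head[16], tail[16];
    const unsigned char *s = static_cast<const unsigned char *>(src);
    unsigned char *d = static_cast<unsigned char *>(dst);
    std::memcpy(head, s, 16);           // bytes [0, 16)
    std::memcpy(tail, s + n - 16, 16);  // bytes [n-16, n), overlapping the first 16
    std::memcpy(d, head, 16);
    std::memcpy(d + n - 16, tail, 16);
}
```

Buffering both halves before storing also keeps the helper correct if the source and destination ranges overlap.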
This significantly improves Dhrystone, up to 50% on ARM iOS devices.
rdar://12760078
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169791 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'lib/Target/X86/X86ISelLowering.cpp')
 lib/Target/X86/X86ISelLowering.cpp | 7 +++++++
 1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/lib/Target/X86/X86ISelLowering.cpp b/lib/Target/X86/X86ISelLowering.cpp
index 84e5677..90bee41 100644
--- a/lib/Target/X86/X86ISelLowering.cpp
+++ b/lib/Target/X86/X86ISelLowering.cpp
@@ -1412,6 +1412,13 @@ X86TargetLowering::getOptimalMemOpType(uint64_t Size,
   return MVT::i32;
 }
 
+bool
+X86TargetLowering::allowsUnalignedMemoryAccesses(EVT VT, bool *Fast) const {
+  if (Fast)
+    *Fast = Subtarget->isUnalignedMemAccessFast();
+  return true;
+}
+
 /// getJumpTableEncoding - Return the entry encoding for a jump table in the
 /// current function. The returned value is a member of the
 /// MachineJumpTableInfo::JTEntryKind enum.
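The hook added in this hunk tells the expansion logic that the subtarget accepts unaligned accesses, and via *Fast whether they are cheap, so plain unaligned loads / stores can be emitted. Outside the backend, the portable idiom for such an access is a small memcpy, which compilers fold to a single unaligned load on targets where it is legal — a sketch (the helper name is hypothetical, not from the commit):

```cpp
#include <cstdint>
#include <cstring>

// Portable unaligned 32-bit load: the fixed-size memcpy is folded to a
// single plain load on targets where unaligned access is legal and fast.
static std::uint32_t load_u32_unaligned(const void *p) {
    std::uint32_t v;
    std::memcpy(&v, p, sizeof v);
    return v;
}
```

Using memcpy instead of casting and dereferencing avoids undefined behavior from misaligned pointer access while generating identical code on x86.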