author    | Quentin Colombet <qcolombet@apple.com> | 2013-10-11 18:01:14 +0000
committer | Quentin Colombet <qcolombet@apple.com> | 2013-10-11 18:01:14 +0000
commit    | c34693f6efc670b71e11f3479844c36d9696b535 (patch)
tree      | bbb17e341801d36e5109f9224e8aaed3d40c9baf /include/llvm
parent    | 563c18283926b18bbdb6d3ad6cf02594399e2baf (diff)
[DAGCombiner] Slice a big load into two loads when the elements are next to
each other in memory and the target has paired loads and performs post-isel
load combining.
E.g., this optimization will transform something like this:
a = load i64* addr
b = trunc i64 a to i32
c = lshr i64 a, 32
d = trunc i64 c to i32
into:
b = load i32* addr1
d = load i32* addr2
where addr1 = addr2 +/- sizeof(i32), provided the target supports paired loads
and performs post-isel load combining.
Targets should override TargetLowering::hasPairedLoad to provide this
information; the default implementation returns false.
<rdar://problem/14477220>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192471 91177308-0d34-0410-b5e6-96231b3b80d8
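For illustration only (not part of this commit), a backend that supports paired
32-bit loads could advertise that to the combiner by overriding the new hook
roughly as follows. MyTargetLowering and the 8-byte alignment requirement are
hypothetical; the only real API used is the hasPairedLoad hook added below.

  #include "llvm/IR/Type.h"
  #include "llvm/Target/TargetLowering.h"
  using namespace llvm;

  // Hypothetical override in a TargetLowering subclass (declared elsewhere in
  // the backend): report that two adjacent i32 loads can later be combined
  // into one paired load, provided the access is 8-byte aligned. The type
  // check and alignment value are illustrative, not taken from any real target.
  bool MyTargetLowering::hasPairedLoad(Type *LoadedType,
                                       unsigned &RequiredAlignment) const {
    RequiredAlignment = 0;
    if (!LoadedType->isIntegerTy(32))
      return false;
    RequiredAlignment = 8; // made-up constraint for this sketch
    return true;
  }

With such an override in place, the DAGCombiner may slice the i64 load from the
example above into two i32 loads, on the assumption that a later,
target-specific pass will merge them back into a single paired load.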
Diffstat (limited to 'include/llvm')
-rw-r--r-- | include/llvm/Target/TargetLowering.h | 29 |
1 file changed, 29 insertions(+), 0 deletions(-)
diff --git a/include/llvm/Target/TargetLowering.h b/include/llvm/Target/TargetLowering.h
index 0130e07..1c0ad63 100644
--- a/include/llvm/Target/TargetLowering.h
+++ b/include/llvm/Target/TargetLowering.h
@@ -1183,6 +1183,35 @@ public:
     return false;
   }
 
+  /// Return true if the target supplies and combines to a paired load
+  /// two loaded values of type LoadedType next to each other in memory.
+  /// RequiredAlignment gives the minimal alignment constraints that must be met to
+  /// be able to select this paired load.
+  ///
+  /// This information is *not* used to generate actual paired loads, but it is used
+  /// to generate a sequence of loads that is easier to combine into a paired load.
+  /// For instance, something like this:
+  /// a = load i64* addr
+  /// b = trunc i64 a to i32
+  /// c = lshr i64 a, 32
+  /// d = trunc i64 c to i32
+  /// will be optimized into:
+  /// b = load i32* addr1
+  /// d = load i32* addr2
+  /// Where addr1 = addr2 +/- sizeof(i32).
+  ///
+  /// In other words, unless the target performs a post-isel load combining, this
+  /// information should not be provided because it will generate more loads.
+  virtual bool hasPairedLoad(Type * /*LoadedType*/,
+                             unsigned & /*RequiredAligment*/) const {
+    return false;
+  }
+
+  virtual bool hasPairedLoad(EVT /*LoadedType*/,
+                             unsigned & /*RequiredAligment*/) const {
+    return false;
+  }
+
   /// Return true if zero-extending the specific node Val to type VT2 is free
   /// (either because it's implicitly zero-extended such as ARM ldrb / ldrh or
   /// because it's folded such as X86 zero-extending loads).
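A minimal sketch of how slicing logic could consult this hook before splitting
a wide load. This is not the actual DAGCombiner code; the helper name,
parameters, and the byte-based alignment comparison are invented for
illustration, and only the hasPairedLoad signature above is real.

  #include "llvm/IR/Type.h"
  #include "llvm/Target/TargetLowering.h"
  using namespace llvm;

  // Illustrative helper: only slice a wide load when the target advertises
  // paired loads for the slice type and the slices are aligned enough for the
  // paired load to be formed after instruction selection.
  static bool worthSlicingLoad(const TargetLowering &TLI, Type *SliceTy,
                               unsigned SliceAlign) {
    unsigned RequiredAlignment = 0;
    if (!TLI.hasPairedLoad(SliceTy, RequiredAlignment))
      return false; // No paired loads: slicing would only add memory traffic.
    return SliceAlign >= RequiredAlignment;
  }

The key design point is that the hook does not create paired loads itself; it
merely tells the combiner whether emitting two narrow loads is likely to be
folded back into one paired access later.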