author     Evan Cheng <evan.cheng@apple.com>  2010-12-05 22:04:16 +0000
committer  Evan Cheng <evan.cheng@apple.com>  2010-12-05 22:04:16 +0000
commit     48575f6ea7d5cd21ab29ca370f58fcf9ca31400b (patch)
tree       fd7f84a4921afa7c4baac36c5772ae688f4f31da /lib/Target/ARM/ARMTargetMachine.cpp
parent     0a3fdd6e11cd351737b4451c05ec5d794e6855cf (diff)
Making use of VFP / NEON floating-point multiply-accumulate / subtract is
difficult on current ARM implementations for a few reasons.

1. Even though a single vmla has a latency one cycle shorter than a pair of
vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8) can
cause additional pipeline stalls. So it's frequently better to simply codegen
vmul + vadd.

2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction
to stall for 4 cycles. We need to schedule them apart.

3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
RAW vmla + vmla is very bad. But this isn't ideal either:

    vmul
    vadd
    vmla

Instead, we want to expand the second vmla:

    vmla
    vmul
    vadd

Even with the 4-cycle vmul stall, the second sequence is still 2 cycles faster.

Up to now, isel has simply avoided codegen'ing fp vmla / vmls. This works well
enough, but it isn't the optimal solution. This patch attempts to make it
possible to use vmla / vmls in cases where it is profitable:

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
compute both an fmul and an fmla (see the sketch below).
C. Add additional isel checks for vmla; avoid cases where a vmla feeds into
other fp instructions (except for the exceptional case in #3).
D. Add an ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc pass to expand vmla / vmls when they are likely
to trigger one of the special hazards.

Work in progress: only A and B are enabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120960 91177308-0d34-0410-b5e6-96231b3b80d8
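
As a rough illustration of item B, the single-use requirement can be written as
a small SelectionDAG check. This is only a sketch under assumed names:
isProfitableToFoldIntoMLA is hypothetical and is not the predicate this patch
actually adds to the ARM target (operand order is also simplified; the fmul
could just as well be operand 1 of the fadd).

    // Hedged sketch of check B, not the patch's actual isel predicate.
    // Fold (fadd (fmul a, b), c) into a vmla only when the fmul has
    // exactly one use; a second user would force the vmul to be emitted
    // anyway, leaving us computing both an fmul and an fmla.
    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    static bool isProfitableToFoldIntoMLA(SDValue Add) {
      if (Add.getOpcode() != ISD::FADD)
        return false;
      SDValue Mul = Add.getOperand(0);
      return Mul.getOpcode() == ISD::FMUL && Mul.hasOneUse();
    }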
Diffstat (limited to 'lib/Target/ARM/ARMTargetMachine.cpp')
-rw-r--r--  lib/Target/ARM/ARMTargetMachine.cpp  6
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/lib/Target/ARM/ARMTargetMachine.cpp b/lib/Target/ARM/ARMTargetMachine.cpp
index 1c05ab5..89047f4 100644
--- a/lib/Target/ARM/ARMTargetMachine.cpp
+++ b/lib/Target/ARM/ARMTargetMachine.cpp
@@ -16,11 +16,14 @@
 #include "ARM.h"
 #include "llvm/PassManager.h"
 #include "llvm/CodeGen/Passes.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Target/TargetRegistry.h"
 using namespace llvm;

+static cl::opt<bool> ExpandMLx("expand-fp-mlx", cl::init(false), cl::Hidden);
+
 static MCAsmInfo *createMCAsmInfo(const Target &T, StringRef TT) {
   Triple TheTriple(TT);
   switch (TheTriple.getOS()) {
@@ -146,6 +149,9 @@ bool ARMBaseTargetMachine::addPreRegAlloc(PassManagerBase &PM,
   // FIXME: temporarily disabling load / store optimization pass for Thumb1.
   if (OptLevel != CodeGenOpt::None && !Subtarget.isThumb1Only())
     PM.add(createARMLoadStoreOptimizationPass(true));
+  if (ExpandMLx &&
+      OptLevel != CodeGenOpt::None && Subtarget.hasVFP2())
+    PM.add(createMLxExpansionPass());
   return true;
 }
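
Since the cl::opt above is registered globally, the pass can be toggled from
llc for experimentation. An illustrative invocation follows; the triple, the
attribute string, and the file names are placeholders, not values taken from
this commit:

    llc -mtriple=armv7-apple-darwin -mattr=+vfp2 -expand-fp-mlx input.ll -o input.s

Because the option is cl::Hidden, it is listed by -help-hidden rather than
-help.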