author     JF Bastien <jfb@google.com>  2013-07-12 23:33:03 +0000
committer  JF Bastien <jfb@google.com>  2013-07-12 23:33:03 +0000
commit     1b6f5a29ab62fd3e763983f31200b4cc69fa752b (patch)
tree       96e466e9cfd5e6c32acba50732374fb26f590405
parent     bee07bddeaf30aba392c1abd2815cd07545ef2c0 (diff)
Fix ARM paired GPR COPY lowering
ARM paired GPR COPY was being lowered to two MOVr instructions without
the CC operand. This patch puts the CC back.

My test is a reduction of the case where I encountered the issue:
64-bit atomics use paired GPRs.

The issue only occurs with SelectionDAG; FastISel doesn't encounter it,
so the test doesn't bother invoking it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186226 91177308-0d34-0410-b5e6-96231b3b80d8
 lib/Target/ARM/ARMBaseInstrInfo.cpp |  3 +++
 test/CodeGen/ARM/copy-paired-reg.ll | 17 +++++++++++++++++
 2 files changed, 20 insertions(+), 0 deletions(-)
diff --git a/lib/Target/ARM/ARMBaseInstrInfo.cpp b/lib/Target/ARM/ARMBaseInstrInfo.cpp
index 5283d7b..d670178 100644
--- a/lib/Target/ARM/ARMBaseInstrInfo.cpp
+++ b/lib/Target/ARM/ARMBaseInstrInfo.cpp
@@ -745,6 +745,9 @@ void ARMBaseInstrInfo::copyPhysReg(MachineBasicBlock &MBB,
     if (Opc == ARM::VORRq)
       Mov.addReg(Src);
     Mov = AddDefaultPred(Mov);
+    // MOVr can set CC.
+    if (Opc == ARM::MOVr)
+      Mov = AddDefaultCC(Mov);
   }
   // Add implicit super-register defs and kills to the last instruction.
   Mov->addRegisterDefined(DestReg, TRI);
diff --git a/test/CodeGen/ARM/copy-paired-reg.ll b/test/CodeGen/ARM/copy-paired-reg.ll
new file mode 100644
index 0000000..17a4461
--- /dev/null
+++ b/test/CodeGen/ARM/copy-paired-reg.ll
@@ -0,0 +1,17 @@
+; RUN: llc < %s -mtriple=armv7-apple-ios -verify-machineinstrs
+; RUN: llc < %s -mtriple=armv7-linux-gnueabi -verify-machineinstrs
+
+define void @f() {
+  %a = alloca i8, i32 8, align 8
+  %b = alloca i8, i32 8, align 8
+
+  %c = bitcast i8* %a to i64*
+  %d = bitcast i8* %b to i64*
+
+  store atomic i64 0, i64* %c seq_cst, align 8
+  store atomic i64 0, i64* %d seq_cst, align 8
+
+  %e = load atomic i64* %d seq_cst, align 8
+
+  ret void
+}