path: root/test/CodeGen/X86/unaligned-load.ll
Commit history (most recent first; each entry lists author, date, and files/lines changed)

* rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which is
  for pre-2.9 bitcode files. We keep x86 unaligned loads, movnt, crc32, and the
  target-independent prefetch change. As usual, updating the testsuite is a PITA.
  [Chris Lattner, 2011-06-18; 1 file changed, -9/+10]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133337 91177308-0d34-0410-b5e6-96231b3b80d8

* Fix a regression caused by r102515 where explicit alignment on globals is
  ignored. There was a test to catch this, but it was just blindly updated in
  a large change. This fixes another part of <rdar://problem/9275290>.
  [Cameron Zwarich, 2011-04-13; 1 file changed, -2/+2]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129466 91177308-0d34-0410-b5e6-96231b3b80d8

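  The rule at stake in these alignment commits is easy to state in IR. A
  minimal sketch (illustrative globals, not taken from the test file; written
  in current IR syntax rather than the 2010-era dialect):

    ; An explicit align attribute must be honored exactly; rounding up to the
    ; target's preferred alignment may only apply when the global carries no
    ; explicit alignment (and, after r102515, no section) specification.
    @explicit = global [31 x i8] zeroinitializer, align 8  ; must stay 8-byte aligned
    @implicit = global [31 x i8] zeroinitializer           ; may be rounded up, e.g. to 16
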
* Revert r122955. It seems using movups to lower memcpy can cause massive
  regressions (even on Nehalem) in edge cases. I also didn't see any real
  performance benefit.
  [Evan Cheng, 2011-01-07; 1 file changed, -9/+16]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8

* Use movups to lower memcpy and memset even if the subtarget doesn't mark
  unaligned memory ops as fast (as corei7 does). The theory is it's still
  faster than a pair of movq / a quad of movl. This will probably hurt older
  chips like P4 but should run faster on current and future Intel processors.
  rdar://8817010
  [Evan Cheng, 2011-01-06; 1 file changed, -16/+9]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8

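  A hedged illustration of the trade-off (hypothetical function name, current
  IR syntax): a fixed 16-byte copy that the backend can lower either way.

    ; With this change the backend favors a single unaligned vector pair,
    ; roughly:
    ;   movups (%rsi), %xmm0
    ;   movups %xmm0, (%rdi)
    ; instead of a pair of movq (x86-64) or a quad of movl (x86-32).
    define void @copy16(ptr %dst, ptr %src) nounwind {
    entry:
      call void @llvm.memcpy.p0.p0.i64(ptr %dst, ptr %src, i64 16, i1 false)
      ret void
    }

    declare void @llvm.memcpy.p0.p0.i64(ptr, ptr, i64, i1)
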
* Fix an inconsistency in the x86 backend that led it to reject "calll foo" on
  x86-32: 32-bit calls were named "call", not "calll". 64-bit calls were
  correctly named "callq", so this only impacted x86-32. This fixes
  rdar://8456370 - llvm-mc rejects 'calll'. This also exposes that mingw/64 is
  generating a 32-bit call instead of a 64-bit call; I will file a bugzilla.
  [Chris Lattner, 2010-09-22; 1 file changed, -1/+1]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114534 91177308-0d34-0410-b5e6-96231b3b80d8

* Rework global alignment computation again. Now we do round up alignment of
  globals to the preferred alignment, but only when there is no section
  specified on the global (by far the common case).
  [Chris Lattner, 2010-04-28; 1 file changed, -2/+2]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102515 91177308-0d34-0410-b5e6-96231b3b80d8

* fix PR6921 a different way. Instead of increasing the alignment of globals
  with a specified alignment, we fix common variables to obey their alignment.
  Add a comment explaining why this behavior is important.
  [Chris Lattner, 2010-04-26; 1 file changed, -0/+1]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102365 91177308-0d34-0410-b5e6-96231b3b80d8

* Revert r102300/102301, which seriously broke objc apps.
  [Chris Lattner, 2010-04-26; 1 file changed, -3/+2]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102359 91177308-0d34-0410-b5e6-96231b3b80d8

* Fix PR6921: globals were not getting correctly rounded up to their preferred
  alignment unless they were common or some other special case.
  [Chris Lattner, 2010-04-25; 1 file changed, -2/+3]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102300 91177308-0d34-0410-b5e6-96231b3b80d8

* Avoid using f64 to lower memcpy from a constant string. It's cheaper to use
  i32 stores of immediates.
  [Evan Cheng, 2010-04-08; 1 file changed, -3/+1]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100751 91177308-0d34-0410-b5e6-96231b3b80d8

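  A sketch of the contrast (string and names made up; current IR syntax):
  copying a short constant string can be materialized as immediate integer
  stores rather than going through an XMM register.

    @.str = private constant [9 x i8] c"abcdabcd\00"

    ; Cheaper as immediate stores, e.g.
    ;   movl $1684234849, (%rdi)   ; "abcd" as a little-endian i32
    ; than as an f64 load/store round-trip through an XMM register.
    define void @copy(ptr %dst) nounwind {
    entry:
      call void @llvm.memcpy.p0.p0.i64(ptr %dst, ptr @.str, i64 9, i1 false)
      ret void
    }

    declare void @llvm.memcpy.p0.p0.i64(ptr, ptr, i64, i1)
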
* In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
  [Evan Cheng, 2010-04-01; 1 file changed, -9/+8]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100137 91177308-0d34-0410-b5e6-96231b3b80d8

* Fix sdisel memcpy, memset, memmove lowering:
  1. Makes it possible to lower with floating point loads and stores (see the
     sketch after this entry).
  2. Avoid unaligned loads / stores unless it's fast.
  3. Fix some memcpy lowering logic bug related to when to optimize a load
     from a constant string into a constant.
  4. Adjust x86 memcpy lowering threshold to make it more sane.
  5. Fix x86 target hook so it uses vector and floating point memory ops more
     effectively.
  rdar://7774704
  [Evan Cheng, 2010-04-01; 1 file changed, -7/+18]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8

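  For point 1, a sketch of the case it enables (hypothetical function name,
  current IR syntax): on a 32-bit x86 target with SSE2, this 8-byte copy can
  be lowered with a single f64-width load/store instead of two 32-bit integer
  move pairs.

    define void @copy8(ptr %dst, ptr %src) nounwind {
    entry:
      ; lowerable as one movsd load plus one movsd store through an XMM register
      call void @llvm.memcpy.p0.p0.i32(ptr %dst, ptr %src, i32 8, i1 false)
      ret void
    }

    declare void @llvm.memcpy.p0.p0.i32(ptr, ptr, i32, i1)
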
* make this less constrained; we want blank lines between globals.
  [Chris Lattner, 2010-01-22; 1 file changed, -1/+1]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94201 91177308-0d34-0410-b5e6-96231b3b80d8

* don't let asm-verbose break the check-next lines in these tests.
  [Chris Lattner, 2010-01-19; 1 file changed, -1/+1]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93869 91177308-0d34-0410-b5e6-96231b3b80d8

* Remove unnecessary check.
  [Bill Wendling, 2009-12-02; 1 file changed, -1/+0]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90352 91177308-0d34-0410-b5e6-96231b3b80d8

* Test from Dhrystone to make sure that we're not emitting an aligned load for
  a string that's aligned to 8 bytes instead of 16 bytes.
  [Bill Wendling, 2009-11-19; 1 file changed, -0/+28]
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89295 91177308-0d34-0410-b5e6-96231b3b80d8

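  A reconstruction of the shape of that test (the string matches the commit's
  description; this is not the verbatim file contents, and it uses current IR
  syntax):

    ; @.str is only 8-byte aligned, so the lowered copy must not use an aligned
    ; 16-byte load such as movaps; movups or narrower ops are required.
    @.str = internal constant [31 x i8] c"DHRYSTONE PROGRAM, SOME STRING\00", align 8

    define void @t(ptr %dst) nounwind {
    entry:
      call void @llvm.memcpy.p0.p0.i64(ptr %dst, ptr @.str, i64 31, i1 false)
      ret void
    }

    declare void @llvm.memcpy.p0.p0.i64(ptr, ptr, i64, i1)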