path: root/test/CodeGen/X86/pr1505b.ll
* Update aosp/master llvm for rebase to r233350 (Pirama Arumuga Nainar, 2015-04-09, 1 file changed, -4/+4)
  Change-Id: I07d935f8793ee8ec6b7da003f6483046594bca49
* Enable MI Sched for x86. (Andrew Trick, 2013-10-15, 1 file changed, -2/+1)
  This changes the SelectionDAG scheduling preference to source order. Soon, the SelectionDAG scheduler can be bypassed saving a nice chunk of compile time.

  Performance differences that result from this change are often a consequence of register coalescing. The register coalescer is far from perfect. Bugs can be filed for deficiencies.

  On x86 SandyBridge/Haswell, the source order schedule is often preserved, particularly for small blocks. Register pressure is generally improved over the SD scheduler's ILP mode. However, we are still able to handle large blocks that require latency hiding, unlike the SD scheduler's BURR mode. MI scheduler also attempts to discover the critical path in single-block loops and adjust heuristics accordingly.

  The MI scheduler relies on the new machine model. This is currently unimplemented for AVX, so we may not be generating the best code yet.

  Unit tests are updated so they don't depend on SD scheduling heuristics.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192750 91177308-0d34-0410-b5e6-96231b3b80d8
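One common way tests are made independent of instruction ordering (an illustrative note, not a description of the exact edits in r192750; the opcodes below are placeholders) is to replace strictly ordered CHECK lines with FileCheck's CHECK-DAG directive:

    ; Order-sensitive: fails if the scheduler reorders these instructions.
    ; CHECK: movsd
    ; CHECK: addsd

    ; Order-insensitive: both must appear, in any relative order.
    ; CHECK-DAG: movsd
    ; CHECK-DAG: addsd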
* Revert "Temporarily enable MI-Sched on X86."Andrew Trick2013-06-251-1/+2
  This reverts commit 98a9b72e8c56dc13a2617de84503a3d78352789c.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184823 91177308-0d34-0410-b5e6-96231b3b80d8
* Temporarily enable MI-Sched on X86. (Andrew Trick, 2013-06-24, 1 file changed, -2/+1)
  Sorry for the unit test churn. I'll try to make the change permanently next time.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184705 91177308-0d34-0410-b5e6-96231b3b80d8
* Upgrade syntax of tests using volatile instructions to use 'load volatile' instead of 'volatile load', which is archaic. (Chris Lattner, 2011-11-27, 1 file changed, -2/+2)

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145171 91177308-0d34-0410-b5e6-96231b3b80d8
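For reference, the syntax change reads roughly as follows in a .ll test; the value and pointer types are illustrative, not taken from pr1505b.ll (LLVM IR of this era still used typed pointers with no separate result type on load):

    ; Archaic spelling, rejected by later parsers:
    %old = volatile load i32* %p
    ; Preferred spelling after this change:
    %new = load volatile i32* %p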
* FileCheckize a couple of tests. (Jakob Stoklund Olesen, 2011-06-28, 1 file changed, -2/+23)
  Also add a test for popping dead return values and avoid testing the spill precision.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133997 91177308-0d34-0410-b5e6-96231b3b80d8
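"FileCheckize" means replacing grep-based RUN lines with a FileCheck invocation and inline CHECK lines. A minimal sketch, with an assumed opcode, target flag, and function body rather than the real contents of pr1505b.ll:

    ; Before: count occurrences of an instruction with grep.
    ; RUN: llc < %s -march=x86 | grep fstpl | count 1

    ; After: match the generated assembly with FileCheck.
    ; RUN: llc < %s -march=x86 | FileCheck %s
    define double @f(double %x) {
    ; CHECK: ret
      ret double %x
    }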
* Change the x86 32-bit scheduler to register pressure and fix up the corresponding testcases back to the previous versions. (Eric Christopher, 2011-03-11, 1 file changed, -1/+0)

  Fixes some performance regressions only seen on 32-bit.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127441 91177308-0d34-0410-b5e6-96231b3b80d8
* Turn on list-ilp scheduling by default on x86 and x86-64, fix up testcases accordingly. (Eric Christopher, 2011-03-08, 1 file changed, -0/+1)

  Some are currently xfailed and will be filed as bugs to be fixed or understood.

  Performance results:
    roughly neutral on SPEC
    some micro benchmarks in the llvm suite are up between 100 and 150%, only a pair of regressions that are due to be investigated

  john-the-ripper saw:
    10% improvement in traditional DES
    8% improvement in BSDI DES
    59% improvement in FreeBSD MD5
    67% improvement in OpenBSD Blowfish
    14% improvement in LM DES

  Small compile time impact.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127208 91177308-0d34-0410-b5e6-96231b3b80d8
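For context, the SelectionDAG scheduler can also be requested explicitly when running llc, which is how results like the ones above are typically compared. The -pre-RA-sched flag and its list-ilp and list-burr values are real llc options of that era, but the RUN lines themselves are only a sketch:

    ; Use the target's default scheduler (list-ilp on x86/x86-64 after this change):
    ; RUN: llc < %s -march=x86 | FileCheck %s
    ; Force a specific SelectionDAG scheduler for comparison:
    ; RUN: llc < %s -march=x86 -pre-RA-sched=list-ilp | FileCheck %s
    ; RUN: llc < %s -march=x86 -pre-RA-sched=list-burr | FileCheck %s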
* Reapply coalescer fix for better cross-class coalescing. (Jakob Stoklund Olesen, 2010-02-11, 1 file changed, -2/+2)
  This time with fixed test cases.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95938 91177308-0d34-0410-b5e6-96231b3b80d8
* Eliminate more uses of llvm-as and llvm-dis. (Dan Gohman, 2009-09-08, 1 file changed, -2/+2)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81290 91177308-0d34-0410-b5e6-96231b3b80d8
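The mechanical change here is dropping the llvm-as stage from RUN lines, since llc can read textual IR directly; a sketch with assumed flags and grep pattern, not the exact lines from this test:

    ; Before: assemble to bitcode, then compile.
    ; RUN: llvm-as < %s | llc -march=x86 | grep fstpl
    ; After: feed the .ll file to llc directly.
    ; RUN: llc < %s -march=x86 | grep fstpl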
* reduce this testcase more (Chris Lattner, 2008-03-09, 1 file changed, -14/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48092 91177308-0d34-0410-b5e6-96231b3b80d8
* Significantly simplify and improve handling of FP function results on x86-32. (Chris Lattner, 2008-01-24, 1 file changed, -1/+1)
  This case returns the value in ST(0) and then has to convert it to an SSE register. This causes significant codegen ugliness in some cases. For example in the trivial fp-stack-direct-ret.ll testcase we used to generate:

    _bar:
            subl $28, %esp
            call L_foo$stub
            fstpl 16(%esp)
            movsd 16(%esp), %xmm0
            movsd %xmm0, 8(%esp)
            fldl 8(%esp)
            addl $28, %esp
            ret

  because we move the result of foo() into an XMM register, then have to move it back for the return of bar.

  Instead of hacking ever-more special cases into the call result lowering code we take a much simpler approach: on x86-32, fp return is modeled as always returning into an f80 register which is then truncated to f32 or f64 as needed. Similarly for a result, we model it as an extension to f80 + return.

  This exposes the truncate and extensions to the dag combiner, allowing target independent code to hack on them, eliminating them in this case. This gives us this code for the example above:

    _bar:
            subl $12, %esp
            call L_foo$stub
            addl $12, %esp
            ret

  The nasty aspect of this is that these conversions are not legal, but we want the second pass of dag combiner (post-legalize) to be able to hack on them. To handle this, we lie to legalize and say they are legal, then custom expand them on entry to the isel pass (PreprocessForFPConvert). This is gross, but less gross than the code it is replacing :)

  This also allows us to generate better code in several other cases. For example on fp-stack-ret-conv.ll, we now generate:

    _test:
            subl $12, %esp
            call L_foo$stub
            fstps 8(%esp)
            movl 16(%esp), %eax
            cvtss2sd 8(%esp), %xmm0
            movsd %xmm0, (%eax)
            addl $12, %esp
            ret

  where before we produced (incidentally, the old bad code is identical to what gcc produces):

    _test:
            subl $12, %esp
            call L_foo$stub
            fstpl (%esp)
            cvtsd2ss (%esp), %xmm0
            cvtss2sd %xmm0, %xmm0
            movl 16(%esp), %eax
            movsd %xmm0, (%eax)
            addl $12, %esp
            ret

  Note that we generate slightly worse code on pr1505b.ll due to a scheduling deficiency that is unrelated to this patch.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46307 91177308-0d34-0410-b5e6-96231b3b80d8
* take these with a pr # (Chris Lattner, 2008-01-24, 1 file changed, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46303 91177308-0d34-0410-b5e6-96231b3b80d8
* Codegen improvement has reduced one spill. (Evan Cheng, 2008-01-10, 1 file changed, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45814 91177308-0d34-0410-b5e6-96231b3b80d8
* Convert tests using "| wc -l | grep ..." to use the count script. (Dan Gohman, 2007-08-15, 1 file changed, -2/+2)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41097 91177308-0d34-0410-b5e6-96231b3b80d8
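A sketch of that conversion, with a placeholder grep pattern and count rather than the real contents of this test (RUN lines of this era still piped through llvm-as, per the later cleanup above):

    ; Before: count matches with wc and grep.
    ; RUN: llvm-as < %s | llc -march=x86 | grep fstpl | wc -l | grep 1
    ; After: the count script checks the number of matches directly.
    ; RUN: llvm-as < %s | llc -march=x86 | grep fstpl | count 1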
* New testcases for rev 37847 (PR's 1489 and 1505). (Dale Johannesen, 2007-07-03, 1 file changed, -0/+73)

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37848 91177308-0d34-0410-b5e6-96231b3b80d8