path: root/test/CodeGen/X86/lsr-loop-exit-cond.ll
Commit message | Author | Date | Files | Lines
* Update aosp/master llvm for rebase to r233350 | Pirama Arumuga Nainar | 2015-04-09 | 1 | -42/+42
  Change-Id: I07d935f8793ee8ec6b7da003f6483046594bca49
* Enable MI Sched for x86. | Andrew Trick | 2013-10-15 | 1 | -3/+3
  This changes the SelectionDAG scheduling preference to source order. Soon, the SelectionDAG scheduler can be bypassed, saving a nice chunk of compile time.

  Performance differences that result from this change are often a consequence of register coalescing. The register coalescer is far from perfect. Bugs can be filed for deficiencies.

  On x86 SandyBridge/Haswell, the source order schedule is often preserved, particularly for small blocks.

  Register pressure is generally improved over the SD scheduler's ILP mode. However, we are still able to handle large blocks that require latency hiding, unlike the SD scheduler's BURR mode. The MI scheduler also attempts to discover the critical path in single-block loops and adjust heuristics accordingly.

  The MI scheduler relies on the new machine model. This is currently unimplemented for AVX, so we may not be generating the best code yet.

  Unit tests are updated so they don't depend on SD scheduling heuristics.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192750 91177308-0d34-0410-b5e6-96231b3b80d8
* Allocate local registers in order for optimal coloring. | Andrew Trick | 2013-07-25 | 1 | -2/+1
  Also avoid locals evicting locals just because they want a cheaper register.

  Problem: MI Sched knows exactly how many registers we have and assumes they can be colored. In cases where we have large blocks, usually from unrolled loops, greedy coloring fails. This is a source of "regressions" from the MI Scheduler on x86.

  I noticed this issue on x86 where we have long chains of two-address defs in the same live range. It's easy to see this in matrix multiplication benchmarks like IRSmk and even the unit test misched-matmul.ll.

  A fundamental difference between the LLVM register allocator and conventional graph coloring is that in our model a live range can't discover its neighbors, it can only verify its neighbors. That's why we initially went for greedy coloring and added eviction to deal with the hard cases. However, for singly defined and two-address live ranges, we can optimally color without visiting neighbors simply by processing the live ranges in instruction order.

  Other beneficial side effects: It is much easier to understand and debug regalloc for large blocks when the live ranges are allocated in order. Yes, global allocation is still very confusing, but it's nice to be able to comprehend what happened locally. Heuristics could be added to bias register assignment based on instruction locality (think late register pairing, banks...).

  Intuitively this will make some test cases that are on the threshold of register pressure more stable.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187139 91177308-0d34-0410-b5e6-96231b3b80d8
* Mass update to CodeGen tests to use CHECK-LABEL for labels corresponding to function definitions, for more informative error messages. | Stephen Lin | 2013-07-14 | 1 | -4/+4
  No functionality change and all updated tests passed locally.

  This update was done with the following bash script:

    find test/CodeGen -name "*.ll" | \
    while read NAME; do
      echo "$NAME"
      if ! grep -q "^; *RUN: *llc.*debug" $NAME; then
        TEMP=`mktemp -t temp`
        cp $NAME $TEMP
        sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
        while read FUNC; do
          sed -i '' "s/;\(.*\)\([A-Za-z0-9_-]*\):\( *\)$FUNC: *\$/;\1\2-LABEL:\3$FUNC:/g" $TEMP
        done
        sed -i '' "s/;\(.*\)-LABEL-LABEL:/;\1-LABEL:/" $TEMP
        sed -i '' "s/;\(.*\)-NEXT-LABEL:/;\1-NEXT:/" $TEMP
        sed -i '' "s/;\(.*\)-NOT-LABEL:/;\1-NOT:/" $TEMP
        sed -i '' "s/;\(.*\)-DAG-LABEL:/;\1-DAG:/" $TEMP
        mv $TEMP $NAME
      fi
    done

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186280 91177308-0d34-0410-b5e6-96231b3b80d8
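  As a small illustration of what the script changes (a sketch, not taken from this file; the label t: is only an assumed example), a plain CHECK on a function label becomes CHECK-LABEL, so FileCheck re-anchors at each function and a failure in one function can no longer be satisfied by output belonging to another:

    ; Before the update:
    ; CHECK: t:
    ; CHECK: decq

    ; After the update:
    ; CHECK-LABEL: t:
    ; CHECK: decq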
* Revert "Temporarily enable MI-Sched on X86."Andrew Trick2013-06-251-3/+4
| | | | | | This reverts commit 98a9b72e8c56dc13a2617de84503a3d78352789c. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184823 91177308-0d34-0410-b5e6-96231b3b80d8
* Temporarily enable MI-Sched on X86. | Andrew Trick | 2013-06-24 | 1 | -4/+3
  Sorry for the unit test churn. I'll try to make the change permanent next time.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184705 91177308-0d34-0410-b5e6-96231b3b80d8
* PR13578: Teach MachineCSE that instructions that use a constant register can be CSE'd safely. | Benjamin Kramer | 2012-08-11 | 1 | -2/+2
  This is common e.g. when doing rip-relative addressing on x86_64.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@161728 91177308-0d34-0410-b5e6-96231b3b80d8
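  For context, a minimal sketch of the kind of code the commit message alludes to (not the contents of this test; the names are invented): on x86-64 with PIC, each reference to @counter below materializes its address relative to %rip, and because %rip is a constant, never-redefined register, MachineCSE can safely treat that materialization as shareable across blocks.

    @counter = external global i32

    define i32 @read_twice(i1 %flag) nounwind {
    entry:
      ; address of @counter is formed rip-relatively here...
      %a = load i32, i32* @counter, align 4
      br i1 %flag, label %again, label %done

    again:
      ; ...and would be formed again here without CSE
      %b = load i32, i32* @counter, align 4
      %sum = add i32 %a, %b
      ret i32 %sum

    done:
      ret i32 %a
    }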
* Fix remaining lit tests which were failing when run on an Atom processor. | Preston Gurd | 2012-07-19 | 1 | -2/+15
  Patches by Tyler Nowicki, Andy Zhang, and Preston Gurd!

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160520 91177308-0d34-0410-b5e6-96231b3b80d8
* Don't break the IV update in TLI::SimplifySetCC(). | Jakob Stoklund Olesen | 2012-04-05 | 1 | -0/+42
  LSR always tries to make the ICmp in the loop latch use the incremented induction variable. This allows the induction variable to be kept in a single register.

  When the induction variable limit is equal to the stride, SimplifySetCC() would break LSR's hard work by transforming:

    (icmp (add iv, stride), stride) --> (cmp iv, 0)

  This forced us to use lea for the IV update, preventing the simpler incl+cmp.

  <rdar://problem/7643606>
  <rdar://problem/11184260>

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154119 91177308-0d34-0410-b5e6-96231b3b80d8
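  To make the shape concrete, here is a minimal sketch (an assumed example, not the loop in this test) of what LSR aims for: the latch compare reads the already-incremented induction variable, which on x86 lets a plain incl/decq feed the compare directly instead of requiring a separate lea to form the update.

    define void @count_up(i32* %p, i32 %n) nounwind {
    entry:
      br label %loop

    loop:
      %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
      %addr = getelementptr inbounds i32, i32* %p, i32 %iv
      store i32 %iv, i32* %addr, align 4
      %iv.next = add nuw nsw i32 %iv, 1
      ; exit test uses the incremented value %iv.next, not %iv
      %exitcond = icmp eq i32 %iv.next, %n
      br i1 %exitcond, label %exit, label %loop

    exit:
      ret void
    }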
* test/CodeGen/X86/lsr-loop-exit-cond.ll: Try to appease the linux and freebsd bots by specifying an explicit -mtriple=x86_64-darwin. | NAKAMURA Takumi | 2011-11-10 | 1 | -1/+1
  I guess it expects -relocation-model=pic.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144290 91177308-0d34-0410-b5e6-96231b3b80d8
* Use a bigger hammer to fix PR11314 by disabling the "forcing two-address instruction lower optimization" in the pre-RA scheduler. | Evan Cheng | 2011-11-10 | 1 | -0/+1
  The optimization, or rather the hack, was done before MI use-lists were available. Now we should be able to implement it in a better way, perhaps in the two-address pass, until an MI scheduler is available.

  Now that the scheduler has to backtrack to handle call sequences, adding artificial scheduling constraints is just not safe. Furthermore, the hack does not take all the other scheduling decisions into consideration, so it's just as likely to pessimize code.

  So I view disabling this optimization as a good thing regardless of PR11314.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144267 91177308-0d34-0410-b5e6-96231b3b80d8
* In the pre-RA scheduler, maintain cmp+br proximity. | Andrew Trick | 2011-04-14 | 1 | -1/+0
  This is done by pushing physical register definitions close to their use, which happens to handle flag definitions if they're not glued to the branch. This seems to be generally a good thing though, so I didn't need to add a target hook yet.

  The primary motivation is to generate code closer to what people expect and rule out missed opportunities from enabling macro-op fusion. As a side benefit, we get several 2-5% gains on x86 benchmarks.

  There is one regression: SingleSource/Benchmarks/Shootout/lists slows down by about 10%. But this is an independent scheduler bug that will be tracked separately. See rdar://problem/9283108.

  Incidentally, pre-RA scheduling is only half the solution. Fixing the later passes is tracked by:
  <rdar://problem/8932804> [pre-RA-sched] on x86, attempt to schedule CMP/TEST adjacent with condition jump

  Fixes: <rdar://problem/9262453> Scheduler unnecessary break of cmp/jump fusion

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129508 91177308-0d34-0410-b5e6-96231b3b80d8
* Turn on list-ilp scheduling by default on x86 and x86-64, fix up testcases accordingly. | Eric Christopher | 2011-03-08 | 1 | -0/+1
  Some are currently xfailed and will be filed as bugs to be fixed or understood.

  Performance results:
  - roughly neutral on SPEC
  - some micro benchmarks in the llvm suite are up between 100 and 150%, only a pair of regressions that are due to be investigated
  - john-the-ripper saw:
      10% improvement in traditional DES
      8% improvement in BSDI DES
      59% improvement in FreeBSD MD5
      67% improvement in OpenBSD Blowfish
      14% improvement in LM DES

  Small compile time impact.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127208 91177308-0d34-0410-b5e6-96231b3b80d8
* This test doesn't need the ssp attribute. | Dan Gohman | 2010-06-04 | 1 | -1/+1
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105440 91177308-0d34-0410-b5e6-96231b3b80d8
* Eliminate uses of %prcontext. | Daniel Dunbar | 2009-09-05 | 1 | -1/+4
  I'd appreciate it if someone else eyeballs my changes to make sure I captured the intent of the test.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81083 91177308-0d34-0410-b5e6-96231b3b80d8
* Teach LSR to optimize more loop exit compares, i.e. change them to use the postinc iv value. | Evan Cheng | 2009-05-11 | 1 | -0/+134
  Previously LSR would only optimize those which are in the loop latch block. However, if LSR can prove it is safe (and profitable), it's now possible to change those not in the latch blocks to use postinc values.

  Also, if the compare is the only use, LSR would place the iv increment instruction before the compare instead of in the latch.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71485 91177308-0d34-0410-b5e6-96231b3b80d8
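  As a rough illustration of the newly handled case (a sketch only, not the 134-line test added by this commit; all names are invented), the exit compare below sits in the loop header rather than the latch; with this change LSR may rewrite such a compare to use the post-incremented IV value when it can prove the transform safe and profitable.

    define i32 @sum_upto(i32* %p, i32 %n) nounwind {
    entry:
      br label %header

    header:
      %iv = phi i32 [ 0, %entry ], [ %iv.next, %latch ]
      %sum = phi i32 [ 0, %entry ], [ %sum.next, %latch ]
      ; exit test lives in the header, not in the latch
      %done = icmp eq i32 %iv, %n
      br i1 %done, label %exit, label %latch

    latch:
      %addr = getelementptr inbounds i32, i32* %p, i32 %iv
      %val = load i32, i32* %addr, align 4
      %sum.next = add i32 %sum, %val
      %iv.next = add i32 %iv, 1
      br label %header

    exit:
      ret i32 %sum
    }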
postinc iv value. Previously LSR would only optimize those which are in the loop latch block. However, if LSR can prove it is safe (and profitable), it's now possible to change those not in the latch blocks to use postinc values. Also, if the compare is the only use, LSR would place the iv increment instruction before the compare instead in the latch. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@71485 91177308-0d34-0410-b5e6-96231b3b80d8