path: root/lib/Transforms/IPO
Commit log, most recent first. Each entry shows [author, date, files changed, lines -deleted/+added], followed by the commit message and its git-svn-id.
* [Nick Lewycky, 2012-07-25, 1 file changed, -1/+2]
  It's not safe to blindly remove invoke instructions. This happens when we encounter an invoke of an allocation function. This should fix the dragonegg bootstrap. Testcase to follow, later.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160757 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-07-24, 1 file changed, -1/+3]
  Don't delete one more instruction than we're allowed to. This should fix the Darwin bootstrap. Testcase exists but isn't fully reduced; I expect to commit the testcase this evening.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160693 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-07-24, 1 file changed, -8/+177]
  Teach globalopt to not nuke all stores to globals. Keep them around if they might be deliberate "one time" leaks, so that leak checkers can find them.
  This is a reapply of r160602 with the fix that this time I'm committing the code I thought I was committing last time; the I->eraseFromParent() goes *after* the break out of the loop.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160664 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-07-21, 1 file changed, -177/+8]
  Revert r160602.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160603 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-07-21, 1 file changed, -8/+177]
  Teach globalopt to play nice with leak checkers. This is a reapplication of r160529 that was subsequently reverted. The fix was to not call GV->eraseFromParent() right before the caller does the same. The existing testcases already caught this bug if run under valgrind.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160602 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-07-19, 1 file changed, -171/+8]
  Revert r160529 due to crashes.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160532 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-07-19, 1 file changed, -8/+171]
  Don't wipe out global variables that are probably storing pointers to heap memory. This makes clang play nice with leak checkers.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160529 91177308-0d34-0410-b5e6-96231b3b80d8
* [Benjamin Kramer, 2012-07-19, 1 file changed, -4/+1]
  Replace some explicit compare loops with std::equal. No functionality change.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160501 91177308-0d34-0410-b5e6-96231b3b80d8
* [Bill Wendling, 2012-07-19, 2 files changed, -11/+11]
  Remove tabs.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160477 91177308-0d34-0410-b5e6-96231b3b80d8
* [Duncan Sands, 2012-07-02, 1 file changed, -0/+2]
  GlobalOpt forgot to handle bitcast when analyzing globals. Found by inspection.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159546 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-06-29, 1 file changed, -6/+6]
  Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h. This was always part of the VMCore library out of necessity -- it deals entirely in the IR. The .cpp file in fact was already part of the VMCore library. This is just a mechanical move.
  I've tried to go through and re-apply the coding standard's preferred header sort, but at 40-ish files, I may have gotten some wrong. Please let me know if so.
  I'll be committing the corresponding updates to Clang and Polly, and Duncan has DragonEgg.
  Thanks to Bill and Eric for giving the green light for this bit of cleanup.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159421 91177308-0d34-0410-b5e6-96231b3b80d8
* [Bill Wendling, 2012-06-28, 1 file changed, -1/+1]
  Move lib/Analysis/DebugInfo.cpp to lib/VMCore/DebugInfo.cpp and include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h. The reasoning is that the DebugInfo module is simply an interface to the debug info MDNodes and has nothing to do with analysis.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159312 91177308-0d34-0410-b5e6-96231b3b80d8
* [Matt Beaumont-Gay, 2012-06-27, 1 file changed, -11/+0]
  Revert r159136 due to PR13124.
  Original commit message: If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it hidden. Being linkonce_odr guarantees that it is available in every dso that needs it. Being a constant/function with unnamed_addr guarantees that the copies don't have to be merged.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159272 91177308-0d34-0410-b5e6-96231b3b80d8
* [Rafael Espindola, 2012-06-25, 1 file changed, -0/+11]
  If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it hidden. Being linkonce_odr guarantees that it is available in every dso that needs it. Being a constant/function with unnamed_addr guarantees that the copies don't have to be merged.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159136 91177308-0d34-0410-b5e6-96231b3b80d8
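  For illustration, a minimal C++ sketch of the rule this commit describes, assuming the current GlobalValue helper names (this is not the actual r159136 patch):

    #include "llvm/IR/GlobalValue.h"

    // A linkonce_odr + unnamed_addr definition can safely be given hidden
    // visibility: every DSO that needs it has its own copy, and the copies
    // never have to compare equal by address.
    static void maybeMarkHidden(llvm::GlobalValue &GV) {
      if (GV.hasLinkOnceODRLinkage() && GV.hasGlobalUnnamedAddr())
        GV.setVisibility(llvm::GlobalValue::HiddenVisibility);
    }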
* [NAKAMURA Takumi, 2012-06-24, 1 file changed, -0/+2]
  llvm/lib: [CMake] Add explicit dependency to intrinsics_gen.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159112 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nick Lewycky, 2012-06-24, 1 file changed, -1/+1]
  Tab to spaces. No functionality change.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159104 91177308-0d34-0410-b5e6-96231b3b80d8
* [Hans Wennborg, 2012-06-23, 1 file changed, -7/+7]
  Extend the IL for selecting TLS models (PR9788). This allows the user/front-end to specify a model that is better than what LLVM would choose by default. For example, a variable might be declared as
    @x = thread_local(initialexec) global i32 42
  if it will not be used in a shared library that is dlopen'ed. If the specified model isn't supported by the target, or if LLVM can make a better choice, a different model may be used.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@159077 91177308-0d34-0410-b5e6-96231b3b80d8
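  A brief sketch of how a front-end might request the same model through the C++ API; the enum and setter are the ones that accompany this IR extension, and the variable name "x" is just the example from the message:

    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Module.h"

    // Ask for the initial-exec TLS model on a thread-local global, mirroring
    // @x = thread_local(initialexec) global i32 42 in the IR.
    static void requestInitialExecTLS(llvm::Module &M) {
      if (llvm::GlobalVariable *X = M.getGlobalVariable("x"))
        X->setThreadLocalMode(llvm::GlobalValue::InitialExecTLSModel);
    }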
* [Nuno Lopes, 2012-06-22, 1 file changed, -1/+1]
  Fix whitespace in my last commit. Sorry for the churn :S Enough for today; going to sleep.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158953 91177308-0d34-0410-b5e6-96231b3b80d8
* [Nuno Lopes, 2012-06-22, 1 file changed, -2/+4]
  Remove extractMallocCallFromBitCast, since it was tailor-made for its sole user. Update GlobalOpt accordingly.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158952 91177308-0d34-0410-b5e6-96231b3b80d8
* [Rafael Espindola, 2012-06-15, 1 file changed, -0/+3]
  Some optimizations done by globalopt are safe only for internal linkage, not linkonce linkage. For example, it is not valid to add unnamed_addr. This also fixes a crash in g++.dg/opt/static5.C.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158528 91177308-0d34-0410-b5e6-96231b3b80d8
* [Rafael Espindola, 2012-06-14, 2 files changed, -4/+4]
  Implement the isSafeToDiscardIfUnused predicate and use it in globalopt and globaldce. Globaldce was already removing linkonce globals, but globalopt was not.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158476 91177308-0d34-0410-b5e6-96231b3b80d8
* [Benjamin Kramer, 2012-06-02, 1 file changed, -1/+1]
  Fix typos found by http://github.com/lyda/misspell-check
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157885 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chris Lattner, 2012-05-28, 2 files changed, -11/+6]
  Switch AttrListPtr::get to take an ArrayRef, simplifying a lot of clients.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157556 91177308-0d34-0410-b5e6-96231b3b80d8
* [Patrik Hägglund, 2012-05-23, 1 file changed, -8/+11]
  Fix the inliner so that the optsize function attribute doesn't alter the inline threshold if the global inline threshold is lower (as for -Oz).
  Reviewed by Chandler Carruth and Bill Wendling.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157323 91177308-0d34-0410-b5e6-96231b3b80d8
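  A minimal sketch of the threshold rule described above (not the actual Inliner.cpp code): an optsize caller may lower the threshold but never raise it above the global (e.g. -Oz) value.

    // Pick the effective inline threshold for a call site.
    static unsigned chooseInlineThreshold(unsigned GlobalThreshold,
                                          unsigned OptSizeThreshold,
                                          bool CallerIsOptSize) {
      unsigned Threshold = GlobalThreshold;
      if (CallerIsOptSize && OptSizeThreshold < Threshold)
        Threshold = OptSizeThreshold; // only ever tighten the limit
      return Threshold;
    }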
* [Jay Foad, 2012-05-12, 1 file changed, -0/+4]
  Teach Function::hasAddressTaken that BlockAddress doesn't really take the address of a function.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156703 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-05-04, 2 files changed, -5/+7]
  Move the CodeExtractor utility to a dedicated header file / source file, and expose it as a utility class rather than as free function wrappers.
  The simple free-function interface works well for the bugpoint-specific pass's uses of code extraction, but in an upcoming patch for more advanced code extraction, they simply don't expose a rich enough interface. I need to expose various stages of the process of doing the code extraction and query information to decide whether or not to actually complete the extraction or give up.
  Rather than build up a new predicate model and pass that into these functions, just take the class that was actually implementing the functions and lift it up into a proper interface that can be used to perform code extraction. The interface is cleaned up and re-documented to work better in a header. It is also now set up to accept the blocks to be extracted in the constructor rather than in a method.
  In passing this essentially reverts my previous commit here exposing a block-level query for eligibility of extraction. That is no longer necessary with the more rich interface as clients can query the extraction object for eligibility directly. This will reduce the number of walks of the input basic block sequence by quite a bit which is useful if this enters the normal optimization pipeline.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@156163 91177308-0d34-0410-b5e6-96231b3b80d8
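  A rough sketch of how the class-based interface described above is intended to be used; the header path and exact method names here follow the present-day API and are an assumption, not a quote from this patch:

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/Transforms/Utils/CodeExtractor.h"

    // Blocks are handed to the constructor; eligibility can be queried before
    // committing to the extraction.
    static llvm::Function *tryExtract(llvm::ArrayRef<llvm::BasicBlock *> Blocks) {
      llvm::CodeExtractor CE(Blocks);
      if (!CE.isEligible())
        return nullptr;
      return CE.extractCodeRegion(); // the newly outlined function
    }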
* [Bill Wendling, 2012-04-16, 1 file changed, -0/+2]
  Add a Fixme.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154793 91177308-0d34-0410-b5e6-96231b3b80d8
* [Hal Finkel, 2012-04-13, 1 file changed, -2/+9]
  By default, use Early-CSE instead of GVN for vectorization cleanup.
  As has been suggested by Duncan and others, Early-CSE and GVN should do similar redundancy elimination, but Early-CSE is much less expensive. Most of my autovectorization benchmarks show a performance regression, but all of these are < 0.1%, and so I think that it is still worth using the less expensive pass.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154673 91177308-0d34-0410-b5e6-96231b3b80d8
* [Bill Wendling, 2012-04-13, 1 file changed, -0/+6]
  Code-gen may inject code into the IR before it emits the ASM. The linker obviously cannot know that this code is present, let alone used. So prevent the internalize pass from internalizing those global values which code-gen may insert.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154645 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-04-11, 1 file changed, -0/+6]
  Add two statistics to help track how we are computing the inline cost. Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154492 91177308-0d34-0410-b5e6-96231b3b80d8
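  For reference, pass statistics like these are declared with the STATISTIC macro from llvm/ADT/Statistic.h; the first counter name below comes from the commit message, the second name and both description strings are placeholders:

    #include "llvm/ADT/Statistic.h"

    #define DEBUG_TYPE "inline-cost"

    STATISTIC(NumCallerCallersAnalyzed, "Number of caller-callers analyzed");
    STATISTIC(NumCallsAnalyzed, "Number of call sites analyzed");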
* [Bill Wendling, 2012-04-02, 1 file changed, -4/+5]
  Add an option to turn off the expensive GVN load PRE part of GVN.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153902 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-04-01, 1 file changed, -1/+1]
  Belatedly address some code review from Chris. As a side note, I really dislike array_pod_sort... Do we really still care about any STL implementations that get this so wrong? Does libc++?
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153834 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-04-01, 1 file changed, -1/+1]
  Fix a pretty scary bug I introduced into the always inliner with a single missing character. Somehow, this had gone untested. I've added tests for returns-twice logic specifically with the always-inliner that would have caught this, and fixed the bug.
  Thanks to Matt for the careful review and spotting this!!! =D
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153832 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-03-31, 1 file changed, -20/+63]
  Give the always-inliner its own custom filter. It shouldn't have to pay the very high overhead of the complex inline cost analysis when all it wants to do is detect three patterns which must not be inlined. Comment the code, clean it up, and leave some hints about possible performance improvements if this ever shows up on a profile.
  Moving this off of the (now more expensive) inline cost analysis is particularly important because we have to run this inliner even at -O0.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153814 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-03-31, 3 files changed, -26/+0]
  Remove a bunch of empty, dead, and no-op methods from all of these interfaces. These methods were used in the old inline cost system where there was a persistent cache that had to be updated, invalidated, and cleared. We're now doing more direct computations that don't require this intricate dance. Even if we resume some level of caching, it would almost certainly have a simpler and more narrow interface than this.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-03-31, 3 files changed, -38/+25]
  Initial commit for the rewrite of the inline cost analysis to operate on a per-callsite walk of the called function's instructions, in breadth-first order over the potentially reachable set of basic blocks.
  This is a major shift in how inline cost analysis works to improve the accuracy and rationality of inlining decisions. A brief outline of the algorithm this moves to:
  - Build a simplification mapping based on the callsite arguments to the function arguments.
  - Push the entry block onto a worklist of potentially-live basic blocks.
  - Pop the first block off of the *front* of the worklist (for breadth-first ordering) and walk its instructions using a custom InstVisitor.
  - For each instruction's operands, re-map them based on the simplification mappings available for the given callsite.
  - Compute any simplification possible of the instruction after re-mapping, and store that back into the simplification mapping.
  - Compute any bonuses, costs, or other impacts of the instruction on the cost metric.
  - When the terminator is reached, replace any conditional value in the terminator with any simplifications from the mapping we have, and add any successors which are not proven to be dead from these simplifications to the worklist.
  - Pop the next block off of the front of the worklist, and repeat.
  - As soon as the cost of inlining exceeds the threshold for the callsite, stop analyzing the function in order to bound cost.
  The primary goal of this algorithm is to perfectly handle dead code paths. We do not want any code in trivially dead code paths to impact inlining decisions. The previous metric was *extremely* flawed here, and would always subtract the average cost of two successors of a conditional branch when it was proven to become an unconditional branch at the callsite. There was no handling of wildly different costs between the two successors, which would cause inlining when the path actually taken was too large, and no inlining when the path actually taken was trivially simple. There was also no handling of the code *path*, only the immediate successors. These problems vanish completely now. See the added regression tests for the shiny new features -- we skip recursive function calls, SROA-killing instructions, and high cost complex CFG structures when dead at the callsite being analyzed.
  Switching to this algorithm required refactoring the inline cost interface to accept the actual threshold rather than simply returning a single cost. The resulting interface is pretty bad, and I'm planning to do lots of interface cleanup after this patch. Several other refactorings fell out of this, but I've tried to minimize them for this patch. =/ There is still more cleanup that can be done here. Please point out anything that you see in review.
  I've worked really hard to try to mirror at least the spirit of all of the previous heuristics in the new model. It's not clear that they are all correct any more, but I wanted to minimize the change in this single patch, it's already a bit ridiculous. One heuristic that is *not* yet mirrored is to allow inlining of functions with a dynamic alloca *if* the caller has a dynamic alloca. I will add this back, but I think the most reasonable way requires changes to the inliner itself rather than just the cost metric, and so I've deferred this for a subsequent patch. The test case is XFAIL-ed until then.
  As mentioned in the review mail, this seems to make Clang run about 1% to 2% faster in -O0, but makes its binary size grow by just under 4%. I've looked into the 4% growth, and it can be fixed, but requires changes to other parts of the inliner.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
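  A deliberately simplified sketch of the breadth-first, threshold-bounded walk outlined above; all types, names, and costs are placeholders, not LLVM's real InlineCost interfaces:

    #include <algorithm>
    #include <deque>
    #include <vector>

    struct BlockStub {
      std::vector<int> InstCosts;          // per-instruction cost after simplification
      std::vector<BlockStub *> LiveSuccs;  // successors not proven dead at this callsite
    };

    // Returns true if the accumulated cost stays under the callsite's threshold.
    static bool fitsUnderThreshold(BlockStub *Entry, int Threshold) {
      int Cost = 0;
      std::deque<BlockStub *> Worklist = {Entry};
      std::vector<BlockStub *> Visited;
      while (!Worklist.empty()) {
        BlockStub *BB = Worklist.front();  // pop from the *front*: breadth-first order
        Worklist.pop_front();
        if (std::find(Visited.begin(), Visited.end(), BB) != Visited.end())
          continue;
        Visited.push_back(BB);
        for (int C : BB->InstCosts) {
          Cost += C;
          if (Cost > Threshold)            // stop as soon as the threshold is exceeded
            return false;
        }
        for (BlockStub *Succ : BB->LiveSuccs)
          Worklist.push_back(Succ);        // only potentially-live successors
      }
      return true;
    }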
* [Benjamin Kramer, 2012-03-31, 1 file changed, -1/+0]
  Internalize: Remove reference to @llvm.noinline; it was replaced with the noinline attribute a long time ago.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153806 91177308-0d34-0410-b5e6-96231b3b80d8
* [Benjamin Kramer, 2012-03-28, 1 file changed, -0/+6]
  GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153576 91177308-0d34-0410-b5e6-96231b3b80d8
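  Sketch only (not the actual GlobalOpt change): once the global is known to keep its all-zero initializer, every load reached through the inbounds GEP can be replaced with the zero value of the loaded type.

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instructions.h"

    static void foldLoadsThroughZeroGEP(llvm::GetElementPtrInst *GEP) {
      // Collect first, then rewrite, so erasing doesn't invalidate iteration.
      llvm::SmallVector<llvm::LoadInst *, 8> Loads;
      for (llvm::User *U : GEP->users())
        if (auto *LI = llvm::dyn_cast<llvm::LoadInst>(U))
          Loads.push_back(LI);
      for (llvm::LoadInst *LI : Loads) {
        LI->replaceAllUsesWith(llvm::Constant::getNullValue(LI->getType()));
        LI->eraseFromParent();
      }
    }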
* [Chandler Carruth, 2012-03-27, 1 file changed, -1/+1]
  Make a seemingly tiny change to the inliner and fix the generated code size bloat. Unfortunately, I expect this to disable the majority of the benefit from r152737. I'm hopeful at least that it will fix PR12345.
  To explain this requires... quite a bit of backstory I'm afraid.
  TL;DR: The change in r152737 actually did The Wrong Thing for linkonce-odr functions. This change makes it do the right thing. The benefits we saw were simple luck, not any actual strategy. Benchmark numbers after a mini-blog-post so that I've written down my thoughts on why all of this works and doesn't work...
  To understand what's going on here, you have to understand how the "bottom-up" inliner actually works. There are two fundamental modes to the inliner:
  1) Standard fixed-cost bottom-up inlining. This is the mode we usually think about. It walks from the bottom of the CFG up to the top, looking at callsites, taking information about the callsite and the called function and computing the expected cost of inlining into that callsite. If the cost is under a fixed threshold, it inlines. It's a touch more complicated than that due to all the bonuses, weights, etc. Inlining the last callsite to an internal function gets higher weight, etc. But essentially, this is the mode of operation.
  2) Deferred bottom-up inlining (a term I just made up). This is the interesting mode for this patch and r152737. Initially, this works just like mode #1, but once we have the cost of inlining into the callsite, we don't just compare it with a fixed threshold. First, we check something else. Let's give some names to the entities at this point, or we'll end up hopelessly confused. We're considering inlining a function 'A' into its callsite within a function 'B'. We want to check whether 'B' has any callers, and whether it might be inlined into those callers. If so, we also check whether inlining 'A' into 'B' would block any of the opportunities for inlining 'B' into its callers. We take the sum of the costs of inlining 'B' into its callers where that inlining would be blocked by inlining 'A' into 'B', and if that cost is less than the cost of inlining 'A' into 'B', then we skip inlining 'A' into 'B'.
  Now, in order for #2 to make sense, we have to have some confidence that we will actually have the opportunity to inline 'B' into its callers when cheaper, *and* that we'll be able to revisit the decision and inline 'A' into 'B' if that ever becomes the correct tradeoff. This often isn't true for external functions -- we can see very few of their callers, and we won't be able to re-consider inlining 'A' into 'B' if 'B' is external when we finally see more callers of 'B'. There are two cases where we believe this to be true for C/C++ code: functions local to a translation unit, and functions with an inline definition in every translation unit which uses them. These are represented as internal linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for linkonce-odr in r152737.
  Unfortunately, when I did that, I also introduced a subtle bug. There was an implicit assumption that the last caller of the function within the TU was the last caller of the function in the program. We want to bonus the last caller of the function in the program by a huge amount for inlining because inlining that callsite has very little cost. Unfortunately, the last caller in the TU of a linkonce-odr function is *not* the last caller in the program, and so we don't want to apply this bonus. If we do, we can apply it to one callsite *per-TU*. Because of the way deferred inlining works, when it sees this bonus applied to one callsite in the TU for 'B', it decides that inlining 'B' is of the *utmost* importance just so we can get that final bonus. It then proceeds to essentially force deferred inlining regardless of the actual cost tradeoff. The result? PR12345: code bloat, code bloat, code bloat. Another result is getting *damn* lucky on a few benchmarks, and the over-inlining exposing critically important optimizations.
  I would very much like a list of benchmarks that regress after this change goes in, with bitcode before and after. This will help me greatly understand what opportunities the current cost analysis is missing.
  Initial benchmark numbers look very good. WebKit files that exhibited the worst of PR12345 went from growing to shrinking compared to Clang with r152737 reverted.
  - Bootstrapped Clang is 3% smaller with this change.
  - Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4% faster with this change.
  Please let me know about any other performance impact you see. Thanks to Nico for reporting and urging me to actually fix, Richard Smith, Duncan Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues today.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153506 91177308-0d34-0410-b5e6-96231b3b80d8
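  A toy restatement of the deferral rule described in mode #2 above; the names and numbers are invented for illustration, not taken from the inliner:

    #include <numeric>
    #include <vector>

    // Skip inlining A into B when the inlinings of B into its own callers that
    // this would block are collectively cheaper than inlining A into B now.
    static bool shouldDeferInliningAIntoB(
        int CostOfAIntoB, const std::vector<int> &BlockedCostsOfBIntoCallers) {
      int TotalBlocked = std::accumulate(BlockedCostsOfBIntoCallers.begin(),
                                         BlockedCostsOfBIntoCallers.end(), 0);
      return TotalBlocked < CostOfAIntoB;
    }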
* [Chandler Carruth, 2012-03-25, 1 file changed, -36/+1]
  Move the instruction simplification of callsite arguments in the inliner to instead rely on much more generic and powerful instruction simplification in the function cloner (and thus inliner).
  This teaches the pruning function cloner to use instsimplify rather than just the constant folder to fold values during cloning. This can simplify a large number of things that constant folding alone cannot begin to touch. For example, it will realize that 'or' and 'and' instructions with certain constant operands actually become constants regardless of what their other operand is. It also can thread back through the caller to perform simplifications that are only possible by looking up a few levels. In particular, GEPs and pointer testing tend to fold much more heavily with this change.
  This should (in some cases) have a positive impact on compile times with optimizations on because the inliner itself will simply avoid cloning a great deal of code. It already attempted to prune proven-dead code, but now it will use the stronger simplifications to prove more code dead.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153403 91177308-0d34-0410-b5e6-96231b3b80d8
* [Kostya Serebryany, 2012-03-23, 1 file changed, -0/+1]
  Add EP_OptimizerLast extension point.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153353 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-03-16, 1 file changed, -45/+0]
  Rip out support for 'llvm.noinline'. This thing has a strange history...
  It was added in 2007 as the first cut at supporting no-inline attributes, but we didn't have function attributes of any form at the time. However, it was added without any mention in the LangRef or other documentation. Later on, in 2008, Devang added function notes for 'inline=never' and then turned them into proper function attributes. From that point onward, as far as I can tell, the world moved on, and no one has touched 'llvm.noinline' in any meaningful way since.
  Its time has now come. We have had better mechanisms for doing this for a long time, all the frontends I'm aware of use them, and this is just holding back progress. Given that it was never a documented feature of the IR, I've provided no auto-upgrade support. If people know of real, in-the-wild bitcode that relies on this, yell at me and I'll add it, but I *seriously* doubt anyone cares.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152904 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-03-16, 3 files changed, -34/+50]
  Start removing the use of an ad-hoc 'never inline' set and instead directly query the function information which this set was representing. This simplifies the interface of the inline cost analysis, and makes the always-inline pass significantly more efficient.
  Previously, always-inline would first make a single set of every function in the module *except* those marked with the always-inline attribute. It would then query this set at every call site to see if the function was a member of the set, and if so, refuse to inline it. This is quite wasteful. Instead, simply check the function attribute directly when looking at the callsite.
  The normal inliner also had similar redundancy. It added every function in the module with the noinline attribute to its set to ignore, even though inside the cost analysis function we *already tested* the noinline attribute and produced the same result. The only tricky part of removing this is that we have to be able to correctly remove only the functions inlined by the always-inline pass when finalizing, which requires a bit of a hack. Still, much less of a hack than the set of all non-always-inline functions was.
  While I was touching this function, I switched a heavy-weight set to a vector with sort+unique. The algorithm already had a two-phase insert and removal pattern, we were just needlessly paying the uniquing cost on every insert. This probably speeds up some compiles by a small amount (-O0 compiles with lots of always-inline, so potentially heavy libc++ users), but I've not tried to measure it.
  I believe there is no functional change here, but yell if you spot one. None are intended.
  Finally, the direction this is going in is to greatly simplify the inline cost query interface so that we can replace its implementation with a much more clever one. Along the way, all the APIs get simplified, so it seems incrementally good.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
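  The "vector with sort+unique" idiom mentioned above, shown generically; the element type and helper name are placeholders:

    #include <algorithm>
    #include <vector>

    // Two-phase pattern: insert freely, then deduplicate once at the end,
    // instead of paying a per-insert uniquing cost in a set.
    template <typename T> static void sortAndUnique(std::vector<T> &V) {
      std::sort(V.begin(), V.end());
      V.erase(std::unique(V.begin(), V.end()), V.end());
    }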
* [Chandler Carruth, 2012-03-14, 1 file changed, -7/+14]
  Change where we enable the heuristic that delays inlining into functions which are small enough to themselves be inlined. Delaying in this manner can be harmful if the function is ineligible for inlining in some (or many) contexts as it pessimizes the code of the function itself in the event that inlining does not eventually happen.
  Previously the check was written to only do this delaying of inlining for static functions in the hope that they could be entirely deleted and in the knowledge that all callers of static functions will have the opportunity to inline if it is in fact profitable. However, with C++ we get two other important sources of functions where the definition is always available for inlining: inline functions and templated functions. This patch generalizes the inliner to allow linkonce-ODR (the linkage such C++ routines receive) to also qualify for this delay-based inlining.
  Benchmarking across a range of large real-world applications shows roughly 2% size increase across the board, but an average speedup of about 0.5%. Some benchmarks improved over 2%, and the 'clang' binary itself (when bootstrapped with this feature) shows a 1% -O0 performance improvement when run over all Sema, Lex, and Parse source code smashed into a single file. A clean re-build of Clang+LLVM with a bootstrapped Clang shows approximately 2% improvement, but that measurement is often noisy.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152737 91177308-0d34-0410-b5e6-96231b3b80d8
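  A small sketch of the generalized eligibility check implied above: the delay only pays off when every caller of the function can still see its definition. The helper name is an assumption, not the actual patch:

    #include "llvm/IR/Function.h"

    // Previously only TU-local (static) functions qualified; linkonce-ODR
    // definitions (C++ inline and template functions) now qualify as well.
    static bool definitionVisibleToAllCallers(const llvm::Function &F) {
      return F.hasLocalLinkage() || F.hasLinkOnceODRLinkage();
    }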
* [Dan Gohman, 2012-03-13, 1 file changed, -5/+6]
  Teach globalopt how to evaluate an invoke with a non-void return type.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152634 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chandler Carruth, 2012-03-12, 1 file changed, -1/+35]
  When inlining a function and adding its inner call sites to the candidate set for subsequent inlining, try to simplify the arguments to the inner call site now that inlining has been performed.
  The goal here is to propagate and fold constants through deeply nested call chains. Without doing this, we lose the inliner bonus that should be applied because the arguments don't match the exact pattern the cost estimator uses.
  Reviewed on IRC by Benjamin Kramer.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152556 91177308-0d34-0410-b5e6-96231b3b80d8
* [Stepan Dyatkovskiy, 2012-03-08, 1 file changed, -2/+1]
  Taken into account Duncan's comments for r149481 dated by 2nd Feb 2012:
  http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html
  Implemented CaseIterator and it solves almost all described issues: we don't need to mix operand/case/successor indexing anymore. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*". ConstCaseIt is just a read-only iterator. CaseIt is a read-write iterator; it allows changing the case successor and case value. Using the iterator allows the resolveXXXX methods to be removed entirely. All indexing conversions are done automatically inside the iterator's getters.
  The main way of using the iterator looks like this:
    SwitchInst *SI = ...; // initialize it somehow
    for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
      BasicBlock *BB = i.getCaseSuccessor();
      ConstantInt *V = i.getCaseValue();
      // Do something.
    }
  If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method. If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method. There are also related changes in LLVM clients: klee and clang.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152297 91177308-0d34-0410-b5e6-96231b3b80d8
* [Benjamin Kramer, 2012-02-27, 1 file changed, -1/+1]
  Plug a memleak in GlobalOpt. Found by valgrind.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151525 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chad Rosier, 2012-02-25, 2 files changed, -2/+3]
  Add comment.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151431 91177308-0d34-0410-b5e6-96231b3b80d8
* [Chad Rosier, 2012-02-25, 3 files changed, -8/+17]
  Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These are optimization hints, but at -O0 we're not optimizing. This becomes a problem when the alwaysinline attribute is abused.
  rdar://10921594
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151429 91177308-0d34-0410-b5e6-96231b3b80d8