Commit message log
rather than a bitfield, a great suggestion by Chris during code review.
There is still quite a bit of cruft in the interface, but that requires
sorting out some awkward uses of the cost inside the actual inliner.
No functional change intended here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153853 91177308-0d34-0410-b5e6-96231b3b80d8
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and narrower interface than this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.
This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:
- Build a simplification mapping based on the callsite arguments to the
function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
breadth-first ordering) and walk its instructions using a custom
InstVisitor.
- For each instruction's operands, re-map them based on the
simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
cost metric.
- When the terminator is reached, replace any conditional value in the
terminator with any simplifications from the mapping we have, and add
any successors which are not proven to be dead from these
simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
callsite, stop analyzing the function in order to bound cost.
The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.
Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.
Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.
I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.
As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
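
A minimal, self-contained sketch of the breadth-first worklist walk described
above. The types are hypothetical stand-ins rather than the LLVM classes the
real pass uses, and the callsite-specific simplification map is reduced to
comments; it only illustrates the front-popped worklist, the early bail-out
against the threshold, and the fact that successors proven dead are never
visited.

    #include <deque>
    #include <set>
    #include <vector>

    // Hypothetical stand-ins for the IR types.
    struct Instruction { int Cost = 1; };
    struct BasicBlock {
      std::vector<Instruction> Insts;
      // Successors still reachable once the terminator has been simplified
      // with the constants known at this callsite.
      std::vector<BasicBlock *> LiveSuccessors;
    };

    // Returns true if inlining this body stays under Threshold at the
    // callsite being analyzed.
    bool analyzeCall(BasicBlock *Entry, int Threshold) {
      int Cost = 0;
      std::set<BasicBlock *> Visited{Entry};
      std::deque<BasicBlock *> Worklist{Entry};  // pop from the front => BFS

      while (!Worklist.empty()) {
        BasicBlock *BB = Worklist.front();
        Worklist.pop_front();

        for (const Instruction &I : BB->Insts) {
          // Real pass: remap operands through the simplification map, try
          // to fold the instruction, and only charge for what survives.
          Cost += I.Cost;
          if (Cost > Threshold)
            return false;          // bail out early to bound analysis time
        }

        // Blocks proven dead by the simplified terminator are never pushed,
        // so trivially dead paths cannot influence the decision.
        for (BasicBlock *Succ : BB->LiveSuccessors)
          if (Visited.insert(Succ).second)
            Worklist.push_back(Succ);
      }
      return true;
    }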
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.
Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.
The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.
The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern; we were just needlessly paying the uniquing cost on every
insert.
This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.
I believe there is no functional change here, but yell if you spot one.
None are intended.
Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152903 91177308-0d34-0410-b5e6-96231b3b80d8
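
A minimal sketch of the vector-with-sort+unique pattern mentioned above,
assuming the two-phase insert/query usage the commit describes; the helper
names are illustrative, not the code that landed.

    #include <algorithm>
    #include <vector>

    // Phase one: push_back freely, paying no per-insert uniquing cost.
    // Call sortAndUnique once before switching to the query/removal phase.
    template <typename T>
    void sortAndUnique(std::vector<T> &V) {
      std::sort(V.begin(), V.end());
      V.erase(std::unique(V.begin(), V.end()), V.end());
    }

    // Valid only after sortAndUnique() has run on V.
    template <typename T>
    bool contains(const std::vector<T> &V, const T &X) {
      return std::binary_search(V.begin(), V.end(), X);
    }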
side of things. This is all dead code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152759 91177308-0d34-0410-b5e6-96231b3b80d8
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and a size, the inline cost analysis already
saw the constant size calculation; this change provides the same level of
information for begin/end pairs.
In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.
This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.
Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152752 91177308-0d34-0410-b5e6-96231b3b80d8
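
A small illustration, in plain C++ rather than at the IR level the commit
operates on, of why a known constant offset between a begin/end pointer pair
gives the analysis the same information as a constant size argument; the
function names are hypothetical.

    // With last = first + 4 visible at the callsite, the trip count below
    // becomes a compile-time constant after inlining, just as a constant
    // size argument would in the pointer+size idiom.
    static int sumRange(const int *first, const int *last) {
      int Sum = 0;
      for (const int *p = first; p != last; ++p)  // trip count: last - first
        Sum += *p;
      return Sum;
    }

    int caller() {
      int Buf[4] = {1, 2, 3, 4};
      // The two pointer arguments differ by a constant offset
      // (4 * sizeof(int)), so the callsite earns the corresponding bonus.
      return sumRange(Buf, Buf + 4);
    }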
analysis to be methods on the cost analysis's function info object
instead of the code metrics object. These really are just users of the
code metrics; they're building the information for the function's
analysis.
This is the first step of growing the amount of information we collect
about a function in order to cope with pair-wise simplifications due to
allocas.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152283 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@144537 91177308-0d34-0410-b5e6-96231b3b80d8
We want heuristics to be based on accurate data, but more importantly
we don't want llvm to behave randomly. A benign trunc inserted by an
upstream pass should not cause wild swings in optimization
level. See PR11034. It's a general problem with threshold-based
heuristics, but we can make it less bad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@140919 91177308-0d34-0410-b5e6-96231b3b80d8
Luis Felipe Strano Moraes!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129558 91177308-0d34-0410-b5e6-96231b3b80d8
a) Making it a per-call-site bonus for functions that we can move from
indirect to direct calls.
b) Reducing the bonus from 500 to 100 per call site.
c) Subtracting the size of the possible newly inlineable call from the
bonus, so that we only add a bonus when we can inline a small function to
devirtualize it.
Also changes the bonus from a positive that's subtracted to a negative
that's added.
Fixes the remainder of rdar://8546196 by reducing the object file size
after inlining by 84%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124916 91177308-0d34-0410-b5e6-96231b3b80d8
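
A rough sketch of the adjusted bonus arithmetic described above. The helper
name is hypothetical and the clamp to zero is an assumption; the commit only
states that the bonus is now a negative value that is added, is 100 points
per call site, and has the size of the newly inlineable callee subtracted
from it so that only small callees earn it.

    #include <algorithm>

    // Sign convention from the commit: a negative value added to the cost.
    constexpr int IndirectCallBonus = -100;

    // Adjustment applied at a callsite whose indirect call becomes a direct
    // call to a callee of roughly CalleeSize instructions.
    int devirtualizationAdjustment(int CalleeSize) {
      // Small callees (< 100) reduce the cost; large ones earn nothing.
      return std::min(0, IndirectCallBonus + CalleeSize);
    }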
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124641 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124312 91177308-0d34-0410-b5e6-96231b3b80d8
a few loops accordingly. Should be no functional change.
This is a step toward more accurate cost/benefit analysis of devirt/inlining
bonuses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124275 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124148 91177308-0d34-0410-b5e6-96231b3b80d8
create a given specialization of a function in PartialSpecialization. If
the total performance bonus across all callsites passing the same constant
exceeds the specialization cost, we create the specialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116158 91177308-0d34-0410-b5e6-96231b3b80d8
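
A minimal sketch of the decision rule described above; the struct and field
names are hypothetical, not the PartialSpecialization interfaces.

    #include <vector>

    struct SpecializationCandidate {
      int SpecializationCost;           // cost of emitting the specialized body
      std::vector<int> CallsiteBonuses; // gain at each callsite passing the
                                        // same constant argument
    };

    bool shouldSpecialize(const SpecializationCandidate &C) {
      int TotalBonus = 0;
      for (int Bonus : C.CallsiteBonuses)
        TotalBonus += Bonus;
      // Specialize only when the cumulative gain pays for the extra body.
      return TotalBonus > C.SpecializationCost;
    }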
performance metrics. Partial Specialization will apply the former to
function specializations, and the latter to all callsites that can use
a specialization, in order to decide whether to create a specialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116057 91177308-0d34-0410-b5e6-96231b3b80d8
and into CodeMetrics. They don't use any InlineCostAnalyzer state, and are
useful for other clients who don't necessarily want to use all of
InlineCostAnalyzer's logic, some of which is fairly inlining-specific.
No intended functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113499 91177308-0d34-0410-b5e6-96231b3b80d8
can be reused from PartialSpecializationCost
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@105725 91177308-0d34-0410-b5e6-96231b3b80d8
on RAUW of functions, this is a correctness issue instead of a mere memory
usage problem.
No testcase until the new MergeFunctions can land.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103653 91177308-0d34-0410-b5e6-96231b3b80d8
function as an explicit argument, for use when inlining function pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102841 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102049 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101664 91177308-0d34-0410-b5e6-96231b3b80d8
and don't cause any problems on Darwin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101584 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101528 91177308-0d34-0410-b5e6-96231b3b80d8
see http://google1.osuosl.org:8011/builders/dragonegg-x86_64-linux/builds/693
Original commit text:
Use a ValueMap not a std::map for the reason indicated
in the comment. This was causing nondeterministic changes
in inlining decisions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101525 91177308-0d34-0410-b5e6-96231b3b80d8
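
A declaration-level sketch of the change quoted above, using current LLVM
header locations (the paths differed in the 2010 tree). ValueMap tracks its
keys through deletion and RAUW, so a cached entry never dangles on a stale
Function pointer the way a std::map keyed on raw pointers can; the cached
value type and the variable name here are assumptions.

    #include "llvm/IR/Function.h"
    #include "llvm/IR/ValueMap.h"

    struct FunctionInfo { unsigned NumInsts = 0; };

    // Before: entries keyed on raw Function* can be confused with a recycled
    // pointer after the function is deleted or replaced, so later lookups --
    // and the inlining decisions based on them -- can vary from run to run.
    //   std::map<llvm::Function *, FunctionInfo> CachedFunctionInfo;

    // After: ValueMap's callback value handles erase or rekey an entry when
    // the keyed Function is deleted or RAUW'd, keeping the cache consistent.
    llvm::ValueMap<llvm::Function *, FunctionInfo> CachedFunctionInfo;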
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101520 91177308-0d34-0410-b5e6-96231b3b80d8
in the comment. This was causing nondeterministic changes
in inlining decisions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101503 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101463 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98403 91177308-0d34-0410-b5e6-96231b3b80d8
Use CodeMetrics.analyzeBasicBlock() to estimate BB size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98401 91177308-0d34-0410-b5e6-96231b3b80d8
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.
This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.
This is a more conservative version of r98089 that doesn't break the clang
test CodeGenCXX/temp-order.cpp. That test relies on rather extreme inlining
for constant folding.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98099 91177308-0d34-0410-b5e6-96231b3b80d8
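
A minimal sketch of caching the caller's cost info across inlining of its
callees, as described above; the types and the update hook are hypothetical
stand-ins for the real bookkeeping.

    #include <map>

    struct Function;
    struct FunctionCostInfo {
      int NumInsts = 0;
      bool Analyzed = false;
    };

    std::map<const Function *, FunctionCostInfo> CostCache;

    const FunctionCostInfo &
    getCostInfo(const Function *F,
                FunctionCostInfo (*analyze)(const Function *)) {
      FunctionCostInfo &Info = CostCache[F];
      if (!Info.Analyzed) {        // computed once, not on every inlining
        Info = analyze(F);
        Info.Analyzed = true;
      }
      return Info;
    }

    // When a callee is inlined into Caller, adjust the cached entry in place
    // (rough adjustment: the call is replaced by the callee's body) instead
    // of throwing it away and re-walking a caller with thousands of calls
    // (the 240s -> 0.5s case mentioned in the commit).
    void onInlined(const Function *Caller, int CalleeNumInsts) {
      CostCache[Caller].NumInsts += CalleeNumInsts - 1;
    }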
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98094 91177308-0d34-0410-b5e6-96231b3b80d8
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.
This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98089 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95453 91177308-0d34-0410-b5e6-96231b3b80d8
After running a batch of measurements, it is clear that the inliner metrics
need some adjustments:
Own argument bonus: 20 -> 5
Outgoing argument penalty: 0 -> 5
Alloca bonus: 10 -> 5
Constant instr bonus: 7 -> 5
Dead successor bonus: 40 -> 5*(avg instrs/block)
The new cost metrics are generally 25 points higher than before, so we may need
to move thresholds.
With this change, InlineConstants::CallPenalty becomes a political correction:
  if (!isa<IntrinsicInst>(II) && !callIsSmall(CS.getCalledFunction()))
    NumInsts += InlineConstants::CallPenalty + CS.arg_size();
The code size is accurately modelled by CS.arg_size(). CallPenalty is added
because calls tend to take a long time, so it may not be worth it to inline a
function with lots of calls.
All of the political corrections are in the InlineConstants namespace:
IndirectCallBonus, CallPenalty, LastCallToStaticBonus, ColdccPenalty,
NoreturnPenalty.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94615 91177308-0d34-0410-b5e6-96231b3b80d8
just the NumBlocks field.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84056 91177308-0d34-0410-b5e6-96231b3b80d8
named CodeMetrics. Move it to be a non-nested class. Rename RegionInfo
back to FunctionInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84013 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83998 91177308-0d34-0410-b5e6-96231b3b80d8