path: root/test/CodeGen
Commit message (Author, Date, Files changed, Lines -deleted/+added)
...
* [NVPTX] Fix bug in stack code generation caused by MC conversion (Justin Holewinski, 2013-08-06, 1 file, -0/+18)
  We do use a very small set of physical registers, so account for them in the virtual
  register encoding between MachineInstr and MC.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187799 91177308-0d34-0410-b5e6-96231b3b80d8
* [NVPTX] Start conversion to MC infrastructure (Justin Holewinski, 2013-08-06, 1 file, -0/+18)
  This change converts the NVPTX target to use the MC infrastructure instead of directly
  emitting MachineInstr instances. This brings the target more up-to-date with LLVM TOT,
  and should fix PR15175 and PR15958 (libNVPTXInstPrinter is empty) as a side-effect.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187798 91177308-0d34-0410-b5e6-96231b3b80d8
* ARM: implement allowTruncateForTailCall (Tim Northover, 2013-08-06, 1 file, -0/+111)
  Now that it's in place, it seems silly not to let ARM make use of the extra tail call
  opportunities.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187795 91177308-0d34-0410-b5e6-96231b3b80d8
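  For illustration (hypothetical, not one of the tests added here): the kind of IR this
  enables on ARM. The caller only needs the low 32 bits of the callee's i64 result, so the
  trunc no longer blocks the tail call:

    declare i64 @get_wide()

    define i32 @narrow_result() {
      %full = tail call i64 @get_wide()   ; result is wider than the caller's return type
      %low = trunc i64 %full to i32       ; harmless: only the low half is returned
      ret i32 %low
    }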
* Refactor isInTailCallPosition handling (Tim Northover, 2013-08-06, 3 files, -0/+157)
  This change came about primarily because of two issues in the existing code. Neither of:

    define i64 @test1(i64 %val) {
      %in = trunc i64 %val to i32
      tail call i32 @ret32(i32 returned %in)
      ret i64 %val
    }

    define i64 @test2(i64 %val) {
      tail call i32 @ret32(i32 returned undef)
      ret i32 42
    }

  should be tail calls, and the function sameNoopInput is responsible. The main problem is
  that it is completely symmetric in the "tail call" and "ret" value, but in reality
  different things are allowed on each side. For these cases:

  1. Any truncation should lead to a larger value being generated by "tail call" than
     needed by "ret".
  2. Undef should only be allowed as a source for ret, not as a result of the call.

  Along the way I noticed that a mismatch between what this function treats as a valid
  truncation and what the backends see can lead to invalid calls as well (see x86-32 test
  case).

  This patch refactors the code so that instead of being based primarily on values which it
  recurses into when necessary, it starts by inspecting the type and considers each
  fundamental slot that the backend will see in turn. For example, given a pathological
  function that returned {{}, {{}, i32, {}}, i32} we would consider each "real" i32 in
  turn, and ask if it passes through unchanged. This is much closer to what the backend
  sees as a result of ComputeValueVTs.

  Aside from the bug fixes, this eliminates the recursion that's going on and, I believe,
  makes the bulk of the code significantly easier to understand. The trade-off is the nasty
  iterators needed to find the real types inside a returned value.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187787 91177308-0d34-0410-b5e6-96231b3b80d8
* Factor FlattenCFG out from SimplifyCFG (Tom Stellard, 2013-08-06, 2 files, -0/+115)
  Patch by: Mei Ye
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187764 91177308-0d34-0410-b5e6-96231b3b80d8
* R600/SI: Add missing test for r187749 (Tom Stellard, 2013-08-05, 1 file, -0/+48)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187754 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Use BRCT and BRCTG to eliminate add-&-compare sequences (Richard Sandiford, 2013-08-05, 3 files, -1/+237)
  This patch just uses a peephole test for "add; compare; branch" sequences within a single
  block. The IR optimizers already convert loops to decrement-and-branch-on-nonzero form in
  some cases, so even this simplistic test triggers many times during a clang bootstrap and
  projects/test-suite run.

  It looks like there are still cases where we need to more strongly prefer branches on
  nonzero though. E.g. I saw a case where a loop that started out with a check for 0 ended
  up with a check for -1. I'll try to look at that sometime.

  I ended up adding the Reference class because MachineInstr::readsRegister() doesn't check
  for subregisters (by design, as far as I could tell).

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187723 91177308-0d34-0410-b5e6-96231b3b80d8
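  For illustration (hypothetical, not one of the new tests): a loop already in
  decrement-and-branch-on-nonzero form, i.e. exactly the add/compare/branch sequence the
  peephole can fuse into BRCT:

    declare void @use(i32)

    define void @countdown(i32 %n) {
    entry:                                       ; assumes %n > 0
      br label %loop
    loop:
      %i = phi i32 [ %n, %entry ], [ %next, %loop ]
      call void @use(i32 %i)
      %next = add i32 %i, -1                     ; add ...
      %nonzero = icmp ne i32 %next, 0            ; compare ...
      br i1 %nonzero, label %loop, label %exit   ; branch on nonzero
    exit:
      ret void
    }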
* [SystemZ] Use LOAD AND TEST to eliminate comparisons against zero (Richard Sandiford, 2013-08-05, 1 file, -0/+223)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187720 91177308-0d34-0410-b5e6-96231b3b80d8
* AVX-512 set: added mask operations, lowering BUILD_VECTOR for i1 vector types (Elena Demikhovsky, 2013-08-05, 1 file, -0/+58)
  Added intrinsics and tests.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187717 91177308-0d34-0410-b5e6-96231b3b80d8
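  For illustration (hypothetical, not the added test): a compare/select sequence whose
  <16 x i1> mask, and the logical operation on it, map naturally onto AVX-512 mask
  registers:

    define <16 x i32> @select_in_range(<16 x i32> %a, <16 x i32> %lo, <16 x i32> %hi,
                                       <16 x i32> %x, <16 x i32> %y) {
      %m1 = icmp sgt <16 x i32> %a, %lo
      %m2 = icmp slt <16 x i32> %a, %hi
      %m = and <16 x i1> %m1, %m2          ; mask operation on i1 vectors
      %res = select <16 x i1> %m, <16 x i32> %x, <16 x i32> %y
      ret <16 x i32> %res
    }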
* Add the saving of S2. This is needed for some of the floating point helper functions (Reed Kotler, 2013-08-04, 5 files, -16/+17)
  This can be optimized out later when the remaining parts of the helper function work is
  moved into the Mips16HardFloat pass. For now it forces us to use the 32 bit save/restore
  instructions instead of the 16 bit ones.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187712 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: Turn fp selects into mask operations (Benjamin Kramer, 2013-08-04, 3 files, -48/+290)
  double test(double a, double b, double c, double d) { return a<b ? c : d; }

  before:
    _test:
      ucomisd %xmm0, %xmm1
      ja      LBB0_2
      movaps  %xmm3, %xmm2
    LBB0_2:
      movaps  %xmm2, %xmm0

  after:
    _test:
      cmpltsd %xmm1, %xmm0
      andpd   %xmm0, %xmm2
      andnpd  %xmm3, %xmm0
      orpd    %xmm2, %xmm0

  Small speedup on Benchmarks/SmallPT.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187706 91177308-0d34-0410-b5e6-96231b3b80d8
* AVX-512 set: added VEXTRACTPS instruction (Elena Demikhovsky, 2013-08-04, 1 file, -1/+20)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187705 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: specify CPU on new test to fix atom buildbot (Tim Northover, 2013-08-04, 1 file, -1/+1)
  Apparently Atoms use lea for stack adjustment, which we weren't looking for.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187704 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: correct tail return address calculation (Tim Northover, 2013-08-04, 1 file, -0/+19)
  Due to the weird and wonderful usual arithmetic conversions, some calculations involving
  negative values were getting performed in uint32_t and then promoted to int64_t, which is
  really not a good idea.

  Patch by Katsuhiro Ueno.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187703 91177308-0d34-0410-b5e6-96231b3b80d8
* Clean up code for Mips16 large frame handling (Reed Kotler, 2013-08-04, 1 file, -12/+25)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187701 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix PPC64 64-bit GPR inline asm constraint matching (Hal Finkel, 2013-08-03, 1 file, -0/+65)
  Internally, the PowerPC backend names the 32-bit GPRs R[0-9]+, and names the 64-bit
  parent GPRs X[0-9]+. When matching inline assembly constraints with explicit register
  names, on PPC64 when an i64 MVT has been requested, we need to follow gcc's convention of
  using r[0-9]+ to refer to the 64-bit (parent) registers.

  At some point, we'll probably want to arrange things so that the generic code in
  TargetLowering uses the AsmName fields declared in *RegisterInfo.td in order to match
  these inline asm register constraints. If we do that, this change can be reverted.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187693 91177308-0d34-0410-b5e6-96231b3b80d8
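  For illustration (hypothetical, not the test added here): an explicit-register constraint
  on an i64 value, which after this change matches the gcc-style names r3/r4 even though
  the backend models the 64-bit registers as X3/X4 internally:

    define i64 @move_through_r4(i64 %v) {
      %r = call i64 asm "mr $0, $1", "={r4},{r3}"(i64 %v)
      ret i64 %r
    }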
* [mips] Expand vector truncating stores and extending loads (Akira Hatanaka, 2013-08-02, 1 file, -0/+11)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187667 91177308-0d34-0410-b5e6-96231b3b80d8
* Temporarily revert "Debug Info Finder|Verifier: handle DbgLoc attached to instructions." (Eric Christopher, 2013-08-02, 4 files, -8/+7)
  Reverted in an attempt to bring back some bots. This reverts commit r187609.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187638 91177308-0d34-0410-b5e6-96231b3b80d8
* Use function attributes to indicate that we don't want to realign the stack (Bill Wendling, 2013-08-01, 4 files, -25/+702)
  Function attributes are the future! So just query whether we want to realign the stack
  directly from the function instead of through a random target options structure.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187618 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix some issues with Mips16 floating point when certain intrinsics are present (Reed Kotler, 2013-08-01, 1 file, -0/+368)
  This is actually an LLVM bug in the way it generates signatures for these when soft float
  is enabled. For example, floor ends up having the signature of int64(int64). The
  signature part is not the same as where the actual parameter types are recorded, and
  those ARE of course int64(int64) when soft float is enabled. (Yes, Mips16 hard float uses
  soft float, but with different runtime routines, and then has to interoperate with Mips32
  using normal floating point.)

  This logic will eventually be moved to the Mips16HardFloat pass, so it's not worth
  sorting out these issues in LLVM since nobody but Mips16 cares about these signatures, as
  far as I know, and eventually even I won't either.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187613 91177308-0d34-0410-b5e6-96231b3b80d8
* Debug Info Finder|Verifier: handle DbgLoc attached to instructions (Manman Ren, 2013-08-01, 4 files, -7/+8)
  Also remove checking of llvm.dbg.sp since it is not used in generating dwarf.

  Current state of Finder: DebugInfoFinder tries to list all debug info MDNodes used in a
  module. To list debug info MDNodes used by an instruction, DebugInfoFinder provides
  processDeclare, processValue and processLocation to handle DbgDeclareInst, DbgValueInst
  and DbgLoc attached to instructions. processModule will go through all DICompileUnits in
  llvm.dbg.cu and list debug info MDNodes used by the CUs.

  TODO:
  1. Finder has a list of CUs, SPs, Types, Scopes and global variables. We need to add a
     list of variables that are used by DbgDeclareInst and DbgValueInst.
  2. MDString fields should be null or isa<MDString> and MDNode fields should be null or
     isa<MDNode>. We currently use empty string or int 0 to represent null.
  3. Go through Verify functions and make sure that they check field types.
  4. Clean up existing testing cases to remove llvm.dbg.sp and make sure each testing case
     has a llvm.dbg.cu.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187609 91177308-0d34-0410-b5e6-96231b3b80d8
* R600: Add 64-bit float load/store support (Tom Stellard, 2013-08-01, 15 files, -43/+161)
  * Added R600_Reg64 class
  * Added T#Index#.XY registers definition
  * Added v2i32 register reads from parameter and global space
  * Added f32 and i32 elements extraction from v2f32 and v2i32
  * Added v2i32 -> v2f32 conversions

  Tom Stellard:
  - Mark vec2 operations as expand. The addition of a vec2 register class made them all
    legal.

  Patch by: Dmitry Cherkassov
  Signed-off-by: Dmitry Cherkassov <dcherkassov@gmail.com>
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187582 91177308-0d34-0410-b5e6-96231b3b80d8
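  For illustration (hypothetical, written in the style of the R600 tests): a 64-bit global
  load and store exercising the new v2i32 support; addrspace(1) is the global address space
  here:

    define void @copy_v2i32(<2 x i32> addrspace(1)* %out, <2 x i32> addrspace(1)* %in) {
      %val = load <2 x i32> addrspace(1)* %in
      store <2 x i32> %val, <2 x i32> addrspace(1)* %out
      ret void
    }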
* R600: Use 64-bit alignment for 64-bit kernel arguments (Tom Stellard, 2013-08-01, 1 file, -0/+2)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187581 91177308-0d34-0410-b5e6-96231b3b80d8
* R600/SI: Custom lower i64 ZERO_EXTEND (Tom Stellard, 2013-08-01, 1 file, -0/+18)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187580 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Reuse CC results for integer comparisons with zero (Richard Sandiford, 2013-08-01, 2 files, -0/+691)
  This also fixes a bug in the predication of LR to LOCR: I'd forgotten that with these
  in-place instruction builds, the implicit operands need to be added manually. I think
  this was latent until now, but is tested by int-cmp-45.c. It also adds a CC valid mask
  to STOC, again tested by int-cmp-45.c.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187573 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Prefer comparisons with zero (Richard Sandiford, 2013-08-01, 5 files, -10/+54)
  Convert >= 1 to > 0, etc. Using comparison with zero isn't a win on its own, but it
  exposes more opportunities for CC reuse (the next patch).
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187571 91177308-0d34-0410-b5e6-96231b3b80d8
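  For illustration (hypothetical): the kind of rewrite this covers. A signed "x >= 1" test
  is equivalent to "x > 0", and the zero form is what the later CC-reuse work can fold
  away:

    define i32 @clamp_positive(i32 %a, i32 %b) {
      %ge1 = icmp sge i32 %a, 1        ; the backend now prefers the equivalent "sgt 0"
      %res = select i1 %ge1, i32 %a, i32 %b
      ret i32 %res
    }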
* AArch64: add initial NEON support (Tim Northover, 2013-08-01, 23 files, -1/+6098)
  Patch by Ana Pazos.

  - Completed implementation of instruction formats:
      AdvSIMD three same
      AdvSIMD modified immediate
      AdvSIMD scalar pairwise
  - Completed implementation of instruction classes (some of the instructions in these
    classes belong to yet unfinished instruction formats):
      Vector Arithmetic
      Vector Immediate
      Vector Pairwise Arithmetic
  - Initial implementation of instruction formats:
      AdvSIMD scalar two-reg misc
      AdvSIMD scalar three same
  - Initial implementation of instruction class:
      Scalar Arithmetic
  - Initial clang changes to support arm v8 intrinsics.
    Note: no clang changes for scalar intrinsics function name mangling yet.
  - Comprehensive test cases for added instructions, to verify auto codegen, encoding,
    decoding, diagnosis, intrinsics.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187567 91177308-0d34-0410-b5e6-96231b3b80d8
* XCore target: Fix Vararg handling (Robert Lytton, 2013-08-01, 2 files, -17/+55)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187565 91177308-0d34-0410-b5e6-96231b3b80d8
* XCore target: Add byval handling (Robert Lytton, 2013-08-01, 1 file, -0/+58)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187563 91177308-0d34-0410-b5e6-96231b3b80d8
* XCore target: Fix emitArrayBound() calling OutStreamer.Emit*() multiple times when trying to print a single line (Robert Lytton, 2013-08-01, 1 file, -0/+4)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187562 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix some misc. issues with Mips16 fp stubs (Reed Kotler, 2013-08-01, 1 file, -48/+50)
  1) They should never be inlined.
  2) A naming inconsistency with gcc mips16.
  3) Stubs should not have the global attribute.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187555 91177308-0d34-0410-b5e6-96231b3b80d8
* Revert "R600: Non vector only instruction can be scheduled on trans unit"Tom Stellard2013-07-3125-185/+73
| | | | | | This reverts commit 98ce62780ea7185ba710868bf83c8077e8d7f6d6. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187526 91177308-0d34-0410-b5e6-96231b3b80d8
* R600: Avoid more than 4 literals in the same instruction group at scheduling (Vincent Lejeune, 2013-07-31, 1 file, -0/+68)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187515 91177308-0d34-0410-b5e6-96231b3b80d8
* R600: Non vector only instruction can be scheduled on trans unit (Vincent Lejeune, 2013-07-31, 25 files, -73/+185)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187514 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Implement isLegalAddressingMode() (Richard Sandiford, 2013-07-31, 1 file, -0/+25)
  The loop optimizers were assuming that scales > 1 were OK. I think this is actually a bug
  in TargetLoweringBase::isLegalAddressingMode(), since it seems to be trying to reject
  anything that isn't r+i or r+r, but it has no default case for scales other than 0, 1
  or 2. Implementing the hook for z means that z can no longer test any change there
  though.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187497 91177308-0d34-0410-b5e6-96231b3b80d8
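  For illustration (hypothetical): an access whose natural addressing mode is
  base + 4 * index. A scale of 4 is not something z addressing modes can encode (only
  reg+imm and reg+reg are available), which is what the new hook reports to the loop
  optimizers:

    define i32 @load_elem(i32* %base, i64 %idx) {
      %addr = getelementptr i32* %base, i64 %idx   ; address is %base + 4 * %idx
      %val = load i32* %addr
      ret i32 %val
    }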
* [SystemZ] Be more careful about inverting CC masks (conditional loads) (Richard Sandiford, 2013-07-31, 2 files, -14/+14)
  Extend r187495 to conditional loads. I split this out because the easiest way seemed to
  be to force a particular operand order in SystemZISelDAGToDAG.cpp.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187496 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Be more careful about inverting CC masks (Richard Sandiford, 2013-07-31, 47 files, -124/+149)
  System z branches have a mask to select which of the 4 CC values should cause the branch
  to be taken. We can invert a branch by inverting the mask. However, not all instructions
  can produce all 4 CC values, so inverting the branch like this can lead to some oddities.

  For example, integer comparisons only produce a CC of 0 (equal), 1 (less) or 2 (greater).
  If an integer EQ is reversed to NE before instruction selection, the branch will test for
  1 or 2. If instead the branch is reversed after instruction selection (by inverting the
  mask), it will test for 1, 2 or 3. Both are correct, but the second isn't really
  canonical.

  This patch therefore keeps track of which CC values are possible and uses this when
  inverting a mask. Although this is mostly cosmetic, it fixes undefined behavior for the
  CIJNLH in branch-08.ll. Another fix would have been to mask out bit 0 when generating the
  fused compare and branch, but the point of this patch is that we shouldn't need to do
  that in the first place.

  The patch also makes it easier to reuse CC results from other instructions.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187495 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Move compare-and-branch generation even later (Richard Sandiford, 2013-07-31, 1 file, -0/+45)
  r187116 moved compare-and-branch generation from the instruction-selection pass to the
  peephole optimizer (via optimizeCompare). It turns out that even this is a bit too early.
  Fused compare-and-branch instructions don't interact well with predication, where a CC
  result is needed. They also make it harder to reuse the CC side-effects of earlier
  instructions (not yet implemented, but the subject of a later patch).

  Another problem was that the AnalyzeBranch family of routines weren't handling compares
  and branches, so we weren't able to reverse the fused form in cases where we would
  reverse a separate branch. This could have been fixed by extending AnalyzeBranch, but
  given the other problems, I've instead moved the fusing to the long-branch pass, which is
  also responsible for the opposite transformation: splitting out-of-range compares and
  branches into separate compares and long branches.

  I've added a test for the AnalyzeBranch problem. A test for the predication problem is
  included in the next patch, which fixes a bug in the choice of CC mask.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187494 91177308-0d34-0410-b5e6-96231b3b80d8
* [SystemZ] Postpone NI->RISBG conversion to convertToThreeAddress() (Richard Sandiford, 2013-07-31, 29 files, -431/+446)
  r186399 aggressively used the RISBG instruction for immediate ANDs, both because it can
  handle some values that AND IMMEDIATE can't, and because it allows the destination
  register to be different from the source. I realized later while implementing the
  distinct-ops support that it would be better to leave the choice up to
  convertToThreeAddress() instead. The AND IMMEDIATE form is shorter and is less likely to
  be cracked.

  This is a problem for 32-bit ANDs because we assume that all 32-bit operations will leave
  the high word untouched, whereas RISBG used in this way will either clear the high word
  or copy it from the source register. The patch uses the z196 instruction RISBLG for this
  instead.

  This means that z10 will be restricted to NILL, NILH and NILF for 32-bit ANDs, but I
  think that should be OK for now. Although we're using z10 as the base architecture, the
  optimization work is going to be focused more on z196 and zEC12.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187492 91177308-0d34-0410-b5e6-96231b3b80d8
* Added INSERT and EXTRACT instructions from AVX-512 ISA (Elena Demikhovsky, 2013-07-31, 1 file, -0/+44)
  All insertf*/extractf* functions were replaced with insert/extract since we have insertf
  and inserti forms. Added lowering for INSERT_VECTOR_ELT / EXTRACT_VECTOR_ELT for 512-bit
  vectors. Added lowering for EXTRACT/INSERT subvector for 512-bit vectors. Added a test.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187491 91177308-0d34-0410-b5e6-96231b3b80d8
* Changed register names (and pointer keywords) to be lower case when using Intel X86 assembler syntax (Craig Topper, 2013-07-31, 6 files, -18/+18)
  Patch by Richard Mitton.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187476 91177308-0d34-0410-b5e6-96231b3b80d8
* This test may have been sensitive to the ARM ABI... (Andrew Trick, 2013-07-30, 1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187442 91177308-0d34-0410-b5e6-96231b3b80d8
* MI Sched fix: assert "Disconnected LRG within the scheduling region." (Andrew Trick, 2013-07-30, 1 file, -1/+54)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187435 91177308-0d34-0410-b5e6-96231b3b80d8
* R600/SI: Expand vector fp <-> int conversions (Tom Stellard, 2013-07-30, 4 files, -36/+36)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187421 91177308-0d34-0410-b5e6-96231b3b80d8
* [ARM] check bitwidth in PerformORCombine (Saleem Abdulrasool, 2013-07-30, 1 file, -0/+32)
  When simplifying a (or (and B A) (and C ~A)) to a (VBSL A B C), ensure that the bitwidth
  of the second operands to both ands matches before comparing the negation of the values.

  Split the check of the value of the second operands to the ands. Move the cast and
  variable declaration slightly higher to make it slightly easier to follow.

  Bug-Id: 16700
  Signed-off-by: Saleem Abdulrasool <compnerd@compnerd.org>
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187404 91177308-0d34-0410-b5e6-96231b3b80d8
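  For illustration (hypothetical): the bitwise-select pattern PerformORCombine looks for;
  with this fix, the combine only fires when both and-masks have the same bit width:

    define <4 x i32> @bitselect(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c) {
      %nota = xor <4 x i32> %a, <i32 -1, i32 -1, i32 -1, i32 -1>
      %lhs = and <4 x i32> %b, %a
      %rhs = and <4 x i32> %c, %nota
      %res = or <4 x i32> %lhs, %rhs       ; (B & A) | (C & ~A) -> VBSL A, B, C
      ret <4 x i32> %res
    }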
* [R600] Replicate old DAGCombiner behavior in target specific DAG combine (Quentin Colombet, 2013-07-30, 1 file, -1/+0)
  build_vector is lowered to REG_SEQUENCE, which is something the register allocator does a
  good job at optimizing.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187397 91177308-0d34-0410-b5e6-96231b3b80d8
* [DAGCombiner] insert_vector_elt: Avoid building a vector twice (Quentin Colombet, 2013-07-30, 7 files, -26/+53)
  This patch prevents the following combine when the input vector is used more than once:

    insert_vector_elt (build_vector elt0, ..., eltN), NewEltIdx, idx
    =>
    build_vector elt0, ..., NewEltIdx, ..., eltN

  The reasons are:
  - Building a vector may be expensive, so try to reuse the existing part of a vector
    instead of creating a new one (think big vectors).
  - elt0 to eltN now have two users instead of one. This may prevent some other
    optimizations.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187396 91177308-0d34-0410-b5e6-96231b3b80d8
* Debug Info: enable verifier for testing cases (Manman Ren, 2013-07-29, 3 files, -3/+3)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187375 91177308-0d34-0410-b5e6-96231b3b80d8
* Debug Info: update testing cases to pass verifier (Manman Ren, 2013-07-29, 15 files, -45/+61)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187362 91177308-0d34-0410-b5e6-96231b3b80d8
* Proper va_arg/va_copy lowering on win64 (Nico Rieck, 2013-07-29, 1 file, -0/+60)
  Win64 uses CharPtrBuiltinVaList instead of X86_64ABIBuiltinVaList like other 64-bit
  targets.
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187355 91177308-0d34-0410-b5e6-96231b3b80d8
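  For illustration (hypothetical, not the test added here): a variadic function using the
  va_arg instruction; on Win64 the underlying va_list is a plain i8* (CharPtrBuiltinVaList)
  rather than the SysV x86-64 struct:

    declare void @llvm.va_start(i8*)
    declare void @llvm.va_end(i8*)

    define i32 @first_vararg(i32 %count, ...) {
      %ap = alloca i8*
      %ap.cast = bitcast i8** %ap to i8*
      call void @llvm.va_start(i8* %ap.cast)
      %val = va_arg i8** %ap, i32
      call void @llvm.va_end(i8* %ap.cast)
      ret i32 %val
    }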