path: root/test
* Revert r112623. It is causing self host build failures.
  Devang Patel, 2010-08-31 (1 file, -53/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112631 91177308-0d34-0410-b5e6-96231b3b80d8
* Remember byval argument's frame index during argument lowering and use this info to emit debug info. Fixes Radar 8367011.
  Devang Patel, 2010-08-31 (1 file, -0/+53)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112623 91177308-0d34-0410-b5e6-96231b3b80d8
* Add a test for the duplicated-conditional situation illustrated by PR5652.
  Owen Anderson, 2010-08-31 (1 file, -0/+24)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112621 91177308-0d34-0410-b5e6-96231b3b80d8
* Merge two tests.
  Chris Lattner, 2010-08-31 (2 files, -35/+35)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112617 91177308-0d34-0410-b5e6-96231b3b80d8
* Manually reduce this testcase.
  Owen Anderson, 2010-08-31 (1 file, -77/+11)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112615 91177308-0d34-0410-b5e6-96231b3b80d8
* Merge two tests and convert to FileCheck.
  Chris Lattner, 2010-08-31 (2 files, -23/+28)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112613 91177308-0d34-0410-b5e6-96231b3b80d8
* Add a micro-test for the transforms I added to JumpThreading.
  I have not been able to find a way to test each in isolation, for a few reasons:
  1) The ability to look through non-i1 BinaryOperators requires the ability to look through non-constant ICmps in order for it to ever trigger.
  2) The ability to do LVI-powered PHI value determination only matters in cases that ProcessBranchOnPHI can't handle. Since it already handles all the cases without other instructions in the def-use chain between the PHI and the branch, it requires the ability to look through ICmps and/or BinaryOperators as well.
  Owen Anderson, 2010-08-31 (1 file, -0/+30)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112611 91177308-0d34-0410-b5e6-96231b3b80d8
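  A minimal LLVM IR sketch (made-up names, not the checked-in micro-test) of the kind of pattern these transforms target: a non-i1 BinaryOperator and a non-constant ICmp sitting between the PHI and the branch.
      define i32 @thread(i1 %cond, i32 %a) {
      entry:
        br i1 %cond, label %then, label %merge
      then:
        br label %merge
      merge:
        ; On the edge from %entry, %p is 0, so %add equals %a and %cmp folds to
        ; true, letting JumpThreading thread %entry -> %merge straight to %t.
        %p = phi i32 [ 0, %entry ], [ 1, %then ]
        %add = add i32 %p, %a          ; non-i1 BinaryOperator fed by the PHI
        %cmp = icmp eq i32 %add, %a    ; non-constant ICmp feeding the branch
        br i1 %cmp, label %t, label %f
      t:
        ret i32 1
      f:
        ret i32 0
      }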
* Update test for r112609.
  Jim Grosbach, 2010-08-31 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112610 91177308-0d34-0410-b5e6-96231b3b80d8
* Rename test directory to reflect new pass name.
  Owen Anderson, 2010-08-31 (2 files, -0/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112592 91177308-0d34-0410-b5e6-96231b3b80d8
* Rename ValuePropagation to a more descriptive CorrelatedValuePropagation.
  Owen Anderson, 2010-08-31 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112591 91177308-0d34-0410-b5e6-96231b3b80d8
* More Chris-inspired JumpThreading fixes: use ConstantExpr to correctly constant-fold undef, and be more careful with its return value.
  This actually exposed an infinite recursion bug in ComputeValueKnownInPredecessors which theoretically already existed (in JumpThreading's handling of and/or of i1's), but never manifested before. This patch adds a tracking set to prevent this case.
  Owen Anderson, 2010-08-31 (1 file, -0/+91)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112589 91177308-0d34-0410-b5e6-96231b3b80d8
* Remove r111665, which implemented store-narrowing in InstCombine.
  Chris discovered a miscompilation in it, and it's not easily fixable at the optimizer level. I'll investigate reimplementing it in DAGCombine.
  Owen Anderson, 2010-08-31 (1 file, -21/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112575 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix broken test.
  Anton Korobeynikov, 2010-08-30 (1 file, -3/+3)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112555 91177308-0d34-0410-b5e6-96231b3b80d8
* Combine these two tests, and make sure there's a newline at the end of the file.
  Owen Anderson, 2010-08-30 (2 files, -21/+19)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112554 91177308-0d34-0410-b5e6-96231b3b80d8
* Remove NEON vmovn intrinsic, replacing it with vector truncate operations.
  Auto-upgrade the old intrinsic and update tests.
  Bob Wilson, 2010-08-30 (3 files, -7/+17)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112507 91177308-0d34-0410-b5e6-96231b3b80d8
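  A minimal sketch (assumed, not the updated tests themselves) of the plain vector truncate that now stands in for a vmovn from <8 x i16> to <8 x i8>:
      define <8 x i8> @narrow(<8 x i16> %v) {
        ; previously expressed with the NEON vmovn intrinsic; now an ordinary truncate
        %t = trunc <8 x i16> %v to <8 x i8>
        ret <8 x i8> %t
      }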
* Two changes:
  1) Nuke ConstDataCoalSection, which is dead.
  2) Revise my previous patch for rdar://8018335, which was completely wrong. Specifically, it doesn't make sense to mark __TEXT,__const_coal as PURE_INSTRUCTIONS, because it is for readonly data. Templates (it turns out) go to const_coal_nt. The real fix for rdar://8018335 was to give ConstTextCoalSection a section kind of ReadOnly instead of Text.
  Chris Lattner, 2010-08-30 (1 file, -2/+2)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112496 91177308-0d34-0410-b5e6-96231b3b80d8
* Partially revert r112480. Caused test failures.
  Michael J. Spencer, 2010-08-30 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112486 91177308-0d34-0410-b5e6-96231b3b80d8
* coff-dump.py: Fix PR7996. Now it is compatible with Python 2.4.
  NAKAMURA Takumi, 2010-08-30 (1 file, -2/+5)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112485 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix constant-over-index.ll test on Windows.
  Michael J. Spencer, 2010-08-30 (1 file, -2/+3)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112483 91177308-0d34-0410-b5e6-96231b3b80d8
* Test: Fix LLVMC tests on CMake.
  The CMake build didn't define TEST_COMPILE_CXX_CMD. The tests assumed gcc.
  Michael J. Spencer, 2010-08-30 (21 files, -20/+45)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112480 91177308-0d34-0410-b5e6-96231b3b80d8
* Correct bogus module triple specifications.
  Duncan Sands, 2010-08-30 (46 files, -46/+46)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112469 91177308-0d34-0410-b5e6-96231b3b80d8
* LICM does get dead instructions input to it. Instead of sinking them out of loops, just delete them.
  Chris Lattner, 2010-08-29 (1 file, -0/+14)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112451 91177308-0d34-0410-b5e6-96231b3b80d8
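  A minimal sketch (made-up example, not the new test) of a dead, loop-invariant instruction that LICM can now delete outright instead of sinking out of the loop:
      define void @f(i32 %n) {
      entry:
        br label %loop
      loop:
        %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
        %dead = mul i32 %n, 3              ; loop-invariant and has no uses: delete, don't sink
        %i.next = add i32 %i, 1
        %cmp = icmp slt i32 %i.next, %n
        br i1 %cmp, label %loop, label %exit
      exit:
        ret void
      }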
* Make IVUsers iterative instead of recursive.
  This has the side effect of reversing the order of most of IVUsers' results.
  Dan Gohman, 2010-08-29 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112442 91177308-0d34-0410-b5e6-96231b3b80d8
* Make this test less dependent on register allocation choices.
  Dan Gohman, 2010-08-29 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112426 91177308-0d34-0410-b5e6-96231b3b80d8
* Use exec.
  Dan Gohman, 2010-08-29 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112425 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix lowering of INSERT_VECTOR_ELT in SPU.
  The IDX was treated as a byte index, not an element index.
  Kalle Raiskila, 2010-08-29 (1 file, -0/+8)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112422 91177308-0d34-0410-b5e6-96231b3b80d8
* Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use LLVM IR add/sub operations with one or both operands sign- or zero-extended.
  Auto-upgrade the old intrinsics.
  Bob Wilson, 2010-08-29 (4 files, -80/+132)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112416 91177308-0d34-0410-b5e6-96231b3b80d8
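  A minimal sketch (assumed, made-up function name) of the widened-operand form that replaces a signed vaddl: sign-extend both operands, then use an ordinary IR add.
      define <4 x i32> @vaddl_s16(<4 x i16> %a, <4 x i16> %b) {
        %a.ext = sext <4 x i16> %a to <4 x i32>
        %b.ext = sext <4 x i16> %b to <4 x i32>
        %sum = add <4 x i32> %a.ext, %b.ext   ; replaces the removed vaddl intrinsic call
        ret <4 x i32> %sum
      }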
* Merge a bunch of shuffle tests into sse2.ll.
  Chris Lattner, 2010-08-29 (9 files, -158/+164)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112398 91177308-0d34-0410-b5e6-96231b3b80d8
* Add some nounwinds.
  Chris Lattner, 2010-08-29 (7 files, -13/+13)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112396 91177308-0d34-0410-b5e6-96231b3b80d8
* FIXME accomplished.
  Chris Lattner, 2010-08-28 (1 file, -2/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112386 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix the buildvector->insertp[sd] logic to not always create a redundant insertp[sd] $0, which is a noop.
  Before:
      _f32:                         ## @f32
          pshufd   $1, %xmm1, %xmm2
          pshufd   $1, %xmm0, %xmm3
          addss    %xmm2, %xmm3
          addss    %xmm1, %xmm0
                                    ## kill: XMM0<def> XMM0<kill> XMM0<def>
          insertps $0, %xmm0, %xmm0
          insertps $16, %xmm3, %xmm0
          ret
  After:
      _f32:                         ## @f32
          movdqa   %xmm0, %xmm2
          addss    %xmm1, %xmm2
          pshufd   $1, %xmm1, %xmm1
          pshufd   $1, %xmm0, %xmm3
          addss    %xmm1, %xmm3
          movdqa   %xmm2, %xmm0
          insertps $16, %xmm3, %xmm0
          ret
  The extra movs are due to a random (poor) scheduling decision.
  Chris Lattner, 2010-08-28 (3 files, -2/+27)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112379 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix the BuildVector -> unpcklps logic to not do pointless shuffles when the top elements of a vector are undefined.
  This happens all the time for X86-64 ABI stuff because only the low 2 elements of a 4 element vector are defined. For example, on:
      _Complex float f32(_Complex float A, _Complex float B) {
        return A+B;
      }
  We used to produce (with SSE2, SSE4.1+ uses insertps):
      _f32:                         ## @f32
          movdqa   %xmm0, %xmm2
          addss    %xmm1, %xmm2
          pshufd   $16, %xmm2, %xmm2
          pshufd   $1, %xmm1, %xmm1
          pshufd   $1, %xmm0, %xmm0
          addss    %xmm1, %xmm0
          pshufd   $16, %xmm0, %xmm1
          movdqa   %xmm2, %xmm0
          unpcklps %xmm1, %xmm0
          ret
  We now produce:
      _f32:                         ## @f32
          movdqa   %xmm0, %xmm2
          addss    %xmm1, %xmm2
          pshufd   $1, %xmm1, %xmm1
          pshufd   $1, %xmm0, %xmm3
          addss    %xmm1, %xmm3
          movaps   %xmm2, %xmm0
          unpcklps %xmm3, %xmm0
          ret
  This implements rdar://8368414.
  Chris Lattner, 2010-08-28 (1 file, -0/+25)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112378 91177308-0d34-0410-b5e6-96231b3b80d8
* Update OCaml test.
  Benjamin Kramer, 2010-08-28 (1 file, -6/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112364 91177308-0d34-0410-b5e6-96231b3b80d8
* Remove unions from LLVM IR. They are severely buggy and not being actively maintained, improved, or extended.
  Chris Lattner, 2010-08-28 (3 files, -59/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112356 91177308-0d34-0410-b5e6-96231b3b80d8
* Remove the ABCD and SSI passes. They don't have any clients that I'm aware of, aren't maintained, and LVI will be replacing their value. nlewycky approved this on IRC.
  Chris Lattner, 2010-08-28 (8 files, -182/+0)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112355 91177308-0d34-0410-b5e6-96231b3b80d8
* Handle the constant case of vector insertion.
  For something like this:
      struct S { float A, B, C, D; };
      struct S g;
      struct S bar() {
        struct S A = g;
        ++A.B;
        A.A = 42;
        return A;
      }
  we now generate:
      _bar:                         ## @bar
      ## BB#0:                      ## %entry
          movq     _g@GOTPCREL(%rip), %rax
          movss    12(%rax), %xmm0
          pshufd   $16, %xmm0, %xmm0
          movss    4(%rax), %xmm2
          movss    8(%rax), %xmm1
          pshufd   $16, %xmm1, %xmm1
          unpcklps %xmm0, %xmm1
          addss    LCPI1_0(%rip), %xmm2
          pshufd   $16, %xmm2, %xmm2
          movss    LCPI1_1(%rip), %xmm0
          pshufd   $16, %xmm0, %xmm0
          unpcklps %xmm2, %xmm0
          ret
  instead of:
      _bar:                         ## @bar
      ## BB#0:                      ## %entry
          movq     _g@GOTPCREL(%rip), %rax
          movss    12(%rax), %xmm0
          pshufd   $16, %xmm0, %xmm0
          movss    4(%rax), %xmm2
          movss    8(%rax), %xmm1
          pshufd   $16, %xmm1, %xmm1
          unpcklps %xmm0, %xmm1
          addss    LCPI1_0(%rip), %xmm2
          movd     %xmm2, %eax
          shlq     $32, %rax
          addq     $1109917696, %rax    ## imm = 0x42280000
          movd     %rax, %xmm0
          ret
  Chris Lattner, 2010-08-28 (1 file, -0/+12)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112345 91177308-0d34-0410-b5e6-96231b3b80d8
* Optimize bitcasts from large integers to vector into vector element insertion from the pieces that feed into the vector.
  This handles a pattern that occurs frequently due to code generated for the x86-64 ABI. We now compile something like this:
      struct S { float A, B, C, D; };
      struct S g;
      struct S bar() {
        struct S A = g;
        ++A.A;
        ++A.C;
        return A;
      }
  into all nice vector operations:
      _bar:                         ## @bar
      ## BB#0:                      ## %entry
          movq     _g@GOTPCREL(%rip), %rax
          movss    LCPI1_0(%rip), %xmm1
          movss    (%rax), %xmm0
          addss    %xmm1, %xmm0
          pshufd   $16, %xmm0, %xmm0
          movss    4(%rax), %xmm2
          movss    12(%rax), %xmm3
          pshufd   $16, %xmm2, %xmm2
          unpcklps %xmm2, %xmm0
          addss    8(%rax), %xmm1
          pshufd   $16, %xmm1, %xmm1
          pshufd   $16, %xmm3, %xmm2
          unpcklps %xmm2, %xmm1
          ret
  instead of icky integer operations:
      _bar:                         ## @bar
          movq     _g@GOTPCREL(%rip), %rax
          movss    LCPI1_0(%rip), %xmm1
          movss    (%rax), %xmm0
          addss    %xmm1, %xmm0
          movd     %xmm0, %ecx
          movl     4(%rax), %edx
          movl     12(%rax), %esi
          shlq     $32, %rdx
          addq     %rcx, %rdx
          movd     %rdx, %xmm0
          addss    8(%rax), %xmm1
          movd     %xmm1, %eax
          shlq     $32, %rsi
          addq     %rax, %rsi
          movd     %rsi, %xmm1
          ret
  This resolves rdar://8360454.
  Chris Lattner, 2010-08-28 (1 file, -0/+31)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112343 91177308-0d34-0410-b5e6-96231b3b80d8
* Completely disable tail calls when fast-isel is enabled, as fast-isel doesn't currently support dealing with this.
  Dan Gohman, 2010-08-28 (1 file, -4/+10)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112341 91177308-0d34-0410-b5e6-96231b3b80d8
* Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and selects. This pass addresses the missed optimizations from PR2581 and PR4420.
  Owen Anderson, 2010-08-27 (3 files, -0/+45)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112325 91177308-0d34-0410-b5e6-96231b3b80d8
* Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like all the other LDM/STM instructions.
  This fixes asm printer crashes when compiling with -O0. I've changed one of the NEON tests (vst3.ll) to run with -O0 to check this in the future.
  Prior to this change VLDM/VSTM used addressing mode #5, but not really. The offset field was used to hold a count of the number of registers being loaded or stored, and the AM5 opcode field was expanded to specify the IA or DB mode, instead of the standard ADD/SUB specifier. Much of the backend was not aware of these special cases. The crashes occurred when rewriting a frameindex caused the AM5 offset field to be changed so that it did not have a valid submode. I don't know exactly what changed to expose this now. Maybe we've never done much with -O0 and NEON.
  Regardless, there's no longer any reason to keep a count of the VLDM/VSTM registers, so we can use addressing mode #4 and clean things up in a lot of places.
  Bob Wilson, 2010-08-27 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112322 91177308-0d34-0410-b5e6-96231b3b80d8
* Tidy up test.
  Chris Lattner, 2010-08-27 (1 file, -1/+2)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112321 91177308-0d34-0410-b5e6-96231b3b80d8
* No really, fix the test.
  Chris Lattner, 2010-08-27 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112317 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix this test. It's not clear what it's really testing.
  Chris Lattner, 2010-08-27 (1 file, -1/+1)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112316 91177308-0d34-0410-b5e6-96231b3b80d8
* Enhance the shift propagator to handle the case when you have:
      A = shl x, 42
      ...
      B = lshr ..., 38
  which can be transformed into:
      A = shl x, 4
      ...
  iff we can prove that the would-be-shifted-in bits are already zero. This eliminates two shifts in the testcase and allows elimination of the whole i128 chain in the real example.
  Chris Lattner, 2010-08-27 (1 file, -0/+15)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112314 91177308-0d34-0410-b5e6-96231b3b80d8
* Implement a pretty general logical shift propagation framework, which is good at ripping through bitfield operations.
  This generalizes a bunch of the existing xforms that instcombine does, such as (x << c) >> c -> and, to handle intermediate logical nodes. This is useful for ripping up the "promote to large integer" code produced by SRoA.
  Chris Lattner, 2010-08-27 (1 file, -4/+17)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112304 91177308-0d34-0410-b5e6-96231b3b80d8
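  A minimal sketch (made-up function name) of the classic form of that xform, where a matching shl/lshr pair collapses to a mask:
      define i32 @shl_lshr(i32 %x) {
        %a = shl i32 %x, 5
        %b = lshr i32 %a, 5
        ; folds to: and i32 %x, 134217727   (0x07FFFFFF, the low 27 bits)
        ret i32 %b
      }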
* Merge and filecheckize test.
  Chris Lattner, 2010-08-27 (2 files, -42/+57)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112289 91177308-0d34-0410-b5e6-96231b3b80d8
* Merge two tests.
  Chris Lattner, 2010-08-27 (2 files, -10/+12)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112288 91177308-0d34-0410-b5e6-96231b3b80d8
* Teach the truncation optimization that an entire chain of computation can be truncated if it is fed by a sext/zext that doesn't have to be exactly equal to the truncation result type.
  Chris Lattner, 2010-08-27 (1 file, -4/+21)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112285 91177308-0d34-0410-b5e6-96231b3b80d8
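  A minimal sketch (assumed, not the updated test) where the whole chain can be evaluated in the narrow type even though the feeding zexts are wider than the trunc's result type:
      define i16 @narrow_chain(i8 %x, i16 %y) {
        %xw = zext i8 %x to i32        ; extends past i16, the eventual result type
        %yw = zext i16 %y to i32
        %add = add i32 %xw, %yw
        %t = trunc i32 %add to i16     ; the add can be performed directly in i16
        ret i16 %t
      }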
* Get this test passing on Linux builders.
  Chris Lattner, 2010-08-27 (1 file, -8/+8)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112280 91177308-0d34-0410-b5e6-96231b3b80d8
* Add an instcombine to clean up a common pattern produced by the SRoA "promote to large integer" code, eliminating some type conversions like this:
      %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
      %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
      %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]
  This also unblocks other xforms from happening; now clang is able to compile:
      struct S { float A, B, C, D; };
      float foo(struct S A) { return A.A + A.B+A.C+A.D; }
  into:
      _foo:                         ## @foo
      ## BB#0:                      ## %entry
          pshufd   $1, %xmm0, %xmm2
          addss    %xmm0, %xmm2
          movdqa   %xmm1, %xmm3
          addss    %xmm2, %xmm3
          pshufd   $1, %xmm1, %xmm0
          addss    %xmm3, %xmm0
          ret
  on x86-64, instead of:
      _foo:                         ## @foo
      ## BB#0:                      ## %entry
          movd     %xmm0, %rax
          shrq     $32, %rax
          movd     %eax, %xmm2
          addss    %xmm0, %xmm2
          movapd   %xmm1, %xmm3
          addss    %xmm2, %xmm3
          movd     %xmm1, %rax
          shrq     $32, %rax
          movd     %eax, %xmm0
          addss    %xmm3, %xmm0
          ret
  This seems pretty close to optimal to me, at least without using horizontal adds. This also triggers in lots of other code, including SPEC.
  Chris Lattner, 2010-08-27 (1 file, -0/+32)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112278 91177308-0d34-0410-b5e6-96231b3b80d8