| Commit message | Author | Date | Files | Lines (-/+) |
|---|---|---|---|---|
| Revert r122955. It seems using movups to lower memcpy can cause massive regre... | Evan Cheng | 2011-01-07 | 1 | -25/+17 |
| Use movups to lower memcpy and memset even if it's not fast (like corei7). | Evan Cheng | 2011-01-06 | 1 | -17/+25 |
| Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy | Evan Cheng | 2011-01-06 | 1 | -20/+44 |
| Revert r122936. I'll re-implement the change. | Evan Cheng | 2011-01-06 | 1 | -44/+20 |
| r105228 reduced the memcpy / memset inline limit to 4 with -Os to avoid blowing | Evan Cheng | 2011-01-06 | 1 | -20/+44 |
| fix PR6623: when optimizing for size, don't inline memcpy/memsets | Chris Lattner | 2010-05-31 | 1 | -0/+30 |
| upgrade and filecheckize this test. | Chris Lattner | 2010-05-31 | 1 | -6/+16 |
| Add nounwind. | Evan Cheng | 2010-04-05 | 1 | -2/+2 |
| Eliminate more uses of llvm-as and llvm-dis. | Dan Gohman | 2009-09-08 | 1 | -1/+1 |
| Refactor the memcpy lowering for the x86 target. | Rafael Espindola | 2007-09-28 | 1 | -0/+17 |
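
For orientation only, the sketch below shows the general shape of a FileCheck-based test for the behavior the PR6623 / r105228 commits describe (with `optsize`, a large `memcpy` should be left as a call rather than expanded into inline stores). It is not the actual contents of the logged test file: the triple, function name, copy size, and CHECK pattern are assumptions, and the intrinsic uses the pre-3.7 signature with an explicit alignment argument, matching the era of these commits.

```llvm
; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s

; Hypothetical case: with the optsize attribute, a 128-byte copy should stay
; a memcpy call instead of being expanded into a long run of inline stores.
; CHECK: memcpy
define void @copy_optsize(i8* %dst, i8* %src) nounwind optsize {
entry:
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 128, i32 1, i1 false)
  ret void
}

declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture, i64, i32, i1) nounwind
```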