path: root/src/mesa/drivers/dri/i965/brw_schedule_instructions.cpp
Commit history for this file, newest first. Each entry shows the commit message, author, date, files changed, and lines removed/added.
* i965/ir: Update several stale comments. (Francisco Jerez, 2016-09-14, 1 file, -4/+4)
    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
* i965/fs: Misc simplification. (Francisco Jerez, 2016-09-14, 1 file, -2/+2)
    Get rid of some leftover redundant arithmetic introduced during the conversion to byte offsets and sizes that can be simplified easily.
    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
* i965/fs: Replace fs_inst::regs_written with ::size_written field in bytes. (Francisco Jerez, 2016-09-14, 1 file, -3/+3)
    The previous regs_written field can be recovered by rewriting each rvalue reference of regs_written like 'x = i.regs_written' to 'x = DIV_ROUND_UP(i.size_written, reg_unit)', and each lvalue reference like 'i.regs_written = x' to 'i.size_written = x * reg_unit'.
    For the same reason as in the previous patches, this doesn't attempt to be particularly clever about simplifying the result in the interest of keeping the rather lengthy patch as obvious as possible. I'll come back later to clean up any ugliness introduced here.
    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
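    A rough, self-contained illustration of the mechanical rewrite above (a sketch only: the struct, the helper names and the 32-byte reg_unit value are assumptions, not the actual back-end code):

        /* Sketch of the regs_written -> size_written rewrite. */
        #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

        static const unsigned reg_unit = 32;   /* assumed bytes per register */

        struct inst_sketch {
           unsigned size_written;              /* new field, in bytes */
        };

        /* Old rvalue use 'x = i.regs_written' becomes: */
        static unsigned regs_written(const inst_sketch &i)
        {
           return DIV_ROUND_UP(i.size_written, reg_unit);
        }

        /* Old lvalue use 'i.regs_written = x' becomes: */
        static void set_regs_written(inst_sketch &i, unsigned x)
        {
           i.size_written = x * reg_unit;
        }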
* i965/vec4: Add wrapper functions for vec4_instruction::regs_read and ::regs_written. (Francisco Jerez, 2016-09-14, 1 file, -4/+4)
    This is in preparation for dropping vec4_instruction::regs_read and ::regs_written in favor of more accurate alternatives expressed in byte units. The main reason these wrappers are useful is that a number of optimization passes implement dataflow analysis with register granularity, so these helpers will come in handy once we've switched register offsets and sizes to the byte representation.
    The wrapper functions will also make sure that GRF misalignment (currently neglected by most of the back-end) is taken into account correctly in the calculation of regs_read and regs_written.
    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
* i965/fs: Add wrapper functions for fs_inst::regs_read and ::regs_written. (Francisco Jerez, 2016-09-14, 1 file, -15/+15)
    This is in preparation for dropping fs_inst::regs_read and ::regs_written in favor of more accurate alternatives expressed in byte units. The main reason these wrappers are useful is that a number of optimization passes implement dataflow analysis with register granularity, so these helpers will come in handy once we've switched register offsets and sizes to the byte representation.
    The wrapper functions will also make sure that GRF misalignment (currently neglected by most of the back-end) is taken into account correctly in the calculation of regs_read and regs_written.
    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
* i965/fs: Replace fs_reg::reg_offset with fs_reg::offset expressed in bytes. (Francisco Jerez, 2016-09-14, 1 file, -5/+10)
    The fs_reg::offset field in byte units introduced in this patch is a more straightforward alternative to the current register offset representation split between fs_reg::reg_offset and ::subreg_offset. The split representation makes it too easy to forget about one of the offsets while dealing with the other, which has led to multiple back-end bugs in the past. To make matters worse, the unit reg_offset was expressed in was rather inconsistent: for uniforms it would be expressed in either 4B or 16B units depending on the back-end, and for most other things it would be expressed in 32B units.
    This encodes reg_offset as a new offset field expressed consistently in byte units. Each rvalue reference of reg_offset in existing code like 'x = r.reg_offset' is rewritten to 'x = r.offset / reg_unit', and each lvalue reference like 'r.reg_offset = x' is rewritten to 'r.offset = r.offset % reg_unit + x * reg_unit'.
    Because the change affects a lot of places and is rather non-trivial to verify due to the inconsistent value of reg_unit, I've tried to avoid making any additional changes other than applying the rewrite rule above in order to keep the patch as simple as possible, sometimes at the cost of introducing obvious stupidity (e.g. algebraic expressions that could be simplified given some knowledge of the context) -- I'll clean those up later on in a second pass.
    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
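    A sketch of the corresponding accessor rewrite (illustrative only: reg_unit and its value are assumptions as above, and the real field lives on fs_reg rather than this stand-in struct). Note how the lvalue form preserves the sub-register part of the byte offset:

        static const unsigned reg_unit = 32;   /* assumed bytes per register */

        struct reg_sketch {
           unsigned offset;                    /* new field, in bytes */
        };

        /* Old rvalue use 'x = r.reg_offset' becomes: */
        static unsigned reg_offset(const reg_sketch &r)
        {
           return r.offset / reg_unit;
        }

        /* Old lvalue use 'r.reg_offset = x' becomes the following, which keeps
         * the sub-register byte offset intact: */
        static void set_reg_offset(reg_sketch &r, unsigned x)
        {
           r.offset = r.offset % reg_unit + x * reg_unit;
        }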
* intel: s/brw_device_info/gen_device_info/ (Jason Ekstrand, 2016-09-03, 1 file, -2/+2)
    Generated by:
        sed -i -e 's/brw_device_info/gen_device_info/g' src/intel/**/*.c
        sed -i -e 's/brw_device_info/gen_device_info/g' src/intel/**/*.h
        sed -i -e 's/brw_device_info/gen_device_info/g' **/i965/*.c
        sed -i -e 's/brw_device_info/gen_device_info/g' **/i965/*.cpp
        sed -i -e 's/brw_device_info/gen_device_info/g' **/i965/*.h
    Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
    Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
* i965/fs: Remove special casing of framebuffer writes in scheduler code. (Francisco Jerez, 2016-08-25, 1 file, -2/+1)
    The reason why it was safe for the scheduler to ignore the side effects of framebuffer write instructions was that their side effects couldn't have had any influence on any other instruction in the program, because we weren't doing framebuffer reads, and framebuffer writes were always non-overlapping. We need actual memory dependency analysis in order to determine whether a side-effectful instruction can be reordered with respect to other instructions in the program.
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965/sched: Simplify work done by add_barrier_deps(). (Matt Turner, 2016-08-19, 1 file, -0/+9)
    Scheduling barriers are implemented by placing a dependence on every node before and after the barrier. This is unnecessary, as we can limit the nodes we place dependencies on to those between us and the next barrier in each direction.
    Runtime of dEQP-GLES31.functional.ssbo.layout.random.all_shared_buffer.23 is reduced from ~25 minutes to a little more than three.
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94681
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
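    A compilable sketch of the idea (stand-in types and helpers, not the actual scheduler API): walk outward from the barrier and stop at the first node that is itself a barrier, since that node already orders everything beyond it.

        #include <vector>

        struct node_sketch {
           bool is_barrier;
           std::vector<node_sketch *> deps;   /* nodes that must complete first */
        };

        static void add_dep(node_sketch *before, node_sketch *after)
        {
           after->deps.push_back(before);
        }

        /* Old approach: add a dep between the barrier and every other node in
         * the block.  New approach: stop at the nearest barrier on each side. */
        static void add_barrier_deps(std::vector<node_sketch *> &nodes, int i)
        {
           for (int p = i - 1; p >= 0; p--) {
              add_dep(nodes[p], nodes[i]);
              if (nodes[p]->is_barrier)
                 break;
           }
           for (int s = i + 1; s < (int)nodes.size(); s++) {
              add_dep(nodes[i], nodes[s]);
              if (nodes[s]->is_barrier)
                 break;
           }
        }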
* i965/sched: Change the scheduling heuristics to favor early program termination. (Francisco Jerez, 2016-08-18, 1 file, -3/+16)
    This uses the unblocked time of the exit assigned to each available node to attempt to unblock exit nodes as early as possible, potentially reducing the runtime of the shader when an exit branch is taken.
    There is a natural trade-off between terminating the program as early as possible and reducing the worst-case latency of the program as a whole (since this will typically move exit-unblocking nodes closer to their dependencies, potentially causing additional stalls of the execution pipeline), but in practice the bandwidth and ALU cycle savings from terminating the program earlier tend to outweigh the slight increase in worst-case program execution latency, so it makes sense to prefer nodes likely to unblock an earlier exit regardless of the latency benefits of other available nodes.
    I haven't observed any benchmark regressions from this change after testing on VLV, HSW, BDW, BSW and SKL. The FPS of the GfxBench Manhattan benchmark increases by 10%-20% and the FPS of Unigine Valley improves by roughly 5%, depending on the platform and settings.
    The change to the register pressure-sensitive heuristic is rather conservative and gives precedence to the existing heuristic in order to avoid increasing register pressure and causing spill count and SIMD width regressions in shader-db. It may make sense to revisit this with additional performance data.
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
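    A sketch of how such a comparison between two candidate nodes might look (illustrative field and function names, not the actual heuristic code):

        #include <climits>

        struct cand_sketch {
           int unblocked_time;            /* earliest cycle this node can issue */
           const cand_sketch *exit;       /* preferred exit node, or nullptr */
        };

        static int exit_unblocked_time(const cand_sketch *n)
        {
           return n->exit ? n->exit->unblocked_time : INT_MAX;
        }

        /* Prefer the candidate whose preferred exit would be unblocked sooner;
         * otherwise fall back to the pre-existing latency-based comparison. */
        static const cand_sketch *choose(const cand_sketch *a, const cand_sketch *b)
        {
           if (exit_unblocked_time(a) != exit_unblocked_time(b))
              return exit_unblocked_time(a) < exit_unblocked_time(b) ? a : b;
           return a->unblocked_time <= b->unblocked_time ? a : b;
        }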
* i965/sched: Assign a preferred exit node to each node of the dependency graph. (Francisco Jerez, 2016-08-18, 1 file, -0/+59)
    This adds a bit of metadata to schedule_node that will be used to compare available nodes in the scheduling heuristic code based on which of them unblocks the earliest successor exit node. Note that assigning exit nodes wouldn't be necessary in a bottom-up scheduler because we could achieve the same effect by scheduling the exit nodes themselves appropriately.
    No shader-db changes.
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
* i965/sched: Calculate the critical path of scheduling nodes non-recursively. (Francisco Jerez, 2016-08-18, 1 file, -13/+12)
    The critical path of each node is calculated by induction based on the critical paths of its children, which can be done in a post-order depth-first traversal of the dependency graph. The current code implements graph traversal by iterating over all nodes of the graph and then recursing into its children -- but it turns out that recursion is unnecessary, because the lexical order of instructions in the block is already a good enough reverse post-order of the dependency graph (if it weren't a reverse post-order, some instruction would have been located before one of its dependencies in the original ordering of the basic block, which is impossible), so we just need to walk the instruction list in reverse to achieve the same result more efficiently.
    No shader-db changes.
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
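    A compilable sketch of the single reverse pass (stand-in types; 'children' are the nodes that depend on a given node and 'delay' is the critical-path length through it):

        #include <algorithm>
        #include <vector>

        struct path_node_sketch {
           int latency;                              /* latency of this node */
           int delay;                                /* critical path through it */
           std::vector<path_node_sketch *> children; /* dependent nodes */
        };

        /* Because the block order is a reverse post-order of the dependency
         * graph, walking it backwards visits every child before its parent,
         * so one pass suffices and no recursion is needed. */
        static void compute_delays(std::vector<path_node_sketch *> &nodes)
        {
           for (auto it = nodes.rbegin(); it != nodes.rend(); ++it) {
              path_node_sketch *n = *it;
              n->delay = n->latency;
              for (const path_node_sketch *child : n->children)
                 n->delay = std::max(n->delay, n->latency + child->delay);
           }
        }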
* i965/fs: Keep track of flag dependencies with byte granularity during scheduling. (Francisco Jerez, 2016-05-27, 1 file, -10/+31)
    This prevents false dependencies from being created between instructions that write disjoint 8-bit portions of the flag register, and on the other hand should make sure that the scheduler considers dependencies between instructions that write or read multiple flag subregisters at once (e.g. 32-wide predication or conditional mods).
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
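    One way to picture byte-granular flag tracking (a sketch with assumed helper names, not the scheduler's actual bookkeeping): represent the portion of the flag register an instruction touches as a bitmask with one bit per flag byte, so two accesses only create a dependency when their masks overlap.

        /* Mask covering 'size_bytes' bytes of the flag register starting at
         * byte 'first_byte'. */
        static unsigned flag_mask(unsigned first_byte, unsigned size_bytes)
        {
           return ((1u << size_bytes) - 1) << first_byte;
        }

        static bool flag_access_conflicts(unsigned mask_a, unsigned mask_b)
        {
           return (mask_a & mask_b) != 0;
        }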
* i965/fs: Add missing get_latency_gen7() cases for the Gen7 pull constant opcodes. (Francisco Jerez, 2016-05-27, 1 file, -0/+2)
    This was causing the scheduler to be rather optimistic about the latency of pull constant opcodes on Gen7+. This might seem to increase the cycle count estimate calculated by the scheduler itself for some shaders, even though the actual cycle count should decrease.
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
* i965/fs: Rename Gen4 physical varying pull constant load opcode. (Francisco Jerez, 2016-05-27, 1 file, -1/+1)
    For consistency with the Gen7 variant. I'm not doing the same to the uniform pull constant message at this point because the non-GEN7 one is still overloaded to be either an expression-like logical instruction or a Gen4-specific physical send message.
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
* i965: Add infrastructure for sample lod-zero operations. (Matt Turner, 2016-05-19, 1 file, -0/+2)
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965: Don't add barrier deps for FB write messages. (Matt Turner, 2016-03-30, 1 file, -1/+2)
    Ken did this earlier, and this is just me reimplementing his patch a little differently.
    Reviewed-by: Francisco Jerez <currojerez@riseup.net>
* i965: Add and use is_scheduling_barrier() function. (Matt Turner, 2016-03-30, 1 file, -4/+17)
* i965: Remove NOP insertion kludge in scheduler. (Matt Turner, 2016-03-30, 1 file, -20/+5)
    Instead of removing every instruction in add_insts_from_block(), just move the instruction to its scheduled location. This is a step towards doing both bottom-up and top-down scheduling without conflicts.
    Note that this patch changes cycle counts for programs because it begins including control flow instructions in the estimates.
    Reviewed-by: Francisco Jerez <currojerez@riseup.net>
* i965: Relax restriction on scheduling last instruction. (Matt Turner, 2016-03-30, 1 file, -20/+3)
    I think when this code was written, basic blocks were always ended by a control flow instruction or an end-of-thread message. That's no longer the case, and removing this restriction actually helps things:
        instructions in affected programs: 7267 -> 7244 (-0.32%)
        helped: 4
        total cycles in shared programs: 66559580 -> 66431900 (-0.19%)
        cycles in affected programs: 28310152 -> 28182472 (-0.45%)
        helped: 9577
        HURT: 879
        GAINED: 2
    The addition of the is_control_flow() checks is not a functional change, since add_insts_from_block() does not put them in the list of instructions to schedule. I plan to change this in a later patch.
    Reviewed-by: Francisco Jerez <currojerez@riseup.net>
* Revert "i965: Don't add barrier deps for FB write messages."Matt Turner2016-03-301-4/+3
| | | | | | | | | | | | | | This reverts commit d0e1d6b7e27bf5f05436e47080d326d7daa63af2. The change in the vec4 code is a mistake -- there's never an FS_OPCODE_FB_WRITE in vec4 code. The change in the fs code had the (harmless) effect of not recognizing an FB_WRITE as a scheduling barrier even if it was marked EOT -- harmless because the scheduler marked the last instruction of a block as a barrier, something I'm changing in the following patches. This will be reimplemented later in the series.
* i965: Simplify full scheduling-barrier conditions. (Matt Turner, 2016-03-30, 1 file, -27/+8)
    All of these were simply code for "architecture register file" (and in the case of destinations, "not the null register").
    Reviewed-by: Francisco Jerez <currojerez@riseup.net>
* i965: Remove incorrect cycle estimates. (Matt Turner, 2016-03-30, 1 file, -10/+0)
    These printed the cycle count of only the last basic block (sched.time is set per basic block!). We have accurate, full-program data printed elsewhere.
    Reviewed-by: Francisco Jerez <currojerez@riseup.net>
* i965: Use foreach_in_list_reverse_safe() macro. (Matt Turner, 2016-03-12, 1 file, -12/+2)
    Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
* i965: Don't add barrier deps for FB write messages. (Kenneth Graunke, 2016-02-08, 1 file, -3/+4)
    There are never render target reads, so there are no scheduling hazards. Giving the extra flexibility to the scheduler makes it possible to do FB writes as soon as their sources are available, reducing register pressure. It also makes it possible to do the payload setup for more than one FB write message at a time, which could better hide latency.
    shader-db results on Skylake:
        total instructions in shared programs: 9110254 -> 9110211 (-0.00%)
        instructions in affected programs: 2898 -> 2855 (-1.48%)
        helped: 3
        HURT: 0
        LOST: 0
        GAINED: 1
    A reduction in instruction counts is surprising, but legitimate: the three shaders helped were spilling, and reducing register pressure allowed us to issue fewer spills/fills.
        total cycles in shared programs: 69035108 -> 68928820 (-0.15%)
        cycles in affected programs: 4412402 -> 4306114 (-2.41%)
        helped: 4457
        HURT: 213
    Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Matt Turner <mattst88@gmail.com>
    Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
* i965: Clean up #includes in the compiler. (Matt Turner, 2015-11-24, 1 file, -2/+0)
    Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
* i965: Replace HW_REG with ARF/FIXED_GRF. (Matt Turner, 2015-11-13, 1 file, -38/+15)
    HW_REGs are (were!) kind of awful. If the file was HW_REG, you had to look at different fields for type, abs, negate, writemask, swizzle, and a second file. They also caused annoying problems like immediate sources being considered scheduling barriers (commit 6148e94e2) and other such nonsense.
    Instead use ARF/FIXED_GRF/MRF for fixed registers in those files. After a sufficient amount of time has passed since "GRF" was used, we can rename FIXED_GRF -> GRF, but doing so now would make rebasing awful.
    Reviewed-by: Emil Velikov <emil.velikov@collabora.co.uk>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965: Rename GRF to VGRF. (Matt Turner, 2015-11-13, 1 file, -13/+13)
    The 2-bit hardware register file field is ARF, GRF, MRF, IMM. Rename GRF to VGRF (virtual GRF) so that we can reuse the GRF name to mean an assigned general purpose register.
    Reviewed-by: Emil Velikov <emil.velikov@collabora.co.uk>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965: Use brw_reg's nr field to store register number. (Matt Turner, 2015-11-13, 1 file, -31/+31)
    In addition to combining another field, we get to replace silliness like "reg.reg" with something that actually makes sense, "reg.nr"; and no one will ever wonder again why dst.reg isn't a dst_reg.
    Moving the now 16-bit nr field to a 16-bit boundary decreases code size by about 3k.
    Reviewed-by: Emil Velikov <emil.velikov@collabora.co.uk>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965: Remove fixed_hw_reg field from backend_reg. (Matt Turner, 2015-11-13, 1 file, -25/+25)
    Since backend_reg now inherits brw_reg, we can use it in place of the fixed_hw_reg field.
    Reviewed-by: Emil Velikov <emil.velikov@collabora.co.uk>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965/fs: Use regs_read/written for post-RA scheduling in calculate_deps (Jason Ekstrand, 2015-11-07, 1 file, -11/+4)
    Previously, we were assuming that everything read/wrote exactly 1 logical GRF (1 in SIMD8 and 2 in SIMD16). This isn't actually true. In particular, the PLN instruction reads 2 logical registers in one of the components. This commit changes post-RA scheduling to use regs_read and regs_written instead so that we add enough dependencies.
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92770
    Reviewed-by: Matt Turner <mattst88@gmail.com>
    Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
* i965/sched: don't calculate live intervals for post-RA scheduling (Connor Abbott, 2015-10-31, 1 file, -1/+2)
    For some reason, this causes assertions on gm965 only. In any case, it's unnecessary since we don't need liveness information in the post-RA scheduler.
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92744
    Cc: Mark Janes <mark.a.janes@intel.com>
    Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
* i965/sched: use liveness analysis for computing register pressure (Connor Abbott, 2015-10-30, 1 file, -56/+229)
    Previously, we were using some heuristics to try and detect when a write was about to begin a live range, or when a read was about to end a live range. We never used the liveness analysis information used by the register allocator, though, which meant that the scheduler's and the allocator's ideas of when a live range began and ended were different. Not only did this make our estimate of the register pressure benefit of scheduling an instruction wrong in some cases, but it was preventing us from knowing the actual register pressure when scheduling each instruction, which we want to have in order to switch to register pressure scheduling only when the register pressure is too high.
    This commit rewrites the register pressure tracking code to use the same model as our register allocator currently uses. We use the results of liveness analysis, as well as the compute_payload_ranges() function that we split out in the last commit. This means that we compute live ranges twice on each round through the register allocator, although we could speed it up by only recomputing the ranges and not the live in/live out sets after scheduling, since we only shuffle around instructions within a single basic block when we schedule.
    Shader-db results on bdw:
        total instructions in shared programs: 7130187 -> 7129880 (-0.00%)
        instructions in affected programs: 1744 -> 1437 (-17.60%)
        helped: 1
        HURT: 1
        total cycles in shared programs: 172535126 -> 172473226 (-0.04%)
        cycles in affected programs: 11338636 -> 11276736 (-0.55%)
        helped: 876
        HURT: 873
        LOST: 8
        GAINED: 0
    v2: use regs_read() in more places.
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
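    One way to picture the model described above (a rough, self-contained sketch; the field and function names are stand-ins, not the scheduler's real data structures): with precomputed live ranges per virtual GRF, the pressure change from issuing an instruction is the size of every range it opens minus the size of every range it closes.

        #include <vector>

        struct live_range_sketch { int start, end; };   /* instruction indices */

        /* Pressure delta (in registers) if the instruction at index 'ip',
         * reading the virtual GRFs in 'reads' and writing those in 'writes',
         * were issued at that point. */
        static int pressure_delta(int ip,
                                  const std::vector<live_range_sketch> &ranges,
                                  const std::vector<int> &sizes,
                                  const std::vector<int> &reads,
                                  const std::vector<int> &writes)
        {
           int delta = 0;
           for (int grf : writes)
              if (ranges[grf].start == ip)   /* this write opens a live range */
                 delta += sizes[grf];
           for (int grf : reads)
              if (ranges[grf].end == ip)     /* this read closes a live range */
                 delta -= sizes[grf];
           return delta;
        }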
* i965: dump scheduling cycle estimates (Connor Abbott, 2015-10-30, 1 file, -0/+20)
    The heuristic we're using is rather lame, since it assumes everything is non-uniform and loops execute 10 times, but it should be enough for measuring improvements in the scheduler that don't result in a change in the number of instructions.
    v2:
    - Switch loops and cycle counts to be compatible with older shader-db.
    - Make loop heuristic 10x to match with spilling code.
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
* i965/sched: write-after-read dependencies are free (Connor Abbott, 2015-10-30, 1 file, -5/+5)
    Although write-after-write dependencies have the same latency as read-after-write dependencies due to how the register scoreboard works, write-after-read dependencies aren't checked by the EU at all, so they're purely a constraint on how the scheduler can order the instructions.
    v2: fix accumulator dependencies too.
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
* i965: fix cycle estimates when there's a pipeline stall (Connor Abbott, 2015-10-30, 1 file, -7/+8)
    The issue time for an instruction is how many cycles it takes to actually put it into the pipeline. If there's a pipeline stall that causes the instruction to be delayed, we should first take that into account to figure out when the instruction would start executing and *then* add the issue time. The old code had it backwards, and so we would underestimate the total time whenever we thought there would be a pipeline stall by up to the issue time of the instruction.
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
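    In other words (a paraphrase of the description above, not the actual diff; the variable names are illustrative):

        /* Estimate the cycle at which an instruction finishes issuing, given
         * the current time, the cycle its dependencies unblock it, and its
         * issue time in cycles. */
        static int advance_time(int time, int unblocked_time, int issue_time)
        {
           /* Old (backwards): add issue_time first, then take the max with
            * unblocked_time, which under-counts by up to issue_time whenever
            * there is a stall. */
           if (time < unblocked_time)
              time = unblocked_time;    /* wait out the stall first...   */
           return time + issue_time;    /* ...then add the issue cycles. */
        }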
* i965: Test fixed_hw_reg.file against BRW_IMMEDIATE_VALUE, not IMM. (Matt Turner, 2015-10-29, 1 file, -4/+4)
    No functional change, since they were both 3, but BRW_IMMEDIATE_VALUE is the hardware value and IMM was the IR value -- and you can see that BRW_IMMEDIATE_VALUE was correctly used in the context of this patch.
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* nir: remove dependency on glsl (Rob Clark, 2015-10-16, 1 file, -1/+1)
    Move glsl_types into NIR, now that the dependency on glsl_symbol_table has been split out. Possibly makes sense to rename things at this point, but if we do that I'd like to keep it split out into a separate patch to make git history easier to follow (IMHO).
    v2: fix android build
    v3: I f***ing hate scons.. but at least it builds
    Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
    Signed-off-by: Rob Clark <robclark@freedesktop.org>
* i965: Turn BRW_MAX_MRF into a macro that accepts a hardware generation (Iago Toral Quiroga, 2015-09-21, 1 file, -2/+2)
    There are some bug reports about shaders failing to compile in gen6 because MRF 14 is used when we need to spill. For example:
        https://bugs.freedesktop.org/show_bug.cgi?id=86469
        https://bugs.freedesktop.org/show_bug.cgi?id=90631
    Discussion in bugzilla pointed to the fact that gen6 might actually have 24 MRF registers available instead of 16, so we could use other MRF registers and avoid these conflicts (we still need to investigate why some shaders need up to MRF 14 anyway, since this is not expected).
    Notice that the hardware docs are not clear about this fact: SNB PRM Vol4 Part2's "Table 5-4. MRF Registers Available in Device Hardware" says "Number per Thread" - "24 registers". However, SNB PRM Vol4 Part1, 1.6.1 Message Register File (MRF) says: "Normal threads should construct their messages in m1..m15. (...) Regardless of actual hardware implementation, the thread should not assume that MRF addresses above m15 wrap to legal MRF registers."
    Therefore experimentation was necessary to evaluate if we had these extra MRF registers available or not. This was tested in gen6 using MRF registers 21..23 for spilling and doing a full piglit run (all.py) forcing spilling of everything on the FS backend. It was also tested by doing spilling of everything on both the FS and the VS backends with a piglit run of shader.py. In both cases no regressions were observed. In fact, many of these tests were helped in the cases where we forced spilling, since that triggered the same underlying problem described in the bug reports.
    Here are some results using INTEL_DEBUG=spill_fs,spill_vec4 for a shader.py run on gen6 hardware:
        Using MRFs 13..15 for spilling: crash: 2, fail: 113, pass: 6621, skip: 5461
        Using MRFs 21..23 for spilling: crash: 2, fail: 12, pass: 6722, skip: 5461
    This patch sets the ground for later patches to implement spilling using MRF registers 21..23 in gen6.
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965/fs: Use exec_size instead of dst.width for computing component size (Jason Ekstrand, 2015-06-30, 1 file, -2/+2)
    There are a variety of places where we use dst.width / 8 to compute the size of a single logical channel. Instead, we should be using exec_size.
    Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
    Acked-by: Francisco Jerez <currojerez@riseup.net>
* i965: Rename backend_visitor to backend_shader (Jason Ekstrand, 2015-05-28, 1 file, -9/+9)
    The backend_shader class really is a representation of a shader. The fact that it inherits from ir_visitor is somewhat immaterial.
    Reviewed-by: Matt Turner <mattst88@gmail.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
* i965: Add typed surface access opcodes. (Francisco Jerez, 2015-05-04, 1 file, -0/+3)
    Acked-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
* i965: Add untyped surface write opcode. (Francisco Jerez, 2015-05-04, 1 file, -0/+1)
    Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
    Acked-by: Kenneth Graunke <kenneth@whitecape.org>
* i965: Use device_info instead of the context in instruction scheduling (Jason Ekstrand, 2015-04-22, 1 file, -11/+10)
    Reviewed-by: Matt Turner <mattst88@gmail.com>
* i965: Make scheduler cycle estimates use the proper stage name. (Kenneth Graunke, 2015-02-19, 1 file, -5/+6)
    Previously, the vec4 backend labeled shaders as "vec4" - now it uses the specific names "VS" and "GS". The FS backend now correctly prints "VS" for vertex shaders (rather than "fs"). It also prints "FS" instead of "fs" for fragment shaders; preserving that behavior didn't seem essential.
    Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
    Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>
    Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
* i965/vec4: Don't infer MRF dependencies for send from GRF instructions. (Francisco Jerez, 2015-02-10, 1 file, -14/+18)
    Reviewed-by: Matt Turner <mattst88@gmail.com>
* i965/vec4: Fix the scheduler to take into account reads and writes of multiple registers. (Francisco Jerez, 2015-02-10, 1 file, -5/+10)
    v2: Avoid nested ternary operators in vec4_instruction::regs_read(). (Matt)
    Reviewed-by: Matt Turner <mattst88@gmail.com>
* i965/fs: Remove dependency of fs_inst on the visitor class. (Francisco Jerez, 2015-02-10, 1 file, -4/+4)
    The fs_visitor argument of fs_inst::regs_read() wasn't used at all.
    Reviewed-by: Matt Turner <mattst88@gmail.com>
* i965: Factor out virtual GRF allocation to a separate object. (Francisco Jerez, 2015-02-10, 1 file, -5/+5)
    Right now virtual GRF book-keeping and allocation is performed in each visitor class separately (among a hundred other things), leading to duplicated logic in each visitor and preventing layering, as it forces any code that manipulates i965 IR and needs to allocate virtual registers to depend on the specific visitor that happens to be used to translate from GLSL IR.
    v2: Use realloc()/free() to allocate VGRF book-keeping arrays (Connor).
    Reviewed-by: Matt Turner <mattst88@gmail.com>
* i965: Don't make instructions with a null dest a barrier to scheduling. (Matt Turner, 2015-01-23, 1 file, -4/+8)
    Now that we properly track accumulator dependencies, the scheduler is able to schedule instructions between the mach and mov in the common integer multiplication pattern:
        mul  acc0, x, y
        mach null, x, y
        mov  dest, acc0
    Since a null destination implies no dependency on the destination, we can also safely schedule instructions (that don't write the accumulator) between the mul and mach.
        GAINED: 103
        LOST: 43
    Causes one program to spill (643 -> 1076 instructions).
    I committed this patch last year (commit 42a26cb5) but reverted it (commit 0d3f83f4) after inexplicable artifacts in Kerbal Space Program (bug 78648). Tapani reapplied this patch and could not reproduce the bug with current Mesa.
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>