Commit message

This will be useful for things such as function inlining.

This is useful if you want to clone a single function_impl, for
instance, to do function inlining.

This can happen if a function ends in a return instruction and you remove
the return.

All it does is remove the return at the end, but it's good enough for
simple functions.

Otherwise, we have a problem when we go to print functions with arguments,
because their names get added to the hash table when they are declared,
which happens after we print the prototype.

In particular, this commit adds support for computing gl_GlobalInvocationID
and gl_LocalInvocationIndex from other intrinsics.
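
For reference, these built-ins are defined in terms of the other compute
built-ins roughly as follows (sketched here in GLSL; the pass emits the
equivalent NIR ALU operations rather than shader code):

    // gl_GlobalInvocationID: this invocation's position within the whole dispatch.
    uvec3 global_id = gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID;

    // gl_LocalInvocationIndex: flattened index within the local work group.
    uint local_index = gl_LocalInvocationID.z * gl_WorkGroupSize.x * gl_WorkGroupSize.y
                     + gl_LocalInvocationID.y * gl_WorkGroupSize.x
                     + gl_LocalInvocationID.x;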

Now that we have a helper in the builder for system values and a helper in
core NIR to get the intrinsic opcode, there's really no point in having
things split out into a helper function. This commit "modernizes" this
pass to use helpers better and look more like newer passes.

While we're at it, go ahead and make nir_lower_clip use it.
Cc: Rob Clark <robclark@gmail.com>

The one user of this (i965) only ever calls it while in SSA form.

glslang is giving us 0, which causes the SIMD8 GS compile to hit an
assert.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>

Reviewed-by: Rob Clark <robdclark@gmail.com>

It moved with the nir_intrinsic_load/store update.

This pulls in nir_intrinsic_load/store changes and the switch of all
uniforms in i965 to bytes. This accounts for the Vulkan changes.

There is some special-casing needed in a competent back-end. However, a
back-end can do its special-casing easily enough based on whether or not
the offset is a constant. In the meantime, having the *_indirect variants
adds special cases in a number of places where they don't need to be and,
in general, only complicates things. To make matters worse, NIR had no way
to convert an indirect load/store to a direct one in the case that the
indirect was a constant, so we would still not really get what the
back-ends wanted. The best solution seems to be to get rid of the
*_indirect variants entirely.
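
As an illustration (hypothetical GLSL, not from the patch): both accesses
below now map to the same load intrinsic, and a back-end that wants to
special-case the first one only has to check whether the offset source is a
constant.

    #version 330
    uniform vec4 colors[4];
    uniform int idx;
    out vec4 frag_color;

    void main()
    {
        vec4 direct   = colors[2];    // constant offset: what used to be a plain load
        vec4 indirect = colors[idx];  // dynamic offset: what used to be a *_indirect load
        frag_color = direct + indirect;
    }
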
This commit is a bunch of different changes squashed together:
- nir: Get rid of *_indirect variants of input/output load/store intrinsics
- nir/glsl: Stop handling UBO/SSBO load/stores differently depending on indirect
- nir/lower_io: Get rid of load/store_foo_indirect
- i965/fs: Get rid of load/store_foo_indirect
- i965/vec4: Get rid of load/store_foo_indirect
- tgsi_to_nir: Get rid of load/store_foo_indirect
- ir3/nir: Use the new unified io intrinsics
- vc4: Do all uniform loads with byte offsets
- vc4/nir: Use the new unified io intrinsics
- vc4: Fix load_user_clip_plane crash
- vc4: add missing src for store outputs
- vc4: Fix state uniforms
- nir/lower_clip: Update to the new load/store intrinsics
- nir/lower_two_sided_color: Update to the new load intrinsic
NIR and i965 changes are
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
NIR indirect declarations and vc4 changes are
Reviewed-by: Eric Anholt <eric@anholt.net>
ir3 changes are
Reviewed-by: Rob Clark <robdclark@gmail.com>
NIR changes are
Acked-by: Rob Clark <robdclark@gmail.com>

v3:
* Update min/max based on latest SSBO code (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Shared variables can be accessed by other threads within the same
local workgroup. This prevents us from performing certain
optimizations with shared variables.
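
A minimal (hypothetical) compute shader showing the hazard: the value stored
into the shared array is read back by a different invocation after the
barrier, so the store cannot be treated as dead or as thread-private:

    #version 310 es
    layout(local_size_x = 64) in;

    shared uint scratch[64];
    layout(std430, binding = 0) buffer Result { uint data[]; };

    void main() {
        scratch[gl_LocalInvocationIndex] = gl_LocalInvocationIndex;
        barrier();  // synchronizes shared memory within the work group
        data[gl_LocalInvocationIndex] = scratch[63u - gl_LocalInvocationIndex];
    }
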
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

When an intrinsic atomic operation is used on a shared variable, we
translate it to a new 'shared variable' specific intrinsic function
call.
For example, a call to __intrinsic_atomic_add when used on a shared
variable will be translated to a call to
__intrinsic_atomic_add_shared.
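
In GLSL terms (a hypothetical example), this is the kind of source that now
takes the shared-variable path:

    #version 310 es
    layout(local_size_x = 32) in;

    shared uint counter;

    void main() {
        // atomicAdd() on a shared variable is translated to the new
        // shared-variable-specific intrinsic (__intrinsic_atomic_add_shared).
        atomicAdd(counter, 1u);
    }
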
v3:
* Fix stale comments copied from SSBOs (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

The compiler probably already blocks this earlier on, but we should be
checking for an SSBO here.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

When an atomic function is called, we need to check to see if it is
for an SSBO variable before lowering it to the SSBO specific intrinsic
function.
v2:
* is_in_buffer_block => is_in_shader_storage_block (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

The atomic functions can also be used with shared variables in compute
shaders.
When lowering the intrinsic in lower_ubo_reference, we still create an
SSBO-specific intrinsic, since SSBO accesses can be indirectly
addressed, whereas all compute shader shared variables live in a single
shared variable area.
v2:
* Also remove the _internal suffix from ssbo atomic intrinsic names (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

In this lowering pass, shared variables are decomposed into intrinsic
calls.
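
For example (hypothetical GLSL), a store like the one below is rewritten into
an intrinsic call that carries a byte offset into the work group's shared
area, with member offsets computed using 430-style packing as noted in the
v3 changes below:

    #version 310 es
    layout(local_size_x = 1) in;

    struct Pair {
        vec4  v;     // byte offset  0 under 430-style packing
        float f[4];  // byte offset 16, array stride 4
    };
    shared Pair s;

    void main() {
        // Lowered to (roughly) a shared-variable store intrinsic whose
        // offset operand works out to 16 + 3 * 4.
        s.f[3] = 1.0;
    }
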
v2:
* Send mem_ctx as a parameter (Iago)
v3:
* Shared variables don't have an associated interface block (Iago)
* Always use 430 packing (Iago)
* Comment / whitespace cleanup (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

We use column-major for shared variable matrices.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

This code will also be usable by the pass to lower shared variables.
Note that *const_offset is adjusted by setup_buffer_access, so it must
be initialized before calling setup_buffer_access.
v2:
* Add comment for lower_buffer_access::setup_buffer_access
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

This class has code that will be shared by lower_ubo_reference and
lower_shared_reference. (lower_shared_reference will be used to
support compute shader shared variables.)
v2:
* Add lower_buffer_access.h to makefile (Emil)
* Remove static is_dereferenced_thing_row_major from
lower_buffer_access.cpp. This will become a lower_buffer_access
method in the next commit.
* Pass mem_ctx as parameter rather than using a member variable (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

This allows the code in emit_access to be generic enough to also be used
for lowering shared variables.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

v2:
* Rename ssbo_get_array_length to ssbo_unsized_array_length_access (Iago)
* Always use this-> when referencing buffer_access_type (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>

Otherwise packed and inactive varyings get optimized away. This needs
to be prevented when using separate shader objects, where the interface
needs to be preserved.
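
For instance (a hypothetical separable vertex shader): with separate shader
objects there is no fixed consumer at link time, so an output like uv below
must survive even though no fragment shader in this program reads it:

    #version 440
    // Built as a separable program, e.g. with
    // glProgramParameteri(prog, GL_PROGRAM_SEPARABLE, GL_TRUE).
    layout(location = 0) in vec4 position;
    layout(location = 0) out vec2 uv;

    void main() {
        uv = position.xy;
        gl_Position = position;
    }
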
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>

s/suports/supports/
Signed-off-by: Andreas Boll <andreas.boll.dev@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>

Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com> [v1]
Reviewed-by: Eric Anholt <eric@anholt.net> [v1]
v2: Move new rule to Boolean simplification section
Add an a@bool != true simplification
Suggested-by: Neil Roberts <neil@linux.intel.com>

To make it match unop().
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>

... and allow the "binding" qualifier in ES 3.1 as well.
GLSL ES 3.1 incorporates only a few features from the extension
ARB_shading_language_420pack: the relaxed qualifier ordering
requirements and the binding qualifier.
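
A small (hypothetical) GLSL ES 3.1 fragment shader using the binding
qualifier, which this change accepts without any extension directive:

    #version 310 es
    precision mediump float;

    // "binding" from ARB_shading_language_420pack is part of core ES 3.1,
    // both for opaque uniforms and for uniform/buffer blocks.
    layout(binding = 0) uniform sampler2D tex;
    layout(binding = 1, std140) uniform Transforms { mat4 mvp; };

    in vec2 uv;
    out vec4 color;

    void main() {
        color = texture(tex, uv);
    }
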
Cc: "11.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>

These features would not have been enabled with #version 420 otherwise.
Cc: "11.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>