| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
SCons doesn't understand nir yet and doesn't want to compile the glsl-to-nir
pass. Move the files to their own variable so we can add them only for
automake.
Tested-by: Brian Paul <brianp@vmware.com>
|
|
|
|
|
|
|
| |
These are used by code that doesn't necessarily link to libglsl.la. Move
them to shader_enums.[ch] where we keep similar helpers.
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
|
|
|
|
|
|
|
| |
libglsl_la_SOURCES includes both NIR_FILES and LIBGLSL_FILES, so for
libglsl.la consumers, this is a no-op. libnir.la however no longer uses
any GLSL IR infrastructure and can be used without also linking to
libglsl.la.
Acked-by: Matt Turner <mattst88@gmail.com>
|
|
|
|
|
|
|
|
|
|
| |
v2:
* Fill UboInterfaceBlockIndex and SsboInterfaceBlockIndex in
split_ubos_and_ssbos (Iago)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
|
|
|
|
|
|
|
|
|
|
| |
I believe that `1u << x`, where x >= 32, yields undefined results
according to the C standard.
In particular, MSVC warns: `warning C4334: '<<' : result of 32-bit shift
implicitly converted to 64 bits (was 64-bit shift intended?)`.
Reviewed-by: Brian Paul <brianp@vmware.com>
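A minimal C sketch of the fix, assuming the mask is meant to be 64 bits wide
(the helper name here is hypothetical):

    #include <stdint.h>

    /* Hypothetical helper: x may legitimately reach 32 or more. */
    static uint64_t
    bit_mask(unsigned x)
    {
       /* 1u << x would be evaluated in 32 bits and is undefined for x >= 32;
        * widening the constant first makes the shift happen in 64 bits. */
       return UINT64_C(1) << x;
    }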
|
|
|
|
|
| |
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit adds lowering options for the following opcodes:
- nir_op_fmod
- nir_op_bitfield_insert
- nir_op_uadd_carry
- nir_op_usub_borrow
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
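Backends opt in through their nir_shader_compiler_options; a sketch, assuming
the flag names simply mirror the opcodes listed above:

    /* Sketch of a backend's NIR options block (assumes #include "nir.h";
     * field names assumed to follow the opcode names). */
    static const nir_shader_compiler_options options = {
       .lower_fmod = true,             /* fmod(a, b) -> a - b * floor(a / b) */
       .lower_bitfield_insert = true,  /* bitfield_insert -> shift/mask sequence */
       .lower_uadd_carry = true,       /* uadd_carry -> b2i(ult(iadd(a, b), a)) */
       .lower_usub_borrow = true,      /* usub_borrow -> b2i(ult(a, b)) */
    };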
|
|
|
|
|
|
|
|
| |
Both were defined as returning bool, but the gpu_shader5 functions are
defined to return int. Also, we had the parameters for usub borrow
backwards in the folding expression.
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
|
|
|
|
|
|
| |
This was added in 54f583a20; error handling has improved since then.
The test this was added to fix now fails earlier, since 01822706ec.
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
vector_insert takes a vector, a scalar location, and a scalar value,
and produces a new vector with that component updated. As such, it
can't be vectorized properly.
vector_extract takes a vector and a scalar location, and returns
that scalar component of the vector. Vectorization doesn't really
make any sense.
Treating both as horizontal operations makes sure the vectorizer
won't try to touch these.
Found by inspection.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
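A plain-C sketch of the semantics described above (illustrative only; the
real opcodes operate on GLSL IR values, not C arrays):

    #include <string.h>

    /* vector_insert: copy the vector with a single component replaced. */
    static void
    vector_insert(float dst[4], const float vec[4], unsigned location, float value)
    {
       memcpy(dst, vec, 4 * sizeof(float));
       dst[location] = value;   /* only this component changes */
    }

    /* vector_extract: read one dynamically indexed component. */
    static float
    vector_extract(const float vec[4], unsigned location)
    {
       return vec[location];
    }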
|
|
|
|
|
|
|
|
|
| |
Previously each member was counted as using a single slot;
count_attribute_slots() fixes the count for array and struct members.
Also, don't assign a negative value to the unsigned expl_location variable.
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
|
|
|
|
|
| |
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we were only reserving a single location for arrays and
structs.
We also didn't take into account implicit locations clashing with
explicit locations when assigning locations to arrays or
structs.
This patch fixes both issues.
V5: fix regression for patch inputs/outputs in tessellation shaders
V4: just use count_attribute_slots() to get the number of slots,
also calculate the correct number of slots to reserve for gs and
tess stages by making use of the new get_varying_type() helper.
V3: handle arrays of structs
V2: also fix for arrays of arrays and structs.
Acked-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
| |
This will be used in the following patch for calculating array sizes correctly
when reserving explicit varying locations.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we would pack varyings before trying to remove them; this
relied on the packing pass not packing varyings with a location of -1
in order to avoid packing varyings that should be removed.
However, this meant unused varyings with an explicit location would be
packed before they could be removed once we enable packing of them in a
later patch.
V2: fix regression in V1 removing unused varyings in multi-stage SSO,
fix regression with single stage programs.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The nir_opt_algebraic rule
(('fadd', ('flog2', a), ('fneg', ('flog2', b))), ('flog2', ('fdiv', a, b))),
can produce new fdiv operations, which need to be lowered on i965,
as we don't actually implement fdiv. (Normally, we handle this in
GLSL IR's lower_instructions pass, but in the above case we introduce
an fdiv after that point. So, make NIR do it for us.)
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Cc: mesa-stable@lists.freedesktop.org
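A sketch of the backend side, assuming a lower_fdiv-style flag in the i965
NIR options:

    /* Assumes #include "nir.h".  With this set, nir_opt_algebraic rewrites
     * fdiv(a, b) as fmul(a, frcp(b)), so no fdiv reaches instruction
     * selection. */
    static const nir_shader_compiler_options brw_options_sketch = {
       .lower_fdiv = true,
    };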
|
|
|
|
|
|
|
|
| |
There is a function dedicated to demoting unused varyings; let's
trust it to do its job.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
| |
After lowering, the matching flag is_unmatched_generic_inout is lost, so
we need to move this validation before lowering.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
|
| |
An SSO program can have multiple stages and we only want to add the externally
facing varyings. The current code was adding both the packed inputs and outputs
for the first and last stage of each program.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
|
| |
Note these are a bit uglier due to the avoidance of GNU C extensions. But
drivers that don't need to be built with compilers lacking the extension
can wrap these macros with their own.
Signed-off-by: Rob Clark <robclark@freedesktop.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
|
|
|
|
|
| |
Signed-off-by: Rob Clark <robclark@freedesktop.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
varying_matches::record tries to compute the number of components in
each varying, which varying_matches::assign_locations uses to assign
locations. With varying packing, it uses glsl_type::component_slots()
to come up with a reasonable value.
Without varying packing, it fell back to an open-coded computation
that didn't bother to handle structs at all. I believe we can simply
use 4 * glsl_type::count_attribute_slots(false), which already handles
these cases correctly.
Partially fixes rendering in GFXBench 4.0's tessellation benchmark.
(NVE0 is almost right after this, but i965 is still mostly garbage.)
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Cc: "11.0 11.1" <mesa-stable@lists.freedesktop.org>
|
|
|
|
|
| |
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
|
|
|
|
|
|
|
|
|
|
|
| |
There used to be more members, but they now share other fields
in order to keep memory use low.
Also, making the naming more generic will allow us to reuse the
field for explicit byte offsets within blocks for
ARB_enhanced_layouts.
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
| |
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
| |
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Reviewed-by: Edward O'Callaghan <eocallaghan@alterapraxis.com>
|
|
|
|
|
|
|
|
|
|
|
| |
A hugely common case when using nir_builder is to have a shader with a
single function called main. This adds a helper that gives you just that.
This commit also makes us use it in the NIR control-flow unit tests as well
as tgsi_to_nir and prog_to_nir.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
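Usage then looks roughly like this (helper name and exact signature as
assumed here for this era of nir_builder):

    /* Assumes #include "nir_builder.h". */
    static nir_shader *
    build_trivial_shader(const nir_shader_compiler_options *options)
    {
       nir_builder b;

       /* Creates a shader with a single function named "main" and points
        * the builder at its body, ready for emission. */
       nir_builder_init_simple_shader(&b, NULL, MESA_SHADER_VERTEX, options);

       /* ... emit instructions with the nir_* builder helpers ... */

       return b.shader;
    }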
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This optimizes a + b - b to just a. Modest shader-db results (BDW):
total instructions in shared programs: 7842452 -> 7841862 (-0.01%)
instructions in affected programs: 61938 -> 61348 (-0.95%)
total loops in shared programs: 2131 -> 2131 (0.00%)
helped: 263
HURT: 0
GAINED: 0
LOST: 0
but the optimization turns
gl_VertexID - gl_BaseVertexARB
into just a reference to SYSTEM_VALUE_VERTEX_ID_ZERO_BASE, which the
i965 hardware supports natively. That means we can avoid using the
internal vertex buffer for gl_BaseVertexARB in this case.
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
|
|
| |
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
|
|
|
|
|
| |
Fixes make check.
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When Connor originally drafted NIR, he copied the same function+overload
system that GLSL IR had with a few names changed. However, this
double-indirection is not really needed and has only served to confuse
people. Instead, let's just have functions which may not have unique names
and may or may not have an implementation. If someone wants to do overload
resolving, they can have a hash-table-based function+overload system in the
overload resolving pass. There's no good reason to keep it in core NIR.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
ir3 bits are
Reviewed-by: Rob Clark <robclark@gmail.com>
|
|
|
|
|
| |
Signed-off-by: Rob Clark <robclark@freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
|
|
|
|
|
|
| |
I need access to glsl_type::vec2_type from C. Wrapping vec() also gives
us access to vec3 if we need it.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of performing the read-modify-write cycle in glsl->nir, we can
simply emit a partial writemask. For locals, nir_lower_vars_to_ssa will
do the equivalent read-modify-write cycle for us, so we continue to get
the same SSA values we had before.
Because glsl_to_nir calls nir_lower_outputs_to_temporaries, all outputs
are shadowed with temporary values, and written out as whole vectors at
the end of the shader. So, most consumers will still not see partial
writemasks.
However, nir_lower_outputs_to_temporaries bails for tessellation control
shader outputs. So those remain actual variables, and stores to those
variables now get a writemask. nir_lower_io passes that through. This
means that TCS outputs should actually work now.
This is a functional change for tessellation control shaders.
v2: Relax the nir_validate assert to allow partial writemasks.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
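For example, a GLSL write such as `v.xz = ...` on a vec4 local or TCS output
can now become a single store carrying a partial mask; channel i maps to
bit i of the writemask:

    /* .x -> bit 0, .y -> bit 1, .z -> bit 2, .w -> bit 3 */
    unsigned writemask = (1u << 0) | (1u << 2);   /* "xz" == 0x5 */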
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Tessellation control shaders need to be careful when writing outputs.
Because multiple threads can concurrently write the same output
variables, we need to only write the exact components we were told.
Traditionally, for sub-vector writes, we've read the whole vector,
updated the temporary, and written the whole vector back. This breaks
down with concurrent access.
This patch prepares the way for a solution by adding a writemask field
to store_var intrinsics, as well as the other store intrinsics. It then
updates all producers to emit a writemask of "all channels enabled". It
updates nir_lower_io to copy the writemask to output store intrinsics.
Finally, it updates nir_lower_vars_to_ssa to handle partial writemasks
by doing a read-modify-write cycle (which is safe, because local
variables are specific to a single thread).
This should have no functional change, since no one actually emits
partial writemasks yet.
v2: Make nir_validate momentarily assert that writemasks cover the
complete value - we shouldn't have partial writemasks yet
(requested by Jason Ekstrand).
v3: Fix accidental SSBO change that arose from merge conflicts.
v4: Don't try to handle writemasks in ir3_compiler_nir - my code
for indirects was likely wrong, and TTN doesn't generate partial
writemasks today anyway. Change them to asserts as requested by
Rob Clark.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com> [v3]
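The read-modify-write that nir_lower_vars_to_ssa performs for a partial
writemask is effectively a per-channel select; a plain-C sketch of the idea:

    /* Keep the old value for every channel whose writemask bit is clear.
     * This is safe for locals because only one thread ever touches them. */
    static void
    apply_writemask(float dst[4], const float old_val[4],
                    const float new_val[4], unsigned writemask)
    {
       for (unsigned i = 0; i < 4; i++)
          dst[i] = (writemask & (1u << i)) ? new_val[i] : old_val[i];
    }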
|
|
|
|
|
|
|
|
|
|
| |
This field is used as a flag to optimise out any varyings that don't have
a matching varying on the other side of the interface.
The value should be the same for all varyings (except for SSO, but we can't
optimise those) by the time they reach nir, at which point it is no longer
needed.
Acked-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
|
|
|
|
|
|
|
| |
This function deals with vertex inputs and fragment
outputs, so we should count the attribute locations
correctly for the vertex inputs.
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
|
|
|
|
|
| |
This fixes the calculations for transform feedback for doubles.
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
|
|
|
|
|
|
| |
This doubles the element width for the types that are greater
than 2 elements wide.
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
|
|
|
|
|
|
| |
This doesn't apply to other stages. This is only
used in the mesa/st code, which needs further fixes.
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Vertex shader input attributes are handled differently from internal
varyings between shader stages: dvec3 and dvec4 only count as one
slot for vertex attributes, but as internal varyings they count as 2.
This patch comments all the uses of this API to clarify what we
pass in, except for one which needs further investigation.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
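The slot rule being documented, as a standalone C sketch (hypothetical
helper, not the actual glsl_type method):

    #include <stdbool.h>

    /* dvec3/dvec4 consume one slot as a vertex shader input attribute,
     * but two slots when passed between shader stages. */
    static unsigned
    dvec_slot_count(unsigned components, bool is_vertex_input)
    {
       if (is_vertex_input)
          return 1;
       return components > 2 ? 2 : 1;
    }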
|
|
|
|
|
| |
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The old function didn't work for matrices, and we need this
in other places to fix some other problems, so move it to a helper
in glsl_type and fix the one user so far.
A dual-slot double is one that has 3 or 4 components in its
base type.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
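A hypothetical C rendering of the new check (the real helper is a glsl_type
member function):

    #include <stdbool.h>

    /* A double-based type is "dual slot" when its vectors (or matrix
     * columns) are 3 or 4 components wide, since those need two slots. */
    static bool
    is_dual_slot_double(bool base_is_double, unsigned vector_elements)
    {
       return base_is_double && vector_elements > 2;
    }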
|
|
|
|
|
|
|
|
|
| |
Don't use a bool here, as for some 64-bit fixes we need
the stage.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
|
|
|
|
|
|
|
|
|
|
|
| |
As in the previous patches, these can be implemented as
any(v) -> any_nequal(v, false)
all(v) -> all_equal(v, true)
and their removal simplifies the code in the next patch.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
|
|
| |
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
|
|
|
|
|
| |
The GLSL IR to TGSI/Mesa IR paths for any_nequal have the same
optimizations the ir_unop_any paths had.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
|
|
| |
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
|
|
|
|
|
|
|
|
| |
I apparently regressed this when rewriting the built-ins using
ir_builder, in 76d2f73643f.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93387
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
|
|
|
|
|
|
|
| |
Now that we have a helper in the builder for system values and a helper in
core NIR to get the intrinsic opcode, there's really no point in having
things split out into a helper function. This commit "modernizes" this
pass to use helpers better and look more like newer passes.
Reviewed-by: Eric Anholt <eric@anholt.net>
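The pair of helpers the message refers to combine roughly like this (names
and signatures as assumed for this era of NIR):

    /* Assumes #include "nir_builder.h". */
    static nir_ssa_def *
    load_vertex_id(nir_builder *b)
    {
       /* Core NIR maps the GL system value to its intrinsic opcode... */
       nir_intrinsic_op op = nir_intrinsic_from_system_value(SYSTEM_VALUE_VERTEX_ID);

       /* ...and the builder helper emits the load in a single call. */
       return nir_load_system_value(b, op, 0);
    }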
|