| Commit message | Author | Age | Files | Lines |
Those functions rely on being able to treat the GET_PTR returned value as an
array indexed by x, but that's not the case for our tiling.
Bug #16387.
The dri_bo_map()s that follow will take care of idling the hardware as needed.
We have something similar in the X Server that covers X Server rendering; this
is the equivalent here for rendering to the front buffer. If we cared about
avoiding this at glFlush time, we could do it only when some actual
front-buffer rendering had occurred.
Bug #16392.
|
| | |
| | |
| | |
| | |
| | | |
Apparently in Y mode we get bit 6 ^ bit 9. The reflect demo in 'd' mode now
displays correctly.
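The XOR described above is plain bit math; a hypothetical helper (the name is ours, not Mesa's), assuming the swizzle flips address bit 6 whenever bit 9 is set:

```c
#include <stdint.h>

/* Hypothetical sketch of the bit 6 ^ bit 9 address swizzle observed in
 * Y mode: flip bit 6 of the byte offset whenever bit 9 is set. */
static uint32_t
swizzle_bit6_xor_bit9(uint32_t offset)
{
    uint32_t bit9 = (offset >> 9) & 1;
    return offset ^ (bit9 << 6);    /* XOR bit 9 into bit 6 */
}
```

Applying the same function twice returns the original offset, which is why the swizzle is its own inverse.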
The boolean that the server gives us for whether the region is tiled was
being used as the enum for which tiling mode it uses. Instead, guess the
correct tiling in screen setup.
Also, fix the Y-tiling pitch setup. The pitch to the next tile in Y is
32 scanlines, not 8.
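As a hedged illustration of that pitch fix (the helper name is ours, not Mesa's): an X tile spans 8 scanlines while a Y tile spans 32, so stepping to the next row of tiles advances the surface pitch times the tile height:

```c
#include <stdint.h>

/* Illustrative sketch, not Mesa's actual code: an Intel X tile is
 * 512 bytes x 8 scanlines, a Y tile is 128 bytes x 32 scanlines, so
 * the byte distance to the next row of tiles depends on the mode. */
enum tile_mode { TILE_X, TILE_Y };

static uint32_t
tile_row_pitch_bytes(enum tile_mode mode, uint32_t surface_pitch)
{
    uint32_t tile_height = (mode == TILE_Y) ? 32 : 8;
    return surface_pitch * tile_height;  /* bytes to the next tile row */
}
```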
|
| | |
| | |
| | |
| | |
| | |
| | | |
It turns out that it's not just deviceID-dependent; some additional
undefined factor determines the bit 6 swizzling. It's now controllable
with swizzle_mode=[012] until we get a response on how to detect it
automatically.
|
| | |
| | |
| | |
| | |
| | | |
This was broken in the merge of 965 blit support. It tried to lock only
when things were already locked.
Most of these were to ensure that caches got synchronized between 2d (or meta)
rendering and later use of the target as a source, such as for texture
miptree setup. Those are replaced with intel_batchbuffer_emit_mi_flush(),
which just drops an MI_FLUSH. Most of the remainder were to ensure that
REFERENCES_CLIPRECTS batchbuffers got flushed before the lock was dropped.
Those are now replaced by automatically flushing those when dropping the lock.
|
| |\ \ |
|
| | | |
| | | |
| | | |
| | | | |
This lets GEM use pwrite, for an additional 4% or so speedup.
|
| | | | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
The fencing code is not required, and waiting on the fences defeated one of
the purposes of the extension, which is to allow asynchronous readpixels.
The no_rast fallback was getting partially overwritten by later TNL init,
resulting in a segfault when things were in a mixed-up state.
|
| | | | |
| | | | |
| | | | |
| | | | | |
Apparently a bit gets flipped in the addressing for some rows of each tile.
|
| | | | |
| | | | |
| | | | |
| | | | | |
This is an API breakage only.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
The objects are swappable, so we're less concerned by excessive object
allocation now, and it's about a 20% performance improvement. If we get
concerns about the memory consumption from others, we can look into a
compromise position later.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Swap buffers is a fairly reasonable time to wait on the hardware for a
while; this keeps us from overrunning the ring.
|
| |\ \ \ \
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Conflicts:
src/mesa/drivers/dri/common/dri_bufmgr.h
src/mesa/drivers/dri/intel/intel_bufmgr_ttm.c
src/mesa/drivers/dri/intel/intel_bufmgr_ttm.h
src/mesa/drivers/dri/intel/intel_ioctl.c
This is good for about 5% on ipers on 965, and should help any CPU-bound app.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Mapping and unmapping buffers is expensive, and having the map around isn't
harmful (other than consuming address space). So, once mapped, just leave
buffers mapped in case they get re-used.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Add both MI_FLUSH and intel_batchbuffer_flush to intelEmitCopyBlit.
This ensures that the data are flushed *and* that the GEM kernel driver sees
the various memory domain transitions.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Use the new DRM_IOCTL_I915_GEM_BUSY ioctl to detect
idle buffers for re-use.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
We don't need an MI_FLUSH there, because everything that's been flushed in the
batch will eventually hit the hardware.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Otherwise, since the MI_FLUSH at the end of every batch had been removed,
non-automatic-flushing chips (965) wouldn't get flushed and apps with static
rendering would get partial screen contents until the server's blockhandler
flush kicked in.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
The right solution would probably be keeping a list of regions which have been
rendered to.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
The write_domain needs to be set after any batch buffer uses an object;
track when that happens in the new 'cpu_domain_set' field.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Only a few relocations are typically used, so don't clear the
whole thing.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This avoids kernel relocations for most batchbuffer relocs.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Track DRM GEM name changes.
Add driver hooks for bo_subdata and bo_get_subdata so that GEM can use pread
and pwrite.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Fix the kernel API to place the read/write domain information in the
relocation instead of the buffer.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Make sure 'used' tracks the right value through the whole function.
Also, use GLint for intel_batchbuffer_space in case we do bad things
in the future.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This existed to get the icache flushed. However, GEM handles this for us
now for sure, and we had disabled it prematurely anyway.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
The GEM flags are much more descriptive for what we need. Since this makes
bufmgr_fake rather device-specific, move it to the intel common directory.
We've wanted to do device-specific stuff to it before.
|
| | | | | | |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Doesn't deal with local modifications yet (need new kernel set_domain ioctl
for that to work). Also, guesses what domains are affected based on the
read/write bits set in the flags. Works for 915, probably not so much for
965.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Accessing tiled surfaces without using the fence registers requires that
software deal with the address swizzling itself.
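A minimal sketch of what that software address handling involves, assuming the standard Intel Y-tile layout (4KB tiles of 128 bytes by 32 rows, arranged as 16-byte-wide columns inside the tile); the helper name is illustrative, not Mesa's:

```c
#include <stdint.h>

/* Hedged sketch: compute the byte offset of pixel byte (x, y) within a
 * Y-tiled surface, as software must when no fence register is applying
 * the tiling in hardware.  Assumes 4KB Y tiles: 128 bytes x 32 rows,
 * laid out as eight 16-byte-wide columns of 32 rows each. */
static uint32_t
y_tiled_offset(uint32_t x, uint32_t y, uint32_t pitch)
{
    uint32_t tile_base = (y / 32) * (pitch * 32) + (x / 128) * 4096;
    uint32_t xt = x % 128, yt = y % 32;        /* position inside tile */
    return tile_base + (xt / 16) * 512 + yt * 16 + (xt % 16);
}
```

Any bit 6 swizzle the hardware applies would still have to be XORed into the result of this computation.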
|
| | | | | | |
|
| | | | | | |
|
| | | | | | |
|
| | | | | | |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Small integers are much prettier, and they let me correlate with DRM debug output.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This is just cosmetic, to produce less scary values when the ioctl fails and
doesn't return values there.
Instead of attempting to fix these for GEM, just disable until GEM is
working.