x86_64-linux-glibc2.11-4.8" into idea133
Unlike its 4.6 counterpart, the new
x86_64-linux-glibc2.11-4.8/x86_64-linux/include/c++/4.8/x86_64-linux/bits/gthr-default.h
(line #39) no longer unconditionally includes <unistd.h>, which provides the getopt() prototype.
Change-Id: I53310bb0f27e6ed7b4ee732ef301c4868decccb4
These warnings appear when building the sources through the emulator's
standalone build system, not the platform one.
Change-Id: Ib5d51cf6211f32763be00c7436ae14c06f76b436
The function uses DescribePixelFormat(), which doesn't return a
count but the maximum index into a 1-based list of possible
formats, so adjust the code accordingly.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd318302(v=vs.85).aspx
Change-Id: Id0cc92249348e6c845570adaaf4c280721a194bb
The eglWaitGL implementation didn't restore the previously
bound API after calling eglWaitClient. This probably isn't
a big concern for emugl correctness, but fixing this removes
a compiler warning.
See http://www.khronos.org/registry/egl/sdk/docs/man/xhtml/eglWaitGL.html
Change-Id: I143ffeeefa01aff502d27d4e1d6f892f0d1efe5b
This patch fixes the Windows SDK build. A previous patch
apparently broke it even though I could not reproduce this
locally before submitting.
When using the platform build to generate Windows binaries,
it now uses the host Linux 'emugen' binary instead of
rebuilding the tool from sources.
Note that the emulator's standalone build already supports building
host Linux binaries by default, even when targeting Windows.
+ Add a missing module import that got lost in translation
for some odd reason.
Change-Id: I2ccd962d8b3df859b2cba82573225820b69b0d32
This patch improves the build files for the GPU emulation
libraries to allow them to be built directly with the emulator's
own standalone build system.
Change-Id: I205392bdfe4223a5c43fa67e24a2beffcbcbc07a
Final patch to completely remove dependencies on
libcutils/libutils/liblog from the host-side GPU
emulation libraries.
Change-Id: I84a058bbd0ca676b18c0b0a094ac8bae692f9c94
This patch removes the use of the 'thread_store' class from
<utils/threads.h> by providing its own implementation instead
under shared/emugl/common/thread_store.h, plus appropriate
unit tests.
Note that unlike the Android version, this properly destroys
the thread-local values on thread exit (instead of leaking
them).
+ Provide a LazyInstance class used to perform thread-safe
lazy initialization of static variables without the use
of C++ constructors.
Change-Id: Iabe01fbd713c6872b5fe245d7255c3c03749a88a
This patch removes the dependency on android::Mutex from
<cutils/threads.h> by providing a custom implementation, which
is a simple wrapper around pthread_mutex_t / CriticalSection,
under shared/emugl/common/mutex.h
+ Provide unit tests.
Change-Id: I379ef0c480c478ab9ba5f2faaf8274267eff37ba
This patch removes a few minor compiler warnings related
to unused local variables.
Change-Id: Icd4b3b478dce0c38cc1dd04419db7350dcbdb8f6
This gets rid of two copies of SmartPtr.h and replaces them with
a single implementation under shared/emugl/common/smart_ptr.*
Note that this uses a new include path rooted at the shared/
directory for classes that are likely to be built both for
the host and the device (in case we back-port this to
device/generic/goldfish/opengl/ in the future).
+ Add a gtest-based set of unit tests; after building, run
'emugl_common_host_unittests' to execute them.
Note that this probably needs a 64-bit version as well,
will come later once I find a way to build GTest for 64-bits
without breaking the platform build :-)
Also note that this moves the class to the 'emugl' namespace,
in order to make the code easier to build out of the platform
tree, and embed it in other projects. More classes will be
transitioned / refactored in future patches.
AOSP_BUG=64806
Change-Id: Ieb326c5f3f002a21537b8a391a82ce2ef9925073
A small patch to prepare for the out-of-platform-tree build.
This one places SDL-related definitions in a new build file
(sdl.mk) and provides a way for the emulator's build system
to supply its own SDL compiler and linker flags.
+ Add missing KHR/khrplatform.h file.
Change-Id: I496f1a49730ffbfae80a074e09611bd07777cf1a
This patch gets rid of all compiler warnings for the
GPU emulation libraries when building on a Linux host.
Note that GLcommon/GLutils.h now provides two new functions
to perform 'safe' type casts between unsigned integers and
pointers: SafePointerFromUInt() and SafeUIntFromPointer().
Change-Id: I01c48bbd72f925d70eb9831f57e15815e687121f
bug: 10456411
Fix for internal bug
Change-Id: I85181d358f1844b25cc85fbaf5f64842d5ed6f22
The pointer returned by glGetString is owned by the GL context, so
when the GL context is destroyed it may become invalid. This happens
on Mesa, for example. Make/manage our own copy of the extension string
to use after destroying the context.
Bug: 9627179
Change-Id: I605536151ee64f50403546d0d38c5b5f1f27dd73
This is what the pixel format attribute lists in
MacPixelFormatsAttribs.m try to achieve, but despite this, alpha
is nonzero in every returned configuration on certain (all?)
machines (at least on 10.8.5 with an Nvidia GPU). This means that
EGL won't return any configs at all with alpha == 0.
The default config chooser in GLSurfaceView requires a config with
alpha == 0. This means that previously, this view failed to start up
on the emulator on OS X, unless set up with a non-default config
chooser.
Change-Id: I2bf3e92a026c525a97d6746e491d920ce127787f
* changes:
Ignore empty ranges
Fix rangeUnion return value in the successful case
Change-Id: I0cccba6795e3b9709cc646f6fa55bb60e6446ea1
Even if the ranges could be merged, rangeUnion was returning false.
Most probably this was a typo.
Change-Id: I4cf8a19bd701a8501c2d49cf0bfa996f9e12c02f
|
|\ \
| | |
| | |
| | |
| | |
| | | |
* changes:
ColorBuffer: Remove the y-invert Intel GPU bug workaround
EglMacApi: Use the right pbuffer texture target and format parameters
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The fix in 57501158 makes sure pbuffers get initialized
properly. The previously incorrect pbuffer texture parameters
had different effects on different GPUs/drivers. On an Nvidia
GT 650M, the buffers were rendered properly but glReadPixels calls
were inverted, while Intel HD 3000/4000 seemed to get the rendering
inverted as well.
By passing proper pbuffer texture parameters, the bug (which in
itself was no driver bug but inconsistent behaviour when given
invalid parameters) vanishes.
This reverts the bug workaround parts of 9322c5cb (from
development.git).
Change-Id: Ibc38147967361cba6ba85cdf3b4e9a2e2ee6d881
The target and format parameters are in the EGL parameter range
(and are stored in EGLints), while nsCreatePBuffer (which calls
NSOpenGLPixelBuffer initWithTextureTarget) takes GLenums.
This is very similar to the corresponding function in EglWindowsApi.cpp,
but unlike that function, there's nothing similar to
WGL_NO_TEXTURE_ARB in initWithTextureTarget, so something has to be
specified in all cases.
Previously, the default EGL_NO_TEXTURE (0x305C) was passed through.
While this mostly worked just fine, it had the surprising hidden
side effect of using a vertically flipped coordinate system in
glReadPixels (with the origin being the top left corner instead
of the bottom left one, which is default in OpenGL).
This makes the EncodeDecodeTest media CTS test pass with surface
output on the emulator on Mac OS X. (This test renders the decoded
video to a pbuffer and checks individual pixel values using
glReadPixels.)
Change-Id: I21a2430ce6334a5e82ea3203c4d157f5bad1558d
The EGL specs say that eglChooseConfig doesn't update num_config
if it returns a failure (which is exactly what
Translator/EGL/EglImp.cpp does). Therefore, if this function
returned a failure (e.g. due to an unsupported egl attribute),
nConfigs was left untouched, meaning that the configs array
was left uninitialized but treated as if it was full of valid
configs.
Change-Id: I3809272298ea10d65dc939849d2e3c17d1158da6
Instead of using a TLS destructor, RenderThreadInfo is now an automatic
variable of RenderThread::Main(), so is automatically destroyed on thread exit.
RenderThread::Main() now explicitly unbinds the thread's context and surfaces
from the FrameBuffer, ensuring that the thread has released all references
before it exits.
This fixes a bug where RenderThreadInfo's destructor was releasing the
references in the TLS destructor, which caused ColorBuffer's destructor to call
FrameBuffer::bind_locked() when the FrameBuffer lock wasn't held. By clearing
the references in FrameBuffer::bindContext(), locking around destruction
happens correctly.
Change-Id: I617cea838d5f352a597ccc6d3dbd8f9c08cb91bd
Fix a small bug that caused the emulator to crash when used with some
graphic libraries.
Change-Id: Ifb7e0b11a8302d0538632dac467d187dfcdfda47
In cases when RenderThread failed to start, RenderServer would free the
stream first and then the thread. However, the thread itself also attempts to
free the stream and this caused a crash of the emulator in some corner
cases.
Change-Id: I2e508c37ab0a09c9261b30e59072bf1a44982dfe
- The lack of tlsDestruct causes resource leakage on Linux.
Change-Id: I6f5308fd00da06dbecd9246393021e3d72aa40c3
- SocketStream is passed from RenderServer::Main, but it is not deleted.
- As a result, the emulator keeps these sockets open, leaking them.
Change-Id: I4f2445855fc45a8d5f20f4d598e7021e8f3e000c
- Setting an int should be OK even without a lock, as there is no synchronization on the reader side.
- A deadlock can happen inside the constructor if the same error handler is already
set and an error happens inside the constructor, in a place like XSync.
Change-Id: I2f401067e0555ae092df23f57cc9ab054a1555d7
Bug: 39835
Change-Id: Ied3f43b76d2bb1bdba478f57122ec0ef4d967ae4
EglOS::getDefaultDisplay() can return an invalid display, e.g. on X11
if $DISPLAY is not set. This is called from the EglGlobalInfo
constructor, which doesn't have a good way to indicate failure. So
instead EglGlobalInfo::addDisplay() checks that the display is valid
before wrapping it in an EglDisplay.
Bug: 7020498
Change-Id: Id18fc568dae5bff4b45b706f3322ae5e4785d95d
When the emulator window has non-1.0 scaling, the scale is applied
when blitting the Android framebuffer to the host window. For the
OnPost callback, we were not only reading the image back from the
window (so post-scaling), we were also storing it in a 1.0-scaled
buffer. Now we read back from the Android framebuffer image, which is
always 1.0-scaled.
Change-Id: Ia9c974711a9a4b0b19f4b997f65ecc64481b4c6a
Change-Id: I84133fb36d8f15ed33e6bcba2be158e43c903901
Previously we used a hardcoded address (tcp port, unix pipe path,
etc.) for the OpenGLRender system. Multiple emulators would all try to
listen on the same address, with the system non-deterministically (?)
choosing which one accepted each new connection. This resulted in
frames going to the wrong emulator window, one emulator shutting down
another's OpenGL system, etc.
Now the OpenGLRender server requests an unused tcp port or derives a
path from the pid, and reports the address back to the emulator client
to use for future connections from the guest.
Change-Id: I6af2eac0c7f27670a3b6595772eebc7aa2b24688
This leak has always been there, but normally only leaks one socket
per emulator instance. Worse, though, is that the socket is listening
for connections on a hardcoded port, so it prevents other emulators
from listening on that port. Since we now start the GL renderer
briefly in every emulator instance (for GL string collection) this
means only one emulator can run at a time, even if none are using GL
acceleration.
Even with this fix, a GL-accelerated emulator will prevent any other
emulator (accelerated or not) from starting, since it is listening on
the hardcoded port, and the new emulator will try to listen on and
connect to that port at least for GL string collection. That will be
fixed in a future change.
Bug: 33383
Change-Id: I62a8a67eb6afb6c53cb41a19d00b6449cf5e1abe
Switch the min and mag filters for the ColorBuffer's texture object to
linear instead of nearest. This gives much better results when posting
a colorbuffer to a scaled framebuffer, which happens when the android
screen + emulator skin is too big to fit on the user's display.
The ColorBuffer's own texture object is only actually used as a
texture during post(); when used as a texture by the guest system
(e.g. by surfaceflinger composition) it's mapped through an EGLImage
to a guest-owned texture object, which has its own filter settings. So
we can just change the filter settings for the ColorBuffer texture
object at initialization without affecting anything else.
SDK Bug: 6721429
Change-Id: I8ec81125d076e0ed77a44f8b0dce412fa3cecabf
ColorBuffer wasn't destroying its blit texture and associated
EGLImage, leaking one pair per Android gralloc buffer.
Change-Id: I2fa42d2ecbb654edca7b224bd002d7513a08a633
http://code.google.com/p/android/issues/detail?id=33445
See b/5680952 "Compilation warnings in etc1.cpp" for discussion.
This is a manual merge of an update that was made to
frameworks/native/opengl/libs/ETC1/etc1.cpp.
sdk/emulator/opengl/host/libs/Translator/GLcommon/etc1.cpp is an exact
copy of frameworks/native/opengl/libs/ETC1/etc1.cpp, so we might as well
keep the two versions in sync.
Bug: 5680952
Change-Id: Icf5d5ed2e7c5c79eb9677d210b1ff5fee507271d
Because of the way the SDK and Android system images are branched,
host code that goes into the SDK tools can't live in the same
repository as code that goes into the system image. This change keeps
the emugl host code in sdk.git/emulator/opengl while moving the emugl
system code to development.git/tools/emulator/opengl.
A few changes were made beyond simply cloning the directories:
(a) Makefiles were modified to only build the relevant components. Not
doing so would break the build due to having multiple rule
definitions.
(b) Protocol spec files were moved from the guest encoder directories
to the host decoder directories. The decoder must support older
versions of the protocol, but not newer versions, so it makes
sense to keep the latest version of the protocol spec with the
decoder.
(c) Along with that, the encoder is now built from checked in
generated encoder source rather than directly from the protocol
spec. The generated code must be updated manually. This makes it
possible to freeze the system encoder version without freezing the
host decoder version, and also makes it very obvious when a
protocol change is happening that will require special
backwards-compatibility support in the decoder/renderer.
(d) Host-only and system-only code were removed from the repository
where they aren't used.
(e) README and DESIGN documents were updated to reflect this split.
No actual source code was changed due to the above.
Change-Id: I70b576a70ac3dc94155f931508b152178f1e8cd5
MIPS cannot handle unaligned accesses, so this patch changes the
direct assignment of ints/floats to use memcpy instead.
Signed-Off-By: Bhanu Chetlapalli <bhanu@mips.com>
Change-Id: I82600dece8f48f718f73b49cdf831094bbfdcde5
Since per-frame readback is slow and clients don't need it on all the
time, this change allows the callback to be registered after
initialization, and allows it to be disabled later.
Change-Id: Ic73d4515d302a0981ee0c80b9e6f9ba5c84b82ae
This also changes the strings reported by the default OpenGL ES
1.1/2.0 to OpenGL translators so they include the strings from the
underlying OpenGL implementation. This will give more useful bug
reports and SDK deployment statistics.
Change-Id: Id2d231a4fe3c40157c24a63ec19785826e037fd3
The emulator opengles.c file duplicated the function declarations from
libOpenglRenderer's render_api.h instead of including it directly.
This led to multiple bugs since the declarations didn't actually
match, but there was no way for the compiler or dynamic loader to
check this.
This change makes opengles.c include render_api.h to get function
pointer prototypes, and changes the prototypes/implementation as
necessary to make both sides actually match. It should be much more
difficult to introduce interface mismatch bugs now.
Two bugs this change would have prevented:
(a) The interface mismatch caused by inconsistent branching which led
to GPU acceleration crashing on Windows. With this change, we
would have caught the problem at compile time.
(b) The emulator verbose log has always been printing "Can't start
OpenGLES renderer?" even when the renderer started fine. This is
because the renderer was returning a bool (true == success) but
the emulator's declaration said it returned int, and the emulator
assumed 0 meant success. This difference in return type should now
be caught at compile time.
Change-Id: Iab3b6960e221edd135b515a166cf991b62bb60c9
The emulator GLES support has two interfaces: a host shared library
interface used by QEMU, and a protocol between the platform and the
host. The host library interface is not versioned; QEMU and the GLES
renderer must match. The protocol on the other hand must be backwards
compatible: a new GLES renderer must support an older platform image.
Thus for branching purposes it makes more sense to put the GLES
renderer in sdk.git, which is branched along with qemu.git for SDK
releases. Platform images will be built against the protocol version
in the platform branch of sdk.git.
Change-Id: I2c3bce627ecfd0a4b3e688d1839fe10755a21e58
This project code is moving to live under development.git/tools/emulator
Change-Id: I3f7673bc17681a0ffa14bb0b4d0880977b77f24d