| author | John Grossman <johngro@google.com> | 2012-04-12 11:53:11 -0700 |
|---|---|---|
| committer | John Grossman <johngro@google.com> | 2012-04-20 10:09:25 -0700 |
| commit | c95cfbb87d0ac5e773037019a96bfc29972d4b4e (patch) | |
| tree | 016435873185a80476d7ab1f3dae8696b2c04ebb | |
| parent | 8d314b709fdd81bb64bdaa8d72a0b19c355cefb9 (diff) | |
| download | frameworks_av-c95cfbb87d0ac5e773037019a96bfc29972d4b4e.zip frameworks_av-c95cfbb87d0ac5e773037019a96bfc29972d4b4e.tar.gz frameworks_av-c95cfbb87d0ac5e773037019a96bfc29972d4b4e.tar.bz2 | |
TimedAudioTrack: Optimize the queue trim operation.
Hand merge from ics-aah
> TimedAudioTrack: Optimize the queue trim operation.
>
> Don't perform the end PTS calculation for each buffer during trimming.
> Instead, only calculate the ending PTS of a buffer if there is no next
> buffer in the queue. This optimization assumes that the buffers being
> queued are in monotonic media time order (a fair assumption for now)
> and that the timestamps in the audio are contiguous (not a requirement
> for this API, but a reality of how it is being used right now).
>
> In the case where the audio is discontinuous on purpose, it is possible
> that this optimization will cause the system to hold one extra buffer
> which it could have safely trimmed. It should not be much of an issue
> since in real life the audio is almost always contiguous, and as long
> as the media clock is running and the mixer is mixing, the buffer will
> be used up and discarded as part of the normal flow anyway.
>
> Change-Id: I00061e85ee7d5651fcf80751646c7d7415894a14
> Signed-off-by: John Grossman <johngro@google.com>
Change-Id: I0054b58e1389fa005aa990cb5710caf4af7b706a
Signed-off-by: John Grossman <johngro@google.com>
| -rw-r--r-- | services/audioflinger/AudioFlinger.cpp | 40 |
1 file changed, 29 insertions, 11 deletions
```diff
diff --git a/services/audioflinger/AudioFlinger.cpp b/services/audioflinger/AudioFlinger.cpp
index 3a6e476..bce30d7 100644
--- a/services/audioflinger/AudioFlinger.cpp
+++ b/services/audioflinger/AudioFlinger.cpp
@@ -3994,20 +3994,38 @@ void AudioFlinger::PlaybackThread::TimedTrack::trimTimedBufferQueue_l() {
 
     size_t trimEnd;
     for (trimEnd = 0; trimEnd < mTimedBufferQueue.size(); trimEnd++) {
-        int64_t frameCount = mTimedBufferQueue[trimEnd].buffer()->size()
-                           / mCblk->frameSize;
         int64_t bufEnd;
 
-        if (!mMediaTimeToSampleTransform.doReverseTransform(frameCount,
-                                                            &bufEnd)) {
-            ALOGE("Failed to convert frame count of %lld to media time duration"
-                  " (scale factor %d/%u) in %s", frameCount,
-                  mMediaTimeToSampleTransform.a_to_b_numer,
-                  mMediaTimeToSampleTransform.a_to_b_denom,
-                  __PRETTY_FUNCTION__);
-            break;
+        if ((trimEnd + 1) < mTimedBufferQueue.size()) {
+            // We have a next buffer.  Just use its PTS as the PTS of the frame
+            // following the last frame in this buffer.  If the stream is sparse
+            // (ie, there are deliberate gaps left in the stream which should be
+            // filled with silence by the TimedAudioTrack), then this can result
+            // in one extra buffer being left un-trimmed when it could have
+            // been.  In general, this is not typical, and we would rather
+            // optimized away the TS calculation below for the more common case
+            // where PTSes are contiguous.
+            bufEnd = mTimedBufferQueue[trimEnd + 1].pts();
+        } else {
+            // We have no next buffer.  Compute the PTS of the frame following
+            // the last frame in this buffer by computing the duration of of
+            // this frame in media time units and adding it to the PTS of the
+            // buffer.
+            int64_t frameCount = mTimedBufferQueue[trimEnd].buffer()->size()
+                               / mCblk->frameSize;
+
+            if (!mMediaTimeToSampleTransform.doReverseTransform(frameCount,
+                                                                &bufEnd)) {
+                ALOGE("Failed to convert frame count of %lld to media time"
+                      " duration" " (scale factor %d/%u) in %s",
+                      frameCount,
+                      mMediaTimeToSampleTransform.a_to_b_numer,
+                      mMediaTimeToSampleTransform.a_to_b_denom,
+                      __PRETTY_FUNCTION__);
+                break;
+            }
+            bufEnd += mTimedBufferQueue[trimEnd].pts();
         }
-        bufEnd += mTimedBufferQueue[trimEnd].pts();
 
         if (bufEnd > mediaTimeNow)
             break;
```
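For readers who want the trimming strategy in isolation, here is a minimal, self-contained C++ sketch of the decision the commit describes. It is an illustration only: the `TimedBuffer` struct, the `framesToMediaTime()` helper, the assumed 48 kHz sample rate and microsecond media-time units, and the `std::vector` queue are simplifications that stand in for AudioFlinger's real `TimedBuffer`, `LinearTransform::doReverseTransform()`, and `mTimedBufferQueue`, not the actual implementation.

```cpp
// A minimal sketch of the optimized trim decision, assuming simplified types.
// TimedBuffer, framesToMediaTime() and the vector-based queue are hypothetical
// stand-ins for the real AudioFlinger classes.
#include <cstddef>
#include <cstdint>
#include <vector>

struct TimedBuffer {
    int64_t pts;         // presentation timestamp of the buffer's first frame
    int64_t frameCount;  // number of audio frames in the buffer
};

// Stand-in for the reverse media-time transform: converts a frame count into
// a media-time duration.  Assumes 48 kHz audio and microsecond media time;
// the real transform can fail, so the bool return is kept for shape.
static bool framesToMediaTime(int64_t frames, int64_t* durationOut) {
    const int64_t kFramesPerSecond = 48000;
    const int64_t kMediaTimeUnitsPerSecond = 1000000;
    *durationOut = (frames * kMediaTimeUnitsPerSecond) / kFramesPerSecond;
    return true;
}

// Count how many leading buffers end at or before mediaTimeNow and could be
// trimmed.  The optimization: whenever a next buffer exists, use its PTS as
// this buffer's end, and only fall back to the frame-count conversion for the
// final buffer in the queue.
static size_t countTrimmableBuffers(const std::vector<TimedBuffer>& queue,
                                    int64_t mediaTimeNow) {
    size_t trimEnd;
    for (trimEnd = 0; trimEnd < queue.size(); trimEnd++) {
        int64_t bufEnd;
        if ((trimEnd + 1) < queue.size()) {
            // Cheap path: assumes contiguous, monotonically increasing PTSes.
            bufEnd = queue[trimEnd + 1].pts;
        } else {
            // Last buffer: derive its end PTS from its duration.
            int64_t duration;
            if (!framesToMediaTime(queue[trimEnd].frameCount, &duration))
                break;
            bufEnd = queue[trimEnd].pts + duration;
        }
        if (bufEnd > mediaTimeNow)
            break;  // this buffer and everything after it are still needed
    }
    return trimEnd;
}
```

The design point is visible in the branch: for every buffer except the last, the end PTS comes for free from the next buffer's start, so the reverse transform runs at most once per trim pass instead of once per queued buffer.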
