author:    Tejas Shikhare <tshikhar@codeaurora.org>  2011-11-17 17:18:01 -0800
committer: Steve Kondik <shade@chemlab.org>  2012-05-21 11:43:55 -0700
commit:    ef930d1572d5baab13e6a25ea842d83a861a96e2
tree:      cd3c40835715da0738295e812f46e0f6a9d4f345
parent:    0a872611c6ebe9a8f67dfb294e18b06c8fffd52d
audio: Squashed commit of LPA support from CAF
* Patches wrangled by Kali
frameworks/base: LPA and routing api implementation
- Integrated routing APIs for LPA with the new AUDIO HAL
- Integrated LPAPlayer for LPA Playback on 8660
Change-Id: I345f62afa53057261602042ac348d43333cc8732
libstagefright: Integrate LPAPlayerALSA for LPA playback on 8960
Change-Id: Ie7ec686bef7a8c0b41c06ae11bdf49f84e136197
frameworks/base: Fix for no audio while playing specific ADIF clips
- The SW AAC decoder is now OMX based and handles decoding from the
beginning after the port reconfiguration, so it is no longer required
to force a seek to the beginning when INFO_FORMAT_CHANGED is received
after decoding the first frame.
- Removed that forced seek to fix the no-audio issue with specific ADIF
clips which report INFO_FORMAT_CHANGED.
Change-Id: I057312d1f9e0e5ced26bb5234cbc79d95be53b1b
CRs-fixed: 321723
libstagefright: fix for crash in AwesomePlayer startAudioPlayer_l
-Issue: check(seeking) fails in startAudioPlayer_l for LPA playback
-Cause: LPAPlayer does not set seeking flag after starting playback
in the middle of a clip
-Fix: Set mSeeking flag and ReadOptions in LPAPlayer::Start
Change-Id: Iac91a2b328be41cb98f6fdfa7c62e0b93a3a48a4
CRs-fixed: 322725
frameworks/base: Fix for pause/resume issue while LPA playback
- If LPA playback is paused and resumed immediately, the audio
resumes for some time and then playback switches to the next
clip due to an error in OMXCodec.
- The LPA pause implementation called pause on the source
(OMXCodec), which does not handle the Executing-to-Pause
state transition. This caused a decoding issue on resume.
- Removed the unnecessary pause/resume API calls to OMXCodec to fix
the issue.
Change-Id: Ic7713c43aeedd9ec4818def9275653e7756e3a91
CRs-fixed: 322324
libstagefright: Fix for no progress bar update while seeking at end of LPA clip
- The progress bar doesn't update while seeking at the end of an LPA clip.
- EOS is not posted to AwesomePlayer when input EOS is reached, all input
buffers have been decoded, and the response queue is empty.
- Post EOS to AwesomePlayer when input EOS is reached, all input buffers
have been decoded, and the response queue is empty.
CRs-Fixed: 321961
Change-Id: I6f90ac577825d807b99e724b3948f7cca1478e8d
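The gate described above (post EOS only once input EOS is reached, all input buffers are decoded, and the response queue is empty) can be sketched as a predicate. All names below are invented for illustration; they are not the actual LPAPlayer members:

```cpp
#include <list>

// Hypothetical sketch of the EOS gate: post EOS to AwesomePlayer only when
// input EOS was hit, every queued input buffer has been decoded, and the
// response queue (buffers still owned by the pcm driver) is empty.
struct QueueState {
    bool inputEOSReached = false;
    std::list<int> requestQueue;   // buffers waiting to be decoded
    std::list<int> responseQueue;  // buffers handed to the driver
};

inline bool shouldPostEOS(const QueueState &s) {
    return s.inputEOSReached
        && s.requestQueue.empty()
        && s.responseQueue.empty();
}
```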
frameworks/base: Enable Audio effects for LPA output
- Added the support to apply Audio Effects on LPA output.
Change-Id: I08b64167e9beac7fbe84ad2610f0177766be7c7e
frameworks/base: Fix for memory leaks during LPA playback
- SIGKILL errors while running audio monkey cause the monkey run to
stop: memory gets critically low, leading to background processes
being killed by the OOM killer.
- Memory leaks during LPA playback lead to memory exhaustion.
- Fixed the memory leaks.
Change-Id: I546d2a08d33789b3433d8ea61c30f6cba02a9f7c
CRs-Fixed: 326720
libstagefright: Update timeStarted to use system time in LPAPlayer::start
- Issue: Pausing LPA clips at the last second causes playback to jump
to the end of the next clip.
- Cause: timeStarted is not updated correctly if the pause cmd is received
before the decoder thread starts.
- Fix: Update timeStarted to use system time in LPAPlayer::start().
Change-Id: If01b397b251c8aa20feed581c260d5ff818a2834
CRs-fixed: 324298
frameworks/base: Prevent effects application in paused state
- The issue is that effects are being applied on the LPA buffers
in paused state.
- After 3s in paused state, session of the playback is
deregistered hence effects should not be applied
- The issue is fixed by stalling the effects thread until
playback is resumed and the session for LPA is re-established with
MediaPlayerService.
Change-Id: I87f0f1cfcaaaf0f95a7218f46ea76d043c84bb77
CRs-Fixed: 328300
frameworks/base: Synchronize resume and onPauseTimeOut
- All the mixer controls are closed 3s after pausing playback
through the onPauseTimeOut function.
- If a resume is issued while onPauseTimeOut is closing the mixer
controls, a crash results.
- Synchronize these two functions with a mutex to prevent
concurrent execution.
Change-Id: Ic0e84423f7e3e4a26c441c73235e61d9a13c225d
CRs-Fixed: 329312
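A minimal sketch of the mutex fix, with assumed member names (the real code guards LPAPlayer's onPauseTimeOut and resume paths):

```cpp
#include <mutex>

// Sketch: one mutex serializes resume() against the 3-second pause timeout,
// so the timeout cannot tear down mixer controls while resume() runs.
// All names here are assumptions for illustration.
class PauseGuard {
public:
    void pause() {
        std::lock_guard<std::mutex> lock(mPauseLock);
        mPaused = true;
    }
    void resume() {
        std::lock_guard<std::mutex> lock(mPauseLock);
        mPaused = false;
        mMixerOpen = true;           // (re)open mixer controls
    }
    void onPauseTimeOut() {
        std::lock_guard<std::mutex> lock(mPauseLock);
        if (!mPaused) return;        // resume() won the race; nothing to close
        mMixerOpen = false;          // close mixer controls
    }
    bool mixerOpen() {
        std::lock_guard<std::mutex> lock(mPauseLock);
        return mMixerOpen;
    }
private:
    std::mutex mPauseLock;
    bool mPaused = false;
    bool mMixerOpen = true;
};
```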
frameworks/base: Prevent pcm_prepare when A2DP is enabled
- pcm_prepare should not be called without setting routing
controls, as this puts the driver in a bad state.
- Fix the issue by calling pcm_prepare only when A2DP is not
enabled and routing controls are set.
Change-Id: Ic2db9224d70500c392fa31804844aa934eca633d
CRs-fixed: 327396
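As a sketch, the guard above reduces to a predicate over two state flags (flag names are assumptions, not the actual driver state):

```cpp
// Hypothetical guard: pcm_prepare() may run only when routing controls are
// already set and A2DP is not active; otherwise the driver ends up in a
// bad state.
inline bool mayCallPcmPrepare(bool a2dpEnabled, bool routingControlsSet) {
    return !a2dpEnabled && routingControlsSet;
}
```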
libstagefright: Flush ASM before closing the stream
- Calling pcm_prepare flushes the driver and
DSP so that playback close can issue an EOS from the kernel.
Change-Id: Icb5249ff8c480405b4b8ac5ce5f995ed5d73bf0d
CRs-Fixed: 331532
(cherry picked from commit 8bdfa122ec7ff72f61ea01f932d96d94dc27f016)
libstagefright: Fix for seek issue in mp3 streaming playback
- Issue: In LPA playback, if a seek is issued, the pcm driver
starts after a fill buffer and write complete. If pause is
issued before the driver starts, the audio pause fails, resulting
in a sudden jump in playback or an EOS at random.
- Test scenario: Flush, immediately followed by pause, in LPA playback.
- Fix: Pause is handled when the pcm write is completed. This is
achieved by a conditional wait on the pcm write done.
CRs-Fixed: 331099
(cherry picked from commit 6ce15986ee7f2155044f79c505ebcd5a310a6c0d)
Change-Id: I605316bba2d964ba3d52f6a7cc42e7e390d92fdf
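The conditional wait on pcm-write-done can be sketched with a condition variable. The names below are invented; the real implementation uses pthread primitives inside LPAPlayer:

```cpp
#include <condition_variable>
#include <mutex>

// Sketch of the fix: pause() blocks on a condition variable until any
// in-flight pcm write completes, so the driver is never paused mid-write.
class WriteSync {
public:
    void beginWrite() {
        std::lock_guard<std::mutex> lock(mLock);
        mWriteInProgress = true;
    }
    void writeDone() {
        {
            std::lock_guard<std::mutex> lock(mLock);
            mWriteInProgress = false;
        }
        mWriteDoneCv.notify_all();
    }
    void pause() {
        std::unique_lock<std::mutex> lock(mLock);
        // Wait until the pending write has completed before pausing.
        mWriteDoneCv.wait(lock, [this] { return !mWriteInProgress; });
        mPaused = true;              // safe: no pcm write in flight
    }
    bool paused() const { return mPaused; }
private:
    std::mutex mLock;
    std::condition_variable mWriteDoneCv;
    bool mWriteInProgress = false;
    bool mPaused = false;
};
```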
libstagefright: prevent trigger for stale events
- Prevent the event thread from running when the response queue is
empty, which means there is no buffer with the driver.
CRs-Fixed: 336970
(cherry picked from commit 690fb2d96a58b2341e49e3424d7e0efe7093aad7)
Change-Id: Ie86a900e77175b2786cfe10fc0c64457e9fc4bae
libstagefright: Ensure pcm_prepare is called only when routing is
still active.
- LPAPlayer does derouting, pcm_prepare and pcm_close during pause.
- pcm_prepare should not be called after derouting, as the driver
tries to prepare a session which is already derouted.
- This results in "no backend" errors in the kernel for the LPA
front-end session, as the backend is already closed by derouting.
- Fix the issue by ensuring pcm_prepare is called only while
routing is still active.
Change-Id: I4b4eef7f9775b6141a5ec9a0eed82ca2f7a5c6d6
CRs-fixed: 341268
libstagefright: Add support in LPA for DSP timestamp.
Change-Id: Ie9525b0ab201b9de828a25ef1cd9731567f4610a
CRs-Fixed: 338065
(cherry picked from commit 2457cb32ec93a11e2a95d77557daaf6be0e1529a)
libstagefright: honor write done event during pause
Queue up completed buffers for decoding even if playback
is in paused state as there is concurrency between write
done and pause
CRs-Fixed: 340469
(cherry picked from commit c9116b67545c5d973c255fba55c031271d3c38a4)
Change-Id: Ifdc2cec4d92773ac279c02df7067bd95c32ca4a4
libstagefright: Fix A2DP seek to EOS issue in LPA
- In the A2DP scenario, when seeking to EOS with 0 bytes returned by
the decoder, EOS was not issued to the app. This resulted in no audio.
- Put the buffer back in the request queue in case of 0 decoded bytes
and post an AudioEOS event to the app.
Change-Id: Icb2cc053d71d02c8adb90fc5be1922ea813331e9
CRs-Fixed: 339608
(cherry picked from commit cdcdc2e6c6967de31476b1ece3702b645989e1df)
Conflicts:
media/libstagefright/LPAPlayerALSA.cpp
LPAPlayerALSA: Fix for acquire/release of wake locks.
- A wake lock is required so the 3s timer after playback can run
before the device suspends into TCXO shutdown.
- Add support for acquiring wake locks from the mediaserver process.
- PowerService is used for acquiring/releasing the wake lock.
Change-Id: Icb21c319eee24aa38d56afcd8eddcb6315b74558
CRs-Fixed: 338542
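A generic RAII sketch of the wake-lock pattern described above; the acquire/release callables stand in for the PowerService binder calls, which are not reproduced here:

```cpp
#include <functional>
#include <utility>

// Sketch: hold the wake lock for exactly the scope of the guard, so the
// 3s pause timer can run before the device suspends. The callables are
// placeholders for the real PowerManager acquire/release binder calls.
class WakeLockGuard {
public:
    WakeLockGuard(std::function<void()> acquire, std::function<void()> release)
        : mRelease(std::move(release)) {
        acquire();                   // take the wake lock on entry
    }
    ~WakeLockGuard() {
        mRelease();                  // always release on scope exit
    }
    WakeLockGuard(const WakeLockGuard&) = delete;
    WakeLockGuard& operator=(const WakeLockGuard&) = delete;
private:
    std::function<void()> mRelease;
};
```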
libstagefright: Fix concurrency issue during A2DP switch.
- When A2DP is disconnected, pause is issued by the app.
Sometimes pause happens concurrently with stop, resulting in a
stall in write.
- Fix the issue by switching sessions when resuming
playback so that the above concurrency is avoided.
CRs-Fixed: 338086
(cherry picked from commit 29c0c17f53b6c945605e91da9108eea958b17bea)
Change-Id: I7ee7a3ca0569006c404cb5cca885271b53476695
libstagefright: Initialize audio routing flag in the constructor
If the audio routing flag is not initialized, rare errors can
leave routing in a bad state.
(cherry picked from commit 74d381b2ca643515abf2bafa587df0b1cc7e56c7)
Change-Id: I3f4b9b3e172921a397f4ad2a55c8e0e429af13bc
libstagefright: Fix for application not responding when going to next song
- When a song ends, the driver is paused, flushed and then closed.
- In some scenarios the song ends before the driver is initialized;
pausing the driver then causes a native crash.
- Fixed by issuing the pause to the driver only if it has been
started.
Change-Id: Ib839a087136526e9186fc37c8cb29c681612e6c9
CRs-Fixed: 339578
(cherry picked from commit a7117328a21eca0fc422b56c12acfab25f17873a)
libstagefright: Add wake lock support for LPAPlayerALSA
- LPAPlayerALSA now holds the wake lock while LPA playback is
ongoing.
- This allows external applications that do not hold a wake lock
to use LPA playback.
CRs-Fixed: 342451
(cherry picked from commit 9621db1c1cf2092b1b51983c91434612c4cd8480)
Change-Id: I3ff8bbdc2535e29b3e0b94953d9ce6364b5c0782
libstagefright: pause AudioStream when BT is disconnected.
Notify AudioPolicyManager that device is in paused state when
BT is disconnected
CRs-Fixed: 349091
(cherry picked from commit e2fd42a43f92696f917b74b23a1cce9ac276a707)
Change-Id: I5377f8568e1fccb11685ca0e718968eb1823d539
libstagefright: Decrease LPA buffer size to 256 kb
CRs-Fixed: 344793
Conflicts:
media/libstagefright/LPAPlayerALSA.cpp
Change-Id: Ia9d13985dffa0473b3bdadc547eeb06b114b5a8b
libstagefright: Change thread priority for LPA threads
- Since A2DP behaves like a render thread, it needs to
run at urgent audio priority.
(cherry picked from commit 3b81741adf7b743cfa72874f63bf561950c9cd22)
Change-Id: I9d7ee924766fef1ac77c47dc445d8d32a305d700
libstagefright: Update LPA Player to use ION
Create LPAPlayerION and LPAPlayerPMEM files to separate
memory allocation using ION and PMEM and change the
existing files accordingly.
CRs-fixed: 341467
(cherry picked from commit 63a2671e848d5f8bc9295706974d5c7bee7b2002)
Change-Id: Ife594fb9c36a98d4a3be47ae4140a9c82ec477f7
frameworks/base: Fix to prevent deadlocks with AudioEffects.
- Initialization of the LPA effects chain is not protected, and
locking/unlocking the effects chain based on this value
can lead to deadlock scenarios during stability or
monkey runs on the Music app with AudioEffects in action.
- Protect the initialization of the LPA effects chain.
CRs-Fixed: 336281
(cherry picked from commit f0c6443679b0244a6cddf3042aa4b92b69f4d178)
Change-Id: I27ec5b6cbbd3c6e72fb234542aa159ebec5df6be
AudioFlinger: Fix for LPA volume change when headset connected.
- When a headset is connected, volume is increased for LPA media
playback in repeat mode.
- Fixed the volume setting in LPA mode.
CRs-Fixed: 339790
(cherry picked from commit bc410c04dfed2caca9759e6eaf1ada6984f359cb)
Change-Id: Id94920580384812353e3ae95f8f61511a1ec37c2
frameworks/base: Add support for LPA volume control using mediaplayer API.
-Issue: Setting LPA volume using MediaPlayer::setVolume()
API fails.
-Cause: The current implementation of this API only sets the
software decoder volume.
-Fix: Add support to call the kernel API for volume,
as LPA volume is applied in the DSP.
Change-Id: If2eee5d03f421b1097b9a7f53d3ba3e4f293f4d8
CRs-Fixed: 317323
frameworks/base: Do not use LPA mode for ADIF clips
- When ADIF clips are played in LPA mode and paused for
more than 3 sec and then resumed, an ANR results.
- This is due to the limitation that ADIF playback cannot be seeked.
When LPA playback is paused for more than 3 sec, all the buffers
with the LPA driver are flushed and closed. On resume, it tries to
seek to the paused location, which fails for ADIF clips.
- Fixed by not allowing ADIF clips in LPA mode.
Change-Id: I25890844b0a28a474c9ac073d2576fca56f60e8c
CRs-fixed: 324296
libstagefright: Fix LPA mute issue via browser.
- When playing an LPA clip via an HTML link, the mute option
fails to work.
- In MediaPlayerService, the setVolume API handles only
the non-LPA case. It needs to call the LPA volume
update too.
- Call mSession setVolume in the LPA case. AudioFlinger
has to keep track of the previous volume when muted. This
volume is applied back when unmuted, since the app
sends unity volume on unmute rather than the pre-mute
volume. This change fixes the following issues:
-- mute/unmute option via browser
-- increase/decrease volume while muted should not affect the mute option
-- while muted, pause for 3 sec and resume: mute is lost
CRs-Fixed: 327159
(cherry picked from commit 440de6deaae11b527b7250039e5172a690152e8c)
Change-Id: I73e9773f0a507c47947051bceebeb013ebca8e67
media/libmedia: Release the session only for non-lpa clip
Issue 1:
- The session id is not acquired for LPA clips in AudioTrack
however destructor tries to release it at the end of LPA
Playback.
- This causes corruption and eventually decrements the ref count
on every LPA clip. As a result, the application
of effects is not consistent.
- This issue is fixed by releasing the session only for
non-LPA clips.
Issue 2:
- There was noise for initial buffers during LPA playback
- Mixer thread was applying effects for LPA effect chain
- Prevent this by ensuring when lpa session is active,
mixer thread does not apply effects on the LPA chain
(cherry picked from commit 95932d301acf6d331fd8c42154ae69a7c98a9a33)
Change-Id: I96dbbab831f21bc40ff98f202902ee753ab61fb6
CRs-Fixed: 328645
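The fix for Issue 1 can be sketched as follows; the registry and track types below are simplifications invented for illustration, not the actual AudioTrack/AudioFlinger classes:

```cpp
// Sketch: only non-LPA tracks acquired the session id, so only they may
// release it in the destructor. Releasing unconditionally would decrement
// the ref count once per LPA clip and corrupt effect bookkeeping.
struct SessionRegistry {
    int refCount = 0;
    void acquire() { ++refCount; }
    void release() { if (refCount > 0) --refCount; }
};

struct Track {
    SessionRegistry *registry;
    bool isLpa;
    Track(SessionRegistry *r, bool lpa) : registry(r), isLpa(lpa) {}
    ~Track() {
        // LPA tracks never acquired the session, so they must not release it.
        if (!isLpa) registry->release();
    }
};
```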
libstagefright: Create new AAC and MP3 decoder libraries without OMX layer
- With the current AAC and MP3 OMX SW decoders, the decoding time
is increased w.r.t. the non-OMX libraries that were
present in GB. This increase in decoding time results in a
reduction in power savings in LPA mode.
- This commit removes the OMX layer for AAC and MP3 to reduce
power consumption in LPA mode.
(cherry picked from commit 16b4260ff4a200b2ad69290be714578ffa33424f)
Change-Id: I4ef13031207952074d0788a8953ebc38cfe48cee
CRs-fixed: 334400
fix build
Change-Id: I8fe32083911a41e1517b9e73b618521b38a0db25
32 files changed, 7060 insertions, 105 deletions
diff --git a/include/media/AudioSystem.h b/include/media/AudioSystem.h
index 6a15f6e..f80a8d6 100644
--- a/include/media/AudioSystem.h
+++ b/include/media/AudioSystem.h
@@ -120,6 +120,10 @@ public:
        INPUT_CLOSED,
        INPUT_CONFIG_CHANGED,
        STREAM_CONFIG_CHANGED,
+#ifdef WITH_QCOM_LPA
+        A2DP_OUTPUT_STATE,
+        EFFECT_CONFIG_CHANGED,
+#endif
        NUM_CONFIG_EVENTS
    };
@@ -151,6 +155,15 @@ public:
                                        uint32_t format = AUDIO_FORMAT_DEFAULT,
                                        uint32_t channels = AUDIO_CHANNEL_OUT_STEREO,
                                        audio_policy_output_flags_t flags = AUDIO_POLICY_OUTPUT_FLAG_INDIRECT);
+#ifdef WITH_QCOM_LPA
+    static audio_io_handle_t getSession(audio_stream_type_t stream,
+                                        uint32_t format = AUDIO_FORMAT_DEFAULT,
+                                        audio_policy_output_flags_t flags = AUDIO_POLICY_OUTPUT_FLAG_DIRECT,
+                                        int32_t sessionId = -1);
+    static void closeSession(audio_io_handle_t output);
+    static status_t pauseSession(audio_io_handle_t output, audio_stream_type_t stream);
+    static status_t resumeSession(audio_io_handle_t output, audio_stream_type_t stream);
+#endif
    static status_t startOutput(audio_io_handle_t output,
                                audio_stream_type_t stream,
                                int session = 0);
diff --git a/include/media/AudioTrack.h b/include/media/AudioTrack.h
index 1c401e2..f518198 100644
--- a/include/media/AudioTrack.h
+++ b/include/media/AudioTrack.h
@@ -172,6 +172,20 @@ public:
                        void* user = 0,
                        int notificationFrames = 0,
                        int sessionId = 0);
+#ifdef WITH_QCOM_LPA
+    /* Creates an audio track and registers it with AudioFlinger. With this constructor,
+     * session ID of compressed stream can be registered AudioFlinger and AudioHardware,
+     * for routing purpose.
+     */
+
+    AudioTrack( int streamType,
+                uint32_t sampleRate = 0,
+                int format = 0,
+                int channels = 0,
+                uint32_t flags = 0,
+                int sessionId = 0,
+                int lpaSessionId = -1);
+#endif
    /* Terminates the AudioTrack and unregisters it from AudioFlinger.
     * Also destroys all resources assotiated with the AudioTrack.
@@ -198,7 +212,22 @@ public:
                        const sp<IMemory>& sharedBuffer = 0,
                        bool threadCanCallJava = false,
                        int sessionId = 0);
-
+#ifdef WITH_QCOM_LPA
+    /* Initialize an AudioTrack and registers session Id for Tunneled audio decoding.
+     * Returned status (from utils/Errors.h) can be:
+     *  - NO_ERROR: successful intialization
+     *  - INVALID_OPERATION: AudioTrack is already intitialized
+     *  - BAD_VALUE: invalid parameter (channels, format, sampleRate...)
+     *  - NO_INIT: audio server or audio hardware not initialized
+     */
+    status_t set(int streamType = -1,
+                 uint32_t sampleRate = 0,
+                 int format = 0,
+                 int channels = 0,
+                 uint32_t flags = 0,
+                 int sessionId = 0,
+                 int lpaSessionId = -1);
+#endif
    /* Result of constructing the AudioTrack. This must be checked
     * before using any AudioTrack API (except for set()), using
@@ -485,6 +514,9 @@ private:
    uint32_t mUpdatePeriod;
    bool mFlushed; // FIXME will be made obsolete by making flush() synchronous
    uint32_t mFlags;
+#ifdef WITH_QCOM_LPA
+    audio_io_handle_t mAudioSession;
+#endif
    int mSessionId;
    int mAuxEffectId;
    Mutex mLock;
diff --git a/include/media/IAudioFlinger.h b/include/media/IAudioFlinger.h
index 9e3cb7f..6a9d4b0 100644
--- a/include/media/IAudioFlinger.h
+++ b/include/media/IAudioFlinger.h
@@ -57,6 +57,22 @@ public:
                                int *sessionId,
                                status_t *status) = 0;
+#ifdef WITH_QCOM_LPA
+    virtual void createSession(
+                                pid_t pid,
+                                uint32_t sampleRate,
+                                int channelCount,
+                                int *sessionId,
+                                status_t *status) = 0;
+
+    virtual void deleteSession() = 0;
+
+    virtual void applyEffectsOn(
+                                int16_t *buffer1,
+                                int16_t *buffer2,
+                                int size) = 0;
+#endif
+
    virtual sp<IAudioRecord> openRecord(
                                pid_t pid,
                                int input,
@@ -86,6 +102,9 @@ public:
    virtual float masterVolume() const = 0;
    virtual bool masterMute() const = 0;
+#ifdef WITH_QCOM_LPA
+    virtual status_t setSessionVolume(int stream, float value, float right) = 0;
+#endif
    /* set/get stream type state. This will probably be used by
     * the preference panel, mostly.
     */
@@ -117,6 +136,16 @@ public:
                            uint32_t *pChannels,
                            uint32_t *pLatencyMs,
                            uint32_t flags) = 0;
+#ifdef WITH_QCOM_LPA
+    virtual int openSession(uint32_t *pDevices,
+                            uint32_t *pFormat,
+                            uint32_t flags,
+                            int32_t stream,
+                            int32_t sessionId){return 0;};
+    virtual status_t pauseSession(int output, int32_t stream) = 0;
+    virtual status_t resumeSession(int output, int32_t stream) = 0;
+    virtual status_t closeSession(int output) = 0;
+#endif
    virtual int openDuplicateOutput(int output1, int output2) = 0;
    virtual status_t closeOutput(int output) = 0;
    virtual status_t suspendOutput(int output) = 0;
@@ -159,6 +188,9 @@ public:
                                    int *enabled) = 0;
    virtual status_t moveEffects(int session, int srcOutput, int dstOutput) = 0;
+#ifdef WITH_QCOM_LPA
+    virtual status_t deregisterClient(const sp<IAudioFlingerClient>& client) { return false; };
+#endif
};
diff --git a/include/media/IAudioPolicyService.h b/include/media/IAudioPolicyService.h
index 9807cbe..b8a4621 100644
--- a/include/media/IAudioPolicyService.h
+++ b/include/media/IAudioPolicyService.h
@@ -54,6 +54,15 @@ public:
                                        uint32_t format = AUDIO_FORMAT_DEFAULT,
                                        uint32_t channels = 0,
                                        audio_policy_output_flags_t flags = AUDIO_POLICY_OUTPUT_FLAG_INDIRECT) = 0;
+#ifdef WITH_QCOM_LPA
+    virtual audio_io_handle_t getSession(audio_stream_type_t stream,
+                                        uint32_t format = AUDIO_FORMAT_DEFAULT,
+                                        audio_policy_output_flags_t flags = AUDIO_POLICY_OUTPUT_FLAG_DIRECT,
+                                        int32_t sessionId = -1) { return 0; }
+    virtual status_t pauseSession(audio_io_handle_t output, audio_stream_type_t stream) { return 0; }
+    virtual status_t resumeSession(audio_io_handle_t output, audio_stream_type_t stream) { return 0; }
+    virtual status_t closeSession(audio_io_handle_t output) = 0;
+#endif
    virtual status_t startOutput(audio_io_handle_t output,
                                audio_stream_type_t stream,
                                int session = 0) = 0;
diff --git a/include/media/MediaPlayerInterface.h b/include/media/MediaPlayerInterface.h
index 80f43a3..e05e5c5 100644
--- a/include/media/MediaPlayerInterface.h
+++
b/include/media/MediaPlayerInterface.h
@@ -90,12 +90,25 @@ public:
            AudioCallback cb = NULL,
            void *cookie = NULL) = 0;
+#ifdef WITH_QCOM_LPA
+        // API to open a routing session for tunneled audio playback
+        virtual status_t openSession(
+                int format, int sessionId, uint32_t sampleRate = 44100, int channels = 2) {return 0;};
+#endif
+
        virtual void start() = 0;
        virtual ssize_t write(const void* buffer, size_t size) = 0;
        virtual void stop() = 0;
        virtual void flush() = 0;
        virtual void pause() = 0;
+#ifdef WITH_QCOM_LPA
+        virtual void pauseSession() {return;};
+        virtual void resumeSession() {return;};
+#endif
        virtual void close() = 0;
+#ifdef WITH_QCOM_LPA
+        virtual void closeSession() {return;};
+#endif
    };
    MediaPlayerBase() : mCookie(0), mNotify(0) {}
diff --git a/include/media/stagefright/AudioPlayer.h b/include/media/stagefright/AudioPlayer.h
index 7adefaa..bb05ba3 100644
--- a/include/media/stagefright/AudioPlayer.h
+++ b/include/media/stagefright/AudioPlayer.h
@@ -42,27 +42,27 @@ public:
    virtual ~AudioPlayer();
    // Caller retains ownership of "source".
-    void setSource(const sp<MediaSource> &source);
+    virtual void setSource(const sp<MediaSource> &source);
    // Return time in us.
    virtual int64_t getRealTimeUs();
-    status_t start(bool sourceAlreadyStarted = false);
+    virtual status_t start(bool sourceAlreadyStarted = false);
-    void pause(bool playPendingSamples = false);
-    void resume();
+    virtual void pause(bool playPendingSamples = false);
+    virtual void resume();
    // Returns the timestamp of the last buffer played (in us).
-    int64_t getMediaTimeUs();
+    virtual int64_t getMediaTimeUs();
    // Returns true iff a mapping is established, i.e. the AudioPlayer
    // has played at least one frame of audio.
-    bool getMediaTimeMapping(int64_t *realtime_us, int64_t *mediatime_us);
+    virtual bool getMediaTimeMapping(int64_t *realtime_us, int64_t *mediatime_us);
-    status_t seekTo(int64_t time_us);
+    virtual status_t seekTo(int64_t time_us);
-    bool isSeeking();
-    bool reachedEOS(status_t *finalStatus);
+    virtual bool isSeeking();
+    virtual bool reachedEOS(status_t *finalStatus);
private:
    friend class VideoEditorAudioPlayer;
diff --git a/include/media/stagefright/LPAPlayer.h b/include/media/stagefright/LPAPlayer.h
new file mode 100644
index 0000000..0326d97
--- /dev/null
+++ b/include/media/stagefright/LPAPlayer.h
@@ -0,0 +1,342 @@
+/*
+ * Copyright (C) 2009 The Android Open Source Project
+ * Copyright (c) 2009-2012, Code Aurora Forum. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LPA_PLAYER_H_
+
+#define LPA_PLAYER_H_
+
+#include "AudioPlayer.h"
+#include <media/IAudioFlinger.h>
+#include <utils/threads.h>
+#include <utils/List.h>
+#include <utils/Vector.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <binder/IServiceManager.h>
+#include <linux/unistd.h>
+#include <linux/msm_audio.h>
+#include <linux/ion.h>
+#include <include/TimedEventQueue.h>
+#include <binder/BinderService.h>
+#include <binder/MemoryDealer.h>
+#include <powermanager/IPowerManager.h>
+
+// Pause timeout = 3sec
+#define LPA_PAUSE_TIMEOUT_USEC 3000000
+
+namespace android {
+
+class LPAPlayer : public AudioPlayer {
+public:
+ enum {
+ REACHED_EOS,
+ SEEK_COMPLETE
+ };
+
+ LPAPlayer(const sp<MediaPlayerBase::AudioSink> &audioSink, bool &initCheck,
+ AwesomePlayer *audioObserver = NULL);
+
+ virtual ~LPAPlayer();
+
+ // Caller retains ownership of "source".
+ virtual void setSource(const sp<MediaSource> &source);
+
+ // Return time in us.
+ virtual int64_t getRealTimeUs();
+
+ virtual status_t start(bool sourceAlreadyStarted = false);
+
+ virtual void pause(bool playPendingSamples = false);
+ virtual void resume();
+
+ // Returns the timestamp of the last buffer played (in us).
+ virtual int64_t getMediaTimeUs();
+
+ // Returns true iff a mapping is established, i.e. the LPAPlayer
+ // has played at least one frame of audio.
+ virtual bool getMediaTimeMapping(int64_t *realtime_us, int64_t *mediatime_us);
+
+ virtual status_t seekTo(int64_t time_us);
+
+ virtual bool isSeeking();
+ virtual bool reachedEOS(status_t *finalStatus);
+
+
+ void* handle;
+ static int objectsAlive;
+private:
+
+ int afd;
+ int efd;
+ int ionfd;
+ int sessionId;
+ uint32_t bytesToWrite;
+ bool isPaused;
+ bool mSeeked;
+ bool a2dpDisconnectPause;
+ bool a2dpThreadStarted;
+ volatile bool asyncReset;
+ bool eventThreadCreated;
+ int mBuffSize;
+ int mBuffNumber;
+
+    void clearPowerManager();
+
+    class PMDeathRecipient : public IBinder::DeathRecipient {
+    public:
+        PMDeathRecipient(void *obj){parentClass = (LPAPlayer *)obj;}
+        virtual ~PMDeathRecipient() {}
+
+        // IBinder::DeathRecipient
+        virtual void binderDied(const wp<IBinder>& who);
+
+    private:
+        LPAPlayer *parentClass;
+        PMDeathRecipient(const PMDeathRecipient&);
+        PMDeathRecipient& operator = (const PMDeathRecipient&);
+
+        friend class LPAPlayer;
+    };
+
+    friend class PMDeathRecipient;
+
+    void acquireWakeLock();
+    void releaseWakeLock();
+
+    sp<IPowerManager> mPowerManager;
+    sp<IBinder> mWakeLockToken;
+    sp<PMDeathRecipient> mDeathRecipient;
+
+    //Structure to hold ion buffer information
+ class BuffersAllocated {
+ /* overload BuffersAllocated constructor to support both ion and pmem memory allocation */
+ public:
+ BuffersAllocated(void *buf1, void *buf2, int32_t nSize, int32_t fd) :
+ localBuf(buf1), memBuf(buf2), memBufsize(nSize), memFd(fd)
+ {}
+ BuffersAllocated(void *buf1, void *buf2, int32_t nSize, int32_t share_fd, struct ion_handle *handle) :
+ ion_handle(handle), localBuf(buf1), memBuf(buf2), memBufsize(nSize), memFd(share_fd)
+ {}
+ struct ion_handle *ion_handle;
+ void* localBuf;
+ void* memBuf;
+ int32_t memBufsize;
+ int32_t memFd;
+ uint32_t bytesToWrite;
+ };
+ void audio_register_memory();
+ void memBufferDeAlloc();
+ void *memBufferAlloc(int32_t nSize, int32_t *mem_fd);
+
+ List<BuffersAllocated> memBuffersRequestQueue;
+ List<BuffersAllocated> memBuffersResponseQueue;
+ List<BuffersAllocated> bufPool;
+ List<BuffersAllocated> effectsQueue;
+
+
+ //Declare all the threads
+ pthread_t eventThread;
+ pthread_t decoderThread;
+ pthread_t A2DPThread;
+ pthread_t EffectsThread;
+ pthread_t A2DPNotificationThread;
+
+ //Kill Thread boolean
+ bool killDecoderThread;
+ bool killEventThread;
+ bool killA2DPThread;
+ bool killEffectsThread;
+ bool killA2DPNotificationThread;
+
+ //Thread alive boolean
+ bool decoderThreadAlive;
+ bool eventThreadAlive;
+ bool a2dpThreadAlive;
+ bool effectsThreadAlive;
+ bool a2dpNotificationThreadAlive;
+
+ //Declare the condition Variables and Mutex
+ pthread_mutex_t mem_request_mutex;
+ pthread_mutex_t mem_response_mutex;
+ pthread_mutex_t decoder_mutex;
+ pthread_mutex_t event_mutex;
+ pthread_mutex_t a2dp_mutex;
+ pthread_mutex_t effect_mutex;
+ pthread_mutex_t apply_effect_mutex;
+ pthread_mutex_t a2dp_notification_mutex;
+ pthread_mutex_t pause_mutex;
+
+ pthread_cond_t event_cv;
+ pthread_cond_t decoder_cv;
+ pthread_cond_t a2dp_cv;
+ pthread_cond_t effect_cv;
+ pthread_cond_t event_thread_cv;
+ pthread_cond_t a2dp_notification_cv;
+ pthread_cond_t pause_cv;
+
+ // make sure Decoder thread has exited
+ void requestAndWaitForDecoderThreadExit();
+
+ // make sure the event thread also exited
+ void requestAndWaitForEventThreadExit();
+
+ // make sure the A2dp thread also exited
+ void requestAndWaitForA2DPThreadExit();
+
+ // make sure the Effects thread also exited
+ void requestAndWaitForEffectsThreadExit();
+
+ // make sure the Effects thread also exited
+ void requestAndWaitForA2DPNotificationThreadExit();
+
+ static void *eventThreadWrapper(void *me);
+ void eventThreadEntry();
+ static void *decoderThreadWrapper(void *me);
+ void decoderThreadEntry();
+ static void *A2DPThreadWrapper(void *me);
+ void A2DPThreadEntry();
+ static void *EffectsThreadWrapper(void *me);
+ void EffectsThreadEntry();
+ static void *A2DPNotificationThreadWrapper(void *me);
+ void A2DPNotificationThreadEntry();
+
+ void createThreads();
+
+ volatile bool bIsA2DPEnabled, bIsAudioRouted, bEffectConfigChanged;
+
+ //Structure to recieve the BT notification from the flinger.
+ class AudioFlingerLPAdecodeClient: public IBinder::DeathRecipient, public BnAudioFlingerClient {
+ public:
+ AudioFlingerLPAdecodeClient(void *obj);
+
+ LPAPlayer *pBaseClass;
+ // DeathRecipient
+ virtual void binderDied(const wp<IBinder>& who);
+
+ // IAudioFlingerClient
+
+ // indicate a change in the configuration of an output or input: keeps the cached
+ // values for output/input parameters upto date in client process
+ virtual void ioConfigChanged(int event, int ioHandle, void *param2);
+
+ friend class LPAPlayer;
+ };
+
+ sp<IAudioFlinger> mAudioFlinger;
+
+ // helper function to obtain AudioFlinger service handle
+ void getAudioFlinger();
+
+ void handleA2DPSwitch();
+
+ sp<AudioFlingerLPAdecodeClient> AudioFlingerClient;
+ friend class AudioFlingerLPAdecodeClient;
+ Mutex AudioFlingerLock;
+ bool mSourceEmpty;
+ bool mAudioSinkOpen;
+
+ sp<MediaSource> mSource;
+
+ MediaBuffer *mInputBuffer;
+ int32_t numChannels;
+ int mSampleRate;
+ int64_t mLatencyUs;
+ size_t mFrameSize;
+
+    Mutex pmLock;
+    Mutex mLock;
+ Mutex mSeekLock;
+ Mutex a2dpSwitchLock;
+ Mutex resumeLock;
+ int64_t mNumFramesPlayed;
+
+ int64_t mPositionTimeMediaUs;
+ int64_t mPositionTimeRealUs;
+
+ bool mSeeking;
+ bool mInternalSeeking;
+ bool mReachedEOS;
+ status_t mFinalStatus;
+ int64_t mSeekTimeUs;
+ int64_t mPauseTime;
+ int64_t mNumA2DPBytesPlayed;
+ int64_t timePlayed;
+ int64_t timeStarted;
+
+ bool mStarted;
+
+ bool mIsFirstBuffer;
+ status_t mFirstBufferResult;
+ MediaBuffer *mFirstBuffer;
+ TimedEventQueue mQueue;
+ bool mQueueStarted;
+ sp<TimedEventQueue::Event> mPauseEvent;
+ bool mPauseEventPending;
+ bool mPlaybackSuspended;
+ bool mIsDriverStarted;
+ bool mIsAudioRouted;
+
+ sp<MediaPlayerBase::AudioSink> mAudioSink;
+ AwesomePlayer *mObserver;
+
+ enum A2DPState {
+ A2DP_ENABLED,
+ A2DP_DISABLED,
+ A2DP_CONNECT,
+ A2DP_DISCONNECT
+ };
+
+ size_t fillBuffer(void *data, size_t size);
+
+ int64_t getRealTimeUsLocked();
+ int64_t getTimeStamp(A2DPState state);
+
+ void reset();
+
+ void onPauseTimeOut();
+
+
+ LPAPlayer(const LPAPlayer &);
+ LPAPlayer &operator=(const LPAPlayer &);
+};
+
+struct TimedEvent : public TimedEventQueue::Event {
+ TimedEvent(LPAPlayer *player,
+ void (LPAPlayer::*method)())
+ : mPlayer(player),
+ mMethod(method) {
+ }
+
+protected:
+ virtual ~TimedEvent() {}
+
+ virtual void fire(TimedEventQueue *queue, int64_t /* now_us */) {
+ (mPlayer->*mMethod)();
+ }
+
+private:
+ LPAPlayer *mPlayer;
+ void (LPAPlayer::*mMethod)();
+
+ TimedEvent(const TimedEvent &);
+ TimedEvent &operator=(const TimedEvent &);
+};
+
+} // namespace android
+
+#endif // LPA_PLAYER_H_
+
diff --git a/media/libmedia/AudioSystem.cpp b/media/libmedia/AudioSystem.cpp
index f35b007..8a77d4d 100644
--- a/media/libmedia/AudioSystem.cpp
+++ b/media/libmedia/AudioSystem.cpp
@@ -596,6 +596,27 @@ audio_io_handle_t AudioSystem::getOutput(audio_stream_type_t stream,
    return output;
}
+#ifdef WITH_QCOM_LPA
+audio_io_handle_t AudioSystem::getSession(audio_stream_type_t stream,
+                                    uint32_t format,
+                                    audio_policy_output_flags_t flags,
+                                    int sessionId)
+{
+    audio_io_handle_t output = 0;
+
+    if ((flags & AUDIO_POLICY_OUTPUT_FLAG_DIRECT) == 0) {
+        return 0;
+    }
+
+    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
+    if (aps == 0) return 0;
+
+    output = aps->getSession(stream, format, flags, sessionId);
+
+    return output;
+}
+#endif
+
status_t AudioSystem::startOutput(audio_io_handle_t output,
                                audio_stream_type_t stream,
                                int session)
@@ -621,6 +642,29 @@ void AudioSystem::releaseOutput(audio_io_handle_t output)
    aps->releaseOutput(output);
}
+#ifdef WITH_QCOM_LPA
+status_t AudioSystem::pauseSession(audio_io_handle_t output, audio_stream_type_t stream)
+{
+    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
+    if (aps == 0) return PERMISSION_DENIED;
+    return aps->pauseSession(output, stream);
+}
+
+status_t AudioSystem::resumeSession(audio_io_handle_t output, audio_stream_type_t stream)
+{
+    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
+    if (aps == 0) return PERMISSION_DENIED;
+    return aps->resumeSession(output, stream);
+}
+
+void AudioSystem::closeSession(audio_io_handle_t output)
+{
+    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
+    if (aps == 0) return;
+    aps->closeSession(output);
+}
+#endif
+
audio_io_handle_t AudioSystem::getInput(int inputSource,
                                    uint32_t samplingRate,
                                    uint32_t format,
diff --git a/media/libmedia/AudioTrack.cpp b/media/libmedia/AudioTrack.cpp
index 415d0ba..3f3bcd3 100644
--- a/media/libmedia/AudioTrack.cpp
+++
b/media/libmedia/AudioTrack.cpp @@ -150,7 +150,20 @@ AudioTrack::AudioTrack( 0, flags, cbf, user, notificationFrames, sharedBuffer, false, sessionId); } - +#ifdef WITH_QCOM_LPA +AudioTrack::AudioTrack( + int streamType, + uint32_t sampleRate, + int format, + int channels, + uint32_t flags, + int sessionId, + int lpaSessionId) + : mStatus(NO_INIT), mAudioSession(-1) +{ + mStatus = set(streamType, sampleRate, format, channels, flags, sessionId, lpaSessionId); +} +#endif AudioTrack::~AudioTrack() { LOGV_IF(mSharedBuffer != 0, "Destructor sharedBuffer: %p", mSharedBuffer->pointer()); @@ -164,9 +177,31 @@ AudioTrack::~AudioTrack() mAudioTrackThread->requestExitAndWait(); mAudioTrackThread.clear(); } +#ifndef WITH_QCOM_LPA mAudioTrack.clear(); +#else + if(mAudioTrack != NULL) { + mAudioTrack.clear(); + AudioSystem::releaseAudioSessionId(mSessionId); + } + if(mAudioSession >= 0) { + const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger(); + if (audioFlinger != 0) { + status_t status; + LOGV("Calling AudioFlinger::deleteSession"); + audioFlinger->deleteSession(); + } else { + LOGE("Could not get audioflinger"); + } + + AudioSystem::closeSession(mAudioSession); + mAudioSession = -1; + } +#endif IPCThreadState::self()->flushCommands(); +#ifndef WITH_QCOM_LPA AudioSystem::releaseAudioSessionId(mSessionId); +#endif } } @@ -293,11 +328,97 @@ status_t AudioTrack::set( mUpdatePeriod = 0; mFlushed = false; mFlags = flags; +#ifdef WITH_QCOM_LPA + mAudioSession = -1; +#endif AudioSystem::acquireAudioSessionId(mSessionId); mRestoreStatus = NO_ERROR; return NO_ERROR; } +#ifdef WITH_QCOM_LPA +status_t AudioTrack::set( + int streamType, + uint32_t sampleRate, + int format, + int channels, + uint32_t flags, + int sessionId, + int lpaSessionId) +{ + + // handle default values first. + if (streamType == AUDIO_STREAM_DEFAULT) { + streamType = AUDIO_STREAM_MUSIC; + } + // these below should probably come from the audioFlinger too... 
+ if (format == 0) { + format = AUDIO_FORMAT_PCM_16_BIT; + } + // validate parameters + if (!audio_is_valid_format(format)) { + LOGE("Invalid format"); + return BAD_VALUE; + } + // force direct flag if format is not linear PCM + if (!audio_is_linear_pcm(format)) { + flags |= AUDIO_POLICY_OUTPUT_FLAG_DIRECT; + } + + audio_io_handle_t output = AudioSystem::getSession((audio_stream_type_t)streamType, + format, (audio_policy_output_flags_t)flags, lpaSessionId); + + if (output == 0) { + LOGE("Could not get audio output for stream type %d", streamType); + return BAD_VALUE; + } + mVolume[LEFT] = 1.0f; + mVolume[RIGHT] = 1.0f; + mStatus = NO_ERROR; + mStreamType = streamType; + mFormat = format; + mChannelCount = 2; + mSharedBuffer = NULL; + mMuted = false; + mActive = 0; + mCbf = NULL; + mNotificationFramesReq = 0; + mRemainingFrames = 0; + mUserData = NULL; + mLatency = 0; + mLoopCount = 0; + mMarkerPosition = 0; + mMarkerReached = false; + mNewPosition = 0; + mUpdatePeriod = 0; + mFlags = flags; + mAudioTrack = NULL; + mAudioSession = output; + + mSessionId = sessionId; + mAuxEffectId = 0; + + const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger(); + if (audioFlinger == 0) { + LOGE("Could not get audioflinger"); + return NO_INIT; + } + status_t status; + audioFlinger->createSession(getpid(), + sampleRate, + channels, + &mSessionId, + &status); + if(status != NO_ERROR) { + LOGE("createSession returned with status %d", status); + } + /* Make the track active and start output */ + android_atomic_or(1, &mActive); + AudioSystem::startOutput(output, (audio_stream_type_t)mStreamType); + LOGV("AudioTrack::set() - Started output(%d)",output); + return NO_ERROR; +} +#endif status_t AudioTrack::initCheck() const { return mStatus; @@ -348,6 +469,16 @@ sp<IMemory>& AudioTrack::sharedBuffer() void AudioTrack::start() { +#ifdef WITH_QCOM_LPA + if ( mAudioSession != -1 ) { + if ( NO_ERROR != AudioSystem::resumeSession(mAudioSession, + (audio_stream_type_t)mStreamType) 
) + { + LOGE("ResumeSession failed"); + } + return; + } +#endif sp<AudioTrackThread> t = mAudioTrackThread; status_t status = NO_ERROR; @@ -423,25 +554,31 @@ void AudioTrack::stop() AutoMutex lock(mLock); if (mActive == 1) { - mActive = 0; - mCblk->cv.signal(); - mAudioTrack->stop(); - // Cancel loops (If we are in the middle of a loop, playback - // would not stop until loopCount reaches 0). - setLoop_l(0, 0, 0); - // the playback head position will reset to 0, so if a marker is set, we need - // to activate it again - mMarkerReached = false; - // Force flush if a shared buffer is used otherwise audioflinger - // will not stop before end of buffer is reached. - if (mSharedBuffer != 0) { - flush_l(); - } - if (t != 0) { - t->requestExit(); - } else { - setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_NORMAL); +#ifdef WITH_QCOM_LPA + if (mAudioTrack != NULL) { +#endif + mActive = 0; + mCblk->cv.signal(); + mAudioTrack->stop(); + // Cancel loops (If we are in the middle of a loop, playback + // would not stop until loopCount reaches 0). + setLoop_l(0, 0, 0); + // the playback head position will reset to 0, so if a marker is set, we need + // to activate it again + mMarkerReached = false; + // Force flush if a shared buffer is used otherwise audioflinger + // will not stop before end of buffer is reached. 
+ if (mSharedBuffer != 0) { + flush_l(); + } + if (t != 0) { + t->requestExit(); + } else { + setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_NORMAL); + } +#ifdef WITH_QCOM_LPA } +#endif } if (t != 0) { @@ -482,6 +619,16 @@ void AudioTrack::flush_l() void AudioTrack::pause() { LOGV("pause"); +#ifdef WITH_QCOM_LPA + if ( mAudioSession != -1 ) { + if ( NO_ERROR != AudioSystem::pauseSession(mAudioSession, + (audio_stream_type_t)mStreamType) ) + { + LOGE("PauseSession failed"); + } + return; + } +#endif AutoMutex lock(mLock); if (mActive == 1) { mActive = 0; @@ -507,6 +654,16 @@ status_t AudioTrack::setVolume(float left, float right) } AutoMutex lock(mLock); + +#ifdef WITH_QCOM_LPA + if(mAudioSession != -1) { + // LPA output + const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger(); + status_t status = audioFlinger->setSessionVolume(mStreamType, left, right); + return NO_ERROR; + } +#endif + mVolume[LEFT] = left; mVolume[RIGHT] = right; diff --git a/media/libmedia/IAudioFlinger.cpp b/media/libmedia/IAudioFlinger.cpp index d58834b..9aa0d65 100644 --- a/media/libmedia/IAudioFlinger.cpp +++ b/media/libmedia/IAudioFlinger.cpp @@ -40,6 +40,9 @@ enum { SET_MASTER_MUTE, MASTER_VOLUME, MASTER_MUTE, +#ifdef WITH_QCOM_LPA + SET_SESSION_VOLUME, +#endif SET_STREAM_VOLUME, SET_STREAM_MUTE, STREAM_VOLUME, @@ -52,8 +55,16 @@ enum { REGISTER_CLIENT, GET_INPUTBUFFERSIZE, OPEN_OUTPUT, +#ifdef WITH_QCOM_LPA + OPEN_SESSION, +#endif OPEN_DUPLICATE_OUTPUT, CLOSE_OUTPUT, +#ifdef WITH_QCOM_LPA + PAUSE_SESSION, + RESUME_SESSION, + CLOSE_SESSION, +#endif SUSPEND_OUTPUT, RESTORE_OUTPUT, OPEN_INPUT, @@ -69,7 +80,13 @@ enum { QUERY_EFFECT, GET_EFFECT_DESCRIPTOR, CREATE_EFFECT, - MOVE_EFFECTS + MOVE_EFFECTS, +#ifdef WITH_QCOM_LPA + SET_FM_VOLUME, + CREATE_SESSION, + DELETE_SESSION, + APPLY_EFFECTS +#endif }; class BpAudioFlinger : public BpInterface<IAudioFlinger> @@ -126,7 +143,62 @@ public: } return track; } +#ifdef WITH_QCOM_LPA + virtual void createSession( + pid_t pid, + 
+                               uint32_t sampleRate,
+                               int channelCount,
+                               int *sessionId,
+                               status_t *status)
+    {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
+        data.writeInt32(pid);
+        data.writeInt32(sampleRate);
+        data.writeInt32(channelCount);
+        int lSessionId = 0;
+        if (sessionId != NULL) {
+            lSessionId = *sessionId;
+        }
+        data.writeInt32(lSessionId);
+        status_t lStatus = remote()->transact(CREATE_SESSION, data, &reply);
+        if (lStatus != NO_ERROR) {
+            LOGE("createSession error: %s", strerror(-lStatus));
+        } else {
+            lSessionId = reply.readInt32();
+            if (sessionId != NULL) {
+                *sessionId = lSessionId;
+            }
+            lStatus = reply.readInt32();
+        }
+        if (status) {
+            *status = lStatus;
+        }
+    }
+    virtual void deleteSession()
+    {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
+        status_t lStatus = remote()->transact(DELETE_SESSION, data, &reply);
+        if (lStatus != NO_ERROR) {
+            LOGE("deleteSession error: %s", strerror(-lStatus));
+        }
+    }
+
+    virtual void applyEffectsOn(int16_t *inBuffer, int16_t *outBuffer, int size)
+    {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
+        data.writeInt32((int32_t)inBuffer);
+        data.writeInt32((int32_t)outBuffer);
+        data.writeInt32(size);
+        status_t lStatus = remote()->transact(APPLY_EFFECTS, data, &reply);
+        if (lStatus != NO_ERROR) {
+            LOGE("applyEffectsOn error: %s", strerror(-lStatus));
+        }
+    }
+#endif
     virtual sp<IAudioRecord> openRecord(
                                 pid_t pid,
                                 int input,
@@ -248,7 +320,18 @@ public:
         remote()->transact(MASTER_MUTE, data, &reply);
         return reply.readInt32();
     }
-
+#ifdef WITH_QCOM_LPA
+    virtual status_t setSessionVolume(int stream, float left, float right)
+    {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
+        data.writeInt32(stream);
+        data.writeFloat(left);
+        data.writeFloat(right);
+        remote()->transact(SET_SESSION_VOLUME, data, &reply);
+        return reply.readInt32();
+    }
+#endif
     virtual status_t
setStreamVolume(int stream, float value, int output) { Parcel data, reply; @@ -390,6 +473,62 @@ public: if (pLatencyMs) *pLatencyMs = latency; return output; } +#ifdef WITH_QCOM_LPA + virtual int openSession(uint32_t *pDevices, + uint32_t *pFormat, + uint32_t flags, + int32_t stream, + int32_t sessionId) + { + Parcel data, reply; + uint32_t devices = pDevices ? *pDevices : 0; + uint32_t format = pFormat ? *pFormat : 0; + + data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor()); + data.writeInt32(devices); + data.writeInt32(format); + data.writeInt32(flags); + data.writeInt32(stream); + data.writeInt32(sessionId); + remote()->transact(OPEN_SESSION, data, &reply); + int output = reply.readInt32(); + LOGV("openOutput() returned output, %p", output); + devices = reply.readInt32(); + if (pDevices) *pDevices = devices; + format = reply.readInt32(); + if (pFormat) *pFormat = format; + return output; + } + + virtual status_t pauseSession(int output, int32_t stream) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor()); + data.writeInt32(output); + data.writeInt32(stream); + remote()->transact(PAUSE_SESSION, data, &reply); + return reply.readInt32(); + } + + virtual status_t resumeSession(int output, int32_t stream) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor()); + data.writeInt32(output); + data.writeInt32(stream); + remote()->transact(RESUME_SESSION, data, &reply); + return reply.readInt32(); + } + + virtual status_t closeSession(int output) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor()); + data.writeInt32(output); + remote()->transact(CLOSE_SESSION, data, &reply); + return reply.readInt32(); + } +#endif virtual int openDuplicateOutput(int output1, int output2) { @@ -694,6 +833,33 @@ status_t BnAudioFlinger::onTransact( reply->writeStrongBinder(track->asBinder()); return NO_ERROR; } break; +#ifdef WITH_QCOM_LPA + case 
CREATE_SESSION: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + pid_t pid = data.readInt32(); + uint32_t sampleRate = data.readInt32(); + int channelCount = data.readInt32(); + int sessionId = data.readInt32(); + status_t status; + createSession(pid, sampleRate, channelCount, &sessionId, &status); + reply->writeInt32(sessionId); + reply->writeInt32(status); + return NO_ERROR; + } break; + case DELETE_SESSION: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + deleteSession(); + return NO_ERROR; + } break; + case APPLY_EFFECTS: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + int16_t *inBuffer = (int16_t*)data.readInt32(); + int16_t *outBuffer = (int16_t*)data.readInt32(); + int size = data.readInt32(); + applyEffectsOn(inBuffer, outBuffer, size); + return NO_ERROR; + } break; +#endif case OPEN_RECORD: { CHECK_INTERFACE(IAudioFlinger, data, reply); pid_t pid = data.readInt32(); @@ -757,6 +923,16 @@ status_t BnAudioFlinger::onTransact( reply->writeInt32( masterMute() ); return NO_ERROR; } break; +#ifdef WITH_QCOM_LPA + case SET_SESSION_VOLUME: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + int stream = data.readInt32(); + float left = data.readFloat(); + float right = data.readFloat(); + reply->writeInt32( setSessionVolume(stream, left, right) ); + return NO_ERROR; + } break; +#endif case SET_STREAM_VOLUME: { CHECK_INTERFACE(IAudioFlinger, data, reply); int stream = data.readInt32(); @@ -853,6 +1029,47 @@ status_t BnAudioFlinger::onTransact( reply->writeInt32(latency); return NO_ERROR; } break; +#ifdef WITH_QCOM_LPA + case OPEN_SESSION: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + uint32_t devices = data.readInt32(); + uint32_t format = data.readInt32(); + uint32_t flags = data.readInt32(); + int32_t stream = data.readInt32(); + int32_t sessionId = data.readInt32(); + int output = openSession(&devices, + &format, + flags, + stream, + sessionId); + LOGV("OPEN_SESSION output, %p", output); + reply->writeInt32(output); + reply->writeInt32(devices); + 
reply->writeInt32(format); + return NO_ERROR; + } break; + case PAUSE_SESSION: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + int output = data.readInt32(); + int32_t stream = data.readInt32(); + reply->writeInt32(pauseSession(output, + stream)); + return NO_ERROR; + } break; + case RESUME_SESSION: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + int output = data.readInt32(); + int32_t stream = data.readInt32(); + reply->writeInt32(resumeSession(output, + stream)); + return NO_ERROR; + } break; + case CLOSE_SESSION: { + CHECK_INTERFACE(IAudioFlinger, data, reply); + reply->writeInt32(closeSession(data.readInt32())); + return NO_ERROR; + } break; +#endif case OPEN_DUPLICATE_OUTPUT: { CHECK_INTERFACE(IAudioFlinger, data, reply); int output1 = data.readInt32(); diff --git a/media/libmedia/IAudioFlingerClient.cpp b/media/libmedia/IAudioFlingerClient.cpp index 3900de4..5c967ed 100644 --- a/media/libmedia/IAudioFlingerClient.cpp +++ b/media/libmedia/IAudioFlingerClient.cpp @@ -49,7 +49,12 @@ public: uint32_t stream = *(uint32_t *)param2; LOGV("ioConfigChanged stream %d", stream); data.writeInt32(stream); +#ifdef WITH_QCOM_LPA + } else if (event != AudioSystem::OUTPUT_CLOSED && event != AudioSystem::INPUT_CLOSED && + event != AudioSystem::A2DP_OUTPUT_STATE && event != AudioSystem::EFFECT_CONFIG_CHANGED) { +#else } else if (event != AudioSystem::OUTPUT_CLOSED && event != AudioSystem::INPUT_CLOSED) { +#endif AudioSystem::OutputDescriptor *desc = (AudioSystem::OutputDescriptor *)param2; data.writeInt32(desc->samplingRate); data.writeInt32(desc->format); diff --git a/media/libmedia/IAudioPolicyService.cpp b/media/libmedia/IAudioPolicyService.cpp index 50b4855..8aa8771 100644 --- a/media/libmedia/IAudioPolicyService.cpp +++ b/media/libmedia/IAudioPolicyService.cpp @@ -37,6 +37,12 @@ enum { SET_FORCE_USE, GET_FORCE_USE, GET_OUTPUT, +#ifdef WITH_QCOM_LPA + GET_SESSION, + PAUSE_SESSION, + RESUME_SESSION, + CLOSE_SESSION, +#endif START_OUTPUT, STOP_OUTPUT, RELEASE_OUTPUT, 
@@ -146,7 +152,52 @@ public: remote()->transact(GET_OUTPUT, data, &reply); return static_cast <audio_io_handle_t> (reply.readInt32()); } +#ifdef WITH_QCOM_LPA + virtual audio_io_handle_t getSession( + audio_stream_type_t stream, + uint32_t format, + audio_policy_output_flags_t flags, + int32_t sessionId) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioPolicyService::getInterfaceDescriptor()); + data.writeInt32(static_cast <uint32_t>(stream)); + data.writeInt32(static_cast <uint32_t>(format)); + data.writeInt32(static_cast <uint32_t>(flags)); + data.writeInt32(static_cast <int32_t>(sessionId)); + remote()->transact(GET_SESSION, data, &reply); + return static_cast <audio_io_handle_t> (reply.readInt32()); + } + + virtual status_t pauseSession(audio_io_handle_t output, audio_stream_type_t stream) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioPolicyService::getInterfaceDescriptor()); + data.writeInt32(output); + data.writeInt32(static_cast <uint32_t>(stream)); + remote()->transact(PAUSE_SESSION, data, &reply); + return static_cast <status_t> (reply.readInt32()); + } + + virtual status_t resumeSession(audio_io_handle_t output, audio_stream_type_t stream) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioPolicyService::getInterfaceDescriptor()); + data.writeInt32(output); + data.writeInt32(static_cast <uint32_t>(stream)); + remote()->transact(RESUME_SESSION, data, &reply); + return static_cast <status_t> (reply.readInt32()); + } + virtual status_t closeSession(audio_io_handle_t output) + { + Parcel data, reply; + data.writeInterfaceToken(IAudioPolicyService::getInterfaceDescriptor()); + data.writeInt32(output); + remote()->transact(CLOSE_SESSION, data, &reply); + return static_cast <audio_io_handle_t> (reply.readInt32()); + } +#endif virtual status_t startOutput(audio_io_handle_t output, audio_stream_type_t stream, int session) @@ -440,7 +491,47 @@ status_t BnAudioPolicyService::onTransact( reply->writeInt32(static_cast <int>(output)); 
return NO_ERROR; } break; +#ifdef WITH_QCOM_LPA + case GET_SESSION: { + CHECK_INTERFACE(IAudioPolicyService, data, reply); + audio_stream_type_t stream = static_cast <audio_stream_type_t>(data.readInt32()); + uint32_t format = data.readInt32(); + audio_policy_output_flags_t flags = static_cast <audio_policy_output_flags_t>(data.readInt32()); + int32_t sessionId = data.readInt32(); + audio_io_handle_t output = getSession(stream, + format, + flags, + sessionId); + reply->writeInt32(static_cast <int>(output)); + return NO_ERROR; + } break; + + case PAUSE_SESSION: { + CHECK_INTERFACE(IAudioPolicyService, data, reply); + audio_io_handle_t output = static_cast <audio_io_handle_t>(data.readInt32()); + audio_stream_type_t stream = static_cast <audio_stream_type_t>(data.readInt32()); + status_t status = pauseSession(output, stream); + reply->writeInt32(static_cast <int>(status)); + return NO_ERROR; + } break; + case RESUME_SESSION: { + CHECK_INTERFACE(IAudioPolicyService, data, reply); + audio_io_handle_t output = static_cast <audio_io_handle_t>(data.readInt32()); + audio_stream_type_t stream = static_cast <audio_stream_type_t>(data.readInt32()); + status_t status = resumeSession(output, stream); + reply->writeInt32(static_cast <int>(status)); + return NO_ERROR; + } break; + + case CLOSE_SESSION: { + CHECK_INTERFACE(IAudioPolicyService, data, reply); + audio_io_handle_t output = static_cast <audio_io_handle_t>(data.readInt32()); + status_t status = closeSession(output); + reply->writeInt32(static_cast <int>(status)); + return NO_ERROR; + } break; +#endif case START_OUTPUT: { CHECK_INTERFACE(IAudioPolicyService, data, reply); audio_io_handle_t output = static_cast <audio_io_handle_t>(data.readInt32()); diff --git a/media/libmedia/JetPlayer.cpp b/media/libmedia/JetPlayer.cpp index 8b953e0..e0e8754 100644 --- a/media/libmedia/JetPlayer.cpp +++ b/media/libmedia/JetPlayer.cpp @@ -94,6 +94,10 @@ int JetPlayer::init() 1, // format = PCM 16bits per sample, (pLibConfig->numChannels 
== 2) ? AUDIO_CHANNEL_OUT_STEREO : AUDIO_CHANNEL_OUT_MONO, mTrackBufferSize, +#ifdef WITH_QCOM_LPA + 0, + 0, +#endif 0); // create render and playback thread diff --git a/media/libmediaplayerservice/MediaPlayerService.cpp b/media/libmediaplayerservice/MediaPlayerService.cpp index f27d3d6..2888888 100644 --- a/media/libmediaplayerservice/MediaPlayerService.cpp +++ b/media/libmediaplayerservice/MediaPlayerService.cpp @@ -56,6 +56,7 @@ #include <media/stagefright/MediaErrors.h> #include <system/audio.h> +#include <system/audio_policy.h> #include <private/android_filesystem_config.h> @@ -1262,6 +1263,9 @@ MediaPlayerService::AudioOutput::AudioOutput(int sessionId) mSessionId(sessionId) { LOGV("AudioOutput(%d)", sessionId); mTrack = 0; +#ifdef WITH_QCOM_LPA + mSession = 0; +#endif mStreamType = AUDIO_STREAM_MUSIC; mLeftVolume = 1.0; mRightVolume = 1.0; @@ -1274,6 +1278,9 @@ MediaPlayerService::AudioOutput::AudioOutput(int sessionId) MediaPlayerService::AudioOutput::~AudioOutput() { close(); +#ifdef WITH_QCOM_LPA + closeSession(); +#endif } void MediaPlayerService::AudioOutput::setMinBufferCount() @@ -1337,6 +1344,39 @@ status_t MediaPlayerService::AudioOutput::getPosition(uint32_t *position) if (mTrack == 0) return NO_INIT; return mTrack->getPosition(position); } +#ifdef WITH_QCOM_LPA +status_t MediaPlayerService::AudioOutput::openSession( + int format, int lpaSessionId, uint32_t sampleRate, int channels) +{ + uint32_t flags = 0; + mCallback = NULL; + mCallbackCookie = NULL; + if (mSession) closeSession(); + mSession = NULL; + + flags |= AUDIO_POLICY_OUTPUT_FLAG_DIRECT; + + AudioTrack *t = new AudioTrack( + mStreamType, + sampleRate, + format, + channels, + flags, + mSessionId, + lpaSessionId); + LOGV("openSession: AudioTrack created successfully track(%p)",t); + if ((t == 0) || (t->initCheck() != NO_ERROR)) { + LOGE("Unable to create audio track"); + delete t; + return NO_INIT; + } + LOGV("openSession: Out"); + mSession = t; + LOGV("setVolume"); + 
t->setVolume(mLeftVolume, mRightVolume); + return NO_ERROR; +} +#endif status_t MediaPlayerService::AudioOutput::open( uint32_t sampleRate, int channelCount, int format, int bufferCount, @@ -1454,17 +1494,53 @@ void MediaPlayerService::AudioOutput::pause() void MediaPlayerService::AudioOutput::close() { LOGV("close"); - delete mTrack; - mTrack = 0; + if(mTrack != NULL) { + delete mTrack; + mTrack = 0; + } +} +#ifdef WITH_QCOM_LPA +void MediaPlayerService::AudioOutput::closeSession() +{ + LOGV("closeSession"); + if(mSession != NULL) { + delete mSession; + mSession = 0; + } } +void MediaPlayerService::AudioOutput::pauseSession() +{ + LOGV("pauseSession"); + if(mSession != NULL) { + mSession->pause(); + } +} + +void MediaPlayerService::AudioOutput::resumeSession() +{ + LOGV("resumeSession"); + if(mSession != NULL) { + mSession->start(); + } +} +#endif void MediaPlayerService::AudioOutput::setVolume(float left, float right) { +#ifdef WITH_QCOM_LPA + LOGV("setVolume(%f, %f): %p", left, right, mSession); +#else LOGV("setVolume(%f, %f)", left, right); +#endif + mLeftVolume = left; mRightVolume = right; if (mTrack) { mTrack->setVolume(left, right); +#ifdef WITH_QCOM_LPA + } else if(mSession) { + mSession->setVolume(left, right); +#endif } } diff --git a/media/libmediaplayerservice/MediaPlayerService.h b/media/libmediaplayerservice/MediaPlayerService.h index b04fddb..2b66b7a 100644 --- a/media/libmediaplayerservice/MediaPlayerService.h +++ b/media/libmediaplayerservice/MediaPlayerService.h @@ -85,13 +85,23 @@ class MediaPlayerService : public BnMediaPlayerService uint32_t sampleRate, int channelCount, int format, int bufferCount, AudioCallback cb, void *cookie); - +#ifdef WITH_QCOM_LPA + virtual status_t openSession( + int format, int sessionId, uint32_t sampleRate, int channels); +#endif virtual void start(); virtual ssize_t write(const void* buffer, size_t size); virtual void stop(); virtual void flush(); virtual void pause(); +#ifdef WITH_QCOM_LPA + virtual void 
pauseSession(); + virtual void resumeSession(); +#endif virtual void close(); +#ifdef WITH_QCOM_LPA + virtual void closeSession(); +#endif void setAudioStreamType(int streamType) { mStreamType = streamType; } void setVolume(float left, float right); status_t setAuxEffectSendLevel(float level); @@ -106,6 +116,9 @@ class MediaPlayerService : public BnMediaPlayerService int event, void *me, void *info); AudioTrack* mTrack; +#ifdef WITH_QCOM_LPA + AudioTrack* mSession; +#endif AudioCallback mCallback; void * mCallbackCookie; int mStreamType; diff --git a/media/libstagefright/Android.mk b/media/libstagefright/Android.mk index 671a53d..4cb0895 100644 --- a/media/libstagefright/Android.mk +++ b/media/libstagefright/Android.mk @@ -85,6 +85,24 @@ ifeq ($(BOARD_USES_QCOM_HARDWARE),true) LOCAL_C_INCLUDES += $(TOP)/hardware/qcom/display/libqcomui endif +ifeq ($(TARGET_USES_QCOM_LPA),true) +ifeq ($(BOARD_USES_ALSA_AUDIO),true) + LOCAL_SRC_FILES += LPAPlayerALSA.cpp + LOCAL_C_INCLUDES += $(TARGET_OUT_HEADERS)/mm-audio/libalsa-intf + LOCAL_C_INCLUDES += $(TOP)/hardware/libhardware_legacy/include + LOCAL_SHARED_LIBRARIES += libalsa-intf + LOCAL_SHARED_LIBRARIES += libhardware_legacy + LOCAL_SHARED_LIBRARIES += libpowermanager +else + LOCAL_SRC_FILES += LPAPlayer.cpp +ifeq ($(TARGET_USES_ION_AUDIO),true) + LOCAL_SRC_FILES += LPAPlayerION.cpp +else + LOCAL_SRC_FILES += LPAPlayerPMEM.cpp +endif +endif +endif + LOCAL_C_INCLUDES+= \ $(JNI_H_INCLUDE) \ $(TOP)/frameworks/base/include/media/stagefright/openmax \ @@ -92,7 +110,7 @@ LOCAL_C_INCLUDES+= \ $(TOP)/external/tremolo \ $(TOP)/external/openssl/include -LOCAL_SHARED_LIBRARIES := \ +LOCAL_SHARED_LIBRARIES += \ libbinder \ libmedia \ libutils \ @@ -122,6 +140,12 @@ LOCAL_STATIC_LIBRARIES := \ libstagefright_id3 \ libFLAC \ +ifeq ($(TARGET_USES_QCOM_LPA),true) +LOCAL_STATIC_LIBRARIES += \ + libstagefright_aacdec \ + libstagefright_mp3dec +endif + ifeq ($(BOARD_HAVE_CODEC_SUPPORT),SAMSUNG_CODEC_SUPPORT) LOCAL_CFLAGS += 
-DSAMSUNG_CODEC_SUPPORT endif diff --git a/media/libstagefright/AwesomePlayer.cpp b/media/libstagefright/AwesomePlayer.cpp index 483eb24..a2b120b 100644 --- a/media/libstagefright/AwesomePlayer.cpp +++ b/media/libstagefright/AwesomePlayer.cpp @@ -38,6 +38,9 @@ #include <media/stagefright/foundation/hexdump.h> #include <media/stagefright/foundation/ADebug.h> #include <media/stagefright/AudioPlayer.h> +#ifdef WITH_QCOM_LPA +#include <media/stagefright/LPAPlayer.h> +#endif #include <media/stagefright/DataSource.h> #include <media/stagefright/FileSource.h> #include <media/stagefright/MediaBuffer.h> @@ -52,6 +55,9 @@ #include <gui/SurfaceTextureClient.h> #include <surfaceflinger/ISurfaceComposer.h> +#include <cutils/properties.h> + +#include <media/stagefright/foundation/ALooper.h> #include <media/stagefright/foundation/AMessage.h> #include <cutils/properties.h> @@ -904,7 +910,51 @@ status_t AwesomePlayer::play_l() { if (mAudioSource != NULL) { if (mAudioPlayer == NULL) { if (mAudioSink != NULL) { +#ifndef WITH_QCOM_LPA mAudioPlayer = new AudioPlayer(mAudioSink, this); +#else + sp<MetaData> format = mAudioTrack->getFormat(); + const char *mime; + bool success = format->findCString(kKeyMIMEType, &mime); + CHECK(success); + + int64_t durationUs; + success = format->findInt64(kKeyDuration, &durationUs); + /* + * Some clips may not have kKeyDuration set, especially so for clips in a MP3 + * container with the Frames field absent in the Xing header. 
+ */ + if (!success) + durationUs = 0; + + LOGV("LPAPlayer::getObjectsAlive() %d",LPAPlayer::objectsAlive); + int32_t isFormatAdif = 0; + format->findInt32(kkeyAacFormatAdif, &isFormatAdif); + + char lpaDecode[128]; + property_get("lpa.decode",lpaDecode,"0"); + if(strcmp("true",lpaDecode) == 0) + { + LOGV("LPAPlayer::getObjectsAlive() %d",LPAPlayer::objectsAlive); + if ( durationUs > 60000000 && !isFormatAdif + &&(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG) || !strcasecmp(mime,MEDIA_MIMETYPE_AUDIO_AAC)) + && LPAPlayer::objectsAlive == 0 && mVideoSource == NULL) { + LOGE("LPAPlayer created, LPA MODE detected mime %s duration %d\n", mime, durationUs); + bool initCheck = false; + mAudioPlayer = new LPAPlayer(mAudioSink, initCheck, this); + if(!initCheck) { + delete mAudioPlayer; + mAudioPlayer = NULL; + } + } + } + if(mAudioPlayer == NULL) { + LOGE("AudioPlayer created, Non-LPA mode mime %s duration %d\n", mime, durationUs); + mAudioPlayer = new AudioPlayer(mAudioSink, this); + } + + LOGV("Setting Audio source"); +#endif mAudioPlayer->setSource(mAudioSource); mTimeSource = mAudioPlayer; @@ -1430,10 +1480,42 @@ status_t AwesomePlayer::initAudioDecoder() { if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) { mAudioSource = mAudioTrack; } else { +#ifdef WITH_QCOM_LPA + // For LPA Playback use the decoder without OMX layer + char lpaDecode[128]; + char *matchComponentName = NULL; + property_get("lpa.decode",lpaDecode,"0"); + if(strcmp("true",lpaDecode) == 0 && mVideoSource == NULL) { + const char *mime; + bool success = meta->findCString(kKeyMIMEType, &mime); + CHECK(success); + int64_t durationUs; + success = meta->findInt64(kKeyDuration, &durationUs); + if (!success) durationUs = 0; + int32_t isFormatAdif = 0; + meta->findInt32(kkeyAacFormatAdif, &isFormatAdif); + + if ( (durationUs > 60000000) && !isFormatAdif && LPAPlayer::objectsAlive == 0) { + if(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) { + LOGV("matchComponentName is set to MP3Decoder"); + matchComponentName= 
"MP3Decoder"; + } + if(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC)) { + LOGV("matchComponentName is set to AACDecoder"); + matchComponentName= "AACDecoder"; + } + } + } +#endif mAudioSource = OMXCodec::Create( mClient.interface(), mAudioTrack->getFormat(), false, // createEncoder +#ifndef WITH_QCOM_LPA mAudioTrack); +#else + mAudioTrack, + matchComponentName); +#endif } if (mAudioSource != NULL) { diff --git a/media/libstagefright/LPAPlayer.cpp b/media/libstagefright/LPAPlayer.cpp new file mode 100644 index 0000000..8e85e40 --- /dev/null +++ b/media/libstagefright/LPAPlayer.cpp @@ -0,0 +1,1604 @@ +/*
+ * Copyright (C) 2009 The Android Open Source Project
+ * Copyright (c) 2009-2012, Code Aurora Forum. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define LOG_NDEBUG 0
+#define LOG_TAG "LPAPlayer"
+#include <utils/Log.h>
+#include <utils/threads.h>
+
+#include <sys/prctl.h>
+#include <sys/resource.h>
+
+#include <binder/IPCThreadState.h>
+#include <media/AudioTrack.h>
+
+#include <media/stagefright/LPAPlayer.h>
+#include <media/stagefright/MediaDebug.h>
+#include <media/stagefright/MediaDefs.h>
+#include <media/stagefright/MediaErrors.h>
+#include <media/stagefright/MediaSource.h>
+#include <media/stagefright/MetaData.h>
+
+#include <linux/unistd.h>
+
+#include "include/AwesomePlayer.h"
+
+#define MEM_BUFFER_SIZE 524288
+//#define PMEM_BUFFER_SIZE (4800 * 4)
+#define MEM_BUFFER_COUNT 4
+
+namespace android {
+int LPAPlayer::objectsAlive = 0;
+
+LPAPlayer::LPAPlayer(
+        const sp<MediaPlayerBase::AudioSink> &audioSink, bool &initCheck,
+        AwesomePlayer *observer)
+    : AudioPlayer(audioSink, observer),
+      mInputBuffer(NULL),
+      mSampleRate(0),
+      mLatencyUs(0),
+      mFrameSize(0),
+      mNumFramesPlayed(0),
+      mPositionTimeMediaUs(-1),
+      mPositionTimeRealUs(-1),
+      mSeeking(false),
+      mInternalSeeking(false),
+      mReachedEOS(false),
+      mFinalStatus(OK),
+      mStarted(false),
+      mIsFirstBuffer(false),
+      mFirstBufferResult(OK),
+      mFirstBuffer(NULL),
+      mAudioSink(audioSink),
+      mObserver(observer) {
+ LOGV("LPAPlayer::LPAPlayer() ctor");
+ a2dpDisconnectPause = false;
+ mSeeked = false;
+ objectsAlive++;
+ timeStarted = 0;
+ numChannels = 0;
+ afd = -1;
+ ionfd = -1;
+ timePlayed = 0;
+ isPaused = false;
+ bIsA2DPEnabled = false;
+ mAudioFlinger = NULL;
+ AudioFlingerClient = NULL;
+ eventThreadCreated = false;
+ /* Initialize Suspend/Resume related variables */
+ mQueue.start();
+ mQueueStarted = true;
+ mPauseEvent = new TimedEvent(this, &LPAPlayer::onPauseTimeOut);
+ mPauseEventPending = false;
+ mPlaybackSuspended = false;
+ bIsAudioRouted = false;
+ mIsDriverStarted = false;
+
+ LOGV("Opening pcm_dec driver");
+ afd = open("/dev/msm_pcm_lp_dec", O_WRONLY | O_NONBLOCK);
+ mSourceEmpty = true;
+ if ( afd < 0 ) {
+ LOGE("pcm_lp_dec: cannot open pcm_dec device and the error is %d", errno);
+ initCheck = false;
+ return;
+ } else {
+ initCheck = true;
+ LOGV("pcm_lp_dec: pcm_lp_dec Driver opened");
+ }
+ getAudioFlinger();
+ LOGV("Registering client with AudioFlinger");
+ mAudioFlinger->registerClient(AudioFlingerClient);
+ mAudioSinkOpen = false;
+ a2dpThreadStarted = true;
+ asyncReset = false;
+
+ bEffectConfigChanged = false;
+}
+
+LPAPlayer::~LPAPlayer() {
+ LOGV("LPAPlayer::~LPAPlayer()");
+ if (mQueueStarted) {
+ mQueue.stop();
+ }
+ if (mStarted) {
+ reset();
+ }
+ if (mAudioFlinger != NULL)
+ mAudioFlinger->deregisterClient(AudioFlingerClient);
+ objectsAlive--;
+}
+
+void LPAPlayer::getAudioFlinger() {
+ Mutex::Autolock _l(AudioFlingerLock);
+
+ if ( mAudioFlinger.get() == 0 ) {
+ sp<IServiceManager> sm = defaultServiceManager();
+ sp<IBinder> binder;
+ do {
+ binder = sm->getService(String16("media.audio_flinger"));
+ if ( binder != 0 )
+ break;
+ LOGW("AudioFlinger not published, waiting...");
+ usleep(500000); // 0.5 s
+ } while ( true );
+ if ( AudioFlingerClient == NULL ) {
+ AudioFlingerClient = new AudioFlingerLPAdecodeClient(this);
+ }
+
+ binder->linkToDeath(AudioFlingerClient);
+ mAudioFlinger = interface_cast<IAudioFlinger>(binder);
+ }
+ LOGE_IF(mAudioFlinger==0, "no AudioFlinger!?");
+}
+
+LPAPlayer::AudioFlingerLPAdecodeClient::AudioFlingerLPAdecodeClient(void *obj)
+{
+ LOGV("LPAPlayer::AudioFlingerLPAdecodeClient::AudioFlingerLPAdecodeClient");
+ pBaseClass = (LPAPlayer*)obj;
+}
+
+void LPAPlayer::AudioFlingerLPAdecodeClient::binderDied(const wp<IBinder>& who) {
+ Mutex::Autolock _l(pBaseClass->AudioFlingerLock);
+
+ pBaseClass->mAudioFlinger.clear();
+ LOGW("AudioFlinger server died!");
+}
+
+void LPAPlayer::AudioFlingerLPAdecodeClient::ioConfigChanged(int event, int ioHandle, void *param2) {
+ LOGV("ioConfigChanged() event %d", event);
+
+ if ( event != AudioSystem::A2DP_OUTPUT_STATE &&
+ event != AudioSystem::EFFECT_CONFIG_CHANGED) {
+ return;
+ }
+
+ switch ( event ) {
+ case AudioSystem::A2DP_OUTPUT_STATE:
+ {
+ LOGV("ioConfigChanged() A2DP_OUTPUT_STATE iohandle is %d with A2DPEnabled in %d", ioHandle, pBaseClass->bIsA2DPEnabled);
+ if ( -1 == ioHandle ) {
+ if ( pBaseClass->bIsA2DPEnabled ) {
+ pBaseClass->bIsA2DPEnabled = false;
+ if (pBaseClass->mStarted) {
+ pBaseClass->handleA2DPSwitch();
+ }
+ LOGV("ioConfigChanged:: A2DP Disabled");
+ }
+ } else {
+ if ( !pBaseClass->bIsA2DPEnabled ) {
+
+ pBaseClass->bIsA2DPEnabled = true;
+ if (pBaseClass->mStarted) {
+ pBaseClass->handleA2DPSwitch();
+ }
+
+ LOGV("ioConfigChanged:: A2DP Enabled");
+ }
+ }
+ }
+ break;
+ case AudioSystem::EFFECT_CONFIG_CHANGED:
+ {
+ LOGV("Received notification for change in effect module");
+ // Seek to current media time - flush the decoded buffers with the driver
+ if(!pBaseClass->bIsA2DPEnabled) {
+ pthread_mutex_lock(&pBaseClass->effect_mutex);
+ pBaseClass->bEffectConfigChanged = true;
+ pthread_mutex_unlock(&pBaseClass->effect_mutex);
+ // Signal effects thread to re-apply effects
+ LOGV("Signalling Effects Thread");
+ pthread_cond_signal(&pBaseClass->effect_cv);
+ }
+ }
+ }
+
+ LOGV("ioConfigChanged Out");
+}
+
+void LPAPlayer::handleA2DPSwitch() {
+ Mutex::Autolock autoLock(mLock);
+
+    LOGV("handleA2DPSwitch()");
+ if (bIsA2DPEnabled) {
+ if (!isPaused) {
+ if(mIsDriverStarted) {
+ if (ioctl(afd, AUDIO_PAUSE, 1) < 0) {
+ LOGE("AUDIO PAUSE failed");
+ }
+ }
+ /* Set timePlayed to time where we are pausing */
+ timePlayed += (nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)) - timeStarted);
+ timeStarted = 0;
+ LOGV("paused for bt switch");
+ }
+
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs = timePlayed;
+
+ if(mIsDriverStarted) {
+ mIsDriverStarted = false;
+ if (ioctl(afd, AUDIO_STOP, 0) < 0) {
+ LOGE("%s: Audio stop event failed", __func__);
+ }
+ }
+ } else {
+ if (!isPaused) {
+ timePlayed += (nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)) - timeStarted);
+ timeStarted = 0;
+ }
+
+ a2dpDisconnectPause = true;
+ }
+}
+
+void LPAPlayer::setSource(const sp<MediaSource> &source) {
+ CHECK_EQ(mSource, NULL);
+ LOGV("Setting source from LPA Player");
+ mSource = source;
+}
+
+status_t LPAPlayer::start(bool sourceAlreadyStarted) {
+ CHECK(!mStarted);
+ CHECK(mSource != NULL);
+
+ LOGV("start: sourceAlreadyStarted %d", sourceAlreadyStarted);
+    //If the source is not already started, start it
+ status_t err;
+ if (!sourceAlreadyStarted) {
+ err = mSource->start();
+
+ if (err != OK) {
+ return err;
+ }
+ }
+
+    //Create the event, decoder and a2dp threads and initialize all the
+    //mutexes and condition variables
+ createThreads();
+ LOGV("All Threads Created.");
+
+    // We allow an optional INFO_FORMAT_CHANGED at the very beginning
+    // of playback. If there is one, getFormat below will retrieve the
+    // updated format; if there isn't, we'll stash away the valid buffer
+    // of data to be used on the first audio callback.
+
+ CHECK(mFirstBuffer == NULL);
+
+ MediaSource::ReadOptions options;
+ if (mSeeking) {
+ options.setSeekTo(mSeekTimeUs);
+ mSeeking = false;
+ }
+
+ mFirstBufferResult = mSource->read(&mFirstBuffer, &options);
+ if (mFirstBufferResult == INFO_FORMAT_CHANGED) {
+ LOGV("INFO_FORMAT_CHANGED!!!");
+ CHECK(mFirstBuffer == NULL);
+ mFirstBufferResult = OK;
+ mIsFirstBuffer = false;
+ } else {
+ mIsFirstBuffer = true;
+ }
+
+ /*TODO: Check for bA2dpEnabled */
+
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+
+ if ( afd >= 0 ) {
+ struct msm_audio_config config;
+ if ( ioctl(afd, AUDIO_GET_CONFIG, &config) < 0 ) {
+ LOGE("could not get config");
+ close(afd);
+ afd = -1;
+ return BAD_VALUE;
+ }
+
+ config.sample_rate = mSampleRate;
+ config.channel_count = numChannels;
+        LOGV("in initiate_play, sample_rate=%d and channel count=%d\n", mSampleRate, numChannels);
+ if ( ioctl(afd, AUDIO_SET_CONFIG, &config) < 0 ) {
+ LOGE("could not set config");
+ close(afd);
+ afd = -1;
+ return BAD_VALUE;
+ }
+ }
+
+ // Get the session id from the LPA Driver
+ // Register the session id with HAL for routing
+ if (mAudioSink.get() != NULL) {
+ unsigned short decId;
+ if ( ioctl(afd, AUDIO_GET_SESSION_ID, &decId) == -1 ) {
+ LOGE("AUDIO_GET_SESSION_ID FAILED\n");
+ return BAD_VALUE;
+ } else {
+ sessionId = (int)decId;
+ LOGV("AUDIO_GET_SESSION_ID success : decId = %d", decId);
+ }
+
+ if (!bIsA2DPEnabled) {
+ LOGV("Opening a routing session for audio playback: sessionId = %d mSampleRate %d numChannels %d",
+ sessionId, mSampleRate, numChannels);
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, sessionId, mSampleRate, numChannels);
+ if (err != OK) {
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+
+ if (!sourceAlreadyStarted) {
+ mSource->stop();
+ }
+
+ LOGE("Opening a routing session failed");
+ close(afd);
+ afd = -1;
+
+ return err;
+ }
+ LOGV("AudioSink Opened a session(%d)",sessionId);
+
+ //Start the Driver
+ if (ioctl(afd, AUDIO_START,0) < 0) {
+ LOGE("Driver start failed!");
+ return BAD_VALUE;
+ }
+ mIsDriverStarted = true;
+ bIsAudioRouted = true;
+ LOGV("LPA Driver Started");
+ } else {
+ LOGV("Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT, DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("After Audio Sink Open");
+ mAudioSinkOpen = true;
+ //pthread_cond_signal(&a2dp_cv);
+ }
+ } else {
+ close(afd);
+ afd = -1;
+ LOGE("Audiosink is NULL");
+ return BAD_VALUE;
+ }
+
+ mStarted = true;
+
+ if (timeStarted == 0) {
+ timeStarted = nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC));
+ }
+
+ LOGV("Waking up decoder thread");
+ pthread_cond_signal(&decoder_cv);
+ return OK;
+}
+
+status_t LPAPlayer::seekTo(int64_t time_us) {
+ Mutex::Autolock autoLock(mLock);
+    LOGV("seekTo: time_us %lld", time_us);
+ if ( mReachedEOS ) {
+ mReachedEOS = false;
+ LOGV("Signalling to Decoder Thread");
+ pthread_cond_signal(&decoder_cv);
+ }
+ mSeeking = true;
+
+ mSeekTimeUs = time_us;
+ timePlayed = time_us;
+ timeStarted = 0;
+
+ LOGV("In seekTo(), mSeekTimeUs %lld",mSeekTimeUs);
+ if (!bIsA2DPEnabled) {
+ if(mIsDriverStarted) {
+ if (!isPaused) {
+ if (ioctl(afd, AUDIO_PAUSE, 1) < 0) {
+ LOGE("Audio Pause failed");
+ }
+ }
+ if (ioctl(afd, AUDIO_FLUSH, 0) < 0) {
+ LOGE("Audio Flush failed");
+ }
+ LOGV("Paused case, %d",isPaused);
+ if (isPaused) {
+ LOGV("AUDIO pause in seek()");
+ if (ioctl(afd, AUDIO_PAUSE, 1) < 0) {
+ LOGE("Audio Pause failed");
+ return BAD_VALUE;
+ }
+ }
+ }
+ } else {
+ mSeeked = true;
+ if (!isPaused) {
+ mAudioSink->pause();
+ mAudioSink->flush();
+ mAudioSink->start();
+ }
+ }
+
+ return OK;
+}
+
+void LPAPlayer::pause(bool playPendingSamples) {
+ CHECK(mStarted);
+
+ LOGV("pause: playPendingSamples %d", playPendingSamples);
+ isPaused = true;
+ if (playPendingSamples) {
+ if (!bIsA2DPEnabled) {
+ if (fsync(afd) != 0)
+ LOGE("fsync failed.");
+ if(!mPauseEventPending) {
+ LOGV("Posting an event for Pause timeout");
+ mQueue.postEventWithDelay(mPauseEvent, LPA_PAUSE_TIMEOUT_USEC);
+ mPauseEventPending = true;
+ }
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->pauseSession();
+ }
+ timePlayed += (nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)) - timeStarted);
+ }
+ else {
+ if (mAudioSink.get() != NULL)
+ mAudioSink->stop();
+ }
+ } else {
+ if (a2dpDisconnectPause) {
+ mAudioSink->pause();
+ } else {
+ if (!bIsA2DPEnabled) {
+ LOGV("LPAPlayer::Pause - Pause driver");
+ if (ioctl(afd, AUDIO_PAUSE, 1) < 0) {
+ LOGE("Audio Pause failed");
+ }
+ if(!mPauseEventPending) {
+ LOGV("Posting an event for Pause timeout");
+ mQueue.postEventWithDelay(mPauseEvent, LPA_PAUSE_TIMEOUT_USEC);
+ mPauseEventPending = true;
+ }
+
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->pauseSession();
+ }
+ } else {
+ mAudioSink->pause();
+ mAudioSink->flush();
+ }
+ timePlayed += (nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)) - timeStarted);
+ }
+ }
+}
+
+void LPAPlayer::resume() {
+ LOGV("resume: isPaused %d",isPaused);
+ Mutex::Autolock autoLock(resumeLock);
+ if ( isPaused) {
+ CHECK(mStarted);
+ if (bIsA2DPEnabled && a2dpDisconnectPause) {
+ isPaused = false;
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs = timePlayed;
+ a2dpDisconnectPause = false;
+ mAudioSink->start();
+ pthread_cond_signal(&decoder_cv);
+ pthread_cond_signal(&a2dp_cv);
+ }
+ else if (a2dpDisconnectPause) {
+ LOGV("A2DP disconnect resume");
+ mAudioSink->pause();
+ mAudioSink->stop();
+ mAudioSink->close();
+ mAudioSinkOpen = false;
+ LOGV("resume:: opening audio session with mSampleRate %d numChannels %d sessionId %d",
+ mSampleRate, numChannels, sessionId);
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, sessionId, mSampleRate, numChannels);
+ a2dpDisconnectPause = false;
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs = timePlayed;
+
+ if (ioctl(afd, AUDIO_START,0) < 0) {
+ LOGE("Driver start failed!");// TODO: How to report this error and stop playback ??
+ }
+ mIsDriverStarted = true;
+ LOGV("LPA Driver Started");
+
+ pthread_cond_signal(&event_cv);
+ pthread_cond_signal(&a2dp_cv);
+ pthread_cond_signal(&decoder_cv);
+
+ } else {
+ if (!bIsA2DPEnabled) {
+ LOGV("LPAPlayer::resume - Resuming Driver");
+
+ if(mPauseEventPending) {
+                    LOGV("Resume(): Cancelling the pauseTimeout event");
+ mPauseEventPending = false;
+ mQueue.cancelEvent(mPauseEvent->eventID());
+ }
+
+ if(!bIsAudioRouted) {
+ unsigned short decId;
+ int sessionId;
+
+ mPlaybackSuspended = false;
+
+ CHECK(afd != -1);
+ if ( ioctl(afd, AUDIO_GET_SESSION_ID, &decId) == -1 ) {
+ LOGE("AUDIO_GET_SESSION_ID FAILED\n");
+ } else {
+ sessionId = (int)decId;
+ LOGV("AUDIO_GET_SESSION_ID success : decId = %d", decId);
+ }
+
+ LOGV("Resume:: Opening a session for playback: sessionId = %d", sessionId);
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, sessionId);
+ if (err != OK) {
+ LOGE("Opening a routing session failed");
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+
+ close(afd);
+ afd = -1;
+ return;
+ }
+ LOGV("Resume:: AudioSink Opened a session(%d)",sessionId);
+ //Start the Driver
+ LOGV("Resume:: Starting LPA Driver");
+ if (ioctl(afd, AUDIO_START,0) < 0) {
+ LOGE("Driver start failed!");
+ return; // TODO: How to report this error and stop playback ??
+ }
+ mIsDriverStarted = true;
+ bIsAudioRouted = true;
+
+ LOGV("Resume: Waking up decoder thread");
+ pthread_cond_signal(&decoder_cv);
+ } else {
+ if (ioctl(afd, AUDIO_PAUSE, 0) < 0) {
+ LOGE("Resume:: LPA driver resume failed");
+ // TODO: How to report this error and stop playback ??
+ }
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->resumeSession();
+ }
+ }
+ } else {
+ isPaused = false;
+
+ if (!mAudioSinkOpen) {
+ if (mAudioSink.get() != NULL) {
+ LOGV("%s mAudioSink close session", __func__);
+ mAudioSink->closeSession();
+ } else {
+ LOGE("close session NULL");
+ }
+
+ LOGV("Resume: Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT,
+ DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("Resume: After Audio Sink Open");
+ mAudioSinkOpen = true;
+
+ LOGV("Resume: Waking up the decoder thread");
+ pthread_cond_signal(&decoder_cv);
+ } else {
+ /* If AudioSink is already open just start it */
+ mAudioSink->start();
+ }
+ LOGV("Waking up A2dp thread");
+ pthread_cond_signal(&a2dp_cv);
+ }
+ }
+ isPaused = false;
+ /* Set timeStarted to current systemTime */
+ timeStarted = nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC));
+ }
+}
+
+void LPAPlayer::reset() {
+ CHECK(mStarted);
+ LOGV("Reset called!!!!!");
+ asyncReset = true;
+
+ if(!bIsA2DPEnabled) {
+ mIsDriverStarted = false;
+ ioctl(afd,AUDIO_STOP,0);
+ }
+
+ LOGV("reset() requestQueue.size() = %d, responseQueue.size() = %d effectsQueue.size() = %d",
+ memBuffersRequestQueue.size(), memBuffersResponseQueue.size(), effectsQueue.size());
+
+ // make sure the Effects thread has exited
+ requestAndWaitForEffectsThreadExit();
+
+ // make sure Decoder thread has exited
+ requestAndWaitForDecoderThreadExit();
+
+ // make sure the event thread also has exited
+ requestAndWaitForEventThreadExit();
+
+ requestAndWaitForA2DPThreadExit();
+
+ // Close the audiosink after all the threads exited to make sure
+ // there is no thread writing data to audio sink or applying effect
+ if (bIsA2DPEnabled) {
+ mAudioSink->close();
+ } else {
+ mAudioSink->closeSession();
+ }
+ mAudioSink.clear();
+
+ // Make sure to release any buffer we hold onto so that the
+ // source is able to stop().
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+
+ if (mInputBuffer != NULL) {
+ LOGV("AudioPlayer releasing input buffer.");
+ mInputBuffer->release();
+ mInputBuffer = NULL;
+ }
+
+ mSource->stop();
+
+ // The following hack is necessary to ensure that the OMX
+ // component is completely released by the time we may try
+ // to instantiate it again.
+ wp<MediaSource> tmp = mSource;
+ mSource.clear();
+ while (tmp.promote() != NULL) {
+ usleep(1000);
+ }
+
+ if ( afd >= 0 ) {
+ memBufferDeAlloc();
+ close(afd);
+ afd = -1;
+ }
+
+ LOGV("reset() after memBuffersRequestQueue.size() = %d, memBuffersResponseQueue.size() = %d ",memBuffersRequestQueue.size(),memBuffersResponseQueue.size());
+
+ mNumFramesPlayed = 0;
+ mPositionTimeMediaUs = -1;
+ mPositionTimeRealUs = -1;
+ mSeeking = false;
+ mInternalSeeking = false;
+ mReachedEOS = false;
+ mFinalStatus = OK;
+ mStarted = false;
+}
+
+
+bool LPAPlayer::isSeeking() {
+ Mutex::Autolock autoLock(mLock);
+ return mSeeking;
+}
+
+bool LPAPlayer::reachedEOS(status_t *finalStatus) {
+ *finalStatus = OK;
+
+ Mutex::Autolock autoLock(mLock);
+ *finalStatus = mFinalStatus;
+ return mReachedEOS;
+}
+
+
+void *LPAPlayer::decoderThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->decoderThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::decoderThreadEntry() {
+
+ pthread_mutex_lock(&decoder_mutex);
+
+ setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_AUDIO);
+ prctl(PR_SET_NAME, (unsigned long)"LPA DecodeThread", 0, 0, 0);
+
+ LOGV("decoderThreadEntry wait for signal \n");
+ if (!mStarted) {
+ pthread_cond_wait(&decoder_cv, &decoder_mutex);
+ }
+ LOGV("decoderThreadEntry ready to work \n");
+ pthread_mutex_unlock(&decoder_mutex);
+
+
+ audio_register_memory();
+ while (1) {
+ pthread_mutex_lock(&mem_request_mutex);
+
+ if (killDecoderThread) {
+ pthread_mutex_unlock(&mem_request_mutex);
+ break;
+ }
+
+ LOGV("decoder memBuffersRequestQueue.size() = %d, memBuffersResponseQueue.size() = %d ",
+ memBuffersRequestQueue.size(),memBuffersResponseQueue.size());
+
+ if (memBuffersRequestQueue.empty() || a2dpDisconnectPause || mReachedEOS ||
+ (bIsA2DPEnabled && !mAudioSinkOpen) || asyncReset || (!bIsA2DPEnabled && !mIsDriverStarted)) {
+ LOGV("decoderThreadEntry: a2dpDisconnectPause %d mReachedEOS %d bIsA2DPEnabled %d "
+ "mAudioSinkOpen %d asyncReset %d mIsDriverStarted %d", a2dpDisconnectPause,
+ mReachedEOS, bIsA2DPEnabled, mAudioSinkOpen, asyncReset, mIsDriverStarted);
+ LOGV("decoderThreadEntry: waiting on decoder_cv");
+ pthread_cond_wait(&decoder_cv, &mem_request_mutex);
+ pthread_mutex_unlock(&mem_request_mutex);
+ LOGV("decoderThreadEntry: received a signal to wake up");
+ continue;
+ }
+
+ List<BuffersAllocated>::iterator it = memBuffersRequestQueue.begin();
+ BuffersAllocated buf = *it;
+ memBuffersRequestQueue.erase(it);
+ pthread_mutex_unlock(&mem_request_mutex);
+
+ //Queue the buffers back to Request queue
+ if (mReachedEOS || (bIsA2DPEnabled && !mAudioSinkOpen) || asyncReset || a2dpDisconnectPause) {
+ LOGV("%s: mReachedEOS %d bIsA2DPEnabled %d ", __func__, mReachedEOS, bIsA2DPEnabled);
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_back(buf);
+ pthread_mutex_unlock(&mem_request_mutex);
+ }
+ //Queue up the buffers for writing either for A2DP or LPA Driver
+ else {
+ struct msm_audio_aio_buf aio_buf_local;
+
+ LOGV("Calling fillBuffer for size %d",MEM_BUFFER_SIZE);
+ buf.bytesToWrite = fillBuffer(buf.localBuf, MEM_BUFFER_SIZE);
+ LOGV("fillBuffer returned size %d",buf.bytesToWrite);
+
+ /* TODO: Check if we have to notify the app if an error occurs */
+ if (!bIsA2DPEnabled) {
+ if ( buf.bytesToWrite > 0) {
+ memset(&aio_buf_local, 0, sizeof(msm_audio_aio_buf));
+ aio_buf_local.buf_addr = buf.memBuf;
+ aio_buf_local.buf_len = buf.bytesToWrite;
+ aio_buf_local.data_len = buf.bytesToWrite;
+ aio_buf_local.private_data = (void*) buf.memFd;
+
+ if ( (buf.bytesToWrite % 2) != 0 ) {
+                        LOGV("Padding odd byte count to an even length");
+ aio_buf_local.data_len += 1;
+ }
+
+ if (timeStarted == 0) {
+ timeStarted = nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC));
+ }
+ } else {
+ /* Put the buffer back into requestQ */
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_back(buf);
+ pthread_mutex_unlock(&mem_request_mutex);
+                    /* This is a zero-byte buffer - no need to put it in the response Q */
+ if (mObserver && mReachedEOS && memBuffersResponseQueue.empty()) {
+ LOGV("Posting EOS event to AwesomePlayer");
+ mObserver->postAudioEOS();
+ }
+ continue;
+ }
+ }
+ pthread_mutex_lock(&mem_response_mutex);
+ memBuffersResponseQueue.push_back(buf);
+ pthread_mutex_unlock(&mem_response_mutex);
+
+ if (bIsA2DPEnabled && !mAudioSinkOpen) {
+ LOGV("Close Session");
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->closeSession();
+ LOGV("mAudioSink close session");
+ } else {
+ LOGE("close session NULL");
+ }
+
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+ LOGV("Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT, DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("After Audio Sink Open");
+ }
+
+ if (!bIsA2DPEnabled){
+ pthread_cond_signal(&event_cv);
+            // Make sure the buffer is added to the response Q before applying effects.
+            // If the effect configuration changes while effects are being applied to the
+            // current buffer, they will be re-applied since the buffer is already in the responseQ.
+ if (!asyncReset) {
+ pthread_mutex_lock(&apply_effect_mutex);
+ LOGV("decoderThread: applying effects on mem buf with fd %d", buf.memFd);
+ mAudioFlinger->applyEffectsOn((int16_t*)buf.localBuf,
+ (int16_t*)buf.memBuf,
+ (int)buf.bytesToWrite);
+
+ pthread_mutex_unlock(&apply_effect_mutex);
+
+ LOGV("decoderThread: Writing buffer to driver with mem fd %d", buf.memFd);
+ if ( ioctl(afd, AUDIO_ASYNC_WRITE, &aio_buf_local) < 0 ) {
+ LOGE("error on async write\n");
+ }
+ }
+ }
+ else
+ pthread_cond_signal(&a2dp_cv);
+ }
+ }
+ decoderThreadAlive = false;
+ LOGV("decoder Thread is dying");
+}
+
+void *LPAPlayer::eventThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->eventThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::eventThreadEntry() {
+ struct msm_audio_event cur_pcmdec_event;
+
+ pthread_mutex_lock(&event_mutex);
+ eventThreadCreated = true;
+ pthread_cond_signal(&event_thread_cv);
+ int rc = 0;
+ setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_AUDIO);
+ prctl(PR_SET_NAME, (unsigned long)"LPA EventThread", 0, 0, 0);
+
+ LOGV("eventThreadEntry wait for signal \n");
+ pthread_cond_wait(&event_cv, &event_mutex);
+ LOGV("eventThreadEntry ready to work \n");
+ pthread_mutex_unlock(&event_mutex);
+
+ if (killEventThread) {
+ eventThreadAlive = false;
+ LOGV("Event Thread is dying.");
+ return;
+ }
+
+ while (1) {
+ //Wait for an event to occur
+ rc = ioctl(afd, AUDIO_GET_EVENT, &cur_pcmdec_event);
+ LOGV("pcm dec Event Thread rc = %d and errno is %d",rc, errno);
+
+ if ( (rc < 0) && (errno == ENODEV ) ) {
+ LOGV("AUDIO_ABORT_GET_EVENT called. Exit the thread");
+ break;
+ }
+
+ switch ( cur_pcmdec_event.event_type ) {
+ case AUDIO_EVENT_WRITE_DONE:
+ {
+ LOGV("WRITE_DONE: addr %p len %d and fd is %d\n",
+ cur_pcmdec_event.event_payload.aio_buf.buf_addr,
+ cur_pcmdec_event.event_payload.aio_buf.data_len,
+ (int32_t) cur_pcmdec_event.event_payload.aio_buf.private_data);
+ Mutex::Autolock autoLock(mLock);
+                mNumFramesPlayed += cur_pcmdec_event.event_payload.aio_buf.buf_len / mFrameSize;
+ pthread_mutex_lock(&mem_response_mutex);
+ BuffersAllocated buf = *(memBuffersResponseQueue.begin());
+ for (List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ it != memBuffersResponseQueue.end(); ++it) {
+ if (it->memBuf == cur_pcmdec_event.event_payload.aio_buf.buf_addr) {
+ buf = *it;
+ memBuffersResponseQueue.erase(it);
+ break;
+ }
+ }
+
+ /* If the rendering is complete report EOS to the AwesomePlayer */
+ if (mObserver && !asyncReset && mReachedEOS && memBuffersResponseQueue.empty()) {
+ LOGV("Posting EOS event to AwesomePlayer");
+ mObserver->postAudioEOS();
+ }
+ if (memBuffersResponseQueue.empty() && bIsA2DPEnabled && !mAudioSinkOpen) {
+ LOGV("Close Session");
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->closeSession();
+ LOGV("mAudioSink close session");
+ } else {
+ LOGE("close session NULL");
+ }
+
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+ LOGV("Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT, DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("After Audio Sink Open");
+ mAudioSinkOpen = true;
+ }
+
+ pthread_mutex_unlock(&mem_response_mutex);
+
+ // Post buffer to request Q
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_back(buf);
+ pthread_mutex_unlock(&mem_request_mutex);
+
+ pthread_cond_signal(&decoder_cv);
+ }
+ break;
+ case AUDIO_EVENT_SUSPEND:
+ {
+ struct msm_audio_stats stats;
+ int nBytesConsumed = 0;
+
+ LOGV("AUDIO_EVENT_SUSPEND received\n");
+ if(mPauseEventPending) {
+ mPauseEventPending = false;
+ mQueue.cancelEvent(mPauseEvent->eventID());
+ } else {
+                    LOGV("Not paused, no need to honor the SUSPEND event");
+ break;
+ }
+ if(!bIsA2DPEnabled) {
+ if(!mPlaybackSuspended) {
+ mPlaybackSuspended = true;
+ // 1. Get the Byte count that is consumed
+ if ( ioctl(afd, AUDIO_GET_STATS, &stats) < 0 ) {
+                            LOGE("AUDIO_GET_STATS failed");
+ } else {
+ LOGV("Number of bytes consumed by DSP is %u", stats.byte_count);
+ nBytesConsumed = stats.byte_count;
+ }
+                            // Reset the EOS flag to resume playback from where we actually paused
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs = timePlayed;
+
+ // 2. Close the session
+ if(bIsAudioRouted) {
+ mAudioSink->closeSession();
+ bIsAudioRouted = false;
+ }
+
+ // 3. Call AUDIO_STOP on the Driver.
+ LOGV("Received AUDIO_EVENT_SUSPEND and calling AUDIO_STOP");
+ mIsDriverStarted = false;
+ if ( ioctl(afd, AUDIO_STOP, 0) < 0 ) {
+ LOGE("AUDIO_STOP failed");
+ }
+ break;
+ }
+
+ // 4. Close the session if existing
+ if(bIsAudioRouted) {
+ mAudioSink->closeSession();
+ bIsAudioRouted = false;
+ }
+ }
+ }
+ break;
+ case AUDIO_EVENT_RESUME:
+ {
+ LOGV("AUDIO_EVENT_RESUME received\n");
+ }
+ break;
+ default:
+ LOGV("Received Invalid Event from driver\n");
+ break;
+ }
+ }
+ eventThreadAlive = false;
+ LOGV("Event Thread is dying.");
+
+}
+
+void *LPAPlayer::A2DPThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->A2DPThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::A2DPThreadEntry() {
+ setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_AUDIO);
+ prctl(PR_SET_NAME, (unsigned long)"LPA A2DPThread", 0, 0, 0);
+
+ //TODO: Remove this
+/*
+ LOGV("a2dpThreadEntry wait for signal \n");
+ pthread_cond_wait(&a2dp_cv, &a2dp_mutex);
+ LOGV("a2dpThreadEntry ready to work \n");
+ pthread_mutex_unlock(&a2dp_mutex);
+
+ a2dpThreadStarted = true;
+
+ if (killA2DPThread) {
+ a2dpThreadAlive = false;
+ return;
+ }
+*/
+ while (1) {
+ /* If exitPending break here */
+ if (killA2DPThread) {
+ break;
+ }
+
+ pthread_mutex_lock(&mem_response_mutex);
+ if (memBuffersResponseQueue.empty() || !mAudioSinkOpen || isPaused || !bIsA2DPEnabled) {
+ LOGV("A2DPThreadEntry:: responseQ empty %d mAudioSinkOpen %d isPaused %d bIsA2DPEnabled %d",
+ memBuffersResponseQueue.empty(), mAudioSinkOpen, isPaused, bIsA2DPEnabled);
+ LOGV("A2DPThreadEntry:: Waiting on a2dp_cv");
+ pthread_cond_wait(&a2dp_cv, &mem_response_mutex);
+ LOGV("A2DPThreadEntry:: received signal to wake up");
+ // A2DP got disabled -- Queue up everything back to Request Queue
+ if (!bIsA2DPEnabled) {
+ pthread_mutex_lock(&mem_request_mutex);
+ while (!memBuffersResponseQueue.empty()) {
+ LOGV("BUF transfer");
+ List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ BuffersAllocated buf = *it;
+ memBuffersRequestQueue.push_back(buf);
+ memBuffersResponseQueue.erase(it);
+ }
+ pthread_mutex_unlock(&mem_request_mutex);
+ }
+ pthread_mutex_unlock(&mem_response_mutex);
+ }
+ //A2DP is enabled -- Continue normal Playback
+ else {
+ List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ BuffersAllocated buf = *it;
+ memBuffersResponseQueue.erase(it);
+ pthread_mutex_unlock(&mem_response_mutex);
+ bytesToWrite = buf.bytesToWrite;
+ LOGV("bytes To write:%d",bytesToWrite);
+ if (timeStarted == 0) {
+ LOGV("Time started in A2DP thread");
+ timeStarted = nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC));
+ }
+ //LOGV("16 bit :: cmdid = %d, len = %u, bytesAvailInBuffer = %u, bytesToWrite = %u", cmdid, len, bytesAvailInBuffer, bytesToWrite);
+
+            uint32_t bytesWritten = 0;
+            uint32_t bytesAvailInBuffer = 0;
+            char *data = (char *) buf.localBuf;
+
+ while (bytesToWrite) {
+ /* If exitPending break here */
+ if (killA2DPThread || !bIsA2DPEnabled) {
+ LOGV("A2DPThreadEntry: A2DPThread set to be killed");
+ break;
+ }
+
+ bytesAvailInBuffer = mAudioSink->bufferSize();
+
+ uint32_t writeLen = bytesAvailInBuffer > bytesToWrite ? bytesToWrite : bytesAvailInBuffer;
+ //LOGV("16 bit :: cmdid = %d, len = %u, bytesAvailInBuffer = %u, bytesToWrite = %u", cmdid, len, bytesAvailInBuffer, bytesToWrite);
+ bytesWritten = mAudioSink->write(data, writeLen);
+ /*if ( bytesWritten != writeLen ) {
+ if (mSeeked) {
+ break;
+ }
+ LOGE("Error writing audio data");
+ pthread_mutex_lock(&a2dp_mutex);
+ pthread_cond_wait(&a2dp_cv, &a2dp_mutex);
+ pthread_mutex_unlock(&a2dp_mutex);
+ if (mSeeked) {
+ break;
+ }
+ }*/
+ if ( bytesWritten != writeLen ) {
+ //Paused - Wait till resume
+ if (isPaused) {
+ LOGV("Pausing A2DP playback");
+ pthread_mutex_lock(&a2dp_mutex);
+ pthread_cond_wait(&a2dp_cv, &a2dp_mutex);
+ pthread_mutex_unlock(&a2dp_mutex);
+ }
+
+ //Seeked: break out of loop, flush old buffers and write new buffers
+ LOGV("@_@bytes To write1:%d",bytesToWrite);
+ }
+ if (mSeeked) {
+ LOGV("Seeking A2DP Playback");
+ break;
+ }
+ data += bytesWritten;
+ bytesToWrite -= bytesWritten;
+ LOGV("@_@bytes To write2:%d",bytesToWrite);
+ }
+ if (mObserver && !asyncReset && mReachedEOS && memBuffersResponseQueue.empty()) {
+ LOGV("Posting EOS event to AwesomePlayer");
+ mObserver->postAudioEOS();
+ }
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_back(buf);
+ if (killA2DPThread) {
+ pthread_mutex_unlock(&mem_request_mutex);
+ break;
+ }
+ //flush out old buffer
+ if (mSeeked || !bIsA2DPEnabled) {
+ mSeeked = false;
+ LOGV("A2DPThread: Putting buffers back to requestQ from responseQ");
+ pthread_mutex_lock(&mem_response_mutex);
+ while (!memBuffersResponseQueue.empty()) {
+ List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ BuffersAllocated buf = *it;
+ memBuffersRequestQueue.push_back(buf);
+ memBuffersResponseQueue.erase(it);
+ }
+ pthread_mutex_unlock(&mem_response_mutex);
+ }
+ pthread_mutex_unlock(&mem_request_mutex);
+ // Signal decoder thread when a buffer is put back to request Q
+ pthread_cond_signal(&decoder_cv);
+ }
+ }
+ a2dpThreadAlive = false;
+
+ LOGV("AudioSink stop");
+ if(mAudioSinkOpen) {
+ mAudioSinkOpen = false;
+ mAudioSink->stop();
+ }
+
+ LOGV("A2DP Thread is dying.");
+}
+
+void *LPAPlayer::EffectsThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->EffectsThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::EffectsThreadEntry() {
+ while(1) {
+ if(killEffectsThread) {
+ break;
+ }
+ pthread_mutex_lock(&effect_mutex);
+
+ if(bEffectConfigChanged) {
+ bEffectConfigChanged = false;
+
+ // 1. Clear current effectQ
+ LOGV("Clearing EffectQ: size %d", effectsQueue.size());
+ while (!effectsQueue.empty()) {
+ List<BuffersAllocated>::iterator it = effectsQueue.begin();
+ effectsQueue.erase(it);
+ }
+
+ // 2. Lock the responseQ mutex
+ pthread_mutex_lock(&mem_response_mutex);
+
+ // 3. Copy responseQ to effectQ
+ LOGV("Copying responseQ to effectQ: responseQ size %d", memBuffersResponseQueue.size());
+ for (List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ it != memBuffersResponseQueue.end(); ++it) {
+ BuffersAllocated buf = *it;
+ effectsQueue.push_back(buf);
+ }
+
+ // 4. Unlock the responseQ mutex
+ pthread_mutex_unlock(&mem_response_mutex);
+ }
+ // If effectQ is empty just wait for a signal
+ // Else dequeue a buffer, apply effects and delete it from effectQ
+ if(effectsQueue.empty() || asyncReset || bIsA2DPEnabled) {
+ LOGV("EffectQ is empty or Reset called or A2DP enabled, waiting for signal");
+ pthread_cond_wait(&effect_cv, &effect_mutex);
+ LOGV("effectsThread: received signal to wake up");
+ pthread_mutex_unlock(&effect_mutex);
+ } else {
+ pthread_mutex_unlock(&effect_mutex);
+
+ List<BuffersAllocated>::iterator it = effectsQueue.begin();
+ BuffersAllocated buf = *it;
+
+ pthread_mutex_lock(&apply_effect_mutex);
+ LOGV("effectsThread: applying effects on %p fd %d", buf.memBuf, (int)buf.memFd);
+ mAudioFlinger->applyEffectsOn((int16_t*)buf.localBuf,
+ (int16_t*)buf.memBuf,
+ (int)buf.bytesToWrite);
+ pthread_mutex_unlock(&apply_effect_mutex);
+ effectsQueue.erase(it);
+ }
+ }
+ LOGV("Effects thread is dead");
+ effectsThreadAlive = false;
+}
+
+void LPAPlayer::createThreads() {
+
+ //Initialize all the Mutexes and Condition Variables
+ pthread_mutex_init(&mem_request_mutex, NULL);
+ pthread_mutex_init(&mem_response_mutex, NULL);
+ pthread_mutex_init(&decoder_mutex, NULL);
+ pthread_mutex_init(&event_mutex, NULL);
+ pthread_mutex_init(&a2dp_mutex, NULL);
+ pthread_mutex_init(&effect_mutex, NULL);
+ pthread_mutex_init(&apply_effect_mutex, NULL);
+
+ pthread_cond_init (&event_cv, NULL);
+ pthread_cond_init (&decoder_cv, NULL);
+ pthread_cond_init (&a2dp_cv, NULL);
+ pthread_cond_init (&effect_cv, NULL);
+ pthread_cond_init (&event_thread_cv, NULL);
+ // Create 4 threads Effect, decoder, event and A2dp
+ pthread_attr_t attr;
+ pthread_attr_init(&attr);
+ pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
+
+ killDecoderThread = false;
+ killEventThread = false;
+ killA2DPThread = false;
+ killEffectsThread = false;
+
+ decoderThreadAlive = true;
+ eventThreadAlive = true;
+ a2dpThreadAlive = true;
+ effectsThreadAlive = true;
+
+ LOGV("Creating Event Thread");
+ pthread_create(&eventThread, &attr, eventThreadWrapper, this);
+
+ LOGV("Creating decoder Thread");
+ pthread_create(&decoderThread, &attr, decoderThreadWrapper, this);
+
+ LOGV("Creating A2dp Thread");
+ pthread_create(&A2DPThread, &attr, A2DPThreadWrapper, this);
+
+ LOGV("Creating Effects Thread");
+ pthread_create(&EffectsThread, &attr, EffectsThreadWrapper, this);
+
+ pthread_attr_destroy(&attr);
+}
+
+
+size_t LPAPlayer::fillBuffer(void *data, size_t size) {
+    LOGV("fillBuffer");
+ if (mNumFramesPlayed == 0) {
+ LOGV("AudioCallback");
+ }
+
+    LOGV("Number of Frames Played: %lld", mNumFramesPlayed);
+ if (mReachedEOS) {
+ return 0;
+ }
+
+ size_t size_done = 0;
+ size_t size_remaining = size;
+ while (size_remaining > 0) {
+ MediaSource::ReadOptions options;
+ {
+ Mutex::Autolock autoLock(mLock);
+
+ if (mSeeking || mInternalSeeking) {
+ if (mIsFirstBuffer) {
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+ mIsFirstBuffer = false;
+ }
+
+ options.setSeekTo(mSeekTimeUs);
+
+ if (mInputBuffer != NULL) {
+ mInputBuffer->release();
+ mInputBuffer = NULL;
+ }
+
+ // This is to ignore the data already filled in the output buffer
+ size_done = 0;
+ size_remaining = size;
+
+ if (mSeeking){
+ mInternalSeeking = false;
+ }
+
+ mSeeking = false;
+ if (mObserver && !asyncReset && !mInternalSeeking) {
+ LOGV("fillBuffer: Posting audio seek complete event");
+ mObserver->postAudioSeekComplete();
+ }
+ mInternalSeeking = false;
+ }
+ }
+ if (mInputBuffer == NULL) {
+ status_t err;
+
+ if (mIsFirstBuffer) {
+ mInputBuffer = mFirstBuffer;
+ mFirstBuffer = NULL;
+ err = mFirstBufferResult;
+
+ mIsFirstBuffer = false;
+ } else {
+ err = mSource->read(&mInputBuffer, &options);
+ }
+
+ CHECK((err == OK && mInputBuffer != NULL)
+ || (err != OK && mInputBuffer == NULL));
+
+ Mutex::Autolock autoLock(mLock);
+
+ if (err != OK) {
+ LOGV("err != ok");
+ if (err == INFO_FORMAT_CHANGED) {
+ LOGV("INFO_FORMAT_CHANGED");
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+
+ int32_t numChannels;
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+
+ if(bIsA2DPEnabled) {
+ mAudioSink->stop();
+ mAudioSink->close();
+ mAudioSinkOpen = false;
+ status_t err = mAudioSink->open(
+ mSampleRate, numChannels, AUDIO_FORMAT_PCM_16_BIT,
+ DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ if (err != OK) {
+ mSource->stop();
+ return err;
+ }
+ mAudioSinkOpen = true;
+ mLatencyUs = (int64_t)mAudioSink->latency() * 1000;
+ mFrameSize = mAudioSink->frameSize();
+ mAudioSink->start();
+ } else {
+ /* TODO: LPA driver needs to be reconfigured
+ For MP3 we might not come here but for AAC we need this */
+ mAudioSink->stop();
+ mAudioSink->closeSession();
+ LOGV("Opening a routing session in fillBuffer: sessionId = %d mSampleRate %d numChannels %d",
+ sessionId, mSampleRate, numChannels);
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, sessionId, mSampleRate, numChannels);
+ if (err != OK) {
+ mSource->stop();
+ return err;
+ }
+ }
+ break;
+ } else {
+ mReachedEOS = true;
+ mFinalStatus = err;
+ break;
+ }
+ }
+
+ CHECK(mInputBuffer->meta_data()->findInt64(
+ kKeyTime, &mPositionTimeMediaUs));
+
+ mFrameSize = mAudioSink->frameSize();
+ mPositionTimeRealUs =
+ ((mNumFramesPlayed + size_done / mFrameSize) * 1000000)
+ / mSampleRate;
+
+ // LOGV("buffer->size() = %d, "
+ // "mPositionTimeMediaUs=%.2f mPositionTimeRealUs=%.2f",
+ // mInputBuffer->range_length(),
+ // mPositionTimeMediaUs / 1E6, mPositionTimeRealUs / 1E6);
+ }
+ if (mInputBuffer->range_length() == 0) {
+ mInputBuffer->release();
+ mInputBuffer = NULL;
+ continue;
+ }
+
+ size_t copy = size_remaining;
+ if (copy > mInputBuffer->range_length()) {
+ copy = mInputBuffer->range_length();
+ }
+
+ memcpy((char *)data + size_done,
+ (const char *)mInputBuffer->data() + mInputBuffer->range_offset(),
+ copy);
+
+ mInputBuffer->set_range(mInputBuffer->range_offset() + copy,
+ mInputBuffer->range_length() - copy);
+
+ size_done += copy;
+ size_remaining -= copy;
+ }
+ return size_done;
+}
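The loop above reduces to a range-consuming copy: clamp the copy size to what the input buffer still holds, advance the buffer's range, and account for `size_done`/`size_remaining`. A self-contained sketch of just that copy logic, where `InputRange` is a hypothetical stand-in for `MediaBuffer`'s `range_offset()`/`range_length()`, not a class from this patch:

```cpp
#include <cstring>
#include <cstddef>

// Stand-in for a MediaBuffer: data plus a consumable [offset, offset+length) range.
struct InputRange {
    const char *data;
    size_t offset;
    size_t length;
};

// Mirrors the tail of fillBuffer(): copy up to 'size' bytes from 'in'
// into 'out', advancing the input range, and return the bytes produced.
size_t fillFromRange(InputRange *in, char *out, size_t size) {
    size_t size_done = 0;
    size_t size_remaining = size;
    while (size_remaining > 0 && in->length > 0) {
        size_t copy = size_remaining;
        if (copy > in->length) {
            copy = in->length;       // clamp to what the input buffer holds
        }
        memcpy(out + size_done, in->data + in->offset, copy);
        in->offset += copy;          // like mInputBuffer->set_range(...)
        in->length -= copy;
        size_done += copy;
        size_remaining -= copy;
    }
    return size_done;
}
```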
+
+int64_t LPAPlayer::getRealTimeUs() {
+ Mutex::Autolock autoLock(mLock);
+ return getRealTimeUsLocked();
+}
+
+
+int64_t LPAPlayer::getRealTimeUsLocked(){
+ /* struct msm_audio_stats stats;
+
+ // 1. Get the Byte count that is consumed
+ if ( ioctl(afd, AUDIO_GET_STATS, &stats) < 0 ) {
+ LOGE("AUDIO_GET_STATUS failed");
+ }
+
+ //mNumFramesDspPlayed = mNumFramesPlayed - ((PMEM_BUFFER_SIZE - stats.byte_count)/mFrameSize);
+ LOGE("AUDIO_GET_STATUS bytes %u, mNumFramesPlayed %u", stats.byte_count/mFrameSize,mNumFramesPlayed);
+ //mNumFramesDspPlayed = mNumFramesPlayed + stats.byte_count/mFrameSize;
+
+ int64_t temp = (stats.byte_count/mFrameSize)+mNumFramesPlayed;
+ LOGE("Number of frames played by the DSP is %u", temp);
+ int64_t temp1 = -mLatencyUs + (temp * 1000000) / mSampleRate;
+ LOGE("getRealTimeUsLocked() %u", temp1);
+ return temp1;*/
+
+ return nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)) - timeStarted + timePlayed;
+}
+
+int64_t LPAPlayer::getMediaTimeUs() {
+ Mutex::Autolock autoLock(mLock);
+/*
+if (mPositionTimeMediaUs < 0 || mPositionTimeRealUs < 0) {
+return 0;
+}
+
+int64_t realTimeOffset = getRealTimeUsLocked() - mPositionTimeRealUs;
+if (realTimeOffset < 0) {
+realTimeOffset = 0;
+}
+
+return mPositionTimeMediaUs + realTimeOffset;
+*/
+    LOGV("getMediaTimeUs() isPaused %d timeStarted %lld timePlayed %lld", isPaused, timeStarted, timePlayed);
+ if (isPaused || timeStarted == 0) {
+ return timePlayed;
+ } else {
+        LOGV("curr_time %lld", nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)));
+ return nanoseconds_to_microseconds(systemTime(SYSTEM_TIME_MONOTONIC)) - timeStarted + timePlayed;
+ }
+
+ /*int64_t bytes = (int64_t)stats.byte_count;
+ LOGV("stats %u %u",bytes,stats.byte_count);
+ LOGV("secs played %u", ((stats.byte_count/4) * 1000000)/mSampleRate );
+ return((stats.byte_count/4) * 1000000)/mSampleRate;*/
+}
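Both `getRealTimeUsLocked()` and the live branch of `getMediaTimeUs()` compute position as `timePlayed` (media time banked across pauses) plus the monotonic wall-clock time elapsed since playback last (re)started. A pause-aware sketch of that bookkeeping, with the clock injected so the logic is deterministic; the names mirror `isPaused`/`timeStarted`/`timePlayed` but this is an illustration, not the class itself:

```cpp
#include <cstdint>

// Minimal position tracker mirroring the isPaused/timeStarted/timePlayed
// bookkeeping. nowUs is injected instead of reading SYSTEM_TIME_MONOTONIC.
struct PositionClock {
    bool    isPaused    = true;
    int64_t timeStarted = 0;  // monotonic us when playback (re)started
    int64_t timePlayed  = 0;  // accumulated media time across pauses

    void start(int64_t nowUs) {
        timeStarted = nowUs;
        isPaused = false;
    }
    void pause(int64_t nowUs) {
        timePlayed += nowUs - timeStarted;  // bank the elapsed stretch
        isPaused = true;
    }
    // Mirrors getMediaTimeUs(): banked time while paused, banked + live otherwise.
    int64_t mediaTimeUs(int64_t nowUs) const {
        if (isPaused || timeStarted == 0) {
            return timePlayed;
        }
        return nowUs - timeStarted + timePlayed;
    }
};
```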
+
+bool LPAPlayer::getMediaTimeMapping(
+ int64_t *realtime_us, int64_t *mediatime_us) {
+ Mutex::Autolock autoLock(mLock);
+
+ *realtime_us = mPositionTimeRealUs;
+ *mediatime_us = mPositionTimeMediaUs;
+
+ return mPositionTimeRealUs != -1 && mPositionTimeMediaUs != -1;
+}
+
+void LPAPlayer::requestAndWaitForDecoderThreadExit() {
+
+ if (!decoderThreadAlive)
+ return;
+
+ pthread_mutex_lock(&mem_request_mutex);
+ killDecoderThread = true;
+ pthread_cond_signal(&decoder_cv);
+ pthread_mutex_unlock(&mem_request_mutex);
+ pthread_join(decoderThread,NULL);
+ LOGV("decoder thread killed");
+
+}
+
+void LPAPlayer::requestAndWaitForEventThreadExit() {
+ if (!eventThreadAlive)
+ return;
+ killEventThread = true;
+ pthread_mutex_lock(&event_mutex);
+ if (!eventThreadCreated)
+ pthread_cond_wait(&event_thread_cv,&event_mutex);
+ pthread_mutex_unlock(&event_mutex);
+ pthread_cond_signal(&event_cv);
+ if (ioctl(afd, AUDIO_ABORT_GET_EVENT, 0) < 0) {
+ LOGE("Audio Abort event failed");
+ }
+ /*pthread_cond_wait(&event_cv, &event_mutex);
+ pthread_mutex_unlock(&event_mutex);
+ */
+ pthread_join(eventThread,NULL);
+ LOGV("event thread killed");
+}
+
+void LPAPlayer::requestAndWaitForA2DPThreadExit() {
+ if (!a2dpThreadAlive)
+ return;
+ killA2DPThread = true;
+ pthread_cond_signal(&a2dp_cv);
+ pthread_join(A2DPThread,NULL);
+ LOGV("a2dp thread killed");
+}
+
+void LPAPlayer::requestAndWaitForEffectsThreadExit() {
+ if (!effectsThreadAlive)
+ return;
+ killEffectsThread = true;
+ pthread_cond_signal(&effect_cv);
+ pthread_join(EffectsThread,NULL);
+ LOGV("effects thread killed");
+}
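Taken together, `createThreads()` and the `requestAndWaitFor...ThreadExit()` functions implement one pattern per worker thread: init a mutex/condvar pair, clear a kill flag, spawn the thread joinable; teardown sets the flag under the lock, signals the condvar, and joins. A minimal standalone sketch of that pattern, with illustrative names (`Worker`, `startWorker`, `stopWorker` are not from this patch):

```cpp
#include <pthread.h>

// Illustrative worker state: a kill flag guarded by a mutex/condvar,
// mirroring e.g. killDecoderThread / decoder_cv in the patch.
struct Worker {
    pthread_mutex_t mutex;
    pthread_cond_t  cv;
    bool            kill;
    pthread_t       tid;
    int             iterations; // work counter, for observation only
};

static void *workerEntry(void *arg) {
    Worker *w = static_cast<Worker *>(arg);
    pthread_mutex_lock(&w->mutex);
    while (!w->kill) {
        w->iterations++;
        // Sleep (releasing the mutex) until there is work or a kill request.
        pthread_cond_wait(&w->cv, &w->mutex);
    }
    pthread_mutex_unlock(&w->mutex);
    return NULL;
}

// Mirrors createThreads(): init sync objects, clear the flag, spawn joinable.
void startWorker(Worker *w) {
    pthread_mutex_init(&w->mutex, NULL);
    pthread_cond_init(&w->cv, NULL);
    w->kill = false;
    w->iterations = 0;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
    pthread_create(&w->tid, &attr, workerEntry, w);
    pthread_attr_destroy(&attr);
}

// Mirrors requestAndWaitFor...ThreadExit(): set flag under lock, signal, join.
void stopWorker(Worker *w) {
    pthread_mutex_lock(&w->mutex);
    w->kill = true;
    pthread_cond_signal(&w->cv);
    pthread_mutex_unlock(&w->mutex);
    pthread_join(w->tid, NULL);
}
```

Setting the kill flag while holding the same mutex the worker waits on is what makes the wakeup race-free: the signal can never be lost between the worker's flag check and its `pthread_cond_wait`.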
+
+void LPAPlayer::onPauseTimeOut() {
+ Mutex::Autolock autoLock(resumeLock);
+ struct msm_audio_stats stats;
+ int nBytesConsumed = 0;
+ LOGV("onPauseTimeOut");
+ if (!mPauseEventPending) {
+ return;
+ }
+ mPauseEventPending = false;
+
+ if(!bIsA2DPEnabled) {
+ // Reset eosflag to resume playback where we actually paused
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs = timePlayed;
+        LOGV("%s: mSeekTimeUs %lld ", __func__, mSeekTimeUs);
+
+ // 1. Get the Byte count that is consumed
+ if ( ioctl(afd, AUDIO_GET_STATS, &stats) < 0 ) {
+            LOGE("AUDIO_GET_STATS failed");
+ } else {
+ LOGV("Number of bytes consumed by DSP is %u", stats.byte_count);
+ nBytesConsumed = stats.byte_count;
+ }
+
+ // 2. Close the session
+ mAudioSink->closeSession();
+ bIsAudioRouted = false;
+
+ // 3. Call AUDIO_STOP on the Driver.
+ mIsDriverStarted = false;
+ if ( ioctl(afd, AUDIO_STOP, 0) < 0 ) {
+ LOGE("AUDIO_STOP failed");
+ }
+ }
+}
+
+}//namespace android
diff --git a/media/libstagefright/LPAPlayerALSA.cpp b/media/libstagefright/LPAPlayerALSA.cpp
new file mode 100755
index 0000000..30d13dd
--- /dev/null
+++ b/media/libstagefright/LPAPlayerALSA.cpp
@@ -0,0 +1,1728 @@
+/*
+ * Copyright (C) 2009 The Android Open Source Project
+ * Copyright (c) 2009-2012, Code Aurora Forum. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define LOG_NDDEBUG 0
+#define LOG_NDEBUG 0
+#define LOG_TAG "LPAPlayerALSA"
+
+#include <utils/Log.h>
+#include <utils/threads.h>
+
+#include <signal.h>
+#include <sys/prctl.h>
+#include <sys/resource.h>
+#include <sys/poll.h>
+#include <sys/eventfd.h>
+#include <binder/IPCThreadState.h>
+#include <media/AudioTrack.h>
+
+extern "C" {
+ #include <sound/asound.h>
+ #include "alsa_audio.h"
+}
+
+#include <media/stagefright/LPAPlayer.h>
+#include <media/stagefright/MediaDebug.h>
+#include <media/stagefright/MediaDefs.h>
+#include <media/stagefright/MediaErrors.h>
+#include <media/stagefright/MediaSource.h>
+#include <media/stagefright/MetaData.h>
+#include <media/stagefright/MediaErrors.h>
+
+#include <hardware_legacy/power.h>
+
+#include <linux/unistd.h>
+
+#include "include/AwesomePlayer.h"
+#include <powermanager/PowerManager.h>
+
+static const char mName[] = "LPAPlayer";
+
+#define MEM_BUFFER_SIZE 262144
+//#define PMEM_BUFFER_SIZE (4800 * 4)
+#define MEM_BUFFER_COUNT 4
+
+//Values to exit poll via eventfd
+#define KILL_EVENT_THREAD 1
+#define SIGNAL_EVENT_THREAD 2
+#define PCM_FORMAT 2
+#define NUM_FDS 2
+namespace android {
+int LPAPlayer::objectsAlive = 0;
+
+LPAPlayer::LPAPlayer(
+ const sp<MediaPlayerBase::AudioSink> &audioSink, bool &initCheck,
+ AwesomePlayer *observer)
+:mInputBuffer(NULL),
+mSampleRate(0),
+mLatencyUs(0),
+mFrameSize(0),
+mSeekTimeUs(0),
+mNumFramesPlayed(0),
+mPositionTimeMediaUs(-1),
+mPositionTimeRealUs(-1),
+mPauseTime(0),
+mNumA2DPBytesPlayed(0),
+mSeeking(false),
+mInternalSeeking(false),
+mReachedEOS(false),
+mFinalStatus(OK),
+mStarted(false),
+mIsFirstBuffer(false),
+mFirstBufferResult(OK),
+mFirstBuffer(NULL),
+mAudioSink(audioSink),
+mObserver(observer),
+AudioPlayer(audioSink,observer) {
+ LOGV("LPAPlayer::LPAPlayer() ctor");
+ a2dpDisconnectPause = false;
+ mSeeked = false;
+ objectsAlive++;
+ timeStarted = 0;
+ numChannels =0;
+ afd = -1;
+ timePlayed = 0;
+ isPaused = false;
+ bIsA2DPEnabled = false;
+ mAudioFlinger = NULL;
+ AudioFlingerClient = NULL;
+ efd = -1;
+ /* Initialize Suspend/Resume related variables */
+ mQueue.start();
+ mQueueStarted = true;
+ mPauseEvent = new TimedEvent(this, &LPAPlayer::onPauseTimeOut);
+ mPauseEventPending = false;
+ mPlaybackSuspended = false;
+ getAudioFlinger();
+ LOGV("Registering client with AudioFlinger");
+ mAudioFlinger->registerClient(AudioFlingerClient);
+ mAudioSinkOpen = false;
+    mIsAudioRouted = false;
+    a2dpThreadStarted = true;
+ asyncReset = false;
+
+ bEffectConfigChanged = false;
+ initCheck = true;
+
+    mDeathRecipient = new PMDeathRecipient(this);
+}
+
+void LPAPlayer::acquireWakeLock()
+{
+    Mutex::Autolock _l(pmLock);
+
+    if (mPowerManager == 0) {
+        // use checkService() to avoid blocking if power service is not up yet
+        sp<IBinder> binder =
+            defaultServiceManager()->checkService(String16("power"));
+        if (binder == 0) {
+            LOGW("Thread %s cannot connect to the power manager service", mName);
+        } else {
+            mPowerManager = interface_cast<IPowerManager>(binder);
+            binder->linkToDeath(mDeathRecipient);
+        }
+    }
+    if (mPowerManager != 0 && mWakeLockToken == 0) {
+        sp<IBinder> binder = new BBinder();
+        status_t status = mPowerManager->acquireWakeLock(POWERMANAGER_PARTIAL_WAKE_LOCK,
+                                                         binder,
+                                                         String16(mName));
+        if (status == NO_ERROR) {
+            mWakeLockToken = binder;
+        }
+        LOGV("acquireWakeLock() %s status %d", mName, status);
+    }
+}
+
+void LPAPlayer::releaseWakeLock()
+{
+    Mutex::Autolock _l(pmLock);
+
+    if (mWakeLockToken != 0) {
+        LOGV("releaseWakeLock() %s", mName);
+        if (mPowerManager != 0) {
+            mPowerManager->releaseWakeLock(mWakeLockToken, 0);
+        }
+        mWakeLockToken.clear();
+    }
+}
+
+void LPAPlayer::clearPowerManager()
+{
+    Mutex::Autolock _l(pmLock);
+    releaseWakeLock();
+    mPowerManager.clear();
+}
+
+void LPAPlayer::PMDeathRecipient::binderDied(const wp<IBinder>& who)
+{
+    parentClass->clearPowerManager();
+    LOGW("power manager service died !!!");
+}
+
+LPAPlayer::~LPAPlayer() {
+ LOGV("LPAPlayer::~LPAPlayer()");
+ if (mQueueStarted) {
+ mQueue.stop();
+ }
+
+ reset();
+
+ mAudioFlinger->deregisterClient(AudioFlingerClient);
+ objectsAlive--;
+
+    releaseWakeLock();
+    if (mPowerManager != 0) {
+        sp<IBinder> binder = mPowerManager->asBinder();
+        binder->unlinkToDeath(mDeathRecipient);
+    }
+}
+
+void LPAPlayer::getAudioFlinger() {
+ Mutex::Autolock _l(AudioFlingerLock);
+
+ if ( mAudioFlinger.get() == 0 ) {
+ sp<IServiceManager> sm = defaultServiceManager();
+ sp<IBinder> binder;
+ do {
+ binder = sm->getService(String16("media.audio_flinger"));
+ if ( binder != 0 )
+ break;
+ LOGW("AudioFlinger not published, waiting...");
+ usleep(500000); // 0.5 s
+ } while ( true );
+ if ( AudioFlingerClient == NULL ) {
+ AudioFlingerClient = new AudioFlingerLPAdecodeClient(this);
+ }
+
+ binder->linkToDeath(AudioFlingerClient);
+ mAudioFlinger = interface_cast<IAudioFlinger>(binder);
+ }
+ LOGE_IF(mAudioFlinger==0, "no AudioFlinger!?");
+}
+
+LPAPlayer::AudioFlingerLPAdecodeClient::AudioFlingerLPAdecodeClient(void *obj)
+{
+ LOGV("LPAPlayer::AudioFlingerLPAdecodeClient::AudioFlingerLPAdecodeClient");
+ pBaseClass = (LPAPlayer*)obj;
+}
+
+void LPAPlayer::AudioFlingerLPAdecodeClient::binderDied(const wp<IBinder>& who) {
+ Mutex::Autolock _l(pBaseClass->AudioFlingerLock);
+
+ pBaseClass->mAudioFlinger.clear();
+ LOGW("AudioFlinger server died!");
+}
+
+void LPAPlayer::AudioFlingerLPAdecodeClient::ioConfigChanged(int event, int ioHandle, void *param2) {
+ LOGV("ioConfigChanged() event %d", event);
+
+ if ( event != AudioSystem::A2DP_OUTPUT_STATE &&
+ event != AudioSystem::EFFECT_CONFIG_CHANGED) {
+ return;
+ }
+
+ switch ( event ) {
+ case AudioSystem::A2DP_OUTPUT_STATE:
+ {
+ LOGV("ioConfigChanged() A2DP_OUTPUT_STATE iohandle is %d with A2DPEnabled in %d", ioHandle, pBaseClass->bIsA2DPEnabled);
+ if ( -1 == ioHandle ) {
+ if ( pBaseClass->bIsA2DPEnabled ) {
+ pBaseClass->bIsA2DPEnabled = false;
+ if (pBaseClass->mStarted) {
+ pBaseClass->handleA2DPSwitch();
+ }
+ LOGV("ioConfigChanged:: A2DP Disabled");
+ }
+ } else {
+ if ( !pBaseClass->bIsA2DPEnabled ) {
+
+ pBaseClass->bIsA2DPEnabled = true;
+ if (pBaseClass->mStarted) {
+ pBaseClass->handleA2DPSwitch();
+ }
+
+ LOGV("ioConfigChanged:: A2DP Enabled");
+ }
+ }
+ }
+ break;
+ case AudioSystem::EFFECT_CONFIG_CHANGED:
+ {
+ LOGV("Received notification for change in effect module");
+ // Seek to current media time - flush the decoded buffers with the driver
+ if(!pBaseClass->bIsA2DPEnabled) {
+ pthread_mutex_lock(&pBaseClass->effect_mutex);
+ pBaseClass->bEffectConfigChanged = true;
+ pthread_mutex_unlock(&pBaseClass->effect_mutex);
+ // Signal effects thread to re-apply effects
+ LOGV("Signalling Effects Thread");
+ pthread_cond_signal(&pBaseClass->effect_cv);
+ }
+ }
+ }
+
+ LOGV("ioConfigChanged Out");
+}
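The `A2DP_OUTPUT_STATE` branch above is edge-triggered: it only flips `bIsA2DPEnabled` and calls `handleA2DPSwitch()` when the reported state (`ioHandle == -1` meaning the A2DP output went away) actually differs from the cached flag, so repeated notifications for the same state are ignored. A small sketch of that edge detection; `A2dpTracker` and its members are illustrative names, not from this patch:

```cpp
// Edge-detecting A2DP state handler, mirroring the A2DP_OUTPUT_STATE case
// of ioConfigChanged(): act only on real transitions, not repeats.
struct A2dpTracker {
    bool enabled  = false;
    int  switches = 0;  // counts real transitions, for observation

    // ioHandle == -1 means the A2DP output went away.
    void onOutputState(int ioHandle) {
        bool nowEnabled = (ioHandle != -1);
        if (nowEnabled != enabled) {
            enabled = nowEnabled;
            switches++;  // the patch calls handleA2DPSwitch() here
        }
    }
};
```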
+
+void LPAPlayer::handleA2DPSwitch() {
+ Mutex::Autolock autoLock(mLock);
+
+ LOGV("handleA2dpSwitch()");
+ if (bIsA2DPEnabled) {
+ struct pcm * local_handle = (struct pcm *)handle;
+ if (!isPaused) {
+ if(mIsAudioRouted) {
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_PAUSE,1) < 0) {
+ LOGE("AUDIO PAUSE failed");
+ }
+ }
+
+ LOGV("paused for bt switch");
+ mSeekTimeUs += getTimeStamp(A2DP_CONNECT);
+ }
+ else {
+ mSeekTimeUs = mPauseTime;
+ }
+
+ mInternalSeeking = true;
+ mNumA2DPBytesPlayed = 0;
+ mReachedEOS = false;
+ pthread_cond_signal(&a2dp_notification_cv);
+ } else {
+ if (isPaused)
+ pthread_cond_signal(&a2dp_notification_cv);
+ else
+ a2dpDisconnectPause = true;
+ }
+}
+
+void LPAPlayer::setSource(const sp<MediaSource> &source) {
+ CHECK_EQ(mSource, NULL);
+ LOGV("Setting source from LPA Player");
+ mSource = source;
+}
+
+status_t LPAPlayer::start(bool sourceAlreadyStarted) {
+ CHECK(!mStarted);
+ CHECK(mSource != NULL);
+
+ LOGV("start: sourceAlreadyStarted %d", sourceAlreadyStarted);
+    //Start the source if it is not already started
+ status_t err;
+ if (!sourceAlreadyStarted) {
+ err = mSource->start();
+
+ if (err != OK) {
+ return err;
+ }
+ }
+
+    //Create the event, decoder and a2dp threads and initialize all the
+    //mutexes and condition variables
+ createThreads();
+ LOGV("All Threads Created.");
+
+ // We allow an optional INFO_FORMAT_CHANGED at the very beginning
+ // of playback, if there is one, getFormat below will retrieve the
+ // updated format, if there isn't, we'll stash away the valid buffer
+ // of data to be used on the first audio callback.
+
+ CHECK(mFirstBuffer == NULL);
+
+ MediaSource::ReadOptions options;
+ if (mSeeking) {
+ options.setSeekTo(mSeekTimeUs);
+ mSeeking = false;
+ }
+
+ mFirstBufferResult = mSource->read(&mFirstBuffer, &options);
+ if (mFirstBufferResult == INFO_FORMAT_CHANGED) {
+ LOGV("INFO_FORMAT_CHANGED!!!");
+ CHECK(mFirstBuffer == NULL);
+ mFirstBufferResult = OK;
+ mIsFirstBuffer = false;
+ } else {
+ mIsFirstBuffer = true;
+ }
+
+ /*TODO: Check for bA2dpEnabled */
+
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+
+
+ if (!bIsA2DPEnabled) {
+ LOGV("Opening a routing session for audio playback: sessionId = %d mSampleRate %d numChannels %d",
+ sessionId, mSampleRate, numChannels);
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, 1, mSampleRate, numChannels);
+ if (err != OK) {
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+
+ if (!sourceAlreadyStarted) {
+ mSource->stop();
+ }
+
+ LOGE("Opening a routing session failed");
+ return err;
+ }
+ acquireWakeLock();
+ mIsAudioRouted = true;
+ }
+ else {
+ LOGV("Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT, DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("After Audio Sink Open");
+ mAudioSinkOpen = true;
+ }
+
+ LOGV("pcm_open hardware 0,4 for LPA ");
+ //Open PCM driver
+ if (numChannels == 1)
+ handle = (void *)pcm_open((PCM_MMAP | DEBUG_ON | PCM_MONO) , "hw:0,4");
+ else
+ handle = (void *)pcm_open((PCM_MMAP | DEBUG_ON | PCM_STEREO) , "hw:0,4");
+
+ struct pcm * local_handle = (struct pcm *)handle;
+ if (!local_handle) {
+ LOGE("Failed to initialize ALSA hardware hw:0,4");
+ return BAD_VALUE;
+ }
+
+ struct snd_pcm_hw_params *params;
+ struct snd_pcm_sw_params *sparams;
+ params = (struct snd_pcm_hw_params*) calloc(1, sizeof(struct snd_pcm_hw_params));
+ if (!params) {
+        LOGE( "Aplay:Failed to allocate ALSA hardware parameters!");
+ return -1;
+ }
+ param_init(params);
+ param_set_mask(params, SNDRV_PCM_HW_PARAM_ACCESS, SNDRV_PCM_ACCESS_MMAP_INTERLEAVED);
+ param_set_mask(params, SNDRV_PCM_HW_PARAM_FORMAT, SNDRV_PCM_FORMAT_S16_LE);
+ param_set_mask(params, SNDRV_PCM_HW_PARAM_SUBFORMAT,SNDRV_PCM_SUBFORMAT_STD);
+ param_set_min(params, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, MEM_BUFFER_SIZE);
+ param_set_int(params, SNDRV_PCM_HW_PARAM_SAMPLE_BITS, 16);
+    param_set_int(params, SNDRV_PCM_HW_PARAM_FRAME_BITS,
+                  (numChannels > 1) ? 32 : 16);
+ param_set_int(params, SNDRV_PCM_HW_PARAM_CHANNELS, numChannels);
+ param_set_int(params, SNDRV_PCM_HW_PARAM_RATE, mSampleRate);
+ param_set_hw_refine(local_handle, params);
+ if (param_set_hw_params(local_handle, params)) {
+        LOGE( "Aplay:cannot set hw params");
+ return -22;
+ }
+ param_dump(params);
+ local_handle->buffer_size = pcm_buffer_size(params);
+ local_handle->period_size = pcm_period_size(params);
+ local_handle->period_cnt = local_handle->buffer_size/local_handle->period_size;
+ LOGV("period_cnt = %d\n", local_handle->period_cnt);
+ LOGV("period_size = %d\n", local_handle->period_size);
+ LOGV("buffer_size = %d\n", local_handle->buffer_size);
+
+ sparams = (struct snd_pcm_sw_params*) calloc(1, sizeof(struct snd_pcm_sw_params));
+ if (!sparams) {
+        LOGE( "Aplay:Failed to allocate ALSA software parameters!\n");
+ return -1;
+ }
+ // Get the current software parameters
+ sparams->tstamp_mode = SNDRV_PCM_TSTAMP_NONE;
+ sparams->period_step = 1;
+ sparams->avail_min = local_handle->period_size/2;
+ sparams->start_threshold = local_handle->period_size/2;
+ sparams->stop_threshold = local_handle->buffer_size;
+ sparams->xfer_align = (local_handle->flags & PCM_MONO) ? local_handle->period_size/2 : local_handle->period_size/4; /* needed for old kernels */
+ sparams->silence_size = 0;
+ sparams->silence_threshold = 0;
+ if (param_set_sw_params(local_handle, sparams)) {
+        LOGE( "Aplay:cannot set sw params");
+ return -22;
+ }
+ mmap_buffer(local_handle);
+ if (!bIsA2DPEnabled)
+ pcm_prepare(local_handle);
+ handle = (void *)local_handle;
+ //Map PMEM buffer
+ LOGV("LPA Driver Started");
+ mStarted = true;
+
+ LOGV("Waking up decoder thread");
+ pthread_cond_signal(&decoder_cv);
+ return OK;
+}
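The software-parameter setup in `start()` is pure arithmetic on the driver-reported geometry: the period count is `buffer_size / period_size`, `avail_min` and `start_threshold` are half a period, `stop_threshold` is the full buffer, and `xfer_align` (needed for old kernels) is half a period for mono and a quarter period for stereo. A hedged standalone sketch of that derivation; `PcmGeometry`/`derivePcmGeometry` are illustrative names, not part of the patch:

```cpp
#include <cstdint>

struct PcmGeometry {
    uint32_t period_cnt;
    uint32_t avail_min;
    uint32_t start_threshold;
    uint32_t stop_threshold;
    uint32_t xfer_align;
};

// Mirrors the sw-param setup in start(): thresholds derived from the
// driver-reported buffer_size and period_size.
PcmGeometry derivePcmGeometry(uint32_t buffer_size, uint32_t period_size,
                              bool mono) {
    PcmGeometry g;
    g.period_cnt      = buffer_size / period_size;
    g.avail_min       = period_size / 2;
    g.start_threshold = period_size / 2;
    g.stop_threshold  = buffer_size;
    // Old kernels need xfer_align: period/2 for mono, period/4 for stereo.
    g.xfer_align      = mono ? period_size / 2 : period_size / 4;
    return g;
}
```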
+
+status_t LPAPlayer::seekTo(int64_t time_us) {
+ Mutex::Autolock autoLock1(mSeekLock);
+ Mutex::Autolock autoLock(mLock);
+    LOGV("seekTo: time_us %lld", time_us);
+ if ( mReachedEOS ) {
+ mReachedEOS = false;
+ }
+ mSeeking = true;
+
+ mSeekTimeUs = time_us;
+ struct pcm * local_handle = (struct pcm *)handle;
+ LOGV("In seekTo(), mSeekTimeUs %lld",mSeekTimeUs);
+ if (!bIsA2DPEnabled) {
+ if (mStarted) {
+ LOGV("Paused case, %d",isPaused);
+
+ pthread_mutex_lock(&mem_response_mutex);
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersResponseQueue.clear();
+ memBuffersRequestQueue.clear();
+
+ List<BuffersAllocated>::iterator it = bufPool.begin();
+ for(;it!=bufPool.end();++it) {
+ memBuffersRequestQueue.push_back(*it);
+ }
+
+ pthread_mutex_unlock(&mem_request_mutex);
+ pthread_mutex_unlock(&mem_response_mutex);
+            LOGV("Transferred all the buffers from response queue to request queue to handle seek");
+ if (!isPaused) {
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_PAUSE,1) < 0) {
+ LOGE("Audio Pause failed");
+ }
+ local_handle->start = 0;
+ pcm_prepare(local_handle);
+ LOGV("Reset, drain and prepare completed");
+ local_handle->sync_ptr->flags = SNDRV_PCM_SYNC_PTR_APPL | SNDRV_PCM_SYNC_PTR_AVAIL_MIN;
+ sync_ptr(local_handle);
+ LOGV("appl_ptr= %d", local_handle->sync_ptr->c.control.appl_ptr);
+ pthread_cond_signal(&decoder_cv);
+ }
+ }
+ } else {
+ if (!memBuffersResponseQueue.empty())
+ mSeeked = true;
+
+ if (!isPaused) {
+ mAudioSink->pause();
+ mAudioSink->flush();
+ mAudioSink->start();
+ }
+ mNumA2DPBytesPlayed = 0;
+ }
+
+ return OK;
+}
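On a seek the player discards everything in flight: with both queue locks held, the response and request queues are cleared and the entire buffer pool is pushed back onto the request queue, so the decoder refills every buffer from the new position. The queue shuffle itself, sketched with `std::list` in place of `android::List` (the locking is elided; names are illustrative):

```cpp
#include <list>

// Stand-in for BuffersAllocated; only identity matters for the shuffle.
struct Buf { int id; };

// Mirrors the seekTo() queue handling: clear response and request queues,
// then repopulate the request queue from the full buffer pool.
void resetQueuesForSeek(const std::list<Buf> &bufPool,
                        std::list<Buf> &requestQ,
                        std::list<Buf> &responseQ) {
    responseQ.clear();   // in-flight decoded data is stale after a seek
    requestQ.clear();
    for (std::list<Buf>::const_iterator it = bufPool.begin();
         it != bufPool.end(); ++it) {
        requestQ.push_back(*it);  // every pool buffer is free again
    }
}
```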
+
+void LPAPlayer::pause(bool playPendingSamples) {
+ CHECK(mStarted);
+
+ LOGV("pause: playPendingSamples %d", playPendingSamples);
+ isPaused = true;
+ A2DPState state;
+ if (playPendingSamples) {
+ isPaused = true;
+ if (!bIsA2DPEnabled) {
+ struct pcm * local_handle = (struct pcm *)handle;
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_PAUSE,1) < 0) {
+ LOGE("Audio Pause failed");
+ }
+ if (!mPauseEventPending) {
+ LOGV("Posting an event for Pause timeout");
+ mQueue.postEventWithDelay(mPauseEvent, LPA_PAUSE_TIMEOUT_USEC);
+ mPauseEventPending = true;
+ }
+ if (mAudioSink.get() != NULL)
+ mAudioSink->pauseSession();
+ state = A2DP_DISABLED;
+ }
+ else {
+ if (mAudioSink.get() != NULL)
+ mAudioSink->stop();
+ state = A2DP_ENABLED;
+ }
+ mPauseTime = mSeekTimeUs + getTimeStamp(state);
+ } else {
+ if (a2dpDisconnectPause) {
+ a2dpDisconnectPause = false;
+            mAudioSink->pause();
+            mPauseTime = mSeekTimeUs + getTimeStamp(A2DP_DISCONNECT);
+ pthread_cond_signal(&a2dp_notification_cv);
+ } else {
+ if (!bIsA2DPEnabled) {
+ LOGV("LPAPlayer::Pause - Pause driver");
+ struct pcm * local_handle = (struct pcm *)handle;
+ pthread_mutex_lock(&pause_mutex);
+ if (local_handle->start != 1) {
+ pthread_cond_wait(&pause_cv, &pause_mutex);
+ }
+ pthread_mutex_unlock(&pause_mutex);
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_PAUSE,1) < 0) {
+ LOGE("Audio Pause failed");
+ }
+
+ if(!mPauseEventPending) {
+ LOGV("Posting an event for Pause timeout");
+ mQueue.postEventWithDelay(mPauseEvent, LPA_PAUSE_TIMEOUT_USEC);
+ mPauseEventPending = true;
+ }
+
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->pauseSession();
+ }
+ state = A2DP_DISABLED;
+ } else {
+ mAudioSink->pause();
+ mAudioSink->flush();
+ state = A2DP_ENABLED;
+ }
+ mPauseTime = mSeekTimeUs + getTimeStamp(state);
+ }
+ }
+}
+
+void LPAPlayer::resume() {
+ LOGV("resume: isPaused %d",isPaused);
+ Mutex::Autolock autoLock(resumeLock);
+ if ( isPaused) {
+ CHECK(mStarted);
+ if (!bIsA2DPEnabled) {
+ LOGE("LPAPlayer::resume - Resuming Driver");
+ if(mPauseEventPending) {
+                LOGV("Resume(): Cancelling the pauseTimeout event");
+ mPauseEventPending = false;
+ mQueue.cancelEvent(mPauseEvent->eventID());
+ }
+ if (mAudioSinkOpen) {
+ mAudioSink->close();
+ mAudioSinkOpen = false;
+                LOGV("Signal to A2DP thread for clean up after closing Audio sink");
+ pthread_cond_signal(&a2dp_cv);
+ }
+
+ if (!mIsAudioRouted) {
+ LOGV("Opening a session for LPA playback");
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, sessionId);
+ acquireWakeLock();
+ mIsAudioRouted = true;
+ }
+
+ LOGV("Attempting Sync resume\n");
+ struct pcm * local_handle = (struct pcm *)handle;
+ if (!(mSeeking || mInternalSeeking)) {
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_PAUSE,0) < 0)
+ LOGE("AUDIO Resume failed");
+ LOGV("Sync resume done\n");
+ }
+ else {
+ local_handle->start = 0;
+ pcm_prepare(local_handle);
+ LOGV("Reset, drain and prepare completed");
+ local_handle->sync_ptr->flags = SNDRV_PCM_SYNC_PTR_APPL | SNDRV_PCM_SYNC_PTR_AVAIL_MIN;
+ sync_ptr(local_handle);
+ LOGV("appl_ptr= %d", local_handle->sync_ptr->c.control.appl_ptr);
+ }
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->resumeSession();
+ }
+ } else {
+ isPaused = false;
+
+ if (!mAudioSinkOpen) {
+ if (mAudioSink.get() != NULL) {
+ LOGV("%s mAudioSink close session", __func__);
+ mAudioSink->closeSession();
+ releaseWakeLock();
+ mIsAudioRouted = false;
+ } else {
+ LOGE("close session NULL");
+ }
+
+ LOGV("Resume: Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT,
+ DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("Resume: After Audio Sink Open");
+ mAudioSinkOpen = true;
+
+ LOGV("Resume: Waking up the decoder thread");
+ pthread_cond_signal(&decoder_cv);
+ } else {
+ /* If AudioSink is already open just start it */
+ mAudioSink->start();
+ }
+ LOGV("Waking up A2dp thread");
+ pthread_cond_signal(&a2dp_cv);
+ }
+ isPaused = false;
+ pthread_cond_signal(&decoder_cv);
+
+ /*
+ Signal to effects thread so that it can apply the new effects
+ enabled during pause state
+ */
+ pthread_cond_signal(&effect_cv);
+ }
+}
+
+void LPAPlayer::reset() {
+ LOGV("Reset called!!!!!");
+ asyncReset = true;
+
+ struct pcm * local_handle = (struct pcm *)handle;
+
+ LOGV("reset() requestQueue.size() = %d, responseQueue.size() = %d effectsQueue.size() = %d",
+ memBuffersRequestQueue.size(), memBuffersResponseQueue.size(), effectsQueue.size());
+
+ // make sure the Effects thread has exited
+ requestAndWaitForEffectsThreadExit();
+
+ // make sure Decoder thread has exited
+ requestAndWaitForDecoderThreadExit();
+
+ // make sure the event thread also has exited
+ requestAndWaitForEventThreadExit();
+
+ requestAndWaitForA2DPThreadExit();
+
+ requestAndWaitForA2DPNotificationThreadExit();
+
+ // Make sure to release any buffer we hold onto so that the
+ // source is able to stop().
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+
+ if (mInputBuffer != NULL) {
+ LOGV("AudioPlayer releasing input buffer.");
+ mInputBuffer->release();
+ mInputBuffer = NULL;
+ }
+
+ mSource->stop();
+
+ // The following hack is necessary to ensure that the OMX
+ // component is completely released by the time we may try
+ // to instantiate it again.
+ wp<MediaSource> tmp = mSource;
+ mSource.clear();
+ while (tmp.promote() != NULL) {
+ usleep(1000);
+ }
+
+ memBufferDeAlloc();
+    LOGV("Buffer Deallocation complete! Closing pcm handle");
+
+ if (local_handle->start) {
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_PAUSE,1) < 0) {
+ LOGE("Audio Pause failed");
+ }
+ }
+ local_handle->start = 0;
+ if (mIsAudioRouted)
+ pcm_prepare(local_handle);
+ pcm_close(local_handle);
+ handle = (void*)local_handle;
+
+ // Close the audiosink after all the threads exited to make sure
+ // there is no thread writing data to audio sink or applying effect
+ if (bIsA2DPEnabled) {
+ mAudioSink->close();
+ } else {
+ mAudioSink->closeSession();
+ releaseWakeLock();
+ }
+ mAudioSink.clear();
+
+ LOGV("reset() after memBuffersRequestQueue.size() = %d, memBuffersResponseQueue.size() = %d ",memBuffersRequestQueue.size(),memBuffersResponseQueue.size());
+
+ mNumFramesPlayed = 0;
+ mPositionTimeMediaUs = -1;
+ mPositionTimeRealUs = -1;
+ mSeeking = false;
+ mInternalSeeking = false;
+ mReachedEOS = false;
+ mFinalStatus = OK;
+ mStarted = false;
+}
+
+
+bool LPAPlayer::isSeeking() {
+ Mutex::Autolock autoLock(mLock);
+ return mSeeking;
+}
+
+bool LPAPlayer::reachedEOS(status_t *finalStatus) {
+ *finalStatus = OK;
+
+ Mutex::Autolock autoLock(mLock);
+ *finalStatus = mFinalStatus;
+ return mReachedEOS;
+}
+
+
+void *LPAPlayer::decoderThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->decoderThreadEntry();
+ return NULL;
+}
+
+
+void LPAPlayer::decoderThreadEntry() {
+
+ pthread_mutex_lock(&decoder_mutex);
+
+ pid_t tid = gettid();
+ androidSetThreadPriority(tid, ANDROID_PRIORITY_AUDIO);
+ prctl(PR_SET_NAME, (unsigned long)"LPA DecodeThread", 0, 0, 0);
+
+ LOGV("decoderThreadEntry wait for signal \n");
+ if (!mStarted) {
+ pthread_cond_wait(&decoder_cv, &decoder_mutex);
+ }
+ LOGV("decoderThreadEntry ready to work \n");
+ pthread_mutex_unlock(&decoder_mutex);
+ if (killDecoderThread) {
+ pthread_mutex_unlock(&mem_request_mutex);
+ return;
+ }
+ pthread_cond_signal(&event_cv);
+
+ int32_t mem_fd;
+
+ //TODO check PMEM_BUFFER_SIZE from handle.
+ memBufferAlloc(MEM_BUFFER_SIZE, &mem_fd);
+ while (1) {
+ pthread_mutex_lock(&mem_request_mutex);
+
+ if (killDecoderThread) {
+ pthread_mutex_unlock(&mem_request_mutex);
+ break;
+ }
+
+ LOGV("decoder memBuffersRequestQueue.size() = %d, memBuffersResponseQueue.size() = %d ",
+ memBuffersRequestQueue.size(),memBuffersResponseQueue.size());
+
+ if (memBuffersRequestQueue.empty() || mReachedEOS || isPaused ||
+ (bIsA2DPEnabled && !mAudioSinkOpen) || asyncReset ) {
+ LOGV("decoderThreadEntry: requestQ empty %d mReachedEOS %d isPaused %d "
+ "bIsA2DPEnabled %d mAudioSinkOpen %d asyncReset %d", memBuffersRequestQueue.empty(),
+ mReachedEOS, isPaused, bIsA2DPEnabled, mAudioSinkOpen, asyncReset);
+ LOGV("decoderThreadEntry: waiting on decoder_cv");
+ pthread_cond_wait(&decoder_cv, &mem_request_mutex);
+ pthread_mutex_unlock(&mem_request_mutex);
+ LOGV("decoderThreadEntry: received a signal to wake up");
+ continue;
+ }
+
+ pthread_mutex_unlock(&mem_request_mutex);
+
+ //Queue the buffers back to Request queue
+ if (mReachedEOS || (bIsA2DPEnabled && !mAudioSinkOpen) || asyncReset || a2dpDisconnectPause) {
+ LOGV("%s: mReachedEOS %d bIsA2DPEnabled %d ", __func__, mReachedEOS, bIsA2DPEnabled);
+ }
+ //Queue up the buffers for writing either for A2DP or LPA Driver
+ else {
+ struct msm_audio_aio_buf aio_buf_local;
+ Mutex::Autolock autoLock(mSeekLock);
+
+ pthread_mutex_lock(&mem_request_mutex);
+ List<BuffersAllocated>::iterator it = memBuffersRequestQueue.begin();
+ BuffersAllocated buf = *it;
+ memBuffersRequestQueue.erase(it);
+ pthread_mutex_unlock(&mem_request_mutex);
+ memset(buf.localBuf, 0x0, MEM_BUFFER_SIZE);
+ memset(buf.memBuf, 0x0, MEM_BUFFER_SIZE);
+
+ LOGV("Calling fillBuffer for size %d",MEM_BUFFER_SIZE);
+ buf.bytesToWrite = fillBuffer(buf.localBuf, MEM_BUFFER_SIZE);
+ LOGV("fillBuffer returned size %d",buf.bytesToWrite);
+
+ if ( buf.bytesToWrite == 0) {
+ /* Put the buffer back into requestQ */
+ /* This is zero byte buffer - no need to put in response Q*/
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_front(buf);
+ pthread_mutex_unlock(&mem_request_mutex);
+ /*Post EOS to Awesome player when i/p EOS is reached,
+ all input buffers have been decoded and response queue is empty*/
+ if(mObserver && mReachedEOS && memBuffersResponseQueue.empty()) {
+ LOGV("Posting EOS event..zero byte buffer and response queue is empty");
+ mObserver->postAudioEOS();
+ }
+ continue;
+ }
+ pthread_mutex_lock(&mem_response_mutex);
+ memBuffersResponseQueue.push_back(buf);
+ pthread_mutex_unlock(&mem_response_mutex);
+
+ if (!bIsA2DPEnabled){
+ LOGV("Start Event thread\n");
+ pthread_cond_signal(&event_cv);
+ // Make sure the buffer is added to response Q before applying effects
+ // If there is a change in effects while applying on current buffer
+ // it will be re applied as the buffer already present in responseQ
+ if (!asyncReset) {
+ pthread_mutex_lock(&apply_effect_mutex);
+ LOGV("decoderThread: applying effects on mem buf at buf.memBuf %x", buf.memBuf);
+ mAudioFlinger->applyEffectsOn((int16_t*)buf.localBuf,
+ (int16_t*)buf.memBuf,
+ (int)buf.bytesToWrite);
+ pthread_mutex_unlock(&apply_effect_mutex);
+ LOGV("decoderThread: Writing buffer to driver with mem fd %d", buf.memFd);
+
+ {
+ if (mSeeking) {
+ continue;
+ }
+ LOGV("PCM write start");
+ struct pcm * local_handle = (struct pcm *)handle;
+ pcm_write(local_handle, buf.memBuf, local_handle->period_size);
+ if (mReachedEOS) {
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_START) < 0)
+ LOGE("AUDIO Start failed");
+ else
+ local_handle->start = 1;
+ }
+ if (buf.bytesToWrite < MEM_BUFFER_SIZE && memBuffersResponseQueue.size() == 1) {
+ LOGV("Last buffer case");
+ uint64_t writeValue = SIGNAL_EVENT_THREAD;
+ write(efd, &writeValue, sizeof(uint64_t));
+ }
+ LOGV("PCM write complete");
+ pthread_mutex_lock(&pause_mutex);
+ pthread_cond_signal(&pause_cv);
+ pthread_mutex_unlock(&pause_mutex);
+ }
+ }
+ }
+ else
+ pthread_cond_signal(&a2dp_cv);
+ }
+ }
+ decoderThreadAlive = false;
+ LOGV("decoder Thread is dying");
+}
+
+void *LPAPlayer::eventThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->eventThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::eventThreadEntry() {
+ struct msm_audio_event cur_pcmdec_event;
+
+ pthread_mutex_lock(&event_mutex);
+ int rc = 0;
+ int err_poll = 0;
+ int avail = 0;
+ int i = 0;
+
+ pid_t tid = gettid();
+ androidSetThreadPriority(tid, ANDROID_PRIORITY_AUDIO);
+ prctl(PR_SET_NAME, (unsigned long)"LPA EventThread", 0, 0, 0);
+
+
+ LOGV("eventThreadEntry wait for signal \n");
+ pthread_cond_wait(&event_cv, &event_mutex);
+ LOGV("eventThreadEntry ready to work \n");
+ pthread_mutex_unlock(&event_mutex);
+
+ if (killEventThread) {
+ eventThreadAlive = false;
+ LOGV("Event Thread is dying.");
+ return;
+ }
+
+ LOGV("Allocating poll fd");
+ struct pollfd pfd[NUM_FDS];
+
+ struct pcm * local_handle = (struct pcm *)handle;
+ pfd[0].fd = local_handle->timer_fd;
+ pfd[0].events = (POLLIN | POLLERR | POLLNVAL);
+ LOGV("Allocated poll fd");
+ bool audioEOSPosted = false;
+ int timeout = -1;
+
+ efd = eventfd(0,0);
+ pfd[1].fd = efd;
+ pfd[1].events = (POLLIN | POLLERR | POLLNVAL);
+ while (1) {
+ if (killEventThread) {
+ eventThreadAlive = false;
+ LOGV("Event Thread is dying.");
+ return;
+ }
+
+ err_poll = poll(pfd, NUM_FDS, timeout);
+
+ if (err_poll < 0 && errno == EINTR) {
+ LOGE("Timer is interrupted");
+ continue;
+ }
+ if (pfd[1].revents & POLLIN) {
+ uint64_t u;
+ read(efd, &u, sizeof(uint64_t));
+ LOGE("POLLIN event occurred on the event fd, value written: %llu", (unsigned long long)u);
+ pfd[1].revents = 0;
+ if (u == SIGNAL_EVENT_THREAD) {
+ BuffersAllocated tempbuf = *(memBuffersResponseQueue.begin());
+ timeout = 1000 * tempbuf.bytesToWrite / (numChannels * PCM_FORMAT * mSampleRate);
+ LOGV("Setting timeout for last buffer to %d, mReachedEOS %d, memBuffersResponseQueue.size() %d", timeout, mReachedEOS, memBuffersResponseQueue.size());
+ continue;
+ }
+ }
+ if ((pfd[1].revents & POLLERR) || (pfd[1].revents & POLLNVAL))
+ LOGE("POLLERR or INVALID POLL");
+
+ LOGV("LPA event");
+ if (killEventThread) {
+ break;
+ }
+
+ if (timeout != -1 && mReachedEOS) {
+ LOGV("Timeout %d: Posting EOS event to AwesomePlayer",timeout);
+ isPaused = true;
+ mPauseTime = mSeekTimeUs + getTimeStamp(A2DP_DISABLED);
+ mObserver->postAudioEOS();
+ audioEOSPosted = true;
+ timeout = -1;
+ }
+ if (!mReachedEOS) {
+ timeout = -1;
+ }
+ if (err_poll < 0) {
+ LOGE("fatal err in poll: %d\n", err_poll);
+ eventThreadAlive = false;
+ LOGV("Event Thread is dying.");
+ break;
+ }
+ struct snd_timer_tread rbuf[4];
+ read(local_handle->timer_fd, rbuf, sizeof(struct snd_timer_tread) * 4 );
+
+ if (!(pfd[0].revents & POLLIN))
+ continue;
+
+ pfd[0].revents = 0;
+ //pfd[1].revents = 0;
+
+ LOGV("After an event occurs");
+
+ if (killEventThread) {
+ break;
+ }
+ if (memBuffersResponseQueue.empty())
+ continue;
+
+ //exit on abrupt event
+ Mutex::Autolock autoLock(mLock);
+ pthread_mutex_lock(&mem_response_mutex);
+ BuffersAllocated buf = *(memBuffersResponseQueue.begin());
+ memBuffersResponseQueue.erase(memBuffersResponseQueue.begin());
+ /* If the rendering is complete report EOS to the AwesomePlayer */
+ if (mObserver && !asyncReset && mReachedEOS && memBuffersResponseQueue.size() == 1) {
+ BuffersAllocated tempbuf = *(memBuffersResponseQueue.begin());
+ timeout = 1000 * tempbuf.bytesToWrite / (numChannels * PCM_FORMAT * mSampleRate);
+ LOGV("Setting timeout to %d, nextbuffer %d, buf.bytesToWrite %d, mReachedEOS %d, memBuffersResponseQueue.size() %d", timeout, tempbuf.bytesToWrite, buf.bytesToWrite, mReachedEOS, memBuffersResponseQueue.size());
+ }
+
+ pthread_mutex_unlock(&mem_response_mutex);
+ // Post buffer to request Q
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_back(buf);
+ pthread_mutex_unlock(&mem_request_mutex);
+
+ pthread_cond_signal(&decoder_cv);
+
+ }
+ eventThreadAlive = false;
+ if (efd != -1)
+ close(efd);
+ LOGV("Event Thread is dying.");
+
+}
+
+void *LPAPlayer::A2DPThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->A2DPThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::A2DPThreadEntry() {
+ pid_t tid = gettid();
+ androidSetThreadPriority(tid, ANDROID_PRIORITY_URGENT_AUDIO);
+ prctl(PR_SET_NAME, (unsigned long)"LPA A2DPThread", 0, 0, 0);
+
+ while (1) {
+ /* If exitPending break here */
+ if (killA2DPThread) {
+ break;
+ }
+
+ //TODO: Remove this
+ pthread_mutex_lock(&mem_response_mutex);
+ if (memBuffersResponseQueue.empty() || !mAudioSinkOpen || isPaused || !bIsA2DPEnabled) {
+ LOGV("A2DPThreadEntry:: responseQ empty %d mAudioSinkOpen %d isPaused %d bIsA2DPEnabled %d",
+ memBuffersResponseQueue.empty(), mAudioSinkOpen, isPaused, bIsA2DPEnabled);
+ LOGV("A2DPThreadEntry:: Waiting on a2dp_cv");
+ pthread_cond_wait(&a2dp_cv, &mem_response_mutex);
+ LOGV("A2DPThreadEntry:: received signal to wake up");
+ pthread_mutex_unlock(&mem_response_mutex);
+ continue;
+ }
+ // A2DP got disabled -- Queue up everything back to Request Queue
+ if (!bIsA2DPEnabled) {
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersResponseQueue.clear();
+ memBuffersRequestQueue.clear();
+
+ List<BuffersAllocated>::iterator it = bufPool.begin();
+ for(;it!=bufPool.end();++it) {
+ memBuffersRequestQueue.push_back(*it);
+ }
+ pthread_mutex_unlock(&mem_response_mutex);
+ pthread_mutex_unlock(&mem_request_mutex);
+ }
+ //A2DP is enabled -- Continue normal Playback
+ else {
+ List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ BuffersAllocated buf = *it;
+ memBuffersResponseQueue.erase(it);
+ pthread_mutex_unlock(&mem_response_mutex);
+ bytesToWrite = buf.bytesToWrite;
+ LOGV("bytes To write:%d",bytesToWrite);
+
+ uint32_t bytesWritten = 0;
+ uint32_t numBytesRemaining = 0;
+ uint32_t bytesAvailInBuffer = 0;
+ void* data = buf.localBuf;
+
+ while (bytesToWrite) {
+ /* If exitPending break here */
+ if (killA2DPThread || !bIsA2DPEnabled) {
+ LOGV("A2DPThreadEntry: A2DPThread set to be killed");
+ break;
+ }
+
+ bytesAvailInBuffer = mAudioSink->bufferSize();
+
+ uint32_t writeLen = bytesAvailInBuffer > bytesToWrite ? bytesToWrite : bytesAvailInBuffer;
+ LOGV("Writing %d bytes to A2DP ", writeLen);
+ bytesWritten = mAudioSink->write(data, writeLen);
+ if ( bytesWritten != writeLen ) {
+ //Paused - Wait till resume
+ if (isPaused && bIsA2DPEnabled) {
+ LOGV("Pausing A2DP playback");
+ pthread_mutex_lock(&a2dp_mutex);
+ pthread_cond_wait(&a2dp_cv, &a2dp_mutex);
+ pthread_mutex_unlock(&a2dp_mutex);
+ }
+
+
+ //Seeked: break out of loop, flush old buffers and write new buffers
+ LOGV("@_@bytes To write1:%d",bytesToWrite);
+ }
+ if (mSeeked) {
+ LOGV("Seeking A2DP Playback");
+ break;
+ }
+ data = (char *)data + bytesWritten;
+ mNumA2DPBytesPlayed += bytesWritten;
+ bytesToWrite -= bytesWritten;
+ LOGV("@_@bytes To write2:%d",bytesToWrite);
+ }
+ if (mObserver && !asyncReset && mReachedEOS && memBuffersResponseQueue.empty()) {
+ LOGV("Posting EOS event to AwesomePlayer");
+ mObserver->postAudioEOS();
+ }
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersRequestQueue.push_back(buf);
+ if (killA2DPThread) {
+ pthread_mutex_unlock(&mem_request_mutex);
+ break;
+ }
+ //flush out old buffer
+ if (mSeeked || !bIsA2DPEnabled) {
+ mSeeked = false;
+ LOGV("A2DPThread: Putting buffers back to requestQ from responseQ");
+ pthread_mutex_lock(&mem_response_mutex);
+ memBuffersResponseQueue.clear();
+ memBuffersRequestQueue.clear();
+
+ List<BuffersAllocated>::iterator it = bufPool.begin();
+ for(;it!=bufPool.end();++it) {
+ memBuffersRequestQueue.push_back(*it);
+ }
+ pthread_mutex_unlock(&mem_response_mutex);
+ }
+ pthread_mutex_unlock(&mem_request_mutex);
+ // Signal decoder thread when a buffer is put back to request Q
+ pthread_cond_signal(&decoder_cv);
+ }
+ }
+ a2dpThreadAlive = false;
+
+ LOGV("AudioSink stop");
+ if(mAudioSinkOpen) {
+ mAudioSinkOpen = false;
+ mAudioSink->stop();
+ }
+
+ LOGV("A2DP Thread is dying.");
+}
+
+void *LPAPlayer::EffectsThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->EffectsThreadEntry();
+ return NULL;
+}
+
+void LPAPlayer::EffectsThreadEntry() {
+ while(1) {
+ if(killEffectsThread) {
+ break;
+ }
+ pthread_mutex_lock(&effect_mutex);
+
+ if(bEffectConfigChanged && !isPaused) {
+ bEffectConfigChanged = false;
+
+ // 1. Clear current effectQ
+ LOGV("Clearing EffectQ: size %d", effectsQueue.size());
+ while (!effectsQueue.empty()) {
+ List<BuffersAllocated>::iterator it = effectsQueue.begin();
+ effectsQueue.erase(it);
+ }
+
+ // 2. Lock the responseQ mutex
+ pthread_mutex_lock(&mem_response_mutex);
+
+ // 3. Copy responseQ to effectQ
+ LOGV("Copying responseQ to effectQ: responseQ size %d", memBuffersResponseQueue.size());
+ for (List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin();
+ it != memBuffersResponseQueue.end(); ++it) {
+ BuffersAllocated buf = *it;
+ effectsQueue.push_back(buf);
+ }
+
+ // 4. Unlock the responseQ mutex
+ pthread_mutex_unlock(&mem_response_mutex);
+ }
+ // If effectQ is empty just wait for a signal
+ // Else dequeue a buffer, apply effects and delete it from effectQ
+ if(effectsQueue.empty() || asyncReset || bIsA2DPEnabled || isPaused) {
+ LOGV("EffectQ empty, reset called, A2DP enabled or paused; waiting for signal");
+ pthread_cond_wait(&effect_cv, &effect_mutex);
+ LOGV("effectsThread: received signal to wake up");
+ pthread_mutex_unlock(&effect_mutex);
+ } else {
+ pthread_mutex_unlock(&effect_mutex);
+
+ List<BuffersAllocated>::iterator it = effectsQueue.begin();
+ BuffersAllocated buf = *it;
+
+ pthread_mutex_lock(&apply_effect_mutex);
+ LOGV("effectsThread: applying effects on %p fd %d", buf.memBuf, (int)buf.memFd);
+ mAudioFlinger->applyEffectsOn((int16_t*)buf.localBuf,
+ (int16_t*)buf.memBuf,
+ (int)buf.bytesToWrite);
+ pthread_mutex_unlock(&apply_effect_mutex);
+ effectsQueue.erase(it);
+ }
+ }
+ LOGV("Effects thread is dead");
+ effectsThreadAlive = false;
+}
+
+void *LPAPlayer::A2DPNotificationThreadWrapper(void *me) {
+ static_cast<LPAPlayer *>(me)->A2DPNotificationThreadEntry();
+ return NULL;
+}
+
+
+void LPAPlayer::A2DPNotificationThreadEntry() {
+ while (1) {
+ pthread_mutex_lock(&a2dp_notification_mutex);
+ pthread_cond_wait(&a2dp_notification_cv, &a2dp_notification_mutex);
+ pthread_mutex_unlock(&a2dp_notification_mutex);
+ if (killA2DPNotificationThread) {
+ break;
+ }
+
+ LOGV("A2DP notification has come bIsA2DPEnabled: %d", bIsA2DPEnabled);
+
+ if (bIsA2DPEnabled) {
+ struct pcm * local_handle = (struct pcm *)handle;
+ LOGV("Flushing all the buffers");
+ pthread_mutex_lock(&mem_response_mutex);
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersResponseQueue.clear();
+ memBuffersRequestQueue.clear();
+
+ List<BuffersAllocated>::iterator it = bufPool.begin();
+ for(;it!=bufPool.end();++it) {
+ memBuffersRequestQueue.push_back(*it);
+ }
+ pthread_mutex_unlock(&mem_request_mutex);
+ pthread_mutex_unlock(&mem_response_mutex);
+ LOGV("All the buffers flushed, Now flushing the driver");
+ if (ioctl(local_handle->fd, SNDRV_PCM_IOCTL_RESET))
+ LOGE("Reset failed!");
+ LOGV("Driver flushed and opening mAudioSink");
+ if (!mAudioSinkOpen) {
+ LOGV("Close Session");
+ if (mAudioSink.get() != NULL) {
+ mAudioSink->closeSession();
+ releaseWakeLock();
+ LOGV("mAudioSink close session");
+ mIsAudioRouted = false;
+ } else {
+ LOGE("close session NULL");
+ }
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+ LOGV("Before Audio Sink Open");
+ status_t ret = mAudioSink->open(mSampleRate, numChannels,AUDIO_FORMAT_PCM_16_BIT, DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ mAudioSink->start();
+ LOGV("After Audio Sink Open");
+ mAudioSinkOpen = true;
+ }
+ LOGV("Signalling to decoder cv");
+ pthread_cond_signal(&decoder_cv);
+ }
+ else {
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs += getTimeStamp(A2DP_DISCONNECT);
+ mNumA2DPBytesPlayed = 0;
+ pthread_cond_signal(&a2dp_cv);
+ }
+ }
+ a2dpNotificationThreadAlive = false;
+ LOGV("A2DPNotificationThread is dying");
+
+}
+
+void *LPAPlayer::memBufferAlloc(int32_t nSize, int32_t *mem_fd){
+ int32_t memfd = -1;
+ void *mem_buf = NULL;
+ void *local_buf = NULL;
+ int i = 0;
+ struct pcm * local_handle = (struct pcm *)handle;
+
+ for (i = 0; i < MEM_BUFFER_COUNT; i++) {
+ mem_buf = (int32_t *)local_handle->addr + (nSize * i/sizeof(int));
+ local_buf = malloc(nSize);
+ if (NULL == local_buf) {
+ return NULL;
+ }
+
+ // 3. Store this information for internal mapping / maintenance
+ BuffersAllocated buf(local_buf, mem_buf, nSize, memfd);
+ memBuffersRequestQueue.push_back(buf);
+ bufPool.push_back(buf);
+
+ // 4. Send the mem fd information
+ LOGV("memBufferAlloc calling with required size %d", nSize);
+ LOGV("The MEM that is allocated is %d and buffer is %x", memfd, (unsigned int)mem_buf);
+ }
+ *mem_fd = memfd;
+ return NULL;
+}
+
+void LPAPlayer::memBufferDeAlloc()
+{
+ //Remove all the buffers from bufpool
+ while (!bufPool.empty()) {
+ List<BuffersAllocated>::iterator it = bufPool.begin();
+ BuffersAllocated &memBuffer = *it;
+ // free the local buffer corresponding to mem buffer
+ free(memBuffer.localBuf);
+ LOGV("Removing from bufpool");
+ bufPool.erase(it);
+ }
+
+}
+
+void LPAPlayer::createThreads() {
+
+ //Initialize all the Mutexes and Condition Variables
+ pthread_mutex_init(&mem_request_mutex, NULL);
+ pthread_mutex_init(&mem_response_mutex, NULL);
+ pthread_mutex_init(&decoder_mutex, NULL);
+ pthread_mutex_init(&event_mutex, NULL);
+ pthread_mutex_init(&a2dp_mutex, NULL);
+ pthread_mutex_init(&effect_mutex, NULL);
+ pthread_mutex_init(&apply_effect_mutex, NULL);
+ pthread_mutex_init(&a2dp_notification_mutex, NULL);
+ pthread_mutex_init(&pause_mutex,NULL);
+
+ pthread_cond_init (&event_cv, NULL);
+ pthread_cond_init (&decoder_cv, NULL);
+ pthread_cond_init (&a2dp_cv, NULL);
+ pthread_cond_init (&a2dp_notification_cv, NULL);
+ pthread_cond_init (&pause_cv, NULL);
+
+ // Create 4 threads Effect, decoder, event and A2dp
+ pthread_attr_t attr;
+ pthread_attr_init(&attr);
+ pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
+
+ killDecoderThread = false;
+ killEventThread = false;
+ killA2DPThread = false;
+ killEffectsThread = false;
+ killA2DPNotificationThread = false;
+
+ decoderThreadAlive = true;
+ eventThreadAlive = true;
+ a2dpThreadAlive = true;
+ effectsThreadAlive = true;
+ a2dpNotificationThreadAlive = true;
+
+ LOGV("Creating Event Thread");
+ pthread_create(&eventThread, &attr, eventThreadWrapper, this);
+
+ LOGV("Creating decoder Thread");
+ pthread_create(&decoderThread, &attr, decoderThreadWrapper, this);
+
+ LOGV("Creating A2DP Thread");
+ pthread_create(&A2DPThread, &attr, A2DPThreadWrapper, this);
+
+ LOGV("Creating Effects Thread");
+ pthread_create(&EffectsThread, &attr, EffectsThreadWrapper, this);
+
+ LOGV("Creating A2DP Notification Thread");
+ pthread_create(&A2DPNotificationThread, &attr, A2DPNotificationThreadWrapper, this);
+
+ pthread_attr_destroy(&attr);
+}
+
+
+size_t LPAPlayer::fillBuffer(void *data, size_t size) {
+ LOGV("fillBuffer");
+ if (mNumFramesPlayed == 0) {
+ LOGV("AudioCallback");
+ }
+
+ LOGV("Number of Frames Played: %u", mNumFramesPlayed);
+ if (mReachedEOS) {
+ return 0;
+ }
+
+ size_t size_done = 0;
+ size_t size_remaining = size;
+ while (size_remaining > 0) {
+ MediaSource::ReadOptions options;
+ {
+ Mutex::Autolock autoLock(mLock);
+
+ if (mSeeking) {
+ mInternalSeeking = false;
+ }
+ if (mSeeking || mInternalSeeking) {
+ if (mIsFirstBuffer) {
+ if (mFirstBuffer != NULL) {
+ mFirstBuffer->release();
+ mFirstBuffer = NULL;
+ }
+ mIsFirstBuffer = false;
+ }
+
+ options.setSeekTo(mSeekTimeUs);
+
+ if (mInputBuffer != NULL) {
+ mInputBuffer->release();
+ mInputBuffer = NULL;
+ }
+
+ // This is to ignore the data already filled in the output buffer
+ size_done = 0;
+ size_remaining = size;
+
+ mSeeking = false;
+ if (mObserver && !asyncReset && !mInternalSeeking) {
+ LOGV("fillBuffer: Posting audio seek complete event");
+ mObserver->postAudioSeekComplete();
+ }
+ mInternalSeeking = false;
+ }
+ }
+
+ if (mInputBuffer == NULL) {
+ status_t err;
+
+ if (mIsFirstBuffer) {
+ mInputBuffer = mFirstBuffer;
+ mFirstBuffer = NULL;
+ err = mFirstBufferResult;
+
+ mIsFirstBuffer = false;
+ } else {
+ err = mSource->read(&mInputBuffer, &options);
+ }
+
+ CHECK((err == OK && mInputBuffer != NULL)
+ || (err != OK && mInputBuffer == NULL));
+
+ Mutex::Autolock autoLock(mLock);
+
+ if (err != OK) {
+ if (err == INFO_FORMAT_CHANGED) {
+ sp<MetaData> format = mSource->getFormat();
+ const char *mime;
+ bool success = format->findCString(kKeyMIMEType, &mime);
+ CHECK(success);
+ CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
+
+ success = format->findInt32(kKeySampleRate, &mSampleRate);
+ CHECK(success);
+
+ int32_t numChannels;
+ success = format->findInt32(kKeyChannelCount, &numChannels);
+ CHECK(success);
+
+ if(bIsA2DPEnabled) {
+ mAudioSink->stop();
+ mAudioSink->close();
+ mAudioSinkOpen = false;
+ status_t err = mAudioSink->open(
+ mSampleRate, numChannels, AUDIO_FORMAT_PCM_16_BIT,
+ DEFAULT_AUDIOSINK_BUFFERCOUNT);
+ if (err != OK) {
+ mSource->stop();
+ return err;
+ }
+ mAudioSinkOpen = true;
+ mLatencyUs = (int64_t)mAudioSink->latency() * 1000;
+ mFrameSize = mAudioSink->frameSize();
+ mAudioSink->start();
+ } else {
+ /* TODO: LPA driver needs to be reconfigured
+ For MP3 we might not come here but for AAC we need this */
+ mAudioSink->stop();
+ mAudioSink->closeSession();
+ LOGV("Opening a routing session in fillBuffer: sessionId = %d mSampleRate %d numChannels %d",
+ sessionId, mSampleRate, numChannels);
+ status_t err = mAudioSink->openSession(AUDIO_FORMAT_PCM_16_BIT, sessionId, mSampleRate, numChannels);
+ if (err != OK) {
+ mSource->stop();
+ return err;
+ }
+ }
+ break;
+ } else {
+ mReachedEOS = true;
+ mFinalStatus = err;
+ break;
+ }
+ }
+
+ CHECK(mInputBuffer->meta_data()->findInt64(
+ kKeyTime, &mPositionTimeMediaUs));
+ mFrameSize = mAudioSink->frameSize();
+ mPositionTimeRealUs =
+ ((mNumFramesPlayed + size_done / mFrameSize) * 1000000)
+ / mSampleRate;
+
+ }
+
+ if (mInputBuffer->range_length() == 0) {
+ mInputBuffer->release();
+ mInputBuffer = NULL;
+ continue;
+ }
+
+ size_t copy = size_remaining;
+ if (copy > mInputBuffer->range_length()) {
+ copy = mInputBuffer->range_length();
+ }
+
+ memcpy((char *)data + size_done,
+ (const char *)mInputBuffer->data() + mInputBuffer->range_offset(),
+ copy);
+
+ mInputBuffer->set_range(mInputBuffer->range_offset() + copy,
+ mInputBuffer->range_length() - copy);
+
+ size_done += copy;
+ size_remaining -= copy;
+ }
+ return size_done;
+}
+
+int64_t LPAPlayer::getRealTimeUs() {
+ Mutex::Autolock autoLock(mLock);
+ return getRealTimeUsLocked();
+}
+
+
+int64_t LPAPlayer::getRealTimeUsLocked(){
+ //Used for AV sync: irrelevant API for LPA.
+ return 0;
+}
+
+int64_t LPAPlayer::getTimeStamp(A2DPState state) {
+ int64_t timestamp = 0;
+ switch (state) {
+ case A2DP_ENABLED:
+ case A2DP_DISCONNECT:
+ timestamp = (mNumA2DPBytesPlayed * 1000000)
+ /(2 * numChannels * mSampleRate);
+ break;
+ case A2DP_DISABLED:
+ case A2DP_CONNECT: {
+ struct pcm * local_handle = (struct pcm *)handle;
+ struct snd_compr_tstamp tstamp;
+ if (ioctl(local_handle->fd, SNDRV_COMPRESS_TSTAMP, &tstamp)) {
+ LOGE("Tunnel Player: failed SNDRV_COMPRESS_TSTAMP\n");
+ }
+ else {
+ LOGV("timestamp = %lld\n", tstamp.timestamp);
+ timestamp = tstamp.timestamp;
+ }
+ break;
+ }
+ default:
+ break;
+ }
+ return timestamp;
+}
+
+int64_t LPAPlayer::getMediaTimeUs() {
+ Mutex::Autolock autoLock(mLock);
+ LOGV("getMediaTimeUs() isPaused %d mSeekTimeUs %lld mPauseTime %lld", isPaused, mSeekTimeUs, mPauseTime);
+ if (isPaused) {
+ return mPauseTime;
+ } else {
+ A2DPState state = bIsA2DPEnabled ? A2DP_ENABLED : A2DP_DISABLED;
+ return (mSeekTimeUs + getTimeStamp(state));
+ }
+}
+
+bool LPAPlayer::getMediaTimeMapping(
+ int64_t *realtime_us, int64_t *mediatime_us) {
+ Mutex::Autolock autoLock(mLock);
+
+ *realtime_us = mPositionTimeRealUs;
+ *mediatime_us = mPositionTimeMediaUs;
+
+ return mPositionTimeRealUs != -1 && mPositionTimeMediaUs != -1;
+}
+
+void LPAPlayer::requestAndWaitForDecoderThreadExit() {
+
+ if (!decoderThreadAlive)
+ return;
+ killDecoderThread = true;
+ pthread_cond_signal(&decoder_cv);
+ pthread_join(decoderThread,NULL);
+ LOGV("decoder thread killed");
+
+}
+
+void LPAPlayer::requestAndWaitForEventThreadExit() {
+ if (!eventThreadAlive)
+ return;
+ killEventThread = true;
+ uint64_t writeValue = KILL_EVENT_THREAD;
+ LOGV("Writing to efd %d", efd);
+ write(efd, &writeValue, sizeof(uint64_t));
+ pthread_cond_signal(&event_cv);
+ pthread_join(eventThread,NULL);
+ LOGV("event thread killed");
+}
+
+void LPAPlayer::requestAndWaitForA2DPThreadExit() {
+ if (!a2dpThreadAlive)
+ return;
+ killA2DPThread = true;
+ pthread_cond_signal(&a2dp_cv);
+ pthread_join(A2DPThread,NULL);
+ LOGV("a2dp thread killed");
+}
+
+void LPAPlayer::requestAndWaitForEffectsThreadExit() {
+ if (!effectsThreadAlive)
+ return;
+ killEffectsThread = true;
+ pthread_cond_signal(&effect_cv);
+ pthread_join(EffectsThread,NULL);
+ LOGV("effects thread killed");
+}
+
+void LPAPlayer::requestAndWaitForA2DPNotificationThreadExit() {
+ if (!a2dpNotificationThreadAlive)
+ return;
+ killA2DPNotificationThread = true;
+ pthread_cond_signal(&a2dp_notification_cv);
+ pthread_join(A2DPNotificationThread,NULL);
+ LOGV("a2dp notification thread killed");
+}
+
+void LPAPlayer::onPauseTimeOut() {
+ Mutex::Autolock autoLock(resumeLock);
+ struct msm_audio_stats stats;
+ int nBytesConsumed = 0;
+ LOGV("onPauseTimeOut");
+ if (!mPauseEventPending) {
+ return;
+ }
+ mPauseEventPending = false;
+ if(!bIsA2DPEnabled) {
+ // 1.) Set seek flags
+ mInternalSeeking = true;
+ mReachedEOS = false;
+ mSeekTimeUs += getTimeStamp(A2DP_DISABLED);
+
+ // 2.) Flush the buffers and transfer everything to request queue
+ pthread_mutex_lock(&mem_response_mutex);
+ pthread_mutex_lock(&mem_request_mutex);
+ memBuffersResponseQueue.clear();
+ memBuffersRequestQueue.clear();
+ List<BuffersAllocated>::iterator it = bufPool.begin();
+ for(;it!=bufPool.end();++it) {
+ memBuffersRequestQueue.push_back(*it);
+ }
+ pthread_mutex_unlock(&mem_request_mutex);
+ pthread_mutex_unlock(&mem_response_mutex);
+ LOGV("onPauseTimeOut after memBuffersRequestQueue.size() = %d, memBuffersResponseQueue.size() = %d ",memBuffersRequestQueue.size(),memBuffersResponseQueue.size());
+
+ // 3.) Close routing Session
+ mAudioSink->closeSession();
+ mIsAudioRouted = false;
+
+ // 4.) Release Wake Lock
+ releaseWakeLock();
+ }
+
+}
+
+} //namespace android
diff --git a/media/libstagefright/LPAPlayerION.cpp b/media/libstagefright/LPAPlayerION.cpp new file mode 100644 index 0000000..ef94579 --- /dev/null +++ b/media/libstagefright/LPAPlayerION.cpp @@ -0,0 +1,168 @@ +/* + * Copyright (C) 2009 The Android Open Source Project + * Copyright (c) 2012, Code Aurora Forum. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#define LOG_NDEBUG 0 +#define LOG_TAG "LPAPlayerION" +#include <utils/Log.h> + +#include <media/stagefright/LPAPlayer.h> + +#define MEM_BUFFER_SIZE 524288 +#define MEM_BUFFER_COUNT 4 + +namespace android { +void LPAPlayer::audio_register_memory() { + void *ion_buf; int32_t ion_fd; + struct msm_audio_ion_info ion_info; + //1. 
Open the ion_audio + ionfd = open("/dev/ion", O_RDONLY | O_SYNC); + if (ionfd < 0) { + LOGE("/dev/ion open failed \n"); + return; + } + for (int i = 0; i < MEM_BUFFER_COUNT; i++) { + ion_buf = memBufferAlloc(MEM_BUFFER_SIZE, &ion_fd); + memset(&ion_info, 0, sizeof(msm_audio_ion_info)); + LOGV("Registering ION with fd %d and address as %x", ion_fd, ion_buf); + ion_info.fd = ion_fd; + ion_info.vaddr = ion_buf; + if ( ioctl(afd, AUDIO_REGISTER_ION, &ion_info) < 0 ) { + LOGE("Registration of ION with the Driver failed with fd %d and memory %x", + ion_info.fd, (unsigned int)ion_info.vaddr); + } + } +} + +void *LPAPlayer::memBufferAlloc(int32_t nSize, int32_t *ion_fd){ + void *ion_buf = NULL; + void *local_buf = NULL; + struct ion_fd_data fd_data; + struct ion_allocation_data alloc_data; + + alloc_data.len = nSize; + alloc_data.align = 0x1000; + alloc_data.flags = ION_HEAP(ION_AUDIO_HEAP_ID); + int rc = ioctl(ionfd, ION_IOC_ALLOC, &alloc_data); + if (rc) { + LOGE("ION_IOC_ALLOC ioctl failed\n"); + return ion_buf; + } + fd_data.handle = alloc_data.handle; + + rc = ioctl(ionfd, ION_IOC_SHARE, &fd_data); + if (rc) { + LOGE("ION_IOC_SHARE ioctl failed\n"); + rc = ioctl(ionfd, ION_IOC_FREE, &(alloc_data.handle)); + if (rc) { + LOGE("ION_IOC_FREE ioctl failed\n"); + } + return ion_buf; + } + + // 2. MMAP to get the virtual address + ion_buf = mmap(NULL, nSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd_data.fd, 0); + if(MAP_FAILED == ion_buf) { + LOGE("mmap() failed \n"); + close(fd_data.fd); + rc = ioctl(ionfd, ION_IOC_FREE, &(alloc_data.handle)); + if (rc) { + LOGE("ION_IOC_FREE ioctl failed\n"); + } + return ion_buf; + } + + local_buf = malloc(nSize); + if (NULL == local_buf) { + // unmap the corresponding ION buffer and close the fd + munmap(ion_buf, MEM_BUFFER_SIZE); + close(fd_data.fd); + rc = ioctl(ionfd, ION_IOC_FREE, &(alloc_data.handle)); + if (rc) { + LOGE("ION_IOC_FREE ioctl failed\n"); + } + return NULL; + } + + // 3. 
Store this information for internal mapping / maintanence + BuffersAllocated buf(local_buf, ion_buf, nSize, fd_data.fd, alloc_data.handle); + memBuffersRequestQueue.push_back(buf); + + // 4. Send the mem fd information + *ion_fd = fd_data.fd; + LOGV("IONBufferAlloc calling with required size %d", nSize); + LOGV("ION allocated is %d, fd_data.fd %d and buffer is %x", *ion_fd, fd_data.fd, (unsigned int)ion_buf); + + // 5. Return the virtual address + return ion_buf; +} + +void LPAPlayer::memBufferDeAlloc() +{ + int rc = 0; + //Remove all the buffers from request queue + while (!memBuffersRequestQueue.empty()) { + List<BuffersAllocated>::iterator it = memBuffersRequestQueue.begin(); + BuffersAllocated &ionBuffer = *it; + struct msm_audio_ion_info ion_info; + ion_info.vaddr = (*it).memBuf; + ion_info.fd = (*it).memFd; + if (ioctl(afd, AUDIO_DEREGISTER_ION, &ion_info) < 0) { + LOGE("ION deregister failed"); + } + LOGV("Ion Unmapping the address %u, size %d, fd %d from Request",ionBuffer.memBuf,ionBuffer.bytesToWrite,ionBuffer.memFd); + munmap(ionBuffer.memBuf,MEM_BUFFER_SIZE); + LOGV("closing the ion shared fd"); + close(ionBuffer.memFd); + rc = ioctl(ionfd, ION_IOC_FREE, &ionBuffer.ion_handle); + if (rc) { + LOGE("ION_IOC_FREE ioctl failed\n"); + } + // free the local buffer corresponding to ion buffer + free(ionBuffer.localBuf); + LOGE("Removing from request Q"); + memBuffersRequestQueue.erase(it); + } + + //Remove all the buffers from response queue + while(!memBuffersResponseQueue.empty()){ + List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin(); + BuffersAllocated &ionBuffer = *it; + struct msm_audio_ion_info ion_info; + ion_info.vaddr = (*it).memBuf; + ion_info.fd = (*it).memFd; + if (ioctl(afd, AUDIO_DEREGISTER_ION, &ion_info) < 0) { + LOGE("ION deregister failed"); + } + LOGV("Ion Unmapping the address %u, size %d, fd %d from Request",ionBuffer.memBuf,ionBuffer.bytesToWrite,ionBuffer.memFd); + munmap(ionBuffer.memBuf, MEM_BUFFER_SIZE); + 
LOGV("closing the ion shared fd"); + close(ionBuffer.memFd); + rc = ioctl(ionfd, ION_IOC_FREE, &ionBuffer.ion_handle); + if (rc) { + LOGE("ION_IOC_FREE ioctl failed\n"); + } + // free the local buffer corresponding to ion buffer + free(ionBuffer.localBuf); + LOGV("Removing from response Q"); + memBuffersResponseQueue.erase(it); + } + if (ionfd >= 0) { + close(ionfd); + ionfd = -1; + } +} +}// namespace android diff --git a/media/libstagefright/LPAPlayerPMEM.cpp b/media/libstagefright/LPAPlayerPMEM.cpp new file mode 100644 index 0000000..7698d13 --- /dev/null +++ b/media/libstagefright/LPAPlayerPMEM.cpp @@ -0,0 +1,129 @@ +/* + * Copyright (C) 2009 The Android Open Source Project + * Copyright (c) 2012, Code Aurora Forum. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#define LOG_NDEBUG 0 +#define LOG_TAG "LPAPlayerPMEM" +#include <utils/Log.h> + +#include <media/stagefright/LPAPlayer.h> + +#define MEM_BUFFER_SIZE 524288 +#define MEM_BUFFER_COUNT 4 + +namespace android { +void LPAPlayer::audio_register_memory() { + void *pmem_buf; int32_t pmem_fd; + struct msm_audio_pmem_info pmem_info; + for (int i = 0; i < MEM_BUFFER_COUNT; i++) { + pmem_buf = memBufferAlloc(MEM_BUFFER_SIZE, &pmem_fd); + memset(&pmem_info, 0, sizeof(msm_audio_pmem_info)); + LOGV("Registering PMEM with fd %d and address as %x", pmem_fd, pmem_buf); + pmem_info.fd = pmem_fd; + pmem_info.vaddr = pmem_buf; + if ( ioctl(afd, AUDIO_REGISTER_PMEM, &pmem_info) < 0 ) { + LOGE("Registration of PMEM with the Driver failed with fd %d and memory %x", + pmem_info.fd, (unsigned int)pmem_info.vaddr); + } + } + +} + +void *LPAPlayer::memBufferAlloc(int32_t nSize, int32_t *pmem_fd){ + int32_t pmemfd = -1; + void *pmem_buf = NULL; + void *local_buf = NULL; + + // 1. Open the pmem_audio + pmemfd = open("/dev/pmem_audio", O_RDWR); + if (pmemfd < 0) { + LOGE("memBufferAlloc failed to open pmem_audio"); + *pmem_fd = -1; + return pmem_buf; + } + + // 2. MMAP to get the virtual address + pmem_buf = mmap(0, nSize, PROT_READ | PROT_WRITE, MAP_SHARED, pmemfd, 0); + if (MAP_FAILED == pmem_buf) { + LOGE("memBufferAlloc failed to mmap"); + close(pmemfd); + *pmem_fd = -1; + return NULL; + } + + local_buf = malloc(nSize); + if (NULL == local_buf) { + // unmap the corresponding PMEM buffer and close the fd + munmap(pmem_buf, MEM_BUFFER_SIZE); + close(pmemfd); + return NULL; + } + + // 3. Store this information for internal mapping / maintenance + BuffersAllocated buf(local_buf, pmem_buf, nSize, pmemfd); + memBuffersRequestQueue.push_back(buf); + + // 4. Send the pmem fd information + *pmem_fd = pmemfd; + LOGV("memBufferAlloc calling with required size %d", nSize); + LOGV("The PMEM that is allocated is %d and buffer is %x", pmemfd, (unsigned int)pmem_buf); + + // 5.
Return the virtual address + return pmem_buf; +} + +void LPAPlayer::memBufferDeAlloc() +{ + //Remove all the buffers from request queue + while (!memBuffersRequestQueue.empty()) { + List<BuffersAllocated>::iterator it = memBuffersRequestQueue.begin(); + BuffersAllocated &pmemBuffer = *it; + struct msm_audio_pmem_info pmem_info; + pmem_info.vaddr = (*it).memBuf; + pmem_info.fd = (*it).memFd; + if (ioctl(afd, AUDIO_DEREGISTER_PMEM, &pmem_info) < 0) { + LOGE("PMEM deregister failed"); + } + LOGV("Unmapping the address %u, size %d, fd %d from Request",pmemBuffer.memBuf,pmemBuffer.bytesToWrite,pmemBuffer.memFd); + munmap(pmemBuffer.memBuf, MEM_BUFFER_SIZE); + LOGV("closing the pmem fd"); + close(pmemBuffer.memFd); + // free the local buffer corresponding to pmem buffer + free(pmemBuffer.localBuf); + LOGV("Removing from request Q"); + memBuffersRequestQueue.erase(it); + } + + //Remove all the buffers from response queue + while(!memBuffersResponseQueue.empty()){ + List<BuffersAllocated>::iterator it = memBuffersResponseQueue.begin(); + BuffersAllocated &pmemBuffer = *it; + struct msm_audio_pmem_info pmem_info; + pmem_info.vaddr = (*it).memBuf; + pmem_info.fd = (*it).memFd; + if (ioctl(afd, AUDIO_DEREGISTER_PMEM, &pmem_info) < 0) { + LOGE("PMEM deregister failed"); + } + LOGV("Unmapping the address %u, size %d, fd %d from Response",pmemBuffer.memBuf,MEM_BUFFER_SIZE,pmemBuffer.memFd); + munmap(pmemBuffer.memBuf, MEM_BUFFER_SIZE); + LOGV("closing the pmem fd"); + close(pmemBuffer.memFd); + // free the local buffer corresponding to pmem buffer + free(pmemBuffer.localBuf); + LOGV("Removing from response Q"); + memBuffersResponseQueue.erase(it); + } +} +} //namespace android diff --git a/media/libstagefright/OMXCodec.cpp b/media/libstagefright/OMXCodec.cpp index c7e15c7..9a385b9 100755..100644 --- a/media/libstagefright/OMXCodec.cpp +++ b/media/libstagefright/OMXCodec.cpp @@ -19,11 +19,13 @@ #define LOG_TAG "OMXCodec" #include <utils/Log.h> +#include "include/AACDecoder.h" 
#include "include/AACEncoder.h" #include "include/AMRNBEncoder.h" #include "include/AMRWBEncoder.h" #include "include/AVCEncoder.h" #include "include/M4vH263Encoder.h" +#include "include/MP3Decoder.h" #include "include/ESDS.h" @@ -141,6 +143,10 @@ const int32_t ColorFormatInfo::preferredColorFormat[] = { }; #endif +#define FACTORY_CREATE(name) \ +static sp<MediaSource> Make##name(const sp<MediaSource> &source) { \ + return new name(source); \ +} #define FACTORY_CREATE_ENCODER(name) \ static sp<MediaSource> Make##name(const sp<MediaSource> &source, const sp<MetaData> &meta) { \ @@ -149,6 +155,10 @@ static sp<MediaSource> Make##name(const sp<MediaSource> &source, const sp<MetaDa #define FACTORY_REF(name) { #name, Make##name }, +#ifdef WITH_QCOM_LPA +FACTORY_CREATE(MP3Decoder) +FACTORY_CREATE(AACDecoder) +#endif FACTORY_CREATE_ENCODER(AMRNBEncoder) FACTORY_CREATE_ENCODER(AMRWBEncoder) FACTORY_CREATE_ENCODER(AACEncoder) @@ -180,6 +190,29 @@ static sp<MediaSource> InstantiateSoftwareEncoder( return NULL; } +#ifdef WITH_QCOM_LPA +static sp<MediaSource> InstantiateSoftwareDecoder( + const char *name, const sp<MediaSource> &source) { + struct FactoryInfo { + const char *name; + sp<MediaSource> (*CreateFunc)(const sp<MediaSource> &); + }; + + static const FactoryInfo kFactoryInfo[] = { + FACTORY_REF(MP3Decoder) + FACTORY_REF(AACDecoder) + }; + for (size_t i = 0; + i < sizeof(kFactoryInfo) / sizeof(kFactoryInfo[0]); ++i) { + if (!strcmp(name, kFactoryInfo[i].name)) { + return (*kFactoryInfo[i].CreateFunc)(source); + } + } + + return NULL; +} +#endif + #undef FACTORY_REF #undef FACTORY_CREATE @@ -203,6 +236,9 @@ static const CodecInfo kDecoderInfo[] = { { MEDIA_MIMETYPE_IMAGE_JPEG, "OMX.TI.JPEG.decode" }, // { MEDIA_MIMETYPE_AUDIO_MPEG, "OMX.TI.MP3.decode" }, { MEDIA_MIMETYPE_AUDIO_MPEG, "OMX.google.mp3.decoder" }, +#ifdef WITH_QCOM_LPA + { MEDIA_MIMETYPE_AUDIO_MPEG, "MP3Decoder" }, +#endif { MEDIA_MIMETYPE_AUDIO_MPEG_LAYER_II, "OMX.Nvidia.mp2.decoder" }, // { 
MEDIA_MIMETYPE_AUDIO_AMR_NB, "OMX.TI.AMR.decode" }, // { MEDIA_MIMETYPE_AUDIO_AMR_NB, "OMX.Nvidia.amr.decoder" }, @@ -213,6 +249,9 @@ static const CodecInfo kDecoderInfo[] = { // { MEDIA_MIMETYPE_AUDIO_AAC, "OMX.Nvidia.aac.decoder" }, { MEDIA_MIMETYPE_AUDIO_AAC, "OMX.TI.AAC.decode" }, { MEDIA_MIMETYPE_AUDIO_AAC, "OMX.google.aac.decoder" }, +#ifdef WITH_QCOM_LPA + { MEDIA_MIMETYPE_AUDIO_AAC, "AACDecoder" }, +#endif { MEDIA_MIMETYPE_AUDIO_G711_ALAW, "OMX.google.g711.alaw.decoder" }, { MEDIA_MIMETYPE_AUDIO_G711_MLAW, "OMX.google.g711.mlaw.decoder" }, { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.DUCATI1.VIDEO.DECODER" }, @@ -361,7 +400,8 @@ static void InitOMXParams(T *params) { } static bool IsSoftwareCodec(const char *componentName) { - if (!strncmp("OMX.google.", componentName, 11)) { + if (!strncmp("OMX.google.", componentName, 11) + || !strncmp("OMX.PV.", componentName, 7)) { return true; } @@ -628,15 +668,17 @@ sp<MediaSource> OMXCodec::Create( componentName = tmp.c_str(); } + sp<MediaSource> softwareCodec; if (createEncoder) { - sp<MediaSource> softwareCodec = - InstantiateSoftwareEncoder(componentName, source, meta); - - if (softwareCodec != NULL) { - LOGV("Successfully allocated software codec '%s'", componentName); - - return softwareCodec; - } + softwareCodec = InstantiateSoftwareEncoder(componentName, source, meta); +#ifdef WITH_QCOM_LPA + } else { + softwareCodec = InstantiateSoftwareDecoder(componentName, source); +#endif + } + if (softwareCodec != NULL) { + LOGE("Successfully allocated software codec '%s'", componentName); + return softwareCodec; } LOGE("Attempting to allocate OMX node '%s'", componentName); diff --git a/media/libstagefright/codecs/aacdec/AACDecoder.cpp b/media/libstagefright/codecs/aacdec/AACDecoder.cpp new file mode 100644 index 0000000..5a32e42 --- /dev/null +++ b/media/libstagefright/codecs/aacdec/AACDecoder.cpp @@ -0,0 +1,504 @@ +/* + * Copyright (C) 2009 The Android Open Source Project + * + * Licensed under the Apache License, Version 
2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +/* + *Copyright (c) 2011, Code Aurora Forum. All rights reserved. +*/ +#include "AACDecoder.h" +#define LOG_TAG "AACDecoder" + +#include "../../include/ESDS.h" + +#include "pvmp4audiodecoder_api.h" + +#include <media/stagefright/MediaBufferGroup.h> +#include <media/stagefright/MediaDebug.h> +#include <media/stagefright/MediaDefs.h> +#include <media/stagefright/MetaData.h> + +namespace android { + +AACDecoder::AACDecoder(const sp<MediaSource> &source) + : mSource(source), + mStarted(false), + mBufferGroup(NULL), + mConfig(new tPVMP4AudioDecoderExternal), + mDecoderBuf(NULL), + mAnchorTimeUs(0), + mNumSamplesOutput(0), + mInputBuffer(NULL), + mTempInputBuffer(NULL), + mTempBufferTotalSize(0), + mTempBufferDataLen(0), + mInputBufferSize(0) { + + sp<MetaData> srcFormat = mSource->getFormat(); + + int32_t sampleRate; + CHECK(srcFormat->findInt32(kKeySampleRate, &sampleRate)); + + mMeta = new MetaData; + mMeta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_AUDIO_RAW); + + // We'll always output stereo, regardless of how many channels are + // present in the input due to decoder limitations. 
+ mMeta->setInt32(kKeyChannelCount, 2); + mMeta->setInt32(kKeySampleRate, sampleRate); + + int64_t durationUs; + if (srcFormat->findInt64(kKeyDuration, &durationUs)) { + mMeta->setInt64(kKeyDuration, durationUs); + } + mMeta->setCString(kKeyDecoderComponent, "AACDecoder"); + + mInitCheck = initCheck(); +} + +status_t AACDecoder::initCheck() { + memset(mConfig, 0, sizeof(tPVMP4AudioDecoderExternal)); + mConfig->outputFormat = OUTPUTFORMAT_16PCM_INTERLEAVED; + mConfig->aacPlusEnabled = 1; + + // The software decoder doesn't properly support mono output on + // AACplus files. Always output stereo. + mConfig->desiredChannels = 2; + + + int32_t samplingRate; + sp<MetaData> meta = mSource->getFormat(); + meta->findInt32(kKeySampleRate, &samplingRate); + mConfig->samplingRate = samplingRate; + + int32_t bitRate; + meta->findInt32(kKeyBitRate, &bitRate); + + int32_t encodedChannelCnt; + meta->findInt32(kKeyChannelCount, &encodedChannelCnt); + //mConfig->desiredChannels = encodedChannelCnt; + + // The software decoder doesn't properly support mono output on + // AACplus files. Always output stereo. 
+ mConfig->desiredChannels = 2; + + UInt32 memRequirements = PVMP4AudioDecoderGetMemRequirements(); + mDecoderBuf = malloc(memRequirements); + + status_t err = PVMP4AudioDecoderInitLibrary(mConfig, mDecoderBuf); + if (err != MP4AUDEC_SUCCESS) { + LOGE("Failed to initialize MP4 audio decoder"); + return UNKNOWN_ERROR; + } + + uint32_t type; + const void *data; + size_t size; + if (meta->findData(kKeyESDS, &type, &data, &size)) { + ESDS esds((const char *)data, size); + CHECK_EQ(esds.InitCheck(), OK); + + const void *codec_specific_data; + size_t codec_specific_data_size; + esds.getCodecSpecificInfo( + &codec_specific_data, &codec_specific_data_size); + + mConfig->pInputBuffer = (UChar *)codec_specific_data; + mConfig->inputBufferCurrentLength = codec_specific_data_size; + mConfig->inputBufferMaxLength = 0; + + if (PVMP4AudioDecoderConfig(mConfig, mDecoderBuf) + != MP4AUDEC_SUCCESS) { + LOGE("Error in setting AAC decoder config"); + return ERROR_UNSUPPORTED; + } + } + + //this is used by mm-parser only, usually format block size is 2 + if (meta->findData(kKeyAacCodecSpecificData, &type, &data, &size)) { + if( size > AAC_MAX_FORMAT_BLOCK_SIZE ) { + LOGE("AAC FormatBlock is too big %d", size); + return ERROR_UNSUPPORTED; + } + memcpy( mFormatBlock, (uint8_t*)data, size); + mConfig->pInputBuffer = mFormatBlock; + mConfig->inputBufferCurrentLength = size; + mConfig->inputBufferMaxLength = 0; + + if (PVMP4AudioDecoderConfig(mConfig, mDecoderBuf) + != MP4AUDEC_SUCCESS) { + LOGE("Error in setting AAC decoder config"); + return ERROR_UNSUPPORTED; + } + } + return OK; +} + +AACDecoder::~AACDecoder() { + if (mStarted) { + stop(); + } + + delete mConfig; + mConfig = NULL; + + //Reset temp buffer + if( mTempInputBuffer != NULL ) { + free(mTempInputBuffer); + mTempInputBuffer = NULL; + } +} + +status_t AACDecoder::start(MetaData *params) { + CHECK(!mStarted); + + if (mInitCheck != OK) { + LOGE("InitCheck Failed"); + return UNKNOWN_ERROR; + } + + mBufferGroup = new 
MediaBufferGroup; + mBufferGroup->add_buffer(new MediaBuffer(4096 * 2)); + + mSource->start(); + + mAnchorTimeUs = 0; + mNumSamplesOutput = 0; + mStarted = true; + mNumDecodedBuffers = 0; + mUpsamplingFactor = 2; + + return OK; +} + +status_t AACDecoder::stop() { + CHECK(mStarted); + + if (mInputBuffer) { + mInputBuffer->release(); + mInputBuffer = NULL; + } + + free(mDecoderBuf); + mDecoderBuf = NULL; + + delete mBufferGroup; + mBufferGroup = NULL; + + if( mTempInputBuffer != NULL ) { + free(mTempInputBuffer); + mTempInputBuffer = NULL; + } + mTempBufferDataLen = 0; + mTempBufferTotalSize = 0; + + mSource->stop(); + + mStarted = false; + + return OK; +} + +sp<MetaData> AACDecoder::getFormat() { + return mMeta; +} + +status_t AACDecoder::read( + MediaBuffer **out, const ReadOptions *options) { + status_t err; + + *out = NULL; + + int64_t seekTimeUs; + ReadOptions::SeekMode mode; + if (options && options->getSeekTo(&seekTimeUs, &mode)) { + CHECK(seekTimeUs >= 0); + + mNumSamplesOutput = 0; + + if (mInputBuffer) { + mInputBuffer->release(); + mInputBuffer = NULL; + } + + // Make sure that the next buffer output does not still + // depend on fragments from the last one decoded. + PVMP4AudioDecoderResetBuffer(mDecoderBuf); + } else { + seekTimeUs = -1; + } + + + uint8_t* inputBuffer = NULL; + uint32_t inputBufferSize = 0; + + if (mInputBuffer == NULL) { + err = mSource->read(&mInputBuffer, options); + + if (err != OK) { + if(mInputBuffer){ + mInputBuffer->release(); + mInputBuffer = NULL; + } + + if(mTempInputBuffer != NULL){ + free(mTempInputBuffer); + mTempInputBuffer = NULL; + } + + mTempBufferDataLen = 0; + mTempBufferTotalSize = 0; + mInputBufferSize = 0; + return err; + } + + int64_t timeUs; + if (mInputBuffer->meta_data()->findInt64(kKeyTime, &timeUs)) { + mAnchorTimeUs = timeUs; + if( timeUs != 0 ) { + mNumSamplesOutput = 0; + } + } else { + // We must have a new timestamp after seeking. 
+ CHECK(seekTimeUs < 0); + } + + inputBuffer = (UChar *)mInputBuffer->data() + mInputBuffer->range_offset(); + inputBufferSize = mInputBuffer->range_length(); + if ( mInputBufferSize == 0 ) { + // Remember the first input buffer size + mInputBufferSize = mInputBuffer->size(); + } + //Check if there was incomplete frame assembly started + if ( mTempBufferDataLen ) { + LOGV("Incomplete frame assembly is in progress mTempBufferDataLen %d", mTempBufferDataLen); + if ( mTempBufferDataLen + inputBufferSize > mTempBufferTotalSize ) { + LOGE("Temp buffer size exceeded %d input size %d", mTempBufferTotalSize, inputBufferSize); + return UNKNOWN_ERROR; + } + //append new input buffer to temp buffer + memcpy( mTempInputBuffer + mTempBufferDataLen, inputBuffer, inputBufferSize ); + + //update the new input buffer data + if ( inputBufferSize + mTempBufferDataLen < mInputBufferSize ) { + LOGV("Reached end of stream case" ); + inputBufferSize += mTempBufferDataLen; + mTempBufferDataLen = 0; + mInputBufferSize = inputBufferSize; + mInputBuffer->set_range(0, inputBufferSize); + } + memcpy( inputBuffer, mTempInputBuffer, inputBufferSize); + } + } + else { + inputBuffer = (UChar *)mInputBuffer->data() + mInputBuffer->range_offset(); + inputBufferSize = mInputBuffer->range_length(); + } + + //Allocate Output buffer + MediaBuffer *buffer; + CHECK_EQ(mBufferGroup->acquire_buffer(&buffer), (status_t)OK); + + //Get the input buffer + LOGV("Input Buffer Length %d Offset %d size %d", mInputBuffer->range_length(), mInputBuffer->range_offset(), mInputBufferSize); + + mConfig->pInputBuffer = inputBuffer; + + mConfig->inputBufferCurrentLength = inputBufferSize; + mConfig->inputBufferMaxLength = 0; + mConfig->inputBufferUsedLength = 0; + mConfig->remainderBits = 0; + + mConfig->pOutputBuffer = static_cast<Int16 *>(buffer->data()); + mConfig->pOutputBuffer_plus = &mConfig->pOutputBuffer[2048]; + mConfig->repositionFlag = false; + + Int decoderErr; + + decoderErr = PVMP4AudioDecodeFrame(mConfig,
mDecoderBuf); + + /* + * AAC+/eAAC+ streams can be signalled in two ways: either explicitly + * or implicitly, according to MPEG4 spec. AAC+/eAAC+ is a dual + * rate system and the sampling rate in the final output is actually + * doubled compared with the core AAC decoder sampling rate. + * + * Explicit signalling is done by explicitly defining SBR audio object + * type in the bitstream. Implicit signalling is done by embedding + * SBR content in AAC extension payload specific to SBR, and hence + * requires an AAC decoder to perform pre-checks on actual audio frames. + * + * Thus, we could not say for sure whether a stream is + * AAC+/eAAC+ until the first data frame is decoded. + */ + if (++mNumDecodedBuffers <= 2) { + LOGV("audio/extended audio object type: %d + %d", + mConfig->audioObjectType, mConfig->extendedAudioObjectType); + LOGV("aac+ upsampling factor: %d desired channels: %d", + mConfig->aacPlusUpsamplingFactor, mConfig->desiredChannels); + + CHECK(mNumDecodedBuffers > 0); + if (mNumDecodedBuffers == 1) { + mUpsamplingFactor = mConfig->aacPlusUpsamplingFactor; + // Check on the sampling rate to see whether it is changed. 
+ int32_t sampleRate; + CHECK(mMeta->findInt32(kKeySampleRate, &sampleRate)); + if (mConfig->samplingRate != sampleRate) { + mMeta->setInt32(kKeySampleRate, mConfig->samplingRate); + LOGW("Sample rate was %d Hz, but now is %d Hz", + sampleRate, mConfig->samplingRate); + buffer->release(); + mInputBuffer->release(); + mInputBuffer = NULL; + return INFO_FORMAT_CHANGED; + } + } else { // mNumDecodedBuffers == 2 + if (mConfig->extendedAudioObjectType == MP4AUDIO_AAC_LC || + mConfig->extendedAudioObjectType == MP4AUDIO_LTP) { + if (mUpsamplingFactor == 2) { + // The stream turns out to be not aacPlus mode anyway + LOGW("Disable AAC+/eAAC+ since extended audio object type is %d", + mConfig->extendedAudioObjectType); + mConfig->aacPlusEnabled = 0; + } + } else { + if (mUpsamplingFactor == 1) { + // aacPlus mode does not buy us anything, but to cause + // 1. CPU load to increase, and + // 2. a half speed of decoding + LOGW("Disable AAC+/eAAC+ since upsampling factor is 1"); + mConfig->aacPlusEnabled = 0; + } + } + } + } + + size_t numOutBytes = + mConfig->frameLength * sizeof(int16_t) * mConfig->desiredChannels; + if (mUpsamplingFactor == 2) { + if (mConfig->desiredChannels == 1) { + memcpy(&mConfig->pOutputBuffer[1024], &mConfig->pOutputBuffer[2048], numOutBytes * 2); + } + numOutBytes *= 2; + } + + LOGV("AAC decoder %d frame length %d used length %d ", decoderErr, inputBufferSize, mConfig->inputBufferUsedLength); + if( inputBufferSize < mConfig->inputBufferUsedLength ) { + LOGE("unexpected error actual len %d is less than used len %d", inputBufferSize, mConfig->inputBufferUsedLength); + decoderErr = MP4AUDEC_INVALID_FRAME; + } + + int aacformattype = 0; + sp<MetaData> metadata = mSource->getFormat(); + metadata->findInt32(kkeyAacFormatAdif, &aacformattype); + + if ( decoderErr == MP4AUDEC_INCOMPLETE_FRAME && aacformattype == true) { + LOGW("Handle Incomplete frame error inputBufSize %d, usedLength %d", inputBufferSize, mConfig->inputBufferUsedLength); + 
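The numOutBytes computation above can be checked on its own: each decoded frame yields frameLength 16-bit samples per channel, and the byte count is doubled once more when the SBR upsampling factor is 2 (the AAC+/eAAC+ dual-rate case). A minimal sketch, not the decoder API; the helper name is illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Mirrors the numOutBytes arithmetic in AACDecoder::read():
// frameLength samples/channel * 2 bytes/sample * channels,
// doubled again when SBR upsampling is active.
static size_t aacOutputBytes(uint32_t frameLength, uint32_t channels,
                             uint32_t upsamplingFactor) {
    size_t numOutBytes = frameLength * sizeof(int16_t) * channels;
    if (upsamplingFactor == 2) {
        numOutBytes *= 2;
    }
    return numOutBytes;
}
```

For a typical 1024-sample stereo AAC frame this gives 4096 bytes, and 8192 bytes once SBR doubles the output rate, which is why the output MediaBuffer above is sized 4096 * 2.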
if(mConfig->inputBufferUsedLength == mInputBufferSize){ + LOGW("Decoder cannot process the buffer due to invalid frame"); + decoderErr = MP4AUDEC_INVALID_FRAME; + } else { + if ( !mTempInputBuffer ) { + //Allocate Temp buffer + uint32_t bytesToAllocate = 2 * mInputBuffer->size(); + mTempInputBuffer = (uint8_t*)malloc( bytesToAllocate ); + mTempBufferDataLen = 0; + if (mTempInputBuffer == NULL) { + LOGE("Could not allocate temp buffer bytesToAllocate quit playing"); + return UNKNOWN_ERROR; + } + mTempBufferTotalSize = bytesToAllocate; + LOGV("Allocated tempBuffer of size %d data len %d", mTempBufferTotalSize, mTempBufferDataLen); + } + // copy the remaining data into temp buffer + memcpy( mTempInputBuffer, inputBuffer, mConfig->inputBufferUsedLength ); + + if (mTempBufferDataLen != 0) { + //append previous remaining data back into temp buffer + LOGV("Appending remaining data tempDataLen %d usedLength %d", mTempBufferDataLen, mConfig->inputBufferUsedLength); + memcpy( mTempInputBuffer + mConfig->inputBufferUsedLength, + mTempInputBuffer + mInputBufferSize, + mTempBufferDataLen ); + } + + mTempBufferDataLen += mConfig->inputBufferUsedLength; + LOGV("mTempBufferDataLen %d inputBufferUsedLength %d", mTempBufferDataLen, mConfig->inputBufferUsedLength); + // temp buffer has accumulated one frame size worth data + // copy it back to input buffer so that it is fed to decoder next + if ( mTempBufferDataLen >= mInputBufferSize ) { + LOGV("mTempBufferDataLen %d exceeded mInputBufferSize %d ", mTempBufferDataLen, mInputBufferSize); + memcpy((UChar*)mInputBuffer->data(), mTempInputBuffer, mInputBufferSize ); + mTempBufferDataLen -= mInputBufferSize; + mInputBuffer->set_range( 0, mInputBufferSize ); + mConfig->inputBufferUsedLength = 0; + } + + //reset the output buffer size + numOutBytes = 0; + } // end of else INVALID FRAME + + } + if (decoderErr != MP4AUDEC_SUCCESS && decoderErr != MP4AUDEC_INCOMPLETE_FRAME) { + LOGW("AAC decoder returned error %d, substituting silence", 
decoderErr); + + memset(buffer->data(), 0, numOutBytes); + + // Discard input buffer. + if( mInputBuffer != NULL ) { + mInputBuffer->release(); + mInputBuffer = NULL; + } + + if(mTempBufferDataLen) { + //put previous remaining data to temp buffer beginning + memcpy( mTempInputBuffer, + mTempInputBuffer + mInputBufferSize, + mTempBufferDataLen ); + } + + // fall through + } + + buffer->set_range(0, numOutBytes); + + if (mInputBuffer != NULL) { + mInputBuffer->set_range( + mInputBuffer->range_offset() + mConfig->inputBufferUsedLength, + mInputBuffer->range_length() - mConfig->inputBufferUsedLength); + + if (mInputBuffer->range_length() == 0) { + if(decoderErr == MP4AUDEC_SUCCESS && mTempBufferDataLen) { + //put previous remaining data to temp buffer beginning + memcpy( mTempInputBuffer, + mTempInputBuffer + mInputBufferSize, + mTempBufferDataLen ); + } + mInputBuffer->release(); + mInputBuffer = NULL; + } + } + + buffer->meta_data()->setInt64( + kKeyTime, + mAnchorTimeUs + + (mNumSamplesOutput * 1000000) / mConfig->samplingRate); + + if(numOutBytes > 0) + mNumSamplesOutput += mConfig->frameLength * mUpsamplingFactor; + + *out = buffer; + + return OK; +} + +} // namespace android diff --git a/media/libstagefright/codecs/aacdec/Android.mk b/media/libstagefright/codecs/aacdec/Android.mk index 20c7bc0..c8bfd28 100644 --- a/media/libstagefright/codecs/aacdec/Android.mk +++ b/media/libstagefright/codecs/aacdec/Android.mk @@ -151,7 +151,7 @@ LOCAL_C_INCLUDES := \ LOCAL_ARM_MODE := arm -LOCAL_MODULE := libstagefright_aacdec +LOCAL_MODULE := libstagefright_aacdec_omx include $(BUILD_STATIC_LIBRARY) @@ -169,7 +169,7 @@ LOCAL_C_INCLUDES := \ LOCAL_CFLAGS := -DOSCL_IMPORT_REF= LOCAL_STATIC_LIBRARIES := \ - libstagefright_aacdec + libstagefright_aacdec_omx LOCAL_SHARED_LIBRARIES := \ libstagefright_omx libstagefright_foundation libutils @@ -178,3 +178,161 @@ LOCAL_MODULE := libstagefright_soft_aacdec LOCAL_MODULE_TAGS := optional include $(BUILD_SHARED_LIBRARY) + 
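The kKeyTime bookkeeping in AACDecoder::read() above reduces to one formula: the anchor timestamp of the current input buffer plus the samples emitted since that anchor, converted to microseconds at the output sampling rate (mNumSamplesOutput already includes the upsampling factor). A sketch with an illustrative helper name:

```cpp
#include <cassert>
#include <cstdint>

// Output-buffer timestamp as computed above:
// anchor time + samples-since-anchor scaled to microseconds.
// 64-bit math avoids overflow of numSamplesOutput * 1000000.
static int64_t outputTimeUs(int64_t anchorTimeUs, int64_t numSamplesOutput,
                            int32_t samplingRate) {
    return anchorTimeUs + (numSamplesOutput * 1000000) / samplingRate;
}
```

This is also why the code resets mNumSamplesOutput whenever a new non-zero input timestamp arrives: the sample counter is relative to the most recent anchor, not to the start of the stream.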
+################################################################################ + +ifeq ($(TARGET_USES_QCOM_LPA),true) +include $(CLEAR_VARS) + +LOCAL_SRC_FILES := \ + analysis_sub_band.cpp \ + apply_ms_synt.cpp \ + apply_tns.cpp \ + buf_getbits.cpp \ + byte_align.cpp \ + calc_auto_corr.cpp \ + calc_gsfb_table.cpp \ + calc_sbr_anafilterbank.cpp \ + calc_sbr_envelope.cpp \ + calc_sbr_synfilterbank.cpp \ + check_crc.cpp \ + dct16.cpp \ + dct64.cpp \ + decode_huff_cw_binary.cpp \ + decode_noise_floorlevels.cpp \ + deinterleave.cpp \ + digit_reversal_tables.cpp \ + dst16.cpp \ + dst32.cpp \ + dst8.cpp \ + esc_iquant_scaling.cpp \ + extractframeinfo.cpp \ + fft_rx4_long.cpp \ + fft_rx4_short.cpp \ + fft_rx4_tables_fxp.cpp \ + find_adts_syncword.cpp \ + fwd_long_complex_rot.cpp \ + fwd_short_complex_rot.cpp \ + gen_rand_vector.cpp \ + get_adif_header.cpp \ + get_adts_header.cpp \ + get_audio_specific_config.cpp \ + get_dse.cpp \ + get_ele_list.cpp \ + get_ga_specific_config.cpp \ + get_ics_info.cpp \ + get_prog_config.cpp \ + get_pulse_data.cpp \ + get_sbr_bitstream.cpp \ + get_sbr_startfreq.cpp \ + get_sbr_stopfreq.cpp \ + get_tns.cpp \ + getfill.cpp \ + getgroup.cpp \ + getics.cpp \ + getmask.cpp \ + hcbtables_binary.cpp \ + huffcb.cpp \ + huffdecode.cpp \ + hufffac.cpp \ + huffspec_fxp.cpp \ + idct16.cpp \ + idct32.cpp \ + idct8.cpp \ + imdct_fxp.cpp \ + infoinit.cpp \ + init_sbr_dec.cpp \ + intensity_right.cpp \ + inv_long_complex_rot.cpp \ + inv_short_complex_rot.cpp \ + iquant_table.cpp \ + long_term_prediction.cpp \ + long_term_synthesis.cpp \ + lt_decode.cpp \ + mdct_fxp.cpp \ + mdct_tables_fxp.cpp \ + mdst.cpp \ + mix_radix_fft.cpp \ + ms_synt.cpp \ + pns_corr.cpp \ + pns_intensity_right.cpp \ + pns_left.cpp \ + ps_all_pass_filter_coeff.cpp \ + ps_all_pass_fract_delay_filter.cpp \ + ps_allocate_decoder.cpp \ + ps_applied.cpp \ + ps_bstr_decoding.cpp \ + ps_channel_filtering.cpp \ + ps_decode_bs_utils.cpp \ + ps_decorrelate.cpp \ + ps_fft_rx8.cpp \ + 
ps_hybrid_analysis.cpp \ + ps_hybrid_filter_bank_allocation.cpp \ + ps_hybrid_synthesis.cpp \ + ps_init_stereo_mixing.cpp \ + ps_pwr_transient_detection.cpp \ + ps_read_data.cpp \ + ps_stereo_processing.cpp \ + pulse_nc.cpp \ + pv_div.cpp \ + pv_log2.cpp \ + pv_normalize.cpp \ + pv_pow2.cpp \ + pv_sine.cpp \ + pv_sqrt.cpp \ + pvmp4audiodecoderconfig.cpp \ + pvmp4audiodecoderframe.cpp \ + pvmp4audiodecodergetmemrequirements.cpp \ + pvmp4audiodecoderinitlibrary.cpp \ + pvmp4audiodecoderresetbuffer.cpp \ + q_normalize.cpp \ + qmf_filterbank_coeff.cpp \ + sbr_aliasing_reduction.cpp \ + sbr_applied.cpp \ + sbr_code_book_envlevel.cpp \ + sbr_crc_check.cpp \ + sbr_create_limiter_bands.cpp \ + sbr_dec.cpp \ + sbr_decode_envelope.cpp \ + sbr_decode_huff_cw.cpp \ + sbr_downsample_lo_res.cpp \ + sbr_envelope_calc_tbl.cpp \ + sbr_envelope_unmapping.cpp \ + sbr_extract_extended_data.cpp \ + sbr_find_start_andstop_band.cpp \ + sbr_generate_high_freq.cpp \ + sbr_get_additional_data.cpp \ + sbr_get_cpe.cpp \ + sbr_get_dir_control_data.cpp \ + sbr_get_envelope.cpp \ + sbr_get_header_data.cpp \ + sbr_get_noise_floor_data.cpp \ + sbr_get_sce.cpp \ + sbr_inv_filt_levelemphasis.cpp \ + sbr_open.cpp \ + sbr_read_data.cpp \ + sbr_requantize_envelope_data.cpp \ + sbr_reset_dec.cpp \ + sbr_update_freq_scale.cpp \ + set_mc_info.cpp \ + sfb.cpp \ + shellsort.cpp \ + synthesis_sub_band.cpp \ + tns_ar_filter.cpp \ + tns_decode_coef.cpp \ + tns_inv_filter.cpp \ + trans4m_freq_2_time_fxp.cpp \ + trans4m_time_2_freq_fxp.cpp \ + unpack_idx.cpp \ + window_tables_fxp.cpp \ + pvmp4setaudioconfig.cpp \ + AACDecoder.cpp + +LOCAL_CFLAGS := -DAAC_PLUS -DHQ_SBR -DPARAMETRICSTEREO -DOSCL_IMPORT_REF= -DOSCL_EXPORT_REF= -DOSCL_UNUSED_ARG= + +LOCAL_C_INCLUDES := frameworks/base/media/libstagefright/include + +LOCAL_MODULE := libstagefright_aacdec + +include $(BUILD_STATIC_LIBRARY) +endif diff --git a/media/libstagefright/codecs/mp3dec/Android.mk b/media/libstagefright/codecs/mp3dec/Android.mk index 
a08c9f0..4b29871 100644 --- a/media/libstagefright/codecs/mp3dec/Android.mk +++ b/media/libstagefright/codecs/mp3dec/Android.mk @@ -1,7 +1,10 @@ LOCAL_PATH:= $(call my-dir) + +ifeq ($(TARGET_USES_QCOM_LPA),true) include $(CLEAR_VARS) LOCAL_SRC_FILES := \ + MP3Decoder.cpp \ src/pvmp3_normalize.cpp \ src/pvmp3_alias_reduction.cpp \ src/pvmp3_crc.cpp \ @@ -52,6 +55,65 @@ LOCAL_CFLAGS := \ LOCAL_MODULE := libstagefright_mp3dec +include $(BUILD_STATIC_LIBRARY) + +endif + + +#LOCAL_PATH:= $(call my-dir) +include $(CLEAR_VARS) + +LOCAL_SRC_FILES := \ + src/pvmp3_normalize.cpp \ + src/pvmp3_alias_reduction.cpp \ + src/pvmp3_crc.cpp \ + src/pvmp3_decode_header.cpp \ + src/pvmp3_decode_huff_cw.cpp \ + src/pvmp3_getbits.cpp \ + src/pvmp3_dequantize_sample.cpp \ + src/pvmp3_framedecoder.cpp \ + src/pvmp3_get_main_data_size.cpp \ + src/pvmp3_get_side_info.cpp \ + src/pvmp3_get_scale_factors.cpp \ + src/pvmp3_mpeg2_get_scale_data.cpp \ + src/pvmp3_mpeg2_get_scale_factors.cpp \ + src/pvmp3_mpeg2_stereo_proc.cpp \ + src/pvmp3_huffman_decoding.cpp \ + src/pvmp3_huffman_parsing.cpp \ + src/pvmp3_tables.cpp \ + src/pvmp3_imdct_synth.cpp \ + src/pvmp3_mdct_6.cpp \ + src/pvmp3_dct_6.cpp \ + src/pvmp3_poly_phase_synthesis.cpp \ + src/pvmp3_equalizer.cpp \ + src/pvmp3_seek_synch.cpp \ + src/pvmp3_stereo_proc.cpp \ + src/pvmp3_reorder.cpp \ + +ifeq ($(TARGET_ARCH),arm) +LOCAL_SRC_FILES += \ + src/asm/pvmp3_polyphase_filter_window_gcc.s \ + src/asm/pvmp3_mdct_18_gcc.s \ + src/asm/pvmp3_dct_9_gcc.s \ + src/asm/pvmp3_dct_16_gcc.s +else +LOCAL_SRC_FILES += \ + src/pvmp3_polyphase_filter_window.cpp \ + src/pvmp3_mdct_18.cpp \ + src/pvmp3_dct_9.cpp \ + src/pvmp3_dct_16.cpp +endif + +LOCAL_C_INCLUDES := \ + frameworks/base/media/libstagefright/include \ + $(LOCAL_PATH)/src \ + $(LOCAL_PATH)/include + +LOCAL_CFLAGS := \ + -DOSCL_UNUSED_ARG= + +LOCAL_MODULE := libstagefright_mp3dec_omx + LOCAL_ARM_MODE := arm include $(BUILD_STATIC_LIBRARY) @@ -73,7 +135,7 @@ LOCAL_SHARED_LIBRARIES := \ 
libstagefright libstagefright_omx libstagefright_foundation libutils LOCAL_STATIC_LIBRARIES := \ - libstagefright_mp3dec + libstagefright_mp3dec_omx LOCAL_MODULE := libstagefright_soft_mp3dec LOCAL_MODULE_TAGS := optional diff --git a/media/libstagefright/codecs/mp3dec/MP3Decoder.cpp b/media/libstagefright/codecs/mp3dec/MP3Decoder.cpp new file mode 100644 index 0000000..f53ff10 --- /dev/null +++ b/media/libstagefright/codecs/mp3dec/MP3Decoder.cpp @@ -0,0 +1,586 @@ +/* + * Copyright (C) 2009 The Android Open Source Project + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include "MP3Decoder.h" + +#include "include/pvmp3decoder_api.h" + +#include <media/stagefright/MediaBufferGroup.h> +#include <media/stagefright/MediaDebug.h> +#include <media/stagefright/MediaDefs.h> +#include <media/stagefright/MetaData.h> +#include <media/stagefright/Utils.h> + +namespace android { + +// Everything must match except for +// protection, bitrate, padding, private bits, mode extension, +// copyright bit, original bit and emphasis. +// Yes ... there are things that must indeed match... 
+static const uint32_t kMask = 0xfffe0cc0; + +static bool get_mp3_frame_size( + uint32_t header, size_t *frame_size, + int *out_sampling_rate = NULL, int *out_channels = NULL, + int *out_bitrate = NULL) { + *frame_size = 0; + + if (out_sampling_rate) { + *out_sampling_rate = 0; + } + + if (out_channels) { + *out_channels = 0; + } + + if (out_bitrate) { + *out_bitrate = 0; + } + + if ((header & 0xffe00000) != 0xffe00000) { + return false; + } + + unsigned version = (header >> 19) & 3; + + if (version == 0x01) { + return false; + } + + unsigned layer = (header >> 17) & 3; + + if (layer == 0x00) { + return false; + } + + unsigned protection = (header >> 16) & 1; + + unsigned bitrate_index = (header >> 12) & 0x0f; + + if (bitrate_index == 0 || bitrate_index == 0x0f) { + // Disallow "free" bitrate. + return false; + } + + unsigned sampling_rate_index = (header >> 10) & 3; + + if (sampling_rate_index == 3) { + return false; + } + + static const int kSamplingRateV1[] = { 44100, 48000, 32000 }; + int sampling_rate = kSamplingRateV1[sampling_rate_index]; + if (version == 2 /* V2 */) { + sampling_rate /= 2; + } else if (version == 0 /* V2.5 */) { + sampling_rate /= 4; + } + + unsigned padding = (header >> 9) & 1; + + if (layer == 3) { + // layer I + + static const int kBitrateV1[] = { + 32, 64, 96, 128, 160, 192, 224, 256, + 288, 320, 352, 384, 416, 448 + }; + + static const int kBitrateV2[] = { + 32, 48, 56, 64, 80, 96, 112, 128, + 144, 160, 176, 192, 224, 256 + }; + + int bitrate = + (version == 3 /* V1 */) + ? 
kBitrateV1[bitrate_index - 1] + : kBitrateV2[bitrate_index - 1]; + + if (out_bitrate) { + *out_bitrate = bitrate; + } + + *frame_size = (12000 * bitrate / sampling_rate + padding) * 4; + } else { + // layer II or III + + static const int kBitrateV1L2[] = { + 32, 48, 56, 64, 80, 96, 112, 128, + 160, 192, 224, 256, 320, 384 + }; + + static const int kBitrateV1L3[] = { + 32, 40, 48, 56, 64, 80, 96, 112, + 128, 160, 192, 224, 256, 320 + }; + + static const int kBitrateV2[] = { + 8, 16, 24, 32, 40, 48, 56, 64, + 80, 96, 112, 128, 144, 160 + }; + + int bitrate; + if (version == 3 /* V1 */) { + bitrate = (layer == 2 /* L2 */) + ? kBitrateV1L2[bitrate_index - 1] + : kBitrateV1L3[bitrate_index - 1]; + } else { + // V2 (or 2.5) + + bitrate = kBitrateV2[bitrate_index - 1]; + } + + if (out_bitrate) { + *out_bitrate = bitrate; + } + + if (version == 3 /* V1 */) { + *frame_size = 144000 * bitrate / sampling_rate + padding; + } else { + // V2 or V2.5 + *frame_size = 72000 * bitrate / sampling_rate + padding; + } + } + + if (out_sampling_rate) { + *out_sampling_rate = sampling_rate; + } + + if (out_channels) { + int channel_mode = (header >> 6) & 3; + + *out_channels = (channel_mode == 3) ? 1 : 2; + } + + return true; +} + +static bool resync( + uint8_t *data, uint32_t size, uint32_t match_header, off_t *out_pos) { + + bool valid = false; + off_t pos = 0; + *out_pos = 0; + do { + if (pos + 4 > size) { + // Don't scan forever. 
+ LOGV("no dice, no valid sequence of frames found."); + break; + } + + uint32_t header = U32_AT(data + pos); + + if (match_header != 0 && (header & kMask) != (match_header & kMask)) { + ++pos; + continue; + } + + LOGV("found possible frame at %ld (header = 0x%08x)", pos, header); + + // We found what looks like a valid frame, + valid = true; + *out_pos = pos; + } while (!valid); + + return valid; +} + + +MP3Decoder::MP3Decoder(const sp<MediaSource> &source) + : mSource(source), + mNumChannels(0), + mStarted(false), + mBufferGroup(NULL), + mConfig(new tPVMP3DecoderExternal), + mDecoderBuf(NULL), + mAnchorTimeUs(0), + mNumFramesOutput(0), + mInputBuffer(NULL), + mPartialBuffer(NULL), + mFixedHeader(0) { + init(); +} + +void MP3Decoder::init() { + sp<MetaData> srcFormat = mSource->getFormat(); + + int32_t sampleRate; + CHECK(srcFormat->findInt32(kKeyChannelCount, &mNumChannels)); + CHECK(srcFormat->findInt32(kKeySampleRate, &sampleRate)); + + mMeta = new MetaData; + mMeta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_AUDIO_RAW); + mMeta->setInt32(kKeyChannelCount, mNumChannels); + mMeta->setInt32(kKeySampleRate, sampleRate); + + int64_t durationUs; + if (srcFormat->findInt64(kKeyDuration, &durationUs)) { + mMeta->setInt64(kKeyDuration, durationUs); + } + + mMeta->setCString(kKeyDecoderComponent, "MP3Decoder"); +} + +MP3Decoder::~MP3Decoder() { + if (mStarted) { + stop(); + } + + delete mConfig; + mConfig = NULL; +} + +status_t MP3Decoder::start(MetaData *params) { + CHECK(!mStarted); + + mBufferGroup = new MediaBufferGroup; + mBufferGroup->add_buffer(new MediaBuffer(4608 * 2)); + + mConfig->equalizerType = flat; + mConfig->crcEnabled = false; + + uint32_t memRequirements = pvmp3_decoderMemRequirements(); + mDecoderBuf = malloc(memRequirements); + + pvmp3_InitDecoder(mConfig, mDecoderBuf); + + mSource->start(); + + mAnchorTimeUs = 0; + mNumFramesOutput = 0; + mStarted = true; + + return OK; +} + +status_t MP3Decoder::stop() { + CHECK(mStarted); + + if (mInputBuffer) { + 
mInputBuffer->release(); + mInputBuffer = NULL; + } + + free(mDecoderBuf); + mDecoderBuf = NULL; + + delete mBufferGroup; + mBufferGroup = NULL; + + mSource->stop(); + + mStarted = false; + + return OK; +} + +sp<MetaData> MP3Decoder::getFormat() { + return mMeta; +} + +status_t MP3Decoder::updatePartialFrame() { + status_t err = OK; + if (mPartialBuffer == NULL) { + return err; + } + + size_t frameSize = 0; + uint32_t partialBufLen = mPartialBuffer->range_length(); + uint32_t inputBufLen = mInputBuffer->range_length(); + uint8_t frameHeader[4]; + uint8_t *frmHdr; + uint32_t header; + + + // Look at the frame size and complete the partial frame + // Also check if a valid header is found after the partial frame + if (partialBufLen < 4) { // check if partial frame has the 4-byte header + if (inputBufLen < (4 - partialBufLen)) { + // input buffer does not have the frame header bytes + // bail out TODO + LOGE("MP3Decoder::updatePartialFrame buffer too small, header not found" + " partial buffer len %d, input buffer len %d", + partialBufLen, inputBufLen); + //mPartialBuffer->release(); + //mPartialBuffer = NULL; + return UNKNOWN_ERROR; + } + + // copy the header bytes to frameHeader + memcpy (frameHeader, mPartialBuffer->data(), partialBufLen); + memcpy (frameHeader + partialBufLen, mInputBuffer->data(), (4 - partialBufLen)); + // get the first 4 bytes of the buffer + header = U32_AT((uint8_t *)frameHeader); + frmHdr = frameHeader; + } else { + frmHdr = (uint8_t *)mPartialBuffer->data(); + } + + // check if it's a good frame, and the frame size + // get the first 4 bytes of the buffer + header = U32_AT(frmHdr); + bool curFrame = get_mp3_frame_size(header,&frameSize); + if (!curFrame) { + LOGE("MP3Decoder::read - partial frame does not have a valid header 0x%x", + header); + return UNKNOWN_ERROR; + } + + // check if the following frame is good + uint32_t nextFrameOffset = frameSize - partialBufLen; + if ((nextFrameOffset + 4) <= inputBufLen) { + header = U32_AT((uint8_t 
*)mInputBuffer->data() + nextFrameOffset); + if ((header & 0xffe00000) != 0xffe00000) { + // next frame does not have a valid header, + // this may not be the next buffer, bail out. + LOGE("MP3Decoder::read - next frame does not have a valid header 0x%x", + header); + return UNKNOWN_ERROR; + } + } else { + // next frame header is out of range + // assume good header for now + LOGE("MP3Decoder::read - assuming next frame is good"); + } + + // check if the input buffer has the remaining partial frame + if (frameSize > (partialBufLen + inputBufLen)) { + // input buffer does not have the remaining partial frame, + // discard data here as frame split in 3 buffers not supported + LOGE("MP3Decoder::updatePartialFrame - input buffer does not have the complete frame." + " frame size %d, saved partial buffer len %d," + " input buffer len %d", frameSize, partialBufLen, inputBufLen); + return UNKNOWN_ERROR; + } + + // check if the mPartialBuffer can fit the remaining frame + if ((mPartialBuffer->size() - partialBufLen) < (frameSize - partialBufLen)) { + // mPartialBuffer is too small to hold the remaining frame + //TODO + LOGE("MP3Decoder::updatePartialFrame - mPartialBuffer is too small, size %d, required %d", + (mPartialBuffer->size() - partialBufLen), (frameSize - partialBufLen)); + return UNKNOWN_ERROR; + } + + // done with error checks + // copy the partial frames to form a complete frame + // Copy the remaining frame from input buffer + uint32_t bytesRemaining = frameSize - mPartialBuffer->range_length(); + memcpy ((uint8_t *)mPartialBuffer->data() + mPartialBuffer->range_length(), + (uint8_t *)mInputBuffer->data() + mInputBuffer->range_offset(), + bytesRemaining); + + // mark the bytes as consumed from input buffer + mInputBuffer->set_range( + mInputBuffer->range_offset() + bytesRemaining, + mInputBuffer->range_length() - bytesRemaining); + + // set the range and length of mPartialBuffer + mPartialBuffer->set_range(0, + mPartialBuffer->range_length() + bytesRemaining); + 
LOGE("MP3Decoder::updatePartialFrame - copied the partial frame %d, input buffer length %d", + bytesRemaining, mInputBuffer->range_length()); + + return err; +} + +status_t MP3Decoder::read( + MediaBuffer **out, const ReadOptions *options) { + status_t err; + + *out = NULL; + bool usedPartialFrame = false; + bool seekSource = false; + + int64_t seekTimeUs; + ReadOptions::SeekMode mode; + if (options && options->getSeekTo(&seekTimeUs, &mode)) { + CHECK(seekTimeUs >= 0); + + mNumFramesOutput = 0; + seekSource = true; + + if (mInputBuffer) { + mInputBuffer->release(); + mInputBuffer = NULL; + } + + if (mPartialBuffer) { + mPartialBuffer->release(); + mPartialBuffer = NULL; + } + + // Make sure that the next buffer output does not still + // depend on fragments from the last one decoded. + pvmp3_InitDecoder(mConfig, mDecoderBuf); + } else { + seekTimeUs = -1; + } + + if (mInputBuffer == NULL) { + err = mSource->read(&mInputBuffer, options); + + if (err != OK) { + return err; + } + + if ((mFixedHeader == 0) && (mInputBuffer->range_length() > 4)) { + //save the first 4 bytes as fixed header for the rest of the file + mFixedHeader = U32_AT((uint8_t *)mInputBuffer->data()); + } + + if (seekSource == true) { + off_t syncOffset = 0; + bool valid = resync((uint8_t *)mInputBuffer->data() + mInputBuffer->range_offset() + ,mInputBuffer->range_length(), mFixedHeader, &syncOffset); + if (valid) { + // consume these bytes, we might find a frame header in next buffer + mInputBuffer->set_range( + mInputBuffer->range_offset() + syncOffset, + mInputBuffer->range_length() - syncOffset); + LOGV("mp3 decoder found a sync point after seek syncOffset %d", syncOffset); + } else { + LOGV("NO SYNC POINT found, buffer length %d",mInputBuffer->range_length()); + } + } + + int64_t timeUs; + if (mInputBuffer->meta_data()->findInt64(kKeyTime, &timeUs)) { + mAnchorTimeUs = timeUs; + mNumFramesOutput = 0; + } else { + // We must have a new timestamp after seeking. 
+ CHECK(seekTimeUs < 0); + } + // check for partial frame + if (mPartialBuffer != NULL) { + err = updatePartialFrame(); + if (err != OK) { + // updating partial frame failed, discard the previously + // saved partial frame and continue + mPartialBuffer->release(); + mPartialBuffer = NULL; + err = OK; + } + } + } + + MediaBuffer *buffer; + CHECK_EQ(mBufferGroup->acquire_buffer(&buffer), OK); + + if (mPartialBuffer != NULL) { + mConfig->pInputBuffer = + (uint8_t *)mPartialBuffer->data() + mPartialBuffer->range_offset(); + mConfig->inputBufferCurrentLength = mPartialBuffer->range_length(); + usedPartialFrame = true; + } else { + mConfig->pInputBuffer = + (uint8_t *)mInputBuffer->data() + mInputBuffer->range_offset(); + mConfig->inputBufferCurrentLength = mInputBuffer->range_length(); + } + + mConfig->inputBufferMaxLength = 0; + mConfig->inputBufferUsedLength = 0; + + mConfig->outputFrameSize = buffer->size() / sizeof(int16_t); + mConfig->pOutputBuffer = static_cast<int16_t *>(buffer->data()); + + ERROR_CODE decoderErr; + if ((decoderErr = pvmp3_framedecoder(mConfig, mDecoderBuf)) + != NO_DECODING_ERROR) { + LOGV("mp3 decoder returned error %d", decoderErr); + + if ((decoderErr != NO_ENOUGH_MAIN_DATA_ERROR) && + (decoderErr != SYNCH_LOST_ERROR)) { + buffer->release(); + buffer = NULL; + + mInputBuffer->release(); + mInputBuffer = NULL; + if (mPartialBuffer) { + mPartialBuffer->release(); + mPartialBuffer = NULL; + } + LOGE("mp3 decoder returned UNKNOWN_ERROR"); + + return UNKNOWN_ERROR; + } + + if ((mPartialBuffer == NULL) && (decoderErr == NO_ENOUGH_MAIN_DATA_ERROR)) { + // Might be a partial frame, save it + mPartialBuffer = new MediaBuffer(mInputBuffer->size()); + memcpy ((uint8_t *)mPartialBuffer->data(), + mConfig->pInputBuffer, mConfig->inputBufferCurrentLength); + mPartialBuffer->set_range(0, mConfig->inputBufferCurrentLength); + // set output buffer to 0 + mConfig->outputFrameSize = 0; + // consume the copied bytes from input + mConfig->inputBufferUsedLength = 
mConfig->inputBufferCurrentLength; + } else if(decoderErr == SYNCH_LOST_ERROR) { + // Try to find the mp3 frame header in the current buffer + off_t syncOffset = 0; + bool valid = resync(mConfig->pInputBuffer, mConfig->inputBufferCurrentLength, + mFixedHeader, &syncOffset); + if (!valid || !syncOffset) { + // consume these bytes, we might find a frame header in next buffer + syncOffset = mConfig->inputBufferCurrentLength; + } + // set output buffer to 0 + mConfig->outputFrameSize = 0; + // consume the junk bytes from input buffer + mConfig->inputBufferUsedLength = syncOffset; + } else { + // This is recoverable, just ignore the current frame and + // play silence instead. + memset(buffer->data(), 0, mConfig->outputFrameSize * sizeof(int16_t)); + mConfig->inputBufferUsedLength = mInputBuffer->range_length(); + } + } + + buffer->set_range( + 0, mConfig->outputFrameSize * sizeof(int16_t)); + + if ((mPartialBuffer != NULL) && usedPartialFrame) { + mPartialBuffer->set_range( + mPartialBuffer->range_offset() + mConfig->inputBufferUsedLength, + mPartialBuffer->range_length() - mConfig->inputBufferUsedLength); + mPartialBuffer->release(); + mPartialBuffer = NULL; + } else { + mInputBuffer->set_range( + mInputBuffer->range_offset() + mConfig->inputBufferUsedLength, + mInputBuffer->range_length() - mConfig->inputBufferUsedLength); + } + + if (mInputBuffer->range_length() == 0) { + mInputBuffer->release(); + mInputBuffer = NULL; + } + + buffer->meta_data()->setInt64( + kKeyTime, + mAnchorTimeUs + + (mNumFramesOutput * 1000000) / mConfig->samplingRate); + + mNumFramesOutput += mConfig->outputFrameSize / mNumChannels; + + *out = buffer; + + return OK; +} + +} // namespace android diff --git a/media/libstagefright/include/AACDecoder.h b/media/libstagefright/include/AACDecoder.h index 886a3b7..a5160a4 100644 --- a/media/libstagefright/include/AACDecoder.h +++ b/media/libstagefright/include/AACDecoder.h @@ -19,6 +19,7 @@ #define AAC_DECODER_H_ #include 
<media/stagefright/MediaSource.h> +#define AAC_MAX_FORMAT_BLOCK_SIZE 16 struct tPVMP4AudioDecoderExternal; @@ -40,7 +41,6 @@ struct AACDecoder : public MediaSource { protected: virtual ~AACDecoder(); - private: sp<MetaData> mMeta; sp<MediaSource> mSource; @@ -57,6 +57,13 @@ private: int32_t mUpsamplingFactor; MediaBuffer *mInputBuffer; + uint8_t mFormatBlock[AAC_MAX_FORMAT_BLOCK_SIZE]; + + // Temporary buffer to store incomplete frame buffers + uint8_t* mTempInputBuffer; // data ptr + uint32_t mTempBufferTotalSize; // total size allocated + uint32_t mTempBufferDataLen; // actual data length + uint32_t mInputBufferSize; // input data length status_t initCheck(); AACDecoder(const AACDecoder &); diff --git a/media/libstagefright/include/MP3Decoder.h b/media/libstagefright/include/MP3Decoder.h index 4086fb6..8ff570a 100644 --- a/media/libstagefright/include/MP3Decoder.h +++ b/media/libstagefright/include/MP3Decoder.h @@ -53,13 +53,16 @@ private: void *mDecoderBuf; int64_t mAnchorTimeUs; int64_t mNumFramesOutput; + uint32_t mFixedHeader; MediaBuffer *mInputBuffer; + MediaBuffer *mPartialBuffer; void init(); MP3Decoder(const MP3Decoder &); MP3Decoder &operator=(const MP3Decoder &); + status_t updatePartialFrame(); }; } // namespace android diff --git a/services/audioflinger/AudioFlinger.cpp b/services/audioflinger/AudioFlinger.cpp index d655598..df8792d 100644 --- a/services/audioflinger/AudioFlinger.cpp +++ b/services/audioflinger/AudioFlinger.cpp @@ -1,6 +1,7 @@ /* //device/include/server/AudioFlinger/AudioFlinger.cpp ** ** Copyright 2007, The Android Open Source Project +** Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved. ** ** Licensed under the Apache License, Version 2.0 (the "License"); ** you may not use this file except in compliance with the License. 
@@ -158,9 +159,14 @@ static const char *audio_interfaces[] = { AudioFlinger::AudioFlinger() : BnAudioFlinger(), - mPrimaryHardwareDev(0), mMasterVolume(1.0f), mMasterMute(false), mNextUniqueId(1), + mPrimaryHardwareDev(0), +#ifdef WITH_QCOM_LPA + mLPALeftVol(1.0), mLPARightVol(1.0), +#endif + mMasterVolume(1.0f), mMasterMute(false), mNextUniqueId(1), mBtNrecIsOff(false) { + } void AudioFlinger::onFirstRef() @@ -171,7 +177,15 @@ void AudioFlinger::onFirstRef() /* TODO: move all this work into an Init() function */ mHardwareStatus = AUDIO_HW_IDLE; - +#ifdef WITH_QCOM_LPA + mLPAOutput = NULL; + mLPAHandle = -1; + mA2DPHandle = -1; + mLPAStreamIsActive = false; + mLPASessionId = -2; // -2 is invalid session ID + mIsEffectConfigChanged = false; + mLPAEffectChain = NULL; +#endif for (size_t i = 0; i < ARRAY_SIZE(audio_interfaces); i++) { const hw_module_t *mod; audio_hw_device_t *dev; @@ -236,7 +250,12 @@ AudioFlinger::~AudioFlinger() // closeOutput() will remove first entry from mPlaybackThreads closeOutput(mPlaybackThreads.keyAt(0)); } - +#ifdef WITH_QCOM_LPA + if (mLPAOutput) { + // Close the Output + closeSession(mLPAHandle); + } +#endif for (int i = 0; i < num_devs; i++) { audio_hw_device_t *dev = mAudioHwDevs[i]; audio_hw_device_close(dev); @@ -472,6 +491,201 @@ Exit: } return trackHandle; } +#ifdef WITH_QCOM_LPA +void AudioFlinger::createSession( + pid_t pid, + uint32_t sampleRate, + int channelCount, + int *sessionId, + status_t *status) +{ + status_t lStatus = NO_ERROR; + { + // createSession can be called from same PID (mediaserver process) only + if(pid != getpid()){ + lStatus = BAD_VALUE; + goto Exit; + } + Mutex::Autolock _l(mLock); + + LOGV("createSession() sessionId: %d sampleRate %d channelCount %d", + *sessionId, sampleRate, channelCount); + if (sessionId != NULL && *sessionId != AUDIO_SESSION_OUTPUT_MIX) { + for (size_t i = 0; i < mPlaybackThreads.size(); i++) { + sp<PlaybackThread> t = mPlaybackThreads.valueAt(i); + // Check if the session ID is 
already associated with a track + uint32_t sessions = t->hasAudioSession(*sessionId); + if (sessions & PlaybackThread::TRACK_SESSION) { + LOGE("There is a track already associated with this session %d", *sessionId); + lStatus = BAD_VALUE; + goto Exit; + } + // check if an effect with same session ID is waiting for a session to be created + if (sessions & PlaybackThread::EFFECT_SESSION) { + // Clear reference to previous effect chain if any + if(mLPAEffectChain.get()) { + mLPAEffectChain.clear(); + } + t->mLock.lock(); + mLPAEffectChain = t->getEffectChain_l(*sessionId); + t->mLock.unlock(); + } + } + mLPASessionId = *sessionId; + LOGV("createSession() lSessionId: %d", mLPASessionId); + if (mLPAEffectChain != NULL) { + mLPAEffectChain->setLPAFlag(true); + // For LPA, the volume will be applied in DSP. No need for volume + // control in the Effect chain, so setting it to unity. + uint32_t volume = 0x1000000; // Equals to 1.0 in 8.24 format + mLPAEffectChain->setVolume_l(&volume,&volume); + } else { + LOGW("There was no effectChain created for the sessionId(%d)", mLPASessionId); + } + } else { + if(sessionId != NULL) { + LOGE("Error: Invalid sessionID (%d) for LPA playback", *sessionId); + } + } + mLPASampleRate = sampleRate; + mLPANumChannels = channelCount; + } + +#ifdef SRS_PROCESSING + LOGD("SRS_Processing - CreateSession - OutNotify_Init: %p TID %d\n", this, gettid()); + SRS_Processing::ProcessOutNotify(SRS_Processing::AUTO, this, true); +#endif + +Exit: + if(status) { + *status = lStatus; + } +} + +void AudioFlinger::deleteSession() +{ + Mutex::Autolock _l(mLock); + LOGV("deleteSession"); + // -2 is invalid session ID + mLPASessionId = -2; + if (mLPAEffectChain != NULL) { + mLPAEffectChain->setLPAFlag(false); + size_t i, numEffects = mLPAEffectChain->getNumEffects(); + for(i = 0; i < numEffects; i++) { + sp<EffectModule> effect = mLPAEffectChain->getEffectFromIndex_l(i); + effect->setInBuffer(mLPAEffectChain->inBuffer()); + if (i == numEffects-1) { + 
effect->setOutBuffer(mLPAEffectChain->outBuffer()); + } else { + effect->setOutBuffer(mLPAEffectChain->inBuffer()); + } + effect->configure(); + } + mLPAEffectChain.clear(); + mLPAEffectChain = NULL; + } +#ifdef SRS_PROCESSING + LOGD("SRS_Processing - deleteSession - OutNotify_Init: %p TID %d\n", this, gettid()); + SRS_Processing::ProcessOutNotify(SRS_Processing::AUTO, this, false); +#endif +} + +// ToDo: Should we go ahead with this frameCount? +#define DEAFULT_FRAME_COUNT 1200 +void AudioFlinger::applyEffectsOn(int16_t *inBuffer, int16_t *outBuffer, int size) +{ + LOGV("applyEffectsOn: inBuf %p outBuf %p size %d", inBuffer, outBuffer, size); + // This might be the first buffer to apply effects after effect config change + // should not skip effects processing + mIsEffectConfigChanged = false; + + volatile size_t numEffects = 0; + if(mLPAEffectChain != NULL) { + numEffects = mLPAEffectChain->getNumEffects(); + } + + if( numEffects > 0) { + size_t i = 0; + int16_t *pIn = inBuffer; + int16_t *pOut = outBuffer; + + int frameCount = size / (sizeof(int16_t) * mLPANumChannels); + + while(frameCount > 0) { + if(mLPAEffectChain == NULL) { + LOGV("LPA Effect Chain is removed - No effects processing !!"); + numEffects = 0; + break; + } + mLPAEffectChain->lock(); + + numEffects = mLPAEffectChain->getNumEffects(); + if(!numEffects) { + LOGV("applyEffectsOn: All the effects are removed - nothing to process"); + mLPAEffectChain->unlock(); + break; + } + + int outFrameCount = (frameCount > DEAFULT_FRAME_COUNT ? 
DEAFULT_FRAME_COUNT: frameCount); + bool isEffectEnabled = false; + for(i = 0; i < numEffects; i++) { + // If effect configuration is changed while applying effects, do not process further + if(mIsEffectConfigChanged) { + mLPAEffectChain->unlock(); + LOGV("applyEffectsOn: mIsEffectConfigChanged is set - no further processing"); + return; + } + sp<EffectModule> effect = mLPAEffectChain->getEffectFromIndex_l(i); + if(effect == NULL) { + LOGE("getEffectFromIndex_l(%d) returned NULL ptr", i); + mLPAEffectChain->unlock(); + return; + } + if(i == 0) { + // For the first effect, set different input and output buffers + isEffectEnabled = effect->isProcessEnabled(); + effect->setInBuffer(pIn); + effect->setOutBuffer(pOut); + } else { + // For the remaining effects, use the previous effect's output buffer as input + effect->setInBuffer(pOut); + effect->setOutBuffer(pOut); + } + // true indicates that it is being applied on LPA output + effect->configure(true, mLPASampleRate, mLPANumChannels, outFrameCount); + } + + if(isEffectEnabled) { + // Clear the output buffer + memset(pOut, 0, (outFrameCount * mLPANumChannels * sizeof(int16_t))); + } else { + // Copy input buffer content to the output buffer + memcpy(pOut, pIn, (outFrameCount * mLPANumChannels * sizeof(int16_t))); + } + + mLPAEffectChain->process_l(); + + mLPAEffectChain->unlock(); + + // Update input and output buffer pointers + pIn += (outFrameCount * mLPANumChannels); + pOut += (outFrameCount * mLPANumChannels); + frameCount -= outFrameCount; + } + } + + if (!numEffects) { + LOGV("applyEffectsOn: There are no effects to be applied"); + if(inBuffer != outBuffer) { + // No effect applied so just copy input buffer to output buffer + memcpy(outBuffer, inBuffer, size); + } + } +#ifdef SRS_PROCESSING + SRS_Processing::ProcessOut(SRS_Processing::AUTO, this, outBuffer, size, mLPASampleRate, mLPANumChannels); +#endif +} +#endif uint32_t AudioFlinger::sampleRate(int output) const { @@ -549,7 +763,9 @@ status_t 
AudioFlinger::setMasterVolume(float value) } mHardwareStatus = AUDIO_HW_IDLE; } - +#ifdef WITH_QCOM_LPA + mA2DPHandle = -1; +#endif Mutex::Autolock _l(mLock); mMasterVolume = value; for (uint32_t i = 0; i < mPlaybackThreads.size(); i++) @@ -649,6 +865,19 @@ bool AudioFlinger::masterMute() const return mMasterMute; } +#ifdef WITH_QCOM_LPA +status_t AudioFlinger::setSessionVolume(int stream, float left, float right) +{ + mLPALeftVol = left; + mLPARightVol = right; + if( (mLPAOutput != NULL) && (mLPAStreamType == stream) ) { + mLPAOutput->stream->set_volume(mLPAOutput->stream,left*mStreamTypes[stream].volume, + right*mStreamTypes[stream].volume); + } + return NO_ERROR; +} +#endif + status_t AudioFlinger::setStreamVolume(int stream, float value, int output) { // check calling permissions @@ -661,11 +890,29 @@ status_t AudioFlinger::setStreamVolume(int stream, float value, int output) } AutoMutex lock(mLock); + +#ifdef WITH_QCOM_LPA + if( (mLPAOutput != NULL) && + (mLPAStreamType == stream) ) { + mStreamTypes[stream].volume = value; + mLPAOutput->stream->set_volume(mLPAOutput->stream,mLPALeftVol*value, + mLPARightVol*value); + } +#endif + PlaybackThread *thread = NULL; if (output) { thread = checkPlaybackThread_l(output); if (thread == NULL) { +#ifndef WITH_QCOM_LPA return BAD_VALUE; +#else + if (mLPAOutput == NULL) { + return BAD_VALUE; + } else { + return NO_ERROR; + } +#endif } } @@ -695,6 +942,7 @@ status_t AudioFlinger::setStreamMute(int stream, bool muted) } AutoMutex lock(mLock); + mStreamTypes[stream].mute = muted; for (uint32_t i = 0; i < mPlaybackThreads.size(); i++) mPlaybackThreads.valueAt(i)->setStreamMute(stream, muted); @@ -796,6 +1044,18 @@ status_t AudioFlinger::setParameters(int ioHandle, const String8& keyValuePairs) #endif } +#ifdef WITH_QCOM_LPA + // Ensure that the routing to LPA is invoked only when the LPA stream is + // active. 
Otherwise, if there is an input routing request and if there is a + valid LPA handle, routing gets applied for the output descriptor rather + than to the input descriptor. + if ( mLPAOutput && mLPAStreamIsActive && mLPAHandle == ioHandle ) { + result = mLPAOutput->stream->common.set_parameters(&mLPAOutput->stream->common, + keyValuePairs.string()); + return result; + } +#endif + + // hold a strong ref on thread in case closeOutput() or closeInput() is called // and the thread is exited once the lock is released sp<ThreadBase> thread; @@ -915,17 +1175,29 @@ void AudioFlinger::registerClient(const sp<IAudioFlingerClient>& client) { Mutex::Autolock _l(mLock); - +#ifdef WITH_QCOM_LPA + sp<IBinder> binder = client->asBinder(); + if (mNotificationClients.indexOfKey(binder) < 0) { +#else int pid = IPCThreadState::self()->getCallingPid(); if (mNotificationClients.indexOfKey(pid) < 0) { +#endif sp<NotificationClient> notificationClient = new NotificationClient(this, client, +#ifdef WITH_QCOM_LPA + binder); + LOGV("registerClient() client %p, binder %p", notificationClient.get(), binder.get()); + + mNotificationClients.add(binder, notificationClient); +#else pid); LOGV("registerClient() client %p, pid %d", notificationClient.get(), pid); mNotificationClients.add(pid, notificationClient); sp<IBinder> binder = client->asBinder(); +#endif + binder->linkToDeath(notificationClient); // the config change is always sent from playback or record threads to avoid deadlock @@ -938,19 +1210,55 @@ void AudioFlinger::registerClient(const sp<IAudioFlingerClient>& client) mRecordThreads.valueAt(i)->sendConfigEvent(AudioSystem::INPUT_OPENED); } } + +#ifdef WITH_QCOM_LPA + // Send the notification to the client only once. + if (mA2DPHandle != -1) { + LOGV("A2DP active. 
Notifying the registered client"); + client->ioConfigChanged(AudioSystem::A2DP_OUTPUT_STATE, mA2DPHandle, NULL); + } +#endif } -void AudioFlinger::removeNotificationClient(pid_t pid) +#ifdef WITH_QCOM_LPA +status_t AudioFlinger::deregisterClient(const sp<IAudioFlingerClient>& client) { + LOGV("deregisterClient() %p, tid %d, calling tid %d", client.get(), gettid(), IPCThreadState::self()->getCallingPid()); Mutex::Autolock _l(mLock); + sp<IBinder> binder = client->asBinder(); + int index = mNotificationClients.indexOfKey(binder); + if (index >= 0) { + mNotificationClients.removeItemsAt(index); + return true; + } + return false; +} +#endif + +#ifdef WITH_QCOM_LPA +void AudioFlinger::removeNotificationClient(sp<IBinder> binder) +#else +void AudioFlinger::removeNotificationClient(pid_t pid) +#endif +{ + Mutex::Autolock _l(mLock); +#ifdef WITH_QCOM_LPA + int index = mNotificationClients.indexOfKey(binder); + if (index >= 0) { + sp <NotificationClient> client = mNotificationClients.valueFor(binder); + LOGV("removeNotificationClient() %p, binder %p", client.get(), binder.get()); + mNotificationClients.removeItem(binder); + } + int pid = IPCThreadState::self()->getCallingPid(); +#else int index = mNotificationClients.indexOfKey(pid); if (index >= 0) { sp <NotificationClient> client = mNotificationClients.valueFor(pid); LOGV("removeNotificationClient() %p, pid %d", client.get(), pid); mNotificationClients.removeItem(pid); } - +#endif LOGV("%d died, releasing its sessions", pid); int num = mAudioSessionRefs.size(); bool removed = false; @@ -974,6 +1282,12 @@ void AudioFlinger::removeNotificationClient(pid_t pid) // audioConfigChanged_l() must be called with AudioFlinger::mLock held void AudioFlinger::audioConfigChanged_l(int event, int ioHandle, void *param2) { +#ifdef WITH_QCOM_LPA + LOGV("AudioFlinger::audioConfigChanged_l: event %d", event); + if (event == AudioSystem::EFFECT_CONFIG_CHANGED) { + mIsEffectConfigChanged = true; + } +#endif size_t size = 
     mNotificationClients.size();
     for (size_t i = 0; i < size; i++) {
         mNotificationClients.valueAt(i)->client()->ioConfigChanged(event, ioHandle, param2);
@@ -1067,6 +1381,15 @@ status_t AudioFlinger::ThreadBase::setParameters(const String8& keyValuePairs)
     return status;
 }
 
+#ifdef WITH_QCOM_LPA
+void AudioFlinger::ThreadBase::effectConfigChanged() {
+    mAudioFlinger->mLock.lock();
+    LOGV("New effect is being added to LPA chain, Notifying LPA Player");
+    mAudioFlinger->audioConfigChanged_l(AudioSystem::EFFECT_CONFIG_CHANGED, 0, NULL);
+    mAudioFlinger->mLock.unlock();
+}
+#endif
+
 void AudioFlinger::ThreadBase::sendConfigEvent(int event, int param)
 {
     Mutex::Autolock _l(mLock);
@@ -1725,7 +2048,17 @@ void AudioFlinger::PlaybackThread::audioConfigChanged_l(int event, int param) {
     default:
         break;
     }
-    mAudioFlinger->audioConfigChanged_l(event, mId, param2);
+#ifdef WITH_QCOM_LPA
+    if (event != AudioSystem::A2DP_OUTPUT_STATE) {
+#endif
+        mAudioFlinger->audioConfigChanged_l(event, mId, param2);
+#ifdef WITH_QCOM_LPA
+    }
+    else
+    {
+        mAudioFlinger->audioConfigChanged_l(event, param, NULL);
+    }
+#endif
 }
 
 void AudioFlinger::PlaybackThread::readOutputParameters()
@@ -2031,7 +2364,13 @@ bool AudioFlinger::MixerThread::threadLoop()
         // sleepTime == 0 means we must write to audio hardware
         if (sleepTime == 0) {
             for (size_t i = 0; i < effectChains.size(); i ++) {
-                effectChains[i]->process_l();
+#ifdef WITH_QCOM_LPA
+                if (effectChains[i] != mAudioFlinger->mLPAEffectChain) {
+#endif
+                    effectChains[i]->process_l();
+#ifdef WITH_QCOM_LPA
+                }
+#endif
             }
             // enable changes in effect chain
             unlockEffectChains(effectChains);
@@ -4058,8 +4397,16 @@ const sp<MemoryDealer>& AudioFlinger::Client::heap() const
 AudioFlinger::NotificationClient::NotificationClient(const sp<AudioFlinger>& audioFlinger,
                                                      const sp<IAudioFlingerClient>& client,
+#ifdef WITH_QCOM_LPA
+                                                     sp<IBinder> binder)
+#else
                                                      pid_t pid)
-    : mAudioFlinger(audioFlinger), mPid(pid), mClient(client)
+#endif
+    : mAudioFlinger(audioFlinger),
+#ifdef WITH_QCOM_LPA
+      mBinder(binder),
+#endif
+      mClient(client)
 {
 }
 
@@ -4072,7 +4419,11 @@ void AudioFlinger::NotificationClient::binderDied(const wp<IBinder>& who)
 {
     sp<NotificationClient> keep(this);
     {
+#ifdef WITH_QCOM_LPA
+        mAudioFlinger->removeNotificationClient(mBinder);
+#else
         mAudioFlinger->removeNotificationClient(mPid);
+#endif
     }
 }
 
@@ -4976,6 +5327,14 @@ int AudioFlinger::openOutput(uint32_t *pDevices,
         if (pChannels) *pChannels = channels;
         if (pLatencyMs) *pLatencyMs = thread->latency();
 
+#ifdef WITH_QCOM_LPA
+        // if the device is a A2DP, then this is an A2DP Output
+        if ( true == audio_is_a2dp_device((audio_devices_t) *pDevices) )
+        {
+            mA2DPHandle = id;
+            LOGV("A2DP device activated. The handle is set to %d", mA2DPHandle);
+        }
+#endif
         // notify client processes of the new output creation
         thread->audioConfigChanged_l(AudioSystem::OUTPUT_OPENED);
         return id;
@@ -4984,6 +5343,95 @@ int AudioFlinger::openOutput(uint32_t *pDevices,
     return 0;
 }
 
+#ifdef WITH_QCOM_LPA
+int AudioFlinger::openSession(uint32_t *pDevices,
+                              uint32_t *pFormat,
+                              uint32_t flags,
+                              int32_t streamType,
+                              int32_t sessionId)
+{
+    status_t status;
+    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
+    uint32_t format = pFormat ? *pFormat : 0;
+    audio_stream_out_t *outStream;
+    audio_hw_device_t *outHwDev;
+
+    LOGV("openSession(), Device %x, Format %d, flags %x sessionId %x",
+         pDevices ? *pDevices : 0,
+         format,
+         flags,
+         sessionId);
+
+    if (pDevices == NULL || *pDevices == 0) {
+        return 0;
+    }
+    Mutex::Autolock _l(mLock);
+
+    outHwDev = findSuitableHwDev_l(*pDevices);
+    if (outHwDev == NULL)
+        return 0;
+    status = outHwDev->open_output_session(outHwDev, *pDevices, (int *)&format,sessionId,&outStream);
+
+    LOGV("openSession() openOutputSession returned output %p, Format %d, status %d",
+         outStream,
+         format,
+         status);
+
+    mHardwareStatus = AUDIO_HW_IDLE;
+
+    if (outStream != NULL) {
+        mLPAOutput = new AudioStreamOut(outHwDev, outStream);
+        int id = nextUniqueId();
+        mLPAHandle = id;
+        mLPAStreamType = streamType;
+        mLPAStreamIsActive = true;
+        if (pFormat) *pFormat = format;
+        return id;
+    }
+    return 0;
+}
+
+status_t AudioFlinger::pauseSession(int output, int32_t streamType)
+{
+    if (output == mLPAHandle && streamType == mLPAStreamType ) {
+        mLPAStreamIsActive = false;
+    }
+
+    return NO_ERROR;
+}
+
+status_t AudioFlinger::resumeSession(int output, int32_t streamType)
+{
+    if (output == mLPAHandle && streamType == mLPAStreamType ) {
+        mLPAStreamIsActive = true;
+    }
+
+    return NO_ERROR;
+}
+
+status_t AudioFlinger::closeSession(int output)
+{
+    Mutex::Autolock _l(mLock);
+    LOGV("closeSession() %d", output);
+
+    // Is this required?
+    //AudioSystem::stopOutput(output, (AudioSystem::stream_type)mStreamType);
+
+    // Delete the Audio session
+    if (mLPAOutput && (output == mLPAHandle)) {
+        mLPAOutput->stream->common.standby(&mLPAOutput->stream->common);
+        mLPAOutput->hwDev->close_output_stream(mLPAOutput->hwDev, mLPAOutput->stream);
+        delete mLPAOutput;
+        mLPAOutput = NULL;
+        mLPAHandle = -1;
+        mLPAStreamIsActive = false;
+        mLPAStreamType = -1;
+    }
+
+    return NO_ERROR;
+}
+#endif
+
 int AudioFlinger::openDuplicateOutput(int output1, int output2)
 {
     Mutex::Autolock _l(mLock);
@@ -5029,6 +5477,14 @@ status_t AudioFlinger::closeOutput(int output)
         void *param2 = 0;
         audioConfigChanged_l(AudioSystem::OUTPUT_CLOSED, output, param2);
         mPlaybackThreads.removeItem(output);
+#ifdef WITH_QCOM_LPA
+        if (mA2DPHandle == output)
+        {
+            mA2DPHandle = -1;
+            LOGV("A2DP OutputClosed Notifying Client");
+            audioConfigChanged_l(AudioSystem::A2DP_OUTPUT_STATE, mA2DPHandle, param2);
+        }
+#endif
     }
 
     thread->exit();
@@ -5205,7 +5661,12 @@ status_t AudioFlinger::setStreamOutput(uint32_t stream, int output)
             srcThread->invalidateTracks(stream);
         }
     }
-
+#ifdef WITH_QCOM_LPA
+    if ( mA2DPHandle == output ) {
+        LOGV("A2DP Activated and hence notifying the client");
+        dstThread->sendConfigEvent(AudioSystem::A2DP_OUTPUT_STATE, mA2DPHandle);
+    }
+#endif
     return NO_ERROR;
 }
 
@@ -5746,6 +6207,21 @@ sp<AudioFlinger::EffectHandle> AudioFlinger::ThreadBase::createEffect_l(
             addEffectChain_l(chain);
             chain->setStrategy(getStrategyForSession_l(sessionId));
             chainCreated = true;
+#ifdef WITH_QCOM_LPA
+            if(sessionId == mAudioFlinger->mLPASessionId) {
+                // Clear reference to previous effect chain if any
+                if(mAudioFlinger->mLPAEffectChain.get()) {
+                    mAudioFlinger->mLPAEffectChain.clear();
+                }
+                LOGV("New EffectChain is created for LPA session ID %d", sessionId);
+                mAudioFlinger->mLPAEffectChain = chain;
+                chain->setLPAFlag(true);
+                // For LPA, the volume will be applied in DSP. No need for volume
+                // control in the Effect chain, so setting it to unity.
+                uint32_t volume = 0x1000000; // Equals to 1.0 in 8.24 format
+                chain->setVolume_l(&volume,&volume);
+            }
+#endif
         } else {
             effect = chain->getEffectFromDesc_l(desc);
         }
@@ -5774,6 +6250,11 @@ sp<AudioFlinger::EffectHandle> AudioFlinger::ThreadBase::createEffect_l(
             effect->setDevice(mDevice);
             effect->setMode(mAudioFlinger->getMode());
+#ifdef WITH_QCOM_LPA
+            if(chain == mAudioFlinger->mLPAEffectChain) {
+                effect->setLPAFlag(true);
+            }
+#endif
         }
         // create effect handle and connect it to effect module
         handle = new EffectHandle(effect, client, effectClient, priority);
@@ -5877,7 +6358,10 @@ void AudioFlinger::ThreadBase::lockEffectChains_l(
 {
     effectChains = mEffectChains;
     for (size_t i = 0; i < mEffectChains.size(); i++) {
-        mEffectChains[i]->lock();
+#ifdef WITH_QCOM_LPA
+        if (mEffectChains[i] != mAudioFlinger->mLPAEffectChain)
+#endif
+            mEffectChains[i]->lock();
     }
 }
 
@@ -5885,7 +6369,10 @@ void AudioFlinger::ThreadBase::unlockEffectChains(
         Vector<sp <AudioFlinger::EffectChain> >& effectChains)
 {
     for (size_t i = 0; i < effectChains.size(); i++) {
-        effectChains[i]->unlock();
+#ifdef WITH_QCOM_LPA
+        if (mEffectChains[i] != mAudioFlinger->mLPAEffectChain)
+#endif
+            effectChains[i]->unlock();
     }
 }
 
@@ -6112,7 +6599,11 @@ AudioFlinger::EffectModule::EffectModule(const wp<ThreadBase>& wThread,
                                          int id,
                                          int sessionId)
     : mThread(wThread), mChain(chain), mId(id), mSessionId(sessionId), mEffectInterface(NULL),
+#ifdef WITH_QCOM_LPA
+      mStatus(NO_INIT), mState(IDLE), mSuspended(false), mIsForLPA(false)
+#else
       mStatus(NO_INIT), mState(IDLE), mSuspended(false)
+#endif
 {
     LOGV("Constructor %p", this);
     int lStatus;
@@ -6248,6 +6739,9 @@ sp<AudioFlinger::EffectHandle> AudioFlinger::EffectModule::controlHandle()
 void AudioFlinger::EffectModule::disconnect(const wp<EffectHandle>& handle, bool unpiniflast)
 {
+#ifdef WITH_QCOM_LPA
+    setEnabled(false);
+#endif
     LOGV("disconnect() %p handle %p ", this, handle.unsafe_get());
     // keep a strong reference on this EffectModule to avoid calling the
     // destructor before we exit
@@ -6353,9 +6847,18 @@ void AudioFlinger::EffectModule::reset_l()
     (*mEffectInterface)->command(mEffectInterface, EFFECT_CMD_RESET, 0, NULL, 0, NULL);
 }
 
+#ifdef WITH_QCOM_LPA
+status_t AudioFlinger::EffectModule::configure(bool isForLPA, int sampleRate, int channelCount, int frameCount)
+#else
 status_t AudioFlinger::EffectModule::configure()
+#endif
 {
     uint32_t channels;
+#ifdef WITH_QCOM_LPA
+    // Acquire lock here to make sure that any other thread does not delete
+    // the effect handle and release the effect module.
+    Mutex::Autolock _l(mLock);
+#endif
     if (mEffectInterface == NULL) {
         return NO_INIT;
     }
@@ -6366,11 +6869,25 @@ status_t AudioFlinger::EffectModule::configure()
     }
 
     // TODO: handle configuration of effects replacing track process
-    if (thread->channelCount() == 1) {
-        channels = AUDIO_CHANNEL_OUT_MONO;
+#ifdef WITH_QCOM_LPA
+    mIsForLPA = isForLPA;
+    if(isForLPA) {
+        if (channelCount == 1) {
+            channels = AUDIO_CHANNEL_OUT_MONO;
+        } else {
+            channels = AUDIO_CHANNEL_OUT_STEREO;
+        }
+        LOGV("%s: LPA ON - channels %d", __func__, channels);
     } else {
-        channels = AUDIO_CHANNEL_OUT_STEREO;
+#endif
+        if (thread->channelCount() == 1) {
+            channels = AUDIO_CHANNEL_OUT_MONO;
+        } else {
+            channels = AUDIO_CHANNEL_OUT_STEREO;
+        }
+#ifdef WITH_QCOM_LPA
     }
+#endif
 
     if ((mDescriptor.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) {
         mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_MONO;
@@ -6380,7 +6897,16 @@ status_t AudioFlinger::EffectModule::configure()
     mConfig.outputCfg.channels = channels;
     mConfig.inputCfg.format = AUDIO_FORMAT_PCM_16_BIT;
     mConfig.outputCfg.format = AUDIO_FORMAT_PCM_16_BIT;
-    mConfig.inputCfg.samplingRate = thread->sampleRate();
+#ifdef WITH_QCOM_LPA
+    if(isForLPA){
+        mConfig.inputCfg.samplingRate = sampleRate;
+        LOGV("%s: LPA ON - sampleRate %d", __func__, sampleRate);
+    } else {
+#endif
+        mConfig.inputCfg.samplingRate = thread->sampleRate();
+#ifdef WITH_QCOM_LPA
+    }
+#endif
     mConfig.outputCfg.samplingRate = mConfig.inputCfg.samplingRate;
     mConfig.inputCfg.bufferProvider.cookie = NULL;
     mConfig.inputCfg.bufferProvider.getBuffer = NULL;
@@ -6405,7 +6931,16 @@ status_t AudioFlinger::EffectModule::configure()
     }
     mConfig.inputCfg.mask = EFFECT_CONFIG_ALL;
     mConfig.outputCfg.mask = EFFECT_CONFIG_ALL;
-    mConfig.inputCfg.buffer.frameCount = thread->frameCount();
+#ifdef WITH_QCOM_LPA
+    if(isForLPA) {
+        mConfig.inputCfg.buffer.frameCount = frameCount;
+        LOGV("%s: LPA ON - frameCount %d", __func__, frameCount);
+    } else {
+#endif
+        mConfig.inputCfg.buffer.frameCount = thread->frameCount();
+#ifdef WITH_QCOM_LPA
+    }
+#endif
     mConfig.outputCfg.buffer.frameCount = mConfig.inputCfg.buffer.frameCount;
 
     LOGV("configure() %p thread %p buffer %p framecount %d",
@@ -6553,48 +7088,65 @@ status_t AudioFlinger::EffectModule::command(uint32_t cmdCode,
 
 status_t AudioFlinger::EffectModule::setEnabled(bool enabled)
 {
+#ifdef WITH_QCOM_LPA
+    bool effectStateChanged = false;
+    {
+#endif
+        Mutex::Autolock _l(mLock);
+        LOGV("setEnabled %p enabled %d", this, enabled);
-    Mutex::Autolock _l(mLock);
-    LOGV("setEnabled %p enabled %d", this, enabled);
-
-    if (enabled != isEnabled()) {
-        status_t status = AudioSystem::setEffectEnabled(mId, enabled);
-        if (enabled && status != NO_ERROR) {
-            return status;
-        }
+        if (enabled != isEnabled()) {
+#ifdef WITH_QCOM_LPA
+            effectStateChanged = true;
+#endif
+            status_t status = AudioSystem::setEffectEnabled(mId, enabled);
+            if (enabled && status != NO_ERROR) {
+                return status;
+            }
-        switch (mState) {
-        // going from disabled to enabled
-        case IDLE:
-            mState = STARTING;
-            break;
-        case STOPPED:
-            mState = RESTART;
-            break;
-        case STOPPING:
-            mState = ACTIVE;
-            break;
+            switch (mState) {
+            // going from disabled to enabled
+            case IDLE:
+                mState = STARTING;
+                break;
+            case STOPPED:
+                mState = RESTART;
+                break;
+            case STOPPING:
+                mState = ACTIVE;
+                break;
-        // going from enabled to disabled
-        case RESTART:
-            mState = STOPPED;
-            break;
-        case STARTING:
-            mState = IDLE;
-            break;
-        case ACTIVE:
-            mState = STOPPING;
-            break;
-        case DESTROYED:
-            return NO_ERROR; // simply ignore as we are being destroyed
-        }
-        for (size_t i = 1; i < mHandles.size(); i++) {
-            sp<EffectHandle> h = mHandles[i].promote();
-            if (h != 0) {
-                h->setEnabled(enabled);
+            // going from enabled to disabled
+            case RESTART:
+                mState = STOPPED;
+                break;
+            case STARTING:
+                mState = IDLE;
+                break;
+            case ACTIVE:
+                mState = STOPPING;
+                break;
+            case DESTROYED:
+                return NO_ERROR; // simply ignore as we are being destroyed
+            }
+            for (size_t i = 1; i < mHandles.size(); i++) {
+                sp<EffectHandle> h = mHandles[i].promote();
+                if (h != 0) {
+                    h->setEnabled(enabled);
+                }
             }
         }
+#ifdef WITH_QCOM_LPA
     }
+    /*
+       Send notification event to LPA Player when an effect for
+       LPA output is enabled or disabled.
+    */
+    if (effectStateChanged && mIsForLPA) {
+        sp<ThreadBase> thread = mThread.promote();
+        thread->effectConfigChanged();
+    }
+#endif
     return NO_ERROR;
 }
 
@@ -7026,6 +7578,19 @@ status_t AudioFlinger::EffectHandle::command(uint32_t cmdCode,
         return disable();
     }
 
+#ifdef WITH_QCOM_LPA
+    LOGV("EffectHandle::command: isOnLPA %d", mEffect->isOnLPA());
+    if(mEffect->isOnLPA() &&
+       ((cmdCode == EFFECT_CMD_SET_PARAM) || (cmdCode == EFFECT_CMD_SET_PARAM_DEFERRED) ||
+        (cmdCode == EFFECT_CMD_SET_PARAM_COMMIT) || (cmdCode == EFFECT_CMD_SET_DEVICE) ||
+        (cmdCode == EFFECT_CMD_SET_VOLUME) || (cmdCode == EFFECT_CMD_SET_AUDIO_MODE)) ) {
+        // Notify LPA Player for the change in Effect module
+        // TODO: check if it is required to send mLPAHandle
+        LOGV("Notifying LPA player for the change in effect config");
+        mClient->audioFlinger()->audioConfigChanged_l(AudioSystem::EFFECT_CONFIG_CHANGED, 0, NULL);
+    }
+#endif
+
     return mEffect->command(cmdCode, cmdSize, pCmdData, replySize, pReplyData);
 }
 
@@ -7097,7 +7662,14 @@ AudioFlinger::EffectChain::EffectChain(const wp<ThreadBase>& wThread,
                                        int sessionId)
     : mThread(wThread), mSessionId(sessionId), mActiveTrackCnt(0), mTrackCnt(0), mTailBufferCount(0),
       mOwnInBuffer(false), mVolumeCtrlIdx(-1), mLeftVolume(UINT_MAX), mRightVolume(UINT_MAX),
-      mNewLeftVolume(UINT_MAX), mNewRightVolume(UINT_MAX)
+      mNewLeftVolume(UINT_MAX),
+#ifndef WITH_QCOM_LPA
+      mNewRightVolume(UINT_MAX)
+#else
+      mNewRightVolume(UINT_MAX),
+      mIsForLPATrack(false)
+#endif
+
 {
     mStrategy = AudioSystem::getStrategyForStream(AUDIO_STREAM_MUSIC);
     sp<ThreadBase> thread = mThread.promote();
@@ -7147,6 +7719,20 @@ sp<AudioFlinger::EffectModule> AudioFlinger::EffectChain::getEffectFromId_l(int
     return effect;
 }
 
+#ifdef WITH_QCOM_LPA
+sp<AudioFlinger::EffectModule> AudioFlinger::EffectChain::getEffectFromIndex_l(int idx)
+{
+    sp<EffectModule> effect = NULL;
+    if(idx < 0 || idx >= mEffects.size()) {
+        LOGE("EffectChain::getEffectFromIndex_l: invalid index %d", idx);
+    }
+    if(mEffects.size() > 0){
+        effect = mEffects[idx];
+    }
+    return effect;
+}
+#endif
+
 // getEffectFromType_l() must be called with ThreadBase::mLock held
 sp<AudioFlinger::EffectModule> AudioFlinger::EffectChain::getEffectFromType_l(
         const effect_uuid_t *type)
@@ -7197,7 +7783,11 @@ void AudioFlinger::EffectChain::process_l()
     }
 
     size_t size = mEffects.size();
+#ifdef WITH_QCOM_LPA
+    if (doProcess || isForLPATrack()) {
+#else
     if (doProcess) {
+#endif
        for (size_t i = 0; i < size; i++) {
             mEffects[i]->process();
         }
diff --git a/services/audioflinger/AudioFlinger.h b/services/audioflinger/AudioFlinger.h
index 9bd2c7f..ddc062b 100644
--- a/services/audioflinger/AudioFlinger.h
+++ b/services/audioflinger/AudioFlinger.h
@@ -87,6 +87,17 @@ public:
                                 int *sessionId,
                                 status_t *status);
 
+#ifdef WITH_QCOM_LPA
+    virtual void createSession(
+                                pid_t pid,
+                                uint32_t sampleRate,
+                                int channelCount,
+                                int *sessionId,
+                                status_t *status);
+
+    virtual void deleteSession();
+#endif
+
     virtual     uint32_t    sampleRate(int output) const;
     virtual     int         channelCount(int output) const;
     virtual     uint32_t    format(int output) const;
@@ -99,6 +110,9 @@ public:
     virtual     float       masterVolume() const;
     virtual     bool        masterMute() const;
 
+#ifdef WITH_QCOM_LPA
+    virtual     status_t    setSessionVolume(int stream, float left, float right);
+#endif
     virtual     status_t    setStreamVolume(int stream, float value, int output);
     virtual     status_t    setStreamMute(int stream, bool muted);
@@ -125,6 +139,20 @@ public:
                                     uint32_t *pLatencyMs,
                                     uint32_t flags);
 
+#ifdef WITH_QCOM_LPA
+    virtual int openSession( uint32_t *pDevices,
+                             uint32_t *pFormat,
+                             uint32_t flags,
+                             int32_t streamType,
+                             int32_t sessionId);
+
+    virtual status_t pauseSession(int output, int32_t streamType);
+
+    virtual status_t resumeSession(int output, int32_t streamType);
+
+    virtual status_t closeSession(int output);
+#endif
+
     virtual int openDuplicateOutput(int output1, int output2);
 
     virtual status_t closeOutput(int output);
@@ -147,6 +175,10 @@ public:
     virtual status_t getRenderPosition(uint32_t *halFrames, uint32_t *dspFrames, int output);
 
+#ifdef WITH_QCOM_LPA
+    virtual status_t deregisterClient(const sp<IAudioFlingerClient>& client);
+#endif
+
     virtual int newAudioSessionId();
 
     virtual void acquireAudioSessionId(int audioSession);
@@ -211,7 +243,11 @@ public:
     uint32_t getMode() { return mMode; }
 
     bool btNrecIsOff() { return mBtNrecIsOff; }
-
+#ifdef WITH_QCOM_LPA
+    void applyEffectsOn(int16_t *buffer1,
+                        int16_t *buffer2,
+                        int size);
+#endif
 private:
     AudioFlinger();
     virtual ~AudioFlinger();
@@ -248,7 +284,11 @@ private:
     public:
         NotificationClient(const sp<AudioFlinger>& audioFlinger,
                            const sp<IAudioFlingerClient>& client,
+#ifdef WITH_QCOM_LPA
+                           sp<IBinder> binder);
+#else
                            pid_t pid);
+#endif
         virtual ~NotificationClient();
 
         sp<IAudioFlingerClient> client() { return mClient; }
@@ -261,7 +301,11 @@ private:
         NotificationClient& operator = (const NotificationClient&);
 
         sp<AudioFlinger> mAudioFlinger;
+#ifdef WITH_QCOM_LPA
+        sp<IBinder> mBinder;
+#else
         pid_t mPid;
+#endif
         sp<IAudioFlingerClient> mClient;
     };
 
@@ -424,6 +468,9 @@ private:
         virtual status_t setParameters(const String8& keyValuePairs);
         virtual String8 getParameters(const String8& keys) = 0;
         virtual void audioConfigChanged_l(int event, int param = 0) = 0;
+#ifdef WITH_QCOM_LPA
+        void effectConfigChanged();
+#endif
         void sendConfigEvent(int event, int param = 0);
         void sendConfigEvent_l(int event, int param = 0);
         void processConfigEvents();
@@ -928,7 +975,11 @@ private:
 
     void removeClient_l(pid_t pid);
+#ifdef WITH_QCOM_LPA
+    void removeNotificationClient(sp<IBinder> binder);
+#else
     void removeNotificationClient(pid_t pid);
+#endif
 
     // record thread
@@ -1087,7 +1138,14 @@ private:
                          void *pReplyData);
 
         void reset_l();
+#ifdef WITH_QCOM_LPA
+        status_t configure(bool isForLPA = false,
+                           int sampleRate = 0,
+                           int channelCount = 0,
+                           int frameCount = 0);
+#else
         status_t configure();
+#endif
         status_t init();
         uint32_t state() {
             return mState;
@@ -1130,6 +1188,11 @@ private:
         bool isPinned() { return mPinned; }
         void unPin() { mPinned = false; }
+#ifdef WITH_QCOM_LPA
+        bool isOnLPA() { return mIsForLPA;}
+        void setLPAFlag(bool isForLPA) {mIsForLPA = isForLPA; }
+#endif
+
         status_t dump(int fd, const Vector<String16>& args);
 
     protected:
@@ -1161,6 +1224,9 @@ private:
                                     // sending disable command.
        uint32_t mDisableWaitCnt;    // current process() calls count during disable period.
        bool     mSuspended;         // effect is suspended: temporarily disabled by framework
+#ifdef WITH_QCOM_LPA
+       bool     mIsForLPA;
+#endif
    };
 
    // The EffectHandle class implements the IEffect interface. It provides resources
@@ -1263,12 +1329,18 @@ private:
         status_t addEffect_l(const sp<EffectModule>& handle);
         size_t removeEffect_l(const sp<EffectModule>& handle);
 
+#ifdef WITH_QCOM_LPA
+        size_t getNumEffects() { return mEffects.size(); }
+#endif
+
         int sessionId() { return mSessionId; }
         void setSessionId(int sessionId) { mSessionId = sessionId; }
 
         sp<EffectModule> getEffectFromDesc_l(effect_descriptor_t *descriptor);
         sp<EffectModule> getEffectFromId_l(int id);
+#ifdef WITH_QCOM_LPA
+        sp<EffectModule> getEffectFromIndex_l(int idx);
+#endif
         sp<EffectModule> getEffectFromType_l(const effect_uuid_t *type);
         bool setVolume_l(uint32_t *left, uint32_t *right);
         void setDevice_l(uint32_t device);
@@ -1311,6 +1383,10 @@ private:
                                    bool enabled);
 
         status_t dump(int fd, const Vector<String16>& args);
+#ifdef WITH_QCOM_LPA
+        bool isForLPATrack() {return mIsForLPATrack; }
+        void setLPAFlag(bool flag) {mIsForLPATrack = flag;}
+#endif
 
     protected:
         friend class AudioFlinger;
@@ -1353,6 +1429,9 @@ private:
         uint32_t mNewLeftVolume;       // new volume on left channel
         uint32_t mNewRightVolume;      // new volume on right channel
         uint32_t mStrategy; // strategy for this effect chain
+#ifdef WITH_QCOM_LPA
+        bool mIsForLPATrack;
+#endif
         // mSuspendedEffects lists all effect currently suspended in the chain
         // use effect type UUID timelow field as key. There is no real risk of identical
         // timeLow fields among effect type UUIDs.
@@ -1396,17 +1475,40 @@ private:
                 DefaultKeyedVector< int, sp<PlaybackThread> >  mPlaybackThreads;
                 PlaybackThread::stream_type_t       mStreamTypes[AUDIO_STREAM_CNT];
+#ifdef WITH_QCOM_LPA
+                float                               mLPALeftVol;
+                float                               mLPARightVol;
+#endif
                 float                               mMasterVolume;
                 bool                                mMasterMute;
 
                 DefaultKeyedVector< int, sp<RecordThread> >    mRecordThreads;
-
+#ifdef WITH_QCOM_LPA
+                DefaultKeyedVector< sp<IBinder>, sp<NotificationClient> >    mNotificationClients;
+#else
                 DefaultKeyedVector< pid_t, sp<NotificationClient> >    mNotificationClients;
+#endif
                 volatile int32_t                    mNextUniqueId;
                 uint32_t                            mMode;
                 bool                                mBtNrecIsOff;
+#ifdef WITH_QCOM_LPA
+                int                                 mA2DPHandle; // Handle to notify client (MIO)
+                int                                 mLPAStreamType;
+                AudioStreamOut                      *mLPAOutput;
+                audio_io_handle_t                   mLPAHandle;
+                int                                 mLPAStreamIsActive;
+                volatile bool                       mIsEffectConfigChanged;
+#endif
 
                 Vector<AudioSessionRef*> mAudioSessionRefs;
+
+#ifdef WITH_QCOM_LPA
+    public:
+                int                                 mLPASessionId;
+                sp<EffectChain>                     mLPAEffectChain;
+                int                                 mLPASampleRate;
+                int                                 mLPANumChannels;
+#endif
 };
 
diff --git a/services/audioflinger/AudioPolicyService.cpp b/services/audioflinger/AudioPolicyService.cpp
index 8da5ca1..f4d99b3 100644
--- a/services/audioflinger/AudioPolicyService.cpp
+++ b/services/audioflinger/AudioPolicyService.cpp
@@ -267,6 +267,21 @@ audio_io_handle_t AudioPolicyService::getOutput(audio_stream_type_t stream,
     return mpAudioPolicy->get_output(mpAudioPolicy, stream, samplingRate, format, channels, flags);
 }
 
+#ifdef WITH_QCOM_LPA
+audio_io_handle_t AudioPolicyService::getSession(audio_stream_type_t stream,
+                                                 uint32_t format,
+                                                 audio_policy_output_flags_t flags,
+                                                 int32_t sessionId)
+{
+    if (mpAudioPolicy == NULL) {
+        return 0;
+    }
+    LOGV("getSession() tid %d", gettid());
+    Mutex::Autolock _l(mLock);
+    return mpAudioPolicy->get_session(mpAudioPolicy, stream, format, flags, sessionId);
+}
+#endif
+
 status_t AudioPolicyService::startOutput(audio_io_handle_t output,
                                          audio_stream_type_t stream,
                                          int session)
@@ -301,6 +316,66 @@ void AudioPolicyService::releaseOutput(audio_io_handle_t output)
     mpAudioPolicy->release_output(mpAudioPolicy, output);
 }
 
+#ifdef WITH_QCOM_LPA
+status_t AudioPolicyService::pauseSession(audio_io_handle_t output,
+                                          audio_stream_type_t stream)
+{
+    LOGV("pauseSession() tid %d", gettid());
+    if (mpAudioPolicy != NULL) {
+        Mutex::Autolock _l(mLock);
+        mpAudioPolicy->pause_session(mpAudioPolicy,output,
+                                     stream);
+    }
+
+    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
+    if (af == 0) {
+        LOGW("pauseSession() could not get AudioFlinger");
+        return 0;
+    }
+
+    return af->pauseSession((int) output, (int32_t) stream);
+}
+
+status_t AudioPolicyService::resumeSession(audio_io_handle_t output,
+                                           audio_stream_type_t stream)
+{
+    LOGV("resumeSession() tid %d", gettid());
+
+    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
+    if (af == 0) {
+        LOGW("resumeSession() could not get AudioFlinger");
+        return 0;
+    }
+
+    if (NO_ERROR != af->resumeSession((int) output, (int32_t) stream))
+    {
+        LOGE("Resume Session failed from AudioFligner");
+    }
+
+    if (mpAudioPolicy != NULL) {
+        Mutex::Autolock _l(mLock);
+        mpAudioPolicy->resume_session(mpAudioPolicy,output,
+                                      stream);
+    }
+
+    return 0;
+}
+
+status_t AudioPolicyService::closeSession(audio_io_handle_t output)
+{
+    LOGV("closeSession() tid %d", gettid());
+    if (mpAudioPolicy != NULL) {
+        Mutex::Autolock _l(mLock);
+        mpAudioPolicy->release_session(mpAudioPolicy,output);
+    }
+
+    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
+    if (af == 0) return PERMISSION_DENIED;
+
+    return af->closeSession(output);
+}
+#endif
+
 audio_io_handle_t AudioPolicyService::getInput(int inputSource,
                                                uint32_t samplingRate,
                                                uint32_t format,
@@ -1363,6 +1438,34 @@ static audio_io_handle_t aps_open_output(void *service,
                             pLatencyMs, flags);
 }
 
+#ifdef WITH_QCOM_LPA
+static audio_io_handle_t aps_open_session(void *service,
+                                          uint32_t *pDevices,
+                                          uint32_t *pFormat,
+                                          audio_policy_output_flags_t flags,
+                                          int32_t stream,
+                                          int32_t sessionId)
+{
+    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
+    if (af == 0) {
+        LOGW("openSession() could not get AudioFlinger");
+        return 0;
+    }
+
+    return af->openSession(pDevices, (uint32_t *)pFormat, flags, stream, sessionId);
+}
+
+static int aps_close_session(void *service, audio_io_handle_t output)
+{
+    LOGV("closeSession() tid %d", gettid());
+
+    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
+    if (af == 0) return PERMISSION_DENIED;
+
+    return af->closeSession(output);
+}
+#endif
+
 static audio_io_handle_t aps_open_dup_output(void *service,
                                              audio_io_handle_t output1,
                                              audio_io_handle_t output2)
@@ -1505,6 +1608,10 @@ static int aps_set_voice_volume(void *service, float volume, int delay_ms)
 namespace {
     struct audio_policy_service_ops aps_ops = {
             open_output           : aps_open_output,
+#ifdef WITH_QCOM_LPA
+            open_session          : aps_open_session,
+            close_session         : aps_close_session,
+#endif
            open_duplicate_output : aps_open_dup_output,
            close_output          : aps_close_output,
            suspend_output        : aps_suspend_output,
diff --git a/services/audioflinger/AudioPolicyService.h b/services/audioflinger/AudioPolicyService.h
index d898a53..fe82d6d 100644
--- a/services/audioflinger/AudioPolicyService.h
+++ b/services/audioflinger/AudioPolicyService.h
@@ -69,6 +69,12 @@ public:
                                         uint32_t channels = 0,
                                         audio_policy_output_flags_t flags = AUDIO_POLICY_OUTPUT_FLAG_INDIRECT);
+#ifdef WITH_QCOM_LPA
+    virtual audio_io_handle_t getSession(audio_stream_type_t stream,
+                                         uint32_t format = AUDIO_FORMAT_DEFAULT,
+                                         audio_policy_output_flags_t flags = AUDIO_POLICY_OUTPUT_FLAG_DIRECT,
+                                         int32_t sessionId=-1);
+#endif
     virtual status_t startOutput(audio_io_handle_t output,
                                  audio_stream_type_t stream,
                                  int session = 0);
@@ -76,6 +82,11 @@ public:
                                 audio_stream_type_t stream,
                                 int session = 0);
     virtual void releaseOutput(audio_io_handle_t output);
+#ifdef WITH_QCOM_LPA
+    virtual status_t pauseSession(audio_io_handle_t output, audio_stream_type_t stream);
+    virtual status_t resumeSession(audio_io_handle_t output, audio_stream_type_t stream);
+    virtual status_t closeSession(audio_io_handle_t output);
+#endif
     virtual audio_io_handle_t getInput(int inputSource,
                                        uint32_t samplingRate = 0,
                                        uint32_t format = AUDIO_FORMAT_DEFAULT,