page.title=API Overview
page.keywords=preview,sdk,compatibility
page.tags=previewresources, androidm
sdk.platform.apiLevel=22-mnc
page.image=images/cards/card-api-overview_16-9_2x.png
@jd:body
The M Developer Preview gives you an advance look at the upcoming release for the Android platform, which offers new features for users and app developers. This document provides an introduction to the most notable APIs.
The M Developer Preview 3 release includes the final APIs for Android 6.0 (API level 23). If you are preparing an app for use on Android 6.0, download the latest SDK to complete your final updates and release testing. You can review the final APIs in the API Reference and see the API differences in the Android API Differences Report.
Important: You may now publish apps that target Android 6.0 (API level 23) to the Google Play store.
Note: If you have been working with previous preview releases and want to see the differences between the final API and previous preview versions, download the additional difference reports included in the preview docs reference.
If you have previously published an app for Android, be aware that your app might be affected by changes in the platform.
Please see Behavior Changes for complete information.
This preview enhances Android’s intent system by providing more powerful app linking. This feature allows you to associate an app with a web domain you own. Based on this association, the platform can determine the default app to use to handle a particular web link and skip prompting users to select an app. To learn how to implement this feature, see App Linking.
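For example, an activity that handles web links for a domain you own might declare an auto-verified intent filter like the following; the activity name and domain here are placeholders:

```xml
<activity android:name=".MainActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="http" android:host="example.com" />
    </intent-filter>
</activity>
```

With {@code android:autoVerify="true"} set, the platform attempts to verify your ownership of the declared domain when the app is installed.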
The system now performs automatic full data backup and restore for apps. For the duration of the M Developer Preview program, all apps are backed up, independent of which SDK version they target. After the final M SDK release, your app must target M to enable this behavior; you do not need to add any additional code. If users delete their Google accounts, their backup data is deleted as well. To learn how this feature works and how to configure what to back up on the file system, see Auto Backup for Apps.
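If you need finer control over what is backed up, you can point the manifest at a backup configuration resource. The resource and file names below are illustrative:

```xml
<!-- AndroidManifest.xml -->
<application android:fullBackupContent="@xml/backup_rules">
    ...
</application>

<!-- res/xml/backup_rules.xml -->
<full-backup-content>
    <include domain="sharedpref" path="user_settings.xml" />
    <exclude domain="database" path="cache.db" />
</full-backup-content>
```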
This preview offers new APIs to let you authenticate users by using their fingerprint scans on supported devices, and check how recently the user was last authenticated using a device unlocking mechanism (such as a lockscreen password). Use these APIs in conjunction with the Android Keystore system.
To authenticate users via fingerprint scan, get an instance of the new {@link android.hardware.fingerprint.FingerprintManager} class and call the {@link android.hardware.fingerprint.FingerprintManager#authenticate(android.hardware.fingerprint.FingerprintManager.CryptoObject, android.os.CancellationSignal, int, android.hardware.fingerprint.FingerprintManager.AuthenticationCallback, android.os.Handler) authenticate()} method. Your app must be running on a compatible device with a fingerprint sensor. You must implement the user interface for the fingerprint authentication flow in your app, and use the standard Android fingerprint icon in your UI. The Android fingerprint icon ({@code c_fp_40px.png}) is included in the sample app. If you are developing multiple apps that use fingerprint authentication, note that each app must authenticate the user’s fingerprint independently.
To use this feature in your app, first add the {@link android.Manifest.permission#USE_FINGERPRINT} permission in your manifest.
<uses-permission android:name="android.permission.USE_FINGERPRINT" />
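A minimal sketch of requesting authentication follows. It assumes {@code cryptoObject} wraps a cipher backed by an Android Keystore key, and that a fingerprint is already enrolled on the device; the variable names are illustrative:

```java
FingerprintManager fingerprintManager =
        (FingerprintManager) context.getSystemService(Context.FINGERPRINT_SERVICE);

if (fingerprintManager.isHardwareDetected()
        && fingerprintManager.hasEnrolledFingerprints()) {
    CancellationSignal cancellationSignal = new CancellationSignal();
    fingerprintManager.authenticate(cryptoObject, cancellationSignal,
            0 /* flags */, new FingerprintManager.AuthenticationCallback() {
        @Override
        public void onAuthenticationSucceeded(
                FingerprintManager.AuthenticationResult result) {
            // The scan matched an enrolled fingerprint;
            // proceed with the protected operation.
        }

        @Override
        public void onAuthenticationFailed() {
            // A fingerprint was read but not recognized; update the UI.
        }
    }, null /* run the callback on the main thread */);
}
```

Call {@code cancellationSignal.cancel()} when your UI is dismissed so the sensor is released.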
To see an app implementation of fingerprint authentication, refer to the Fingerprint Dialog sample. For a demonstration of how you can use these authentication APIs in conjunction with other Android APIs, see the video Fingerprint and Payment APIs.
If you are testing this feature in the emulator, simulate a fingerprint touch event with the following command:
adb -e emu finger touch <finger_id>
On Windows, you may have to run {@code telnet 127.0.0.1 <emulator-id>} followed by {@code finger touch <finger_id>}.
Your app can authenticate users based on how recently they last unlocked their device. This feature frees users from having to remember additional app-specific passwords, and avoids the need for you to implement your own authentication user interface. Your app should use this feature in conjunction with a public or secret key implementation for user authentication.
To set the timeout duration for which the same key can be re-used after a user is successfully authenticated, call the new {@link android.security.keystore.KeyGenParameterSpec.Builder#setUserAuthenticationValidityDurationSeconds(int) setUserAuthenticationValidityDurationSeconds()} method when you set up a {@link javax.crypto.KeyGenerator} or {@link java.security.KeyPairGenerator}.
Avoid showing the re-authentication dialog excessively; your app should try using the cryptographic object first, and if the timeout has expired, use the {@link android.app.KeyguardManager#createConfirmDeviceCredentialIntent(java.lang.CharSequence, java.lang.CharSequence) createConfirmDeviceCredentialIntent()} method to re-authenticate the user within your app.
To see an app implementation of this feature, refer to the Confirm Credential sample.
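As a sketch, you might generate an AES key that is usable only within 30 seconds of the user unlocking the device; the key alias and timeout value here are illustrative:

```java
SecretKey createTimeBoundKey() throws GeneralSecurityException {
    KeyGenerator keyGenerator = KeyGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
    keyGenerator.init(new KeyGenParameterSpec.Builder("my_key",
            KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
            .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
            // The key only works if the user authenticated recently.
            .setUserAuthenticationRequired(true)
            .setUserAuthenticationValidityDurationSeconds(30)
            .build());
    return keyGenerator.generateKey();
}
```

Using the key after the window expires throws an exception, which is your cue to call {@code createConfirmDeviceCredentialIntent()}.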
This preview provides you with APIs to make sharing intuitive and quick for users. You can now define direct share targets that launch a specific activity in your app. These direct share targets are exposed to users via the Share menu. This feature allows users to share content to targets, such as contacts, within other apps. For example, the direct share target might launch an activity in another social network app, which lets the user share content directly to a specific friend or community in that app.
To enable direct share targets you must define a class that extends the {@link android.service.chooser.ChooserTargetService} class. Declare your service in the manifest. Within that declaration, specify the {@link android.Manifest.permission#BIND_CHOOSER_TARGET_SERVICE} permission and an intent filter using the {@link android.service.chooser.ChooserTargetService#SERVICE_INTERFACE SERVICE_INTERFACE} action.
The following example shows how you might declare the {@link android.service.chooser.ChooserTargetService} in your manifest.
<service android:name=".ChooserTargetService" android:label="@string/service_name" android:permission="android.permission.BIND_CHOOSER_TARGET_SERVICE"> <intent-filter> <action android:name="android.service.chooser.ChooserTargetService" /> </intent-filter> </service>
For each activity that you want to expose to {@link android.service.chooser.ChooserTargetService}, add a {@code <meta-data>} element with the name {@code "android.service.chooser.chooser_target_service"} in your app manifest.
<activity android:name=".MyShareActivity" android:label="@string/share_activity_label"> <intent-filter> <action android:name="android.intent.action.SEND" /> </intent-filter> <meta-data android:name="android.service.chooser.chooser_target_service" android:value=".ChooserTargetService" /> </activity>
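A sketch of the service implementation itself; the contact name, icon resource, extras, and share activity below are illustrative:

```java
public class ChooserTargetService extends android.service.chooser.ChooserTargetService {
    @Override
    public List<ChooserTarget> onGetChooserTargets(ComponentName targetActivityName,
            IntentFilter matchedFilter) {
        ComponentName shareTarget =
                new ComponentName(getPackageName(), MyShareActivity.class.getName());
        Bundle extras = new Bundle();
        extras.putString("contact_id", "alice"); // merged into the launch intent

        List<ChooserTarget> targets = new ArrayList<>();
        // The third argument is a relevance score between 0.0 and 1.0,
        // used to rank this target in the Share menu.
        targets.add(new ChooserTarget("Alice",
                Icon.createWithResource(this, R.drawable.ic_contact_alice),
                1.0f, shareTarget, extras));
        return targets;
    }
}
```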
This preview provides a new voice interaction API which, together with Voice Actions, allows you to build conversational voice experiences into your apps. Call the {@link android.app.Activity#isVoiceInteraction()} method to determine if a voice action triggered your activity. If so, your app can use the {@link android.app.VoiceInteractor} class to request a voice confirmation from the user, select from a list of options, and more.
Most voice interactions originate from a user voice action. A voice interaction activity can also, however, start without user input. For example, another app launched through a voice interaction can also send an intent to launch a voice interaction. To determine if your activity launched from a user voice query or from another voice interaction app, call the {@link android.app.Activity#isVoiceInteractionRoot()} method. If another app launched your activity, the method returns {@code false}. Your app may then prompt the user to confirm that they intended this action.
To learn more about implementing voice actions, see the Voice Actions developer site.
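A sketch of requesting a voice confirmation from inside an activity started by a voice action; the prompt text is illustrative:

```java
@Override
protected void onResume() {
    super.onResume();
    if (isVoiceInteraction()) {
        if (!isVoiceInteractionRoot()) {
            // Launched by another voice interaction app rather than
            // directly by the user; consider confirming the action.
        }
        getVoiceInteractor().submitRequest(
                new VoiceInteractor.ConfirmationRequest(
                        new VoiceInteractor.Prompt("Are you sure?"), null) {
            @Override
            public void onConfirmationResult(boolean confirmed, Bundle result) {
                if (confirmed) {
                    // Carry out the requested action.
                }
                finish();
            }
        });
    }
}
```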
This preview offers a new way for users to engage with your apps through an assistant. To use this feature, the user must enable the assistant to use the current context. Once enabled, the user can summon the assistant within any app, by long-pressing on the Home button.
Your app can elect to not share the current context with the assistant by setting the {@link android.view.WindowManager.LayoutParams#FLAG_SECURE} flag. In addition to the standard set of information that the platform passes to the assistant, your app can share additional information by using the new {@link android.app.assist.AssistContent} class.
To provide the assistant with additional context from your app, override the {@link android.app.Activity#onProvideAssistContent(android.app.assist.AssistContent) onProvideAssistContent()} callback in your activity and populate the {@link android.app.assist.AssistContent} object that it receives.
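A sketch of supplying extra context by overriding {@link android.app.Activity#onProvideAssistContent(android.app.assist.AssistContent) onProvideAssistContent()}; the URI and structured data below are illustrative:

```java
@Override
public void onProvideAssistContent(AssistContent assistContent) {
    super.onProvideAssistContent(assistContent);
    // A deep link for the content the user is currently viewing.
    assistContent.setWebUri(Uri.parse("http://example.com/item/42"));
    // An optional schema.org description of the content, as JSON-LD.
    assistContent.setStructuredData(
            "{ \"@type\": \"Article\", \"name\": \"Example article\" }");
}
```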
This preview adds the following API changes for notifications:
This preview provides improved support for user input using a Bluetooth stylus. Users can pair and connect a compatible Bluetooth stylus with their phone or tablet. While connected, position information from the touch screen is fused with pressure and button information from the stylus to provide a greater range of expression than with the touch screen alone. Your app can listen for stylus button presses and perform secondary actions, by registering {@link android.view.View.OnContextClickListener} and {@link android.view.GestureDetector.OnContextClickListener} objects in your activity.
Use the {@link android.view.MotionEvent} methods and constants, such as {@link android.view.MotionEvent#ACTION_BUTTON_PRESS} and {@link android.view.MotionEvent#BUTTON_STYLUS_PRIMARY}, to detect stylus button interactions.
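For example, to react to a press of the primary stylus button over a view (the helper invoked inside the listener is hypothetical):

```java
View view = findViewById(R.id.canvas);
view.setOnContextClickListener(new View.OnContextClickListener() {
    @Override
    public boolean onContextClick(View v) {
        // Fires when the user presses the primary stylus button
        // (or context-clicks with a mouse) while pointing at this view.
        showStylusActionsMenu(v); // hypothetical helper
        return true; // the context click was handled
    }
});
```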
If your app performs Bluetooth Low Energy scans, use the new {@link android.bluetooth.le.ScanSettings.Builder#setCallbackType(int) setCallbackType()} method to specify that you want the system to notify callbacks when it first finds, or sees after a long time, an advertisement packet matching the set {@link android.bluetooth.le.ScanFilter}. This approach to scanning is more power-efficient than what’s provided in the previous platform version.
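A sketch of such a scan; the device-name filter and the empty callback body are illustrative:

```java
BluetoothLeScanner scanner =
        BluetoothAdapter.getDefaultAdapter().getBluetoothLeScanner();

ScanSettings settings = new ScanSettings.Builder()
        // Notify when a matching device is first seen, and again
        // once it has not been seen for some time.
        .setCallbackType(ScanSettings.CALLBACK_TYPE_FIRST_MATCH
                | ScanSettings.CALLBACK_TYPE_MATCH_LOST)
        .build();

ScanFilter filter = new ScanFilter.Builder()
        .setDeviceName("MyBeacon")
        .build();

ScanCallback callback = new ScanCallback() {
    @Override
    public void onScanResult(int callbackType, ScanResult result) {
        // callbackType indicates whether the device was found or lost.
    }
};

scanner.startScan(Collections.singletonList(filter), settings, callback);
```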
This preview adds support for the Hotspot 2.0 Release 1 spec on Nexus 6 and Nexus 9 devices. To provision Hotspot 2.0 credentials in your app, use the new methods of the {@link android.net.wifi.WifiEnterpriseConfig} class, such as {@link android.net.wifi.WifiEnterpriseConfig#setPlmn(java.lang.String) setPlmn()} and {@link android.net.wifi.WifiEnterpriseConfig#setRealm(java.lang.String) setRealm()}. In the {@link android.net.wifi.WifiConfiguration} object, you can set the {@link android.net.wifi.WifiConfiguration#FQDN} and the {@link android.net.wifi.WifiConfiguration#providerFriendlyName} fields. The new {@link android.net.wifi.ScanResult#isPasspointNetwork()} method indicates if a detected network represents a Hotspot 2.0 access point.
The platform now allows apps to request that the display resolution be upgraded to 4K rendering on compatible hardware. To query the current physical resolution, use the new {@link android.view.Display.Mode} APIs. If the UI is drawn at a lower logical resolution and is upscaled to a larger physical resolution, be aware that the physical resolution the {@link android.view.Display.Mode#getPhysicalWidth()} method returns may differ from the logical resolution reported by {@link android.view.Display#getSize(android.graphics.Point) getSize()}.
You can request the system to change the physical resolution in your app as it runs, by setting the {@link android.view.WindowManager.LayoutParams#preferredDisplayModeId} property of your app’s window. This feature is useful if you want to switch to 4K display resolution. While in 4K display mode, the UI continues to be rendered at the original resolution (such as 1080p) and is upscaled to 4K, but {@link android.view.SurfaceView} objects may show content at the native resolution.
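A sketch of selecting the highest-resolution supported mode for the current window:

```java
Display display = getWindowManager().getDefaultDisplay();
Display.Mode[] modes = display.getSupportedModes();

// Pick the mode with the largest physical width (a 4K mode, if present).
Display.Mode best = modes[0];
for (Display.Mode mode : modes) {
    if (mode.getPhysicalWidth() > best.getPhysicalWidth()) {
        best = mode;
    }
}

// Ask the system to switch this window's display to that mode.
WindowManager.LayoutParams params = getWindow().getAttributes();
params.preferredDisplayModeId = best.getModeId();
getWindow().setAttributes(params);
```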
Theme attributes are now supported in {@link android.content.res.ColorStateList} for devices running the M Preview. The {@link android.content.res.Resources#getColorStateList(int) getColorStateList()} and {@link android.content.res.Resources#getColor(int) getColor()} methods have been deprecated. If you are calling these APIs, call the new {@link android.content.Context#getColorStateList(int) getColorStateList()} or {@link android.content.Context#getColor(int) getColor()} methods instead. These methods are also available in the v4 appcompat library via {@link android.support.v4.content.ContextCompat}.
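For example (resource names are illustrative):

```java
// Deprecated in the M Preview:
// int color = getResources().getColor(R.color.accent);

// Theme-aware replacements on M:
int color = context.getColor(R.color.accent);
ColorStateList stateList = context.getColorStateList(R.color.button_text);

// Backward-compatible equivalent via the v4 support library:
int compatColor = ContextCompat.getColor(context, R.color.accent);
```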
This preview adds enhancements to audio processing on Android, including:
This preview adds new capabilities to the video processing APIs, including:
This preview includes the following new APIs for accessing the camera’s flashlight and for camera reprocessing of images:
If a camera device has a flash unit, you can call the {@link android.hardware.camera2.CameraManager#setTorchMode(java.lang.String, boolean) setTorchMode()} method to switch the flash unit’s torch mode on or off without opening the camera device. The app does not have exclusive ownership of the flash unit or the camera device. The torch mode is turned off and becomes unavailable whenever the camera device becomes unavailable, or when other camera resources keeping the torch on become unavailable. Other apps can also call {@link android.hardware.camera2.CameraManager#setTorchMode(java.lang.String, boolean) setTorchMode()} to turn off the torch mode. When the last app that turned on the torch mode is closed, the torch mode is turned off.
You can register a callback to be notified about torch mode status by calling the {@link android.hardware.camera2.CameraManager#registerTorchCallback(android.hardware.camera2.CameraManager.TorchCallback, android.os.Handler) registerTorchCallback()} method. The first time the callback is registered, it is immediately called with the torch mode status of all currently known camera devices with a flash unit. If the torch mode is turned on or off successfully, the {@link android.hardware.camera2.CameraManager.TorchCallback#onTorchModeChanged(java.lang.String, boolean) onTorchModeChanged()} method is invoked.
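A sketch that registers a torch callback and turns on the torch of the first camera with a flash unit; the empty callback bodies are illustrative:

```java
void monitorAndEnableTorch(CameraManager cameraManager)
        throws CameraAccessException {
    cameraManager.registerTorchCallback(new CameraManager.TorchCallback() {
        @Override
        public void onTorchModeChanged(String cameraId, boolean enabled) {
            // Called with the initial state and whenever any app
            // toggles this camera's torch.
        }

        @Override
        public void onTorchModeUnavailable(String cameraId) {
            // The camera device (and its flash unit) is in use.
        }
    }, null /* run callbacks on the main thread */);

    // Turn on the torch of the first camera that has a flash unit.
    for (String id : cameraManager.getCameraIdList()) {
        CameraCharacteristics chars = cameraManager.getCameraCharacteristics(id);
        if (Boolean.TRUE.equals(
                chars.get(CameraCharacteristics.FLASH_INFO_AVAILABLE))) {
            cameraManager.setTorchMode(id, true);
            break;
        }
    }
}
```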
The {@link android.hardware.camera2 Camera2} API is extended to support YUV and private opaque format image reprocessing. To determine if these reprocessing capabilities are available, call {@link android.hardware.camera2.CameraManager#getCameraCharacteristics(java.lang.String) getCameraCharacteristics()} and check for the {@link android.hardware.camera2.CameraCharacteristics#REPROCESS_MAX_CAPTURE_STALL} key. If a device supports reprocessing, you can create a reprocessable camera capture session by calling {@code createReprocessableCaptureSession()}, and create requests for input buffer reprocessing.
Use the {@link android.media.ImageWriter} class to connect the input buffer flow to the camera reprocessing input. To get an empty buffer, call the {@link android.media.ImageWriter#dequeueInputImage() dequeueInputImage()} method, fill the buffer with image data, and queue it back to the writer with {@link android.media.ImageWriter#queueInputImage(android.media.Image) queueInputImage()}.
If you are using a {@link android.media.ImageWriter} object together with an {@link android.graphics.ImageFormat#PRIVATE} image, your app cannot access the image data directly. Instead, pass the {@link android.graphics.ImageFormat#PRIVATE} image directly to the {@link android.media.ImageWriter} by calling the {@link android.media.ImageWriter#queueInputImage(android.media.Image) queueInputImage()} method without any buffer copy.
The {@link android.media.ImageReader} class now supports {@link android.graphics.ImageFormat#PRIVATE} format image streams. This support allows your app to maintain a circular image queue of {@link android.media.ImageReader} output images, select one or more images, and send them to the {@link android.media.ImageWriter} for camera reprocessing.
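Putting these pieces together, one reprocessing pass might look like the following sketch; {@code reprocessableSession}, {@code imageReader}, {@code yuvImageReader}, {@code totalCaptureResult}, and the callback and handler are assumed to exist already:

```java
// Connect an ImageWriter to the reprocessable session's input surface.
ImageWriter imageWriter = ImageWriter.newInstance(
        reprocessableSession.getInputSurface(), 2 /* maxImages */);

// Select an output image from the circular ImageReader queue and queue
// it back as reprocessing input (no buffer copy for PRIVATE images).
Image image = imageReader.acquireLatestImage();
imageWriter.queueInputImage(image);

// Build a reprocess request from the original capture's result and
// submit it to the session.
CaptureRequest.Builder builder =
        cameraDevice.createReprocessCaptureRequest(totalCaptureResult);
builder.addTarget(yuvImageReader.getSurface());
reprocessableSession.capture(builder.build(), captureCallback, handler);
```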
This preview includes the following new APIs for Android for Work:
A Profile or Device Owner can set a permission policy for all runtime requests of all applications using {@link android.app.admin.DevicePolicyManager#setPermissionPolicy(android.content.ComponentName, int) setPermissionPolicy()}, to either prompt the user to grant the permission or automatically grant or deny the permission silently. If the latter policy is set, the user cannot modify the selection made by the Profile or Device Owner within the app’s permissions screen in Settings.
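For example, a Device Owner app could silently grant all runtime permission requests; the admin receiver class name is illustrative:

```java
DevicePolicyManager dpm = (DevicePolicyManager)
        context.getSystemService(Context.DEVICE_POLICY_SERVICE);
ComponentName admin = new ComponentName(context, MyDeviceAdminReceiver.class);

// Grant every runtime permission request automatically,
// without prompting the user.
dpm.setPermissionPolicy(admin, DevicePolicyManager.PERMISSION_POLICY_AUTO_GRANT);
```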
For a detailed view of all API changes in the M Developer Preview, see the API Differences Report.