This post is part of a series:
- Inside SafetyNet part 1 (Oct 2015)
- Inside SafetyNet part 2 (Feb 2016)
- Inside SafetyNet part 3 (Nov 2016)
- How to implement Attestation securely using server-side checks (my blog, Cigital blog)
- SafetyNet Playground (POC server-side implementation) Play Store - Android source - PHP source
What is SafetyNet
The Android Pay application was released a few days ago. Some people using rooted devices discovered that it refused to work. This is because it uses a new Google Play Services feature: SafetyNet attestation.
SafetyNet attestation is Google telling the app its opinion regarding the CTS compatibility status of a device. CTS normally stands for Compatibility Test Suite, a suite of tests a device must pass, prior to release, to be allowed to include Google Play Services. It means something different in the SafetyNet context, closer to 'the device is currently in a non-tampered state'.
Tampered state has multiple definitions and can include ‘being rooted’, ‘being monitored’ or ‘being infected with malware’.
‘CTS compatible’ does not mean vulnerability-free. Google does not check if a device is up to date or vulnerable to public exploits, as part of the SafetyNet service. It checks if it has been tampered compared to an expected normal and safe state.
One can argue that this is what application developers want: the vulnerability status of the device would be useful to end users, but not so much to developers. Requiring vulnerability-free devices would also be unrealistic: if an application refused to run on vulnerable devices, very few apps would work even on the most recent Android devices. SafetyNet is about assuring app developers that the device is 'safe to run on', as opposed to assuring end users that the device is 'secure' - different target group, different goals.
Google obviously didn’t want to use a very loaded term like rooting or tamper detection, so it went with the neutral “CTS compatible”.
Using SafetyNet attestation
SafetyNet Attestation is a newish feature, at least for 3rd party application developers. Any application developer can use it in their app.
The process has a few steps:
- An application calls `SafetyNetApi.attest()`. This is provided by the Google Play Services SDK. The request uses `GoogleApiClient` to reach the Google servers.
- The request must include a nonce. This is very important to prevent replay attacks. Best practice is for a server to generate this nonce and send it to the device to use in the request.
- Google responds with the attestation result. This is in JSON Web Signature (JWS) format - a type of signed JSON object. The response includes the various signatures along with fields such as the nonce and `ctsProfileMatch`.
- The developer needs to verify the fields of the response manually. The signature of the response can also be verified by Google itself using another API call, and this is the best practice.
- Assuming the response is verified, if `ctsProfileMatch` is true, then the developer can have a degree of certainty that the device has not been tampered with (i.e. is CTS compatible).
What is interesting is that the response can also be verified on the developer’s server. An app can grab the JWS attestation response and send it to the application server it normally connects to. That server can then directly ask Google to verify the JWS signature (or do it itself) and proceed to act on the results on the server side, for example deny API access to the client.
This is good design: security decisions happen on the server, not the client. Even if the client is manipulated, the server will refuse to provide services. From what I can tell, in Android Pay, the attestation result is used as a parameter in pretty much every wallet & pay API. Having said that, this doesn't mean that the attestation system can't be fooled - a malicious environment could feed tampered data to the collectors. Moreover, it doesn't mean that the attestation result is always fresh. But better something than nothing.
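To make the recommended server-side flow concrete, here is a rough Python sketch: generate a nonce, later check it against the fields of the returned JWS. Everything here is illustrative, and a real server must first verify the JWS signature (or ask Google's verification API to do so) before trusting any field - this sketch only shows the field checks.

```python
import base64
import json
import secrets


def generate_nonce() -> str:
    # Server-side: at least 16 bytes of cryptographically strong randomness,
    # stored against the user session so it can be checked exactly once.
    return base64.b64encode(secrets.token_bytes(24)).decode("ascii")


def _b64url_decode(segment: str) -> bytes:
    # JWS uses unpadded base64url; restore the padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def check_attestation(jws: str, expected_nonce: str) -> bool:
    # NOTE: signature verification is deliberately omitted from this sketch;
    # without it the payload fields prove nothing.
    header_b64, payload_b64, _signature_b64 = jws.split(".")
    payload = json.loads(_b64url_decode(payload_b64))
    if payload.get("nonce") != expected_nonce:
        return False  # replayed or mismatched response
    return payload.get("ctsProfileMatch") is True
```

The nonce binds a specific attestation response to a specific server-side transaction, which is what defeats replay attacks.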
Developers can find instructions on using this feature here: https://developer.android.com/training/safetynet/index.html
But how does it all work?
SafetyNet System Design
SafetyNet is a data collection system used by Google to gather security-related information from 1 billion Play-enabled Android devices.
The idea is that Google Play Services, a closed-source package on the device, starts an always-running service named `snet`. This service frequently collects various pieces of data from the device and sends it back to Google.
Google uses this information for multiple purposes, such as ecosystem analysis and threat profiling of devices.
It turns out that, based on the collected information, Google is in a position to determine if a device has been tampered with in a multitude of ways. Google maintains this information and knows at any point in time if a specific device is in a suspicious state.
Attestation is how this information is exposed to 3rd party developers. When an application performs an attestation request, Google sends back a signed response that includes its decision about “CTS compatibility”, based on analysis of information previously collected from the device.
The actual analysis of the collected data is done server-side, leaving less room for manipulation; again good security design.
Of course, understanding which pieces of data are collected could mean that someone could eventually develop a hooking system that constantly feeds `snet` 'non-malicious' information.
However, this is not trivial:
- The mechanism used to update `snet` is very flexible, as discussed below.
- Google does not disclose how exactly it determines "CTS compatibility" based on the collected data. For much of this data it is not obvious what would constitute 'safe' and what would not. For example, if Google collects a list of all the paths of files in the filesystem, an attacker would have to figure out what to hide by trial and error. Even though they could make educated guesses, they wouldn't know what exactly Google is looking for.
When a 3rd party application wants to do an attestation request, it calls `com.google.android.gms.safetynet.SafetyNetApi;->attest(mGoogleApiClient, nonce)`, the attest method of the Play Services SDK included in the app.
This library transacts with the `com.google.android.gms.safetynet.internal.ISafetyNetService` service running on the device through Binder.
SafetyNetService is one of the Google Play Services. The service handling code is packaged in the Google Play Services package that ships with Google-endorsed Android devices and is updated through the Play Store.
However, digging a bit deeper reveals a very interesting trick:
The actual implementation of `snet` is not inside any APK. The SafetyNet service reaches out to a Google server and downloads a binary package with the code. It goes to great lengths to validate the integrity of the package, for example using hardcoded certificates (pinning). This binary package is essentially a JAR file that contains a `classes.dex` file with Java bytecode. Play Services caches it (as `snet.dex`) and loads it dynamically using reflection.
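The on-device mechanism uses certificate pinning and a dex class loader; as a language-neutral illustration of the underlying "verify integrity, then dynamically load" pattern, here is a Python analog. The file name, pinned hash, and module name are all hypothetical - this is not how Play Services is implemented, only a sketch of the technique.

```python
import hashlib
import importlib.util
import sys


def load_verified_module(path: str, expected_sha256: str, name: str = "snet_like"):
    # Refuse to load downloaded code unless its hash matches a pinned value,
    # mirroring the idea of validating the snet package before executing it.
    with open(path, "rb") as f:
        blob = f.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected_sha256:
        raise ValueError("integrity check failed: %s != %s" % (digest, expected_sha256))
    # Only after the check passes is the code loaded dynamically.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module
```

The key design point is the same as snet's: the downloaded payload is untrusted bytes until its integrity is proven against something baked into the loader.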
This is very convenient for Google: The actual implementation of the collection methods can be very easily updated, even without pushing apps through Google Play.
Here are two versions of the package: https://www.gstatic.com/android/snet/12042014-1626247.snet https://www.gstatic.com/android/snet/07172015-2097462.snet
These files are not obfuscated in any way (not even using ProGuard) - although Google Play packages are. After talking to members of the Android Security Team, it appears this is done on purpose: they want an implementation that can be easily reviewed. My guess is that they want to make sure that people know they are not collecting sensitive/privacy-related information. Obfuscation could cast doubts.
As you can see from the package dates, parts of this system are not new at all - SafetyNet exists since at least December 2014 but it’s been considerably enhanced in recent versions.
This JAR file holds the implementation of the `com.google.android.snet.Snet` class. The `enterSnet` method is where the fun begins - this is what Play Services calls through reflection.
Google downloads security-related code on more occasions. For example, Android devices also download a native shared library named `droidguard` and run it, but let's leave that for another post.
The system is very modular: `snet` can be started by Play Services using a configuration file that defines which collection modules will be used. Not all of them are enabled by default.
Let’s see what each of these modules does in more detail:
This creates a list of the preferred packages for certain actions and reports back which packages are used for web browsing and package installations. It specifically checks the identity of the preferred web browser.
I can assume this is done to detect situations where a user has authorised a non-standard browser that could be malware - Google could maintain a list on their backend.
Reports back whether files such as `/system/xbin/su` exist. If they do, it is a clear indication of tampering.
I do hope that the attestation result is not solely based on this check - although there’s evidence it plays a major role. On a non-infected, just rooted device, moving these files elsewhere seems to result in a positive attestation result. The same result is achieved via actions like “Disable SuperSU”. Maybe Google is being extra cautious.
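Mechanically, this check amounts to probing a list of well-known paths. A trivial sketch in Python - note that only `/system/xbin/su` is named in the text above; the second path is my assumption about what a similar check might also cover:

```python
import os

SU_PATHS = [
    "/system/xbin/su",  # the path the module is described as checking
    "/system/bin/su",   # additional common location (assumption)
]


def su_binary_present(paths=SU_PATHS) -> bool:
    # Existence of any su binary is treated as a clear tampering signal.
    return any(os.path.exists(p) for p in paths)
```

Which also illustrates why the bypass described above works: move the file, and a pure existence check comes back clean.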
Collects various security-related fields from `android.provider.Settings$Global`, depending on the OS version. The settings collected include the values of a number of such variables.
Obviously all these are indications that something might be ‘interesting’ about the device.
Reports back the current locale configuration of the device. I assume this is done so that they can risk profile users according to locality and adjust their thresholds as needed.
This is an interesting module. It tries to establish if the device correctly follows SSL redirects. It collects information like the type of the active connection, the DNS servers in use, the available connections.
It then creates requests to hosts such as http://pubads.g.doubleclick.net, using a random user agent (even mimicking an iPhone) and with 'follow redirects' disabled. These hosts redirect to HTTPS versions of the sites, and the module collects the Location HTTP header from the redirect response.
It then does the same request again, this time following the redirects, and reports back the IP and hostname of the final endpoint host. This second request is even done using a randomly chosen HTTP client implementation, such as the Apache `HttpClient` classes.
Some people asked for more information about the doubleclick.net domain. This is a domain used to serve advertisements to applications, and the service is owned by Google. I can only assume this is also an attempt to detect if an ad-blocker is installed.
This is another very interesting module. It attempts to figure out if communications can be intercepted in a number of ways, for example by an installed SSL-Kill-Switch app.
The code attempts to contact three hosts.
For each host the following algorithm is followed, and the results of every step are captured along with any errors.
- The module attempts an SSL socket connection using an 'accept-all' `TrustManager`
- The peer certificates are retrieved
- The code finds all TrustManagers of the system
- Each found `TrustManager` is initialized with no trust anchors and the `checkServerTrusted()` method is executed on the retrieved certificate chain. This would normally throw exceptions, but under most SSL Kill Switch implementations it will not. The code verifies whether exceptions are thrown (great check)
- `DefaultHostnameVerifier` is used to verify the hostname of the connection
- Then the module manually validates the certificate chain, also checking if any certificates use the `MD5withRSA` algorithm or public keys shorter than 2048 bits
- For each received certificate in the chain, the module checks if the issuer exists in the CA store of the system (`/system/etc/security/cacerts`) or if it has been added by the user
- The module also includes a hardcoded, pinned intermediate certificate for Google and checks if it matches one of the received chain certificates
- Finally the Enhanced Key Usage Object Identifiers of the leaf certificate are also retrieved and compared with a hardcoded list (!)
After all these checks, all the information - whether the connections succeeded, what the received certificates were, whether chain validation and trust checks passed - is sent back to Google.
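One step above - comparing a hardcoded pinned certificate against the received chain - can be sketched compactly. The pinned fingerprint here is a made-up placeholder, not Google's real intermediate, and real code would pin against DER bytes obtained from the TLS handshake:

```python
import hashlib

# Hypothetical pinned SHA-256 fingerprint of a known intermediate certificate.
PINNED_SHA256 = hashlib.sha256(b"google-intermediate-cert-der").hexdigest()


def chain_contains_pin(chain_der_certs, pinned=PINNED_SHA256) -> bool:
    # Compare the fingerprint of every certificate in the received chain
    # against the pinned value; a single match is enough.
    return any(hashlib.sha256(der).hexdigest() == pinned for der in chain_der_certs)
```

If an interception proxy re-signs traffic, the chain it presents cannot contain the pinned certificate, so this comparison fails even when the fake chain otherwise validates.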
For each DNS server set up on the device, the `gmail.com` hostname is resolved and the MX servers are collected, along with the full DNS response.
I assume this exists in order to detect if the device is configured in a way that lets attackers intercept emails.
Connects to https://www.google.com using an SSL socket initiated with `TLS`, making use of the `SNI` extension and `SessionTicket` features. It gathers whether the socket connected and whether hostname verification passes. If the connection doesn't go through, it is retried using `SSLv3` instead of the modern TLS features mentioned above. The module collects all resulting artifacts and error messages.
This module is obviously designed to detect network related attacks like SSL downgrade.
Collects whether there are proxies configured on the device, what their IP addresses are, and whether these IPs are local to the device. This tries to establish if there is traffic-snooping malware on the device (some malware - and ad-blockers - work using proxies) or if communication is sent to external known-bad locations.
Collects whether SELinux support is available on the device (if `/sys/fs/selinux/enforce` is present) and whether it is in enforcing mode (by reading the contents of that file).
If SELinux is in non-enforcing mode on newer Android versions, it is a clear indication that something fishy is going on.
Attempts to understand if the SD card has been tampered with. The module creates a JPEG file named `gmsnet2.jpg` on the SD card and fills it with some hard-coded content. It then checks whether the length of the file matches a hard-coded value and verifies, byte by byte, that the written content matches what was sent.
I am not sure I understand why this check exists and under what security-related conditions it would fail.
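Mechanically, though, the check is simple write-then-verify. The hard-coded content snet uses is not reproduced here, so the test pattern below is invented for illustration:

```python
import os

# Invented stand-in for the hard-coded content (a JPEG magic prefix plus filler).
MAGIC = b"\xff\xd8\xff" + b"snet-test-pattern" * 8


def sd_card_intact(path: str) -> bool:
    # Write known bytes, then verify both the reported length and the
    # content byte-for-byte, as described for gmsnet2.jpg above.
    with open(path, "wb") as f:
        f.write(MAGIC)
    if os.path.getsize(path) != len(MAGIC):
        return False
    with open(path, "rb") as f:
        return f.read() == MAGIC
```

A failure would indicate that something between the writer and the storage (a hooked filesystem layer, perhaps) is rewriting data.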
Tries to determine if the length of the HTML code of www.google.com exceeds some preconfigured threshold.
Collects the IP address of `clients3.google.com`. It then uses a random user agent to connect to http://clients3.google.com/generate_204 and captures the response's return code and body length.
This tries to establish if the device uses a captive portal that redirects responses. Captive portals can be used as a way to perform MitM attacks.
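The decision logic for this check can be sketched as a small predicate; how snet actually scores the result is not documented, so this is only my reading of it:

```python
def captive_portal_suspected(status_code: int, body: bytes) -> bool:
    # generate_204 must return HTTP 204 with an empty body; anything else
    # suggests a captive portal or another middlebox rewrote the response.
    return status_code != 204 or len(body) != 0
```

This is the same trick Android itself uses for its connectivity check: a known-empty endpoint makes any injected content immediately visible.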
Runs the `logcat -d` command and looks for a configurable set of 'interesting strings'.
Google could just upload all logs to their servers and search in them there, but I am guessing it took this approach so that it avoids accidentally leaking user-private information. The set of ‘interesting strings’ is empty by default, so I can’t tell what they are looking for.
Similar to the logcat check, but for `EventLog` events. It skips events with a `do-not-log-` tag prefix.
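A minimal sketch of such a filtered log scanner, assuming log entries arrive as (tag, message) pairs and that the 'interesting strings' behave like regular expressions (the exact matching semantics are not confirmed by the analysis above):

```python
import re


def scan_logs(lines, patterns, skip_tag_prefix="do-not-log-"):
    # Return only entries matching a configured 'interesting' pattern,
    # skipping any entry whose tag carries the do-not-log- prefix.
    compiled = [re.compile(p) for p in patterns]
    hits = []
    for tag, message in lines:
        if tag.startswith(skip_tag_prefix):
            continue
        if any(c.search(message) for c in compiled):
            hits.append((tag, message))
    return hits
```

With an empty pattern set - the default described above - nothing matches, which is consistent with the privacy-conscious design discussed later.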
First checks if a Google Account exists on the device. Then, depending on configuration switches provided by Google Play, it can gather and report information about all non-system apps or all system apps. This module is not enabled by default. During application tests we flag such behavior as a privacy issue, but I guess Google already knows which apps you have installed, since you most likely used Google Play to install them, so it is probably not much of a leak. I can understand why Google would be interested to see what other apps are installed on the device.
Downloads the http://www.google.com page using a random user agent and stores the received HTML code.
This is the same as the `google_page` test, but it first uses the results of the `ssl_handshake` and `ssl_redirect` tests to try to determine whether connections to www.google.com are secure. If they are not, no results are saved.
This sends the retrieved HTML code back to Google. I guess they try to understand if some malware is locally intercepting traffic and injecting things in the HTML.
Collects information about the `com.google.android.gms` application on the device. This is the Google Mobile Services application, also known as Play Services. Information collected includes the full path to the APK and the package signatures.
Here Google tries to see if something (e.g. malware) has tried to statically tamper with play services.
Does an attestation request and collects the result.
It would be interesting to know how much past attestation results are considered in the calculation of newer responses.
Retrieves the active device administrators of the device, if any, and checks them against the contents of a file named `device_admin_blacklist.txt`. This file is included in the downloaded `snet.jar` file and contains more than 70 entries like the following:
```
com.android.vendlng,66fd5cb0,f84d0d9f
org.helpzek.minecraft,1cf1e47b
google.android.xmppm,e39125e5,40c7d1e7
org.zl.zlorg,d2a16633,38d1153c
com.devguru.minecraft,9746f466
com.adobe.flplayer,1c0ec8e1,98837829,08dadccb,b094083a,81565420,76a05ce0,e5f7325e
com.android.pfx12,dc61b3d0
com.cryengine.gplay,61996cf8
com.wikihelp.minecraftwikiversion,2379f1d0
com.xsys.cfgs,749a7f99,3c0a9c89
com.cheatgroup.app,d2ef0ea4
```
This is a HashMap where keys are package names and values are prefixes of the SHA-256 hash of a full path name.
If any of the blacklisted apps, at the matching APK path, is found to be an active device admin, then `removeActiveAdmin()` is called to remove the admin and the package is stopped using `forceStopPackage()`. The application info for device admin packages is collected.
As is evident, this goes beyond collection and actually attempts to protect the device against dangerous applications using a blacklisting approach.
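The matching logic implied by the file format could be sketched as follows. Note the assumptions: that the hash input is the full APK path string, and that prefixes may have varying lengths per entry (as the sample data suggests); both are inferred from the description, not confirmed.

```python
import hashlib


def parse_blacklist(text):
    # Each line: package_name,prefix1,prefix2,...  where each prefix is
    # the start of the sha256 hash of a full APK path.
    table = {}
    for line in text.strip().splitlines():
        package, *prefixes = line.strip().split(",")
        table[package] = prefixes
    return table


def is_blacklisted(package, apk_path, table):
    # A package is flagged only when both the name matches and the path
    # hash starts with one of the listed prefixes.
    digest = hashlib.sha256(apk_path.encode()).hexdigest()
    return any(digest.startswith(p) for p in table.get(package, []))
```

Hashing the path rather than shipping it in the clear keeps the exact install locations out of the blacklist file while still allowing a cheap check.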
Collects various information about a configurable list of files of interest under `/system`. Information includes all permissions, names of files, symlink targets, SELinux security contexts, hashes, etc. Currently the list of files of interest seems empty.
It is interesting that the framework makes use of red herrings in order to confuse attackers: along with the files of interest, a configurable number of random files is accessed in the same way.
Great stuff - red herring approaches are commonly used in advanced app protection products. However, this is not enabled by default.
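The red-herring access pattern could look something like this; the collected attributes and decoy count here are illustrative, not taken from the snet code:

```python
import os
import random
import stat


def collect_file_info(files_of_interest, candidate_decoys, decoy_count=3):
    # Stat the real targets plus a few random decoys, in shuffled order,
    # so an observer hooking filesystem access cannot tell which reads matter.
    targets = list(files_of_interest) + random.sample(
        candidate_decoys, min(decoy_count, len(candidate_decoys))
    )
    random.shuffle(targets)
    info = {}
    for path in targets:
        try:
            st = os.lstat(path)
            info[path] = {"mode": stat.filemode(st.st_mode), "size": st.st_size}
        except OSError:
            info[path] = None  # missing files are reported too
    return info
```

An attacker watching the accesses sees a mixed, randomized set each run, which makes trial-and-error discovery of the real targets considerably slower.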
The module contains a configurable list of SHA-256 hashes of certificates. This check looks at the contents of `/system/etc/security/cacerts.bks` (pre-ICS), trying to find those certificates, and returns them if found - either the full certs or just the subject name, issuer name and signature information, again configurable.
This attempts to understand if traffic is being intercepted by looking for blacklisted certificates. There is malware that drops malicious certificates in the cert store, allowing SSL traffic to be intercepted at the network layer.
This check, as you would expect, collects information about setuid files present in the filesystem. It uses `libcore.io.Libcore` and related classes to retrieve the information.
The presence of setuid executables is an obvious red flag in newer Android versions.
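A sketch of such a setuid scan - on-device snet goes through `libcore.io.Libcore`, while this Python illustration uses plain `os.lstat` to the same effect:

```python
import os
import stat


def find_setuid_files(root):
    # Walk the tree and report every regular file with the setuid bit set.
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable entries are skipped, not fatal
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                hits.append(path)
    return hits
```

On a stock modern Android system this list should be essentially empty; a `su` binary is exactly the kind of setuid file such a scan surfaces.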
#### Privacy concerns

Several people commented that privacy concerns should be addressed separately in this blog post. Here are a few points:
- the system seems to be designed so that collection is transparent: the data collection code is not obfuscated on purpose, so that it can be reviewed, as done in this analysis
- the system also seems to be designed so that it does not accidentally collect private information. For example, the log and event collectors only collect certain events matching regexes instead of the whole logs. Of course, we cannot know what exactly is collected without the regexes themselves.
- no user-sensitive identifiers are collected, like IMEI, IMSI and others
- several collectors are disabled by default. I imagine that Google can enable them in specific regions only, or after it first detects signs of tampering, in order to gather additional information.
- most of the collected information does not require system privileges or many permissions to be collected. Some of it can be collected by normal apps. I have seen advertisement libraries embedded in hundreds of apps that collect way more private information.
That said, I understand how some people can find SafetyNet intrusive - each person has different views of what constitutes private/sensitive information.
It must also be said that an opt-out system doesn’t seem to exist. That would have interesting implications once 3rd party apps begin to adopt SafetyNet attestation.
- This is a well designed system as a security control. This is because it moves an attacker’s effort from bypassing client side controls to trying to figure out what data to feed to the collectors in order for Google’s decision to return true. This is an arms race where attackers have restricted visibility compared to traditional anti-rooting controls applications currently use. Of course, no matter how well designed, given enough effort, root-privileged attackers will always be able to bypass such systems.
- SafetyNet does not seem to leak sensitive/private user information like IMEI/IMSI - at least not with the current default configuration.
- If developers use SafetyNet attestation, they should be aware that:
- it is not root detection. No root detection system can offer any guarantees.
- they should try to adopt the best practices in designing their app for SafetyNet: Use a flow where the nonce is generated at your server, retrieved by your app and the attestation response is sent back to your server for verification and action.
- Use the result to feed it into your fraud detection system, or deny API access on the server, not the client.
- SafetyNet will not work on devices lacking Google Play, such as various custom and 3rd party ROMs. Developers must design around this if they want to support non-Play devices. Even if users side-load Play services on non-Google ROMs, SafetyNet may detect the devices as not compatible (sticking to the original CTS compatibility definition).
- This is not a vulnerability detection system.
- A vulnerability detection system tries to establish if a device is “vulnerable” (as in unpatched)
- SafetyNet tries to establish if a device “is currently in a tampered state” (as in infested with malware or being MitM)
- Even though security professionals tend to think that ‘user-security == vulnerability free’, these two cases are very different for application developers. Developers need to have some assurance that a device is not in a compromised state while their application runs, even though the whole system might be vulnerable to exploits. That’s where SafetyNet can help them. This is the exact same reason some applications include rooting and tamper detection code, even though most know these systems can be bypassed given enough effort.
- There are some areas around SafetyNet I hope to find time to explore in more depth. One example is how often collections take place: SafetyNet is also closely related to the Verify Apps feature and there are some claims that scans run weekly. Don't forget that attestation is only Google telling the app their opinion on CTS compatibility; the application doesn't know how long ago that opinion was formed.