Certificate Pinning for mobile apps - OWASP AppSecEU16 slides

As you might have guessed from previous posts on the topic, I've been researching certificate pinning implementations in mobile apps for the last couple of years.

Two months ago I presented a talk on certificate pinning at the OWASP AppSecEU16 conference in Rome, Italy. The conference was pretty fun, and I met many interesting people.

So, here are my slides: https://goo.gl/SNuQHN

Here's the official abstract:

Pinning Certificates (“Cert Pinning”) trends perennially, coming to the fore with each new SSL hack. Security urges developers to pin certs and many mobile apps do — some applying pinning to problems it doesn’t solve while others do so entirely unnecessarily. What risks does pinning really reduce? What should a developer consider prior to deciding to pin certs? Are there tradeoffs? Once decided, how should they do it?

Taking a perspective useful to both developers and penetration testers, this presentation covers these tradeoffs; from how organizational maturity impacts viability, to the risk reduction offered by the choices developers make about which elements of the certificate and chain to validate. The presentation will quickly recap the basics of certificates, their chains, and SSL validation.

Expect to leave understanding common misconceptions and key subtleties of pinning that may in fact /decrease/ security or impose undue complexity. Expect to understand common developer mistakes in pinning, for example in mobile WebViews. By the end of the presentation attendees will understand organizational and operational complexities, relevant design, and implementation-level detail.

A long paper on the same content is in the works; I'll publish more on the topic soon. I also promised Jim Manico that I'd work on OWASP's cert pinning wiki pages... now I've got to find time to do that.

Testing for CVE-2016-2402 and similar pinning issues

Two weeks ago I published details of an attack method that can be used to bypass various implementations of certificate pinning in Android and, more generally, Java applications.

Several applications and frameworks are still vulnerable to the attack, among them every Java or Android application using a version of the popular OkHttp networking library older than 3.1.2 or 2.7.4. The OkHttp issue is tracked as CVE-2016-2402.

Brief overview

Certificate pinning is a control used to mitigate Man-In-The-Middle attacks by privileged attackers. These attackers are assumed to have access to the private key used to sign a certificate that is trusted by the system hosting the application under attack.

It turns out that certificate pinning, if implemented using certain Java APIs such as checkServerTrusted() without taking extra steps, can easily be bypassed by the very attackers the control is supposed to protect against. A vulnerable implementation renders pinning completely ineffective as a control: it simply doesn't do what it sets out to do.

The root cause of the flaw is that, for pinning to work correctly, developers must not check pins against the list of certificates sent by the server. Instead, pins should be checked against the new, 'clean' chain that is constructed during SSL validation.
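To make this concrete, here is a minimal sketch of a pinning check done against the validated chain on Android (assuming API 17+ for X509TrustManagerExtensions; the class and the pin handling are illustrative, not code from any real app):

import android.net.http.X509TrustManagerExtensions;
import android.util.Base64;
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.List;
import java.util.Set;
import javax.net.ssl.X509TrustManager;

final class CleanChainPinner {
    private final X509TrustManagerExtensions tmExt;
    private final Set<String> pins; // base64(SHA-256(SPKI)) values we accept

    CleanChainPinner(X509TrustManager systemTm, Set<String> pins) {
        this.tmExt = new X509TrustManagerExtensions(systemTm);
        this.pins = pins;
    }

    void check(X509Certificate[] serverChain, String authType, String host)
            throws Exception {
        // Vulnerable pattern (don't do this): searching for pinned certs in
        // serverChain itself. The attacker controls that array and can simply
        // append the real server's certificates to it.

        // Correct pattern: let the platform validate first. It returns the
        // clean chain it actually trusted, which the attacker cannot pad.
        List<X509Certificate> cleanChain =
                tmExt.checkServerTrusted(serverChain, authType, host);
        for (X509Certificate cert : cleanChain) {
            byte[] spki = cert.getPublicKey().getEncoded();
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(spki);
            if (pins.contains(Base64.encodeToString(digest, Base64.NO_WRAP))) {
                return; // a pinned key is part of the validated chain
            }
        }
        throw new CertificateException("No pinned key in validated chain for " + host);
    }
}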

Attack prerequisites

There has been some confusion over the prerequisites for this attack. They are:

  1. The attacker must be able to intercept network traffic.
  2. The attacker must have the private key of a certificate trusted by the host system/device.

Under other circumstances, the second prerequisite would be a 'huge' one. That's not the case here: certificate pinning, as a control, only makes sense against attackers that already have such access, and applications that implement pinning want to actively protect against exactly those attackers. The described attack re-enables this attack vector by bypassing the pinning control.

Local or remote?

Some confusion may also exist over whether this is a 'local' or 'remote' attack.
The answer is that it can be both. It boils down to the method used by the attacker to obtain the private key of a trusted certificate. Some of these methods are mentioned in the original post:

  • Compromising a CA or intermediate CA
  • Direct or indirect access to a CA
    • A malicious CA employee
    • Phishing a CA employee
    • Leveraging mistakes in the certificate request process
    • Configuration mistake on the CA
    • Nation-state attackers
  • Phishing a user into installing a malicious certificate in their system’s trusted store

If an attacker uses a compromised trusted CA key, the attack can be remote. If an attacker needs to install, or somehow trick the user into installing, their own malicious CA certificate, then the attack is local and requires the attacker to 'phish' the user.

How to test

Say Alice is the client, Bob is the server and Mallory is the intercepting host.

The general idea is:

  1. Alice, while attempting to connect to Bob, is somehow redirected to Mallory.
  2. Mallory knows that Alice wanted to connect to Bob, so, before fully establishing an SSL connection with Alice, he first contacts Bob and grabs all his SSL certificates.
  3. Mallory then uses the private key of a CA that Alice trusts to sign a certificate masquerading as belonging to Bob.
  4. Mallory also appends all of Bob's real certificates to the list of certificates he sends to Alice.
  5. Alice completes the SSL handshake with Mallory instead of Bob. The handshake is verified successfully because Mallory's certificate is signed by a CA that Alice trusts.
  6. Alice now goes through certificate pinning checks using a vulnerable implementation that looks for Bob's exact certificates in the received chain. The checks pass because... they are there.

[Figure: sequence diagram of the above steps.]

If the above scenario is possible, then an application that claims to perform pinning is vulnerable.

Testing for this can be tricky because no ready-made tools exist, so I created a few.

POC server

While submitting bug reports to OkHttp and several other applications, I used a simple Python script that acts as a malicious HTTPS web server masquerading as a specified trusted server.
You can find it on my GitHub.
Start the server with a command-line parameter indicating which trusted server (domain name) to masquerade as. Any client connecting to it will receive a certificate chain that looks like the following:

[0] malicious server end-entity cert signed by CA A
[1] real server end-entity cert signed by CA B
[2] real server intermediate CA B signed by CA C

CA A and CA C are trusted by the system.

Of course, for PoC purposes we have to manually create a 'CA A' certificate and insert it into the host system's trusted store. A real attacker may not have to do this step.

Using this for testing is simple:

  • Start the POC server masquerading as a domain name the application is going to connect to
  • Import the POC server's CA into the app's host system
  • Redirect traffic from the application to the server. The easy way is to change /etc/hosts so that requests for a particular domain end up at the malicious server. DNS spoofing or a variety of other methods can also be used.
  • Once you start the application, if the attack worked you should see GET or POST requests hitting the malicious web server, and you should be able to read their contents. The POC server is not a proxy: there will be no responses back or any means for the application to make use of the connection; it exists only to prove the attack works. If the pinning implementation were not vulnerable, you wouldn't see a plaintext GET or POST request in your malicious server's logs, because the client should refuse to establish the SSL channel with it.

mitmproxy

An easier way to demonstrate the attack is to use a new feature I added to the popular mitmproxy intercepting proxy, which was merged a few days ago.

To use it, for now, you'll have to check out the master branch (zip). The feature will be part of v0.17, once that is tagged and released.

Once you have the code, all you have to do is:

  1. Insert mitmproxy's CA certificate in the tested system (unless, like a real attacker, you have the key of an already-trusted CA)
  2. Configure device networking to pass through mitmproxy (e.g. proxy settings on the device, or invisible proxying via a VPN, gateway etc.)
  3. Start mitmproxy or mitmdump using the new --add-upstream-certs-to-client-chain command-line switch.

While operating in this mode, mitmproxy will automatically add all certificates of the upstream server to the certificate list that is served to the client.

If connections using normal proxying fail (due to pinning) but work in this new mode, then you can easily conclude that the pinning implementation exhibits this flaw.

Sample vulnerable app

I created a sample vulnerable Java app for demo purposes; find it on my GitHub. It uses OkHttp 3.0.1 (which is vulnerable to CVE-2016-2402) to connect to github.com and retrieve a file.
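For reference, the demo app's pinning setup looks roughly like this (a sketch, not the app's exact source; the pin value is a placeholder, not github.com's real SPKI hash):

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public final class PinnedFetch {
    public static void main(String[] args) throws Exception {
        // Pin github.com to a single SPKI hash (placeholder value below).
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("github.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .build();
        OkHttpClient client = new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
        Request request = new Request.Builder()
                .url("https://github.com/")
                .build();
        try (Response response = client.newCall(request).execute()) {
            // In OkHttp 3.0.1 the pin check runs against the chain the server
            // sent, which is what makes CVE-2016-2402 possible.
            System.out.println(response.code());
        }
    }
}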

Since pinning is used, if you attempt to proxy this app or redirect its connections, you shouldn't be able to see its traffic in plaintext: the application will refuse to connect to the proxy, even if the proxy certificate is trusted by the system.

However, because OkHttp 3.0.1 is vulnerable to CVE-2016-2402, you should be able to intercept the traffic if you redirect to the POC malicious server or if you use mitmproxy with the new --add-upstream-certs-to-client-chain option.

Happy testing.

Inside SafetyNet - part 2

It's been six months since my last blog post on Android's SafetyNet. I was then examining a mid-July 2015 version of the system. As expected, there have been updates since then; the last was released mid-December 2015. I'll briefly describe the differences in this post; for a more complete overview of the checks inside the SafetyNet system and its usage please read through my previous posts.

SafetyNet changes

A few new but important modules have been added in recent versions, and some older ones have been restructured.

Dalvik Cache module

This module attempts to find modified dalvik cache files. As is known, dex code inside an APK gets optimized during installation and is kept in a separate folder in "odex" files [on old Android versions that still use Dalvik]. Malicious actors could modify these optimized files directly, instead of modifying APKs, in order to evade detection.
The module monitors /data/dalvik-cache/arm or /data/dalvik-cache and maintains the results, comparing the hashes of odexed files against their stored versions.

LOG DEVICE STATE module

This module retrieves a few system properties from android.os.SystemProperties and sends them back:

  • ro.boot.verifiedbootstate
  • ro.boot.veritymode
  • ro.build.version.security_patch
  • ro.oem_unlock_supported
  • ro.boot.flash.locked
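A rough sketch of what such a module does, assuming the usual reflective access to the hidden android.os.SystemProperties API (the class below is illustrative, not SafetyNet's code):

import java.lang.reflect.Method;

public final class DeviceStateProbe {
    private static final String[] PROPS = {
            "ro.boot.verifiedbootstate",
            "ro.boot.veritymode",
            "ro.build.version.security_patch",
            "ro.oem_unlock_supported",
            "ro.boot.flash.locked",
    };

    public static void main(String[] args) throws Exception {
        // android.os.SystemProperties is a hidden API, so it is usually
        // reached via reflection. This only works on an Android runtime.
        Class<?> sp = Class.forName("android.os.SystemProperties");
        Method get = sp.getMethod("get", String.class, String.class);
        for (String p : PROPS) {
            System.out.println(p + " = " + get.invoke(null, p, ""));
        }
    }
}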

LOG SYSTEM PARTITION FILES module

This module has been previously discussed. A new submodule has now been added, named SystemIntegrityChecker (SIC). It attempts to remotely verify the state of the /system partition; an interesting concept in many respects.

SIC retrieves the SHA256 hash of the /system entry from SafetyNet's data store. It then performs an HTTPS request to a SIC server containing the hash and some meta-information about the directory. The response will contain a hashMatches integer flag. SafetyNet will use this flag and report it through the appropriate SafetyNet APIs.

As far as I can tell the SIC system is not yet in use. I am not sure why a request to a separate SIC server needs to happen; the only reasonable explanation seems to be that entities other than Google might need to maintain their own SIC servers, e.g. device manufacturers. Still, the whole process could possibly happen through backend APIs instead.
In any case, someone is going to have to maintain a list of hashes of the system partitions of various devices/configurations or the "last seen hash" for each user, so that changes are detected.
We'll know soon enough, I guess.

How is the /system hash created?

SafetyNet runs a process that recursively walks "/system" and calculates a hash tree over its contents.
For every file it encounters, it captures meta-information (timestamps, permissions, SELinux context etc.) and its SHA256 hash into a local data store. For every directory, it generates a hash that considers the store entry of every file inside the directory. If there are hash mismatches between previous and current recursive walks over /system, the offending files are added to separate lists to be audited.
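A minimal sketch of such a Merkle-style directory hash (file metadata is omitted for brevity; SafetyNet's exact store format is not public):

import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class SystemHashTree {

    // A file's hash: SHA-256 over its contents.
    static byte[] hashFile(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) md.update(buf, 0, n);
        }
        return md.digest();
    }

    // A directory's hash: SHA-256 over its sorted (name, child hash) entries,
    // so a change anywhere below it changes the directory's hash too.
    static byte[] hashTree(Path path) throws Exception {
        if (Files.isRegularFile(path)) return hashFile(path);
        if (!Files.isDirectory(path)) return new byte[32]; // special files, broken links
        List<Path> children = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(path)) {
            for (Path c : ds) children.add(c);
        }
        Collections.sort(children);
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (Path c : children) {
            md.update(c.getFileName().toString().getBytes(StandardCharsets.UTF_8));
            md.update(hashTree(c));
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] root = hashTree(Paths.get(args.length > 0 ? args[0] : "/system"));
        StringBuilder hex = new StringBuilder();
        for (byte b : root) hex.append(String.format("%02x", b));
        System.out.println(hex);
    }
}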

The LOG SYSTEM PARTITION FILES module continues to include the results of the SystemPartitionFileFinder sub-module.
As a reminder, this module retrieves the status of various files in /system. The list of "files of interest" is configured over the air. Currently, the following files are checked, along with 5 random files:

      /system/app/providerdown.apk,
      /system/priv-app/cameraupdate.apk, 
      /system/app/cameraupdate.apk,
      /system/priv-app/ThemeManags.apk,
      /system/app/HTMLViewer.apk,
      /system/app/com.android.hardware.ext0.apk,
      /system/app/com.android.wp.net.log.apk,
      /system/app/com.google.fk.json.slo.apk,
      /system/app/com.google.model.mi.apk,
      /system/app/SettingProvider.apk,
      /system/app/SecurityCertificate.apk,
      /system/app/LiveWallpaper.apk,
      /system/app/BatteryControl.apk,
      /system/app/Models.apk,
      /system/bin/.daemon,
      /system/bin/.daemon/mis,
      /system/bin/.daemon/nis,
      /system/bin/daemonnis,
      /system/bin/nis,
      /system/bin/.sr/nis,
      /system/bin/.sr,
      /system/bin/.memnut,
      /system/bin/.suv,
      /system/bin/.sc/mis,
      /system/bin/uis,
      /system/usr/.suv,
      /system/xbin/.memnut,
      /system/xbin/.suv,
      /system/xbin/ku.sud,
      /system/xbin/.rt_daemon,
      /system/xbin/.rt_bridge,
      /system/xbin/.monkey.base,
      /system/xbin/.ext.base,
      /system/xbin/.like.base,
      /system/xbin/.look.base,
      /system/xbin/.must.base,
      /system/xbin/.team.base,
      /system/xbin/.type.base,
      /system/xbin/.view.base,
      /system/xbin/.word.base,
      /system/xbin/.zip.base,
      /system/xbin/.bat.base,
      /system/xbin/com.android.wp.net.log,
      /system/xbin/.b,
      /system/xbin/.df,
      /system/xbin/.c,
      /system/xbin/.sys.apk,
      /system/xbin/.ld.js,
      /system/xbin/.ls

SafetyNet modules

Here is an up-to-date list of all SafetyNet logging modules. My previous blog post describes most of these.

LOG_APPS_TAG = "apps";  
LOG_ATTESTATION_TAG = "attest";  
LOG_CAPTIVE_PORTAL_TEST_TAG = "captive_portal_test";  
LOG_DALVIK_CACHE_TAG = "dalvik_cache_monitor";  
LOG_DEVICE_ADMIN_TAG = "device_admin_deactivator";  
LOG_DEVICE_STATE_TAG = "device_state";  
LOG_EVENT_LOG_TAG = "event_log";  
LOG_FILES_TAG = "su_files";  
LOG_GMSCORE_INFO_TAG = "gmscore";  
LOG_GOOGLE_PAGE_INFO_TAG = "google_page_info";  
LOG_GOOGLE_PAGE_TAG = "google_page";  
LOG_HANDSHAKE_TAG = "ssl_handshake";  
LOG_LOCALE_TAG = "locale";  
LOG_LOGCAT_TAG = "logcat";  
LOG_MX_RECORDS_TAG = "mx_record";  
LOG_PACKAGES_TAG = "default_packages";  
LOG_PROXY_TAG = "proxy";  
LOG_REDIRECT_TAG = "ssl_redirect";  
LOG_SD_CARD_TAG = "sd_card_test";  
LOG_SELINUX_TAG = "selinux_status";  
LOG_SETTINGS_TAG = "settings";  
LOG_SETUID_TAG = "setuid_files";  
LOG_SSLV3_TAG = "sslv3_fallback";  
LOG_SUSPICIOUS_PAGE_TAG = "suspicious_google_page";  
LOG_SYSTEM_CA_CERT_STORE_TAG = "system_ca_cert_store";  
LOG_SYSTEM_PARTITION_FILES_TAG = "system_partition_files";  

Extras

SafetyNet is not just about the modules described here. During the attestation process some other checks happen via different systems; for example, there is code that acts as an old-fashioned root detector, trying to figure out whether the following files/directories exist in the filesystem (or whether traces of them appear in device logs).

I do hope that the output of the rest of the SafetyNet modules is also taken into account during the calculation of the ctsCompatibility response.

"/system/bin/su"
"/system/xbin/su"
"/system/bin/.su"
"/system/xbin/.su"
"/system/xbin"
"/system/bin"
"/system/sd/xbin"
"/system/bin/failsafe"
"/data/local"
"/system"
"/system/bin/.ext"
"/data/local/xbin"
"/data/local/bin"

Over-the-air configuration

As mentioned above, SafetyNet is configured by Google at runtime, even though the code itself is also updated once every three months on average.

The following are some of the more interesting configuration options:

Signal Tags Whitelist - Idle Mode

This configures which modules are used by the SafetyNet "idle mode" logger.

  n: "snet_idle_tags_whitelist"
  v: "system_partition_files,
      system_ca_cert_store,
      setuid_files,
      dalvik_cache_monitor,
      logcat,
      event_log,
      device_state"

Signal Tags Whitelist - Normal Mode

This configures which modules are used by the SafetyNet "normal mode" logger.

  n: "snet_tags_whitelist"
  v: "default_packages, 
      su_files,
      settings,
      locale,
      ssl_redirect,
      ssl_handshake,
      sslv3_fallback,
      proxy,
      selinux_status,
      sd_card_test,
      google_page_info,
      captive_portal_test,
      gmscore,
      logcat,
      event_log"

Event Log Tags

This is used by the Event Logger module. The SafetyNet service is configured to retrieve and log the following event tags:

  n: "snet_report_event_logs"
  v: "50125:2,
      50128:2,
      conscrypt:3,
      78001:2,
      65537:2,
      90201:2,
      90202:2,
      70151:2"

The tags correspond to /system/etc/event-log-tags:

  • 50125:2
    • SMS denied by user
    • exp_det_sms_denied_by_user (app_signature|3)
  • 50128:2
    • SMS sent by user
    • exp_det_sms_sent_by_user (app_signature|3)
  • conscrypt:3
    • unexpected (early) ChangeCipherSpec message
  • 78001:2
    • FrameworkListener dispatchCommand overflow
    • exp_det_dispatchCommand_overflow
  • 65537:2
    • netlink failure
    • exp_det_netlink_failure (uid|1)
  • 90201:2
    • log whether user accepted and activated device admin
    • exp_det_device_admin_activated_by_user (app_signature|3)
  • 90202:2
    • log whether user declined activation of device admin
    • exp_det_device_admin_declined_by_user (app_signature|3)
  • 70151:2
    • exp_det_attempt_to_call_object_getclass (app_signature|3)

SIC Server URL
  n: "snet_sic_server_url"
  v: ""

This is currently empty, but will eventually be the server URL for the "System Integrity Checker" service described above.

DroidGuard

These posts are aimed primarily at providing some clarity over the SafetyNet system to developers who wish to adopt attestation APIs in their applications. It must be noted that attestation is just a small aspect of the SafetyNet system; its main use is to retrieve data so that Google can monitor the security of the Android ecosystem and track ongoing incidents.

As I've hinted in my previous post, while performing this investigation I stumbled upon DroidGuard, a set of components that communicates with remote Google Play APIs and is used for fraud detection, anti-abuse and operations like DRM.

SafetyNet interacts, along with many other components, with DroidGuard. Although these two systems may co-operate on some checks, DroidGuard is an independent system that serves different purposes, more in line with Google's anti-malware efforts. I will not be revealing details about this system, as I think such details would only benefit malware authors, not application developers who want to keep their Android apps protected. Similarly, revealing details on 'how to bypass SafetyNet' is not the goal here. Such details are shared directly with Google and with enterprise developers interested in assessing the system before using it.

Improving SafetyNet

Here's a wish list of things I'd like to see in SafetyNet, along with some thoughts.

  • SafetyNet is not a root detection system, although it goes a long way towards that goal. It suffers from some early symptoms of more traditional on-device checking systems: it's designed for large-scale data gathering and does not adequately protect itself against targeted attacks. It will tell Google that X% of devices are tampered with, but, for now, it stops short of actively resisting tampering by malware that specifically wants to present a false image to the checkers. Of course this is an ultimately futile effort, but the bar can be raised.
  • I'd like to see at least some degree of code protection for the checkers.
  • It'd be great if checks were performed using a range of high-level and low-level APIs.
  • It'd also be good if more SafetyNet checkers influenced the compatibility decision, beyond the straightforward su binary tests.
  • How much the compatibility decision is influenced by historical data about a device is an open question. Moving away from point-in-time checks could be a worthwhile goal.
  • Some clarity around the SIC server system would be nice to have.
  • Making use of TrustZone
  • Multi-platform support for the Attestation APIs would be interesting to see (iOS attestation...)

Network Security Policy configuration for Android apps

Android engineers have recently been busy building out AndroidNSSP (Android Network Security Provider): a system that application developers will be able to use to control aspects of the network security policy of their application. It's been long overdue, and various bits and pieces are still missing; however, important parts were merged into AOSP master about a month ago.

This is the second part of my previous post on the topic (the war on cleartext traffic).

android.security.net.config

The android.security.net.config package holds most of the new code. This package contains various new classes used to parse user configuration about the network security policy of an app (from AndroidManifest.xml or other XML files) and to configure the policy. There's a lot of rework in the networking internals, mainly so that applications use a new NetworkSecurityTrustManager.

Capabilities

Applications using the new system will be able to control the following for their app context:

  • Block clear-text traffic
  • Enforce HSTS
  • Use Certificate Pinning
  • Configure custom Trust Anchors

These configuration options can be application-wide or per-domain.

Permit clear-text traffic

The cleartextTrafficPermitted=False property will let developers block app traffic that does not travel over a secure channel (e.g. HTTP, FTP, WebSockets, XMPP, IMAP or SMTP without TLS or STARTTLS).

It will also be possible to set this option to False per domain, allowing for greater granularity. For example, you might want to block cleartext traffic to your own servers but still let advertisement libraries communicate with their backends in any way they want.

The default will be True, allowing cleartext traffic for all.

However, there are various caveats:

This flag is honored on a best-effort basis, because it's impossible to prevent all cleartext traffic from Android applications given the level of access provided to them.

For example, there's no expectation that the java.net.Socket API will honor this flag because it cannot determine whether its traffic is in cleartext. However, most network traffic from applications is handled by higher-level network stacks/components which can honor this aspect of the policy.
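As a sketch of what honoring the flag can look like, a higher-level component could consult the runtime NetworkSecurityPolicy API (available since API 23; the guard class below is illustrative, not framework code):

import java.io.IOException;
import android.security.NetworkSecurityPolicy;

final class CleartextGuard {
    // Hypothetical hook point, called before opening a plain http:// connection.
    static void ensureCleartextAllowed() throws IOException {
        if (!NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted()) {
            throw new IOException("Cleartext traffic not permitted by the app's policy");
        }
    }
}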

NOTE: WebView does not honor this flag.

HSTS enforcement

The hstsEnforced=True setting will allow developers to enforce HTTP Strict Transport Security for their apps. As many of you know, HSTS is a mitigation control for SSL-Stripping style attacks.
It still allows MitM attacks during the first connection to an unknown host. Browsers attempt to close this window by pre-loading HSTS entries for popular websites; I'm unsure how that could work for normal Android apps.

Again, this setting will be configurable per domain or application-wide.

Certificate Pinning

Android has had native Certificate Pinning support since version 4.2. A big thanks to Nikolay Elenkov for his detailed (although a bit dated now) write-up from 2012.

The problem with this pinning implementation is that it's been practically unused so far: Android applications didn't have a way to import their own pins unless they had root access to the system.

The new Network Security Policy system fixes this: Applications will be able to configure the pins they want to use, for the domains they want to use them for.

Here is a sample per-domain configuration for koz.io:

<domain-config hstsEnforced=[True|False] cleartextTrafficPermitted=[True|False]>  
    <domain includeSubdomains=[True|False]>koz.io</domain>
    <pin-set expiration="exp-date">
        <pin digest=sha256>PaJOmDNhWkVBvuXfzqXMyfo7kgtGpcyZp6L8EqvM8Ck=</pin>
    </pin-set>
</domain-config>  

The pins are base64-encoded SHA-256 hashes of the certificate's Subject Public Key Info (SPKI) field, as described in RFC 7469.

Instructions on generating the pin for the end-entity certificate of koz.io:

openssl x509 -in koz.io.pem.crt -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64  

or

openssl s_client -servername koz.io -connect koz.io:443 | openssl x509 -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64  
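Equivalently, here's a small Java sketch that computes the same pin from a PEM or DER certificate file (a generic example, not tied to any particular platform):

import java.io.FileInputStream;
import java.security.MessageDigest;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;

public final class SpkiPin {
    public static void main(String[] args) throws Exception {
        try (FileInputStream in = new FileInputStream(args[0])) {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            // getEncoded() on the public key returns the DER SubjectPublicKeyInfo.
            byte[] spki = cert.getPublicKey().getEncoded();
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(spki);
            System.out.println(Base64.getEncoder().encodeToString(digest));
        }
    }
}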

The pinning implementation, as expected, hooks into the default TrustManager's checkServerTrusted(); it's been that way for quite a while, but until now apps were not able to use their own pins.

Custom Trust Anchors

Developers will also be able to set up their own custom Trust Anchors to be used for their connections. This is useful if they are testing against backends with a custom self-signed certificate (instead of disabling validation altogether), if they plan to move away from the PKI system, if they want to implement a trust-anchor-based way of pinning, and so on.

The system will support loading certificate files from the following sources:

  • system trust store (/etc/security/cacerts)
  • user trust store (cacerts-added)
  • keystore file shipped with app
  • directory with certificates shipped with the app
  • other resource file shipped with the app

The implementation for most of these is not there yet.

Here is a sample application-wide configuration:

<base-config hstsEnforced=[True|False] cleartextTrafficPermitted=[True|False]>  
    <trust-anchors>
        <certificates src=["system"|"user"|"resource-id-ref"] overridePins=[True|False]>
    </trust-anchors>
</base-config>  

You might have noticed the overridePins property. This gets tricky very quickly: a custom trust anchor may or may not override any pins (for certificate pinning) also set for a particular domain.

Conclusion

I welcome the changes and hope they make it into Android N, which is due to be released on May 18. They'll bring some much-needed flexibility and robustness to Android app network security, which is currently messy, to say the least.

Using Android's tamper detection securely in your app

In a previous blogpost, I described how Google Play's SafetyNet service is structured from a technical perspective, diving deep into the details and the checks it performs on the device.

Recap: Google Play's SafetyNet service allows your application to gain information about the 'CTS compatibility' status of the device you are running on. You can think of CTS compatibility as a mix of rooting detection, device tampering detection and active MitM detection.

Many applications use commercial 'protection suites' to do some of these tasks, or roll their own solution - which is often trivially broken.

Google Play's SafetyNet service can provide your app with similar information for free; and, although the checks are basic, it is harder to bypass than a home-grown solution. I believe using this API is worth a shot if you really want to do tamper detection but don't want to invest in a specialized product or consultancy services.

However, using the SafetyNet API 'properly' is not straightforward for non-security-aware developers.

Using SafetyNet insecurely

For example, this sample app (source) and this app implement the API in a client-side-only way. These applications get the attestation result and check the signature and the CTS compatibility status locally, using the getBoolean() method on the ctsProfileMatch and isValidSignature fields.

The problem with this approach is that an attacker who already has root access on the device can hook the getBoolean() method and make it always return true, tricking your app into believing that the device is CTS compatible while the real SafetyNet response says it is not. The same problem exists if you are locally checking the signature of the JWS AttestationResult object.

An Xposed module that does exactly this sort of hooking has already been published, allowing for an easy bypass.
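The core of such a module is tiny. A sketch of the idea (the target class name below is hypothetical; a real module hooks whatever class the victim app uses to hold the parsed attestation result):

import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodReplacement;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class CtsSpoofModule implements IXposedHookLoadPackage {
    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        // Replace getBoolean(String) on the (hypothetical) result holder so
        // that ctsProfileMatch and isValidSignature lookups always succeed.
        XposedHelpers.findAndHookMethod(
                "com.example.victim.SafetyNetResponse",
                lpparam.classLoader,
                "getBoolean", String.class,
                XC_MethodReplacement.returnConstant(true));
    }
}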

Alternatively, an attacker could just repackage your application and strip out all these checks, achieving the same results.

Avoid client-side checks

This is hardly new best-practice advice: avoiding client-side-only checks is good for you.

Georgi Boiko, a fellow Cigital consultant, and I created SafetyNet Playground: a sample open-source Android application that attempts to tackle these 'trivial to bypass' issues. It uses the SafetyNet API in much the same way Android Pay does.

The application is designed so that checks are done on the server side. The idea is that your server will not return any useful data unless the SafetyNet service responds that your device is CTS compatible.
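The shape of the server-side check, heavily simplified (verifyJwsSignature and the naive field matching below are illustrative stand-ins; a real server would verify the JWS with a proper JOSE library and parse the payload with a JSON parser):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class AttestationCheck {

    // Hypothetical helper: validate the JWS signature and its certificate
    // chain against Google's attestation certificate. Left unimplemented here.
    static boolean verifyJwsSignature(String jws) {
        throw new UnsupportedOperationException("use a JOSE library");
    }

    // A JWS is header.payload.signature; the payload is base64url-encoded JSON.
    static String payloadJson(String jws) {
        String[] parts = jws.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]),
                StandardCharsets.UTF_8);
    }

    // Release data to the client only if the signature verifies and the
    // payload reports a CTS-compatible device with the nonce we issued.
    static boolean deviceLooksClean(String jws, String expectedNonce) {
        if (!verifyJwsSignature(jws)) return false;
        String payload = payloadJson(jws);
        // Naive string checks purely for illustration; parse the JSON and
        // compare the decoded nonce properly in real code.
        return payload.contains("\"ctsProfileMatch\":true")
                && payload.contains(expectedNonce);
    }
}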

With such a solution, an attacker can no longer trivially hook things in your application. He needs to invest time and effort playing catch-up with the constantly changing SafetyNet service, attempting to hook everything it collects from the device and figuring out what would constitute an 'acceptable' state for each check.

Of course, an attacker can still repackage the application after stripping out the Attestation API. Depending on how smart the attacker is, this could be defeated as well, because JWS objects include the signature of the package that made the request... An attacker would have to fake that signature so that Google services think a different app made the request.

This blog post has more details about the design of SafetyNet Playground. The Android app and web-service are open source, so that you can reuse parts of the code or study it.

Enjoy!