Tool or Project Name: Ruse

Short Abstract:
Facial recognition is eroding privacy and other human rights. Industry and government have ethical responsibilities to prevent this, but what if there were a way to enhance privacy for individuals without waiting for the cavalry? Adversarial technology gives people a way to protect their own faces as a biometric. Ruse is an open-source mobile app that applies adversarial research from the past year so that “normal” people can protect the photos they put online from being processed by commercial facial recognition products.

Short Developer Bio:
Mike Kiser is insecure. He has been this way since birth, despite holding a panoply of industry positions over the past 20 years—from the Office of the CTO to Security Strategist to Security Analyst to Security Architect—that might imply otherwise. In spite of this, he has designed, directed, and advised on large-scale security deployments for a global clientele. He is currently in a long-term relationship with fine haberdashery, is a chronic chronoptimist (look it up), and delights in needlessly convoluted verbiage. He speaks regularly at events such as the European Identity Conference and the RSA Conference, is a member of several standards groups, and has presented identity-related research at Black Hat and Def Con. He is currently a Senior Identity Strategist for SailPoint Technologies.

URL to any additional information:
https://github.com/derrumbe/Ruse

Detailed Explanation of Tool:

In an ideal world, this tool would use two of the latest techniques pioneered at various academic institutions over the past year: Fawkes (http://sandlab.cs.uchicago.edu/fawkes/) and LowKey. For an app like this to truly work, however, ease of use is essential. That means it *must* be delivered in a mobile format, which restricts the app to TensorFlow Lite. This in turn rules out on-board learning and means that whatever techniques it uses must be as quick and as easy to use as FaceID is on the same localized device. (Ironic, no?)
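To make that constraint concrete, here is a minimal sketch (not Ruse's actual code) of loading and running an on-device model with TensorFlowLiteSwift and the Metal GPU delegate. The model filename is a placeholder, and the only thing happening is inference, since on-board learning is off the table:

```swift
import Foundation
import TensorFlowLite

// Load a bundled .tflite model with GPU acceleration via the Metal delegate.
// "arbitrary_style_transfer" is a placeholder name, not a file Ruse ships.
func makeInterpreter() throws -> Interpreter {
    guard let modelPath = Bundle.main.path(forResource: "arbitrary_style_transfer",
                                           ofType: "tflite") else {
        fatalError("model not found in the app bundle")
    }
    let gpuDelegate = MetalDelegate()
    let interpreter = try Interpreter(modelPath: modelPath, delegates: [gpuDelegate])
    try interpreter.allocateTensors()
    return interpreter
}

// Inference only: copy the prepared input tensor in, invoke, read the output back.
func runModel(_ interpreter: Interpreter, on inputData: Data) throws -> Data {
    try interpreter.copy(inputData, toInputAt: 0)   // e.g. normalized Float32 RGB
    try interpreter.invoke()
    return try interpreter.output(at: 0).data
}
```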

However, decent results can be had with a cheaper, faster combination of techniques: injecting Perlin noise into the photo, a la Camera Adversaria (https://github.com/kieranbrowne/camera-adversaria), and re-texturing the photo with the relatively well-known "arbitrary style transfer" technique. The combination of the two is powerful enough to warrant further development because it disrupts two distinct stages of the facial recognition pipeline: face detection and face classification.
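As a rough illustration of the noise half of that combination, the sketch below blends smooth 2D value noise (a simplified stand-in for the Perlin/simplex noise used by Camera Adversaria and the SimplexNoise port listed below) into a grayscale pixel buffer. The epsilon and frequency defaults are illustrative, not the values Ruse ships with:

```swift
// Simple 2D value noise: a lattice of random values sampled with smoothstep
// interpolation. Cheaper and cruder than true simplex noise, but enough to
// show where the perturbation enters the pipeline.
struct ValueNoise {
    private let lattice: [Float]
    private let size: Int

    init(size: Int = 64) {
        self.size = size
        self.lattice = (0..<(size * size)).map { _ in Float.random(in: -1...1) }
    }

    func value(x: Float, y: Float) -> Float {
        let xi = Int(x.rounded(.down)), yi = Int(y.rounded(.down))
        let tx = smooth(x - Float(xi)), ty = smooth(y - Float(yi))
        func at(_ i: Int, _ j: Int) -> Float {
            lattice[((j % size + size) % size) * size + ((i % size + size) % size)]
        }
        let top    = at(xi, yi)     * (1 - tx) + at(xi + 1, yi)     * tx
        let bottom = at(xi, yi + 1) * (1 - tx) + at(xi + 1, yi + 1) * tx
        return top * (1 - ty) + bottom * ty
    }

    private func smooth(_ t: Float) -> Float { t * t * (3 - 2 * t) }
}

// Blend low-frequency noise into a width*height grayscale buffer (values in 0...1).
// `epsilon` controls perturbation strength; `frequency` controls noise scale.
func injectNoise(into pixels: inout [Float], width: Int, height: Int,
                 epsilon: Float = 0.08, frequency: Float = 0.05) {
    let noise = ValueNoise()
    for y in 0..<height {
        for x in 0..<width {
            let n = noise.value(x: Float(x) * frequency, y: Float(y) * frequency)
            let i = y * width + x
            pixels[i] = min(max(pixels[i] + epsilon * n, 0), 1)
        }
    }
}
```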

This protection currently comes at a slight cost to human intelligibility, but the app allows the strength of these changes (and the protection they provide) to be adjusted in-flow. There are onboard facilities to check their impact: Google MLKit runs face detection against the modified photo, for example, so that the end user can dial the modifications down to a level that is still effective but less visually disruptive.
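A rough sketch of that check, assuming Google MLKit's on-device face detector (Ruse's surrounding UI and thresholding are not shown): if the detector still finds faces in the protected photo, the user knows to dial the strength up; if it finds none, they can dial it down for a less distorted result.

```swift
import UIKit
import MLKitVision
import MLKitFaceDetection

// Run MLKit face detection against a (modified) photo and report how many
// faces it still finds. Zero means the perturbation defeated detection.
func countDetectedFaces(in image: UIImage,
                        completion: @escaping (Int) -> Void) {
    let options = FaceDetectorOptions()
    options.performanceMode = .fast          // quick sanity check, not full analysis

    let detector = FaceDetector.faceDetector(options: options)
    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    detector.process(visionImage) { faces, error in
        guard error == nil, let faces = faces else {
            completion(0)
            return
        }
        completion(faces.count)
    }
}
```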

This is a camera-centric mobile app, so the flow looks like this: take a photo (or pick one from the roll) -> apply Perlin noise -> apply the style filter -> check the result against facial recognition -> save to the roll or upload to social media.
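The sketch below shows how one pass of that flow might hang together. The helper names (applyPerlinNoise, applyStyleFilter, countDetectedFaces) are hypothetical stand-ins for Ruse's internal functions, used here only to make the ordering concrete:

```swift
import UIKit

// Stubs standing in for the steps sketched earlier in this write-up; the real
// implementations live in Ruse itself.
func applyPerlinNoise(to image: UIImage, strength: Float) -> UIImage { image }
func applyStyleFilter(to image: UIImage, strength: Float) -> UIImage { image }
func countDetectedFaces(in image: UIImage, completion: @escaping (Int) -> Void) { completion(0) }

// Runs one pass of the camera-to-upload flow and reports whether the result
// still trips face detection, so the UI can let the user dial the strength
// up or down before saving to the roll or sharing.
func protectPhoto(_ original: UIImage,
                  strength: Float,
                  completion: @escaping (_ protected: UIImage, _ stillDetected: Bool) -> Void) {
    let noisy  = applyPerlinNoise(to: original, strength: strength)  // 1. Perlin noise
    let styled = applyStyleFilter(to: noisy, strength: strength)     // 2. style transfer
    countDetectedFaces(in: styled) { faceCount in                    // 3. recognition check
        completion(styled, faceCount > 0)                            // 4. save or adjust
    }
}
```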

The app is on GitHub at https://github.com/derrumbe/Ruse, and its first release will land on the Android and Apple app stores (hopefully in time for DEF CON): as noted above, ease of use is the goal.

Operating system:
Swift (iOS) / Java (Android; the Android version currently lags behind iOS but will hopefully be ported later this summer)
TensorFlow version: TensorFlowLiteSwift, nightly build (with GPU acceleration enabled)
GoogleMLKit
GPUImage: https://github.com/BradLarson/GPUImage (open source)
SimplexNoise: https://weber.itn.liu.se/~stegu/simp...plexNoise.java (open source)

Supporting Files, Code, etc:
https://github.com/derrumbe/Ruse

Target Audience:
Consumer Mobile Offense?