Last week’s Twitter hack was a reminder that good design and implementation are no panacea for operational insecurity.
It’s easy to obfuscate or hide the shameful truth of how you’ve been compromised, so I have to give the Twitter security team props for being frank and open about what happened. In short, parties-somewhat-unknown managed to take over a healthy fistful of ostensibly secured and verified high-profile accounts via social engineering and access to an internal Twitter Slack channel on which access credentials and tools had been left for all to see. This – and I don’t want to blow anyone’s mind by stating it – was a Bad Thing To Do on Twitter’s part: understandable, but nonetheless unwise.
Still, for a moment or two the prevailing theory was that some of those accounts had been compromised using the same method as prior Twitter hacks, and so – briefly – the narrative illustrated the Achilles heel of a lot of modern authentication: SMS hijacks and SIM swaps are a remarkably efficient way of completely defeating text-based two-factor authentication.
In simple terms, a SIM swap is usually a socially engineered hack whereby someone convinces your phone carrier to port your phone number over to a SIM card that isn’t yours and that they own. Sometimes this is done with the aid of an unscrupulous phone company employee, and sometimes it’s done with enough personal information about the victim that a legitimate employee can be fooled, but the net effect is the same: a person (who is not you) is in possession of a phone (with a number that until recently belonged to you) that they can use to receive SMS messages from companies and institutions that use SMS as a means of secondary authentication. Thus, they can seize control of anything that sends a one-time code to your phone to prove your identity, steal your social media accounts, raid your bank accounts and generally play havoc with your life.
While a host of companies (I’m looking at you, Google) offer authentication methods that aren’t tied to SMS messages, Apple’s approach is fairly ingenious. Simply put, if you’re trying to access your iCloud account from a new device, Apple sends a six-digit code to another device signed in with the same Apple ID – so your prospective attacker would also need access to one of your existing devices in order to get the code that unlocks your iCloud account. Which, if you think about it for a second, is a scenario as silly as it is unlikely.
It’s a solid approach, but only really good for data inside Apple’s own little ecosystem, and for my money there are better two-factor solutions out there. Apps like Google Authenticator give you an extra level of security in that the code required to access the relevant service is generated on a device that you already own and have already configured to produce it. Better yet, Google supports physical authentication methods like YubiKey, which require an actual USB or NFC device to be plugged into the computer or phone in order to allow access. The beauty of this is that if someone wants to steal your data, they’re not only going to have to steal your device but also swipe your physical keys, which rather ups the game on their end from “stupid, greedy and malicious” to “Thomas Crown Affair-style shenanigans.”
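Under the hood, authenticator apps of this kind generally implement TOTP, the time-based one-time password algorithm standardized in RFC 6238 (built on the HOTP construction from RFC 4226): the QR code you scan at setup encodes a shared secret, and both your device and the server derive the same six-digit code from that secret plus the current time. Here’s a minimal sketch using only Python’s standard library – an illustration of the algorithm, not the internals of any particular app:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the base32-encoded shared secret (the string hidden
    inside the QR code you scan when enrolling a service).
    """
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if now is None else now) // timestep)
    # HOTP core (RFC 4226): HMAC-SHA1 over the big-endian counter...
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # ...then "dynamic truncation": use the low nibble of the last byte
    # as an offset, take 31 bits from there, and reduce to N digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# The RFC 6238 test secret is the ASCII string "12345678901234567890".
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret))  # whatever the current 30-second window yields
```

Because the secret never leaves your device after enrollment and the code rolls over every 30 seconds, there’s simply no SMS message for a SIM-swapper to intercept – which is the whole point.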
Still, setting up physical keys and enabling apps like Google Authenticator are – while strongly recommended for critical services – potentially a lot to manage for many people. I love Google Authenticator, but I have eight or nine different services set up in the app and have to scroll around the list to find the right one, only to discover that the code is about to time out – which is admittedly a pretty low-stakes complaint, but apparently yes, I’m that petty and shallow. It’s an unfortunate truth that if you make a system too complex or difficult then people won’t use it, and third-party authenticators sadly seem to fall into that category. It’s understandable; there are hoops to jump through and apps to install and codes to scan, and the miscellany of the operation can be tiresome and easy to push off to tomorrow or ignore altogether.
There is, fortunately, a fantastically simple way of locking unauthorized people out of your phone account: put a PIN on it. AT&T (for example) calls it a “wireless passcode”, and it’s easy to find if you log into your account via their customer portal; create a four- to eight-digit code and it will then be required whenever you (or someone pretending to be you) wants to do anything to your account, either online or in a retail store. Verizon and T-Mobile offer something similar. It won’t protect you if someone at the phone company is willfully consorting with criminal elements to defraud you and compromise your identity, but it’ll stop anyone from walking into a phone store with a tale of woe and your personal details and walking out with a new phone and the keys to your life…