If you run a system with hidden users (which may sound bizarre and creepy on the face of it, but it’s something occasionally required on MDM-managed installs) and you want to log in as that user on your computer then this is a useful tip that used to be available in macOS documentation back when macOS was good old-fashioned OS X. It seems to have disappeared now, which is a shame, but it’s also pretty cool that it’s quietly persisted over the last dozen OS versions.
The default display at the login window is the icon view for each user; click on the user, enter the password (or authenticate via TouchID) and you’re in. Great. But if you want to be able to set that login window to display Username/Password fields then you have to log in, go to the System Preferences and make changes there so that the next time you log in you get that option. It’s a drag, because it’s a bunch of extra steps. If only there were a quick and easy way of toggling between those login window options. You can see where this is going.
Here’s how to do it:
• At the login window – when presented with the icons for each user – don’t click on anything. Instead, tap the left or right arrow on the keyboard to highlight a user. If it’s not your main user then that doesn’t matter; the trick is to highlight something.
• Hit Option-Return on the keyboard, and the window will switch to Username/Password fields, thus allowing you to type in the short name and password of the user you want to log in as.
You can hit Option-Return again to toggle back, but as the change is only for that login window session you’ll get your normal icon view back the next time you log in…
A few weeks back I wrote about the FrankenMini – the Mac mini I assembled a la Tony Stark out of a box of scraps. Okay, Iron Man did his thing in a cave in the desert with a bunch of weaponry and I did mine in my garage with a couple of screwdrivers and a certain amount of swearing, but we both looked a little sickly and had uncool scruffy facial hair, so I’m calling it pretty much a dead heat.
I used to run a much nicer Mac mini as a file/web/whatever server in my home office, but switched that to a nice Synology Diskstation that’s synced to an identical unit in my downtown office, and with that in place I no longer had need of the nice Mac mini which went on down the road to a friend of mine. This, it turns out, was no great loss, as Apple has been steadily and methodically stripping functionality out of macOS Server for years, and as such there wasn’t a lot of tinkering possibility locked away in the thing.
But now I have this old, terrible, not nice-but-perfectly-serviceable Mac mini doing little except feeding the occasional print job to my 3D printer and sitting reproachfully on a workbench, so I thought I’d do something useful with it and implement DNS over HTTPS (or DoH if you don’t like a lot of typing. Which I don’t.)
Hmm? What’s DNS over HTTPS, I hear you say? Let me explain.
If you’re interested in a slightly higher level view of the basic mechanics of DNS – and I highly recommend you dip a toe into that water because it’s perfectly warm and not as full of monsters as you may expect – I’d encourage you to go look here at this excellent write up on cloudflare.com:
DNS is (to cut a tremendously, tremendously long story short) the way that your computer turns human-readable internet names (e.g., www.apple.com) into the actual IP addresses that your computer needs to get to a webpage (e.g., 23.15.137.53). It does this by reading what you type into, say, your browser (although many, many things on your computer use DNS, a lot of which you wouldn’t think of and many that you’d never know about) and then sending that inquiry off to a DNS server (or more correctly a DNS resolver) – usually on the internet – that does the footwork and goes and queries other DNS servers to work out where it is you want to go.
But how does your computer know where the DNS server/resolver is? Well, the address of the DNS server/resolver is something that’s provided to your computer by whatever network your computer or device is connected to – your home router, or your mobile hotspot, or the coffee shop wifi network (back when we could go and sit in coffee shops, that is). Those networks assign your computer or device an address on their local network and a DNS server so that when you send a query out to the world at large to look at www.apple.com that network will know where you are to send that information back to you.
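Incidentally, if you’re curious which resolver your own Mac is pointed at right this moment, you can ask it in the Terminal – this just filters the output of scutil down to the nameserver entries:

scutil --dns | grep nameserver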
The internet as we know it only functions because of DNS, and the architecture of DNS is one of the great unsung marvels of the modern age. It’s not sexy or cool or attention-grabbing, but were it not for DNS there’d be no Google, or Amazon, or Netflix or… well, you get the idea. Still, DNS dates back to 1983, and because it’s such a crucial part of the way that the modern world works, progress on things like security has been slow albeit steady. It’s not the kind of thing you can make major, paradigm-shifting changes to without breaking modern civilization very badly, which, I think we can mostly all agree, would be A Bad Thing.
The chief problem with DNS is that the resolver sits there, happily gathering information on what you’re sending it whether you like it or not. As a general point of principle it’s a little creepy that your ISP or whoever is providing you with the resolver knows what’s going in and out of your computer, but principle aside there are a lot of use cases where that’s data that you really don’t want getting out there. I’ve worked with clients who are doing contract work for the DoD, for example, and who aren’t thrilled that someone with a will and a relatively small amount of resources could theoretically sniff the traffic they’re sending to their DNS resolver. Happily there’s a way of stepping around that, which is where the FrankenMini™ steps in.
Cloudflare offers a fast, secure DNS resolver that encrypts your DNS traffic by wrapping it in HTTPS. In more normal English, HTTPS is the securely encrypted version of HTTP, which is in turn the protocol used to deliver web content over the internet. When you connect to a secure webpage – a bank or online store for example – you’re probably connecting via HTTPS. However, when your computer sends an inquiry to most DNS resolvers, that inquiry goes out as plain, unencrypted DNS traffic – in effect, your ISP can’t see what you’re doing on a secure website, but it can still snoop on where you’re going.
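If you’d like to see DoH in action for yourself, Cloudflare’s resolver will happily answer a DNS question asked as an ordinary HTTPS request. Something like this (using their JSON-flavored endpoint, which is the easiest one to poke at by hand) returns the addresses for www.apple.com as a blob of JSON:

curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=www.apple.com&type=A'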
Many modern browsers (Chrome, Edge, Brave, Firefox et al) already have DoH built in as an option, but Safari does not. And that’s vexing. Additionally, there’s plenty of software other than web browsers that also uses DNS, and it’d be nice to be able to extend those protections to those other pieces of software and services. Happily, Cloudflare has an option for that – the cloudflared client. If you install this onto your Mac then anything on that Mac will be able to use DoH. And if you make that Mac your DNS server then anything on your network will likewise automatically be able to use it, too.
So, in one hand I have a FrankenMini™ and in the other hand I have cloudflared. I think you can probably work out where this is going.
To business, then – setting up the FrankenMini™ to be a DNS server so that every DNS search is encrypted and nice and secure!
First, we’ll need a DNS server that we can run on the Mini, because otherwise there’ll be nothing for the devices on the network to use. I’m going to use dnsmasq because it’s open source, easy to configure, pretty well-documented and generally awesome. Once that’s in place I’m going to install cloudflared so that DNS requests that the Mini sends out are covered by DoH, and finally I’m going to tie those two things together so that DNS requests from computers/devices/whatever on my network come into the Mini via dnsmasq and the Mini then pushes those DNS requests out via cloudflared.
Installing dnsmasq and cloudflared is done via homebrew, thus:
brew install dnsmasq
…and then
brew install cloudflare/cloudflare/cloudflared
Once dnsmasq is installed the next step is to configure it to forward queries to the local machine, so that it can pass requests to the cloudflared client. This is done by digging into the dnsmasq configuration file at /usr/local/etc/dnsmasq.conf and then changing the default of #server=/localnet/192.168.0.1 to server=127.0.0.1#54
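The finished line in dnsmasq.conf is literally just this – and note that the # here is dnsmasq’s syntax for “use port 54 at that address” rather than the start of a comment:

server=127.0.0.1#54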
The next step is to configure cloudflared. Happily, Cloudflare has this covered on their site and it’s a simple matter of making a cloudflared directory in /usr/local/etc/ and then creating a file called config.yml in that directory. You populate that file via a copy/paste job with a couple of minor tweaks – while normal, common-or-garden DNS runs on port 53, we’re telling dnsmasq to send inquiries out on port 54 and cloudflared to listen for requests on port 54 so that they can talk to each other privately and don’t start playing havoc with every other device on the network, and additionally we’re going to set a no-autoupdate flag:
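The end result is a config.yml along these lines – the exact key names have shifted around between cloudflared versions, so treat this as a sketch and defer to Cloudflare’s current documentation if it complains:

proxy-dns: true
proxy-dns-port: 54
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query
no-autoupdate: true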
All that’s left is to make cloudflared and dnsmasq start when the computer boots up – the FrankenMini™ sips power like a little tiny baby bird, so it stays up and running 24/7, but on the off-chance that it needs rebooting it would be a hassle to have to remember to go into the garage (which, as mentioned in the main FrankenMini article is cold and chiefly filled with ghosts and rats) to manually go and kick-start the DNS server. Happily, cloudflared and dnsmasq include tools for this very purpose which you can install by typing the following into the terminal…
sudo cloudflared service install
…which will install a launchd item at /Library/LaunchDaemons/com.cloudflared.plist, and…
sudo brew services start dnsmasq
…which will similarly set dnsmasq to start at boot.
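Before going any further it’s worth a quick sanity check – run dig on the Mini and point it at the Mini itself, and if an IP address comes back then dnsmasq and cloudflared are talking to each other properly:

dig +short www.apple.com @127.0.0.1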
After that it’s simply a case of setting each device on your network to look at the IP address of the machine that’s running dnsmasq/cloudflared (or, in my case, setting my router up with that address so that every device connected to my network automatically gets that address).
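(If you’d rather set an individual Mac by hand instead of – or as well as – fiddling with the router, networksetup will do it from the command line; swap in your own network service name and the address of the machine running dnsmasq, which in my case looks like this:)

networksetup -setdnsservers Wi-Fi 10.0.0.64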
One unanticipated but welcome discovery has been that Cloudflare’s DNS resolver (1.1.1.1) is astoundingly fast, and I don’t know about you but I’ll take fast and secure over slow and dreadful any day of the week…
Last time I finished up with a tremendously simplistic run through of how digital encryption uses certificates to create chains of proof that computers can use to establish trust in the identities of services and other computers they’re talking to. Still, certificates are only part of the deal, and you can’t have Public Key Infrastructure without, well, keys.
Remember Public Key Infrastructure? The thing that I made a fuss about and then said I wasn’t going to explain what it was until next time? Well, this is next time, so here goes.
To understand how PKI works it’s important to revisit symmetric encryption and talk about asymmetric encryption.
If symmetric encryption is using one key to encode and decode a piece of information, then asymmetric encryption is the use of two keys to do the same. The two keys are a public key and a private key, and while in the real world you’d have one key that both locks and unlocks a door, you can think of these two keys as special keys that can each only lock or only unlock that door.
Let’s say that I run Big Dave’s Taco Shack™. In this example I’ve taken leave of my senses, thrown all my computer stuff into a hole in the ground and immediately embraced all my desires to become a terrible restaurateur. I go and sign a lease on a restaurant in a strip mall that nobody ever goes to, make a probably very politically and morally problematic sign to hang over the front door, then go and buy… I don’t know. Commercial taco-making apparatus, probably. This is a terrible example. Anyway, I also hire three people to work there – Jeff, Andy and Bob.
Jeff, Andy and Bob all close up the restaurant at the end of the night, and I give them a public key because locking the doors of my burgeoning taco enterprise so that it can’t be burglarized seems like a function that I’d want to make public – inasmuch as “public” means Jeff, Andy and Bob. The key that I give them can only be used to lock the door – if the door is locked and they put that special key into it and try and unlock it then it won’t work.
I’m the owner of this doomed culinary enterprise, and I don’t want anyone else to be able to get in there in the morning except me, so I have a special private key that only exists to unlock the door.
If Jeff, Andy and Bob leave their keys lying around then that’s actually okay, because nobody can use those keys to do anything except secure my fine dining establishment. Heck, I could just start handing out those keys to everyone I know or meet just in case they ever needed to lock my probably-failing-by-now eatery up safe and sound.
That’s how public and private keys work – one key (the public key, freely handed out to the world at large) cryptographically secures a piece of information or data, and the other (the private key, kept securely away from the public) cryptographically unlocks that piece of information or data.
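If you want to see that in action without opening a restaurant, you can act the whole thing out in the Terminal with openssl – a sketch to play with rather than anything you’d want to build production systems on, with the file names being whatever you fancy:

# make the private key (mine, and mine alone)
openssl genrsa -out private.pem 2048
# derive the public key from it (the one handed out to Jeff, Andy and Bob)
openssl rsa -in private.pem -pubout -out public.pem
# lock a message with the public key…
echo "The secret taco recipe" > message.txt
openssl pkeyutl -encrypt -pubin -inkey public.pem -in message.txt -out message.enc
# …and unlock it with the private key
openssl pkeyutl -decrypt -inkey private.pem -in message.enc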
Where to go from here.
There’s a lot more to dig into here, and what I’ve laid out so far is an extremely simplified and high-altitude view, tailored for people who’ve heard some of these terms and needed a broad explanation of what they mean. I’ve deliberately not touched on session keys, the mechanics of digital signing, signatures and hashes, or even the anatomy of how an SSL/TLS transaction works in the real world. When I had the idea for these articles I wanted to focus on the basic, basic mechanics of encryption and not obfuscate matters with a lot of more esoteric information, but there’s one final thing that I wanted to touch on…
You (or: Where It All Goes Horribly Wrong).
All right, maybe not you per se, but people. Historically, people have been the Achilles’ heel of security, because human beings are not reliable. They don’t work in absolute, finite and predictable ways, and they’re so far from infallible that it’s frankly laughable.
We’re careless. We leave passwords scrawled on post-it notes on our screens, or stuck underneath the keyboard and mouse mat. We use the same passwords for every service. We put together cryptographically secure wifi networks, write the password on a whiteboard and then have our photo taken next to that whiteboard, just like this chap during the Brazil World Cup:
Way back in the first article I wrote that Helen Keller quote about security, and I’ll throw it out there again:
“Security is mostly a superstition. It does not exist in nature.”
It’s an ugly truth, but nonetheless one that has to be carefully acknowledged. No matter how well-designed and cryptographically robust a security system you use, you can’t simply set it in place and rely on it to self-maintain. As a person who makes a living advising people on this kind of stuff it might seem self-serving to use this opportunity to do a hard sell on engaging professional help, but it’s nonetheless a good idea; still, there are some logical and fairly easy steps that you can take yourself.
• Be aware of PCI/HIPAA regulations, and talk with your credit card merchant (if you have one) and insurance company (again, if you have one) about their recommendations. These are entities that really, really don’t want you to have an expensive lapse in security and can probably direct you to resources that you’d find useful.
• Educate yourself. This little burst of articles about security and encryption was fun to write, but it in no way goes into the kind of depth of information that’s out there (probably in better-written and more cogent form).
• Have good documentation about the products and services you’re using. I’m not suggesting writing down a list of passwords; rather, it’s important to know what certificates and keys you’re using, who issued them and their expiration dates.
In the last article I covered about 3800 years’ worth of the history of encryption in, to be fair, a very abbreviated form. I highlighted three distinct encryption methods because they informed the basic point of the piece – which was that prior to the invention of the modern computer and the prevalence of wide-scale networks like the internet the only practicable kind of encryption was symmetric encryption – meaning that if you want to secure a piece of information then you do that with a key that you use to encrypt it, and that the other party who wants to read that piece of information has to use that same key to decrypt it. There are countless dozens of other encryption techniques that might be fun to take a deep dive on (e.g., Null Cipher, Playfair, Rasterschlussel 44, Pigpen, Rail Fence, Four Square, Straddling Checkerboard and on and on and on…), and I’d warmly recommend doing so if you have even a passing interest in the history of how we’ve collectively tried to keep secrets from each other since we collectively crawled out of the primordial ooze.
Still, once we moved into the Age Of The Computer new and intriguing options opened up because we were no longer chained to models of handling and processing information that have to be understood and accurately processed by the average human being, who – while excelling at things like putting on pants, deciding what to have for lunch and doing laundry – is probably not designed for doing lightning-fast advanced algorithmic calculations with unerring reliability. The advent and widespread use of new technologies meant that in the span of less than a century we went from this:
to this:
(Incidentally, I would have loved to be a fly on the wall of that photoshoot. “Okay, Bill, I think we have everything we need, but let’s have you climb up on the desk. Right. Just like that, yes. Okay, now, give me those bedroom eyes…”)
Unsurprisingly, many of the initial notable digital encryption technologies followed the old symmetric model – the one where one single key was used to both lock and unlock the information. I won’t go into massive depth on them, but the big three were DES, 3DES and AES.
DES (“Data Encryption Standard”) was invented by IBM in the 1970s; it was an algorithm with a 56-bit key that was considered the gold standard of encryption in its day, but – as I mentioned in the last article – Security is mostly a superstition, and rather than calling something unbreakable it’s wiser to adopt a mindset of it-hasn’t-been-broken-yet; to that point, DES was successfully compromised in 1998. It had a good run, though.
3DES (or “Triple-DES”) was 1998’s successor to DES, and was essentially just three passes of DES run over the same piece of information, rather like three six-year-olds standing on each other’s shoulders and wearing a long overcoat to sneak into an R-rated movie. Although, frankly, if I ran a movie theater and three six-year-olds tried that I’d probably just let them in on the grounds that you have to reward that kind of ingenuity and moxie, and that they’re not my kids and their parents can deal with the therapy bills. 3DES was (and is) slow, but while it’s currently regarded as being secure it’s also being deprecated and retired, which is something a lot of us can identify with right now.
AES (“Advanced Encryption Standard”) is the technology that most people are most familiar with, as it’s used by default in most commercial and residential WiFi routers and networks. It comes in a variety of flavors (128/192/256-bit keys) and Intel and AMD build AES acceleration directly into their chips to make it faster and easier to implement. That’s good, because AES is everywhere; it’s used in iOS for device encryption, it’s used in FileVault on macOS, and it’s used to encrypt IPSec VPNs, WPA2 WiFi, and SMB 3 for encrypted macOS and Windows networking. There are no guarantees that some bright spark won’t come out with a hack for it tomorrow, but right now cracking it would be a significantly difficult task on the level of major world powers throwing vast resources at a problem and generally burning trillions of dollars in the process. AES is about as good a symmetric encryption product as you could possibly want.
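(If you’re curious whether your own machine has that acceleration, an Intel Mac will cheerfully admit to it in the Terminal – look for AES in the CPU feature list that this spits out:)

sysctl machdep.cpu.features | grep -i aes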
So, there are excellent products available for securing data using symmetric encryption, but there’s a whole other raft of tools and techniques available that use the power and miracle of the Internet™ to handle more complex and flexible encryption tasks. Behold, ladies and gentlemen, the awesome majesty of…
Public Key Infrastructure (PKI)
You see, we’re already starting with the weird acronyms and non-intuitive naming conventions and models.
Now, this whole section is a bit of bait-and-switch because I’m probably going to go into detail on what PKI is and how it works in the next article, but before we get there I think it’s worth laying some groundwork on the basic mechanic of how computers trust each other and how that works. To do that we’ll go back and revisit our friend Faceless Bob from the last article. Remember Bob? Classic. That scamp.
So, Bob wants to go on vacation, because in his world of weird faceless people you can leave the house without fear of pandemics and the airlines are actually running. I know, it’s crazy! Work with me here.
Anyway, Bob gets off the plane in a distant, non-specific foreign land and walks up to the immigration officer at the airport.
The immigration officer has no idea who Bob is or where he comes from. Bob could be a Nobel Peace Prize winner on his way to do humanitarian outreach among the fjords or a homicidal maniac with a suitcase full of axes and rubber chickens. He might not even be Bob. He could be Faceless Jeff, or (god forbid) Faceless Ted, that noted creep. Fortunately, Bob remembered to pack his passport, so he holds it up and shows it to the immigration officer:
…which is all well and good, and is admittedly a good start. “Of course,” the immigration officer thinks to himself, “this doesn’t tell me much about this Bob character. Who is he? What does he want in life? Is he going to come to my homeland and cavort around committing acts of literal mayhem? Can I trust this character and let him through immigration?”
These are all sensible questions, because while Bob has some identification, the immigration officer can’t just rely on any old piece of paper with Bob’s name on it, so he asks for Bob’s passport and Bob hands it over:
The immigration officer looks at the passport and is satisfied that it is, in fact a legitimate document. But still, just because the passport is real that offers no intrinsic guarantees about Bob’s worthiness to come into the country and walk its rural byways and enjoy the pleasures of the Glorious Democratic Peoples National Clockwork Doll Museum.
Still, the immigration officer doesn’t have to trust Bob and Bob’s intentions, because the passport was issued by the US State Department, who the immigration officer does trust:
And if the US State Department says that Bob is okay, then it’s essentially vouching for Bob and saying that he is who he says he is and that they trust him with a passport. Satisfied, the immigration officer agrees that he trusts Bob because he trusts the State Department, and because the State Department trusts Bob:
“Right,” I can hear you saying. “This is all very diverting, but what do faceless men and passports and green check marks have to do with… what did you call it? PKI?”
Well, I’d tell you that’s a smart question that deserves a simple answer, and I hope I can make it as clear as possible because this is conceptually a little goofy, and I’ve yet to see a metaphor or clever little parable that does a neat job of explaining this. So:
In the above example, Bob is a computer program running on, say, your bank’s website. The faceless State Department guy is an entity called a Certificate Authority, which uses complex mathematical algorithms to create and authenticate digital certificates that computers use to prove their identity to each other. That’d be the passport part of the equation – and once it’s created that digital certificate it gives it to Bob’s program. This certificate will only work with Bob’s program.
Your computer is the immigration officer, and when it encounters Bob’s program it doesn’t have any way of knowing for sure that Bob’s program is actually working for, say, your bank and isn’t just pretending to be from your bank. However, Bob’s program has the certificate (that is made of terrifyingly complex and realistically unbreakable math) which your computer can look at, and then your computer can check against a Certificate Authority that it absolutely trusts.
Your computer contacts the Certificate Authority and shows it the certificate that Bob’s program gave you so that the Certificate Authority can verify whether it’s legitimate or not. If it checks out then your computer knows it can trust Bob’s program – because the certificate is uniquely and specifically written to prove that Bob’s program is what it purports to be, and because your computer trusts the Certificate Authority that gave Bob’s program its certificate, it decides in turn that it is safe to trust Bob’s program.
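Incidentally, if you’d like to peek at one of these passports in the wild, openssl will show you who a website’s certificate was issued to and which authority vouched for it – here’s a quick look at Apple’s, purely as an illustration (any HTTPS site will do):

openssl s_client -connect www.apple.com:443 -servername www.apple.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer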
And this, in a nutshell, is how modern encryption works; not just with faceless men and passports and green check marks, but because we have fast and widespread secure networking in the form of the internet, it’s possible to set up chains of trust so that you don’t have to directly worry about trusting the information you’re dealing with, as long as someone higher up the chain than you does trust it.
This is, of course, a very, very simple way of looking at a pretty complicated subject, and there are loopholes and caveats and details aplenty to dig into next time…
I pride myself on being a man with his finger on the pulse of the wants and needs of the general public; a sort of psychic thermometer of the zeitgeist. It’s a responsibility I take tremendously seriously, so when I sat down to figure out what to post about I immediately considered the unsettled nature of society and the precarious economic brinksmanship we’re all engaged in and knew – without the merest consideration of the possibility of the fractional perception of any feasible doubt – that it was my duty to write a couple of posts about encryption.
It’s a subject that – in the normal world where we’re not all filled with low-grade dread about curves and spikes and rates – often throws people for a loop. I’ve been lucky to work with (and for) some remarkable people in a breadth of roles and industries, and it’s always a little humbling to come across someone who knows a lot more than you about something eclectic or esoteric. I, for example, may know a lot about mobile device management, but I know nothing about being a chocolatier. If you put me in front of a barrel of raw chocolate and told me to make a ganache then there’d be little that I could bring to the table that couldn’t be slotted into the categories of “Inedible” or “Vile.” Likewise, I can design and build robust and secure wifi networks, but I can’t practice dentistry or surgery (and really shouldn’t ever be in a position where anyone would want me to try.)
You can’t be a specialist and a generalist at the same time, but we pick up broad strokes about fields of knowledge as we go along through life and use those to build models of how the world works. I know that chocolate is made with dairy and sugar and that dentistry involves scrapey tools, soothing music and an ability to understand your patient’s incomprehensible gurgling and translate it into lies about how often they floss.
Encryption seems like it falls into this category; most clients are aware of the general idea of what it is, and that it’s important and that it’s a good thing, but there’s an inherent reluctance to trust it. People are fine with the idea of end-to-end encryption because that’s something that conversationally makes sense; that something can be encrypted all the way through to its destination. It’s a functional linguistic model, but when you start talking about public and private keys and certificates then that model breaks down; not because laymen aren’t capable of understanding those concepts, but because they aren’t easily explained in simple terms to people who aren’t IT people and are, say, chocolatiers and dentists.
So, I’m going to try and fix that in a handful of simple, easy lessons that involve minimal bloodshed.
(Attention: Other IT people, please make note: this is supposed to be a high altitude view of the subject. I don’t want a lot of messages along the lines of “what about session keys” or “actually, you can use private keys to encrypt data for public keys when you’re working with digital signatures” and so on. This one’s for the chocolatiers and dentists, not you, okay? Cool.)
The Golden Rule
“Security is mostly a superstition. It does not exist in nature.” – Helen Keller
I like to trot this quote out a lot when I talk about encryption and security, because while Helen Keller was famously not a security expert (and the rest of the quote was talking about how you should embrace risk and change), this absolutely cuts to the core of the problems we face in keeping people and data safe.
Notably: you can’t. Security is a lie. It doesn’t exist. Zebras are not safe from lions. You can skip driving and never fly in a plane and obsessively try to control every aspect of your health and still die from being hit by lightning or choke on a Brussels sprout. There are no guarantees; in the real world there’s no 100% infallible path to safety. When we talk about security, what we’re really talking about is the mitigation of risk.
And really, that’s fine. Think of information security as a boat with a hole in the bottom. Provided you’re paying attention and bailing water out as fast as it comes in, it won’t sink; and in fact if you keep paying attention and are practical and smart about your bailing strategy then the boat can keep floating indefinitely. The other passengers may not even notice that there’s a hole in the boat at all.
Okay, that’s not the greatest metaphor, but what I’m trying to get across is this: there’s always going to be some unseen, unanticipated vector of attack, but with good practices and responsible vigilance you can greatly cut down on – or almost eliminate – how it could affect you.
The History of Encryption (1900BC to 1970AD)
Well, now that that’s out of the way we can dial in on what modern encryption is and how it works, and a good introduction to that is to go into what modern encryption isn’t.
When we think about encryption in the simplest of terms we tend to think about codes and ciphers, which are forms of symmetric encryption. The idea of substituting one thing for another in order to obfuscate a message is the simplest and oldest form of encryption, and given a good enough schema it can be decently effective.
Symmetric encryption really refers to a form of encryption where one key is used to encode and decode a message. While what’s below mostly refers to antiquated forms of symmetric encryption (because they’re historically interesting and conceptually easy to get your head around) it’s still a method that’s used in modern security. Blowfish, DES, and the assorted flavors of AES (AES-128, AES-192 and AES-256) are all examples of symmetric encryption.
The Egyptians were doing it with hieroglyphs, but as seems to be the case with a lot of technologies it was the Romans who kicked things up another conceptual notch by inventing Shift Substitution ciphers – while Julius Caesar is widely credited with the invention there’s no absolute way to be sure who had that bright idea, but the fact remains that by the 1st Century A.D the Romans had it all solidly nailed down. The problem with the old Substitution cipher was that if messages were complex enough then it was possible to look at the frequency of certain words or glyphs and speculate/infer what they might refer to, but Shift Substitution was a degree more complex because it shifted each character in a message up by a fixed interval, thus:
In the above example you can see that each letter of the alphabet is shifted up a pre-ordained number of places, so that A becomes E, B becomes F and so on. Without knowing the number of places that each character is shifted through it’s difficult (although ultimately possible) to work out what the message might be.
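You can still do a perfectly serviceable Caesar-style shift at the command line today – tr will shift every letter four places along (and the second command shifts them back again), which is a fun way to convince yourself how un-secret this sort of encryption really is:

echo "HELLO WORLD" | tr 'A-Z' 'E-ZA-D'
echo "LIPPS ASVPH" | tr 'E-ZA-D' 'A-Z'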
Finally, we jump forward about 1,800 years to the late 1800s and the invention of the One-Time Pad:
One-time pads were ciphers initially created for telegraphic transmission; they were used extensively during World War II, and sort of built off of the shift-substitution model. Each message sent with a one-time pad used a unique set of numbers as its cipher – one per letter – with the sending and receiving party encoding and decoding using the same sets. After each transmission the set of numbers was destroyed (thus only used “one time”), so if the transmission was intercepted there was no way that the message could be cryptographically compromised. So, if we wanted to send a message (“Hello World”) then we’d assign each letter a number in the alphabet with, say, A=1, B=2, C=3 and so on, then add a pre-defined set of numbers to that numerical value as a key, wrapping values above 26 back around so that 27=A, 28=B etcetera. The end result would look something like this:
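To make that concrete (with numbers I’ve made up purely for the sake of the example): the H in Hello is 8, and if the first number on the pad is 19 then 8 + 19 = 27, which wraps around to become A. The E is 5, and if the next number on the pad is 3 then 5 + 3 = 8, which is H. Every letter gets its own number from the pad, so even repeated letters in the message come out as completely different letters in the ciphertext.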
Given a long enough set of numbers (or “key”) then One-Time pads were functionally unbreakable.
All the above are examples of symmetric encryption, and follow the same rules. First, they apply a shared key to a message – by “shared” key I mean simply that both the person sending and the person receiving the message have a piece of knowledge that they share that let them know how to encode/decode that message. Secondly, that shared key can be a string, a character, or an integer. Thirdly, that key can be an operation.
Let’s say that I’m Bob, and my friend Alice and I want to share a piece of encoded information. The process would look something like this.
First, I’d encrypt my message with a key (a substitution or shift-substitution cipher like the Egyptian hieroglyphs or the Roman shift substitution or a one-time pad, or possibly an operation or algorithm-based approach like AES-128).
Next, I’d send that key to Alice so that when I sent her the message she’d have the key to open it. If the key is intercepted en route or doesn’t get through then it’s just the key – it doesn’t reveal anything about the message because the message hasn’t been sent yet.
Then I’d send Alice the encoded message.
Finally, Alice could open the message using the key I’d sent her.
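The modern, computerized version of that same dance fits in a few lines of openssl; here’s a sketch using AES-256, with a passphrase standing in for the shared key and the file names purely for illustration:

# Bob encrypts the message with the shared key…
echo "Attack at dawn" > message.txt
openssl enc -aes-256-cbc -salt -in message.txt -out message.enc -k "TheSharedKey"
# …and Alice decrypts it with the same shared key
openssl enc -d -aes-256-cbc -in message.enc -k "TheSharedKey"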
While these traditional, straightforward substitution ciphers aren’t exactly sophisticated in the modern world, they were sort of the bleeding edge of encryption back in the day, but even if you were using something as well-designed as a one-time pad you were still stuck with the essential flaw at the root of this approach: key distribution. How you got the key from one party to another was problematic; you could go and meet the person face-to-face and give them the key, you could use an existing secure channel of communication to get the key to them, or you could give the key to someone you trusted and have them hand the key over for you. None of those are iron-clad secure options, and all of them are rife with breakpoints and weaknesses.
Next time: The History of Encryption (1970AD to today)…
One of the enjoyable things about this whole Global Pandemic Jamboree has been that – like the great bulk of my peers – I’ve had a chance to contemplate the greater truths of life and ponder the big questions about the fleeting nature of time and the fragility of man. I’ve also had a lot of time to sort through a lot of things, and not in the incorporeal emotional self-actualization sort of way, but in the “I-have-a-pile-of-dead-gear” kind of way. Which, if you’re weighing those two options, is a lot more fun and involves slightly less weeping.
Specifically, I had five dead Mac minis in various states of decay and destruction; all of them having been torn down and worked on and declared unfixable in a reasonable economic sense by clients, and all in the big box of broken stuff that I run to eWaste recycling every few months. Now, when someone hands me a dead computer and tells me that they don’t want to ever see it again I do the sensible, grown-up and professional thing of immediately destroying the hard drive with a hammer and drill so that any data on the thing is irretrievable, which generally leaves me with about 90% of a computer that most likely has some terminal issue (either diagnosed or undiagnosed). Bad RAM. Bad logic board. Torn cabling, damaged fans, iffy or burned-out power supplies. Given enough time, one could go through all those machines and work out what was wrong with each and possibly – given even more time – cobble them together into some FrankenMac.
Well, of late I seem for some incalculable reason to have time to burn, so I did exactly that. Behold my creation! Look upon it and weep:
Okay, well it’s not that impressive. I feel I rather hyped it up, and that’s on me. But it thinks that it’s a 2010 Mac mini with a 2.4GHz Core 2 Duo, 8GB RAM and an SSD, and all-in-all it’s a decent-if-unremarkable box that I leave hooked up to the stereo in my garage and also use to feed print jobs to my 3D printer.
I like to 3D print at night because I have no patience and if something’s going to take 9 hours to reach fruition then I’d rather as much of that as possible take place while I am dead to the world. The thing is, however, that at night time the garage is really, really cold and drafty, and also I saw a rat in there one time and there might also be spooky ghosts, so I don’t generally feel excited about sitting around in there at night and setting up print jobs. I’d far rather just push the files to the thing and screen share in so I can make the thing run overnight and go check on my prints the following morning when it’s warm and the rats are sleeping and the chances of seeing sets of macabre, incorporeal twins holding hands and telling me I’m going to play with them forever are substantially lower. Also, the Mac mini is precariously perched on a shelf and prone to falling down if I fuss with the USB ports. I mean, look at that picture – it’s basically held in place by two sharpies and a certain amount of luck.
I have the thing set up with key-based SSH and a static IP, and because I do a fair amount of fussing around and tweaking of 3D print files I wanted to be able to just have the print folder on my Mac Pro (in my home office, warm, no rats, only occasionally haunted by a cat who sits in my good chair and won’t move) synchronize with the print folder on the FrankenMini, which meant setting up rsync.
rsync and I are old friends and go back a ways; right back to the earlier days of OS X, pre-Time Machine. Backing up a Mac back then was usually accomplished with backup software like Retrospect, but while Retrospect was fine at what it did I ran into a few situations where one didn’t so much need a specialized software backup as much as one needed to shove a bunch of files from point A to point B and have that operation not copy over anything that hadn’t changed.
rsync was simple, elegant and fast. Simply put, you feed rsync a folder/directory/file and it synchronizes that input with another folder/directory/file – either locally on the same computer or with another computer. When you fire up rsync to have it sync to another computer it opens up an SSH session and fires up another rsync instance on that remote machine. The two instances of rsync compare notes – checking sizes, timestamps and checksums – and any file that doesn’t match gets copied to the appropriate machine.
There are a couple of gotchas that you have to bear in mind, though. The big one is that, out of the box, rsync doesn’t preserve a file’s dates when it copies it to the remote Mac; in plain English, if it’s April 28th and you copy over a file last modified on April 25th, then when you look at the copy on the remote Mac it shows a date of April 28th. This isn’t ideal, but can be fixed by invoking archive mode using the -a flag, which (among other things) preserves modification times.
The second problem is that rsync neglects to copy over extended attributes and resource forks. Extended attributes are bits of metadata in the file that can contain things like quarantine info, label information and so on. You can take a look at the extended attributes of a file by invoking the xattr command thus:
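# list every extended attribute (and its value) on a file – the file name here is just an example
xattr -l some_print_file.stl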
Resource forks are a different issue – and one that’s increasingly become less pressing as time goes by now that they’ve been effectively deprecated in macOS, but both extended attributes and resource fork problems can be resolved by invoking the -E flag.
So, to business. The FrankenMini sits at IP address 10.0.0.64 and already has SSH enabled via the Sharing prefpane. If I have a folder called “3D_Print_Files” in my ~/Documents folder then the command would look like this:
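rsync -H -a -E "/Users/daveb/Documents/3D_Print_Files/" dave@10.0.0.64:"/Users/Dave/Documents/3D_Print_Files"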
So, what’s happening there? Well, we’re invoking rsync with the -H (preserve hard links), -a (archive mode, to preserve dates, times and other attributes) and -E (extended attributes and resource forks) options, then pointing to the source folder that the initial sync is running from (in this case, "/Users/daveb/Documents/3D_Print_Files/" on my Mac Pro in my non-haunted and non-rat-infested office). The destination is reached through an SSH session by feeding rsync the username of an account and the IP address of the remote machine (dave@10.0.0.64), followed by the location that you’ll be copying the data to ("/Users/Dave/Documents/3D_Print_Files").
The net result will be that you’ll end up copying the contents of the source folder to the destination folder, ignoring files that have not been updated. And this is all well and good, but what do you do if you want to actually make the two folders sync with each other rather than just copying source to destination? All that’s required is to switch the command around and append it to the first command, like so:
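# the second rsync is just the first one reversed; && runs it once the first has finished
rsync -H -a -E "/Users/daveb/Documents/3D_Print_Files/" dave@10.0.0.64:"/Users/Dave/Documents/3D_Print_Files" && rsync -H -a -E dave@10.0.0.64:"/Users/Dave/Documents/3D_Print_Files/" "/Users/daveb/Documents/3D_Print_Files"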
This is, admittedly, rather a mouthful, but you can easily make a command alias out of it by throwing it into your .profile/.zprofile, which should simplify matters considerably.
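Something along these lines, with the alias name being entirely your call:

alias printsync='rsync -H -a -E "/Users/daveb/Documents/3D_Print_Files/" dave@10.0.0.64:"/Users/Dave/Documents/3D_Print_Files" && rsync -H -a -E dave@10.0.0.64:"/Users/Dave/Documents/3D_Print_Files/" "/Users/daveb/Documents/3D_Print_Files"'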
This is, in essence, a pretty simple trick, but all talk of rats and ghosts aside it’s a remarkably easy way of replicating and synchronizing potentially vast amounts of data over both local and wide-area networks. I wouldn’t suggest using it over the internet without some kind of additional protection (either certificate-based or an honest-to-goodness VPN), but if either of those is an option then rsync can be a pretty versatile tool in the arsenal of anyone who isn’t going out much these days…
A couple of weeks ago I posted this little ditty about how to cold boot your Mac remotely, and one of the options nestled in the screenshot I included with that post was the intriguing “Wake for network access” checkbox.
It’s an option that I’ve mostly avoided paying any attention to over the years, because the computers I mostly tend to deal with are ones that are seldom (if ever) actually turned off. There are lots of reasons why this is the case, but they tend to fall into the two general buckets of This Computer Needs To Stay On Because People Are Getting Files From It, and the equally capitalized This Computer Needs To Stay On Because People Are Getting Services On It. The idea of needing to wake a computer remotely seemed a fringe issue at best, but oh how the wheel turns and time makes fools of us all etc etc.
We’re living in a world where remotely tinkering with non-servers is starting to be more of a pressing requirement than a suggestion. And there are ways of dealing with those kinds of requirements that aren’t immediately obvious. If you put the average, intelligent person in front of a screen with a checkbox marked “Wake for network access” then chances are they’d note that the preference pane the option is nestled in is the Energy Saver pane, and come to the logical conclusion that if this is the place that controls when your computer goes to sleep, and there’s an option there for something to do with waking over a network, then it’s not a huge or illogical conceptual jump to decide that if their computer was asleep and you tried to connect to it over the network then the computer would wake up.
This, it turns out, is absolutely true. And – in a more precise and accurate sense – an utter, utter lie. It’s entirely possible to wake your computer remotely, but there’s a specific way of doing it that isn’t immediately obvious and that isn’t ever really called out, and that way is by the implementation and use of a Magic Packet.
Okay, I should probably explain what a Magic Packet is. I could also call it a magic packet, sans capitalization, but that’s less fun, and if you’re going to reference supernatural capabilities in your IT doublespeak then you might as well lean into it. A Magic Packet is a specially-crafted network packet that contains the MAC address of the computer that it’s intended to reach, sent out over UDP to port 0, 7 or 9. It’s a highly targeted, highly specific finger prod to the sleeping computer that only that computer will respond to, and if you want to make one on your Mac then you have to jump through a hoop or two.
Firstly, you need a way to make a Magic Packet. This is probably unsurprising for anyone who’s read more than half a dozen of my posts, but I’m going to use homebrew to install a package to create and send said Magic Packet, thus:
brew install wakeonlan
Secondly, you’ll need some information about the computer you’re crafting the Magic Packet for – notably its IP address, the port number you’re aiming for, and its MAC address. The port number is 9 on macOS (at least, it is in every case I’ve seen so far, and if that doesn’t work you can try ports 0 and 7), and the command should be formatted something like this:
wakeonlan -i 10.0.0.1 -p 9 12:34:56:78:ab:cd
Plug that into the Terminal on the computer you’re trying to connect from, and all things being equal it’ll find its way to the targeted computer and raise it from its slumbers…
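(If you don’t have the target Mac’s MAC address handy, running the following on that machine – before it goes to sleep, obviously – will list the hardware address of each network interface:)

networksetup -listallhardwareports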
While IT consulting (where the particular scope and method of execution I tend to pursue is as a sort of Freelance Sysadmin For Hire) is an essential service, I’m finding that things are… quiet on the job front. When your business is largely composed of helping other businesses design, implement and maintain their Apple IT infrastructure and those businesses are either on hiatus or just plain twiddling their thumbs, then you find your workload substantially reduced.
Which, actually, is fine by me; like most people in my particular nook and cranny of the industry I work by appointment and very flexible hours, and have been doing it long enough and well enough that work is always there. That’s great, but the downside is that one rarely has time to go and do new things, or branch out and try something new. Still, now I have more free time while the world descends deeper into lockdown, so I’m using that time to learn something more about Swift via the miracle of Swift Playgrounds.
Swift Playgrounds is squarely pitched at kids, but as a forty-six-year-old I can report that it’s not insulting or too dreadful, and if you have nothing else to do and a passing interest then I urge you to take a look. I’ve spent the last week or so working through a lot of lessons that require you to guide a weird little cartoon guy through mazes, picking up jewels and toggling switches, and once that’s done I’m going to have a crack at the 100 Days of Swift thing. I’m great at knocking out some quick and dirty bash scripts, but I have an Anthropology degree and plunged straight out of college into working in 1994 and have never really stopped since; ergo I have zero formal coding experience and have had to kind of reverse-engineer things and self-teach in odd trajectories as I’ve gone along.
So far it’s been a bit of an eye-opener for exactly how rusty I am in all kinds of areas. I’ve managed to get through the first module with moderate ease, but the second module is a mite more challenging because it falls squarely into the trap of all Coding-For-Idiots type things; that it’s clearly written by people who know what they’re doing.
I’m sure you know what I’m talking about. You want to learn about a thing, so you buy a book that purports to teach you about it in thirty days, or twenty-four hours or something equivalent. The subject matter is immaterial; it could be C++, brain surgery, or washing machine repair. Let’s say it’s washing machine repair because that’s more fun. At any rate, the first couple of lessons are helpful and practical; they explain the broad strokes of what a washing machine is and a high-level view of its operation and fundamental purpose (ergo, dirty pants and hot water and soap go in one end and clean pants come out of the other), and then they detail the major bits of the washing machine – the drum, hoses, knobs and switches and whatnot.
And then they blindfold you and throw you into the deep end of a swimming pool with chains and weights around your feet by launching into, I don’t know, spindle torque ratio and foot-pounds per inch of water filtration, because it doesn’t occur to the jerk who wrote the thing that you have at best a passing level of experience at the operation of the WashMatic 9000™, didn’t do much physics or math in school, and that the little you remember about those things is buried beneath the better part of thirty years’ worth of other stuff that turns out to be considerably more pressing on a day-to-day basis. You’re a moron, is what I’m trying to get across here. You’re a moron and the book is probably written for what an expert thinks constitutes a moron, which is in actuality a person who is a less-qualified expert.
Anyway, a lot of the bits of the second module so far are in the mold of “This is an open-ended challenge that you can address any way you like using the things you’ve learned,” and you look at that little cartoon chap and his gems and switches and profound lack of spatial awareness and elementary common sense and shake your head wearily.
Further, it’s clear that whoever wrote this thing subsists on a diet of lies, and does in fact have a very particular way that they want you to solve the puzzle, despite their protestations to the contrary. You can feel the silent judgment.
Still. I have a very good friend from back in the UK who has taken to calling this current moment an “Unexpected Holiday,” which I think is a wonderfully optimistic way of looking at things. This is time out from our regularly scheduled lives, and it’s best to look at it as an opportunity and not a curse. We can’t go to the gym (because gyms are plague houses) and we can’t see our friends except as tiny, blocky images on Zoom, and staring at the walls – while fun – has a definite shelf life, so we might as well try and do something useful with the time. If anyone needs me then you’ll know where to find me, but until then I’ll be trying to work out why none of my code works while a little cartoon man on my screen glares at me, radiating mild disappointment.
In this period of ongoing peril and, well, frankly terrifyingly uncertain business environment, I’ve talked to a lot of folks who are having to radically adjust the equipment they use and the way that they use that equipment; i.e., not being in the same room as some of the computers that they’re used to being in the same room with.
This is, let’s face it, mostly servers. Entirely servers, actually, and as seems to be the spirit of the age right now it’s clear that things were easier back in the Good Old Days. Of course, I’m not talking about the Good Old Days pre-pandemic; I’m talking about the days when you could pick up the telephone, break out your credit card, and talk to a nice person at Apple and give them a lot of money in exchange for an Xserve.
Xserves were fabulous. Yes, they weren’t price or feature comparable to some of the offerings you’d get from a good PC server vendor, and they lacked a lot of things like expandability and were sorely limited by their standard 1U height and the number of drives you could throw in one, but if you wanted a macOS (Sorry, “Mac OS X”) server then they were the only game in town and they did an excellent job. I’ve talked in the past about how loud they were, but they included such niceties as redundant power supplies and a Lights Out Management (LOM) unit, which allowed you to do all kinds of hardware management, including remote-booting the thing.
You can’t do that with a Mac today. If the box on your desk or in your server room down the hall or (as seems to be the case of late) forty miles down the road in a locked, sealed building is off then there’s ostensibly no way of turning it on unless you’re willing to lean over and push a button, walk down the hall and push a button, or get in your car, drive through the proto-apocalyptic wasteland that we apparently now all live in, break into a building, and push a button. If your Mac suffers a power outage then you can absolutely check this box:
…and the thing will obligingly fire up once power is restored, and that’s great, but it’s not the same thing as being able to shut a computer down and then fire it up again.
However, as is often the case with this kind of thing, there’s a way of doing it through the command-line if you don’t mind getting a little dirty, and by dirty I mean performing a dirty shutdown. What’s a dirty shutdown, you ask? Well, I’m glad you brought it up, because it’s a neat little trick that’s built into the OS that allows you to combine all the convenience of a nice, orderly shutdown with the exciting thrill of suddenly yanking the power cable out of the back of the thing like some kind of foaming maniac.
If you take a look at the man page for shutdown (man shutdown in the Terminal), you’ll see a handful of options.
The one to pay attention to there is the -u flag, which normally comes into play when you have your computer attached to a UPS. When the power goes out and your UPS kicks in, the computer will struggle on for as long as possible, but provided your UPS has a management port and a USB cable plugged into your Mac, then when the UPS realizes it’s about to run out of juice it’ll send a command to your Mac to shut down – but in a way that leaves the Mac believing the power was simply cut, so that instead of just sitting there once power is restored, your Mac automatically boots.
It’s ingenious, and it also allows you to shut down your Mac in a nice, orderly fashion and then, once you’re ready to reapply power, have it automatically fire up instead of sitting there like a big metal lump.
So, if you just shut your Mac down from the command line by typing in sudo shutdown -h now then it’d halt the system, quit everything that needed quitting, and turn itself off in a neat, orderly fashion. Whether you manually disconnect power or not, the computer will require you to physically power it on by hitting the power button.
But if you invoke the -u flag and type in sudo shutdown -h -u now then the computer will do all the sensible, practical, good-housekeeping things you’d expect from a proper shutdown, and then tell itself that the power was unexpectedly disconnected so that when power is restored the computer will just fire right up.
“That’s great,” I hear you say. “But what good is it if, as you mentioned earlier, I’m stuck at home in glorious self-isolation with a year’s worth of bathroom tissue while the End of the World rages around me?”
Well, firstly that’s a little hysterical of you, but I’ll skate over that bit because you make a decent point. Where this option is useful is if you have a UPS with a network connection and remote on/off capabilities. There are options out there; just because there’s no LOM still extant on the Mac doesn’t mean that you can’t effectively outsource that component – just remote into the Mac, issue the dirty shutdown, then hop onto the UPS and turn it off or (if it’s possible with your model) turn off the power to the Mac. When you want to cold boot your Mac you just remote into the UPS, power it on, and that in turn supplies power to the Mac which – because it thinks it was unexpectedly shut down – automatically boots from cold, leaving you at the login screen.
This was a fun little project I poked around in recently while helping another ACN member with a data recovery project.
Back when Apple started shipping computers with Fusion drives, said Fusion drives were wonderful things. Essentially what they did was pair a 128GB SSD with a 1TB+ rotational hard drive and use CoreStorage to create a single logical volume that packed the two together and gave you the best of both worlds; a fast drive that held data for immediate use and a slower drive that was substantially larger and fed data to the fast drive. What you ended up with was what appeared to be a 1TB+ hard drive that was somewhat slower than a (greatly more expensive) SSD, but a lot faster than a regular 7200 rpm hard drive.
The trade-off was that – at least in the Mac mini – it reduced the number of available drive slots to one, which was frustrating because the prior generation of Mac mini had two drive slots, thus allowing you to make a mirrored RAID of the boot drive. Which was very handy if you were using said Mac mini as a server, which a lot of people were doing. To get around the issue I’d break the Fusion drive into its constituent elements – a 128GB SSD and a larger hard drive – and then create a RAID mirror of the two. It wasn’t ideal (because the mirrored RAID would take the size of the smallest element – i.e., you’d only be able to use 128GB out of that 1TB+ hard drive), but if you were really just looking to use the internal drives for boot data and some caching then it was just fine. After all, actual user data would usually sit on a fast external RAID anyway.
Breaking the Fusion drive was pretty straightforward, and worked thus:
• First, boot from an external drive or put the Mac mini in Target Disk Mode, connected to another Mac.
• Second, open the Terminal and plug in diskutil coreStorage list to get a list of all the connected coreStorage volumes.
• Third, make a note of the logical volume group’s universally unique identifier (or lvgUUID if you don’t want to have to say that all the time). It’s a 32-character hexadecimal string expressed in five groups, and it looks something like 1234a678-1a23-1b23-1c23-1234567890ab
• Finally, append that lvgUUID to the end of a diskutil command to delete the logical volume group, thus: diskutil coreStorage delete 1234a678-1a23-1b23-1c23-1234567890ab
Lo and behold, you’d now have a plain, regular, basic SSD and hard drive available for your RAIDing pleasure.
But what if you want to go the other way? If you have a small SSD and a large hard drive and you don’t really want two drives clogging the joint up and would prefer one faster drive? Well, it turns out that rolling your own Fusion drive requires a couple more steps than breaking one, but isn’t that difficult.
• First, get a list of disks connected to your Mac by typing diskutil list
• Choose the disks you want to use for your new Fusion drive. Let’s say they’re /dev/disk1 and /dev/disk2
Note: Exercise caution and take a moment over that last step. Where things go from this point on involve erasing and breaking things in your computer. Make absolutely sure that you’ve chosen the correct disks because otherwise Very Bad Things can happen.
• Type diskutil coreStorage create MyNewFusionDrive /dev/disk1 /dev/disk2
• You’ll be shown another lvgUUID. Make a note of that, as you’ll need it in the next step.
• Use that lvgUUID to create a new volume, stipulating the type, name, and amount of the drive you want to use. For example: diskutil coreStorage createVolume 1234a678-1a23-1b23-1c23-1234567890ab jhfs+ "NewFusionDrive" 100%
…and that’s about it. You should now have a new Fusion drive on your desktop.