Making Big Sur a little faster.

The new M1 Macs are things of wonder and delight. Cool, inexpensive and fast. Staggeringly, ludicrously fast. Everyone should have one. In fact, if you’re reading this and you don’t own one then you should probably close this window (I mean, bookmark it first) and go and buy one and then come back and we’ll continue.

You did that? Good. Congratulations! I don’t think I know anyone who is disappointed with the whole Apple Silicon experience except for my friend who, for the sake of argument, we’ll call David. Because this is in fact his name.

My friend bought an M1 Mac mini on my recommendation to do some development work on, and was immediately frustrated that many of the tools he wants to use don’t work properly (at least, not yet). CocoaPods, Docker, the native version of Homebrew – these are all actively in the works, and that’s fine; they might be a month or two out from being fully supported and reliably working. He understands that. What seems to bother him is that despite all the purported speed increases, the Mac mini feels… slow.

The problem, I suspect, is Big Sur. Whether you’re running the slowest, oldest, least supported i3 iMac or a $55k Mac Pro, there are some things that remain true across the board. Speed is all about perception, and Big Sur puts some tangible roadblocks in place in the name of User Experience – deliberate trades of speed for aesthetics.

One example is the wretched rollover delay in the Finder. Here, for example, is what I see when I roll over the (stupidly hidden) spot to reveal the proxy icon for the current folder:

(Pardon the odd artifacts at the top of the image. gifsicle is a fickle beast.)

Ignoring the fact that there’s no earthly reason why this icon should be hidden in the first place, there’s the matter of the delay before it pops into being. It’s a little more than a second, no matter what computer you’re using. A little more than a second may not sound like a lot, but if you’re spending the day – purely for the sake of argument – doing a lot of filing of documents and general end-of-year/spring cleaning on your Apple IT Consulting business then you’re going to end up rolling over proxy icons a lot, and that grain of sand between your toes is going to grate.

Fortunately there’s a simple way of fixing it by adjusting the default rollover delay via the Terminal, thus:

defaults write com.apple.Finder NSToolbarTitleViewRolloverDelay -float 0; killall Finder

This tweaks the delay down to 0 seconds, which ends up looking like this:

Bonkers!
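If the instant pop-in turns out to be too twitchy for your taste, the tweak is just as easy to undo – a sketch, deleting the custom key so the Finder falls back to its stock delay:

```shell
# Remove the custom rollover delay and relaunch the Finder;
# Big Sur's default delay comes back automatically
defaults delete com.apple.Finder NSToolbarTitleViewRolloverDelay
killall Finder
```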

Still, that’s not enough. There are other delays built into macOS that (depending on your taste) you might want to decrease or dispose of altogether. My personal favorites are shortening the initial delay before a held key starts repeating and speeding up the key repeat rate itself. Thankfully, these can both be demonstrated with a single animated gif and tweaked with a couple of simple Terminal commands.

The “before” gif:

This isn’t terrible, but if you’d rather have something more responsive then you can try the following Terminal command:

defaults write NSGlobalDomain InitialKeyRepeat -int 12; defaults write NSGlobalDomain KeyRepeat -int 1

Which – after you log out and in again – gets you this:

I feel the need – the need for… a lot of k’s being typed and deleted.
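As with the Finder tweak, this one is easy to walk back if you find yourself vaporizing paragraphs by accident – delete the custom keys and log out and in again:

```shell
# Revert to the system-default key repeat settings;
# takes effect after logging out and back in
defaults delete NSGlobalDomain InitialKeyRepeat
defaults delete NSGlobalDomain KeyRepeat
```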

These are, it has to be said, small hacks. Trifles. Almost inconsequential in the grander scheme of things, and definitely fringe cases where your mileage may vary. Not everyone, after all, is happy with a version of the world where hitting the delete key for a fraction longer than they normally would can result in vast swaths of text being instantly deleted. But on the other hand, they do point to a greater issue: that the way you work with your computer is wholly subject to the decisions made on your behalf by other people. Apple’s fit and finish on their operating systems is… well, if not without reproach (because that’s a bold claim), at least the result of rather more careful thought and review than the fit and finish of most of their competitors.

Still, having some flexibility and the ability to customize the way your Mac operates isn’t a repudiation of Apple’s work. If anything, it complements it. Or, at least, makes it a little faster.

Quitting Zoom on a Mac (or: Zoomkiller™).

Okay. This isn’t some philosophical nonsense about how to tear yourself away from the screen, or how it’s important to compartmentalize your life, or how we’re all engendering negative self-imagery because we’re looking at pictures of ourselves all day or anything of that nature. This, in a very literal sense, is about how to Quit Zoom.

Or more precisely, how to get Zoom to quit. I’m sure that I’m not alone in my frustrations in this department; you’re done with your Zoom call and everyone is saying their farewells, so you hit the “Leave” button in the bottom right hand corner of the Zoom window, thus:

Buh-bye!

and… don’t leave, because you have to click “Leave Meeting” a second time:

I SAID BUH-BYE DAMMIT WHY ARE YOU STILL HERE

I mean, sure, the world is full of horrors right now, but while we’re all carrying rocks on our backs with names like “insurrection” and “pandemic” and “looming economic apocalypse” it’s the little things that seem to get me down the most. The flecks of grit in one’s proverbial sock. As I spend a lot of every day on Zoom, this is the one that chafes me the most, so I decided to do something about it, and that something is a little script/application that I call… Zoomkiller.

Okay, the name is a work in progress. I’m workshopping a few alternatives. I should probably also come up with an icon while I’m at it, because right now it looks like this:

Behold.

The reason it looks like this is because it is, in fact, an AppleScript application. I could have written the thing in Swift (and may actually go that route at some point), but AppleScript (while increasingly archaic) is pretty great for knocking together very simple tools to do very simple jobs.

Telling AppleScript to quit an open application is easy – you just tell it to activate the application and then use System Events to feed it the appropriate keystroke, like so:

If Zoom was something that played nice and quit right away with one simple Command-Q keystroke then that’d be all that was required (and, more to the point, something simple enough that Zoomkiller wouldn’t be required at all), but unfortunately it’s not that simple. When you try and quit Zoom, you get that pesky second “Leave Meeting” button that pops up on the screen – fortunately that can be killed with AppleScript and System Events again:

This does the same thing as the first script, but then additionally tells System Events to go look at the front-most window and click the first button (which is, in this case, “Leave Meeting”). The next step is to save the thing as an AppleScript application by choosing “Export” from the “File” menu and selecting the appropriate options as seen below:

Et voila! There are just a couple more steps to get this thing to run properly. Firstly, you’ll need to tell your Mac that it’s okay for Zoomkiller to control your computer (i.e., that it’s allowed to use System Events to send keystrokes to Zoom to tell it to quit).

First, open the Security & Privacy pane in System Preferences:


Click on the padlock in the bottom-left corner to unlock the prefpane (you’ll need to enter your computer password), select “Accessibility” in the list on the left, then click the “+” icon, navigate to where you put your Zoomkiller application and click “Open”.

Make sure the box next to “Zoomkiller” is checked…

…and that’s about it. I’ve dragged Zoomkiller into my Dock so that at the end of each call I can just tap on the icon – because the application is saved as run-only and because it doesn’t stay open after running it just neatly quits Zoom and then quits itself without any further required input.

PS: Copy/pasteable code below. Enjoy(?)

activate application "zoom.us"

tell application "System Events" to keystroke "q" using command down

tell application "System Events"
	tell front window of (first application process whose frontmost is true)
		click button 1
	end tell
end tell
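If you’d rather not involve Script Editor at all, the same logic can be run straight from the Terminal via osascript – a sketch, which assumes Zoom’s application name is still “zoom.us” and that whatever runs it (Terminal, in this case) has been granted the same Accessibility permission:

```shell
# Run the Zoomkiller logic directly from the shell
osascript <<'EOF'
activate application "zoom.us"
tell application "System Events" to keystroke "q" using command down
tell application "System Events"
	tell front window of (first application process whose frontmost is true)
		click button 1
	end tell
end tell
EOF
```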

Modeling Threats (or, Helen Keller vs. Russian Hackers).

As I’ve made abundantly clear to a lot of people over the years, we should all have been paying more attention to Helen Keller.

Okay, maybe I should clarify that somewhat.

If you were to start talking about Helen Keller to the proverbial person-in-the-street then there are certain touchstones of knowledge that you’ll see come into play. Some people will have no idea who Helen Keller is – which I get, because she’s a much bigger deal in the USA than anywhere else – so for those folks I’d mention the whole deaf-and-blind thing (which is what most people customarily jump to), as well as the whole Socialism thing (an association that significantly fewer people make), but those are kind of table stakes. They’re showy and textbook inspirational/surprising, but differently-abled socialists aren’t uniquely unknown.

No, the thing I’d really draw attention to is her incisive grasp of the nuances of late twentieth and early twenty-first century Information Security, which was as gimlet-sharp as it was eerily predictive – the latter being quite a feat considering that she died in 1968.

What I’m referring to – of course – is this quote:

“Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure.”

Helen Keller – “The Open Door”

Now, I’ve used this quote in a lot of talks in a lot of hotel conference rooms near a lot of airports over the years, and once this current plague is over I hope to use it in a lot more, because it’s something that’s absolutely worth absorbing and I like the weird little swag bags you sometimes get when you’re a speaker at a conference because I never seem to have enough pens and novelty iPhone chargers. Pursuing absolute security is like blundering into your nearest National Park, blindly hoping to bump into a Unicorn; no matter what your intentions you’re going to end up cold and tired and wet and disappointed.

Security doesn’t exist. It’s a mental and conceptual model that we’ve created so that we can sleep at night, and nothing more. You are not safe from lightning strikes on clear summer days. You can be as cautious and careful as possible and be rigorous in your use of PPE and distancing and still get COVID-19. A meteor could crash through your house while you sleep. Terrible, unexplained, fatal things happen to people all over the world on a daily basis; sure, sometimes the odds are fantastically slim, but you’re still playing a game with those odds.

It’s usually after making that point that I bring up the next slide in the deck, which looks a little something like this:

When we talk about “security” what we’re really talking about is “the mitigation of risk.”

This is an unpleasant truth, and when I address it in front of the aforementioned crowds in the aforementioned hotel conference rooms I can usually see the audience do one of two things; dutifully nod and go back to screwing around on Facebook (which is what most conference attendees – whose presence is mandated by their bosses – do anyway), or actually start to pay attention (which, as a person standing on a stage who spent two hours the night before rehearsing in the bathroom mirror, is something I heartily approve of).

This, in an admittedly roundabout fashion, brings us around to this story. If you’re disinclined to go and follow and read that link then I’ll lay out the broad strokes thus: during the current imbroglio that is the SolarWinds investigations another security firm (CrowdStrike) reported that Russian hackers had used compromised access to the vendor that sold it Microsoft Office 365 licenses in order to attempt to harvest emails that – because of the nature of CrowdStrike as a security company – would probably have contained privileged information.

Apple people traditionally like to throw shade at PC people, and as an Apple person I hate being lumped in with that crowd. Talking trash about a company just because you think that its products are inferior to the products of the company that you prefer doesn’t make you right, or sophisticated, or some arbiter of taste. It means that you have an opinion – which is fine – and that your opinion is something that you can’t keep to yourself – which isn’t, and which in turn makes you an asshole. I don’t want to court controversy here, but I’d venture that not being an asshole is a low bar that everyone should really try and clear, or at least strive to.

So, with that in mind you have to step back and consider this story with a little distance. Sure, this sounds bad – and it is bad – but fair’s fair; CrowdStrike have been forthcoming about the attempted breach, and while it’s fun to sling mud and hand out blame there’s really no fault in their actions – nor is there any in the actions of Microsoft (without more information on the nature of the breach on the vendor’s side there’s little value in making accusations and throwing accountability around, but my hunch is that if we’re hearing about all of this then they’ve probably done the smart thing and been transparent about the issue too.)

The problem here was not Microsoft, nor the vendor, nor CrowdStrike. Giving all of them the benefit of the doubt they may have acted perfectly. No, the problem is that if the model you’ve created to ensure your organizational security isn’t correct then no matter how well that model is implemented it’s always going to be subject to compromise. Keller’s maxim is universal. Security is superstition.

This article isn’t really about the Microsofts and CrowdStrikes of the world; I can’t speak to that scale of company because I’m an independent IT consultant in a coastal SoCal town, and because I rarely actually bump into that kind of setup. Amazon and Raytheon and the handful of larger enterprises that have facilities around here aren’t my clients, because they’re dealing with issues of size and complexity that entail a full-time staff of in-house dedicated IT support. I’ve been that in-house guy before, and I’m very happy that those clients aren’t in my base (because I like to take weekends off and I like to sleep nights and because being on call 24/7/365 is exhausting). But there are lessons to be taken here that can be applied to smaller-scale organizations. So:

It’s your data.

Passwords, certificates, login credentials – they’re your data. They don’t belong to anyone else, and they shouldn’t be given to anyone else. Not third-party vendors, not indiscriminately handed out to employees. Not even given to IT consultants.

I don’t keep passwords, because it’s a terrible business practice. Leaving aside the blatantly horrifying liability issues, I firmly believe that clients have the right to fire their IT consultants (and vice versa). I’m fortunate in that I’ve only been fired by one client, and in that case it was less of a firing than a mutual parting of ways (after all, if you’re moving from an all-macOS Server infrastructure to an all-Windows Server infrastructure then there’s relatively little point in keeping the Apple consultant around when the Windows consultant is right there in the mix). On the other hand, I’ve fired a handful of clients over the years, and having both parties able to walk away amicably and secure in the knowledge that nobody owes anybody anything makes that process easier.

Good IT consulting outfits don’t retain your data. If you forget your password and call your IT person, and they can look it up for you in their records then you should fire that IT person immediately and then change all of your passwords – and I mean all of them. The offending IT person can be of the stoutest character and unimpeachable ethical standards, but if they have your data then they’re a threat because if a third party can get to their data then that third party also owns yours. There’s little point investing in locks and alarm systems if the person who maintains the locks and alarms leaves your keys and codes lying around their office for their cleaning staff to see.

Your data doesn’t just live on your computer.

It’s not just about the services and credentials that you use inside your organization; it’s also about the services and credentials that reside outside your organization. Some of the most critical things that affect your ability to do business are some of the most-often overlooked – a prime example is DNS.

DNS – and this is an immensely stripped down explanation so don’t shoot me – is the mechanism by which the internet knows where computers and servers actually are and what they do. DNS servers tell the world where your website is, and where your email server resides. Unless you’re hosting your own DNS server (which is thankfully a rarer occurrence these days) then your DNS host has the power to – deliberately or not – completely cut off and isolate your organization from the internet.
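You don’t need to be a DNS expert to at least find out who’s answering for your domain. A quick sketch using dig, with example.com standing in for your own domain:

```shell
# Which name servers are authoritative for the domain?
# (This tells you who actually hosts your DNS.)
dig +short NS example.com

# Where do the web and mail records currently point?
dig +short A example.com
dig +short MX example.com
```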

This sounds like a worst-case scenario, but I’ve seen this a lot more than I’d like; organizations that let third-parties administer their DNS without giving any control to the organization. If I had a nickel for every time I’ve asked a client if they have any documentation or information on where their DNS is hosted and then had nothing in return but a blank, panicked stare then I’d have… I don’t know. A lot of nickels.

And again, that’s understandable. The structural mechanics of How The Internet Works are a conceptual handful, and there’s no practical need for most people to stay on top of that as a matter of course. But there is a need to have that information on hand when it’s needed.

Have a secure repository for your data.

Yes, yes, I know; “Security is mostly a superstition” and so forth. That’s a given, but the rest of the Keller quote – the part that I don’t generally like to include in the talks at the conferences in the hotels near the airports – runs as follows: “Life is either a daring adventure, or nothing.” It’s easy to take that as a carefree expression of the vital need to embrace a zest for life, but I look at it as something more chilling. “Daring adventures” are a hell of a lot better than doing nothing, after all. It’s better to have a carefully thought-through and protected repository for your data than it is to write it down in a book and put it on your desk, or throw everything in a FileMaker database or spreadsheet on your server marked “Passwords”.

(Note: Those are actual examples of things I’ve moved actual people away from doing.)

Take some time and find the right tool for documentation. Something cloud-based would be good; better yet, something with a lot of redundancy and good encryption options. I like IT Glue, but that’s just a personal preference. If you’re at an appropriate scale then look into having something written for you by a decent web/database person – there are options to explore in this space. Just don’t put everything into one bucket that lives on your computer (which can be stolen/damaged/hacked/just decide to die one day), and don’t blindly throw it all onto a Google Workspace or Office 365 document either.

Know where your keys are.

I don’t mean “keys” in the PKI sense (well, okay, maybe I do, but that’s not where I’m going with this) – I mean the keys to the things that run your business. I’ve already mentioned DNS, but there’s also Domain Registration. Do you know where your domain is registered? Whose account was used for the registration? When it expires? What about organizational Apple IDs used to administer Apple Business Manager, or APNS? What about software licenses? How are you tracking that data? Whose account was used to purchase those, and from what vendor?

That’s a lot of questions – I apologize, but only a little; it’s a regrettable truth that when your job often involves going into organizations experiencing systemic trouble, you tend to only see the worst-case scenarios, and in those kinds of cases it’s not uncommon to discover that the absolutely critical piece of information or credential is locked behind a defunct email address, or was originally set up sans documentation by a former employee, or – more often than not – is just missing in action without a trace or a clue.

There’s nothing that can’t be fixed (well, very little that can’t be fixed), but some fixes are well-documented and quickly squared away because there’s a clear chain of information, and other fixes can take literally days of complete downtime and mountains of billable hours. Don’t get me wrong; I enjoy billable hours – I just don’t particularly enjoy writing them for reasons that could have easily been averted.

Make sure that you’re being diligent in how you implement products and services, and that there are established procedures for how those are accessed and serviced. Apple recommends having a specific Apple ID for organizations just for administering Volume Purchasing/MDM, but I’d go further and suggest setting up a specific administrative account that’s used as the contact for everything else – web, DNS, registration, licensing, the whole nine yards. Not an account that’s regularly used by an individual, either – an account that’s purely reserved for that specific purpose and that alone, with critical notifications forwarded to people inside the organization that need to see them.

What Helen Keller got wrong.

To be perfectly fair, there’s not a lot to say here. The only thing I’d throw into the ring would be that – philosophically at least – there’s little value in accepting the “Security as superstition” maxim at face value. Yes, the broad strokes are accurate, but while the idea of safety is something that we’ve constructed with our meaty, inefficient animal brains, we’ve also managed to create systems that are more capable of dealing in absolutes. Nobody is going to start declaring Public Key Infrastructure the greatest invention since fire/the wheel/sliced bread etc, but the fact remains that we live in a world where danger is starting to run on diminishing returns. You can narrow the risks – slice them into thinner swathes than ever before – because now we have better, stronger, more precise tools that we can use to protect ourselves.

These are – as has become abundantly clear over the last twelve months or so – Unprecedented Times. While we’re trotting out tired platitudes I’ll throw “the world is getting smaller” into the ring, because that and the unprecedented-times bit tie in pretty neatly; when we’re able to communicate faster and more completely then our connections contract. They become less nuanced, more immediate, and far, far more polarizing – creating systems so vast that simple fixes are less likely to be attended to and more likely to be overlooked or misunderstood.

Earlier I wrote about Security as an abstract mental model – and I think that’s an important way to consider it. Models are – to my way of thinking – the primary way that we’re able to containerize the outside world and build frames of reference and connections that adequately map our personal constructs of our personal worlds to the reality we actually live in. Both people and organizations exist and integrate with each other by creating and maintaining those models of the world, and with rapid change those are models that have to be updated and checked and refitted on a continual basis – and this applies whether you’re considering correct personal pronoun usage or assessing organizational network weaknesses. The only ways to stay relevant are to be continuously reactive and adaptive in updating and maintaining those models, and attacks like the SolarWinds incident point to bad actors being similarly more determined and focused.

At the end of the day, the responsibility for your data lies with you and you alone. It’s an uncomfortable truth (after all, it’s much more fun to blame someone else when everything goes horribly wrong), so selecting the right tools and approaches to try to protect that data is something best done carefully and with considered understanding. Your model is never going to be perfect, but the sooner you can accept and internalize that, the sooner you can adopt a critical approach to remedying potential threats.

YouTube-dl and the RIAA

This is, believe it or not, the busiest part of the year if you’re an IT consultant. There are excellent reasons for this; businesses are generally keen to close out budgets, or make purchases and roll out products prior to the end of the tax year, or even just feel the universal urge to do whatever it takes to tie a neat knot around the year and go into the next one loaded for bear. The practical upshot of that is that I haven’t written anything for this thing for a while because I’ve been far too busy running around and actually working (which is exactly the kind of problem you want to have if you’re me).

Still, I have a running list of things I wanted to touch on because I think they’re interesting and because this blog is as much as anything else a resource that I can go back and look at when I need to remember some piece of syntax that gets shoved out of my aging cranium to make space for something more critical. One of those things concerns my first love (at least in a professional sense), which is Homebrew.

(Okay, honorary shout-out to Synology as my other first love. It’s the weirdest, nerdiest form of polygamy.)

Homebrew is fabulous. I like to support it financially when I can because it’s a simple and effective way of putting together tools that enable me to do my job. I mean, sure, there are other ways of building programs and tools that are perfectly functional, but if IT consultants are plumbers then Homebrew is an organization that just hands out wrenches for free. You can’t beat that value, and you’d be a fool to try. Still, now and again open-source tools run afoul of the rest of the world, and it can be jarring to reach for a wrench only to find that – against all expectation – it’s not where you left it.

I’m referring in this case to YouTube-dl, and the recent debacle over its equally recent removal and reinstatement on GitHub. You can read that link, or I can outline the rough lines of the story, which is pretty simple. YouTube-dl is a tool that you can feed a streaming video URL to, and which will then process that URL to extract the video and audio feeds and download them to your computer. Despite the name it’s a tool that works on, well, basically every service that streams non-DRM encoded video, and while it’s (fairly predictably) used to scrape video content from the internet that providers may not want scraped, it also has a raft of legitimate and fair use applications. I work with educators who’ll use it to pull academically-licensed video down to machines for presentations and research purposes, for example.

Still, the problematic use cases of the tool (notably the bit about being capable of illegally downloading and saving copyrighted music and video) ran afoul of the RIAA, who complained to GitHub that they were hosting a tool that was in contravention of copyright law, and in turn GitHub pulled the tool and left a lot of proverbial plumbers without proverbial wrenches.

Fortunately, all parties saw sense and restored Youtube-dl within days, but during the outage I had to do some very specific poking around about how to find and build the tool from non-GitHub resources, and in turn how to use it to manually select video and audio formats, which turned out to be rather interesting.

As anyone who’s uploaded video to YouTube will tell you, it’s not simply a process of lobbing it a file and then going and having a cup of tea while it puts all the ones and zeroes onto a webpage for your delectation and delight. That’d be convenient, but there’s more to it; YouTube is accessed by all kinds of client computers and devices over all kinds of connections, and as such it likes to have a lot of different versions of those video and audio files to serve out to those computers and devices. After all, if I’m watching a video on my iPhone on an LTE connection and the only video they can send my iPhone is the same full-resolution 4k file they’d send to my desktop wired to fiber then I’d stand in the rain watching the thing spool for a very long time while I asked myself whether I really needed to spend thirty minutes waiting to watch a cat video, and whether I should have considered my life choices with greater attention to time-management and the ownership and use of an umbrella.

Fortunately, YouTube-dl makes it easy to look at all the different versions of audio and video streams associated with a YouTube video, and then allows you to pick and choose which versions you’d like to download. Let’s start with a classic educational staple – Dr. Richard P. Astley’s rendition of the immortal classic “I Shall Never Give You Up.”

The YouTube URL for this is https://youtu.be/dQw4w9WgXcQ.

If you copy that URL and paste into YouTube-dl while invoking the -F option then you’ll get this impressive looking list:
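For reference, the command that produces that list is simply:

```shell
# List every available audio and video format for the video
youtube-dl -F https://youtu.be/dQw4w9WgXcQ
```

One wrinkle worth knowing: when you go on to request separate video and audio streams, youtube-dl hands the merging off to ffmpeg, so you’ll want ffmpeg installed (brew install ffmpeg will do it) before going down that road.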

From looking at that list, we can see that there are four audio-only formats, seventeen video-only formats, and one combo option right at the very end. YouTube takes this menu, looks at your connection and the device you’re using, and then selects the best options from the list, but we can use YouTube-dl to make our own choices by invoking the -f option. Let’s say we want to download the highest-possible quality video file (the 1080p mp4 option – number 137 on the list) and the worst quality audio file (the 49k webm option – number 249 on the list). To download that using YouTube-dl you’d use the command:

youtube-dl -f 137+249 https://youtu.be/dQw4w9WgXcQ

The output will look something like this:

…et voila – you now have a copy of the file in the home folder on your computer.

All talk of plumbers and nonsense aside, I can kind of see the concern about the use of tools like YouTube-dl. It is, after all, a tool that can be used for legitimate and illegitimate purposes alike, and how it’s wielded is largely a matter of personal judgment and policy. On a practical level, I suspect that the prospect of it being used wholesale as a mainstream piracy tool is limited by the fact that unless you want to go spelunking into video and audio formats, a default invocation of the command will feed you a pretty good copy of a video that’s really no better than just viewing the content for free on the internet. Further, if you’re of a mind to go digging around and trying to pull down higher-quality files then you’ll still usually end up with a sub-perfect product quality-wise, as well as something that will probably eat up a lot of storage space – in short, there are easier and better ways of getting to content than adopting this as a default part of your arsenal…

How Not To Go Insane In A Warehouse (or: replacing code signatures for fun and profit)

I spent most of last weekend in a warehouse in Carpinteria, spouting an ever more specific series of salty oaths and curses.

This isn’t – just so we’re on the same page here – the way that I normally like to spend my weekends. It’s terribly important to maintain a healthy work/life balance (particularly in These Trying Times); keeping work and personal matters separate is a cornerstone of mental health, and it’s vital to stay grounded and in touch with the people who are most important to you.

This is by way of saying that when I’m issuing salty oaths and curses on most weekends they are chiefly directed at my family, who are quick and open about returning them in kind.

Still, now and again the nature of honest toil involves going and working on a weekend, which is fine. A lot of substantive IT work gets done at hours when it’s less likely to cause massive disruption. Like most IT consultants, I’m no stranger to walking into a client office at 5pm and walking out at 8am the next morning. Or decamping to an onsite location for a weekend, for that matter. This is the nature of the gig; you can’t make fundamental changes to infrastructure while said infrastructure is being actively… infrastructed. It’s like repairing a car engine while the thing is hauling down the freeway. It can be done, but it’s not going to end well, there are going to be enormously destructive crashes that cost everyone a lot of money and time, and someone’s probably going to end up in the hospital.

So, last weekend should have been pretty straightforward. The migration from the client’s ancient and ailing Mac mini server to a nice, shiny new Synology NAS had been completed without incident – chiefly because Synology makes a solid, well-designed product – and all that was left to do was to install a remote access application on each Mac desktop so that the client could use their cloud-based accounting package. It was a simple matter of installing some applications, doing a little light configuration, then being home in time to sink a couple of cocktails, basking in the general glow of a Job Well Done.

Except that, no, it wasn’t a simple matter. The remote access application flatly refused to launch on about half the desktops for no discernible reason whatsoever. Same hardware, same operating system and patches, but while half of them worked perfectly, the rest not only refused to launch but wouldn’t even bounce in the Dock.

This is unusual. Well-written applications either run just fine or give you some kind of polite-if-terse indication of why they fail to do so. They don’t, as a matter of course, just sit there, unresponsive, glowering at you from the Dock while you rack your brain and try to work out what’s wrong. A perusal of Console.app turned up an error message, thus:

Termination Reason: Namespace CODESIGNING, Code 0x1

…which is the kind of thing that makes your blood run cold once you figure out what it means. Essentially, the program won’t run because the OS has decided that it either isn’t signed (see last week’s article on Gatekeeper) or because its signature is invalid. Downloading a fresh copy of the app from the Mac App Store made no difference, which pointed me in the direction of the OS thinking that the signature was invalid because anything you download from the App Store is, by the nature of the transaction, signed.

So, how to fix?

My first thought was that maybe Gatekeeper on those Macs was somehow at fault. Other downloaded apps worked just fine, though, which rather scuppered that theory. My second thought was that maybe there was some issue with the app being flagged as damaged by the Macs, so I tried manually stripping the quarantine attribute using xattr, like so:

sudo xattr -rd com.apple.quarantine /Applications/Microsoft\ Remote\ Desktop.app

(Spoiler – the app was Microsoft Remote Desktop).
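As an aside, it’s worth checking whether the quarantine attribute is actually present before (or after) stripping it; xattr can list and print attributes as well as delete them. A quick sketch, using the same app path as above:

```shell
# List all extended attributes on the app bundle;
# com.apple.quarantine will show up here if the app is quarantined.
xattr -l /Applications/Microsoft\ Remote\ Desktop.app

# Print just the quarantine attribute's value (this exits with an
# error if the attribute isn't set, which is itself useful information).
xattr -p com.apple.quarantine /Applications/Microsoft\ Remote\ Desktop.app
```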

Finally, I stumbled across the codesign command (installed as part of the Xcode Command Line Tools). I’d run into it before while tinkering around with Homebrew, and on reading the man page found that it had options for removing, altering, and replacing existing code signatures. The Xcode Command Line Tools can be installed from Terminal.app like so:

sudo xcode-select --install

The first move was to remove the existing code signature:

sudo codesign --force --deep --remove-signature /Applications/Microsoft\ Remote\ Desktop.app

Next, now that the existing signature has been removed, we can re-sign the app with an ad-hoc signature – that’s what the lone - after --sign means – using the --force flag to replace any existing signature and the --deep flag to ensure that any nested code is re-signed as well:

sudo codesign --force --deep --sign - /Applications/Microsoft\ Remote\ Desktop.app
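Before declaring victory it’s worth sanity-checking the result; codesign can both verify and display the signature that’s now on the app. A quick sketch – since we signed with the ad-hoc identity, the display output should report an adhoc signature rather than a Developer ID:

```shell
# Verify the new signature; --deep walks any nested code, and
# --verbose=2 reports each bundle as it's checked.
codesign --verify --deep --verbose=2 /Applications/Microsoft\ Remote\ Desktop.app

# Display the signature's details (look for "Signature=adhoc").
codesign --display --verbose=2 /Applications/Microsoft\ Remote\ Desktop.app
```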

Thankfully, this worked like a charm, allowing all parties to return to their regularly scheduled weekend drinking. I mean families. Right? Right.

Let’s talk about Gatekeeper

This week has been a quiet news week, which is probably a good thing. What with the election shenanigans raging to and fro I’ve sort of peered at the news with a cautious, jaundiced eye and been pleased that the default recommended behavior has not – for once – been to actively recoil. When we’re living in a world where my news feed is sending me stories about byzantine security measures in macOS and not doubling up on every species and varietal of The Current Apocalypse then I’m prone to taking the win. It’s the little things, etc.

Still, some little things are – depending on where your priorities lie – big things. I refer of course to the minor brouhaha about macOS and Gatekeeper – the former being a hugely popular and recently updated operating system (you may have heard of it) and the latter being Apple’s ingenious quarantining mechanism designed to keep nasty things from happening to your Mac. The current controversy kicked off with an article from November 12th which pointed out that with the advent of Big Sur/macOS 10.16/macOS 11, Apple was constantly collecting a lot of information about what programs you were opening, where you were when you opened them, what the time and date was, and what computer you were opening them on. Further, it noted that as a partner in PRISM, Apple was essentially turning all this data over to The Powers That Be in order that Big Brother can track your every movement.

This, I’m sure we can all agree, sounds Bad. But – as in so much of life – an ounce or two of perspective can often throw things into a different light.

First of all, what the heck is Gatekeeper?

Good Question.

Thanks!

Gatekeeper’s ancestor was a system that Apple put in place back in 2007 which eventually evolved into a two-part mechanism designed to make sure that anything you download and install on your Mac isn’t riddled with malware. Initially it was a pretty basic tool; applications downloaded to your computer were quarantined until you explicitly gave permission to open them for the first time, and provided you knew what you were doing (or were at least prepared to say that you knew what you were doing) the presumption was that nothing was apt to go awry. A year or two later Apple upgraded the system so that Mac OS X would check the downloaded application for known malware threats, and then the whole thing was spruced up again with Mac OS X Lion to incorporate signed apps.

And it’s this mechanism – the checking for signed apps – that’s really the crux of the recent concern. In a nutshell, here’s how the process works.

  1. A developer – let’s call him Dave – wants to write a macOS application. He signs up for an Apple Developer account, goes and bangs out his masterpiece in Xcode, and signs it with a certificate denoting his Developer ID.
  2. A customer – let’s call him Bob – purchases this amazing application. When he runs it, his Mac looks at the application, notes the certificate that Dave signed it with, then sends an inquiry to Apple to make sure that the certificate is legitimate and belongs to an actual Apple-approved developer. Said inquiry is in the form of a hash identifying the certificate that the application was signed with.
  3. The OCSP (Online Certificate Status Protocol) responder at Apple looks at the hash it’s been sent, notes that yes, everything looks okay, and then tells Bob’s Mac that the application is okay to run.
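You can poke at this machinery yourself from the Terminal: codesign will show you the signing details and certificate chain from step 1, and spctl – the command-line front end to Gatekeeper’s assessment engine – will give you the verdict from steps 2 and 3. A quick sketch, using Safari as a stand-in for Dave’s masterpiece:

```shell
# Show the signing details and certificate chain for the app (step 1).
codesign --display --verbose=2 /Applications/Safari.app

# Ask Gatekeeper for its verdict (steps 2 and 3); "accepted"
# means the signature and its certificate checked out.
spctl --assess --verbose /Applications/Safari.app
```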

This system is not without its flaws, but they tend to be of the obfuscatory variety rather than the destructive sort. The worst of the bunch is that occasionally a developer certificate will expire, so when the application is launched the hash pushed to OCSP is refused, leading to a frustrating inability to open the application. Fortunately, renewing a developer certificate is a relatively simple process.

There’s also been some alarm about the fact that these hashes are sent with non-encrypted http instead of https, although logic dictates that if you use a certificate-encrypted https session to check for an OCSP certificate then you’ll first need to decrypt the https certificate, and eventually it’s certificates all the way down, which would at least give all the elephants something to look at.

Still, the idea that your computer is constantly sending a stream of information about what applications you’re running out to The World™ sans encryption isn’t a great look. So much so that Apple has published an updated document on the subject, thus.

It’s comforting to read that kind of thing, but one should also trust and verify. Thankfully, a lot of that kind of heavy lifting is done by better and wiser minds than mine; for example, Jacopo Jannone, who published an article that did a fascinating deep dive into the OCSP process. I’d encourage anyone who’s remotely interested in looking under the hood of their computer to follow his process. I mean, I know that I did; using Wireshark to capture an OCSP request for CodeRunner.app, I was able to pull the serial number of the developer certificate and match it to what was being sent to OCSP, as well as noting that once the request was sent the first time the app was opened after a reboot, no further requests were sent, even after repeatedly opening and closing the app.
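If you’d rather not set up Wireshark, even a quick tcpdump will show the traffic in question, since the requests go to ocsp.apple.com over plain HTTP on port 80. A sketch – en0 is an assumption here, so substitute your active network interface:

```shell
# Watch OCSP traffic to Apple while launching an app; -A prints
# packet contents as ASCII so you can eyeball the requests.
sudo tcpdump -i en0 -A host ocsp.apple.com and port 80
```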

So, a storm in a teacup, then. Apple isn’t tracking your every move via application opening and closing (or if they are then they’re doing a shockingly inefficient and terribly-implemented job of it). There’s still a temptation to disable your Mac’s ability to go talk to OCSP, but that’s a temptation best resisted. Gatekeeper might seem like an authoritarian mechanism, but it’s a vast improvement on the absence of any kind of check or balance. In a world without rational, transparent security – even the kind that leaves an uncertain taste in your mouth – it’s all too easy to end up with a fully open sandbox where applications can run unmetered and unchecked, and send out a lot more information than the time and date you anonymously open a browser…

Apple Silicon for the Pro market?

Well, today Apple pulled the wraps off their new toys in a manner that surprised almost nobody at all. We got new portables and a new Mac mini (which hadn’t been talked about a great deal by anyone, but seemed a shoo-in on the grounds that the Apple Silicon Dev kit was… also a Mac mini). And these are all great products, and will do very well because they’ll do what they do very well.

What they won’t do very well? Not much, but I can think of one glaring problem if you’re anyone who works in design or video, or does a lot of CAD work – and it’s not really Apple’s fault. What’s the problem? I’ll give you a hint in the form of the accessories available for the new MacBook Pro:

What’s missing here?

Too oblique? Okay, that’s understandable – if you’re looking at a forest and don’t notice that it’s missing a tree, then that’s not on you. Here, I’ll make it easier by showing you the accessories available for the older, Intel-based MacBook Pro:

Ruh-roh.

Now I am – and this is no surprise to anyone who knows me – not what you’d call a world expert on chip design, but it’s pretty clear to me that in putting the entire system (CPU, cache, Neural Engine, fabric, GPU and DRAM) on a single chip you’re somewhat boxed into the idea that you’re stuck with integrated graphics. And if that’s the case, then said system on a chip is – by its nature – not going to have any mechanism to go and talk to discrete graphics, whether that’s a graphics card or an eGPU. It’s counter to the design of the thing.

Still, no eGPU support isn’t entirely surprising when you consider the nature of the beast(s). These are, after all, not Pro machines. Yes, yes, I know: the MacBook Pro has “Pro” in the name and is used by professionals, but the 13-inch model isn’t historically renowned as the hard-hitting graphical powerhouse of the line. And, to be fair, the M1’s octo-core GPU generates some very decent numbers – from peering uneasily at screen grabs and doing some back-of-the-napkin math it looks like the thing’ll churn out about half the performance of a Radeon RX 580, which while admittedly a long way from the top of the heap isn’t exactly chump change, either.

I don’t mean to dump on these new machines. They’re really, really great products (and I’ve already ordered a couple of Airs for my kids). As a first run, it’s extremely impressive that Apple’s managed to come up with machines that are bound to make Intel go a little pale and wobbly-footed, but it’s also true that having machines this powerful at the low end of the range generates some interesting questions about the rest of the product line. Benchmarks are not yet forthcoming, but based on the claimed speed increases over the older, Intel-based versions of the MacBook Air, MacBook Pro and Mac mini it rather looks like those computers will cheerfully stomp all over the iMac and iMac Pro in raw performance, and even give the Mac Pro a bit of a turn.

Except when it comes to tasks and pro use-cases that involve significant GPU compute needs, that is, which raises two questions (both of which I have actually been asked this morning):

Are the rest of Apple’s desktop products suddenly lame ducks?

If you’re, say, the manager of a small publishing company with a limited budget, what reason is there to go and buy four new iMacs? After all, there are going to be new, M1-based iMacs coming out at some point.

Are Apple’s Pro computers ever going to be good again?

Following on from the last question – is it even possible for Apple to make a chip that can compete with some of the higher-end graphics cards? After all, the incumbents are companies with years of experience and deep benches of R&D expertise, and even assuming that it’s possible to compete, why would you want to buy a pro machine with non-upgradeable graphics?

I won’t lie; this was an awkward conversation. But I’ll put down what I said after a couple of minutes of thought. Maybe – just maybe – we’re thinking about what a Pro machine is, and coming up with some answers that are informed by what we’ve been conditioned to believe instead of thinking flexibly. Maybe we’re looking at it all wrong.

Graphics cards are awful devices. No, really; finicky, phenomenally expensive, prone to failure and oft laid low by software problems (not to mention hot and noisy and wildly, wildly power-hungry). One of the rumors about the new Mac line has been about a supposed new Mac Pro – much smaller – and the feedback I’ve read has solidly fallen into discussions about how there’ll be no room for expandability, adding extra cards and storage and so forth. Maybe we’re looking at this kind of problem the same way that people looked at the first cars and sniffed, derisively, pointing out that there was no place to attach the horses to the front of the thing.

I don’t think we’re going to see Macs (and by association, a lot of the PC market) using discrete graphics in the future. Yes, there are people who upgrade their pro machines with new-and-improved hardware as time goes by, but I’ve worked with those clients for the thick end of two decades, and the vast, vast majority of them? When they’re ready for an upgraded graphics card, they look at the budget, look at the depreciation schedule, and just buy a new computer.

There’s a reason that Apple rolled out Apple Silicon the way that it has: consumer/prosumer machines first (because the M1’s secret weapon is its absurdly low power footprint), and then, later on, a followup product with significantly more graphics cores. After all, if a Mac mini with eight GPU cores can come within punching distance of a decent graphics card, what can a twelve-core chip do? Or a sixteen-core? Or a thirty-two-core?

My money says that we’ll see an Apple Silicon iMac within the year, with graphical performance that’ll jump up and down all over the current iMac range. In the meantime, though, I think there’ll be a lot of difficult decisions to make about sticking with Intel-based Macs…

Burning Down The House (or: What To Do When All Your Stuff Is On Fire.)

This is, admittedly, sort of close to home in a very literal sense; a few weeks ago I walked out of my back door, took a bracing lungful of clear morning air, coughed, and then noticed that the large expanse of legally-contested Mesa behind my yard was, in fact, on fire. Thus:

Admittedly less dramatic than half an hour earlier, but in my defense I’d been too busy not being on fire to spend time composing artful studies on the savage beauty of the open flame. Also, it was very smoky.

Now, I don’t know about you, but this kind of thing is something I typically find… perturbing. You know what? I’m not ashamed to say it. I was perturbed. You could even make an argument for my being alarmed. There’s an initial instinct to be very British about it (which – being British – comes easily to me) and look at the encroaching flames and say things like “Right,” and “Ah,” and “I see,” and then go back indoors and spend a couple of minutes unpacking the emotional load that comes with oncoming disaster in order that those emotions can be best suppressed or – better yet – filed away, never to be spoken of (because, again, British). Once that initial instinct is out of the way, I’m glad to report that I behaved in an adult and responsible fashion and immediately called the local authorities that deal with these sorts of things, let them know who I was and where I was and what the issue was, and then hung up and watched the roiling inferno as it bore down upon my person, my loved ones, the collection of creatures that I think of as pets and that they think of as roommates and, oh yes, all my possessions.

I had a nice, long wait until fire trucks turned up. I’m not complaining; mine is an out-of-the-way sort of place that’s hard to get to. We don’t often see any kind of law enforcement-type activity back here, which is fine because we’re not exactly a hotbed of crime and public disorder (save for the party house on the next street and the occasional, terrified middle-class teenagers doing low-level drug deals in the most highly conspicuous, terrified middle-class teenager way underneath the sole streetlight at the end of the road). I spent some of that time messing around with hoses and calling neighbors, and the rest of the time running through the mental checklist of What I’d Do If My House Burned Down.

Other than protecting all my stuff and the welfare of my loved ones, it turned out to be a fairly short list. I have a go-bag with a laptop that’s signed into iCloud so that I can get to iCloud Keychain, a portable hard drive with a lot of copies of insurance information on, a key to a safe deposit box and a bunch of chargers and cables and batteries. The idea would be that if I had a few seconds I could grab that thing, throw it into whatever vehicle is nearest, and then leave the family manse to the flames, secure in the knowledge that as long as I can get to some kind of internet access I’ll be able to start piecing everything else together. Further, everyone in the household backs up to BackBlaze, so provided nothing wildly unexpected happens most (if not all) of everyone’s data should be available in some form.

So, well done me. Roll out the red carpet, do mischief to the fatted calf and so forth. The whole nine yards. But now that self-congratulatory claptrap is out of the way it’s probably worth establishing some basic guidelines so that you, gentle reader, can figure out what to do when sheets of flame come roaring and hissing down the hill behind your house while your neighbors talk to the local TV reporter and don’t pull their weight vis-a-vis the oncoming inferno. Yes, you can imagine the petty, annoyed tone in that last sentence. Oh please, it’s not like they read this, anyway.

Firstly, have a backup strategy that includes a cloud component. Or two; after all, while belt and suspenders are solid, belt and suspenders and another pair of suspenders are better. And optionally another belt. When I talk to clients about backup strategies I like to present three separate scenarios, ranging from mundane to ridiculous, that hopefully spell out the value in mixing different backup techniques. They are:

You lose a file or accidentally erase something. This one’s easy; use some kind of directly attached storage on your computer or (if you’re accessing files on a server) have some kind of directly attached storage hooked up to the server. This mostly works out to be some kind of big hard drive, and the product I chiefly recommend to clients to actually run the backups is Apple’s Time Machine. No, it’s not perfect, and yes, once in a blue moon it’ll just stop working, but it’s built into the OS, it’s extremely easy to use, and it’s… well, it’s reliable enough.
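Incidentally, if you’re setting up or checking Time Machine on a lot of client machines, the tmutil command covers the basics without a trip to System Preferences. A quick sketch – the volume name here is a made-up example:

```shell
# Point Time Machine at a backup volume (the path is hypothetical).
sudo tmutil setdestination /Volumes/BackupDrive

# Kick off a backup right away rather than waiting for the hourly run.
tmutil startbackup

# List completed backups, and show the most recent one.
tmutil listbackups
tmutil latestbackup
```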

Your office/home burns to the ground. A little more alarming, but still possible. In that case I like to recommend a combined strategy of offline physical and (optionally) online cloud backups – a set of hard drives that are rotated out on a regular schedule and then the inactive drives stored at a separate physical location. If disaster strikes then you can retrieve the offsite backup, plug it into a replacement computer (or server), and within an hour or two you’re back in business.

The entire State of California sinks into the unforgiving blackness of the Pacific Ocean, or else is enveloped in relentless white-hot fire that pulls the air from your lungs even as it blackens the sky, bringing utter destruction and the irretrievable loss of not only your business premises but every other place where you might have a backup stored. Funny thing, this one. There was once a time when I’d trot this out and there’d be a certain amount of good-natured eye-rolling and general amusement. Of late this has started to tip over the edge from “kind of thing that people laugh about” to “kind of thing that people laugh nervously about.” It’s California. Sun. Surf. Gorgeous scenery. The Golden Coast. The American Riviera.

Except during fire season, when it unaccountably has a tendency to turn into bloody Mordor at the drop of a bloody hat. Having your data backed up to the cloud is a solid hedge against this kind of disaster. Cloud backups are massively slower than direct backups because, well, internet, but services like BackBlaze will send you a hard drive containing all of your data if you give them about a hundred bucks, which seems like an astonishingly efficient way of recovering huge amounts of data without having to muck around with hotel wifi.

Actual daytime photo of Santa Barbara. See that blue sky on the right? The part that isn’t swamped by oxidized trees? Just don’t go outside, and if you do, don’t breathe.

Secondly, have some kind of secure repository of information that you might need in case of disaster.

Now, that sounds completely subjective, as it doesn’t make a lot of distinctions about what either “secure,” “repository,” or “information” actually consist of. I’ll try and break this down in reverse order because, hey, that’s a little more fun.

Information. What information? That kind of depends on what you’re currently doing with your data and what you need to be able to do. For example: if you use two-factor authentication through an app like Google Authenticator then you should have a bunch of backup codes for each of the services you use so that (once your house has burned down with your phone in it) you’ll be able to set up a new device to generate new two-factor codes. Or this might mean something physical – paper copies of insurance documentation, deeds, birth certificates – stuff that’s not necessarily irreplaceable but certainly not something that can be re-sourced at the drop of a hat.

Repository. This can be digital, but it doesn’t have to be. That’s important to note; I think it’s widely assumed that having everything on a thumb drive or on The Cloud™ is better than traditional media, but that’s not always the case. When you’re talking about the aforementioned birth certificates, deeds etc then that’s kind of a moot point – there are some things that are only valid in dead tree format – but having vital information on paper can be a huge time saver. Of course, there are a slew of issues with having physical copies of things, so it’s worth mentioning…

Secure. Having that data – no matter what its form – secure is vital. Critical data at rest is always a target of some sort, or at least vulnerable to opportunism. In the simplest sense: having a notebook labelled “Passwords and Bank Account Info” lying around in case you need to grab it on the way out the door is only great in that one, narrow moment. Until then it’s just a book with everything required to remake or ruin your life, just lying around the house. Don’t do that. A safe deposit box at your local bank runs to a couple of hundred bucks a year. Put it in a vault, get a spare key, and put the spare key somewhere safe. If we’re not talking about physical security then think about data and encryption. Really consider things like the keys you’re using to encrypt your data, and where records of those keys might be kept.

Finally – and this is something that I don’t see mentioned a lot, but that I personally think is vital – have spare hardware. Nobody stands in the smoking ruins of their home, brushes themselves off, and says “Oh good. Now I can go stand around at the Apple Store for an hour spending upwards of a couple of grand on a new laptop. At last, the excuse that I’ve been looking for. Oh happy day! This was all worthwhile.”

Okay, maybe there are a few really odd people out there, but I’m willing to bet they’re the exception rather than the rule. I go the other route – the laptop I have shoved in my IT go-bag is a 2013 MacBook Pro running macOS Mojave. It’s not some speed demon, and it’s not running anything except the basic, stock applications. I power the thing on a couple of times a year, kick the tires, make sure that everything seems in working order, then shut it off and put it away again. It’s not a thing for tinkering with; it’s the thing that I’m going to know is working properly when I absolutely need it to. You don’t need a laptop, though; an iPad does the job just fine for most things, and even an old iPhone or iPod touch will be serviceable in a pinch – provided that whatever you use is recognized by iCloud or your cloud solution of preference.

I’m reliably informed that – this being the internet and all – many people reading this are not in Southern California, but I think these simple guidelines work no matter where you are and no matter the disasters you’d like to mitigate. Fire, flood, violent political unrest – at the end of the day you end up coming back to something that I bang on about endlessly both in the written word and in person whenever I’m called on to speak to a room full of people who are checking Facebook while ostensibly paying attention at conferences: Helen Keller was right. Security is mostly superstition. It doesn’t exist in nature. The sun may rise and set from one day to the next for months, years, decades – but it’s an unwise person who believes that we’re playing anything other than a numbers game. One day, the axe will fall. Possibly when you least expect it – you’ll stroll out of your back door with coffee cup in hand, behold the fire as it races forward, borne by the cool morning breeze, and in a moment your world will shift minutely but significantly.

Thankfully, my house didn’t burn down (which was a great relief to all parties concerned), but yours might. Or your office, or in extreme cases, the city where you live. It may sound doom-and-gloom, but there’s no getting away from that; you can’t escape the risk, and you can’t prevent it. But, with a few careful decisions and an ounce or two of forethought, you can mitigate those risks and prepare for the worst.

After all, this is 2020. Preparing for the worst has practically become a national sport at this point… 🙂

Big Sur (or: It’s the little things that count).

I wrote last time about how updates to operating systems never fail to arouse the deepest passions in the bosoms of their users. Tears of joy vs gnashing of teeth, wearing of sackcloth and so forth. Any time you take something fundamental that people build their workflows on and make any kind of change you’re always going to court disaster and heartbreak, but very, very occasionally there’s a change that people are pretty much universally going to applaud.

Sometimes those things are the result of careful design or listening to the needs of the clamoring public. Sometimes those things are happy mistakes. Sometimes they’re just in the spirit of trying something new. And sometimes – just once in a while – they’re the result of looking at a prior change and then rolling it back. Big Sur (as of its current Public Beta 10) has a bunch of all of those – both large and small – but the one that I’m most excited-slash-relieved about is probably the most trivial: they fixed Show Original.

For anyone who doesn’t use file aliases (and yes, I’m including directories as being files because we could get into a useless syntactic discussion about that but this is my blog, dammit) an alias is a link to a file that lives at an alternate location. Maybe – like me – you have a bunch of folders that you regularly use but that you don’t want to have actually live on your Desktop. Or on your computer at all, for that matter. Maybe they live on an external drive, or a file server, or a NAS. There are lots of reasons for going that route, after all; shared access, retention, backup strategies – but it’s also just a lot more convenient to have the things you want to access close at hand. Now and again, though, you might want to know where the original file is or navigate to it, and in macOS Catalina that meant either scouring Finder menus or memorizing a bunch of keystrokes designed to break your own left hand. Here, this is what I mean:

I mean, look at that key combination. It’s… well, I don’t really have the words. “Bonkers” seems like a decent shot, though. I think what I’m aiming for is something more puzzling than rage-inducing; after all, decisions on this kind of thing aren’t made by accident because they are, after all, decisions. At some point, some bright, eager software engineer scratched his or her chin and said “You know what? There are too many people who are inadvertently attempting to find aliases of their files, and yes, Bob, I know that we’re talking about a fringe number of cases where someone has to select the alias in the Finder and then hit a keystroke or two to reveal the location of the individual file, but it’s still a risk that’s not worth taking, dammit. After all, nobody in their right mind wants to live in the kind of world where you can puncture the fragile illusion of how the file system works. Something must be done, so I think we should immediately implement a series of keystrokes that are difficult if not torturous to perform so that this eventuality never comes to fruition and so that we can sleep at night secure in the knowledge that we’ve demonstrably done something with our time. Sushi, anyone?”

(At least, I’m guessing that’s more or less how it went based on the small amount of time I’ve spent working for huge corporations and the much, much smaller amount of time I’ve spent at Infinite Loop eating Sushi at Caffe Macs.)

Just to make really, really sure that this was as unpleasant as possible, they then decided to use all the modifier keys on the keyboard that I – David Ball – have a hell of a time remembering.

Now, I might be alone in this one, and if that’s the case then – if you’ll pardon the awkward metaphors – I’ll hold my hand up and take it on the chin. I’ve been working with Apple and macOS in a professional capacity for the better part of a quarter of a century, and while I’m comfortable with what the Command key looks like (⌘), the other two – Option (⌥) and Control (⌃) – are things that I have to sneak a peek at the keyboard for (which in the case of Control is particularly inexcusable because I’m always in the Terminal and am constantly hitting that key on a daily – if not hourly – basis). And so, this is me; and if I – someone who ostensibly knows his way around macOS – am reduced to making confused, whining noises when trying to find the original of an alias then it’s a decent bet that other people are, too.

Of course, adding insult to injury is that the non-modifier key involved is the “A” key, which is smack dab in the middle of the three modifiers and up two rows, so no matter whether you hit the modifiers with whichever combination of fingers you’d care to go with you either end up twisting a finger around or doing some kind of wrist contortion to hit all four keys at once. It’s hard to take this as anything other than some kind of deliberate assault (albeit, a low-stakes one).

It didn’t use to be this way. Prior to macOS Catalina you could hit Command-R in the Finder with an alias selected, which was simple and easy to mnemonically accommodate (“Command-R means… find ’riginal”?), and thankfully this is something that they’ve re-implemented in Big Sur, thus:

So, all is right with the world. We can all go back to our daily lives secure in the knowledge that this travesty has been resolved, that this great iniquity has been cast aside, and that once again we are free as a people to stand in the light of the sun and eat breakfast under newer, better skies. Okay, there might be the slightest hint of an over-reach in that sentiment; after all, many other things are still in assorted states of brokenness, but the point has enough legs to stand on (albeit in a highly qualified fashion).

The lesson here is not that you need to make a lot of changes to the way that you think about how operating systems work; it’s that there’s value in doing something right the first time, then having the clarity to appreciate and acknowledge that value. I’m not mad because Apple changed a keystroke combination that, let’s face it, most people would go to the appropriate pull-down menu to access anyway. I mean, that’s a fairly small hill to die on. No, the thing that concerns and annoys me is that while most good designers make decisions based on forethought and conceptual understanding, there’s always the pitfall of thinking that you’re going to do something better, and that the work that has been done before lacks value and needs to be remedied.

And it’s not something unique to Apple. I’ve seen that tendency in code that I’ve written and revisited, and I imagine that a lot of people in my shoes have had the same experience. Sometimes you’re so eager to improve something that you fall into the trap of thinking that everything you touch needs to be changed, and you end up throwing up roadblocks to productivity that didn’t need to be put there. You can measure twice and cut once as often as you like, but if the thing doesn’t need to be cut at all? Well. The next best thing you can do is to have the humility to undo your mistakes.

Everything Old is New And Broken

Today I shall be writing about macOS Big Sur, which is even as we speak wending its way through both the Public and Developer Beta programs while the good folks at Apple either glue bits on or hack them off with what we hope is some kind of grand design in mind.

New Operating Systems are polarizing things, and that’s the kind of attitude and behavior that I enjoy, nay, encourage. I like the seasonal nature of disgruntlement; the perennial moaning and scowling and disapprobation that people inevitably kick into high gear whenever what is – on a fundamental level – the single most important thing they use on their computer is improved. Or reimagined. Or… well, changed. There’s some kind of metaphor in there for the nature of man; we all come into the world fresh-faced and brimming with optimism, and then get stuck in our ways and end up grey-haired and angry at progress and prone to using words like “whelp” and “whippersnapper” in cold blood.

It’s freeing to realize this, because it’s a realization that sets you free. You’re not going to like change, and you’re not going to welcome it because you’re older and wiser than you used to be – and that’s okay. The measure of character is not how well we accommodate change, but how well we tolerate it. The test of your maturity lies in rolling with those punches and – instead of trying to change the world – realizing that you’re not infallible, and that maybe you should consider working on changing yourself.

Huh. That got real profound real fast. And I was only here to bitch about the menu bar clock. Let’s get back to that, shall we? Yes? Good.

The menu bar clock in macOS Big Sur is irrevocably stupid. Oh, it’s fine if you want to know what day of the week it is and what the time is, thus:

…but it’s not useful if you, say, want to know what the date is. Or (and this is admittedly rather less likely) know what the month is, just in case you’ve really overslept or have sustained some traumatic and untreated cranial injury.

In the good old days – before whippersnappers like you whelps were running around with your iPhone 12s and your Billie Eilish records and whatnot – you could happily go and jump into the Date and Time System Prefpane and change the way the menu bar clock reported the date and time, specify whether you preferred 24- or 12-hour time, whether you wanted such bizarre indulgences as flashing time separators or the ability to observe seconds as they ticked by. You were probably also able to go and buy shoes for a nickel, but these days that Prefpane shows you this instead:

This will never do. Now, I’m happy to let a lot slide in the name of progress, but I’ll go to the mat for the Date. I’m forty-seven, which is a fact that never ceases to surprise me and induce mild existential horror when I’m confronted by it. I’m forty-seven and my left knee is in a constant state of betrayal of the rest of my body and I wear glasses and I forget what the date is about thirty-thousand times a minute. My options extend to either getting the date tattooed on myself afresh each day or finding a way to get the date back into the menu bar. And I hate needles.

Fortunately, this turns out to be doable because while Apple doesn’t have a convenient button in there to allow you to specify clock options, the fundamental wiring for said clock options is still extant in the OS. To see what they’ve done we’ll use the defaults command to read what’s going on with the menu bar extra, thus:

Behold.
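On my Mac that read came back more or less like this (a rough reconstruction on my part; your plist may well carry other keys alongside the format string):

```shell
# Ask the preference system for everything stored under the clock's
# domain. DateFormat is the key we care about; Big Sur ships it as the
# terse day-plus-time pattern below.
defaults read com.apple.menuextra.clock
# {
#     DateFormat = "EEE HH:mm";
#     ...
# }
```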

So, if “Fri 15:43” equates to “EEE HH:mm” then it’s a pretty solid bet that EEE = day of the week, HH = hour, and mm = minute. With that in mind, we can use defaults to write back some other options for the OS to look at. If you turn everything on and then look at the defaults read for the same plist under macOS Catalina then you’ll get this:
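Roughly like so, at any rate (again a reconstruction, and the exact set of keys will vary from machine to machine):

```shell
# The same read on a Catalina machine with every clock option switched
# on in the Date and Time Prefpane.
defaults read com.apple.menuextra.clock
# {
#     DateFormat = "EEE d MMM HH:mm:ss";
#     ...
# }
```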

Right. So, it doesn’t take much to come to the conclusion that MMM = Month, ss = seconds, and (be still my beating, arthritic heart) d = date.
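If you’d like to sanity-check a pattern before committing it, you can fake up a preview with date(1). This is strictly a sketch: the sed expressions are my own hand-rolled mapping of just the tokens mentioned above onto strftime directives, not anything Apple ships, and %d will zero-pad the date where the real clock doesn’t.

```shell
# Translate the clock's pattern tokens into strftime directives and
# render them for the current moment. Each sed expression swaps one
# token for its equivalent; the bare "d" is matched with its
# surrounding spaces so it can't collide with anything else.
preview() {
  fmt=$(printf '%s' "$1" | sed \
    -e 's/EEE/%a/' \
    -e 's/MMM/%b/' \
    -e 's/HH/%H/' \
    -e 's/mm/%M/' \
    -e 's/ss/%S/' \
    -e 's/ d / %d /')
  LC_ALL=C date +"$fmt"
}

preview "EEE HH:mm"            # Big Sur's default, e.g. "Fri 15:43"
preview "EEE d MMM HH:mm:ss"   # what we're after, e.g. "Fri 15 Jan 15:43:07"
```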

With that in mind, we’ll write all the above back into Big Sur, thus:

defaults write com.apple.menuextra.clock DateFormat -string "EEE d MMM HH:mm:ss"
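One caveat: the menu bar may not repaint straight away. On Big Sur the clock is drawn by Control Center rather than the old SystemUIServer, so (and this is an educated guess on my part; logging out and back in also does the trick) kicking that process over should force the new format to show up:

```shell
# Restart the process that owns the menu bar clock so it re-reads the
# DateFormat preference. (On Catalina and earlier the equivalent was
# `killall SystemUIServer`.)
killall ControlCenter
```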

…which magically turns into:

Ah. That’s much better. Change is a wonderful thing; particularly when it happens to other people.