A Minor Commentary on SED/Opal Encryption

Data-at-rest encryption is a common topic, but one that is often misunderstood. Having led engineering and architecture efforts around hardware encryption-compliance standards, including detailed analysis of performance impacts while operating in accordance with service-level agreements, I have a few observations on the SED/Opal implementation.

TL;DR

  • SED/Opal TCG-spec’d drives are doing just fine. I trust them more than LUKS2 by a long shot.

Detailed Read

I have no issue trusting the NAND and SAS drive-controller firmware that I’ve personally validated as part of a hardware-acquisition process for global fleets that require federally mandated encryption.
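As one concrete sanity check during that kind of validation, a drive’s advertised TCG Opal support can be probed from the host. A minimal sketch using the open-source sedutil tooling (an assumption here, not necessarily the tooling used in the workflows described above); scanning real drives needs root, so the probe is guarded for machines without the tool:

```shell
#!/bin/sh
# Hedged sketch: probe for TCG Opal support with sedutil-cli
# (Drive-Trust-Alliance sedutil). Scanning real drives requires
# root; the fallback keeps the script safe where the tool is absent.
if command -v sedutil-cli >/dev/null 2>&1; then
    # --scan lists attached drives; a "1" or "2" in the support
    # column indicates Opal 1.x / 2.x capability.
    opal_scan=$(sedutil-cli --scan 2>&1)
else
    opal_scan="sedutil-cli not installed; skipping Opal scan"
fi
echo "$opal_scan"
```

The point of the guard is that compliance tooling ends up in provisioning pipelines, where it must degrade gracefully on hosts that lack the drives or the utility.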

Encryption-compliance auditing is an involved process, as is the hardware validation for identifying potential impacts on production storage workloads, and then there’s adapting encryption-compliance requirements to an org’s procurement and provisioning pipelines, including supply-chain attack mitigations.

All of those concerns also apply to hardware shipped outside the country into export-regulated zones with tight controls over encryption compliance.

Having been in charge of those workflows by virtue of job role, I can attest that it’s not exactly roses and rabbits and chocolate teddy bears every day of the week during audit season, but it is rigorous, and it does create secure data persistence across geopolitical zones and high-risk locales*.

  • USA Enc Regs
  • LUKS2 CVEs

For non-USA regs, similar standards exist in other modern first-world countries, though I trust the US’s military-industrial complex far more than anything from anywhere other than Germany (and, to a lesser extent, Japan).

Exo-Border Encryption IRL

  • My favorite story for that example is the systems in several Taiwan datacenters that were seized by the CCP during the past N years. No one wanted to cut the power, given the facilities’ status and criticality, so the systems continued to run unabated until their dead-stage.

Effectively this created a form of situationally enforced uptime. As long as a system was online it was fine, but no maintenance was possible, and any drive failure remained forever-failed, since no one was authorized to replace any hardware (even a simple hot-swap drive-bay flip). Most of those systems are still online, vastly exceeding any manufacturer specs or SLA expectations. When shipping sensitive data and secure systems around the world one must expect the worst; that issue certainly wasn’t the worst, and at least the data is still live.

Combat zones are an entirely different matter; one can look to the Special Forces requirements for their portables to see how encryption standards work during active live-fire scenarios. It’s often not enough to have encrypted drives; there must also be provisions for hot-swap/ejection of the hardware from its chassis, automatic self-destruct features, and similar controls that mitigate the less common edge-case concerns (torture, etc.).

Original Conversation

This conversation began as part of a comment on my Mastodon account: mastodon.bsd.cafe/@wintersc…