
Like the title says – you probably have a lot of dead hens in your hardware design!

Meeting timing, keeping under the power budget, delivering on time – all aspects of hardware design are pretty easy if you just relax the constraint of being “correct”! Hardware designers know this, of course, and are quick to find creative and easy fixes to their problems, but they are held in check by teams of diligent testing and verification engineers who evaluate correctness. Add a cycle of delay here, handle this special case there, power-gate this, change the specification of that – the success of hardware teams relies on the honest back and forth between these competing interests. Unfortunately, as I touched on in my last blog entry, security is not covered by traditional specifications and is left completely unexamined by traditional test and verification procedures. We have lost our system of checks and balances and replaced it with one where the proverbial fox is guarding the hen house. Bad news for hens (i.e., your system security).

This problem is not just theoretical; many real security vulnerabilities are introduced this way. One example that quickly comes to mind: TOCTOU. TOCTOU (Time Of Check to Time Of Use) vulnerabilities happen when there is a separation between the time when some condition (often a permission, in the context of security) is checked and the time when the result of that check is used (for example, when access is granted to some data). In network applications this is a classic form of attack, where race conditions and other subtle timing issues can be used to separate the check from the use for adversarial purposes. In hardware this is also a problem. Meeting timing requirements is no easy matter, and long-haul communication across the chip is often split across multiple cycles. When global “states” (such as debug or trust-management mode) are entered or exited, there is often an accompanying check carried over these long lines across the chip to ensure that entering such a mode is allowed. It is straightforward to get this to work when requests follow the common use scenarios (such mode changes are infrequently requested) and when we don’t worry about meeting timing requirements, but does there exist a lurking vulnerability where a short but rapid sequence of accepted and/or failed requests eventually separates the check from the use in your design? How would you know? What if the design was “optimized” by breaking the transfer of some lines across multiple cycles? It is tough to write a correctness specification for this property with traditional tools, and it is even tougher to think about *all* the ways such attacks might manifest in your designs. There needs to be a way to protect the hen house from correctness-maintaining but security-breaking design decisions. Really, there are three big questions to ask about your hen house.
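
Before we get to them, here is a minimal sketch of the check/use gap described above. It is a toy cycle-by-cycle model in Python rather than real RTL, and the design, the signal names, and the one-cycle separation between the permission check and the guarded read are all hypothetical assumptions made for illustration – the point is only to show how a mode change that lands between check and use slips past the check.

```python
# Toy model of a TOCTOU gap in hardware (illustrative only, not RTL).
# Hypothetical design: the result of the permission check is latched in
# one cycle and the guarded read happens on the *next* cycle, so a mode
# change that arrives in between is never re-checked.

SECRET = 0xCAFE  # data that should only be readable while in debug mode

class DebugPort:
    def __init__(self):
        self.debug_mode = False    # global "state" line, slow to update
        self.check_ok_reg = False  # registered result of last cycle's check

    def cycle(self, request_read, enter_debug):
        # Use phase: consumes the check result latched on the *previous* cycle.
        leaked = SECRET if (request_read and self.check_ok_reg) else None

        # Check phase: permission depends on the mode as it is *right now*.
        self.check_ok_reg = request_read and self.debug_mode

        # Mode update takes effect at the end of the cycle.
        self.debug_mode = enter_debug
        return leaked

port = DebugPort()
port.cycle(request_read=False, enter_debug=True)          # cycle 0: enter debug mode
port.cycle(request_read=True,  enter_debug=False)         # cycle 1: check passes, mode exits
leak = port.cycle(request_read=True, enter_debug=False)   # cycle 2: stale check grants the read
print(hex(leak) if leak is not None else "no leak")       # prints 0xcafe after debug mode is gone
```

Every individual cycle here is “correct” in the traditional sense; it is only the rapid enter/exit sequence that separates the check from the use and lets the data escape after the mode that was supposed to gate it has already been left.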

  1. Your security team is not a hardware design team. It is unlikely ever to be one, and it is not clear that it even should be in an ideal world. The best-laid security plans mean nothing if they are not implemented in a way that maintains security. How do the security properties of your design get checked throughout the continuous design and verification process?
  2. Security properties in many companies are often English-language documents containing statements such as “thing X should not be visible in mode Y”. These checks, when they are done at all, often fall to engineers who *manually* trace individual lines in a design to make sure things are kept separate. How long does it take you to check a property in a design?
  3. When you have a property such as the above “thing X should not be visible in mode Y”, it is often not clear what “visible” even means. If I check that “thing X” is completely electrically disconnected from everything in “mode Y”, we may be over-conservative in our security enforcement. This in turn might hurt performance and/or build distrust of added security measures among the design teams. Likewise, if I just check that “thing X” never directly exits the design, I miss all kinds of leaks, such as the inverse of X, a portion of the bits of X, or any other function of X (see the short sketch after this list). How under- or over-conservative am I being in my security analysis?
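
To make question 3 concrete, here is a small illustrative sketch (plain Python, with made-up values) of why “never directly exits the design” is an under-conservative notion of visibility: none of the outputs below ever carries the raw secret, yet each one still reveals some or all of its bits.

```python
# Illustrative only: the raw value of `secret` never appears on an output,
# but functions of it still leak information.
secret = 0b1011_0110   # 8-bit value we claim is "not visible"
MASK = 0xFF

out_inverted = ~secret & MASK               # the bitwise inverse of the secret
out_low_bits = secret & 0x0F                # a portion of the secret's bits
out_parity   = bin(secret).count("1") & 1   # some other function of the secret

# A check that only asks "does the literal value of `secret` ever reach an
# output?" passes for all three, yet an observer recovers the bits anyway:
assert (~out_inverted & MASK) == secret     # the inverse leaks everything
assert out_low_bits == (secret & 0x0F)      # the low half leaks directly
print(hex(~out_inverted & MASK), bin(out_low_bits), out_parity)
```

A precise answer has to track where information derived from X can flow, not just where the literal wires of X connect – exactly the gap between the over- and under-conservative extremes above.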

If the answers to the above questions are anything other than 1) automatically, with every regression, 2) in a matter of minutes, and 3) I can specify precisely the security property I need, then there is a better way than what you are doing right now. Design and verification already work together so well. Augmenting that process with automated security checks frees your design team to try foxy optimizations in their designs, with the knowledge that any security vulnerability introduced through their creativity will be caught and reported, while keeping your security hens safe and sound behind the verification team.

About Tim Sherwood

Dr. Sherwood has worked both in industry and academia on silicon security for over a decade. He is a Professor in the Department of Computer Science at UC Santa Barbara, specializing in the development of novel computer architectures for security, monitoring, and control. He leads an award-winning research lab focused on these areas, with funding from several major government grants. His work and teaching have both been recognized: he is the recipient of the 2009 Northrop Grumman Teaching Excellence Award, the National Science Foundation CAREER Award, and the ACM SIGARCH Maurice Wilkes Award. Prior to joining UCSB in 2003, he graduated with a B.S. in Computer Science and Engineering from UC Davis (1998) and received his M.S. and Ph.D. from UC San Diego (2003).