
Today’s semiconductor chips run cloud infrastructure, automotive controllers, industrial robots, and edge AI processors, effectively the entire technology market. Engineers must now ensure that silicon itself defends against attacks, protects embedded secrets, and complies with increasingly stringent global security standards such as ISO/SAE 21434 and the EU Cyber Resilience Act. Regulators, partners, customers, hyperscalers, and end-product developers now expect proof that security was built in from the architecture phase onward. Every transistor now carries a burden of trust that extends throughout the entire development process. Meeting that burden requires a systematic approach to security across the pre-silicon development cycle, using verification to uncover weaknesses and evaluate the effectiveness of protections.

Security coverage provides a structured, measurable method for evaluating functionality and protections, identifying vulnerabilities, and verifying processes. This enables engineering teams to assess how thoroughly security controls are exercised and to detect potential gaps throughout the design lifecycle. The real challenge is knowing with confidence that defined assets, constraints, and protection boundaries are correctly enforced and remain effective.
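One way to read "measurable" here is as a simple ratio over defined security controls, where any control with no associated check surfaces as a coverage gap. The sketch below is a toy illustration, and every name in it is invented:

```python
# Toy security-coverage metric: fraction of defined security controls
# exercised by at least one check. Control and check names are illustrative.

controls = {
    "key_access_filter": ["sim_key_auth_ok", "formal_no_leak"],
    "debug_lockout":     ["sim_dbg_denied"],
    "secure_boot_chain": [],          # no checks yet -> a coverage gap
}

exercised = sum(1 for checks in controls.values() if checks)
coverage = exercised / len(controls)

print(f"security coverage: {coverage:.0%}")   # -> security coverage: 67%
assert coverage < 1.0  # the gap on secure_boot_chain is surfaced, not hidden
```

Real coverage models track far more dimensions (per-asset, per-boundary, per-weakness class), but the principle is the same: gaps become visible numbers rather than unstated assumptions.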

Two Necessary Pillars

Hardware security comes down to two core pillars. Functional security verification confirms correctness, and security protection verification establishes robustness.

The functional security verification pillar ensures that security functionality behaves correctly under defined operating conditions and expected use cases. It relies on established methods such as simulation, assertions, and formal analysis.

For example, functional security verification may confirm that a cryptographic block retrieves a key only when authorized and within defined timing constraints, and that restricted resources remain inaccessible to unauthorized agents. Functional verification is well defined and bounded. Because it operates within known interfaces and specified behaviors, it provides confidence that security functionality performs as intended, but it does not assess how those protections hold up under unintended data flows. This pillar confirms that the logic works correctly.
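The authorized-key-retrieval property above can be sketched as a small software model. This is a minimal illustration, not any real design's verification code; the class, table, and agent names are all assumptions:

```python
# Minimal software model of an authorized-key-retrieval check.
# All names (KeyStore, AGENT_TABLE, agent IDs) are illustrative.

class AccessDenied(Exception):
    pass

# Hypothetical access-control table: agent ID -> set of readable key slots.
AGENT_TABLE = {
    "secure_cpu": {0, 1},
    "dma_engine": set(),   # DMA must never read key material
}

class KeyStore:
    def __init__(self, keys):
        self._keys = keys  # slot -> key bytes

    def read_key(self, agent, slot):
        # The property under test: key material leaves the store only if
        # the requesting agent is explicitly authorized for that slot.
        if slot not in AGENT_TABLE.get(agent, set()):
            raise AccessDenied(f"{agent} may not read slot {slot}")
        return self._keys[slot]

store = KeyStore({0: b"\x01" * 16, 1: b"\x02" * 16})

# Directed checks mirroring simple functional security tests.
assert store.read_key("secure_cpu", 0) == b"\x01" * 16

try:
    store.read_key("dma_engine", 0)
    raise AssertionError("unauthorized read should have been blocked")
except AccessDenied:
    pass
```

In actual pre-silicon flows, the same property would be expressed as assertions or formal properties against the RTL rather than a software model, but the pass/fail question is identical.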

To read the full article on SemiEngineering, click here


RISC-V adoption continues to accelerate across commercial and government microelectronics programs. Whether open-source or commercially licensed, most RISC-V processor cores are integrated as third-party IP (3PIP), potentially introducing supply chain security challenges that demand structured, design-level assurance.

As systems become more heterogeneous and interconnected, design supply chain security is no longer a documentation exercise, but an engineering challenge. A single weakness in processor IP can cascade into systemic risk. That reality makes scalable, repeatable 3PIP assurance essential, especially for RISC-V cores deployed in mission-critical environments.

From Third-Party IP Risk to Repeatable Assurance

Traditional IP integration workflows often rely on vendor claims, checklist-based reviews, and limited test evidence. While helpful, these approaches rarely provide design-level assurance across all relevant weakness classes. To address this gap, a Common Weakness Enumeration (CWE)-based methodology enables structured, measurable, and portable security validation.

A structured CWE-based methodology replaces ad hoc reviews with measurable validation. Relevant weaknesses are scoped from the MITRE database, translated into security requirements, verified through executable properties and tests, and captured as traceable assurance artifacts.

The outcome is not simply test coverage, but documented security assurance tied directly to recognized weakness definitions.
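A traceable assurance artifact of this kind might tie a CWE entry to its derived requirement and verification evidence. The record below is a rough sketch under assumed field names, not a prescribed schema, and the CWE-1262 example (a hardware CWE concerning register-interface access control) is chosen only for illustration:

```python
# Illustrative traceability record linking a CWE entry to a security
# requirement and its verification evidence. Field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class AssuranceArtifact:
    cwe_id: str            # e.g. "CWE-1262" (register-interface access control)
    requirement: str       # security requirement derived from the weakness
    checks: list = field(default_factory=list)   # (check name, passed) pairs

    def status(self):
        # UNVERIFIED until evidence exists; PASS only if every check passed.
        if not self.checks:
            return "UNVERIFIED"
        return "PASS" if all(ok for _, ok in self.checks) else "FAIL"

artifact = AssuranceArtifact(
    cwe_id="CWE-1262",
    requirement="Debug interface must not expose protected registers",
    checks=[("formal_property_dbg_lock", True),
            ("sim_test_dbg_read_denied", True)],
)

assert artifact.status() == "PASS"
```

Collecting such records across every scoped weakness is what turns ad hoc review notes into portable, auditable assurance evidence.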

To read the full article on SemiWiki, click here