US20140095876A1 - Introduction of discrete roots of trust - Google Patents

Introduction of discrete roots of trust

Info

Publication number
US20140095876A1
US20140095876A1 (application US 13/629,887)
Authority
US
United States
Prior art keywords
trust
root
encryption key
platform
code module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/629,887
Other versions
US8874916B2
Inventor
Ned Smith
Sharon Smith
Willard Wiseman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp
Priority to US 13/629,887 (granted as US 8,874,916 B2)
Assigned to Intel Corporation (assignment of assignors' interest); assignors: Willard Wiseman, Sharon Lea Smith, Ned Smith
Publication of US20140095876A1
Application granted
Publication of US8874916B2
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/57: Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/32: including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L 9/3263: involving certificates, e.g. public key certificate [PKC] or attribute certificate [AC]; Public key infrastructure [PKI] arrangements
    • H04L 9/3265: using certificate chains, trees or paths; Hierarchical trust model
    • H04L 9/3271: using challenge-response

Definitions

  • Embodiments generally relate to introduction of discrete roots of trust. More particularly, embodiments relate to introducing a first root of trust on a platform to a second root of trust on the same platform.
  • a platform may be configured to use more than one root of trust.
  • a first security module representing a first root of trust
  • a second security module representing a second root of trust
  • these two security modules may often be unable to communicate directly, and may be mutually suspicious.
  • the second security module may regard the additional security logic as a separate discrete component with unknown security attributes.
  • FIG. 1 is a block diagram of an example of a system including a first root of trust and a second root of trust according to an embodiment
  • FIG. 2 is a block diagram of an example of a system configured to facilitate verification by a third party that a first root of trust and a second root of trust operating on a platform have been introduced according to an embodiment
  • FIG. 3 is a flowchart of an example of a method of verification by a third party that a first root of trust and a second root of trust operating on a platform have been introduced according to an embodiment
  • FIG. 4 is a block diagram of an example of a system configured to facilitate introduction of a first root of trust to a second root of trust including a discrete security component according to an embodiment
  • FIG. 5 is a flowchart of an example of a method of introducing a first root of trust to a second root of trust including a discrete security component according to an embodiment.
  • FIG. 1 is a block diagram of an example of a system 500 including a first root of trust and a second root of trust. More particularly, the illustrated system 500 includes a first input/output (I/O) module 10 , a second I/O module 20 , and an intermediary module 30 .
  • the first I/O module 10 may include a first security module 11 .
  • the first security module 11 may be configured to operate as a first root of trust for a platform.
  • the second I/O module 20 may include a second security module 21 and a logic component 22, wherein the second security module 21 may be configured to operate as a second root of trust for the platform.
  • the logic component 22 may be a discrete security component configured to operate in conjunction with the second security module 21 (e.g., as a value added component).
  • the second security module 21 and the logic component 22 may be separated by a firewall 23 .
  • the intermediary module 30 may be a component configured to, among other things, act as an intermediary between the first I/O module 10 and the second I/O module 20 .
  • the intermediary module 30 may include an authenticated code module (ACM) 31 .
  • ACM 31 may facilitate a trusted execution environment, allowing the ACM 31 to operate without threat from malware that may be present in memory components on the platform.
  • the intermediary module 30 may be configured to facilitate an introduction of the first security module 11 (e.g., the first root of trust) to the second security module 21 (e.g., the second root of trust).
  • FIG. 2 is a block diagram of an example of a networking architecture 1000 configured to facilitate verification by a third party that a first root of trust and a second root of trust operating on a platform have been introduced.
  • the architecture 1000 may enable a first root of trust to provide proof (i.e., cryptographic evidence) to the third party that the first root of trust and the second root of trust are operating together on the same platform, and within a trusted execution environment.
  • the architecture 1000 may include a third party system 70 and a platform 63 having a first I/O module 40 , a second I/O module 50 and an intermediary module 60 .
  • the first I/O module 40 may include a first security module 41 , an enhanced privacy identifier (EPID) 42 , and an attestation report 43 .
  • the first security module 41 may be configured to operate as a first root of trust for the platform 63 .
  • the EPID 42 may be used to identify the first security module 41 as a root of trust.
  • the attestation report 43 may be used to, among other things, provide proof (i.e., cryptographic evidence) that the first security module 41 may be operating with a second security module on the same platform 63 , and within a trusted execution environment.
  • the second I/O module 50 may include a second security module 51 and logic component 52 .
  • the second security module 51 may be configured to operate as a second root of trust for the platform.
  • the logic component 52 may be a discrete security component configured to operate in conjunction with the second security module 51 .
  • the second security module 51 and the logic component 52 may be separated by a firewall 56 .
  • the second I/O module 50 may also include one or more endorsement keys 53 .
  • the endorsement keys 53 may be used in a verification process.
  • the endorsement keys 53 may be embedded by, for example, the manufacturer of the second security module 51 .
  • the endorsement keys 53 may include a first key pair 54 and a second key pair 55 .
  • the first key pair 54 may be used to identify the second security module 51 as the second root of trust.
  • the first key pair 54 may be used in a verification process requested by a third party operating the off-platform third party system 70 .
  • the first key pair 54 may include a first public key and a first private key.
  • the first security module 41 may receive and countersign the first public key of the first key pair 54 , and then incorporate the countersigned key into the attestation report 43 as part of a process of verifying the identity of the second root of trust.
  • the first private key of the first key pair 54 may remain with the second I/O module (i.e., private), and may be used for attestation to a third party and to establish a secure channel between the logic component 52 and the first IO module 40 .
  • the second key pair 55 may be used during a key exchange process with another root of trust.
  • the second key pair 55 may also include a second public key and a second private key.
  • the second key pair 55 may be used to verify that the second security module 51 is operating as the second root of trust for the platform.
  • the second private key of the second key pair 55 may remain with the second I/O module (i.e., private), and may be used for attestation to a third party and to establish a secure channel between the logic component 52 and the first IO module 40 .
  • the intermediary module 60 may be a component configured to facilitate an introduction between the first root of trust and the second root of trust.
  • the intermediary module 60 may include ACM 61 .
  • the ACM 61 may be configured to use secure communications to introduce a first root of trust to a second root of trust by creating a public key provisioning path between the two.
  • the ACM 61 may be configured to introduce the second public key of the second key pair 55 of the second security module 51 to the first security module 41 over one or more trusted paths that may be accessible to the ACM 61 .
  • the third party system 70 may be operated by a third party looking to verify that the first security module 41 (i.e., the first root of trust) and the second security module 51 (i.e., the second root of trust) are operating together on the same platform 63, and within a trusted execution environment. In this example, the third party system 70 may be operated by an electronic commerce (e-commerce) vendor who is performing a shopping cart transaction implicating the first security module 41 and the second security module 51.
  • FIG. 3 is a flowchart of an example of a method of verifying by a third party, such as the third party operating the third party system 70 ( FIG. 2 ), that a first root of trust, such as the first security module 41 ( FIG. 2 ), and a second root of trust, such as the second security module 51 ( FIG. 2 ), have been introduced.
  • the third party system may require proof that the first security module and the second security module are operating together on the same platform, and within a trusted execution environment.
  • An intermediary module, such as the intermediary module 60 (FIG. 2), may be configured to facilitate communication between the first security module and the second security module.
  • the method might be implemented as a set of logic instructions stored in a machine- or computer-readable storage medium such as, for example, random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
  • computer program code to carry out operations shown in the method may be written in any combination of one or more programming languages, including an object oriented programming language such as, for example, Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the intermediary module may use an ACM, such as the ACM 61 ( FIG. 2 ), to obtain a public key, such as the second public key of the second key pair 55 ( FIG. 2 ), from the second security module.
  • the ACM may securely deliver the public key to the first security module.
  • the first security module may receive the public key.
  • the first security module may use an EPID, such as the EPID 42 ( FIG. 2 ), to countersign the public key from the second security module as part of an attestation process.
  • the first security module may generate an attestation report, such as the attestation report 43 ( FIG. 2 ), wherein the attestation report may include the EPID, the public key countersigned with the EPID, and an identification (e.g., a component ID, etc.) of the first security module that includes the make, model, version and cryptographic hash of the firmware contained in the first security module.
  • the first security module may transmit the attestation report to the third party for verification.
  • the third party system may use the EPID to verify that the first security module may be trusted.
  • the third party system may use the attestation report (including the public key countersigned with the EPID) to verify that a separate attestation report obtained directly from the second security module may be trusted. In particular, the third party system may do so by determining from the attestation report that the first security module and the second security module have been introduced, that the second security module has authorized the first security module to countersign its public key, and that both are operating together in a trusted environment (i.e., a shared root of trust) facilitated by the intermediary module 60 .
  • FIG. 4 is a block diagram of an example of a system configured to dynamically introduce a first root of trust to a second root of trust.
  • the system 2000 may be configured to enable introduction of a first root of trust and a second root of trust, wherein the second root of trust may be integrated with a discrete logic component.
  • FIG. 4 illustrates a system including a first I/O module 100 , a second I/O module 200 , and an intermediary module 300 .
  • the first I/O module 100 may include a first security module 101 and an enhanced privacy identifier (EPID) 102.
  • the first security module 101 may be configured to operate as a first root of trust for a platform.
  • the EPID 102 may be used to identify the first security module 101 as a first root of trust.
  • the second I/O module 200 may include a second security module 201 and a logic component 202 .
  • the second security module 201 may be configured to operate as a second root of trust for the platform.
  • the logic component 202 may be a discrete security component configured to operate in conjunction with the second security module 201 .
  • the second security module 201 and the logic component 202 are separated via a firewall 210 in the illustrated example.
  • the firewall 210 may preclude the logic component 202 from sharing the trusted relationships of the second security module 201 , and may cause the logic component 202 to be treated with suspicion in instances where the second security module 201 would not.
  • the second I/O module 200 may also include one or more endorsement keys 203, which may be used in a verification process.
  • the endorsement keys 203 may be embedded by, for example, the manufacturer of the second security module 201 .
  • the endorsement keys 203 may include a first public key 204 and a second key public portion 205 a.
  • the first public key 204 may be used to identify a second root of trust facilitated by the second security module 201 .
  • the second key public portion 205 a may be used during a key exchange process with another root of trust.
  • the second key public portion 205 a may be located on a first side of the firewall 210 (i.e., on the side of the second security module 201 ).
  • the second key public portion 205 a may correspond to a second key private portion 205 b (i.e., a private key) located on a second side of the firewall 210 (i.e., on the side of the logic component 202 ).
  • the symmetry of the second key public portion 205 a and the second key private portion 205 b may be used to verify a trusted relationship between the second security module 201 and the logic component 202 .
  • the intermediary module 300 may be a component configured to facilitate an introduction between a first root of trust and a second root of trust.
  • the intermediary module 300 may include ACM 301 .
  • the ACM 301 may be configured to securely communicate with the first security module 101 , and to introduce the second security module 201 to the first security module 101 by creating a public key provisioning path between the two.
  • the ACM 301 may be configured to access the public key 205 a of the second security module 201 , and introduce the public key 205 a to the first security module 101 over one or more interfaces and trusted paths that may be accessible by the ACM 301 .
  • the ACM 301 may use a memory mapped input output (MMIO) instruction to obtain the public key 205 a from a device specific memory that the ACM 301 may have access to.
  • the first security module 101 may issue a challenge to the logic component 202 .
  • the first security module 101 may use the public portion 205 a to verify the identity of the logic component 202 . That is, the first security module may be able to verify that the second security module 201 (i.e., the second root of trust) and the logic component 202 are operating together, and that the logic component 202 may be trusted.
  • FIG. 5 is a flowchart of an example of a method of dynamically introducing a first root of trust to a second root of trust including a discrete security component.
  • a first security module such as the first security module 101 ( FIG. 4 ) may constitute the first root of trust.
  • a second security module such as the second security module 201 ( FIG. 4 ) may constitute a second root of trust.
  • the second security module and a logic component, such as the logic component 202 ( FIG. 4 ) may be separated by a firewall.
  • An intermediary module, such as the intermediary module 300 (FIG. 4), may be configured to facilitate communication between the first security module and the second security module.
  • the method might be implemented as a set of logic instructions stored in a machine- or computer-readable storage medium such as, for example, RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as ASIC, CMOS or TTL technology, or any combination thereof.
  • computer program code to carry out operations shown in the method may be written in any combination of one or more programming languages, including an object oriented programming language such as, for example, Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the intermediary module may use an ACM, such as the ACM 301 ( FIG. 4 ), to obtain a public key, such as the public key 205 a ( FIG. 4 ), on behalf of the first security module.
  • the ACM may obtain the public key from the second security module.
  • the intermediary module may securely deliver the public key to the first security module.
  • the first security module may receive the public key.
  • the first security module may direct a challenge to the logic component over a direct communication line (e.g., a system management bus).
  • the logic component may receive the challenge.
  • the logic component may respond to the challenge. The response may rely on a private key, such as the second key private portion 205 b ( FIG. 4 ).
  • the first security module may receive the challenge response from the logic component.
  • the first security module may verify the logic component's response using the public key delivered by the ACM.
  • the logic component may then be recognized as working in conjunction with the second security module. That is, because the second security module may be configured to provide the public key exclusively to the intermediary module, the first security module may use the response from the logic component and the public key from the second security module to determine that the logic component should not be treated with suspicion.
  • Embodiments may therefore provide an apparatus comprising an authenticated code module to obtain a first encryption key from a first root of trust on a platform using a memory mapped input output (MMIO) instruction and via a device specific memory that is dedicated to the authenticated code module.
  • the authenticated code module is to obtain the first encryption key from a trusted platform module located in a package on a first side of a firewall, and wherein the first encryption key is to correspond to a private key embedded in the package on a second side of the firewall.
  • the apparatus may also include a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, wherein the challenge response is to be received from logic located in the package on the second side of the firewall, and use the first encryption key to verify the challenge response.
  • the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust.
  • the second root of trust may generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
  • the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
  • the second root of trust is to issue a challenge to the first root of trust over a system management bus, and wherein the challenge response is to be received over the system management bus.
  • a platform comprising a first package including a first root of trust and a second package.
  • the second package may include an authenticated code module to obtain a first encryption key from the first root of trust, and a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, and use the first encryption key to verify the challenge response.
  • the first root of trust includes a trusted platform module located on a first side of a firewall and the first package further includes logic located on a second side of the firewall.
  • the challenge response is to be received from the logic located on the second side of the firewall.
  • the first package further includes a private key embedded on the second side of the firewall, and wherein the first encryption key is to correspond to the private key embedded on the second side of the firewall.
  • the platform may include a device specific memory that is dedicated to the authenticated code module, wherein the authenticated code module is to obtain the first encryption key from the first root of trust via the device specific memory.
  • the authenticated code module is to use a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
  • the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust.
  • the second root of trust may generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
  • the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
  • the platform may include a system management bus coupled to the first root of trust and the second root of trust, wherein the second root of trust is to issue a challenge to the first root of trust over the system management bus, and wherein the challenge response is to be received over the system management bus.
  • Still another example may provide for an apparatus comprising an authenticated code module to obtain a first encryption key from a first root of trust on a platform, and a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, and use the first encryption key to verify the challenge response.
  • the authenticated code module is to obtain the first encryption key from a trusted platform module located in a package on a first side of a firewall, and wherein the challenge response is to be received from logic located in the package on a second side of the firewall.
  • the first encryption key is to correspond to a private key embedded in the package on the second side of the firewall.
  • the authenticated code module is to obtain the first encryption key from the first root of trust via a device specific memory that is dedicated to the authenticated code module.
  • the authenticated code module is to use a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
  • the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust.
  • the second root of trust may generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
  • the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
  • the second root of trust is to issue a challenge to the first root of trust over a system management bus, and wherein the challenge response is to be received over the system management bus.
  • Another example may provide for a method comprising using an authenticated code module to transfer a first encryption key from a first root of trust on a platform to a second root of trust on the platform, receiving a challenge response at the second root of trust, and using the first encryption key to verify the challenge response.
  • In still another example, using the authenticated code module to transfer the encryption key includes obtaining the first encryption key from a trusted platform module located in a package on a first side of a firewall, wherein the challenge response is received from logic located in the package on a second side of the firewall.
  • the first encryption key corresponds to a private key embedded in the package on the second side of the firewall.
  • using the authenticated code module to transfer the encryption key includes obtaining the first encryption key from the first root of trust via a device specific memory that is dedicated to the authenticated code module.
  • Another example may include using a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
  • Still another example may include using the authenticated code module to transfer a second encryption key from the first root of trust to the second root of trust, generating an attestation report based on the second encryption key, and sending the attestation report to an off-platform verifier.
  • generating the attestation report includes countersigning the second encryption key to obtain a countersigned encryption key, and incorporating the countersigned encryption key into the attestation report.
  • Another example may include issuing a challenge to the first root of trust over a system management bus, wherein the challenge response is received over the system management bus.
  • Various embodiments may be implemented using hardware elements, software elements, or a combination of both.
  • hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
  • Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
  • IP cores may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
  • Embodiments of the present invention are applicable for use with all types of semiconductor integrated circuit (“IC”) chips.
  • Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, and the like.
  • In the figures, signal conductor lines are represented with lines. Some may be thicker, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit.
  • Any represented signal lines may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
  • Example sizes/models/values/ranges may have been given, although embodiments of the present invention are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured.
  • well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments of the invention.
  • arrangements may be shown in block diagram form in order to avoid obscuring embodiments of the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art.
  • Some embodiments may be implemented, for example, using a machine or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments.
  • a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software.
  • the machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like.
  • the instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
  • the term “processing” refers to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
  • the term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections.
  • the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Abstract

Systems and methods may provide for introducing a first root of trust on a platform to a second root of trust on the same platform. In one example, the method may include using an authenticated code module to transfer a first encryption key from a first root of trust on a platform to a second root of trust on the platform, receiving a challenge response from the first root of trust at the second root of trust, and using the first encryption key to verify the challenge response.

Description

    BACKGROUND
  • 1. Technical Field
  • Embodiments generally relate to introduction of discrete roots of trust. More particularly, embodiments relate to introducing a first root of trust on a platform to a second root of trust on the same platform.
  • 2. Discussion
  • In some instances, a platform may be configured to use more than one root of trust. For example, a first security module, representing a first root of trust, may be integrated into a platform for a first purpose, while a second security module, representing a second root of trust, may be integrated into the platform for a second purpose. In such a case, these two security modules may often be unable to communicate directly, and may be mutually suspicious.
  • Moreover, if the first security module includes additional security logic, the second security module may regard the additional security logic as a separate discrete component with unknown security attributes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The various advantages of the embodiments of the present invention will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
  • FIG. 1 is a block diagram of an example of a system including a first root of trust and a second root of trust according to an embodiment;
  • FIG. 2 is a block diagram of an example of a system configured to facilitate verification by a third party that a first root of trust and a second root of trust operating on a platform have been introduced according to an embodiment;
  • FIG. 3 is a flowchart of an example of a method of verification by a third party that a first root of trust and a second root of trust operating on a platform have been introduced according to an embodiment;
  • FIG. 4 is a block diagram of an example of a system configured to facilitate introduction of a first root of trust to a second root of trust including a discrete security component according to an embodiment; and
  • FIG. 5 is a flowchart of an example of a method of introducing a first root of trust to a second root of trust including a discrete security component according to an embodiment.
  • DETAILED DESCRIPTION
  • FIG. 1 is a block diagram of an example of a system 500 including a first root of trust and a second root of trust. More particularly, the illustrated system 500 includes a first input/output (I/O) module 10, a second I/O module 20, and an intermediary module 30. The first I/O module 10 may include a first security module 11. The first security module 11 may be configured to operate as a first root of trust for a platform.
  • The second I/O module 20 may include a second security module 21 and a logic component 22, wherein the second security module 21 may be configured to operate as a second root of trust for the platform. The logic component 22 may be a discrete security component configured to operate in conjunction with the second security module 21 (e.g., as a value added component). The second security module 21 and the logic component 22 may be separated by a firewall 23.
  • The intermediary module 30 may be a component configured to, among other things, act as an intermediary between the first I/O module 10 and the second I/O module 20. The intermediary module 30 may include an authenticated code module (ACM) 31. As will be discussed in greater detail, the ACM 31 may facilitate a trusted execution environment, allowing the ACM 31 to operate without threat from malware that may be present in memory components on the platform. In addition, the intermediary module 30 may be configured to facilitate an introduction of the first security module 11 (e.g., the first root of trust) to the second security module 21 (e.g., the second root of trust).
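As a non-normative aside, the FIG. 1 topology can be restated as plain data structures. The C sketch below only illustrates the relationships described above (two roots of trust, a firewalled logic component, and an intermediary ACM); every type, field name, and size is a hypothetical choice of the editor, not taken from the patent.

```c
/* Hypothetical C types restating the FIG. 1 relationships; names and sizes
 * are illustrative assumptions only. */
#include <stdint.h>

typedef struct {
    uint8_t epid[64];               /* identity credential of the first root of trust */
} first_security_module_t;          /* e.g., security module 11 */

typedef struct {
    uint8_t endorsement_pub[64];    /* public key material visible over a trusted path */
} second_security_module_t;         /* e.g., security module 21 */

typedef struct {
    uint8_t endorsement_priv[32];   /* private key material kept behind the firewall */
} logic_component_t;                /* e.g., discrete logic component 22 */

typedef struct {
    first_security_module_t  *first_rot;    /* introduction target */
    second_security_module_t *second_rot;   /* reachable by the ACM over a trusted path */
    logic_component_t        *fw_logic;     /* separated from second_rot by a firewall,
                                               so not directly trusted by first_rot */
} acm_view_t;                       /* what the intermediary ACM (module 30/31) can reach */
```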
  • FIG. 2 is a block diagram of an example of a networking architecture 1000 configured to facilitate verification by a third party that a first root of trust and a second root of trust operating on a platform have been introduced. In particular, as will be discussed in greater detail, the architecture 1000 may enable a first root of trust to provide proof (i.e., cryptographic evidence) to the third party that the first root of trust and the second root of trust are operating together on the same platform, and within a trusted execution environment. The architecture 1000 may include a third party system 70 and a platform 63 having a first I/O module 40, a second I/O module 50 and an intermediary module 60.
  • In this example, the first I/O module 40 may include a first security module 41, an enhanced privacy identifier (EPID) 42, and an attestation report 43. The first security module 41 may be configured to operate as a first root of trust for the platform 63. The EPID 42 may be used to identify the first security module 41 as a root of trust. The attestation report 43 may be used to, among other things, provide proof (i.e., cryptographic evidence) that the first security module 41 may be operating with a second security module on the same platform 63, and within a trusted execution environment.
  • The second I/O module 50 may include a second security module 51 and logic component 52. The second security module 51 may be configured to operate as a second root of trust for the platform. The logic component 52 may be a discrete security component configured to operate in conjunction with the second security module 51. The second security module 51 and the logic component 52 may be separated by a firewall 56.
  • The second I/O module 50 may also include one or more endorsement keys 53. The endorsement keys 53 may be used in a verification process. The endorsement keys 53 may be embedded by, for example, the manufacturer of the second security module 51. In this example, the endorsement keys 53 may include a first key pair 54 and a second key pair 55.
  • The first key pair 54 may be used to identify the second security module 51 as the second root of trust. For example, the first key pair 54 may be used in a verification process requested by a third party operating the off-platform third party system 70. The first key pair 54 may include a first public key and a first private key. As will be discussed in greater detail, the first security module 41 may receive and countersign the first public key of the first key pair 54, and then incorporate the countersigned key into the attestation report 43 as part of a process of verifying the identity of the second root of trust. The first private key of the first key pair 54 may remain with the second I/O module (i.e., private), and may be used for attestation to a third party and to establish a secure channel between the logic component 52 and the first IO module 40.
  • The second key pair 55 may be used during a key exchange process with another root of trust. The second key pair 55 may also include a second public key and a second private key. As will be discussed in greater detail, the second key pair 55 may be used to verify that the second security module 51 is operating as the second root of trust for the platform. Also, the second private key of the second key pair 55 may remain with the second I/O module (i.e., private), and may be used for attestation to a third party and to establish a secure channel between the logic component 52 and the first IO module 40.
  • In this example, the intermediary module 60 may be a component configured to facilitate an introduction between the first root of trust and the second root of trust. The intermediary module 60 may include ACM 61. As will be discussed in greater detail, the ACM 61 may be configured to use secure communications to introduce a first root of trust to a second root of trust by creating a public key provisioning path between the two. In this case, the ACM 61 may be configured to introduce the second public key of the second key pair 55 of the second security module 51 to the first security module 41 over one or more trusted paths that may be accessible to the ACM 61.
  • The third party system 70 may be operated by a third party looking to verify that the first security module 41 (i.e., the first root of trust) and the second security module 51 (i.e., the second root of trust) are operating together on the same platform 63, and within a trusted execution environment. In this example, the third party system 70 may be operated by an electronic commerce (e-commerce) vendor who is performing a shopping cart transaction implicating the first security module 41 and the second security module 51.
  • FIG. 3 is a flowchart of an example of a method of verifying by a third party, such as the third party operating the third party system 70 (FIG. 2), that a first root of trust, such as the first security module 41 (FIG. 2), and a second root of trust, such as the second security module 51 (FIG. 2), have been introduced. In this example, the third party system may require proof that the first security module and the second security module are operating together on the same platform, and within a trusted execution environment. An intermediary module, such as the intermediary module 60 (FIG. 2), may be configured to facilitate communication between the first security module and the second security module.
  • The method might be implemented as a set of logic instructions stored in a machine- or computer-readable storage medium such as, for example, random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown in the method may be written in any combination of one or more programming languages, including an object oriented programming language such as, for example, Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • At processing block 72, the intermediary module may use an ACM, such as the ACM 61 (FIG. 2), to obtain a public key, such as the second public key of the second key pair 55 (FIG. 2), from the second security module. At processing block 74, the ACM may securely deliver the public key to the first security module.
  • At processing block 76, the first security module may receive the public key. At processing block 78, the first security module may use an EPID, such as the EPID 42 (FIG. 2), to countersign the public key from the second security module as part of an attestation process.
  • At processing block 80, the first security module may generate an attestation report, such as the attestation report 43 (FIG. 2), wherein the attestation report may include the EPID, the public key countersigned with the EPID, and an identification (e.g., a component ID, etc.) of the first security module that includes the make, model, version and cryptographic hash of the firmware contained in the first security module. At processing block 82, the first security module may transmit the attestation report to the third party for verification.
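To make processing blocks 78 through 82 concrete, the following C sketch assembles a report with OpenSSL. It is a minimal sketch under two stated assumptions: the EPID countersignature is modeled with an ordinary asymmetric key (real EPID is a group-signature scheme with its own interfaces), and the report layout, field names, and function names are hypothetical rather than defined by the patent.

```c
/* Sketch of blocks 78-82: countersign the delivered public key and assemble
 * an attestation report. ASSUMPTIONS: the EPID is modeled by an ordinary
 * OpenSSL EVP_PKEY; the report layout and all names are hypothetical. */
#include <openssl/evp.h>
#include <openssl/sha.h>
#include <string.h>

struct attestation_report {                 /* hypothetical layout of report 43 */
    unsigned char peer_pubkey[256];         /* second module's public key (DER) */
    size_t        peer_pubkey_len;
    unsigned char countersig[256];          /* signature over peer_pubkey */
    size_t        countersig_len;
    char          component_id[64];         /* make/model/version of first module */
    unsigned char fw_hash[SHA256_DIGEST_LENGTH]; /* hash of contained firmware */
};

int build_attestation_report(EVP_PKEY *epid_like_key,   /* stand-in for EPID 42 */
                             const unsigned char *peer_pub, size_t peer_pub_len,
                             const unsigned char *firmware, size_t fw_len,
                             struct attestation_report *out)
{
    if (peer_pub_len > sizeof out->peer_pubkey)
        return 0;
    memcpy(out->peer_pubkey, peer_pub, peer_pub_len);
    out->peer_pubkey_len = peer_pub_len;
    strncpy(out->component_id, "make-model-version", sizeof out->component_id - 1);
    out->component_id[sizeof out->component_id - 1] = '\0';
    SHA256(firmware, fw_len, out->fw_hash);             /* hash of the firmware image */

    /* Countersign the delivered public key (block 78). */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    out->countersig_len = sizeof out->countersig;
    int ok = ctx &&
             EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, epid_like_key) == 1 &&
             EVP_DigestSignUpdate(ctx, out->peer_pubkey, out->peer_pubkey_len) == 1 &&
             EVP_DigestSignFinal(ctx, out->countersig, &out->countersig_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;                              /* 1 on success; report ready for block 82 */
}
```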
  • At processing block 84, the third party system may use the EPID to verify that the first security module may be trusted. At processing block 86, the third party system may use the attestation report (including the public key countersigned with the EPID) to verify that a separate attestation report obtained directly from the second security module may be trusted. In particular, the third party system may do so by determining from the attestation report that the first security module and the second security module have been introduced, that the second security module has authorized the first security module to countersign its public key, and that both are operating together in a trusted environment (i.e., a shared root of trust) facilitated by the intermediary module 60.
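The verifier side (blocks 84 and 86) might then be sketched as below, again treating the EPID check as an ordinary public-key signature verification and reusing the hypothetical attestation_report layout from the previous sketch; second_report and second_sig stand in for an attestation obtained directly from the second security module.

```c
/* Sketch of blocks 84-86 on the third party system. ASSUMPTIONS: EPID
 * verification is modeled as ordinary signature verification, and
 * struct attestation_report comes from the previous sketch. */
#include <openssl/evp.h>
#include <openssl/x509.h>

static int verify_sig(EVP_PKEY *key, const unsigned char *msg, size_t msg_len,
                      const unsigned char *sig, size_t sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx &&
             EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, key) == 1 &&
             EVP_DigestVerifyUpdate(ctx, msg, msg_len) == 1 &&
             EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;
}

/* Returns 1 only if (a) the first module's countersignature over the peer key
 * verifies (the two roots of trust were introduced) and (b) the second
 * module's own report is signed by that now-trusted key. */
int third_party_verify(EVP_PKEY *first_module_pub,   /* trusted a priori via the EPID */
                       const struct attestation_report *r,
                       const unsigned char *second_report, size_t second_report_len,
                       const unsigned char *second_sig, size_t second_sig_len)
{
    if (!verify_sig(first_module_pub, r->peer_pubkey, r->peer_pubkey_len,
                    r->countersig, r->countersig_len))
        return 0;                            /* introduction not proven */

    const unsigned char *p = r->peer_pubkey;
    EVP_PKEY *peer = d2i_PUBKEY(NULL, &p, (long)r->peer_pubkey_len);
    int ok = peer && verify_sig(peer, second_report, second_report_len,
                                second_sig, second_sig_len);
    EVP_PKEY_free(peer);
    return ok;
}
```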
  • FIG. 4 is a block diagram of an example of a system 2000 configured to dynamically introduce a first root of trust to a second root of trust. In particular, as will be discussed in greater detail, the system 2000 may be configured to enable introduction of a first root of trust and a second root of trust, wherein the second root of trust may be integrated with a discrete logic component. FIG. 4 illustrates the system 2000 including a first I/O module 100, a second I/O module 200, and an intermediary module 300.
  • In this example, the first I/O module 100 may include a first security module 101 and an enhanced privacy identifier (EPID) 102. The first security module 101 may be configured to operate as a first root of trust for a platform. The EPID 102 may be used to identify the first security module 101 as a first root of trust.
  • The second I/O module 200 may include a second security module 201 and a logic component 202. The second security module 201 may be configured to operate as a second root of trust for the platform. The logic component 202 may be a discrete security component configured to operate in conjunction with the second security module 201.
  • The second security module 201 and the logic component 202 are separated via a firewall 210 in the illustrated example. The firewall 210 may preclude the logic component 202 from sharing the trusted relationships of the second security module 201, and may cause the logic component 202 to be treated with suspicion in instances where the second security module 201 would not.
  • The second I/O module 200 may also include one or more endorsement keys 203, which may be used in a verification process. The endorsement keys 203 may be embedded by, for example, the manufacturer of the second security module 201. The endorsement keys 203 may include a first public key 204 and a second key public portion 205 a.
  • The first public key 204 may be used to identify a second root of trust facilitated by the second security module 201. The second key public portion 205 a may be used during a key exchange process with another root of trust. The second key public portion 205 a may be located on a first side of the firewall 210 (i.e., on the side of the second security module 201). The second key public portion 205 a may correspond to a second key private portion 205 b (i.e., a private key) located on a second side of the firewall 210 (i.e., on the side of the logic component 202). As will be discussed in greater detail, the symmetry of the second key public portion 205 a and the second key private portion 205 b may be used to verify a trusted relationship between the second security module 201 and the logic component 202.
  • The intermediary module 300 may be a component configured to facilitate an introduction between a first root of trust and a second root of trust. The intermediary module 300 may include ACM 301. The ACM 301 may be configured to securely communicate with the first security module 101, and to introduce the second security module 201 to the first security module 101 by creating a public key provisioning path between the two.
  • For example, the ACM 301 may be configured to access the public key 205 a of the second security module 201, and introduce the public key 205 a to the first security module 101 over one or more interfaces and trusted paths that may be accessible by the ACM 301. In one example, the ACM 301 may use a memory mapped input output (MMIO) instruction to obtain the public key 205 a from a device specific memory that the ACM 301 may have access to.
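A user-space C approximation of that MMIO fetch is sketched below, assuming the device specific memory is exposed at some known physical address; the address, key length, and function name are placeholders, and a production ACM would run inside its own trusted execution environment rather than through /dev/mem.

```c
/* Minimal user-space sketch of the MMIO fetch described above. The physical
 * address, key size, and function name are illustrative assumptions only. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define DEV_KEY_PHYS   0xFED40000UL   /* hypothetical device-specific region */
#define DEV_KEY_BYTES  64             /* hypothetical key length */

int acm_read_public_key(uint8_t out[DEV_KEY_BYTES])
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0)
        return -1;

    long  page = sysconf(_SC_PAGESIZE);
    off_t base = DEV_KEY_PHYS & ~(page - 1);
    off_t off  = DEV_KEY_PHYS - base;

    volatile uint8_t *map = mmap(NULL, off + DEV_KEY_BYTES, PROT_READ,
                                 MAP_SHARED, fd, base);
    close(fd);
    if (map == MAP_FAILED)
        return -1;

    /* Copy the key bytes out of the memory-mapped device window. */
    for (size_t i = 0; i < DEV_KEY_BYTES; i++)
        out[i] = map[off + i];

    munmap((void *)map, off + DEV_KEY_BYTES);
    return 0;
}
```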
  • So, upon receiving the public portion 205 a from the second security module 201, the first security module 101 may issue a challenge to the logic component 202. Upon receiving a response from the logic component 202, the first security module 101 may use the public portion 205 a to verify the identity of the logic component 202. That is, the first security module may be able to verify that the second security module 201 (i.e., the second root of trust) and the logic component 202 are operating together, and that the logic component 202 may be trusted.
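The challenge/verify exchange between the first security module 101 and the logic component 202 can be sketched with generic OpenSSL signatures. The nonce size, key type, and function names are assumptions; the patent does not prescribe a particular cipher suite. Both halves run in one process here only so the sketch is self-contained; in the described system the challenge and response would cross the bus between the modules.

```c
/* Sketch of the challenge-response exchange. Signing uses the firewalled
 * private portion (e.g., 205 b), verification the public portion (e.g., 205 a)
 * delivered by the ACM. All names here are illustrative. */
#include <openssl/evp.h>
#include <openssl/rand.h>

/* Logic component side: sign the challenge with the firewalled private key. */
int respond_to_challenge(EVP_PKEY *priv_205b,
                         const unsigned char *nonce, size_t nonce_len,
                         unsigned char *sig, size_t *sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx &&
             EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, priv_205b) == 1 &&
             EVP_DigestSignUpdate(ctx, nonce, nonce_len) == 1 &&
             EVP_DigestSignFinal(ctx, sig, sig_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;
}

/* First security module side: issue a fresh nonce, then check the response
 * against the public portion obtained through the ACM. */
int challenge_and_verify(EVP_PKEY *pub_205a, EVP_PKEY *priv_205b_stub)
{
    unsigned char nonce[32], sig[256];
    size_t sig_len = sizeof sig;

    if (RAND_bytes(nonce, sizeof nonce) != 1)
        return 0;
    /* In the real flow the nonce and response cross the bus between modules;
     * here both ends run in one process so the sketch compiles standalone. */
    if (!respond_to_challenge(priv_205b_stub, nonce, sizeof nonce, sig, &sig_len))
        return 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx &&
             EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pub_205a) == 1 &&
             EVP_DigestVerifyUpdate(ctx, nonce, sizeof nonce) == 1 &&
             EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;     /* 1 => the logic component holds the matching private key */
}
```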
  • FIG. 5 is a flowchart of an example of a method of dynamically introducing a first root of trust to a second root of trust including a discrete security component. In this example, a first security module, such as the first security module 101 (FIG. 4), may constitute the first root of trust. A second security module, such as the second security module 201 (FIG. 4), may constitute a second root of trust. The second security module and a logic component, such as the logic component 202 (FIG. 4), may be separated by a firewall. An intermediary module, such as the intermediary module 300 (FIG. 4), may be configured to facilitate communication between the first security module and the second security module.
  • The method might be implemented as a set of logic instructions stored in a machine- or computer-readable storage medium such as, for example, RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as PLAs, FPGAs, CPLDs, in fixed-functionality logic hardware using circuit technology such as ASIC, CMOS or TTL technology, or any combination thereof. For example, computer program code to carry out operations shown in the method may be written in any combination of one or more programming languages, including an object oriented programming language such as, for example, Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • At processing block 3000, the intermediary module may use an ACM, such as the ACM 301 (FIG. 4), to obtain a public key, such as the public key 205a (FIG. 4), on behalf of the first security module. The ACM may obtain the public key from the second security module. At processing block 3010, the intermediary module may securely deliver the public key to the first security module. At processing block 3020, the first security module may receive the public key.
  • At processing block 3030, the first security module may direct a challenge to the logic component over a direct communication line (e.g., a system management bus). At processing block 3040, the logic component may receive the challenge. At processing block 3050, the logic component may respond to the challenge. The response may rely on a private key, such as the second key private portion 205b (FIG. 4).
  • At processing block 3060, the first security module may receive the challenge response from the logic component. At processing block 3070, the first security module may verify the logic component's response using the public key delivered by the ACM. At this point, the logic component may be recognized as working in conjunction with the second security module. That is, because the second security module may be configured to provide the public key exclusively to the intermediary module, the first security module may use the response from the logic component and the public key obtained from the second security module to determine that the logic component should not be treated with suspicion.
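  • The flow of processing blocks 3000 through 3070 may be summarized with the following sketch, which stands in an Ed25519 key pair (via libsodium) for the key portions 205a/205b and models the ACM's key delivery as a simple copy; the algorithm, the library, and the message sizes are assumptions for illustration only and are not mandated by the embodiments.

        /* Hedged end-to-end sketch of the introduction flow of FIG. 5. */
        #include <sodium.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            if (sodium_init() < 0)
                return 1;

            /* Provisioning: the second security module holds 205a/205b; only the
             * public portion 205a is ever visible outside the firewall. */
            unsigned char pub_205a[crypto_sign_PUBLICKEYBYTES];
            unsigned char prv_205b[crypto_sign_SECRETKEYBYTES];
            crypto_sign_keypair(pub_205a, prv_205b);

            /* Blocks 3000-3020: the ACM obtains 205a and delivers it to the
             * first security module (modeled here as a simple copy). */
            unsigned char key_at_first_module[crypto_sign_PUBLICKEYBYTES];
            memcpy(key_at_first_module, pub_205a, sizeof key_at_first_module);

            /* Blocks 3030-3040: the first security module directs a random
             * challenge to the logic component (e.g., over a system management bus). */
            unsigned char challenge[32];
            randombytes_buf(challenge, sizeof challenge);

            /* Block 3050: the logic component answers with the private portion 205b. */
            unsigned char response[crypto_sign_BYTES];
            crypto_sign_detached(response, NULL, challenge, sizeof challenge, prv_205b);

            /* Blocks 3060-3070: the first security module verifies the response
             * with the public portion delivered by the ACM. */
            if (crypto_sign_verify_detached(response, challenge, sizeof challenge,
                                            key_at_first_module) == 0)
                puts("logic component verified: operating with the second root of trust");
            else
                puts("verification failed: treat the logic component with suspicion");
            return 0;
        }

  • Compiled against libsodium (e.g., cc introduce.c -lsodium) the sketch prints the verification outcome; the essential property it illustrates is that only a holder of the private portion 205b can produce a response that verifies under the public portion 205a delivered by the ACM.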
  • Embodiments may therefore provide an apparatus comprising an authenticated code module to obtain a first encryption key from a first root of trust on a platform using a memory mapped input output (MMIO) instruction and via a device specific memory that is dedicated to the authenticated code module. The authenticated code module is to obtain the first encryption key from a trusted platform module located in a package on a first side of a firewall, and the first encryption key is to correspond to a private key embedded in the package on a second side of the firewall. The apparatus may also include a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, wherein the challenge response is to be received from logic located in the package on the second side of the firewall, and use the first encryption key to verify the challenge response.
  • In one example, the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust. In this example, the second root of trust may generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
  • In another example, the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
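  • A hedged sketch of such a countersigned attestation report is given below; the report layout, the Ed25519 countersignature, and the field sizes are illustrative assumptions, since the examples above do not fix a report format or algorithm.

        /* Hedged sketch: second root of trust countersigns the transferred key
         * and embeds it in an attestation report for an off-platform verifier. */
        #include <sodium.h>
        #include <string.h>

        struct attestation_report {
            unsigned char second_key[crypto_sign_PUBLICKEYBYTES];  /* key received via the ACM */
            unsigned char countersignature[crypto_sign_BYTES];     /* signature by the second root of trust */
        };

        static void build_report(struct attestation_report *r,
                                 const unsigned char *second_key,
                                 const unsigned char *rot2_signing_key) /* private key of the second root of trust */
        {
            memcpy(r->second_key, second_key, crypto_sign_PUBLICKEYBYTES);
            crypto_sign_detached(r->countersignature, NULL,
                                 r->second_key, crypto_sign_PUBLICKEYBYTES,
                                 rot2_signing_key);
            /* The populated report would then be sent to the off-platform
             * verifier, which checks the countersignature with the second root
             * of trust's public key before relying on the embedded key. */
        }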
  • In another example, the second root of trust is to issue a challenge to the first root of trust over a system management bus, and wherein the challenge response is to be received over the system management bus.
  • Another embodiment may provide for a platform comprising a first package including a first root of trust and a second package. The second package may include an authenticated code module to obtain a first encryption key from the first root of trust, and a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, and use the first encryption key to verify the challenge response.
  • In one example, the first root of trust includes a trusted platform module located on a first side of a firewall and the first package further includes logic located on a second side of the firewall. In this example, the challenge response is to be received from the logic located on the second side of the firewall.
  • In another example, the first package further includes a private key embedded on the second side of the firewall, and wherein the first encryption key is to correspond to the private key embedded on the second side of the firewall.
  • In still another example, the platform may include a device specific memory that is dedicated to the authenticated code module, wherein the authenticated code module is to obtain the first encryption key from the first root of trust via the device specific memory.
  • In another example, the authenticated code module is to use a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
  • In yet another example, the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust. In this example, the second root of trust may generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
  • In another example, the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
  • In still another example, the platform may include a system management bus coupled to the first root of trust and the second root of trust, wherein the second root of trust is to issue a challenge to the first root of trust over the system management bus, and wherein the challenge response is to be received over the system management bus.
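  • Purely as an illustration of the bus interaction, the challenge/response exchange over a system management bus might resemble the following sketch, written against the Linux user-space I2C/SMBus interface; the device node, slave address, and message sizes are invented, and a hardware root of trust would drive its SMBus controller directly rather than going through an operating system.

        /* Hedged sketch: issue a challenge and collect the response over an
         * SMBus/I2C link using the standard Linux i2c-dev interface. */
        #include <fcntl.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/i2c-dev.h>

        #define LOGIC_COMPONENT_ADDR 0x2c      /* hypothetical SMBus slave address */

        static int smbus_challenge(const uint8_t *challenge, size_t clen,
                                   uint8_t *response, size_t rlen)
        {
            int fd = open("/dev/i2c-0", O_RDWR);                   /* hypothetical bus node */
            if (fd < 0)
                return -1;
            if (ioctl(fd, I2C_SLAVE, LOGIC_COMPONENT_ADDR) < 0 ||  /* address the logic component */
                write(fd, challenge, clen) != (ssize_t)clen ||     /* send the challenge */
                read(fd, response, rlen) != (ssize_t)rlen) {       /* collect the challenge response */
                close(fd);
                return -1;
            }
            close(fd);
            return 0;
        }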
  • Still another example may provide for an apparatus comprising an authenticated code module to obtain a first encryption key from a first root of trust on a platform, and a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, and use the first encryption key to verify the challenge response.
  • In another example, the authenticated code module is to obtain the first encryption key from a trusted platform module located in a package on a first side of a firewall, and wherein the challenge response is to be received from logic located in the package on a second side of the firewall.
  • In still another example, the first encryption key is to correspond to a private key embedded in the package on the second side of the firewall.
  • In yet another example, the authenticated code module is to obtain the first encryption key from the first root of trust via a device specific memory that is dedicated to the authenticated code module.
  • In one example, the authenticated code module is to use a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
  • In another example, the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust. In this example, the second root of trust may generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
  • In another example, the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
  • In still another example, the second root of trust is to issue a challenge to the first root of trust over a system management bus, and wherein the challenge response is to be received over the system management bus.
  • Another example may provide for a method comprising using an authenticated code module to transfer a first encryption key from a first root of trust on a platform to a second root of trust on the platform, receiving a challenge response at the second root of trust, and using the first encryption key to verify the challenge response.
  • In still another example, using the authenticated code module to transfer the encryption key includes obtaining the first encryption key from a trusted platform module located in a package on a first side of a firewall, wherein the challenge response is received from logic located in the package on a second side of the firewall.
  • In one example, the first encryption key corresponds to a private key embedded in the package on the second side of the firewall.
  • In another example, using the authenticated code module to transfer the encryption key includes obtaining the first encryption key from the first root of trust via a device specific memory that is dedicated to the authenticated code module.
  • Another example may include using a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
  • Still another example may include using the authenticated code module to transfer a second encryption key from the first root of trust to the second root of trust, generating an attestation report based on the second encryption key, and sending the attestation report to an off-platform verifier.
  • In another example, generating the attestation report includes countersigning the second encryption key to obtain a countersigned encryption key, and incorporating the countersigned encryption key into the attestation report.
  • Another example may include issuing a challenge to the first root of trust over a system management bus, wherein the challenge response is received over the system management bus.
  • Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
  • One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
  • Embodiments of the present invention are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
  • Example sizes/models/values/ranges may have been given, although embodiments of the present invention are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments of the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments of the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that embodiments of the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
  • Some embodiments may be implemented, for example, using a machine or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
  • Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The embodiments are not limited in this context.
  • The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
  • Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.

Claims (28)

We claim:
1. An apparatus comprising:
an authenticated code module to obtain a first encryption key from a first root of trust on a platform using a memory mapped input output (MMIO) instruction and via a device specific memory that is dedicated to the authenticated code module, wherein the authenticated code module is to obtain the first encryption key from a trusted platform module located in a package on a first side of a firewall, and wherein the first encryption key is to correspond to a private key embedded in the package on a second side of the firewall; and
a second root of trust to,
receive the first encryption key from the authenticated code module,
receive a challenge response, wherein the challenge response is to be received from logic located in the package on the second side of the firewall, and
use the first encryption key to verify the challenge response.
2. The apparatus of claim 1, wherein the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust, and the second root of trust is to generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
3. The apparatus of claim 2, wherein the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
4. The apparatus of claim 1, wherein the second root of trust is to issue a challenge to the first root of trust over a system management bus, and wherein the challenge response is to be received over the system management bus.
5. A platform comprising:
a first package including a first root of trust; and
a second package including,
an authenticated code module to obtain a first encryption key from the first root of trust, and
a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, and use the first encryption key to verify the challenge response.
6. The platform of claim 5, wherein the first root of trust includes a trusted platform module located on a first side of a firewall and the first package further includes logic located on a second side of the firewall, and wherein the challenge response is to be received from the logic located on the second side of the firewall.
7. The platform of claim 6, wherein the first package further includes a private key embedded on the second side of the firewall, and wherein the first encryption key is to correspond to the private key embedded on the second side of the firewall.
8. The platform of claim 5, further including a device specific memory that is dedicated to the authenticated code module, wherein the authenticated code module is to obtain the first encryption key from the first root of trust via the device specific memory.
9. The platform of claim 8, wherein the authenticated code module is to use a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
10. The platform of claim 5, wherein the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust, and the second root of trust is to generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
11. The platform of claim 10, wherein the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
12. The platform of claim 5, further including a system management bus coupled to the first root of trust and the second root of trust, wherein the second root of trust is to issue a challenge to the first root of trust over the system management bus, and wherein the challenge response is to be received over the system management bus.
13. An apparatus comprising:
an authenticated code module to obtain a first encryption key from a first root of trust on a platform; and
a second root of trust to receive the first encryption key from the authenticated code module, receive a challenge response, and use the first encryption key to verify the challenge response.
14. The apparatus of claim 13, wherein the authenticated code module is to obtain the first encryption key from a trusted platform module located in a package on a first side of a firewall, and wherein the challenge response is to be received from logic located in the package on a second side of the firewall.
15. The apparatus of claim 14, wherein the first encryption key is to correspond to a private key embedded in the package on the second side of the firewall.
16. The apparatus of claim 13, wherein the authenticated code module is to obtain the first encryption key from the first root of trust via a device specific memory that is dedicated to the authenticated code module.
17. The apparatus of claim 16, wherein the authenticated code module is to use a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
18. The apparatus of claim 13, wherein the authenticated code module is to transfer a second encryption key from the first root of trust to the second root of trust, and the second root of trust is to generate an attestation report based on the second encryption key and send the attestation report to an off-platform verifier.
19. The apparatus of claim 18, wherein the second root of trust is to countersign the second encryption key to obtain a countersigned encryption key, and incorporate the countersigned encryption key into the attestation report.
20. The apparatus of claim 13, wherein the second root of trust is to issue a challenge to the first root of trust over a system management bus, and wherein the challenge response is to be received over the system management bus.
21. A method comprising:
using an authenticated code module to transfer a first encryption key from a first root of trust on a platform to a second root of trust on the platform;
receiving a challenge response at the second root of trust; and
using the first encryption key to verify the challenge response.
22. The method of claim 21, wherein using the authenticated code module to transfer the encryption key includes obtaining the first encryption key from a trusted platform module located in a package on a first side of a firewall, and wherein the challenge response is received from logic located in the package on a second side of the firewall.
23. The method of claim 22, wherein the first encryption key corresponds to a private key embedded in the package on the second side of the firewall.
24. The method of claim 21, wherein using the authenticated code module to transfer the encryption key includes obtaining the first encryption key from the first root of trust via a device specific memory that is dedicated to the authenticated code module.
25. The method of claim 24, further including using a memory mapped input output (MMIO) instruction to obtain the first encryption key from the device specific memory.
26. The method of claim 21, further including:
using the authenticated code module to transfer a second encryption key from the first root of trust to the second root of trust;
generating an attestation report based on the second encryption key; and
sending the attestation report to an off-platform verifier.
27. The method of claim 26, wherein generating the attestation report includes:
countersigning the second encryption key to obtain a countersigned encryption key; and
incorporating the countersigned encryption key into the attestation report.
28. The method of claim 21, further including issuing a challenge to the first root of trust over a system management bus, wherein the challenge response is received over the system management bus.
US13/629,887 2012-09-28 2012-09-28 Introduction of discrete roots of trust Expired - Fee Related US8874916B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/629,887 US8874916B2 (en) 2012-09-28 2012-09-28 Introduction of discrete roots of trust

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/629,887 US8874916B2 (en) 2012-09-28 2012-09-28 Introduction of discrete roots of trust

Publications (2)

Publication Number Publication Date
US20140095876A1 true US20140095876A1 (en) 2014-04-03
US8874916B2 US8874916B2 (en) 2014-10-28

Family

ID=50386409

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/629,887 Expired - Fee Related US8874916B2 (en) 2012-09-28 2012-09-28 Introduction of discrete roots of trust

Country Status (1)

Country Link
US (1) US8874916B2 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120311333A1 (en) * 2011-06-03 2012-12-06 Oracle International Corporation System and method for authenticating identity of discovered component in an infiniband (ib) network
US9231888B2 (en) 2012-05-11 2016-01-05 Oracle International Corporation System and method for routing traffic between distinct InfiniBand subnets based on source routing
CN105279439A (en) * 2014-06-20 2016-01-27 赛普拉斯半导体公司 Encryption method for execute-in-place memories
US9262155B2 (en) 2012-06-04 2016-02-16 Oracle International Corporation System and method for supporting in-band/side-band firmware upgrade of input/output (I/O) devices in a middleware machine environment
US20160080380A1 (en) * 2014-09-17 2016-03-17 Microsoft Technology Licensing, Llc Establishing trust between two devices
US9455898B2 (en) 2010-09-17 2016-09-27 Oracle International Corporation System and method for facilitating protection against run-away subnet manager instances in a middleware machine environment
US9935848B2 (en) 2011-06-03 2018-04-03 Oracle International Corporation System and method for supporting subnet manager (SM) level robust handling of unkown management key in an infiniband (IB) network
US20180137294A1 (en) 2014-06-20 2018-05-17 Cypress Semiconductor Corporation Encryption for xip and mmio external memories
US20190213359A1 (en) * 2018-01-10 2019-07-11 General Electric Company Secure provisioning of secrets into mpsoc devices using untrusted third-party systems
CN110324138A (en) * 2018-03-29 2019-10-11 阿里巴巴集团控股有限公司 Data encryption, decryption method and device
US10691838B2 (en) 2014-06-20 2020-06-23 Cypress Semiconductor Corporation Encryption for XIP and MMIO external memories
US10897360B2 (en) 2017-01-26 2021-01-19 Microsoft Technology Licensing, Llc Addressing a trusted execution environment using clean room provisioning
US10897459B2 (en) * 2017-01-26 2021-01-19 Microsoft Technology Licensing, Llc Addressing a trusted execution environment using encryption key
US10972265B2 (en) 2017-01-26 2021-04-06 Microsoft Technology Licensing, Llc Addressing a trusted execution environment
US20210243027A1 (en) * 2018-04-20 2021-08-05 Vishal Gupta Decentralized document and entity verification engine
CN113347168A (en) * 2021-05-26 2021-09-03 北京威努特技术有限公司 Protection method and system based on zero trust model
US11157626B1 (en) * 2019-05-29 2021-10-26 Northrop Grumman Systems Corporation Bi-directional chain of trust network
US11216389B2 (en) * 2015-12-02 2022-01-04 Cryptography Research, Inc. Device with multiple roots of trust
US11625633B2 (en) 2020-05-26 2023-04-11 Northrop Grumman Systems Corporation Machine learning to monitor operations of a device
US11722903B2 (en) 2021-04-09 2023-08-08 Northrop Grumman Systems Corporation Environmental verification for controlling access to data
US11831786B1 (en) 2018-11-13 2023-11-28 Northrop Grumman Systems Corporation Chain of trust

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8494585B2 (en) 2011-10-13 2013-07-23 The Boeing Company Portable communication devices with accessory functions and related methods
US10064240B2 (en) 2013-09-12 2018-08-28 The Boeing Company Mobile communication device and method of operating thereof
US9819661B2 (en) 2013-09-12 2017-11-14 The Boeing Company Method of authorizing an operation to be performed on a targeted computing device
US9497221B2 (en) * 2013-09-12 2016-11-15 The Boeing Company Mobile communication device and method of operating thereof
WO2017034811A1 (en) 2015-08-21 2017-03-02 Cryptography Research, Inc. Secure computation environment
US11455396B2 (en) 2017-05-12 2022-09-27 Hewlett Packard Enterprise Development Lp Using trusted platform module (TPM) emulator engines to measure firmware images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070101156A1 (en) * 2005-10-31 2007-05-03 Manuel Novoa Methods and systems for associating an embedded security chip with a computer
US20090327684A1 (en) * 2008-06-25 2009-12-31 Zimmer Vincent J Apparatus and method for secure boot environment
US20120297200A1 (en) * 2011-05-17 2012-11-22 Microsoft Corporation Policy bound key creation and re-wrap service

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9455898B2 (en) 2010-09-17 2016-09-27 Oracle International Corporation System and method for facilitating protection against run-away subnet manager instances in a middleware machine environment
US10630570B2 (en) 2010-09-17 2020-04-21 Oracle International Corporation System and method for supporting well defined subnet topology in a middleware machine environment
US9906429B2 (en) 2010-09-17 2018-02-27 Oracle International Corporation Performing partial subnet initialization in a middleware machine environment
US9614746B2 (en) 2010-09-17 2017-04-04 Oracle International Corporation System and method for providing ethernet over network virtual hub scalability in a middleware machine environment
US10063544B2 (en) 2011-06-03 2018-08-28 Oracle International Corporation System and method for supporting consistent handling of internal ID spaces for different partitions in an infiniband (IB) network
US9930018B2 (en) 2011-06-03 2018-03-27 Oracle International Corporation System and method for providing source ID spoof protection in an infiniband (IB) network
US9219718B2 (en) 2011-06-03 2015-12-22 Oracle International Corporation System and method for supporting sub-subnet in an infiniband (IB) network
US9270650B2 (en) 2011-06-03 2016-02-23 Oracle International Corporation System and method for providing secure subnet management agent (SMA) in an infiniband (IB) network
US20120311333A1 (en) * 2011-06-03 2012-12-06 Oracle International Corporation System and method for authenticating identity of discovered component in an infiniband (ib) network
US9935848B2 (en) 2011-06-03 2018-04-03 Oracle International Corporation System and method for supporting subnet manager (SM) level robust handling of unkown management key in an infiniband (IB) network
US9240981B2 (en) * 2011-06-03 2016-01-19 Oracle International Corporation System and method for authenticating identity of discovered component in an infiniband (IB) network
US9900293B2 (en) 2011-06-03 2018-02-20 Oracle International Corporation System and method for supporting automatic disabling of degraded links in an infiniband (IB) network
US9264382B2 (en) 2012-05-11 2016-02-16 Oracle International Corporation System and method for routing traffic between distinct infiniband subnets based on fat-tree routing
US9231888B2 (en) 2012-05-11 2016-01-05 Oracle International Corporation System and method for routing traffic between distinct InfiniBand subnets based on source routing
US9665719B2 (en) 2012-06-04 2017-05-30 Oracle International Corporation System and method for supporting host-based firmware upgrade of input/output (I/O) devices in a middleware machine environment
US9262155B2 (en) 2012-06-04 2016-02-16 Oracle International Corporation System and method for supporting in-band/side-band firmware upgrade of input/output (I/O) devices in a middleware machine environment
US10691838B2 (en) 2014-06-20 2020-06-23 Cypress Semiconductor Corporation Encryption for XIP and MMIO external memories
CN105279439A (en) * 2014-06-20 2016-01-27 赛普拉斯半导体公司 Encryption method for execute-in-place memories
US20180137294A1 (en) 2014-06-20 2018-05-17 Cypress Semiconductor Corporation Encryption for xip and mmio external memories
US10169618B2 (en) * 2014-06-20 2019-01-01 Cypress Semiconductor Corporation Encryption method for execute-in-place memories
US10192062B2 (en) 2014-06-20 2019-01-29 Cypress Semiconductor Corporation Encryption for XIP and MMIO external memories
US20160080380A1 (en) * 2014-09-17 2016-03-17 Microsoft Technology Licensing, Llc Establishing trust between two devices
US10362031B2 (en) 2014-09-17 2019-07-23 Microsoft Technology Licensing, Llc Establishing trust between two devices
US10581848B2 (en) 2014-09-17 2020-03-03 Microsoft Technology Licensing, Llc Establishing trust between two devices
US9716716B2 (en) * 2014-09-17 2017-07-25 Microsoft Technology Licensing, Llc Establishing trust between two devices
US11216389B2 (en) * 2015-12-02 2022-01-04 Cryptography Research, Inc. Device with multiple roots of trust
US10897459B2 (en) * 2017-01-26 2021-01-19 Microsoft Technology Licensing, Llc Addressing a trusted execution environment using encryption key
US10897360B2 (en) 2017-01-26 2021-01-19 Microsoft Technology Licensing, Llc Addressing a trusted execution environment using clean room provisioning
US10972265B2 (en) 2017-01-26 2021-04-06 Microsoft Technology Licensing, Llc Addressing a trusted execution environment
US10706179B2 (en) * 2018-01-10 2020-07-07 General Electric Company Secure provisioning of secrets into MPSoC devices using untrusted third-party systems
US20190213359A1 (en) * 2018-01-10 2019-07-11 General Electric Company Secure provisioning of secrets into mpsoc devices using untrusted third-party systems
CN110324138A (en) * 2018-03-29 2019-10-11 阿里巴巴集团控股有限公司 Data encryption, decryption method and device
US20210243027A1 (en) * 2018-04-20 2021-08-05 Vishal Gupta Decentralized document and entity verification engine
US11664995B2 (en) * 2018-04-20 2023-05-30 Vishal Gupta Decentralized document and entity verification engine
US11831786B1 (en) 2018-11-13 2023-11-28 Northrop Grumman Systems Corporation Chain of trust
US11157626B1 (en) * 2019-05-29 2021-10-26 Northrop Grumman Systems Corporation Bi-directional chain of trust network
US11625633B2 (en) 2020-05-26 2023-04-11 Northrop Grumman Systems Corporation Machine learning to monitor operations of a device
US11722903B2 (en) 2021-04-09 2023-08-08 Northrop Grumman Systems Corporation Environmental verification for controlling access to data
CN113347168A (en) * 2021-05-26 2021-09-03 北京威努特技术有限公司 Protection method and system based on zero trust model

Also Published As

Publication number Publication date
US8874916B2 (en) 2014-10-28

Similar Documents

Publication Publication Date Title
US8874916B2 (en) Introduction of discrete roots of trust
TWI701933B (en) Block chain data processing method, device, processing equipment and system
US20200372503A1 (en) Transaction messaging
EP3962020B1 (en) Information sharing methods and systems
WO2021239104A1 (en) Blockchain-based service processing
US10389727B2 (en) Multi-level security enforcement utilizing data typing
Hardin et al. Amanuensis: Information provenance for health-data systems
CN111542820B (en) Method and apparatus for trusted computing
US20140096212A1 (en) Multi-factor authentication process
CN108040507A (en) Sentry's equipment in Internet of Things field
US11756029B2 (en) Secured end-to-end communication for remote payment verification
CN110035052A (en) A kind of method, apparatus that checking historical transactional information and electronic equipment
US20130212391A1 (en) Elliptic curve cryptographic signature
US20210326889A1 (en) Information sharing methods and systems
Islam et al. IoT security, privacy and trust in home-sharing economy via blockchain
CN114785524B (en) Electronic seal generation method, device, equipment and medium
CN114826733B (en) File transmission method, device, system, equipment, medium and program product
US11176624B2 (en) Privacy-preserving smart metering
El Madhoun et al. Towards more secure EMV purchase transactions: A new security protocol formally analyzed by the Scyther tool
WO2019212829A1 (en) Techniques for performing secure operations
El Ismaili et al. A secure electronic transaction payment protocol design and implementation
CN116823257A (en) Information processing method, device, equipment and storage medium
CN113343309B (en) Natural person database privacy security protection method and device and terminal equipment
CN115599959A (en) Data sharing method, device, equipment and storage medium
CN114615087B (en) Data sharing method, device, equipment and medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SMITH, NED;SMITH, SHARON LEA;WISEMAN, WILLARD;SIGNING DATES FROM 20121116 TO 20130609;REEL/FRAME:030573/0636

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20221028